DOA Estimation for Coherent Sources in Transformed Space

The presence of coherent sources causes a rank deficit in the sample covariance matrix. Classical MUSIC (Multiple Signal Classification) cannot resolve coherent sources; instead, it generates an equivalent source somewhere between them. In the proposed method, a specially designed transformation is first constructed that suppresses the coherent interfering sources while retaining the desired coherent sources. With this transformation, the collected array signal is mapped into a new data space. Because the contribution of the coherent interfering sources is suppressed in the transformation, applying classical MUSIC to the transformed data yields accurate DOA estimates of the coherent sources. Simulation experiments show that, compared with classical MUSIC, the proposed method achieves accurate DOA estimation of coherent sources.


Introduction
DOA estimation by means of sensor array processing is an active research topic that has attracted many investigators. It is widely applied in radar, sonar, seismology, and underwater source estimation. MUSIC is a class of high-resolution DOA estimation algorithms widely employed in these fields. Although MUSIC is known for its high resolution, with finite data samples it cannot resolve adjacent sources that have large power-level differences. More seriously, classical MUSIC cannot estimate DOAs in the presence of coherent sources, which cause a rank deficit in the source covariance matrix. Coherent sources are common in the real world, arising, for example, from multipath propagation and jamming. To overcome this shortcoming, several effective methods have been developed, such as spatial smoothing and weighted subspace fitting.
Such array signal processing techniques can also be applied to neuroscience. Here we also discuss how to use the method for coherent brain source localization.
One of the most active areas of research in contemporary neuroscience concerns functional connectivity and neuronal integration. At the microscopic level, increasing evidence shows that relevant information in the brain is coded by the precise timing of neuronal discharges, and that synchronized rhythmic neural firing plays a role in solving the binding problem, i.e., the integration of distributed information into a unified representation. At all levels of description of cortical networks, the synchronization hypothesis is gaining support in the neurophysiological literature. At the macroscopic level, functional connectivity between cortical areas may appear as correlated temporal behavior of neural activity.
To investigate cortico-cortical synchrony noninvasively in the human brain, new analysis tools must be developed. fMRI has been used to estimate connectivity between brain areas, but its temporal resolution is not high enough to measure oscillatory activity or to observe the transient formation of neuronal assemblies. Magnetoencephalography (MEG) and electroencephalography (EEG) scalp recordings have unsurpassed temporal resolution for characterizing neuronal coupling and are commonly used to study inter-regional functional connectivity. Indeed, task-dependent interactions have been reported between signals recorded by different MEG sensors or EEG electrodes. However, these findings are limited to correlations within the measurement device and reveal little about the synchrony between specific cortical areas.
The signal recorded by an MEG sensor or an EEG electrode cannot be directly attributed to the underlying cortical region. The complex relationship between the signal detected by a sensor and an activated brain area is given by the solution of the forward problem (i.e., the calculation of the magnetic field or electric potential generated by a point source). Electric potentials (EEG) in particular are smeared out because of the inhomogeneous conductivity structure of the human head.
The activity of even a small cortical area is recorded by several sensors, leading to severe spreading in sensor-based measures.The spreading is particularly problematic when describing interdependencies between signals.
Ideally, to study neuronal interactions one has to go beyond the sensor level, which requires two steps: first, the sources have to be localized, and then their time courses have to be estimated. Based on both the source locations and waveforms, one can investigate their interactions and psychophysiological implications. Many authors have studied algorithms for localizing neuronal sources. Among these methods, beamforming and MUSIC are the two most popular and have therefore attracted much attention.
Beamforming has been shown to provide reliable estimates of both the spatial locations and the time courses of neuronal sources. Furthermore, the literature shows that MUSIC is more accurate than beamforming, so we choose to develop MUSIC-type methods to localize the sources.
As mentioned above, functional connectivity between cortical areas may appear as correlated temporal behavior of neural activity. To study interregional interactions within the brain, methods for handling correlated time courses should be developed.
However, in principle, classical MUSIC cannot deal with correlated sources. Some authors have developed modified MUSIC variants for weakly and moderately correlated sources, but when sources are highly correlated (in the extreme case, fully correlated), these methods fail.
In 2001, J. Gross et al. presented a pioneering technique, DICS, which uses a spatial filter to localize coherent brain regions and provides the time courses of their activity. DICS is a beamforming-type method designed for interactions between sources in specified frequency bands.
Nonetheless, this approach has a pitfall [16]: MEG beamformer methodology rests on the assumption that no distinct neuronal sources are perfectly linearly related. In the presence of strong, long-lasting source correlation, the estimated signal intensity and temporal fidelity deteriorate. Furthermore, DICS requires reference points to be selected before imaging the coherent sources, so different operators may obtain different results.
Here, we present a subspace-based method to localize fully correlated sources (correlation coefficient between sources equal to 1). Throughout the paper, the term coherent sources will be used to denote such strictly linear relationships between time courses. The key point of this method is to decrease the correlation between sources enough that classical MUSIC can easily localize them. Once the positions of the coherent sources are accurately found, estimating the source time courses is relatively easy, and many reliable methods exist for that purpose. The present method classifies coherent sources by adding an inverse source into the head model. In this paper we focus on a new DOA estimation method that can not only estimate the DOAs of coherent sources but also identify closely spaced sources. In the following analysis, for simplicity, we discuss only the case of two sources. The case of more than two sources is more complex but can still be handled by the same means.

Recall of classical MUSIC
Classical MUSIC, initially proposed by Schmidt, is used to solve the problem of DOA estimation in array signal processing. Suppose there are r sources impinging on m array sensors from distinct scalar directions θ_1, ..., θ_r. The manifold vector for direction θ is denoted a(θ), and the set of r manifold vectors may be collected into the matrix

A = [a(θ_1), ..., a(θ_r)].

The data sample x(t) collected from the array sensors can then be expressed as

x(t) = A s(t) + n(t),

where s(t) holds the time courses of the sources and n(t) is Gaussian noise. Under the assumption that the additive noise n(t) is uncorrelated with the source time courses, the autocorrelation of x(t) is

R = E[x(t) x(t)^H] = A P A^H + σ² I,

where the superscript H denotes the Hermitian transpose and P is the source covariance matrix. The eigendecomposition of R can be partitioned as

R = Φ_s Λ_s Φ_s^H + Φ_n Λ_n Φ_n^H,

where Φ_s and Φ_n span the signal subspace and the noise subspace, respectively. Since the signal subspace is orthogonal to the noise subspace, we can use the cost function

J(θ) = a(θ)^H Φ_n Φ_n^H a(θ)

to estimate the DOAs. Theoretically, when a(θ_i) is exactly the manifold vector associated with an actual source, J(θ_i) = 0; the MUSIC algorithm uses this property to estimate the source directions. When applied to real data, because of the effects of noise and computational errors, J never equals zero exactly. The values of θ at which J reaches its local minima (equivalently, at which the pseudospectrum 1/J(θ) reaches its local maxima) are taken as the directions of the true sources.
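As a concrete illustration of the steps above, the following is a minimal numerical sketch of the classical MUSIC pseudospectrum 1/J(θ). It is not the paper's code: it assumes a 15-sensor half-wavelength uniform linear array and two uncorrelated test sources at 25 and 30 degrees, matching the setup of the later simulations.

```python
import numpy as np

def steering(theta_deg, m):
    """Manifold vector of an m-sensor half-wavelength ULA (0 deg = broadside)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(theta))

def music_spectrum(X, r, scan_deg):
    """Classical MUSIC: eigendecompose the sample covariance, keep the
    m - r noise eigenvectors, and scan 1/J(theta) over candidate angles."""
    m, nt = X.shape
    R = X @ X.conj().T / nt                     # sample covariance
    _, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = V[:, :m - r]                           # noise-subspace eigenvectors
    return np.array([1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
                     for a in (steering(t, m) for t in scan_deg)])

# demo: two uncorrelated sources at 25 and 30 degrees, 15 sensors
rng = np.random.default_rng(0)
m, nt = 15, 400
S = (rng.standard_normal((2, nt)) + 1j * rng.standard_normal((2, nt))) / np.sqrt(2)
A = np.column_stack([steering(25.0, m), steering(30.0, m)])
X = A @ S + 0.01 * (rng.standard_normal((m, nt)) + 1j * rng.standard_normal((m, nt)))
scan = np.arange(0.0, 61.0)
P = music_spectrum(X, 2, scan)
```

With uncorrelated sources the pseudospectrum exhibits sharp peaks at both true angles; the coherent case discussed next is where this breaks down.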

MUSIC's inability to estimate coherent sources
Subspace-based approaches have two advantages: 1) reduced computational load and 2) no nonlinear search. However, they rest on the assumption that the source time courses are independent or only weakly correlated; classical MUSIC cannot handle strongly correlated sources. Modified MUSIC variants such as R-MUSIC, RAP-MUSIC, and FINES can deal with strongly correlated sources, but when sources are fully correlated (the correlation coefficient between sources approximately equals 1), all of these methods fail. For instance, let the time course of source 1 be s_1 and that of source 2 be s_2, with s_2 = k s_1, and let their lead matrices be a_1 and a_2, respectively. The scalp EEG is then y = a_1 s_1 + a_2 s_2 = (a_1 + k a_2) s_1. In theory, instead of the true locations of s_1 and s_2, which are associated with the lead matrices a_1 and a_2, classical MUSIC will mistakenly localize a single source at the location corresponding to the combined lead matrix a_1 + k a_2.

Our idea is this: since the correlation coefficient is 1, if it can be decreased enough that classical MUSIC can distinguish the sources, we can identify them without any further processing. A new problem then arises: this is an inverse problem, and we have no prior information about the source positions. One approach is to place a constructed source at each grid point of the head model. When the constructed source is positioned exactly at the location of one of the true sources, the correlation between the combination (constructed source plus that true source) and the other true source becomes small enough that the cost function exhibits two peaks, and classical MUSIC easily localizes both. Placing the constructed source at any other location yields only one peak, since the constructed source and the two true sources remain coherent and together generate a single equivalent source. From the number of local peaks, one can therefore tell when the constructed source sits at the position of a true source.
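To make the rank-deficit argument concrete, the following sketch builds y = a_1 s_1 + a_2 s_2 with s_2 = k s_1 and shows that the sample covariance has only one dominant eigenvalue. The setup is hypothetical (a 15-sensor half-wavelength ULA stands in for the lead matrices, with assumed values k = 0.8 and directions 25 and 30 degrees):

```python
import numpy as np

m, nt, k = 15, 400, 0.8
rng = np.random.default_rng(0)

def steering(theta_deg):
    """Assumed half-wavelength ULA manifold, stand-in for a lead matrix."""
    th = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(th))

s1 = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
s2 = k * s1                                    # fully coherent copy: s2 = k * s1
X = np.outer(steering(25.0), s1) + np.outer(steering(30.0), s2)
# equivalently X = np.outer(steering(25.0) + k * steering(30.0), s1):
# the array only ever sees the single combined manifold a1 + k*a2

R = X @ X.conj().T / nt
eig = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, descending
```

The second eigenvalue collapses to numerical zero, so the "signal subspace" is one-dimensional and MUSIC sees a single equivalent source, exactly as described above.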

The algorithm formulation
First, using a priori information, the approximate directions of the coherent interfering sources are estimated. The corresponding direction vectors are collected; by injecting equation (1.9) into (1.11), the signal coming from the coherent interfering directions is suppressed. Thus, their effect on the desired coherent sources is removed, and those sources can be estimated correctly.
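Equations (1.9) and (1.11) are not reproduced in this excerpt. A common way to realize such a direction-suppressing transformation is an orthogonal-complement (blocking) projector built from the estimated interfering manifold vectors; the sketch below illustrates that construction under that assumption, with invented example directions (interferer at 45 degrees, desired source at 25 degrees):

```python
import numpy as np

m = 15  # hypothetical sensor count, matching the later simulations

def steering(theta_deg):
    """Assumed half-wavelength ULA manifold vector."""
    th = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(th))

def blocking_projector(B):
    """Orthogonal-complement projector I - B (B^H B)^{-1} B^H:
    any signal whose manifold vector lies in span(B) is annihilated."""
    return np.eye(B.shape[0]) - B @ np.linalg.solve(B.conj().T @ B, B.conj().T)

a_int = steering(45.0)        # estimated interfering direction (example value)
a_des = steering(25.0)        # desired source direction (example value)
Pb = blocking_projector(a_int[:, None])
```

Applying Pb to the data annihilates the component arriving from the interfering direction while leaving manifold vectors from well-separated directions almost unchanged in norm, which is the property the transformed-space estimation relies on.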

Simulation test
To validate the effectiveness of the proposed method, we take a conventional uniform linear array with two sources as an example and compare this method with other sequential forms of MUSIC, following simulations reported in the literature in order to draw a performance comparison between the various sequential forms of MUSIC. The sources are far-field and narrowband, impinging on the array from scalar direction θ, where θ = 0 is broadside to the array; with half-wavelength element spacing, the m-th entry of the array manifold vector is a_m(θ) = e^{-jπ(m-1) sin θ}. The source time series are assumed to be zero-mean complex Gaussian with covariance matrix P. Suppose there are 15 sensors and two sources at 25 and 30 degrees; in another simulation, the two sources are at 14 and 16 degrees. With the source covariance matrix specified and the data mapped into the transformed space, the signal from the direction to be suppressed can be obtained,
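Under the same assumption that the transformation is a blocking projector onto the complement of the interferer's manifold vector, the whole pipeline can be sketched end to end. This is a hypothetical setup, not the paper's exact simulation: one desired source at 25 degrees plus a fully coherent interferer at 45 degrees, with classical MUSIC run on both the raw and the transformed data.

```python
import numpy as np

m, nt = 15, 400
rng = np.random.default_rng(2)

def steering(theta_deg):
    """Assumed half-wavelength ULA manifold vector."""
    th = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(th))

def music(X, r, scan_deg, transform=None):
    """Classical MUSIC pseudospectrum, optionally on transformed data."""
    Tm = np.eye(m) if transform is None else transform
    Y = Tm @ X
    _, V = np.linalg.eigh(Y @ Y.conj().T / nt)
    En = V[:, :m - r]                          # noise subspace
    P = []
    for t in scan_deg:
        a = Tm @ steering(t)
        a = a / np.linalg.norm(a)              # transformed, normalized manifold
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

# desired source at 25 deg plus a fully coherent interferer at 45 deg
s = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
X = np.outer(steering(25.0), s) + np.outer(steering(45.0), 0.9 * s)
X += 0.01 * (rng.standard_normal((m, nt)) + 1j * rng.standard_normal((m, nt)))

b = steering(45.0)[:, None]                    # approximate interferer manifold
Pb = np.eye(m) - b @ b.conj().T / m            # blocking transformation
scan = np.arange(0.0, 41.0)                    # scan away from suppressed direction
plain = music(X, 1, scan)
trans = music(X, 1, scan, transform=Pb)
```

In this sketch the raw pseudospectrum shows only a shallow peak, because the coherent interferer collapses the signal subspace onto the combined manifold, while the transformed pseudospectrum exhibits a sharp peak at the true 25-degree direction.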