On the Fisher's Z transformation of correlation random fields

F. Carbonell 1,*, K. J. Worsley 1 and N. J. Trujillo-Barreto 2

1 Department of Mathematics and Statistics, McGill University, Montreal, Canada
2 Cuban Neuroscience Center, Havana, Cuba

* The first author's work was supported by a postdoctoral fellowship from the ISM-CRM, Montreal, Quebec, Canada.

Abstract

One of the most interesting problems studied in Random Field Theory (RFT) is to approximate the distribution of the maximum of a random field. This problem usually appears in a general hypothesis testing framework, where the statistic of interest is the maximum of a random field of known distribution. In this paper we use the RFT approach for the comparison of two independent random fields R1 and R2. Our statistic of interest is the maximum of a random field G, resulting from the difference between the Fisher's Z transformations of R1 and R2, respectively. The Fisher's Z transformation guarantees a Gaussian distribution at each point of G but, unfortunately, G is not thereby transformed into a Gaussian random field. Hence, standard RFT results for Gaussian random fields are no longer available for G. We show here that the distribution of the maximum of G can still be approximated by that of a Gaussian random field, provided a correction for its spatial smoothness. Indeed, we present a quite general setting for obtaining such a correction, namely, by allowing different smoothness parameters for the components of G. Finally, the performance of our method is illustrated by means of both numerical simulations and real Electroencephalography data recorded during a face recognition experimental paradigm.

1 Introduction

A typical problem arising in brain imaging analysis is to study how brain functional activity changes with the experimental condition, status of the subject, etc. One of the most successful approaches for characterizing such a relationship is the general linear model Y(t) ∼ Xβ(t), where a set of dependent variables Y(t) represents the brain activity at location (voxel) t and X denotes a set of predictor variables. Of particular interest in recent years has been assessing the relationship between the brain activity at two different locations t and s (i.e. the relationship between Y(t) and Y(s)). This is known as the problem of brain functional connectivity, which was originally defined for fMRI (functional Magnetic Resonance Imaging) data as the temporal correlation between spatially remote neurophysiological events (Friston et al., 1993a). This concept was extended to other image measurements by saying that two different regions of the brain are functionally connected if they show similar anatomical features over subjects (Worsley et al., 2005). Several approaches have been proposed for assessing functional connectivity in images (McIntosh and Gonzalez-Lima, 1994; Strother et al., 1995; Horowitz et al., 1996; Friston et al., 1997; Cao and Worsley, 1999a), which are typically based on: 1) estimation of the covariance (or correlation) structure among different brain regions and 2) statistical inference about the resulting correlation maps. Indeed, the most difficult task is the one related to the statistical inference, that is, to make inferences about the null hypothesis H0: R(s, t) = 0 for all s, t, where R(s, t) is an image of correlation coefficients (i.e. R(s, t) = Corr(Y(s), Y(t))). Since one is usually looking for pairs of highly correlated regions, the maximum over s and t of the statistical image R (denoted by Rmax) is a natural test statistic for H0 (Cao and Worsley, 1999a). Therefore, a threshold based on the upper quantiles of the distribution of Rmax under H0 should be chosen, and pairs of regions where R exceeds that threshold would be declared as statistically significantly correlated. Notice that the previous analysis involves the application of several (usually thousands of) non-independent (correlated) statistical tests, which is known as the multiple comparisons problem. In this situation, the application of the well-known Bonferroni-type corrections would produce a conservative decision threshold due to the loss of statistical power. Another method for correcting for multiple comparisons is to choose a suitable threshold while controlling the False Discovery Rate (FDR) (Genovese et al., 2002), defined as the expected proportion of falsely rejected voxels among those rejected. As commented in Genovese et al. (2002), an FDR threshold becomes more conservative as the correlation between tests increases, which is a limitation for typical brain imaging data.

Indeed, highly correlated tests are usually obtained after common brain imaging pre-processing steps (e.g. spatial smoothing for increasing the signal-to-noise ratio). An alternative approach for solving the multiple comparisons problem with highly correlated tests is given by Random Field Theory (Adler, 1981; Worsley et al., 1998; Cao and Worsley, 1999a; Worsley et al., 2005). As usual in the neuroimaging literature, brain images are modeled as Gaussian random fields (Worsley et al., 1992; Friston et al., 1993b), and typical hypothesis testing problems produce random fields with known distributions such as Z, t, χ², F, T² (Hasofer, 1978; Worsley et al., 1992; Worsley, 1994; Cao and Worsley, 1999b). Random Field Theory then consists of approximating the P-value that the local maximum exceeds a high threshold via the geometry of the excursion set, the set of points where the random field exceeds the threshold value. For high thresholds the excursion set consists of isolated regions, each containing a local maximum, and the number of such regions is determined by the Euler Characteristic of the set. For high thresholds near the maximum of the random field, the Euler Characteristic takes the value one if the maximum is above the threshold, and zero otherwise. As a consequence, for high thresholds, the expected Euler Characteristic approximates the P-value of the maximum of the random field. Explicit expressions for the expected Euler Characteristic have already been obtained for cross-correlation, homologous correlation and maximal canonical correlation random fields (Cao and Worsley, 1999a; Worsley et al., 2004; Taylor and Worsley, 2008). In this way, the problem of detecting functional connectivity through correlation random fields has been successfully solved.

Another related and perhaps more interesting problem is to characterize the variability of the correlation R(s, t) among brain regions from one experimental condition to another, or to relate this variability to external variables such as gender, age, etc. (Kim et al., 2008; Cohen et al., 2008; Jafri et al., 2008). The aim of this paper is to provide a solution to this problem using the Random Field Theory approach. The motivation comes from analyzing Electroencephalography (EEG) data recorded during a face recognition experimental paradigm, where a subject observes images of unfamiliar faces and scrambled versions of the same faces. We then seek differences in the data correlation structure between the two experimental conditions (unfamiliar and scrambled). A classical statistical approach for the comparison of correlation coefficients is the so-called Fisher's Z transformation (Fisher, 1921). The key idea is that a correlation coefficient R(X, Y) between the random variables X and Y can be transformed to an approximately normal (standard Gaussian) variable Z by means of the Fisher's Z transformation Z(R) = (1/2) ln((1 + R)/(1 − R)). In this way, the null hypothesis of equality between independent correlation coefficients R1(X1, Y1) and R2(X2, Y2) (H0: R1 = R2) can be tested through the statistic G = Z(R1) − Z(R2) (with a corresponding variance correction). In principle one could use this approach for the case of independent Gaussian random fields Xi(s) and Yi(t), i = 1, 2, defined over voxels s and t, respectively. In fact, this has already been proposed in Chung et al. (2005), where the authors interpreted Z as a Gaussian random field and approximated the P-value of Zmax by random field theory.
Unfortunately, such an approach has a major drawback: even though G(s, t) is a Gaussian random variable for each (s, t), G = G(s, t) is not a Gaussian random field. In this paper we overcome this limitation by presenting results for the correct application of random field theory to the Fisher's Z transformation of correlation random fields. Our proposal is based on general results presented in Vanmarcke (1983) and Worsley et al. (1992) for non-Gaussian random fields. Specifically, expressions for the expected Euler Characteristic of a non-Gaussian random field can be obtained by the following steps: 1) "Gaussianize" the random field (i.e. transform it to a random field with a Gaussian distribution at each point) and 2) apply standard results of random field theory to this "Gaussianized" version, with a certain correction for its spatial smoothness. Hence, we consider G(s, t) as a "Gaussianized" random field and provide explicit expressions for such a smoothness correction term in a quite general setting.

The plan of the paper is the following. In Section 2 we present general results about the approximation of the distribution of the maximum of a random field through the expected Euler Characteristic of its excursion sets. In Section 3 we present the main results of this paper; that is, we work out the random field theory needed for the comparison of two independent correlation random fields. Finally, in Section 4 we evaluate the performance of our method on both simulated data and real EEG data.

2 The geometry of random fields

Let Z(u), u ∈ U ⊂ R^D, be a real-valued isotropic Gaussian random field with E(Z) = 0, Var(Z) = 1 and Var(Ż) = Λ, where Ż denotes the derivative of Z. As we mentioned in the Introduction, an approximation to the P-value of the maximum of Z, at high thresholds z, is given by the expected Euler Characteristic of U ∩ A_z, where A_z = {Z ≥ z} is called the excursion set of Z at level z. The Euler Characteristic χ(U ∩ A_z) counts the number of connected components of the excursion set A_z minus the number of holes. For high thresholds z, the holes tend to disappear and the Euler Characteristic counts the number of peaks above z. For even higher thresholds z near Z_max = max_{u∈U} Z(u), the Euler Characteristic takes the value 1 if the maximum is above z, and 0 otherwise.

Thus, for high thresholds, E(χ(U ∩ A_z)) approximates P(Z_max ≥ z) (Adler, 1981). The advantage of using the Euler Characteristic is that the following expression has been found for its expectation:

P\Big(\max_{u \in U} Z(u) \ge z\Big) \approx E\big(\chi(\{u \in U : Z(u) \ge z\})\big) = \sum_{d=0}^{D} \mu_d(U)\, \rho_d^Z(z),

where μ_D(U) is the Lebesgue measure of U, μ_d(U), d = 0, ..., D − 1, denotes the d-dimensional intrinsic volume or Minkowski functional of U, and ρ_d^Z(z) denotes the d-dimensional Euler Characteristic Gaussian density (Adler, 1981),

\rho_0^Z(z) = P(Z \ge z),
\rho_d^Z(z) = \det(\Lambda)^{d/(2D)}\, (2\pi)^{-(d+1)/2}\, e^{-z^2/2}\, H_{d-1}(z), \quad d \ge 1,

with Z distributed as a standard normal random variable, and

H_d(z) = d! \sum_{i=0}^{[d/2]} \frac{(-1)^i\, z^{d-2i}}{2^i\, i!\, (d-2i)!}, \quad d \ge 1,

denotes the dth Hermite polynomial (Stegun and Abramowitz, 1978).

Let T(u), u ∈ U ⊂ R^D, be a real-valued isotropic Gaussian-type random field (made up from i.i.d. Gaussian random fields with the same matrix Λ each) and let G be a "Gaussianization" of T (see the remark below). Then, according to results in Vanmarcke (1983) and Worsley et al. (1992), one has

\rho_d^G(z) = \lambda^{d/2}\, \det(\Lambda)^{d/(2D)}\, (2\pi)^{-(d+1)/2}\, e^{-z^2/2}\, H_{d-1}(z), \quad d \ge 1,

where the correction term λ is given by

\lambda = \left( \frac{\det(\mathrm{Var}(\dot{G}))}{\det(\mathrm{Var}(\dot{Z}))} \right)^{1/D}.

Remark 1 There are different ways of transforming a non-Gaussian random field T to one with a Gaussian distribution at each point (i.e. to "Gaussianize" a random field). A natural way to do so (Worsley et al., 1992) is via the transformation G = p_Z^{-1}(p_T(T)), where p_T is the probability distribution function of T at each point and p_Z is the standard Gaussian distribution function. As we mentioned in the Introduction, our "Gaussianization" is carried out by means of the Fisher's Z transformation.
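As a concrete illustration of these formulas (our own addition, not part of the original text), the Hermite polynomials H_d and the Gaussian EC densities ρ_d^Z can be coded in a few lines. The sketch below assumes an isotropic field whose roughness matrix is Λ = λ₀ I_D with a scalar λ₀ (so that det(Λ)^{d/(2D)} = λ₀^{d/2}); the code symbol lam0 stands for this λ₀ and is not the correction term λ defined above.

```python
import math

def hermite(d, z):
    """Hermite polynomial H_d(z), from the finite-sum definition above."""
    return math.factorial(d) * sum(
        (-1) ** i * z ** (d - 2 * i)
        / (2 ** i * math.factorial(i) * math.factorial(d - 2 * i))
        for i in range(d // 2 + 1)
    )

def ec_density_gaussian(d, z, lam0):
    """EC density rho_d^Z(z) of a unit-variance isotropic Gaussian field with
    Var(dZ) = lam0 * I_D, so that det(Lambda)^(d/(2D)) = lam0^(d/2)."""
    if d == 0:
        return 0.5 * math.erfc(z / math.sqrt(2.0))  # rho_0(z) = P(Z >= z)
    return (lam0 ** (d / 2.0) * (2 * math.pi) ** (-(d + 1) / 2.0)
            * math.exp(-0.5 * z * z) * hermite(d - 1, z))

def p_value_zmax(z, mu, lam0):
    """Approximate P(max Z >= z) = sum_d mu_d(U) * rho_d^Z(z), where
    mu = [mu_0(U), ..., mu_D(U)] are the intrinsic volumes of U."""
    return sum(mu_d * ec_density_gaussian(d, z, lam0) for d, mu_d in enumerate(mu))

# Hypothetical example: a 1-D interval U = [0, 100] (mu = [1, 100]) with
# FWHM = 10, i.e. lam0 = 4*log(2)/FWHM**2 (cf. the FWHM definition in Section 3).
# print(p_value_zmax(3.5, [1, 100], 4 * math.log(2) / 10 ** 2))
```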

3 Testing equality of cross-correlation random fields

Let X_i(s) = (X_{1i}(s), ..., X_{ν_i i}(s)), s ∈ S ⊂ R^M, and Y_i(t) = (Y_{1i}(t), ..., Y_{ν_i i}(t)), t ∈ T ⊂ R^N, i = 1, 2, be independent vectors of ν_i i.i.d. smooth stationary zero-mean Gaussian random fields. Then the (M + N)-dimensional cross-correlation random fields R_i(s, t), i = 1, 2, are defined by the sample correlation (Cao and Worsley, 1999a):

R_i(s,t) = \frac{X_i(s)' Y_i(t)}{\sqrt{X_i(s)' X_i(s)\; Y_i(t)' Y_i(t)}}, \quad i = 1, 2.   (1)

The Fisher's Z transformation

Z_i(s,t) = \frac{1}{2} \ln\!\left( \frac{1 + R_i(s,t)}{1 - R_i(s,t)} \right), \quad i = 1, 2,

can be used for testing the null hypothesis H_0 : R_1(s,t) = R_2(s,t) for all s and t. Indeed, Z_i(s,t), i = 1, 2, are independent random fields with an (approximately) Gaussian distribution (Kenney, 1951), with zero mean and variance 1/(ν_i − 3), i = 1, 2, at each point (s, t). In other words, Z_1(s,t) and Z_2(s,t) are "Gaussianized" versions of R_1(s,t) and R_2(s,t), respectively. Hence, the "Gaussianized" random field

G(s,t) = \frac{Z_1(s,t) - Z_2(s,t)}{\sqrt{\frac{1}{\nu_1 - 3} + \frac{1}{\nu_2 - 3}}}   (2)

can be used for testing H_0. As usual in random field theory, thresholding G(s, t) only requires evaluating the corresponding expected EC. According to the previous section,

E\big(\chi(\{(s,t) \in S \times T : G(s,t) \ge z\})\big) = \sum_{d=0}^{M+N} \mu_d(S \times T)\, \rho_d^G(z),

where

\rho_d^G(z) = \det(\mathrm{Var}(\dot{G}))^{d/(2D)}\, (2\pi)^{-(d+1)/2}\, e^{-z^2/2}\, H_{d-1}(z), \quad d \ge 1,   (3)

with D = M + N, and ρ_0^G(z) = ρ_0^Z(z) = P(Z ≥ z). An explicit expression for the smoothness correction term Var(Ġ) is given by the following theorem.

Theorem 2 If Var(Ẋ_i) = Λ_x^i and Var(Ẏ_i) = Λ_y^i, i = 1, 2, then

\mathrm{Var}(\dot{G}) = \frac{1}{\nu_1 + \nu_2 - 6}
\begin{pmatrix} (\nu_2 - 3)\Lambda_x^1 + (\nu_1 - 3)\Lambda_x^2 & 0 \\ 0 & (\nu_2 - 3)\Lambda_y^1 + (\nu_1 - 3)\Lambda_y^2 \end{pmatrix}.

Proof. According to Lemma 4.2 in Cao and Worsley (1999a),

\dot{R}_i^s \overset{D}{=} (1 - R_i^2)^{1/2}\, a_i^{-1/2}\, (\Lambda_x^i)^{1/2}\, z_{ix}, \qquad
\dot{R}_i^t \overset{D}{=} (1 - R_i^2)^{1/2}\, b_i^{-1/2}\, (\Lambda_y^i)^{1/2}\, z_{iy}, \quad i = 1, 2,

where a_i ∼ χ²_{ν_i}, b_i ∼ χ²_{ν_i}, z_{ix} ∼ Normal_M(0, I_M), z_{iy} ∼ Normal_N(0, I_N), i = 1, 2, independently and independent of R_1 and R_2. Here, \overset{D}{=} means equality in distribution and χ²_ν denotes the chi-square distribution with ν degrees of freedom. Hence,

\dot{Z}_i^s \overset{D}{=} (1 - R_i^2)^{-1/2}\, a_i^{-1/2}\, (\Lambda_x^i)^{1/2}\, z_{ix}, \qquad
\dot{Z}_i^t \overset{D}{=} (1 - R_i^2)^{-1/2}\, b_i^{-1/2}\, (\Lambda_y^i)^{1/2}\, z_{iy}, \quad i = 1, 2,

and so,

\dot{G}^s \overset{D}{=} c\big((1 - R_1^2)^{-1/2} a_1^{-1/2} (\Lambda_x^1)^{1/2} z_{1x} - (1 - R_2^2)^{-1/2} a_2^{-1/2} (\Lambda_x^2)^{1/2} z_{2x}\big),   (4)
\dot{G}^t \overset{D}{=} c\big((1 - R_1^2)^{-1/2} b_1^{-1/2} (\Lambda_y^1)^{1/2} z_{1y} - (1 - R_2^2)^{-1/2} b_2^{-1/2} (\Lambda_y^2)^{1/2} z_{2y}\big),   (5)

where c = \big(\tfrac{1}{\nu_1 - 3} + \tfrac{1}{\nu_2 - 3}\big)^{-1/2}. On the other hand, since Ġ = (Ġ^s, Ġ^t)', then

\mathrm{Var}(\dot{G}) = \begin{pmatrix} E\big(\dot{G}^s (\dot{G}^s)'\big) & E\big(\dot{G}^s (\dot{G}^t)'\big) \\ E\big(\dot{G}^t (\dot{G}^s)'\big) & E\big(\dot{G}^t (\dot{G}^t)'\big) \end{pmatrix}.

Hence, according to (4)-(5), E(Ġ^s(Ġ^t)') = E(Ġ^t(Ġ^s)') = 0 and

E\big(\dot{G}^s (\dot{G}^s)'\big) = c^2\big[E\big((1 - R_1^2)^{-1}\big) E(a_1^{-1})\, \Lambda_x^1 + E\big((1 - R_2^2)^{-1}\big) E(a_2^{-1})\, \Lambda_x^2\big],
E\big(\dot{G}^t (\dot{G}^t)'\big) = c^2\big[E\big((1 - R_1^2)^{-1}\big) E(b_1^{-1})\, \Lambda_y^1 + E\big((1 - R_2^2)^{-1}\big) E(b_2^{-1})\, \Lambda_y^2\big].

Since a_i ∼ χ²_{ν_i}, b_i ∼ χ²_{ν_i} and \sqrt{\nu_i - 1}\, R_i / \sqrt{1 - R_i^2} ∼ t_{ν_i−1}, i = 1, 2 (t_ν denotes the t distribution with ν degrees of freedom), then

E(a_i^{-1}) = E(b_i^{-1}) = \frac{1}{\nu_i - 2}, \qquad E\big((1 - R_i^2)^{-1}\big) = \frac{\nu_i - 2}{\nu_i - 3}, \quad i = 1, 2,

and finally,

\mathrm{Var}(\dot{G}) = \frac{1}{\nu_1 + \nu_2 - 6}
\begin{pmatrix} (\nu_2 - 3)\Lambda_x^1 + (\nu_1 - 3)\Lambda_x^2 & 0 \\ 0 & (\nu_2 - 3)\Lambda_y^1 + (\nu_1 - 3)\Lambda_y^2 \end{pmatrix}.
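For computation, formula (2) and the correction of Theorem 2 translate directly into code. The sketch below is our own addition (array and argument names are hypothetical, not the authors' implementation): R1 and R2 are arrays holding the sample cross-correlations of the two groups on a common (s, t) grid, and the roughness matrices are those appearing in the theorem.

```python
import numpy as np

def fisher_z_difference(R1, R2, nu1, nu2):
    """Gaussianized field G of equation (2): standardized difference of the
    Fisher Z transforms of two sample-correlation arrays of the same shape,
    built from nu1 and nu2 i.i.d. component fields, respectively."""
    Z1, Z2 = np.arctanh(R1), np.arctanh(R2)   # arctanh(R) = 0.5*log((1+R)/(1-R))
    return (Z1 - Z2) / np.sqrt(1.0 / (nu1 - 3) + 1.0 / (nu2 - 3))

def var_G_dot(Lx1, Lx2, Ly1, Ly2, nu1, nu2):
    """Block-diagonal Var(dG) of Theorem 2, given the M x M roughness matrices
    Lambda_x^i = Var(dX_i) and the N x N matrices Lambda_y^i = Var(dY_i)."""
    top = ((nu2 - 3) * np.asarray(Lx1) + (nu1 - 3) * np.asarray(Lx2)) / (nu1 + nu2 - 6)
    bot = ((nu2 - 3) * np.asarray(Ly1) + (nu1 - 3) * np.asarray(Ly2)) / (nu1 + nu2 - 6)
    M, N = top.shape[0], bot.shape[0]
    out = np.zeros((M + N, M + N))
    out[:M, :M], out[M:, M:] = top, bot
    return out
```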

Notice that in the theorem above we have considered a general situation by allowing different smoothness for the Gaussian random fields X_i and Y_i, i = 1, 2. In the neuroimaging literature a Gaussian random field Z(u), u ∈ U ⊂ R^D, is often modeled as white noise convolved with an isotropic Gaussian filter. The width of the filter is measured by its Full Width at Half Maximum (FWHM) (Worsley et al., 1992), which is a standard parameter for characterizing the smoothness of a random field (the higher the FWHM, the smoother the random field). It is then straightforward to show that

\mathrm{Var}(\dot{Z}) = \Lambda = \frac{4\log 2}{FWHM^2}\, I_D,

which motivates us to define

FWHM = (4\log 2)^{1/2}\, \det(\Lambda)^{-1/(2D)}

for any stationary random field. We can then express the correction term det(Var(Ġ))^{d/(2D)} in (3) by means of the FWHM as follows.

Corollary 3 The FWHM of the random field G is given by

FWHM_G = (\nu_1 + \nu_2 - 6)^{1/2}\, \big[(\nu_2 - 3)\, FWHM_{X_1}^{-2} + (\nu_1 - 3)\, FWHM_{X_2}^{-2}\big]^{-M/(2(M+N))} \times \big[(\nu_2 - 3)\, FWHM_{Y_1}^{-2} + (\nu_1 - 3)\, FWHM_{Y_2}^{-2}\big]^{-N/(2(M+N))},   (6)

where FWHM_{X_i}, FWHM_{Y_i}, i = 1, 2, are the FWHM of X_i and Y_i, respectively.

Therefore, for a given significance level α, the null hypothesis H_0 is rejected at values of G exceeding the threshold z for which P(max_{(s,t)∈S×T} G(s,t) ≥ z) ≤ α, where

P\Big(\max_{(s,t) \in S \times T} G(s,t) \ge z\Big) \approx \sum_{d=0}^{M+N} \mu_d(S \times T)\, \frac{(4\log 2)^{d/2}}{FWHM_G^{\,d}}\, (2\pi)^{-(d+1)/2}\, e^{-z^2/2}\, H_{d-1}(z)
 = \sum_{i=0}^{M} \sum_{j=0}^{N} \frac{(2\pi)^{-(i+j+1)/2}\, (4\log 2)^{(i+j)/2}}{FWHM_G^{\,i+j}}\, \mu_i(S)\, \mu_j(T)\, e^{-z^2/2}\, H_{i+j-1}(z),   (7)

with the convention that the d = 0 (i = j = 0) term equals P(Z ≥ z), as in the definition of ρ_0.
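Corollary 3 and formula (7) give a fully numerical recipe for the decision threshold. The sketch below is our own addition (argument names are hypothetical): it computes FWHM_G from (6), evaluates the right-hand side of (7) with the d = 0 term taken as P(Z ≥ z), and inverts it for a target level α by bisection.

```python
import math

def hermite(d, z):
    """Hermite polynomial H_d(z) (same finite-sum definition as in Section 2)."""
    return math.factorial(d) * sum(
        (-1) ** i * z ** (d - 2 * i)
        / (2 ** i * math.factorial(i) * math.factorial(d - 2 * i))
        for i in range(d // 2 + 1)
    )

def fwhm_G(fx1, fx2, fy1, fy2, nu1, nu2, M, N):
    """FWHM of G from equation (6), given the FWHMs of X_1, X_2, Y_1, Y_2."""
    A = (nu2 - 3) * fx1 ** -2 + (nu1 - 3) * fx2 ** -2
    B = (nu2 - 3) * fy1 ** -2 + (nu1 - 3) * fy2 ** -2
    return (math.sqrt(nu1 + nu2 - 6)
            * A ** (-M / (2.0 * (M + N)))
            * B ** (-N / (2.0 * (M + N))))

def p_max_G(z, mu_S, mu_T, fwhm):
    """Right-hand side of (7): approximate P(max G >= z) over S x T, where
    mu_S = [mu_0(S), ..., mu_M(S)] and mu_T = [mu_0(T), ..., mu_N(T)]."""
    total = 0.0
    for i, mi in enumerate(mu_S):
        for j, mj in enumerate(mu_T):
            d = i + j
            if d == 0:
                total += mi * mj * 0.5 * math.erfc(z / math.sqrt(2.0))  # P(Z >= z)
            else:
                total += (mi * mj
                          * (2 * math.pi) ** (-(d + 1) / 2.0)
                          * (4 * math.log(2)) ** (d / 2.0) / fwhm ** d
                          * math.exp(-0.5 * z * z) * hermite(d - 1, z))
    return total

def threshold(alpha, mu_S, mu_T, fwhm, lo=1.0, hi=20.0, iters=200):
    """Smallest z with P(max G >= z) <= alpha, found by bisection (assumes the
    approximation decreases in z over [lo, hi])."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if p_max_G(mid, mu_S, mu_T, fwhm) > alpha:
            lo = mid
        else:
            hi = mid
    return hi
```

For the 1-D simulation of Section 4.1, for instance, one would call threshold(0.05, [1, 128], [1, 128], fwhm_G(6, 7, 9, 10, 20, 30, 1, 1)).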

Finally, some comments on the computational implementation of the previous results. Notice that formula (7) requires numerical approximation of the FWHM as well as of the intrinsic volumes μ_i(S), i = 0, ..., M, and μ_j(T), j = 0, ..., N. On the one hand, the FWHMs involved in the definition of FWHM_G (see the previous corollary) are easily estimated according to Kiebel et al. (1999). On the other hand, since real applications are usually carried out for the case M = N = 3 (3D brain regions S and T), the intrinsic volumes of the whole brain are commonly approximated by those of a sphere with the same volume, which gives a very good approximation even for non-spherical search regions. Indeed, the approximation of intrinsic volumes of spherical regions is easily carried out by following Worsley et al. (1996). Hence, the intrinsic volumes of any other particular brain region, such as a hemisphere, are approximated accordingly.
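For the sphere approximation mentioned above, the intrinsic volumes of a solid 3-D ball have a simple closed form; the helper below is our own sketch of that convention (the paper itself follows Worsley et al. (1996), whose resel-based estimates may differ in detail).

```python
import math

def ball_intrinsic_volumes(volume):
    """Intrinsic volumes [mu_0, mu_1, mu_2, mu_3] of a solid 3-D ball matched to
    the search region's volume: mu_0 = 1, mu_1 = 4r, mu_2 = 2*pi*r^2,
    mu_3 = (4/3)*pi*r^3, with r the radius of the volume-matched ball."""
    r = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return [1.0, 4.0 * r, 2.0 * math.pi * r ** 2, (4.0 / 3.0) * math.pi * r ** 3]
```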

4 Applications

In this section we illustrate the performance of our method by means of numerical simulations and real data recorded in an Electroencephalography (EEG) experiment. The simulated data correspond to 1-D images (M = N = 1), while the real EEG data are interpreted as a multivariate time series whose dimension is given by the number of scalp recordings. As shown below, one can reconstruct 3D images of the sources inside the brain that generate such electrical measurements. Hence, our real data consist of 3D images (M = N = 3) resulting from such a reconstruction.

4.1 Simulations

A number of ν_1 and ν_2 pairs of 1-D (M = N = 1) Gaussian random fields (X_i^k(s), Y_i^k(t)), k = 1, 2, i = 1, ..., ν_k, s ∈ S, t ∈ T, S = T = [0, 128], were simulated as in Cao and Worsley (1999a). For each k = 1, 2, we first simulated ν_k pairs of independent white noise processes u_i^k(s), w_i^k(t), i = 1, ..., ν_k, on a larger region, and ν_k pairs of scalar normal random variables U_i^k, W_i^k with zero mean, unit variance and correlation β_k, independent of u_i^k(s), w_i^k(t).


Figure 1: Top: Fisher's Z transformations Z1 and Z2 of two independent cross-correlation random fields with sample sizes ν_1 = 20 and ν_2 = 30, respectively. Signals were added at positions (32, 32), (16, 80), (64, 64) in Z1 and at (32, 32), (16, 80) in Z2. Bottom: Gaussianized random field G and its thresholding at z = 4.19.

Let f̃ be a Gaussian function scaled so that the convolution of f̃ with itself is f(s) = e^{−s²/2}. Then, for each k = 1, 2,

X_i^k(s) = \int \tilde f(s - s_1)\, u_i^k(s_1)\, ds_1 + f(s - s_0^k)\Big(U_i^k - \int \tilde f(s_1)\, u_i^k(s_1)\, ds_1\Big),
Y_i^k(t) = \int \tilde f(t - s_1)\, w_i^k(s_1)\, ds_1 + f(t - t_0^k)\Big(W_i^k - \int \tilde f(s_1)\, w_i^k(s_1)\, ds_1\Big)

are smooth independent Gaussian random fields with a correlated "signal" at the positions s_0^k and t_0^k, respectively. Additional signals can be incorporated at different points (s_j^k, t_j^k), j = 1, ..., p_k, by adding p_k + 1 terms to X_i^k(s) and Y_i^k(t) above, instead of one, each with independent U_i^k, W_i^k and correlation β_k.

The top panel of Figure 1 shows the random fields Z1 and Z2 resulting from applying the Fisher's Z transformation to the cross-correlation random fields (1) corresponding to the parameters ν_1 = 20, ν_2 = 30, (s_0^1, t_0^1) = (s_0^2, t_0^2) = (32, 32), (s_1^1, t_1^1) = (s_1^2, t_1^2) = (16, 80), (s_2^1, t_2^1) = (64, 64), β_1 = 0.85 and β_2 = 0.7. The smoothness parameters were chosen as FWHM_{X_1} = 6, FWHM_{Y_1} = 9, FWHM_{X_2} = 7, FWHM_{Y_2} = 10. The bottom left panel of Figure 1 shows the resulting Gaussianized random field G(s, t), whose smoothness, according to (6), is characterized by FWHM_G = 7.27. As expected, the random field G has high values around the position (s, t) = (64, 64), since similar signals were added to both Z1 and Z2 at (32, 32) and (16, 80), whereas the signal at (64, 64) is present only in Z1. This is evident in the bottom right panel of Figure 1, where the random field G was thresholded at the value z = 4.19, obtained by formula (7) with significance level α = 0.05.
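This simulation design can be sketched in code. The version below is a simplified re-implementation of our own (not the authors' code): each field is unit-variance smoothed white noise plus a Gaussian-shaped signal carrying the shared random amplitude, which reproduces the design qualitatively but does not match the exact kernel scaling of Cao and Worsley (1999a); all function and argument names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_field(n, fwhm, rng, pad=64):
    """Approximately unit-variance 1-D Gaussian field: white noise smoothed with
    a Gaussian kernel of the given FWHM, simulated on a larger region and cropped."""
    sigma = fwhm / np.sqrt(8.0 * np.log(2.0))
    e = gaussian_filter1d(rng.standard_normal(n + 2 * pad), sigma)[pad:pad + n]
    return e / e.std()                      # empirical standardization

def bump(n, center, fwhm):
    """Gaussian signal profile of the same width as the field."""
    sigma = fwhm / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-0.5 * ((np.arange(n) - center) / sigma) ** 2)

def simulate_pair(n, nu, fwhm_x, fwhm_y, s0, t0, beta, rng):
    """nu i.i.d. pairs (X_i, Y_i) of smooth 1-D fields with a correlated signal
    at (s0, t0): the amplitudes (U_i, W_i) are standard normal with correlation beta."""
    X = np.stack([smooth_field(n, fwhm_x, rng) for _ in range(nu)])
    Y = np.stack([smooth_field(n, fwhm_y, rng) for _ in range(nu)])
    z1, z2 = rng.standard_normal(nu), rng.standard_normal(nu)
    U, W = z1, beta * z1 + np.sqrt(1.0 - beta ** 2) * z2
    X += U[:, None] * bump(n, s0, fwhm_x)
    Y += W[:, None] * bump(n, t0, fwhm_y)
    return X, Y

def cross_correlation(X, Y):
    """Sample cross-correlation field R(s, t) of equation (1) (no centering,
    since the component fields are zero mean by construction)."""
    num = X.T @ Y
    den = np.sqrt(np.outer((X ** 2).sum(axis=0), (Y ** 2).sum(axis=0)))
    return num / den

# Hypothetical usage for one group: nu1 = 20 pairs, signal at (64, 64), beta1 = 0.85.
# rng = np.random.default_rng(0)
# X1, Y1 = simulate_pair(128, 20, 6, 9, 64, 64, 0.85, rng)
# R1 = cross_correlation(X1, Y1)            # then apply np.arctanh as in (2)
```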


Figure 2: Left panel: Average time series of some representative electrodes for conditions Unfamiliar (blue) and Scrambled (red). Right panel: Average scalp distribution of the 128 electrodes and corresponding average 3D inverse solution at time instant t = 170 ms.

4.2 Real EEG data

Real EEG data were obtained during a face recognition experimental paradigm (Henson et al., 2003) that consisted of the presentation of 76 images of faces to the subject: 38 famous faces (F) and 38 unfamiliar faces (U). Additionally, each of the facial images was followed by the presentation of a scrambled (S) version of the same face. Faces were presented at time zero, and EEG data over 128 electrodes were recorded over 800 milliseconds (time window W = [−200, 600] ms). Hence, for each of the three conditions (F, U and S), our primary data consist of independent repetitions (trials) of 38, 38 and 76 multivariate time series, respectively (128 time series over W = [−200, 600] ms). As explained in Henson et al. (2003), differences between conditions U and S are related to the face perception process, whereas differences between F and U are associated with the face recognition process. In this paper, we use only the data corresponding to the face perception process (i.e. conditions U and S). To get an idea of these data, the left panel of Figure 2 shows the overall mean time series for conditions U (in blue) and S (in red) (averaged over all respective independent observations) corresponding to 6 representative electrodes, where the letters L, R, A and P in the labels mean Left, Right, Anterior and Posterior regions, respectively. It can be seen that the comparison between conditions U and S reveals a pronounced right posterior negativity around t = 170 ms, which is known as the N170 component (see Henson et al. (2003) for further details). This is more evident in the top right panel of Figure 2, which shows the average scalp distribution of the 128 electrode measurements at time instant t = 170 ms. At this point, one should be cautious when interpreting the EEG topography in terms of neural activation regions, which requires the use of specific tomographic techniques. Indeed, Low Resolution Electromagnetic Tomography (LORETA) (Pascual-Marqui et al., 1994) allows the reconstruction of the 3D neural generators of the 128 EEG electrical recordings.


Figure 3: Top Panel: Correlation random fields R1 and R2 corresponding to conditions Unfamiliar and Scrambled at time instant t = 170ms. Bottom panel: Corresponding random field Z and 3D representation of significant points after thresholding at z = 6.728.


Specifically, the relationship between the electric field J(t) at all voxels in the brain (written as a vector) at time t = 170 ms and the EEG measurements M(t) at all electrodes on the scalp (written as a vector of length 128) is given by the simple linear model M(t) = KJ(t) + e(t), where the lead field K is a known function of the position of the electrodes on the scalp, the position of the points inside the brain, and the electrical properties of the brain tissue and scalp. The measurement error e(t) is assumed to be a vector of independent Gaussian processes at time t. Then, the LORETA solution of the inverse problem is given by

\hat{J}(t) = (K'K + \lambda^2 L'L)^{-1} K' M(t),

where λ is a regularization parameter and L is a discrete version of the Laplacian operator that forces the solution to be spatially smooth. In our case we chose parameters in such a way that LORETA solutions were mapped over 3244 3D voxels of dimension 7mm × 7mm × 7mm each. Then, for each condition U and S, LORETA 3D images were calculated at t = 170 ms for each of the 38 trials. The bottom right panel of Figure 2 shows, for each condition, a maximum intensity projection map of the average of these 38 solutions. Notice that, for condition U, high positive values appear in both hemispheres over the posterior (occipital) region, which concurs with the corresponding topographic map. On the other hand, condition S reveals very low negative values in the posterior left hemisphere as well as positive values in the anterior (frontal) right hemisphere, which is in full agreement with the experimental paradigm (Gobbini and Haxby, 2007). We used the residuals obtained by subtracting the average of each group (unfamiliar and scrambled) from the data in that group, to give ν_i = 38 − 1 = 37 effectively independent observations for each condition. Our further analysis is then based on seeking differences in the inter-hemispheric correlations between conditions U and S. To this end, we choose s ∈ S = Left Hemisphere and t ∈ T = Right Hemisphere, and take X_i(s), Y_i(t), i = 1, 2, as the ν_i = 37 (zero mean) independent 3D images corresponding to conditions U and S, respectively. As previously mentioned, we use Kiebel et al. (1999) to obtain the estimates FWHM_{X_1} = 19.3180 mm, FWHM_{X_2} = 20.3741 mm, FWHM_{Y_1} = 20.3432 mm and FWHM_{Y_2} = 22.2544 mm, which according to (6) give FWHM_G = 20.5177 mm. Additionally, the estimates of the intrinsic volumes are μ_{0,1,2,3}(S) = μ_{0,1,2,3}(T) = 1, 191.24 mm, 1.395 × 10^4 mm², 5.037 × 10^5 mm³. Then, according to (7), at significance level α = 0.05 we obtain the decision threshold z = 6.728. The top panel of Figure 3 shows the correlation random fields R_1(s, t) and R_2(s, t) corresponding to conditions U and S, respectively, whereas the bottom left panel shows the resulting random field Z(s, t). The bottom right panel of this figure also shows a 3D representation of the significant points (s, t) of Z(s, t) after thresholding at z = ±6.728 (for detecting both positive and negative significant regions). There are evident significant differences in the cross-correlation of conditions U and S between the temporal (middle) regions of both hemispheres. That is, the inter-hemispheric cross-correlations in the temporal region are significantly (positively) different between conditions U and S (R_1 > R_2, in red). Notice also significant positive differences between the right frontal (anterior) region and the left occipital (posterior) region.
In contrast, notice also negative differences (R_1 < R_2, in blue) between the left frontal and right occipital regions. These results are consistent with the classical cognitive "core model" for face recognition and perception (Gobbini and Haxby, 2007), which involves the processing and transfer of information in a neuronal network that includes bilateral brain structures in the occipital and frontal regions as well as in the superior-temporal area.
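The LORETA step used above is an ordinary penalized least-squares solve; a minimal numpy sketch (our own, with assumed array shapes, and with λ treated as given) is:

```python
import numpy as np

def loreta_inverse(K, M_t, L, lam):
    """Regularized inverse solution J_hat(t) = (K'K + lam^2 L'L)^{-1} K' M(t).
    K: (n_electrodes, n_voxels) lead field; M_t: (n_electrodes,) measurements at
    one time instant; L: discrete Laplacian over voxels; lam: regularization
    parameter controlling spatial smoothness."""
    A = K.T @ K + lam ** 2 * (L.T @ L)
    return np.linalg.solve(A, K.T @ M_t)
```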

5 Conclusions

In this paper we have presented a random field theory approach for dealing with differences between correlation random fields in hypothesis testing problems. In particular, we focused on the problem of testing equality of two independent cross-correlation random fields. It was shown how the Fisher's Z transformation can be used for this problem, provided certain corrections are made for a feasible application of random field theory. Our approach is quite general in the sense that it allows different smoothness parameter values for the Gaussian random fields involved in the analysis. This last consideration has important practical consequences for problems with real data coming from different groups of subjects, and perhaps under different experimental conditions as well, where equality of smoothness becomes a rather restrictive assumption. The performance of the proposed method was evaluated by means of both numerical simulations and real EEG data.


Acknowledgement

The authors would like to thank Rik Henson (MRC Cognition and Brain Sciences Unit, Cambridge) and Will Penny (FIL, Wellcome Department of Imaging Neuroscience, University College London) for kindly providing the experimental data.

References

Adler, R. J., 1981. The Geometry of Random Fields. John Wiley and Sons Inc.
Cao, J., Worsley, K., 1999a. The geometry of correlation fields with an application to functional connectivity of the brain. The Annals of Applied Probability 9 (4), 1021-1057.
Cao, J., Worsley, K. J., 1999b. The detection of local shape changes via the geometry of Hotelling's T² fields. The Annals of Statistics 27 (3), 925-942.
Chung, M., Dalton, K., Robbins, S., Evans, A., Davidson, R., 2005. Partial Correlation Mapping of Cognitive Measure and Cortical Thickness in Autism. Tech. Rep. 1109, Department of Statistics, University of Wisconsin-Madison.
Cohen, A. L., Fair, D. A., Dosenbach, N., Miezin, F. M., Dierker, D., Van Essen, D. C., Schlaggar, B. L., Petersen, S. E., 2008. Defining functional areas in individual human brains using resting functional connectivity MRI. Neuroimage, in press.
Fisher, R., 1921. On the probable error of a coefficient of correlation deduced from a small sample. Metron 1 (4), 3-32.
Friston, K., Buechel, C., Fink, G., Morris, J., Rolls, E., Dolan, R., 1997. Psychophysiological and modulatory interactions in neuroimaging. Neuroimage 6 (3), 218-229.
Friston, K., Frith, C., Liddle, P., Frackowiak, R., 1993a. Functional connectivity: the principal-component analysis of large (PET) data sets. Journal of Cerebral Blood Flow and Metabolism 13 (1), 5-14.
Friston, K., Worsley, K., Frackowiak, R., Mazziotta, J., Evans, A., 1993b. Assessing the significance of focal activations using their spatial extent. Human Brain Mapping 1 (3), 210-220.
Genovese, C., Lazar, N., Nichols, T., 2002. Thresholding of statistical maps in functional neuroimaging using the false discovery rate. Neuroimage 15 (4), 870-878.
Gobbini, M., Haxby, J., 2007. Neural systems for recognition of familiar faces. Neuropsychologia 45 (1), 32-41.
Hasofer, A., 1978. Upcrossings of random fields. Advances in Applied Probability 10, 14-21.
Henson, R., Goshen-Gottstein, Y., Ganel, T., Otten, L., Quayle, A., Rugg, M., 2003. Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex 13 (7), 793-805.
Horowitz, B., Grady, C., Mentis, M., Pietrini, P., Ungerleider, L., Rapoport, S., Haxby, J., 1996. Brain functional connectivity changes as task difficulty is altered. NeuroImage 3, S248.
Jafri, M. J., Pearlson, G. D., Stevens, M., Calhoun, V. D., 2008. A method for functional network connectivity among spatially independent resting-state components in schizophrenia. Neuroimage 39, 1666-1681.
Kenney, J., 1951. Mathematics of Statistics. D. Van Nostrand, Princeton, NJ.
Kiebel, S., Poline, J., Friston, K., Holmes, A., Worsley, K., 1999. Robust smoothness estimation in statistical parametric maps using standardized residuals from the general linear model. Neuroimage 10 (6), 756-766.
Kim, D., Pearlson, G., Kiehl, K., Bedrick, E., Demirci, O., Calhoun, V., 2008. A method for multi-group inter-participant correlation: Abnormal synchrony in patients with schizophrenia during auditory target detection. Neuroimage 39 (3), 1129-1141.
McIntosh, A., Gonzalez-Lima, F., 1994. Structural equation modeling and its application to network analysis in functional brain imaging. Human Brain Mapping 2 (1-2), 2-22.


Pascual-Marqui, R., Michel, C., Lehmann, D., 1994. Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. International Journal of Psychophysiology 18 (1), 49-65.
Stegun, I. A., Abramowitz, M., 1978. Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. National Bureau of Standards.
Strother, S., Anderson, J., Schaper, K., Sidtis, J., Liow, J., Woods, R., Rottenberg, D., 1995. Principal component analysis and the scaled subprofile model compared to intersubject averaging and statistical parametric mapping. I: Functional connectivity of the human motor system studied with 15-O water PET. Journal of Cerebral Blood Flow and Metabolism 15 (5), 738-753.
Taylor, J., Worsley, K., 2008. Random fields of multivariate test statistics, with an application to shape analysis. Annals of Statistics 36, 1-27.
Vanmarcke, E., 1983. Random Fields: Analysis and Synthesis. The MIT Press.
Worsley, K., Evans, A., Marrett, S., Neelin, P., 1992. A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism 12 (6), 900-918.
Worsley, K. J., 1994. Local maxima and the expected Euler characteristic of excursion sets of χ², F and t fields. Advances in Applied Probability 26 (1), 13-42.
Worsley, K., Marrett, S., Neelin, P., Vandal, A., Friston, K., Evans, A., 1996. A unified statistical approach for determining significant signals in images of cerebral activation. Human Brain Mapping 4 (1), 58-73.
Worsley, K., Cao, J., Paus, T., Petrides, M., Evans, A., 1998. Applications of random field theory to functional connectivity. Human Brain Mapping 6 (5-6), 364-367.
Worsley, K., Taylor, J., Tomaiuolo, F., Lerch, J., 2004. Unified univariate and multivariate random field theory. Neuroimage 23, 189-195.
Worsley, K., Chen, J., Lerch, J., Evans, A., 2005. Comparing functional connectivity via thresholding correlations and singular value decomposition. Philosophical Transactions of the Royal Society B: Biological Sciences 360 (1457), 913-920.

