The geometry of time-varying cross correlation random fields

F. Carbonell$^{1,2}$, K. J. Worsley$^{1}$, N. J. Trujillo-Barreto$^{3}$ and M. Vega-Hernandez$^{3}$

$^1$ Department of Mathematics and Statistics, McGill University, Montreal, Canada
$^2$ Institute for Cybernetics, Mathematics and Physics, Havana, Cuba
$^3$ Cuban Neuroscience Centre, Havana, Cuba

Abstract

The goal of this paper is to assess the P-value of local maxima of time-varying cross correlation random fields. The motivation comes from an electroencephalography (EEG) experiment in which one seeks connectivity between all pairs of voxels inside the brain at each time point of the recording window. In this way, we extend the results of (Cao & Worsley, 1999) by searching for high correlations not only over all pairs of voxels, but over all time points as well. We apply our results to an EEG data set from a face recognition paradigm. Our analysis determines the time instants at which there are significantly correlated regions involved in face recognition.

1 Introduction

The main idea behind functional connectivity of brain images is to establish a connection between two different regions of the brain on the basis of similar functional response. Several approaches have been followed for assessing functional connectivity [14], [16], [12], [7]. A more general approach uses the sample correlation between all pairs of voxels for estimating the connectivity between images [3], [22], [23], [24]. This has been applied to functional connectivity within resting state fMRI data [24] and to anatomical connectivity within cortical thickness measurements [23], [24]. The challenging statistical problem is how to assess the P-value of local maxima of the resulting smooth 6D cross correlation random field (RF), defined by the sample correlation between all pairs of voxels [3]. We extend this idea to time-varying cross correlation, as follows. Let $X_i(u,t)$ be the value of the $i$th RF of one set of RFs at spatial point $u \in U$ and time point $t$ in the time interval $[0,T]$, $i = 1, \ldots, \nu$. Let $Y_i(v,t)$ be the value of the $i$th RF of a second set of RFs at spatial point $v \in V$ and time point $t$, $i = 1, \ldots, \nu$. Then the time-varying connectivity between spatial points $u$ and $v$ at time point $t$ is measured by the correlation

$$
C(u,v,t) = \frac{\sum_{i=1}^{\nu} X_i(u,t)\,Y_i(v,t)}
{\sqrt{\sum_{i=1}^{\nu} X_i(u,t)^2 \;\; \sum_{i=1}^{\nu} Y_i(v,t)^2}}. \qquad (1)
$$
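Numerically, the estimator (1) is just a normalized inner product over the sample index $i$, so it can be computed for all $(u,v,t)$ at once with a single tensor contraction. The following sketch is our own illustration (the array names and shapes are assumptions, not from the paper); it takes the fields as already mean-zero, as in the model above:

```python
import numpy as np

def cross_correlation_field(X, Y):
    """Sample cross correlation C(u, v, t) of equation (1).

    X : array of shape (nu, U, T) -- nu realizations at U voxels, T time points
    Y : array of shape (nu, V, T)
    Returns C of shape (U, V, T), with entries in [-1, 1] by Cauchy-Schwarz.
    """
    # numerator: sum_i X_i(u,t) Y_i(v,t), for every (u, v, t)
    num = np.einsum('iut,ivt->uvt', X, Y)
    # denominator pieces: sqrt(sum_i X_i(u,t)^2) and sqrt(sum_i Y_i(v,t)^2)
    sx = np.sqrt((X ** 2).sum(axis=0))            # shape (U, T)
    sy = np.sqrt((Y ** 2).sum(axis=0))            # shape (V, T)
    return num / (sx[:, None, :] * sy[None, :, :])

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5, 8))   # nu = 40 fields, 5 voxels, 8 time points
Y = rng.standard_normal((40, 6, 8))
C = cross_correlation_field(X, Y)
print(C.shape)                        # (5, 6, 8)
```

Note that no mean is subtracted, matching the mean-zero Gaussian model of the paper; for real data one would first remove the sample mean over $i$.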
The motivation comes from a typical EEG experiment. A stimulus is presented to the subject and the current sources underlying the recorded evoked potential are reconstructed at all spatial points (voxels) in the brain at every time point measured from the onset of the stimulus. This is repeated over many trials. We then seek connectivity between all pairs of voxels at each time point.

In other words, we search for high correlations not only over all pairs of voxels, but over all time points as well. Hence, our main task is to calculate a threshold, based on RF theory, for controlling the false positive rate to say $P = 0.05$. Thus we need a good approximation to

$$
P\Big(\max_{u \in U,\; v \in V,\; t \in [0,T]} C(u,v,t) \ge c\Big).
$$

This problem was studied in the seminal works of Adler and Hasofer [1, 10] for the case of Gaussian RFs. The key idea consists of approximating the P-value that the local maximum exceeds a high threshold value via the geometry of the excursion set (i.e. the set of points where the RF exceeds the threshold value). For high thresholds, the excursion set consists of isolated regions, each containing a local maximum, and the Euler characteristic (EC) of the excursion set counts the number of such connected components. For thresholds near the maximum of the RF, the EC takes the value one if the maximum is above the threshold, and zero otherwise. Thus, for these high thresholds, the expected EC approximates the P-value of the maximum of the RF. During the last decade this approach has become an important tool in different areas of applied probability, owing to the many applications that have emerged, principally in medical imaging and astrophysics [9, 2, 18, 20, 21, 22, 5]. Results are also available for RFs of standard univariate statistics such as the $\chi^2$, $T$ and $F$ RFs [19]. Other useful extensions were obtained in [4] for Hotelling's $T^2$ and in [3] for cross correlation RFs. Recent results have been obtained for some multivariate RFs, such as Roy's maximum root, maximal canonical correlations [17] and Wilks' Lambda [6].

The work most closely related to this paper, and the one that we shall generalize, is [3] on the cross correlation RF. That paper treats two types of random field: the cross correlation RF $C(u,v,0)$, a special case of (1) with $t$ fixed (say at 0), and the homologous correlation RF $C(0,0,t)$, a special case of (1) with $u$ and $v$ fixed (say at 0). The existing results for the cross and homologous correlation RFs do not solve the problem of finding the P-value of the local maxima of (1). Therefore, extending the main tools developed in [3] is compelling.

The plan of the paper is the following. Section 2 states the notation as well as some preliminaries about RF theory. Section 3 derives expressions for the derivatives up to second order of the RF $C(u,v,t)$, and Section 4 obtains its EC densities. The analysis of simulations, as well as actual EEG data from a face recognition experiment, is presented in the last section.

2 Preliminaries

2.1 Notation

We shall use the notation $I_m$ for the identity matrix of order $m$, and $\mathrm{Normal}_m(\mu, \Sigma)$ for the multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$. If $X$ is a matrix, we denote $\mathrm{Var}(\mathrm{vec}(X))$ by $V_m$ when $\mathrm{Cov}(X_{ij}, X_{kl}) = \varepsilon(i,j,k,l) - \delta_{ij}\delta_{kl}$, where $\varepsilon(i,j,k,l)$ is symmetric in its arguments and $\delta_{ij} = 1$ if $i = j$ and 0 otherwise. Let $\mathrm{Wishart}_m(\nu, \Sigma)$ represent the Wishart distribution of an $m \times m$ matrix with expectation $\nu\Sigma$ and $\nu$ degrees of freedom, and write $\chi^2_\nu$ for the chi-square distribution with $\nu$ degrees of freedom. The symbol $\stackrel{D}{=}$ denotes equality in distribution.

For any vector we shall use the subscripts $j$, $|j$ and the superscript $|j$ to denote the $j$th component, the first $j$ components and the last $j$ components, respectively. In a similar way, for any matrix, the subscript $|ij$ denotes the submatrix of the first $i$ rows and first $j$ columns, and the superscript $|ij$ denotes the submatrix of the last $i$ rows and last $j$ columns. Combinations of subscripts and superscripts can then be used to denote submatrices. For any scalar $b$, let $b^+ = b$ if $b > 0$ and zero otherwise. For any symmetric matrix $B$, let $B^- = B$ if $B$ is negative definite and zero otherwise, let $\mathrm{detr}_j(B)$ be the sum of the determinants of all $j \times j$ principal minors of $B$, and define $\mathrm{detr}_0(B) = 1$. Finally, if $f(u,v)$ is a scalar function then

$$
\overset{u}{f} \equiv \frac{\partial f}{\partial u}, \qquad
\overset{v}{f} \equiv \frac{\partial f}{\partial v}, \qquad
\overset{uv}{f} \equiv \frac{\partial^2 f}{\partial u\,\partial v'}.
$$

2.2 The Geometry of Random Fields
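Since $\mathrm{detr}_j(\cdot)$ reappears in the intrinsic volumes below and throughout Lemmas 6-10, here is a direct, brute-force implementation of its definition. This is our own illustrative code, not from the paper:

```python
import itertools
import numpy as np

def detr(B, j):
    """Sum of the determinants of all j x j principal minors of B.

    Conventions: detr_0(B) = 1; detr_1(B) = tr(B); detr_m(B) = det(B)
    for an m x m matrix B.
    """
    if j == 0:
        return 1.0
    m = B.shape[0]
    total = 0.0
    for idx in itertools.combinations(range(m), j):
        sub = B[np.ix_(idx, idx)]        # principal minor on rows/cols idx
        total += np.linalg.det(sub)
    return total

B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(detr(B, 1), detr(B, 2))  # trace = 5, det = 2*3 - 1 = 5 (up to rounding)
```

The cost grows combinatorially in $j$, which is harmless for the small matrix orders appearing in the EC computations.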

Let $C = C(s)$, $s \in S \subset \mathbb{R}^D$, be a real-valued isotropic RF, where $S$ is a compact subset of $\mathbb{R}^D$ with a twice differentiable boundary $\partial S$. Denote the first derivative of $C$ by $\dot{C}$ and the $D \times D$ matrix of second-order partial derivatives of $C$ by $\ddot{C}$. The $d$-dimensional Euler characteristic (EC) density is defined as

$$
\rho_d(c) = E\big(\dot{C}_d^+ \det(-\ddot{C}_{|d-1}) \;\big|\; \dot{C}_{|d-1} = 0,\; C = c\big)\,\phi_{|d-1}(0, c), \qquad (2)
$$

where $E(\cdot)$ denotes mathematical expectation and $\phi_{|d-1}(\cdot,\cdot)$ is the joint probability density function of $\dot{C}_{|d-1}$ and $C$. Let $a_d = 2\pi^{d/2}/\Gamma(d/2)$ be the surface area of a unit $(d-1)$-sphere in $\mathbb{R}^d$. Define $\mu_D(S)$ to be the Lebesgue measure of $S$ and $\mu_d(S)$, $d = 0, \ldots, D-1$, to be the $d$-dimensional intrinsic volume or Minkowski functional

$$
\mu_d(S) = \frac{1}{a_{D-d}} \int_{\partial S} \mathrm{detr}_{D-1-d}(M)\, ds,
$$

with $M$ denoting the inside curvature matrix of $\partial S$ at the point $s$. The expected EC $\chi$ of the excursion set $A_c = \{s \in S : C(s) \ge c\}$ of $C$ above the threshold $c$ inside $S$ is given by

$$
E(\chi(A_c)) = \sum_{d=0}^{D} \mu_d(S)\,\rho_d(c), \qquad (3)
$$

where $\rho_0(c) = P(C \ge c)$. Then the P-value of the maximum of $C$ inside $S$ is well approximated by

$$
P\Big(\max_{s \in S} C \ge c\Big) \approx E(\chi(A_c)). \qquad (4)
$$
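To make the threshold machinery of (3)-(4) concrete, the sketch below solves $E(\chi(A_c)) = 0.05$ for $c$ in the simplest textbook case, a smooth isotropic unit-variance Gaussian RF on a solid ball, using the classical Gaussian EC densities (the field of this paper requires the densities of Theorem 4 instead; the spectral parameter and search-region radius are arbitrary illustrative values of ours):

```python
import math

def gaussian_ec_densities(c, lam):
    """EC densities rho_d(c), d = 0..3, of a unit-variance Gaussian RF
    with scalar second spectral moment lam (classical results for Gaussian
    fields; NOT the correlation-field densities of this paper)."""
    e = math.exp(-c * c / 2.0)
    return [
        0.5 * math.erfc(c / math.sqrt(2.0)),                     # rho_0 = P(Z >= c)
        lam ** 0.5 * e / (2.0 * math.pi),                        # rho_1
        lam * c * e / (2.0 * math.pi) ** 1.5,                    # rho_2
        lam ** 1.5 * (c * c - 1.0) * e / (2.0 * math.pi) ** 2,   # rho_3
    ]

def expected_ec(c, lam, mu):
    """Right-hand side of (3): sum_d mu_d(S) rho_d(c)."""
    return sum(m * r for m, r in zip(mu, gaussian_ec_densities(c, lam)))

# Intrinsic volumes mu_0..mu_3 of a ball of radius r (mm)
r = 62.0                                    # roughly a 1000 cc sphere
mu = [1.0, 4.0 * r, 2.0 * math.pi * r ** 2, 4.0 * math.pi * r ** 3 / 3.0]
lam = 4.0 * math.log(2.0) / 20.0 ** 2       # FWHM = 20 mm smoothing

# Solve E(chi(A_c)) = 0.05 for c by bisection; the expected EC drops
# below 0.05 exactly once on this bracket.
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if expected_ec(mid, lam, mu) > 0.05:
        lo = mid
    else:
        hi = mid
print(round(0.5 * (lo + hi), 3))   # threshold on the Gaussian z-scale
```

The same solve-for-the-root structure applies verbatim once `gaussian_ec_densities` is replaced by the $\rho_{ijk}$ of Lemma 1 and Theorem 4, with the triple product of intrinsic volumes in place of `mu`.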

3 Time-varying cross correlation field

The RF (1) can be more formally defined as

$$
C(u,v,t) = \frac{\mathbf{X}(u,t)'\mathbf{Y}(v,t)}
{\sqrt{\mathbf{X}(u,t)'\mathbf{X}(u,t)\;\mathbf{Y}(v,t)'\mathbf{Y}(v,t)}}, \qquad (5)
$$

where $\mathbf{X}(u,t) = (X_1(u,t), \ldots, X_\nu(u,t))'$ and $\mathbf{Y}(v,t) = (Y_1(v,t), \ldots, Y_\nu(v,t))'$, $u \in U \subset \mathbb{R}^M$, $v \in V \subset \mathbb{R}^N$, $t \in \mathcal{T} \subset \mathbb{R}^P$, are vectors of independent, identically distributed Gaussian RFs with mean 0 and variance 1. Notice that in this section we deal with the general case $P \ge 1$.

As in the case of the cross correlation RF in [3], the result (2) needs to be extended to the RF defined by (5). First, the boundary of $U \times V \times \mathcal{T}$ is not twice differentiable (even with the

assumption that the boundaries $\partial U$, $\partial V$ and $\partial \mathcal{T}$ are twice differentiable). Second, even if we assume that the field is isotropic in each of the variables $u$, $v$ or $t$ for every fixed value of the other two, the field is not isotropic in $(u,v,t)$. Therefore we cannot apply the results of the previous section, and we need a special result that is a straightforward extension of Lemma 3.1 of [3]:

Lemma 1 Suppose that a RF $C(u,v,t)$ is stationary in $(u,v,t)$ and isotropic in each of the three arguments for every fixed value of the other two, and satisfies the regularity conditions of Theorem 5.2.2 in (Adler, 1981). Let $A_c$ be the excursion set of $C$ defined as $A_c = \{u \in U,\; v \in V,\; t \in \mathcal{T} : C(u,v,t) \ge c\}$. Then

$$
E(\chi(A_c)) = \sum_{i=0}^{M} \sum_{j=0}^{N} \sum_{k=0}^{P} \mu_i(U)\,\mu_j(V)\,\mu_k(\mathcal{T})\,\rho_{ijk}(c),
$$

where $\rho_{ijk}(c)$ has the same expression as in (2) with the parameters of the RF restricted to the first $i$ components of $u$, the first $j$ components of $v$ and the first $k$ components of $t$.

The following lemma, which expresses the derivatives of a cross correlation RF in terms of independent variables, will also be useful in this section.

Lemma 2 (Lemma 4.2 in [3]) Let

$$
R(u,v) = \frac{\mathbf{X}(u)'\mathbf{Y}(v)}{\sqrt{\mathbf{X}(u)'\mathbf{X}(u)\;\mathbf{Y}(v)'\mathbf{Y}(v)}}
$$

be a cross correlation RF defined for $u \in \mathbb{R}^M$, $v \in \mathbb{R}^N$. Then its first two derivatives can be rewritten as

$$
\overset{u}{R} = (1-R^2)^{\frac12} a^{-\frac12} \Lambda_x^{\frac12} z_x, \qquad
\overset{v}{R} = (1-R^2)^{\frac12} b^{-\frac12} \Lambda_y^{\frac12} z_y,
$$
$$
\overset{uu}{R} = \Lambda_x^{\frac12}\big({-R}a^{-1} z_x z_x' - Ra^{-1} Q_x - (1-R^2)^{\frac12} a^{-1}(z_x w_x' + w_x z_x') + (1-R^2)^{\frac12} a^{-\frac12} H_x\big)\Lambda_x^{\frac12},
$$
$$
\overset{vv}{R} = \Lambda_y^{\frac12}\big({-R}b^{-1} z_y z_y' - Rb^{-1} Q_y - (1-R^2)^{\frac12} b^{-1}(z_y w_y' + w_y z_y') + (1-R^2)^{\frac12} b^{-\frac12} H_y\big)\Lambda_y^{\frac12},
$$
$$
\overset{uv}{R} = \Lambda_x^{\frac12}\big({-R}(ab)^{-\frac12} z_x z_y' + (ab)^{-\frac12} Q_{xy}\big)\Lambda_y^{\frac12},
$$

where $a, b \sim \chi^2_\nu$, $z_x, w_x \sim \mathrm{Normal}_M(0, I_M)$, $z_y, w_y \sim \mathrm{Normal}_N(0, I_N)$, $H_x \sim \mathrm{Normal}_{M \times M}(0, V_M)$, $H_y \sim \mathrm{Normal}_{N \times N}(0, V_N)$,

$$
Q = \begin{pmatrix} Q_x & Q_{xy} \\ Q_{yx} & Q_y \end{pmatrix} \sim \mathrm{Wishart}_{M+N}(I_{M+N}, \nu - 2),
$$

independently and independent of $R$, and $\Lambda_x$, $\Lambda_y$ are the variance matrices of the components of $\overset{u}{\mathbf{X}}$ and $\overset{v}{\mathbf{Y}}$, respectively.

Let

$$
\begin{pmatrix} \Lambda_x & 0 \\ 0 & \lambda_x I_P \end{pmatrix}
\qquad \text{and} \qquad
\begin{pmatrix} \Lambda_y & 0 \\ 0 & \lambda_y I_P \end{pmatrix}
$$

be the variance matrices of the components of $\overset{(u,t)}{\mathbf{X}}$ and $\overset{(v,t)}{\mathbf{Y}}$ in (5), respectively. Then the following lemma gives the first and second derivatives of the RF $C$ in terms of independent random variables.

Lemma 3 The first and second derivatives of the time-varying cross correlation RF (5) can be expressed as

$$
\overset{u}{C} = (1-C^2)^{\frac12} a^{-\frac12} \Lambda_x^{\frac12} z_1, \qquad
\overset{v}{C} = (1-C^2)^{\frac12} b^{-\frac12} \Lambda_y^{\frac12} z_2, \qquad
\overset{t}{C} = (1-C^2)^{\frac12} s_1^{\frac12} z_3,
$$
$$
\overset{uu}{C} = \Lambda_x^{\frac12}\big[{-C}a^{-1} z_1 z_1' - Ca^{-1} Q_x^{11} - (1-C^2)^{\frac12} a^{-1}(z_1 w_1' + w_1 z_1') + (1-C^2)^{\frac12} a^{-\frac12} H_x^{11}\big]\Lambda_x^{\frac12},
$$
$$
\overset{vv}{C} = \Lambda_y^{\frac12}\big[{-C}b^{-1} z_2 z_2' - Cb^{-1} Q_y^{11} - (1-C^2)^{\frac12} b^{-1}(z_2 w_2' + w_2 z_2') + (1-C^2)^{\frac12} b^{-\frac12} H_y^{11}\big]\Lambda_y^{\frac12},
$$
$$
\overset{uv}{C} = \Lambda_x^{\frac12}\big[{-C}(ab)^{-\frac12} z_1 z_2' + (ab)^{-\frac12} Q_{xy}^{11}\big]\Lambda_y^{\frac12},
$$
$$
\begin{aligned}
\overset{ut}{C} = {} & \Lambda_x^{\frac12}\big[{-C}a^{-1} s_1^{-\frac12} z_1 (a^{-\frac12} z_3 + b^{-\frac12} z_4)' - Ca^{-1} \lambda_x^{\frac12} Q_x^{12} \\
& \quad {} - (1-C^2)^{\frac12} a^{-1} s_1^{-\frac12}\big(z_1 (a^{-\frac12} w_3 + b^{-\frac12} w_4)' + w_1 (a^{-\frac12} z_3 + b^{-\frac12} z_4)'\big) \\
& \quad {} + (1-C^2)^{\frac12} a^{-\frac12} \lambda_x^{\frac12} H_x^{12} - C(ab)^{-\frac12} s_1^{-\frac12} z_1 (b^{-\frac12} z_3 - a^{-\frac12} z_4)'\big]
+ \Lambda_x^{\frac12} (ab)^{-\frac12} Q_{xy}^{12} \lambda_y^{\frac12},
\end{aligned}
$$
$$
\begin{aligned}
\overset{vt}{C} = {} & \Lambda_y^{\frac12}\big[{-C}b^{-1} s_1^{-\frac12} z_2 (b^{-\frac12} z_3 - a^{-\frac12} z_4)' - Cb^{-1} \lambda_y^{\frac12} Q_y^{12} \\
& \quad {} - (1-C^2)^{\frac12} b^{-1} s_1^{-\frac12}\big(z_2 (b^{-\frac12} w_3 - a^{-\frac12} w_4)' + w_2 (b^{-\frac12} z_3 - a^{-\frac12} z_4)'\big) \\
& \quad {} + (1-C^2)^{\frac12} b^{-\frac12} \lambda_y^{\frac12} H_y^{12} - C(ab)^{-\frac12} s_1^{-\frac12} z_2 (a^{-\frac12} z_3 + b^{-\frac12} z_4)'\big]
+ \Lambda_y^{\frac12} (ab)^{-\frac12} Q_{yx}^{12} \lambda_x^{\frac12},
\end{aligned}
$$
$$
\begin{aligned}
\overset{tt}{C} = {} & {-C}a^{-1} s_1^{-1}(a^{-\frac12} z_3 + b^{-\frac12} z_4)(a^{-\frac12} z_3 + b^{-\frac12} z_4)' - Ca^{-1} \lambda_x Q_x^{22}
+ (ab)^{-\frac12} (\lambda_x \lambda_y)^{\frac12} \big[Q_{xy}^{22} + Q_{yx}^{22}\big] \\
& {} - (1-C^2)^{\frac12}\big[s_2^{\frac12}(z_3 w_3' + w_3 z_3' + z_4 w_4' + w_4 z_4') + s_4 (z_3 w_4' + w_4 z_3')\big]
+ (1-C^2)^{\frac12} a^{-\frac12} \lambda_x H_x^{22} \\
& {} - C(ab)^{-\frac12} s_1^{-1}\big[(a^{-\frac12} z_3 + b^{-\frac12} z_4)(b^{-\frac12} z_3 - a^{-\frac12} z_4)' + (b^{-\frac12} z_3 - a^{-\frac12} z_4)(a^{-\frac12} z_3 + b^{-\frac12} z_4)'\big] \\
& {} - Cb^{-1} s_1^{-1}(b^{-\frac12} z_3 - a^{-\frac12} z_4)(b^{-\frac12} z_3 - a^{-\frac12} z_4)' - Cb^{-1} \lambda_y Q_y^{22}
+ (1-C^2)^{\frac12} b^{-\frac12} \lambda_y H_y^{22},
\end{aligned}
$$

where

$$
s_1 = \frac{\lambda_x}{a} + \frac{\lambda_y}{b}, \qquad
s_2 = \frac{\lambda_x \lambda_y}{ab}, \qquad
s_4 = \frac{\lambda_x}{a} - \frac{\lambda_y}{b},
$$

and

$$
a, b \sim \chi^2_\nu, \quad z_1, w_1 \sim \mathrm{Normal}_M(0, I_M), \quad z_2, w_2 \sim \mathrm{Normal}_N(0, I_N), \quad z_3, z_4, w_3, w_4 \sim \mathrm{Normal}_P(0, I_P),
$$
$$
\begin{pmatrix} H_x^{11} & H_x^{12} \\ H_x^{21} & H_x^{22} \end{pmatrix} \sim \mathrm{Normal}_{(M+P)\times(M+P)}(0, V_{M+P}), \qquad
\begin{pmatrix} H_y^{11} & H_y^{12} \\ H_y^{21} & H_y^{22} \end{pmatrix} \sim \mathrm{Normal}_{(N+P)\times(N+P)}(0, V_{N+P}),
$$
$$
Q = \begin{pmatrix}
Q_x^{11} & Q_x^{12} & Q_{xy}^{11} & Q_{xy}^{12} \\
Q_x^{21} & Q_x^{22} & Q_{xy}^{21} & Q_{xy}^{22} \\
Q_{yx}^{11} & Q_{yx}^{12} & Q_y^{11} & Q_y^{12} \\
Q_{yx}^{21} & Q_{yx}^{22} & Q_y^{21} & Q_y^{22}
\end{pmatrix} \sim \mathrm{Wishart}_{M+N+2P}(I_{M+N+2P}, \nu - 2),
$$

independently and independent of $C$.

Proof. The RF $C$ defined in (5) can be rewritten as

$$
C = C(p,q) = \frac{\mathbf{X}(p)'\mathbf{Y}(q)}{\sqrt{\mathbf{X}(p)'\mathbf{X}(p)\;\mathbf{Y}(q)'\mathbf{Y}(q)}},
$$

where $p(u,v,t) := (u,t)$ and $q(u,v,t) := (v,t)$. Hence, by simple rules of differential calculus,

$$
\overset{u}{C} = \big(\overset{p}{C}\big)_{|M}, \qquad
\overset{v}{C} = \big(\overset{q}{C}\big)_{|N}, \qquad
\overset{t}{C} = \big(\overset{p}{C}\big)^{|P} + \big(\overset{q}{C}\big)^{|P}.
$$

Since $C(p,q)$ can be seen as a cross correlation RF defined on $\mathbb{R}^{M+P} \times \mathbb{R}^{N+P}$, Lemma 2 gives

$$
\overset{u}{C} = (1-C^2)^{\frac12} a^{-\frac12} \Lambda_x^{\frac12} z_x^1, \qquad
\overset{v}{C} = (1-C^2)^{\frac12} b^{-\frac12} \Lambda_y^{\frac12} z_y^1, \qquad
\overset{t}{C} = (1-C^2)^{\frac12} \big(a^{-\frac12} \lambda_x^{\frac12} z_x^2 + b^{-\frac12} \lambda_y^{\frac12} z_y^2\big),
$$

and the first result follows by defining $z_1 := z_x^1 \sim \mathrm{Normal}_M(0, I_M)$, $z_2 := z_y^1 \sim \mathrm{Normal}_N(0, I_N)$ and

$$
z_3 := s_1^{-\frac12}\big(a^{-\frac12} \lambda_x^{\frac12} z_x^2 + b^{-\frac12} \lambda_y^{\frac12} z_y^2\big) \sim \mathrm{Normal}_P(0, I_P),
\qquad \text{with } s_1 = \frac{\lambda_x}{a} + \frac{\lambda_y}{b}.
$$

In a similar way,

$$
\overset{uu}{C} = \big(\overset{pp}{C}\big)_{|MM}, \qquad
\overset{vv}{C} = \big(\overset{qq}{C}\big)_{|NN}, \qquad
\overset{uv}{C} = \big(\overset{pq}{C}\big)_{|MN},
$$
$$
\overset{ut}{C} = \big(\overset{pp}{C}\big)_{|M}^{|P} + \big(\overset{pq}{C}\big)_{|M}^{|P}, \qquad
\overset{vt}{C} = \big(\overset{qp}{C}\big)_{|N}^{|P} + \big(\overset{qq}{C}\big)_{|N}^{|P},
$$
$$
\overset{tt}{C} = \big(\overset{pp}{C}\big)^{|PP} + \big(\overset{pq}{C}\big)^{|PP} + \big(\overset{qp}{C}\big)^{|PP} + \big(\overset{qq}{C}\big)^{|PP}.
$$

Thus, by Lemma 2,

$$
\overset{uu}{C} = \Lambda_x^{\frac12}\big[{-C}a^{-1} z_1 z_1' - Ca^{-1} Q_x^{11} - (1-C^2)^{\frac12} a^{-1}(z_1 w_1' + w_1 z_1') + (1-C^2)^{\frac12} a^{-\frac12} H_x^{11}\big]\Lambda_x^{\frac12},
$$
$$
\overset{vv}{C} = \Lambda_y^{\frac12}\big[{-C}b^{-1} z_2 z_2' - Cb^{-1} Q_y^{11} - (1-C^2)^{\frac12} b^{-1}(z_2 w_2' + w_2 z_2') + (1-C^2)^{\frac12} b^{-\frac12} H_y^{11}\big]\Lambda_y^{\frac12},
$$
$$
\overset{uv}{C} = \Lambda_x^{\frac12}\big[{-C}(ab)^{-\frac12} z_1 z_2' + (ab)^{-\frac12} Q_{xy}^{11}\big]\Lambda_y^{\frac12},
$$

where $w_1 := w_x^1 \sim \mathrm{Normal}_M(0, I_M)$ and $w_2 := w_y^1 \sim \mathrm{Normal}_N(0, I_N)$ are distributed independently, and independently of $z_1$, $z_2$ and $z_3$. Furthermore,

$$
\overset{ut}{C} = \Lambda_x^{\frac12}\big[{-C}a^{-1} z_1 (z_x^2)' - Ca^{-1} Q_x^{12} - (1-C^2)^{\frac12} a^{-1}\big(z_1 (w_x^2)' + w_1 (z_x^2)'\big) + (1-C^2)^{\frac12} a^{-\frac12} H_x^{12}\big]\lambda_x^{\frac12}
+ \Lambda_x^{\frac12}\big[{-C}(ab)^{-\frac12} z_1 (z_y^2)' + (ab)^{-\frac12} Q_{xy}^{12}\big]\lambda_y^{\frac12},
$$

$$
\overset{vt}{C} = \Lambda_y^{\frac12}\big[{-C}b^{-1} z_2 (z_y^2)' - Cb^{-1} Q_y^{12} - (1-C^2)^{\frac12} b^{-1}\big(z_2 (w_y^2)' + w_2 (z_y^2)'\big) + (1-C^2)^{\frac12} b^{-\frac12} H_y^{12}\big]\lambda_y^{\frac12}
+ \Lambda_y^{\frac12}\big[{-C}(ab)^{-\frac12} z_2 (z_x^2)' + (ab)^{-\frac12} Q_{yx}^{12}\big]\lambda_x^{\frac12},
$$

$$
\begin{aligned}
\overset{tt}{C} = {} & \lambda_x^{\frac12}\big[{-C}a^{-1} z_x^2 (z_x^2)' - Ca^{-1} Q_x^{22} - (1-C^2)^{\frac12} a^{-1}\big(z_x^2 (w_x^2)' + w_x^2 (z_x^2)'\big) + (1-C^2)^{\frac12} a^{-\frac12} H_x^{22}\big]\lambda_x^{\frac12} \\
& {} + \lambda_x^{\frac12}\big[{-C}(ab)^{-\frac12} z_x^2 (z_y^2)' + (ab)^{-\frac12} Q_{xy}^{22}\big]\lambda_y^{\frac12}
+ \lambda_y^{\frac12}\big[{-C}(ab)^{-\frac12} z_y^2 (z_x^2)' + (ab)^{-\frac12} Q_{yx}^{22}\big]\lambda_x^{\frac12} \\
& {} + \lambda_y^{\frac12}\big[{-C}b^{-1} z_y^2 (z_y^2)' - Cb^{-1} Q_y^{22} - (1-C^2)^{\frac12} b^{-1}\big(z_y^2 (w_y^2)' + w_y^2 (z_y^2)'\big) + (1-C^2)^{\frac12} b^{-\frac12} H_y^{22}\big]\lambda_y^{\frac12},
\end{aligned}
$$

and the expressions for the second derivatives follow by defining

$$
z_4 := s_1^{-\frac12}\big(b^{-\frac12} \lambda_y^{\frac12} z_x^2 - a^{-\frac12} \lambda_x^{\frac12} z_y^2\big) \sim \mathrm{Normal}_P(0, I_P),
$$
$$
w_3 := s_1^{-\frac12}\big(a^{-\frac12} \lambda_x^{\frac12} w_x^2 + b^{-\frac12} \lambda_y^{\frac12} w_y^2\big) \sim \mathrm{Normal}_P(0, I_P),
$$
$$
w_4 := s_1^{-\frac12}\big(b^{-\frac12} \lambda_y^{\frac12} w_x^2 - a^{-\frac12} \lambda_x^{\frac12} w_y^2\big) \sim \mathrm{Normal}_P(0, I_P),
$$

which are distributed independently, and independently of $z_1$, $z_2$, $z_3$, $w_1$, $w_2$, and noting that

$$
a^{-1} \lambda_x \big[z_x^2 (w_x^2)' + w_x^2 (z_x^2)'\big] + b^{-1} \lambda_y \big[z_y^2 (w_y^2)' + w_y^2 (z_y^2)'\big]
= s_2^{\frac12}(z_3 w_3' + w_3 z_3' + z_4 w_4' + w_4 z_4') + s_4 (z_3 w_4' + w_4 z_3'),
$$

where $s_2 = \lambda_x \lambda_y/(ab)$ and $s_4 = \lambda_x/a - \lambda_y/b$.

4 EC densities for the time-varying cross correlation field

We shall only derive the EC densities for $P = 1$, which is the case of most interest for the practical applications presented in the next section. A general result for arbitrary $P$ requires much longer calculations, which, in order to keep the exposition clear, are deferred to the next subsection.

According to (2), the main task in this section is the evaluation of $E(\det(-\ddot{C}_{|D-1}) \mid \dot{C}_{|D-1} = 0, C = c, a, b)$, which must be combined with $E(\dot{C}_D^+ \mid C = c, a, b)$ and $\phi_{|D-1}(0, c \mid a, b)$ to get the final result by taking expectations over $a$ and $b$. This final operation requires a good deal of algebra and very long computations. However, some simplifications hold. Indeed, $\rho_{M,N,0}(c) = \rho^C_{M,N}(c)$, where $\rho^C_{M,N}(c)$ denotes the EC density of the classical cross correlation RF in [3]. In a similar way, $\rho_{0,0,1}(c) = \rho^H_1(c)$, where $\rho^H_k(c)$ denotes the EC density of the homologous correlation RF defined in [3]. Thus, we only need to calculate the EC densities $\rho_{M,N,1}(c)$ for $M + N > 0$. To do so, we shall need Lemma A.4 in [3]. Correspondingly, the proof of the following theorem is very similar to that of Theorem 5.1 in [3].

Theorem 4 Let $D = M + N + 1$. For $\nu > D$, $N > 0$, the EC densities of the RF $C$ are given by

$$
\begin{aligned}
\rho_{M,N,1}(c) = {} & \det(\Lambda_x)^{\frac12} \det(\Lambda_y)^{\frac12}\,
\frac{2^{\nu-2-D} M!\,N!}{\pi^{\frac{D}{2}+1}}
\sum_{i=0}^{[\frac{M}{2}]} \sum_{j=0}^{[\frac{N}{2}]} \sum_{k=0}^{N-2j}
(-1)^{i+j+k}\, c^{D-1-2(i+j+k)} (1-c^2)^{\frac{\nu-D-1}{2}+i+j+k} \\
& \times \frac{I_{i,j}(\lambda_x, \lambda_y)\,\Gamma(\nu + i + j - \frac{D}{2})}
{i!\,j!\,k!\,(\nu-1-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!},
\end{aligned}
$$

where

$$
I_{i,j}(\lambda_x, \lambda_y) = \int_0^1 z^{\frac{\nu-M-1}{2}+i} (1-z)^{\frac{\nu-N-1}{2}+j} \big(z\lambda_y + (1-z)\lambda_x\big)^{\frac12}\, dz.
$$

Proof. It is sufficient to prove the result for $\Lambda_x = I_M$, $\Lambda_y = I_N$; the result for arbitrary $\Lambda_x$ and $\Lambda_y$ follows by a linear transformation of the coordinates $u$ and $v$. From Lemma 3,

$$
E(\dot{C}_D^+ \mid C = c, a, b) = (2\pi)^{-\frac12} (1-c^2)^{\frac12} s_1^{\frac12}
= (2\pi)^{-\frac12} (1-c^2)^{\frac12} (ab)^{-\frac12} (a\lambda_y + b\lambda_x)^{\frac12} \qquad (6)
$$

and

$$
\phi_{|D-1}(0, c \mid a, b) = \frac{\Gamma(\frac{\nu}{2})}{\Gamma(\frac{\nu-1}{2})}\,
2^{-\frac{D-1}{2}} \pi^{-\frac{D}{2}} (1-c^2)^{-\frac{D-\nu+2}{2}}\, a^{\frac{M}{2}} b^{\frac{N}{2}}. \qquad (7)
$$

From Lemma 3 we also find that, conditional on $\dot{C}_{|D-1} = 0$, $C = c$, $a$, $b$,

$$
\ddot{C}_{|D-1} =
\begin{pmatrix} -ca^{-1} Q_x^{11} & (ab)^{-\frac12} Q_{xy}^{11} \\ (ab)^{-\frac12} Q_{yx}^{11} & -cb^{-1} Q_y^{11} \end{pmatrix}
+ (1-c^2)^{\frac12}
\begin{pmatrix} a^{-\frac12} H_x^{11} & 0 \\ 0 & b^{-\frac12} H_y^{11} \end{pmatrix},
$$

independent of $\dot{C}_D$. Then, by Lemma 11,

$$
\begin{aligned}
E\big(\det(-\ddot{C}_{|D-1}) \mid \dot{C}_{|D-1} = 0, C = c, a, b\big)
= {} & \sum_{i=0}^{[\frac{M}{2}]} \sum_{j=0}^{[\frac{N}{2}]} \sum_{k=0}^{N-2j}
(-1)^{i+j+k}\, 2^{-(i+j)}\, c^{D-1-2(i+j+k)} (1-c^2)^{i+j+k} \qquad (8) \\
& \times a^{-M+i} b^{-N+j}\,
\frac{(\nu-2)!\,M!\,N!}{i!\,j!\,k!\,(\nu-1-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!}.
\end{aligned}
$$

Using (6), (7) and (8) together we obtain

$$
\begin{aligned}
E\big(\dot{C}_D^+ \det(-\ddot{C}_{|D-1}) \mid C = c, a, b\big)\,\phi_{|D-1}(0, c \mid a, b)
= {} & \frac{\Gamma(\frac{\nu}{2})}{\Gamma(\frac{\nu-1}{2})}\, 2^{-\frac{D}{2}} \pi^{-\frac{D+1}{2}}
\sum_{i=0}^{[\frac{M}{2}]} \sum_{j=0}^{[\frac{N}{2}]} \sum_{k=0}^{N-2j} (-1)^{i+j+k}\, 2^{-(i+j)} \\
& \times c^{D-1-2(i+j+k)} (1-c^2)^{\frac{\nu-D-1}{2}+i+j+k}\,
a^{-\frac{M+1}{2}+i}\, b^{-\frac{N+1}{2}+j}\, (a\lambda_y + b\lambda_x)^{\frac12} \\
& \times \frac{(\nu-2)!\,M!\,N!}{i!\,j!\,k!\,(\nu-1-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!}.
\end{aligned}
$$

In order to take expectations over $a$ and $b$ in the expression above it is convenient to make the change of variables $w = a + b$, $z = a/(a+b)$, which are distributed independently as $\chi^2_{2\nu}$ and $\mathrm{Beta}(\frac{\nu}{2}, \frac{\nu}{2})$, respectively. Hence

$$
\begin{aligned}
E\big(\dot{C}_D^+ \det(-\ddot{C}_{|D-1}) \mid C = c, w, z\big)\,\phi_{|D-1}(0, c \mid w, z)
= {} & \frac{\Gamma(\frac{\nu}{2})}{\Gamma(\frac{\nu-1}{2})}\, 2^{-\frac{D}{2}} \pi^{-\frac{D+1}{2}}
\sum_{i=0}^{[\frac{M}{2}]} \sum_{j=0}^{[\frac{N}{2}]} \sum_{k=0}^{N-2j} (-1)^{i+j+k}\, 2^{-(i+j)} \\
& \times c^{D-1-2(i+j+k)} (1-c^2)^{\frac{\nu-D-1}{2}+i+j+k}\,
z^{-\frac{M+1}{2}+i} (1-z)^{-\frac{N+1}{2}+j} \big(z\lambda_y + (1-z)\lambda_x\big)^{\frac12} \\
& \times w^{-\frac{M+N+1}{2}+i+j}\,
\frac{(\nu-2)!\,M!\,N!}{i!\,j!\,k!\,(\nu-1-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!}.
\end{aligned}
$$

Since $E(w^l) = 2^l \Gamma(\nu+l)/\Gamma(\nu)$, taking expectations over $w$ yields

$$
\begin{aligned}
E\big(\dot{C}_D^+ \det(-\ddot{C}_{|D-1}) \mid C = c, z\big)\,\phi_{|D-1}(0, c \mid z)
= {} & \frac{\Gamma(\frac{\nu}{2})}{\Gamma(\frac{\nu-1}{2})\,\Gamma(\nu)}\, 2^{-D} \pi^{-\frac{D+1}{2}}
\sum_{i=0}^{[\frac{M}{2}]} \sum_{j=0}^{[\frac{N}{2}]} \sum_{k=0}^{N-2j} (-1)^{i+j+k} \\
& \times c^{D-1-2(i+j+k)} (1-c^2)^{\frac{\nu-D-1}{2}+i+j+k}\,
z^{-\frac{M+1}{2}+i} (1-z)^{-\frac{N+1}{2}+j} \big(z\lambda_y + (1-z)\lambda_x\big)^{\frac12} \\
& \times \frac{(\nu-2)!\,M!\,N!\,\Gamma(\nu+i+j-\frac{D}{2})}{i!\,j!\,k!\,(\nu-1-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!},
\end{aligned}
$$

and the result follows from the factorization $l! = 2^l\,\Gamma(\frac{l+2}{2})\,\Gamma(\frac{l+1}{2})/\pi^{\frac12}$.

Corollary 5 For $\lambda_x = \lambda_y = \lambda$,

$$
\begin{aligned}
\rho_{M,N,1}(c) = {} & \det(\Lambda_x)^{\frac12} \det(\Lambda_y)^{\frac12}\,\lambda^{\frac12}\,
\frac{2^{\nu-2-D} M!\,N!}{\pi^{\frac{D}{2}+1}}
\sum_{i=0}^{[\frac{M}{2}]} \sum_{j=0}^{[\frac{N}{2}]} \sum_{k=0}^{N-2j}
(-1)^{i+j+k}\, c^{D-1-2(i+j+k)} (1-c^2)^{\frac{\nu-D-1}{2}+i+j+k} \\
& \times \frac{\Gamma(\nu+i+j-\frac{D}{2})\,\Gamma(\frac{\nu-M+1}{2}+i)\,\Gamma(\frac{\nu-N+1}{2}+j)}
{\Gamma(\nu+i+j-\frac{D-3}{2})\,i!\,j!\,k!\,(\nu-1-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!}.
\end{aligned}
$$

4.1 EC densities in higher dimensions

The theorem above deals with the case $P = 1$, which was obtained by following almost the same reasoning as [3]. Although this is perhaps the most important situation from the application point of view, studying the more general case $P > 1$ is also an appealing challenge. It could find applications, for instance, in detecting time-frequency-varying connectivity in EEG data. Hence, dealing with the case $P > 1$ is the aim of this subsection.

As above, the main task is the evaluation of $E(\det(-\ddot{C}) \mid \dot{C} = 0, R = r, a, c)$. It can be seen from Lemma 3 that

$$
\ddot{C} \mid (\dot{C} = 0) = \widetilde{Q} + (1-R^2)^{\frac12} \widetilde{H} + \widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}',
$$

where

$$
\widetilde{H} = \begin{pmatrix}
\beta_1 H_x^{11} & 0 & \beta_1 H_x^{12} \\
0 & \beta_2 H_y^{11} & \beta_2 H_y^{12} \\
\beta_1 H_x^{21} & \beta_2 H_y^{21} & \beta_1 H_x^{22} + \beta_2 H_y^{22}
\end{pmatrix},
$$
$$
\widetilde{Q} = \begin{pmatrix}
\alpha_1 Q_x^{11} & \alpha_2 Q_{xy}^{11} & \alpha_1 Q_x^{12} + \alpha_2 Q_{xy}^{12} \\
\alpha_2 Q_{yx}^{11} & \alpha_3 Q_y^{11} & \alpha_3 Q_y^{12} + \alpha_2 Q_{yx}^{12} \\
\alpha_1 Q_x^{21} + \alpha_2 Q_{yx}^{21} & \alpha_3 Q_y^{21} + \alpha_2 Q_{xy}^{21} & \alpha_1 Q_x^{22} + \alpha_3 Q_y^{22} + \alpha_2 (Q_{xy}^{22} + Q_{yx}^{22})
\end{pmatrix},
$$
$$
\widetilde{z} = (0, 0, z_4)', \qquad \widetilde{w} = \big(\beta_2 w_1,\; -\beta_1 w_2,\; -(1-R^2)\alpha_2 w_4\big)',
$$

and $\alpha_1 = -Ra^{-1}$, $\alpha_2 = (ac)^{-\frac12}$, $\alpha_3 = -Rc^{-1}$, $\beta_1 = (1-R^2)^{\frac12} a^{-\frac12}$, $\beta_2 = (1-R^2)^{\frac12} c^{-\frac12}$. Hence, the computation of $E(\det(-\ddot{C}_{|D-1}) \mid \dot{C}_{|D-1} = 0, C = c, a, b)$, $D = M + N + P$, shall be obtained by means of the following series of lemmas. Henceforth, division by either the factorial of a negative integer or the factorial of a non-integer number is treated as multiplication by zero.

Lemma 6 Let $V_1, V_2 \sim \mathrm{Wishart}_P(I_P, \xi)$ independently, let $A$ be a fixed $P \times P$ symmetric matrix, and let $e_1$, $e_2$ be fixed scalars satisfying $e_1 + e_2 = s_1$ and $e_1 e_2 = s_2$. Then

$$
\begin{aligned}
E\big(\det(e_1 V_1 + e_2 V_2 + A)\big) = {} & \sum_{k=0}^{P} \mathrm{detr}_k(A)
\sum_{l=0}^{[\frac{P-k}{2}]} \sum_{q=0}^{[\frac{P-k}{2}]-l}
(1 + \delta_{P-k,2l})^{-1} (P-k)!\, \binom{\xi}{l} \binom{\xi}{P-k-l} \binom{P-k-2l}{2q} \\
& \times s_2^l\, 2^{-(P-k-2l-1)} (s_1^2 - 2s_2)^q\, s_1^{P-k-2l-2q}.
\end{aligned}
$$

Proof. Since $e_1 V_1 + e_2 V_2 \stackrel{D}{=} U'(e_1 V_1 + e_2 V_2)U$ for any orthonormal matrix $U$, it is easily obtained that

$$
E\big(\det(e_1 V_1 + e_2 V_2 + A)\big) = \sum_{k=0}^{P} \mathrm{detr}_k(A)\, E\big(\det(e_1 (V_1)_{P-k} + e_2 (V_2)_{P-k})\big),
$$

where $(V_1)_{P-k}$ and $(V_2)_{P-k}$ denote any $(P-k) \times (P-k)$ principal minors of $V_1$ and $V_2$, respectively. The proof concludes by applying Lemma 13.

Lemma 7 Let $G \sim \mathrm{Wishart}_{N+2P}(I_{N+2P}, \eta)$ be a $3 \times 3$ block matrix whose diagonal blocks $G^{11}$, $G^{22}$ and $G^{33}$ are square matrices of dimensions $P$, $N$ and $P$, respectively. Let $\alpha_1$, $\alpha_2$ and $\alpha_3$ be fixed scalars. Define

$$
\widetilde{G} = \begin{pmatrix} \alpha_3 G^{22} & \alpha_3 G^{23} + \alpha_2 G^{21} \\ \alpha_3 G^{32} + \alpha_2 G^{12} & \alpha_1 G^{11} + \alpha_3 G^{33} + \alpha_2 (G^{13} + G^{31}) \end{pmatrix}.
$$

Then

$$
\begin{aligned}
E\big(\det(\widetilde{G})\big) = {} & \alpha_3^N\, \frac{\eta!\,P!}{(\eta-N)!}
\sum_{k=0}^{P} \sum_{l=0}^{[\frac{P-k}{2}]} \sum_{q=0}^{[\frac{P-k}{2}]-l}
(\alpha_1 \alpha_4)^k \binom{N}{k} \binom{\eta-N}{l} \binom{\eta-N}{P-k-l} \binom{P-k-2l}{2q}\,
s_2^l\, 2^{-(P-k-2l-1)} \\
& \times (1 + \delta_{P-k,2l})^{-1} (s_1^2 - 2s_2)^q\, s_1^{P-k-2l-2q}.
\end{aligned}
$$

Proof. Let $F_*$ and $F$ be $2 \times 2$ block matrices defined as

$$
F_* := \begin{pmatrix} G^{12} \\ G^{32} \end{pmatrix} (G^{22})^{-1} \begin{pmatrix} G^{21} & G^{23} \end{pmatrix},
\qquad
F := \begin{pmatrix} G^{11} & G^{13} \\ G^{31} & G^{33} \end{pmatrix}
- \begin{pmatrix} G^{12} \\ G^{32} \end{pmatrix} (G^{22})^{-1} \begin{pmatrix} G^{21} & G^{23} \end{pmatrix},
$$

which are distributed independently as $\mathrm{Wishart}_{2P}(I_{2P}, N)$ and $\mathrm{Wishart}_{2P}(I_{2P}, \eta - N)$, respectively. It is easy to check that

$$
\det(\widetilde{G}) = \det(\widetilde{G}^{11})\, \det\big(\widetilde{G}^{22} - \widetilde{G}^{21} (\widetilde{G}^{11})^{-1} \widetilde{G}^{12}\big)
= \alpha_3^N \det(G^{22})\, \det\big(\alpha_1 F^{11} + \alpha_3 F^{22} + \alpha_2 (F^{12} + F^{21}) + \alpha_1 \alpha_4 F_*^{11}\big), \qquad (9)
$$

where $\alpha_4 = 1 - \alpha_2^2/(\alpha_1 \alpha_3)$. Let $s_1 := \alpha_1 + \alpha_3$, $s_2 := \alpha_2^2$, and let $e_1$, $e_2$ be scalars satisfying $e_1 + e_2 = s_1$ and $e_1 e_2 = \alpha_1 \alpha_3 - s_2$. Then

$$
\alpha_1 F^{11} + \alpha_3 F^{22} + \alpha_2 (F^{12} + F^{21}) \stackrel{D}{=} e_1 V_1 + e_2 V_2,
$$

where $V_1, V_2 \sim \mathrm{Wishart}_P(I_P, \eta - N)$ independently. Therefore

$$
E\big(\det(\widetilde{G})\big) = \alpha_3^N\, E\big(\det(G^{22})\, \det(e_1 V_1 + e_2 V_2 + \alpha_1 \alpha_4 F_*^{11})\big).
$$

Then, by applying Lemma 6 with $\xi = \eta - N$ and $A = \alpha_1 \alpha_4 F_*^{11}$, it is obtained that

$$
\begin{aligned}
E\big(\det(\widetilde{G})\big) = {} & \alpha_3^N\, \frac{\eta!}{(\eta-N)!}
\sum_{k=0}^{P} \frac{N!}{(N-k)!}\, (\alpha_1 \alpha_4)^k\, (P-k)!
\sum_{l=0}^{[\frac{P-k}{2}]} \sum_{q=0}^{[\frac{P-k}{2}]-l}
\binom{\eta-N}{l} \binom{\eta-N}{P-k-l} \binom{P-k-2l}{2q} \\
& \times s_2^l\, 2^{-(P-k-2l-1)} (1 + \delta_{P-k,2l})^{-1} (s_1^2 - 2s_2)^q\, s_1^{P-k-2l-2q}, \qquad (10)
\end{aligned}
$$

and the result follows after proper simplifications.

Lemma 8 Let $Q \sim \mathrm{Wishart}_{M+N+2P}(I_{M+N+2P}, \nu)$ be the $2 \times 2$ block matrix

$$
Q = \begin{pmatrix} Q_x & Q_{xy} \\ Q_{yx} & Q_y \end{pmatrix},
$$

where in turn the diagonal blocks $Q_x$ and $Q_y$ are square $2 \times 2$ block matrices of dimensions $M + P$ and $N + P$, respectively. Let $\alpha_1$, $\alpha_2$ and $\alpha_3$ be fixed scalars. Define

$$
\widetilde{Q} = \begin{pmatrix}
\alpha_1 Q_x^{11} & \alpha_2 Q_{xy}^{11} & \alpha_1 Q_x^{12} + \alpha_2 Q_{xy}^{12} \\
\alpha_2 Q_{yx}^{11} & \alpha_3 Q_y^{11} & \alpha_3 Q_y^{12} + \alpha_2 Q_{yx}^{12} \\
\alpha_1 Q_x^{21} + \alpha_2 Q_{yx}^{21} & \alpha_3 Q_y^{21} + \alpha_2 Q_{xy}^{21} & \alpha_1 Q_x^{22} + \alpha_3 Q_y^{22} + \alpha_2 (Q_{xy}^{22} + Q_{yx}^{22})
\end{pmatrix}.
$$

Then

$$
\begin{aligned}
E\big(\det(\widetilde{Q})\big) = {} & \alpha_1^M \alpha_3^N\, \frac{\nu!\,N!\,P!}{(\nu-M)!}
\sum_{n=0}^{N} \sum_{p=0}^{P} \sum_{u=0}^{P-p}
\alpha_3^p \alpha_1^u \alpha_4^{n+p+u}\,
\binom{M}{n}\binom{M}{p}\binom{\nu-M}{N-n}\binom{N-n}{u}
\sum_{l=0}^{[\frac{P-p-u}{2}]} \sum_{q=0}^{[\frac{P-p-u}{2}]-l} \\
& \times \binom{\nu-M-N+n}{l}\binom{\nu-M-N+n}{P-p-u-l}\binom{P-p-u-2l}{2q}\,
s_2^l\, 2^{-(P-p-u-2l-1)} (1 + \delta_{P-p-u,2l})^{-1} (s_1^2 - 2s_2)^q\, s_1^{P-p-u-2l-2q}.
\end{aligned}
$$

Proof. Let $G_*$ and $G$ be $3 \times 3$ block matrices defined as

$$
G_* := \begin{pmatrix} Q_x^{21} \\ Q_{yx}^{11} \\ Q_{yx}^{21} \end{pmatrix} (Q_x^{11})^{-1} \begin{pmatrix} Q_x^{12} & Q_{xy}^{11} & Q_{xy}^{12} \end{pmatrix},
\qquad
G := \begin{pmatrix} Q_x^{22} & Q_{xy}^{21} & Q_{xy}^{22} \\ Q_{yx}^{12} & Q_y^{11} & Q_y^{12} \\ Q_{yx}^{22} & Q_y^{21} & Q_y^{22} \end{pmatrix} - G_*,
$$

which are distributed independently as $\mathrm{Wishart}_{N+2P}(I_{N+2P}, M)$ and $\mathrm{Wishart}_{N+2P}(I_{N+2P}, \nu - M)$, respectively. It can be checked that

$$
E(\det(\widetilde{Q}))
= E\left(\det(\widetilde{Q}^{11})\, \det\left(\begin{pmatrix} \widetilde{Q}^{22} & \widetilde{Q}^{23} \\ \widetilde{Q}^{32} & \widetilde{Q}^{33} \end{pmatrix}
- \begin{pmatrix} \widetilde{Q}^{21} \\ \widetilde{Q}^{31} \end{pmatrix} (\widetilde{Q}^{11})^{-1} \begin{pmatrix} \widetilde{Q}^{12} & \widetilde{Q}^{13} \end{pmatrix}\right)\right)
= \alpha_1^M\, E\big(\det(Q_x^{11})\big)\, E\big(\det(\alpha_3 \alpha_4 \widetilde{G}_* + \widetilde{G})\big),
$$

where

$$
\widetilde{G} = \begin{pmatrix} \alpha_3 G^{22} & \alpha_3 G^{23} + \alpha_2 G^{21} \\ \alpha_3 G^{32} + \alpha_2 G^{12} & \alpha_1 G^{11} + \alpha_3 G^{33} + \alpha_2 (G^{13} + G^{31}) \end{pmatrix},
\qquad
\widetilde{G}_* = \begin{pmatrix} G_*^{22} & G_*^{23} \\ G_*^{32} & G_*^{33} \end{pmatrix} \sim \mathrm{Wishart}_{N+P}(I_{N+P}, M),
$$

and $\alpha_4 = 1 - \alpha_2^2/(\alpha_1 \alpha_3)$. Let $U$ be a $2 \times 2$ block orthonormal matrix such that $\widetilde{G}_{**} = U' \widetilde{G}_* U$ is diagonal. It is easy to see that $U' \widetilde{G} U$ has the same distribution as $\widetilde{G}$, and hence

$$
\det(\alpha_3 \alpha_4 \widetilde{G}_* + \widetilde{G}) \stackrel{D}{=} \det\big(\alpha_3 \alpha_4 \widetilde{G}_{**} + \widetilde{G}\big).
$$

Then $\det(\alpha_3 \alpha_4 \widetilde{G}_{**} + \widetilde{G})$ can be expanded as the summation of the product of the determinant of any $l \times l$ principal minor of $\alpha_3 \alpha_4 \widetilde{G}_{**}$ (or $\alpha_3 \alpha_4 \widetilde{G}_*$) and the determinant of the $(N+P-l) \times (N+P-l)$ principal minor of $\widetilde{G}$ formed by the remaining $N+P-l$ rows and columns. For each $l$, let $n$ and $p = l - n$ be the numbers of rows or columns taken from the $N \times N$ upper diagonal block of $\widetilde{G}_*$ and from the $P \times P$ lower diagonal block of $\widetilde{G}_*$, respectively. Then, by (9),

$$
\begin{aligned}
E(\det(\widetilde{Q})) = {} & \alpha_1^M\, E\big(\det(Q_x^{11})\big) \sum_{n=0}^{N} \sum_{p=0}^{P} (\alpha_3 \alpha_4)^{n+p}\,
E\big(\mathrm{detr}_n(G_*^{22})\big)\, E\big(\mathrm{detr}_p(G_*^{33})\big) \\
& \times E\Big(\alpha_3^{N-n} \det\big((G^{22})_{N-n}\big)\, \det\big((\alpha_1 F^{11} + \alpha_3 F^{22} + \alpha_2 (F^{12} + F^{21}) + \alpha_1 \alpha_4 F_*^{11})_{P-p}\big)\Big).
\end{aligned}
$$

Hence, the expression (10) with $\eta = \nu - M$ and with $N$, $P$ replaced by $N - n$ and $P - p$, respectively, yields

$$
\begin{aligned}
E(\det(\widetilde{Q})) = {} & \alpha_1^M\, \frac{\nu!}{(\nu-M)!} \sum_{n=0}^{N} \sum_{p=0}^{P}
\frac{M!}{(M-n)!}\, \frac{M!}{(M-p)!}\, \frac{(\nu-M)!}{(\nu-M-N+n)!}\,
(\alpha_3 \alpha_4)^{n+p} \binom{N}{n} \binom{P}{p}\, \alpha_3^{N-n} \\
& \times \sum_{u=0}^{P-p} (\alpha_1 \alpha_4)^u \binom{N-n}{u}\, \frac{(N-n)!}{(N-n-u)!}\, (P-p-u)!
\sum_{l=0}^{[\frac{P-p-u}{2}]} \sum_{q=0}^{[\frac{P-p-u}{2}]-l}
\binom{\nu-M-N+n}{l} \binom{\nu-M-N+n}{P-p-u-l} \\
& \times \binom{P-p-u-2l}{2q}\, s_2^l\, 2^{-(P-p-u-2l-1)} (1 + \delta_{P-p-u,2l})^{-1} (s_1^2 - 2s_2)^q\, s_1^{P-p-u-2l-2q},
\end{aligned}
$$

and the result follows after proper simplifications.

Lemma 9 Let $\widetilde{Q}$ be the same matrix as in Lemma 8, let $\gamma_1$, $\gamma_2$ and $\gamma_4$ be fixed real numbers, and let $\widetilde{z} = (0, 0, z_4)'$, $\widetilde{w} = (\gamma_1 w_1, \gamma_2 w_2, \gamma_4 w_4)'$, with

$$
w_1 \sim \mathrm{Normal}_M(0, I_M), \qquad w_2 \sim \mathrm{Normal}_N(0, I_N), \qquad w_4 \sim \mathrm{Normal}_P(0, I_P),
$$

independently and independent of $\widetilde{Q}$. Then

$$
\begin{aligned}
E\big(\det(\widetilde{Q} + \widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}')\big)
= {} & \sum_{x=0}^{M} \sum_{y=0}^{N} \sum_{z=0}^{P} \Delta_{x,y,z}\,
\alpha_1^{M-x} \alpha_3^{N-y}\, \frac{\nu!\,(N-y)!\,(P-z)!}{(\nu-M+x)!}
\sum_{n=0}^{N-y} \sum_{p=0}^{P-z} \sum_{u=0}^{P-z-p}
\alpha_3^p \alpha_1^u \alpha_4^{n+p+u} \\
& \times \binom{M-x}{n}\binom{M-x}{p}\binom{\nu-M+x}{N-y-n}\binom{N-y-n}{u}
\sum_{l=0}^{[\frac{P-z-p-u}{2}]} \sum_{q=0}^{[\frac{P-z-p-u}{2}]-l}
\binom{\nu-M-N+x+y+n}{l} \\
& \times \binom{\nu-M-N+x+y+n}{P-z-p-u-l}\binom{P-z-p-u-2l}{2q}\,
s_2^l\, 2^{-(P-z-p-u-2l-1)} (1 + \delta_{P-z-p-u,2l})^{-1} \\
& \times (s_1^2 - 2s_2)^q\, s_1^{P-z-p-u-2l-2q},
\end{aligned}
$$

where

$$
\Delta_{x,y,z} = \begin{cases}
1, & x = y = z = 0, \\
-\gamma_4^2\, P(P-1), & x = y = 0,\; z = 2, \\
-\gamma_1^2\, M P, & y = 0,\; x = z = 1, \\
-\gamma_2^2\, N P, & x = 0,\; y = z = 1, \\
0, & \text{elsewhere}.
\end{cases}
$$

Proof. It is easy to see that $\det(\widetilde{Q} + \widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}')$ can be expanded as the summation of the product of the determinant of any $l \times l$ principal minor of $\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}'$ and the determinant of the $(D-l) \times (D-l)$ principal minor of $\widetilde{Q}$ formed by the remaining $D - l$ rows and columns. For each $l$, let $x$, $y$ and $z$ be the numbers of rows or columns taken from the first, second and third diagonal blocks of $\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}'$. Then

$$
E\big(\det(\widetilde{Q} + \widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}')\big)
= \sum_{x=0}^{M} \sum_{y=0}^{N} \sum_{z=0}^{P}
E\big(\mathrm{detr}_{x,y,z}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}')\big)\,
E\big(\det(\widetilde{Q}_{M-x,N-y,P-z})\big).
$$

On the one hand, $\mathrm{detr}_{x,y,z}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}') = 0$ for $x + y + z > 2$. On the other hand, by the definition of $\widetilde{z}$, $\mathrm{detr}_{x,y,0}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}') = 0$ for $x \neq 0$ and $y \neq 0$. Hence the only triples $(x,y,z)$ that contribute to the summation above are $(0,0,0)$, $(0,0,2)$, $(1,0,1)$ and $(0,1,1)$. Thus, similarly to Lemma A.7 in [3], it is easy to show that

$$
\mathrm{detr}_{0,0,0}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}') = 1, \qquad
\mathrm{detr}_{0,0,2}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}') = -\gamma_4^2\, P(P-1),
$$
$$
\mathrm{detr}_{1,0,1}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}') = -\gamma_1^2\, M P, \qquad
\mathrm{detr}_{0,1,1}(\widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}') = -\gamma_2^2\, N P,
$$

and the result follows by applying Lemma 8 with $M$, $N$ and $P$ replaced by $M-x$, $N-y$ and $P-z$, respectively.

Lemma 10 Let $H_x \sim \mathrm{Normal}_{M+P}(0, V(I_{M+P}))$ and $H_y \sim \mathrm{Normal}_{N+P}(0, V(I_{N+P}))$ be $2 \times 2$ block matrices of dimensions $M+P$ and $N+P$, respectively, and let $\beta_1$, $\beta_2$ be fixed scalars. Define

$$
\widetilde{H} = \begin{pmatrix}
\beta_1 H_x^{11} & 0 & \beta_1 H_x^{12} \\
0 & \beta_2 H_y^{11} & \beta_2 H_y^{12} \\
\beta_1 H_x^{21} & \beta_2 H_y^{21} & \beta_1 H_x^{22} + \beta_2 H_y^{22}
\end{pmatrix},
$$

and let $A$ be a symmetric matrix with the same block structure as $\widetilde{H}$ and such that $A \stackrel{D}{=} U'AU$ for any fixed orthonormal matrix $U$. Let $A_{i,j,k}$ be any $(i+j+k) \times (i+j+k)$ principal minor of $A$, with $i$, $j$, $k$ rows and columns corresponding to the first, second and third diagonal blocks of $A$, respectively, and let $\mathrm{detr}_{i,j,k}(A)$ be the sum of such principal minors. Then

$$
\begin{aligned}
E\big(\det(\widetilde{H} + A)\big) = {} & \sum_{i=0}^{M} \sum_{j=0}^{N} \sum_{k_1=0}^{P} \sum_{k_2=0}^{P-k_1}
\binom{M+P}{i+k_1} \binom{N+P}{j+k_2}\,
\frac{(-1)^{\frac{i+k_1}{2}} (i+k_1)!}{2^{\frac{i+k_1}{2}} \big(\frac{i+k_1}{2}\big)!}\,
\frac{(-1)^{\frac{j+k_2}{2}} (j+k_2)!}{2^{\frac{j+k_2}{2}} \big(\frac{j+k_2}{2}\big)!} \\
& \times \beta_1^{i+k_1} \beta_2^{j+k_2}\, E\big(\det(A_{M-i,N-j,P-k_1-k_2})\big).
\end{aligned}
$$

Proof. $\widetilde{H}$ can be decomposed as the sum $\widetilde{H} = \widetilde{H}_x + \widetilde{H}_y$ with

$$
\widetilde{H}_x = \beta_1 \begin{pmatrix} H_x^{11} & 0 & H_x^{12} \\ 0 & 0 & 0 \\ H_x^{21} & 0 & H_x^{22} \end{pmatrix},
\qquad
\widetilde{H}_y = \beta_2 \begin{pmatrix} 0 & 0 & 0 \\ 0 & H_y^{11} & H_y^{12} \\ 0 & H_y^{21} & H_y^{22} \end{pmatrix}.
$$

Since $A \stackrel{D}{=} U'AU$ for any fixed orthonormal matrix $U$, $E(\det(\widetilde{H} + A))$ can be expanded as

$$
E\big(\det(\widetilde{H} + A)\big) = \sum_{i=0}^{M} \sum_{j=0}^{N} \sum_{k_1=0}^{P}
E\big(\mathrm{detr}_{i,j,k_1}(\widetilde{H}_x)\big)\,
E\big(\det((\widetilde{H}_y)_{M-i,N-j,P-k_1} + A_{M-i,N-j,P-k_1})\big).
$$

It is easy to check that $\mathrm{detr}_{i,j,k}(\widetilde{H}_x) = 0$ for $j \neq 0$, hence

$$
E\big(\det(\widetilde{H} + A)\big) = \sum_{i=0}^{M} \sum_{k_1=0}^{P}
\beta_1^{i+k_1}\, E\big(\mathrm{detr}_{i+k_1}(H_x)\big)\,
E\big(\det((\widetilde{H}_y)_{M-i,N,P-k_1} + A_{M-i,N,P-k_1})\big).
$$

Proceeding in the same way with the term $E(\det((\widetilde{H}_y)_{M-i,N,P-k_1} + A_{M-i,N,P-k_1}))$, it is obtained that

$$
E\big(\det(\widetilde{H} + A)\big) = \sum_{i=0}^{M} \sum_{j=0}^{N} \sum_{k_1=0}^{P} \sum_{k_2=0}^{P-k_1}
\beta_1^{i+k_1} \beta_2^{j+k_2}\,
E\big(\mathrm{detr}_{i+k_1}(H_x)\big)\, E\big(\mathrm{detr}_{j+k_2}(H_y)\big)\,
E\big(\det(A_{M-i,N-j,P-k_1-k_2})\big),
$$

from which, by Lemma 5.3.2 in (Adler, 1981), the desired result follows.

Now, $E(\det(\widetilde{H} + \widetilde{Q} + \widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}'))$ is easily obtained by setting $A = \widetilde{Q} + \widetilde{z}\widetilde{w}' + \widetilde{w}\widetilde{z}'$ in Lemma 10 and using Lemma 9 for computing $E(\det(A_{M-i,N-j,P-k_1-k_2}))$. Finally, for obtaining the EC densities one must essentially follow the proof of Theorem 4.

5 Applications

5.1 Simulations

In the neuroimaging literature a RF X(u) is often modeled as white noise convolved with an isotropic Gaussian filter. The width of the filter is measured by its Full Width at Half Maximum (FWHM ). It is then straightforward to show that Λx = IM

4 log 2 FWHM 2

which motivates us to define $\mathrm{FWHM} = (4\log 2)^{1/2}\det(\Lambda_x)^{-1/(2M)}$ for any stationary random field.

Table 1 shows the approximate critical thresholds for the maximum of various correlation RFs at the significance level P = 0.05. The correlation threshold at significance level P = 0.05 was found by equating the right hand side of (4) to 0.05, using Lemma 1 for the expected EC and Theorem for the EC density. These values correspond to a sample of ν = 40 images Xi(u,t), Yi(v,t), i = 1, ..., ν inside a spherical search region of 1000cc and time interval [0, 128]ms with spatial smoothing of FWHM = 20mm and temporal smoothing of FWHM = 10ms.

RF                               M  N  P  threshold c
Simple correlation               3  0  0  0.7170
Cross correlation                3  3  0  0.8016
Time-varying simple correlation  3  0  1  0.7701
Time-varying cross correlation   3  3  1  0.8430

Table 1: Approximate critical thresholds for the maximum of various correlation RFs at the significance level P = 0.05
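The smoothing model described above is easy to simulate. The sketch below is an illustrative setup only (a small 1D grid and an arbitrary 8-voxel FWHM, not the paper's actual simulation): it generates two sets of ν smoothed white-noise fields and computes the sample cross correlation between every pair of points, whose maximum is the quantity the tabled thresholds refer to.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n_vox = 40, 64                           # sample size and 1D grid length
fwhm = 8.0                                   # smoothing FWHM in voxels
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> Gaussian sigma

# Truncated Gaussian kernel used to smooth the white noise
x = np.arange(-24, 25)
kernel = np.exp(-x**2 / (2 * sigma**2))

def smooth(white):
    # Convolve each of the nu white-noise rows with the Gaussian kernel
    return np.array([np.convolve(row, kernel, mode="same") for row in white])

X = smooth(rng.standard_normal((nu, n_vox)))
Y = smooth(rng.standard_normal((nu, n_vox)))

# Standardize each point over the sample; C[u, v] is then the sample
# correlation between X(u) and Y(v), i.e. a cross correlation field
X = (X - X.mean(0)) / X.std(0)
Y = (Y - Y.mean(0)) / Y.std(0)
C = X.T @ Y / nu

print(C.shape, float(np.abs(C).max()))       # the max is what gets thresholded
```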

Figure 1: (a) Left: Average (over 76 trials) of face − scrambled face EEG data for 8 of the 128 components of the vector M(t) (corresponding to channels CZ, PZ, C4, FPZ, FZ, OZ, P8 and P7) plotted against time t (ms). There is a negative EEG peak around t = 110ms for the electrodes CZ, PZ, C4 in the temporal region, and FPZ and FZ in the frontal region, followed by a positive peak around 170ms. The reverse happens in the occipito-temporal channels OZ, P8 and P7 (the so-called N170 component). (b) Right: the EEG data Ĵ(t) reconstructed inside the brain at time t = 170ms. The main activated sources are located in the occipital region (bilateral), left frontal and right temporal regions.

5.2 EEG Analysis

As an illustration of the methods, we apply the P-value approximation for the time-varying cross correlation RF to an EEG data set from a face recognition experiment described in [11]. The experimental paradigm consists of the presentation of 76 face images to the subject: 38 famous faces and 38 unfamiliar faces. The EEG data were acquired on a 128-channel (128 electrodes on the scalp) Active-Two system, sampled at 2048Hz. Each facial image was followed by the presentation of a scrambled version of the same face, and the resulting EEG data were subtracted, to give 76 differences. The face was presented at time zero, and data were recorded over the time window T = [−200, 600]ms. To get an idea of the data, the overall mean (over all 76 differences) of the EEG data at 8 of the 128 electrodes is shown in Figure 1(a).

The EEG data at the 128 electrodes were then reconstructed throughout the brain using LORETA (Low Resolution Electromagnetic Tomography) [15]. This is a method of solving the following inverse problem. The relationship between the electric field J(t) at all points in the brain (written as a vector) at time t and the EEG measurements M(t) at all electrodes on the scalp (written as a

vector of length 128) at time t is given by the simple linear model based on elementary physics:

$$
M(t) = KJ(t) + e(t).
$$

The matrix K, called the leadfield, is a known function of the position of the electrodes on the scalp, the position of the points inside the brain, and the electrical properties of the brain tissue and scalp. The measurement error e(t) is assumed to be a vector of independent Gaussian processes at time t. The LORETA solution of the inverse problem is

$$
\hat J(t) = (K'K + \lambda^2 L'L)^{-1} K'M(t),
$$

where λ is a regularizing parameter and L is a discrete version of the Laplacian operator that forces the solution to be spatially smooth. The average (over trials) of the reconstructed data Ĵ(t) at one particular time point t = 170ms is shown in Figure 1(b).

The raw data Xi(u,t), Yi(v,t) that we shall use for our analysis are extracted from the components of Ĵ(t) for each trial. To remove the effect of familiar and unfamiliar faces depicted in Figure 1, we used the residuals obtained by subtracting the average of each group (familiar, unfamiliar) from the data in that group, to give ν = 76 − 2 = 74 effectively independent observations.

We now look for time-varying connectivity of the 3D reconstructed EEG residuals. We looked for connectivity between the two brain hemispheres, so that U is the left hemisphere and V is the right hemisphere, depicted in Figure 2. The spatial and temporal FWHMs were estimated as 20.28mm and 14.17ms, respectively [13]. The intrinsic volumes were approximated by those of a sphere with the same volume, which gives a very good approximation for any non-spherical search region: µ0,1,2,3(U) = µ0,1,2,3(V) = 1, 191.24, 1.395 × 10^4, 5.037 × 10^5. The intrinsic volumes of T were µ0,1(T) = 1, 800. The cross correlation threshold at significance level P = 0.05 was found by equating the right hand side of (4) to 0.05, using Lemma 1 for the expected EC and Theorem for the EC density, to give c = 0.8019.
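The regularized inverse above is ordinary Tikhonov-style linear algebra. A minimal NumPy sketch, with made-up dimensions, a random stand-in leadfield and a 1D discrete Laplacian standing in for the real L:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elec, n_src = 16, 40   # electrodes, source points (illustrative sizes)
lam = 0.1                # regularizing parameter lambda

K = rng.standard_normal((n_elec, n_src))  # stand-in leadfield
# Discrete 1D Laplacian as a stand-in for the smoothing operator L
L = -2 * np.eye(n_src) + np.eye(n_src, k=1) + np.eye(n_src, k=-1)

M_t = rng.standard_normal(n_elec)         # EEG data at one time point

# LORETA-style solution: J_hat = (K'K + lam^2 L'L)^{-1} K' M(t)
J_hat = np.linalg.solve(K.T @ K + lam**2 * L.T @ L, K.T @ M_t)

print(J_hat.shape)
```

Solving the regularized normal equations with `linalg.solve` avoids forming the inverse explicitly, which is both cheaper and numerically safer.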
Then the inter-hemispheric time-varying cross correlations were calculated and the resulting 7D image thresholded at the value c = 0.8019 in absolute value (so that significant negative correlations are detected as well). It should be noted that voxel pairs separated by less than one FWHM were ignored, since they will have high correlation due purely to the spatial smoothness. Figure 2 shows the significant cross correlations between hemispheres at four different time instants. We conclude that there is correlation between the right frontal region and the left occipito-temporal region. These results are consistent with the classical cognitive “core model” of face recognition and perception [8], which involves the processing and transfer of information in a neuronal network that includes bilateral brain structures in the occipital and frontal regions as well as in the superior-temporal area.
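A schematic version of this thresholding step can be sketched as follows. The group-mean subtraction and the one-FWHM exclusion rule follow the description above; the data, voxel counts and coordinates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
c, fwhm = 0.8019, 20.28  # threshold and spatial FWHM (mm) from the text
n_u = n_v = 30           # toy numbers of voxels per hemisphere

# Toy data: 38 + 38 trials; removing the two group means leaves
# nu = 76 - 2 = 74 effectively independent observations
data_u = rng.standard_normal((76, n_u))
data_v = rng.standard_normal((76, n_v))
groups = np.repeat([0, 1], 38)
for g in (0, 1):
    data_u[groups == g] -= data_u[groups == g].mean(0)
    data_v[groups == g] -= data_v[groups == g].mean(0)

# Sample cross correlations between all voxel pairs
zu = (data_u - data_u.mean(0)) / data_u.std(0)
zv = (data_v - data_v.mean(0)) / data_v.std(0)
C = zu.T @ zv / 76

# Toy voxel coordinates (mm); ignore pairs closer than one FWHM
pos_u = rng.uniform(0, 100, (n_u, 3))
pos_v = rng.uniform(0, 100, (n_v, 3))
dist = np.linalg.norm(pos_u[:, None] - pos_v[None, :], axis=2)

significant = (np.abs(C) >= c) & (dist >= fwhm)
print(int(significant.sum()))
```

For pure-noise residuals like these, correlations on 74 effective observations essentially never reach 0.8, so the count is expected to be zero; real data exceed the threshold only where genuine connectivity is present.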


Figure 2: The cross correlations between hemispheres thresholded at c = 0.8019 (P = 0.05) at times t = (a) 110ms, (b) 122ms, (c) 164ms, (d) 180ms. Letters A, P, L, R in each subplot mean Anterior, Posterior, Left and Right, respectively. We conclude that there are statistically significant correlations between right frontal and left occipital regions.


6 Appendix

Lemma 11 (Lemma A.4 in [3]) Let A and B be the following (M + N) × (M + N) matrices:

$$
A = \begin{pmatrix} \alpha_1 Q_x & \alpha_2 Q_{xy} \\ \alpha_2 Q_{yx} & \alpha_3 Q_y \end{pmatrix}, \qquad
B = \begin{pmatrix} \beta_1 H_x & 0 \\ 0 & \beta_2 H_y \end{pmatrix}
$$

with

$$
Q = \begin{pmatrix} Q_x & Q_{xy} \\ Q_{yx} & Q_y \end{pmatrix} \sim \mathrm{Wishart}_{M+N}(I_{M+N}, \nu),
$$

$H_x \sim \mathrm{Normal}_{M\times M}(0, V_M)$, $H_y \sim \mathrm{Normal}_{N\times N}(0, V_N)$, independently, and let $\alpha_1, \alpha_2, \alpha_3$ be fixed scalars. Then

$$
E(\det(A+B)) = \sum_{i=0}^{[M/2]}\sum_{j=0}^{[N/2]}\sum_{k=0}^{N-2j} (-1)^{i+j}\, 2^{-(i+j)}\, \beta_1^{2i}\beta_2^{2j}\alpha_1^{M-2i}\alpha_3^{N-2j}\alpha_4^{k}\,
\frac{\nu!\,M!\,N!}{i!\,j!\,k!\,(\nu-D+2i+2j+k)!\,(M-2i-k)!\,(N-2j-k)!},
$$

where $D = M+N$, $\alpha_4 = 1 - \alpha_2^2/(\alpha_1\alpha_3)$, and division by the factorial of a negative integer is treated as multiplication by zero.

Lemma 12 (Lemma A.9 in [3]) Let g, h, s, p be fixed scalars and e1, e2 be the solutions of e1 + e2 = s, e1e2 = p. Let $z, w \sim \mathrm{Normal}_M(0, I_M)$, $H \sim \mathrm{Normal}_{M\times M}(0, V_M)$ and $Q_1, Q_2 \sim \mathrm{Wishart}_M(I_M, \nu)$, independently. Then

$$
E(\det(g(zw'+wz') + hH - e_1Q_1 - e_2Q_2)) = \sum_{i=0}^{[M/2]}\sum_{j=0}^{[M/2]-i}\sum_{k=0}^{[M/2]-i-j} (-1)^{M-i}\binom{\nu}{j}\binom{\nu}{M-2i-j}\binom{M-2i-2j}{2k}
$$
$$
\times\; \frac{M!\,(h^{2i} + 2i\,g^2 h^{2i-2})}{2^{M-i-2j-1}\,(1+\delta_{M-2i,2j})\,i!}\; s^{M-2(i+j+k)}\, p^{j}\, (s^2-4p)^{k}.
$$

Lemma 13 (Lemma A.6 in [3]) Let $V_1, V_2 \sim \mathrm{Wishart}_P(I_P, \xi)$ independently and e1, e2 be fixed scalars satisfying e1 + e2 = s1 and e1e2 = s2. Then

$$
E(\det(e_1V_1 + e_2V_2)) = \sum_{l=0}^{[P/2]}\sum_{q=0}^{[P/2]-l} P!\binom{\xi+l}{2l}\binom{\xi-l}{P-2l}\binom{P-2l}{2q}\, s_2^{l}\, 2^{-(P-2l-1)}(1+\delta_{P,2l})^{-1}\,(s_1^2-2s_2)^{q}\, s_1^{P-2l-2q}.
$$
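Expectations of determinants of Wishart matrices, such as those appearing in Lemmas 11-13, are easy to spot-check by simulation. The sketch below (an illustrative check, not part of the paper) verifies the classical identity $E(\det V) = \xi!/(\xi-P)!$ for $V \sim \mathrm{Wishart}_P(I_P, \xi)$, which is, up to the factor $e_1^P$, what Lemma 13 gives when $e_2 = 0$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)
P, xi, n_sim = 2, 5, 200_000  # dimension, degrees of freedom, replications

# V = G'G with G a xi-by-P matrix of iid N(0,1) entries is Wishart_P(I_P, xi)
G = rng.standard_normal((n_sim, xi, P))
V = np.einsum('nij,nik->njk', G, G)   # batched G'G
est = np.linalg.det(V).mean()         # Monte Carlo estimate of E(det V)

exact = factorial(xi) / factorial(xi - P)  # = 20 for P = 2, xi = 5
print(est, exact)
```

With 200,000 replications the Monte Carlo standard error here is about 0.05, so the estimate lands very close to the exact value 20.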

References

[1] Adler, R.J. & Hasofer, A.M. (1976). Level crossings for random fields. Annals of Probability, 4:1-12.

[2] Beaky, M.M., Scherrer, R.J. & Villumsen, J.V. (1992). Topology of large scale structure in seeded hot dark matter models. Astrophysical Journal, 387:443-448.

[3] Cao, J. & Worsley, K.J. (1999). The geometry of correlation fields with an application to functional connectivity of the brain. Annals of Applied Probability, 9:1021-1057.

[4] Cao, J. & Worsley, K.J. (1999). The detection of local shape changes via the geometry of Hotelling's T2 fields. Annals of Statistics, 27:925-942.

[5] Cao, J. & Worsley, K.J. (2001). Applications of random fields in human brain mapping. In M. Moore (Ed.) Spatial Statistics: Methodological Aspects and Applications, Springer Lecture Notes in Statistics, 159:169-182.

[6] Carbonell, F., Galan, L. & Worsley, K.J. (2005). The geometry of the Wilks's Lambda random field. Annals of the Institute of Statistical Mathematics, submitted.

[7] Friston, K.J., Buchel, C., Fink, G.R., Morris, J., Rolls, E. & Dolan, R.J. (1997). Psychophysiological and modulatory interactions in neuroimaging. NeuroImage, 6:218-229.

[8] Gobbini, M.I. & Haxby, J.V. (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45:32-44.

[9] Gott, J.R., Park, C., Juszkiewicz, R., Bies, W.E., Bennett, D.P., Bouchet, F.R. & Stebbins, A. (1990). Topology of microwave background fluctuations: theory. Astrophysical Journal, 352:1-14.

[10] Hasofer, A.M. (1978). Upcrossings of random fields. Supplements of Advances in Applied Probability, 10:14-21.

[11] Henson, R.N., Goshen-Gottstein, Y., Ganel, T., Otten, L.J., Quayle, A. & Rugg, M.D. (2003). Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex, 13:793-805.

[12] Horwitz, B., Grady, C.L., Mentis, M.J., Pietrini, P., Ungerleider, L.G., Rapoport, S.I. & Haxby, J.V. (1996). Brain functional connectivity changes as task difficulty is altered. NeuroImage, 3:S248.

[13] Kiebel, S.J., Poline, J-B., Friston, K.J., Holmes, A.P. & Worsley, K.J. (1999). Robust smoothness estimation in statistical parametric maps using normalised residuals from the general linear model. NeuroImage, 10:756-766.
[14] McIntosh, A.R. & Gonzalez-Lima, F. (1994). Structural equation modeling and its application to network analysis of functional brain imaging. Human Brain Mapping, 2:2-22.

[15] Pascual-Marqui, R.D., Michel, C.M. & Lehmann, D. (1994). Low-resolution electromagnetic tomography: a new method for localizing electrical activity of the brain. International Journal of Psychophysiology, 18:49-65.

[16] Strother, S.C., Anderson, J.R., Schaper, K.A., Sidtis, J.J., Liow, J.S., Woods, R.P. & Rottenberg, D.A. (1995). Principal component analysis and the scaled subprofile model compared to intersubject averaging and statistical parametric mapping: I. "Functional connectivity" of the human motor system studied with O15 water PET. Journal of Cerebral Blood Flow and Metabolism, 15:738-753.

19

[17] Taylor, J.E. & Worsley, K.J. (2007). Random fields of multivariate test statistics, with applications to shape analysis and fMRI. Annals of Statistics, accepted.

[18] Vogeley, M.S., Park, C., Geller, M.J., Huchra, J.P. & Gott, J.R. (1994). Topological analysis of the CfA Redshift Survey. Astrophysical Journal, 420:525-544.

[19] Worsley, K.J. (1994). Local maxima and the expected Euler characteristic of excursion sets of χ2, F and t fields. Advances in Applied Probability, 26:13-42.

[20] Worsley, K.J. (1995). Boundary corrections for the expected Euler characteristic of excursion sets of random fields, with an application to astrophysics. Advances in Applied Probability, 27:943-959.

[21] Worsley, K.J. (1995). Estimating the number of peaks in a random field using the Hadwiger characteristic of excursion sets, with applications to medical images. Annals of Statistics, 23:640-669.

[22] Worsley, K.J., Cao, J., Paus, T. & Evans, A.C. (1998). Applications of random field theory to functional connectivity. Human Brain Mapping, 6:364-367.

[23] Worsley, K.J., Taylor, J.E., Tomaiuolo, F. & Lerch, J. (2004). Unified univariate and multivariate random field theory. NeuroImage, 23:S189-195.

[24] Worsley, K.J., Chen, J-I., Lerch, J. & Evans, A.C. (2005). Comparing connectivity via thresholding correlations and SVD. Philosophical Transactions of the Royal Society, 360:913-920.
