Distance between Nonidentically Weakly Dependent Random Vectors and Gaussian Random Vectors under the Bounded Lipschitz Metric∗

Alessio Sancetta†

May 23, 2005

Abstract

This paper provides bounds for the rate of weak convergence of a multivariate weakly dependent nonidentically distributed partial sum to the Gaussian law. Using the approach of Bentkus (2003), we give a bound whose dependence on the dimension matches the one available in the case of iid random variables. The bound is stated in terms of minimal high level assumptions.

Keywords: Bounded Lipschitz Metric, Central Limit Theorem, Copula, Mixing.

1 Statement of Result

This paper gives an estimate of the distance, under the bounded Lipschitz metric, between the partial sum of dependent nonidentically distributed random variables with values in $\mathbb{R}^K$ and a Gaussian random vector. The bounded Lipschitz metric metrizes weak convergence and is directly related to the Prohorov distance (Dudley, 2002, for details). The result is an extension of Bentkus (2003) (Be in the sequel) from the iid case to this more general setting. The conditions used in the present work are high level assumptions that need to be checked on a case by case basis. The advantage of using high level assumptions is that the result can be used in many different applications.

∗ Supported by the ESRC award RES-000-23-0400.
† Faculty of Economics, University of Cambridge, Cambridge CB3 9DE. Tel. +44-(0)1223-335272, e-mail: [email protected].

The bound is in terms of the third absolute moment of a partial sum of the original random vectors and an error related to the dependence among the random vectors. The third absolute moment can be bounded by well known moment inequalities (e.g. Doukhan and Louhichi, 1999). The error related to the dependence among the random variables is expressed in terms of minimal conditions that are easy to check if we assume any of the existing weak dependence conditions in the literature.

Let $(X_i)_{i\in\mathbb{Z}}$ be a sequence of random variables with values in $\mathbb{R}$. Let $C^A$ be the copula function of $X := (X_i)_{i\in A}$, $A \subset \mathbb{Z}$, where $\#A = n$ is the cardinality of $A$. Recall that if $(X_i)_{i\in A}$ has joint distribution $F_A$ with marginals $F_{i_1},\dots,F_{i_n}$, then
$$C^A(u) := (F_A \circ Q_X)(u) := F_A\left(F_{i_1}^{-1}(u_{i_1}),\dots,F_{i_n}^{-1}(u_{i_n})\right), \qquad F_i^{-1}(u_i) := \inf\{s : F_i(s) \ge u_i\}$$
(e.g. Sklar, 1973), where $Q_X$ is the componentwise quantile transform, as shown above. Suppose $X := (X_i)_{i\in A}$ are random variables with values in $\mathbb{R}^K$, $X_i := (X_{i1},\dots,X_{iK})$, and $\#A = n$. Their copula is just
$$C^A(u) = (F_A \circ Q_X)(u) = F_A\left(F_{i_1 1}^{-1}(u_{i_1 1}),\dots,F_{i_1 K}^{-1}(u_{i_1 K}),\,F_{i_2 1}^{-1}(u_{i_2 1}),\dots,F_{i_2 K}^{-1}(u_{i_2 K}),\dots,F_{i_n K}^{-1}(u_{i_n K})\right), \tag{1}$$
where $F_{ik}$ is the marginal distribution of $X_{ik}$ and $F_A$ is the joint distribution of $(X_i)_{i\in A}$. Therefore, for the moment, there is no loss in assuming that $(X_i)_{i\in A}$ are random variables with values in $\mathbb{R}$. If the marginals are continuous, $C^A$ is unique (e.g. Sklar, op. cit.) and it is the joint distribution of uniform random variables in the unit hypercube: $U_i := F_i(X_i)$ is a uniform random variable. If the marginals are not continuous, define $\tilde F_i(x,\tau) := \Pr(X_i < x) + \tau\Pr(X_i = x)$, where $\tau$ is a $[0,1]$ uniform random variable independent of $X_i$. It is well known (e.g. Proposition 1 in Rüschendorf and de Valk, 1993) that $U_i = \tilde F_i(X_i,\tau)$ is uniformly distributed and $\tilde F_i^{-1}(U_i) = X_i$ almost surely. Then there exists a unique copula $C^A$ for $(X_i)_{i\in A}$ with uniform marginals. In the sequel, $U_i := F_i(X_i)$ if $F_i$ is continuous, and $U_i := \tilde F_i(X_i,\tau)$ otherwise.
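To make the randomized transform concrete, the following sketch (illustrative code, not part of the original paper; the three-point distribution is an arbitrary choice) checks numerically that $\tilde F_i(X_i,\tau)$ is uniform even when $F_i$ has atoms, and that the quantile transform recovers $X_i$ almost surely.

```python
# Sketch: randomized probability integral transform for a discrete marginal.
# F~(x, tau) = Pr(X < x) + tau * Pr(X = x), with tau ~ Uniform[0,1] independent of X.
import numpy as np

rng = np.random.default_rng(0)
support = np.array([0.0, 1.0, 2.0])                      # hypothetical atoms
probs = np.array([0.2, 0.5, 0.3])
F_left = np.concatenate(([0.0], np.cumsum(probs)[:-1]))  # Pr(X < x) at each atom

X = rng.choice(support, size=10**6, p=probs)
tau = rng.uniform(size=X.size)
idx = np.searchsorted(support, X)
U = F_left[idx] + tau * probs[idx]          # U = F~(X, tau), exactly Uniform[0,1]

# Quantile transform F^{-1}(u) = inf{s : F(s) >= u} recovers X almost surely.
X_back = support[np.searchsorted(np.cumsum(probs), U)]
assert np.all(X_back == X)
print(np.histogram(U, bins=10, range=(0, 1))[0] / U.size)  # ~0.1 in each bin
```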

The symbol $\lesssim$ means less than or equal up to a multiplicative absolute constant, $\asymp$ stands for equality up to a multiplicative absolute constant, and a superscript $w$ means that the relation holds weakly (i.e. in distribution). Suppose $\mu_1$ and $\mu_2$ are Borel measures. The bounded Lipschitz metric is defined as $\rho(\mu_1,\mu_2) := \sup_{\varphi\in BL_1}|\mu_1\varphi - \mu_2\varphi|$, where $BL_1$ is the class of functions bounded by one with Lipschitz constant one (e.g. Dudley, 2002, p. 390). For any metric space $(S,d)$, $\rho$ is a metric on the set of all laws on $S$ (Dudley, op. cit., Proposition 11.3.2).

Condition 1 For arbitrary sequences $p = p_n$, $q = q_n$, $r = r_n$ such that $n \asymp r(p+q)$, define
$$H_j := \{i \in \mathbb{N} : (j-1)(p+q)+1 \le i \le (j-1)(p+q)+p\},$$
$$H_j' := \{i \in \mathbb{N} : (j-1)q + jp + 1 \le i \le j(p+q)\},$$
$j = 1,\dots,r$, and $H^r := \bigcup_{j=1}^r H_j$, $H'^r := \bigcup_{j=1}^r H_j'$. Then, for some $p, q, r$ with $q = o(p)$, there exists a sequence $\psi_n(p,q)$ going to zero as $n \to \infty$ such that
$$\sup_{\varphi\in BL_1}\left|\int (\varphi \circ Q_X)\, d\left(C^{H^r} - \prod_{j=1}^r C^{H_j}\right)\right| \le \psi_n(p,q),$$
where $C^A$ is the copula of $X := (X_i)_{i\in A}$ and $Q_X$ is the related quantile transform.
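For concreteness, here is a small sketch (illustrative code, with the hypothetical values $n = 20$, $p = 3$, $q = 2$) of the blocking scheme: the $H_j$ are the "big" blocks of length $p$ and the $H_j'$ the "small" separating blocks of length $q$.

```python
# Sketch: the index blocks of Condition 1.
def blocks(n, p, q):
    r = n // (p + q)                          # n ≍ r(p + q)
    H = [range((j - 1) * (p + q) + 1, (j - 1) * (p + q) + p + 1) for j in range(1, r + 1)]
    Hprime = [range((j - 1) * q + j * p + 1, j * (p + q) + 1) for j in range(1, r + 1)]
    return [list(b) for b in H], [list(b) for b in Hprime]

H, Hprime = blocks(n=20, p=3, q=2)
print(H)       # [[1, 2, 3], [6, 7, 8], [11, 12, 13], [16, 17, 18]]
print(Hprime)  # [[4, 5], [9, 10], [14, 15], [19, 20]]
```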

Condition (1) will be used for sequences $(X_i)_{i\in\mathbb{Z}}$ taking values in $\mathbb{R}^K$. In this case, the copula of $(X_i)_{i\in A\subset\mathbb{Z}}$ is given by (1). Let $|\cdot|$ stand for the Euclidean norm; for $n \in \mathbb{N}_+$, let $S_n := \sum_{i=1}^n X_i$, and for a set $A \subset \mathbb{Z}$, let $S_A := \sum_{i\in A} X_i$. Then, using the same notation as in Condition (1), we will prove the following result.

Theorem 2 Suppose $(X_i)_{i\in\mathbb{Z}}$ is a sequence of mean zero random vectors with values in $\mathbb{R}^K$, with finite third moments, satisfying Condition (1). Suppose $B_n$ is a sequence such that $|B_n^{-1}S_n|^2/K$ is asymptotically uniformly integrable and $\mathrm{var}(B_n^{-1}S_n)$ is a positive definite matrix $\forall n$. Define $\beta_{p,i} := E|B_p^{-1}S_{H_i}|^3$ and let $B_n^{-1}S_n$ have law $\mu_n$. Then, for some $r, p, q$ as in Condition (1),
$$\rho(\mu_n, \mathcal{N}_n) \lesssim B_r^{-3}\sum_{i=1}^r \beta_{p,i}\,\ln(K+1) + (Kq/p)^{1/2} + \psi_n(p,q),$$
where $\mathcal{N}_n$ is the Gaussian law with mean $0_K$ (the zero $K$-vector) and variance $\mathrm{var}(B_n^{-1}S_n)$.

For a metric space $(S,d)$, $\epsilon > 0$, and $A \subset S$, define $A^\epsilon := \{y \in S : d(x,y) < \epsilon \text{ for some } x \in A\}$. The Prohorov distance between two laws on $S$ is defined as
$$\pi(\mu_1,\mu_2) := \inf\{\epsilon > 0 : \mu_1 A \le \mu_2 A^\epsilon + \epsilon, \text{ for all Borel sets } A\}.$$

The following is a consequence of Theorem (2) and the well known relation between $\pi(\mu_1,\mu_2)$ and $\rho(\mu_1,\mu_2)$ (see Dudley, op. cit., proof of Theorem 11.3.3, (c) implies (d), p. 396).

Corollary 3 Under the conditions of Theorem (2),
$$\pi(\mu_n, \mathcal{N}_n) \lesssim \left(B_r^{-3}\sum_{i=1}^r \beta_{p,i}\,\ln(K+1) + (Kq/p)^{1/2} + \psi_n(p,q)\right)^{1/2}.$$

The proof of the Theorem is deferred to the next section. The statement of this result is simple: the first term is a modified version of the bound in Be. The second term accounts for deleting small blocks of size q, while the third is the bound from Condition (1).

1.1 Remarks on Condition 1

Condition 1 is stated in terms of uniform random variables because this provides a better understanding of the coupling condition and makes it easier to establish its relation with existing dependence conditions.

Example 4 Suppose $X := (X_i)_{i\in\{1,\dots,n\}}$ and $\xi := (\xi_i)_{i\in\{1,\dots,n\}}$ have the same marginals $(F_i)_{i\in\{1,\dots,n\}}$. To each pair $X_i$ and $\xi_i$ associate a uniform random variable $\tau_i$ to define $U_i := \tilde F_i(X_i,\tau_i)$ and $V_i := \tilde F_i(\xi_i,\tau_i)$. Then, by a change of variables,
$$\sup_{\varphi\in BL_1}|E\varphi(X_1,\dots,X_n) - E\varphi(\xi_1,\dots,\xi_n)| \tag{2}$$
$$= \sup_{\varphi\in BL_1}\left|E\varphi\left(F_1^{-1}(U_1),\dots,F_n^{-1}(U_n)\right) - E\varphi\left(F_1^{-1}(V_1),\dots,F_n^{-1}(V_n)\right)\right|$$
$$\gtrsim \sup_{\varphi\in BL_1}|E\varphi(U_1,\dots,U_n) - E\varphi(V_1,\dots,V_n)|, \tag{3}$$
where the last relation follows from the fact that the class of functions $\varphi\circ(F_1^{-1},\dots,F_n^{-1})$, $\varphi\in BL_1$, contains the functions that are bounded and Lipschitz of constant and exponent equal to one. Clearly, (2) metrizes weak convergence of $X$ to $\xi$, while (3) metrizes weak convergence of their respective copulae. Weak convergence of the copula of $X$ to the one of $\xi$ implies weak convergence of $X$ to $\xi$, though with possibly different rates of convergence. This follows from the fact that $\varphi\circ(F_1^{-1},\dots,F_n^{-1})$ is bounded and almost surely continuous. To see this when $F_i^{-1}$ is not continuous, use the Lipschitz condition on $\varphi$ and the fact that $F_i^{-1}$ ($\forall i$) can be approximated in $L_1$ (with respect to the Lebesgue measure on $[0,1]$) by a regularised version which is continuous.

The bound in Condition (1) can be found via different approaches. The examples that follow discuss two cases.

Example 5 (Strong Mixing) Suppose $X := (X_i)_{i\in\mathbb{N}}$ is a sequence of random variables with values in $\mathbb{R}$. For two sigma algebras generated by $X$, say $\mathcal{A}$ and $\mathcal{B}$,
$$\alpha(\mathcal{A},\mathcal{B}) := \sup_{A\in\mathcal{A},\,B\in\mathcal{B}}|\Pr(A\cap B) - \Pr(A)\Pr(B)|$$
is called the coefficient of strong mixing. Suppose $\mathcal{A}_{-\infty}^{(j-1)q+jp}$ and $\mathcal{A}_{j(p+q)}^{\infty}$ are the sigma algebras generated by $(X_i)_{i\le (j-1)q+jp}$ and $(X_i)_{i\ge j(p+q)}$. Using the fact that uniform convergence of distributions implies weak convergence, by repeated application of the triangle inequality,
$$\psi_n(p,q) \lesssim \sum_{j=1}^{r-1}\alpha\left(\mathcal{A}_{-\infty}^{(j-1)q+jp}, \mathcal{A}_{j(p+q)}^{\infty}\right) \le r\alpha_q,$$
where
$$\alpha_q := \sup_j \alpha\left(\mathcal{A}_{-\infty}^{(j-1)q+jp}, \mathcal{A}_{j(p+q)}^{\infty}\right).$$

Andrews (1984) gives a constructive proof of an AR(1) process not satisfying the strong mixing condition. Bradley (1986) gives a very famous example, and Doukhan and Louhichi (1999, Lemma 1) give a condition under which strong mixing holds for weakly dependent variables. In all these cases, the sequence fails to be strong mixing because of some set which is not a continuity set. In some cases it is easier to prove dependence results directly from the stochastic properties of the random variables.

Example 6 The bounded Lipschitz metric is bounded by 2 times the Prohorov distance (Corollary 11.6.5 in Dudley, 2002). Consider the following Markov inequality:
$$\Pr\left(\max_{i\in H^r}|X_i - \xi_i| \ge \epsilon\right) \le E\max_{i\in H^r}|X_i - \xi_i|/\epsilon \le \delta/\epsilon,$$
for some $\delta \in \mathbb{R}_+$. By the definition of the Prohorov distance (see also the first inequality in the proof of Theorem 4.1 in Billingsley, 1968, p. 25), choose $\epsilon = \delta^{1/2}$ and obtain the bound
$$\sup_{\varphi\in BL_1}|E\varphi(U_1,\dots,U_n) - E\varphi(V_1,\dots,V_n)| \le 2\delta^{1/2}.$$
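As a numerical illustration (a toy coupling invented for this sketch, not taken from the paper), one can Monte Carlo the coupling error $\delta$ and report the resulting bound $2\delta^{1/2}$:

```python
# Sketch: from E max|X_i - xi_i| <= delta to the bounded Lipschitz bound 2*delta^(1/2).
import numpy as np

rng = np.random.default_rng(0)
n, reps, a1 = 50, 2000, 0.7                       # AR(1) coefficient (hypothetical)
gaps = np.empty(reps)
for k in range(reps):
    eps = rng.standard_normal(n)
    x = np.empty(n); xi = np.empty(n)
    x[0] = rng.standard_normal()
    xi[0] = x[0] + 0.01 * rng.standard_normal()   # nearly coupled starting values
    for i in range(1, n):                         # same innovations: paths contract
        x[i] = a1 * x[i - 1] + eps[i]
        xi[i] = a1 * xi[i - 1] + eps[i]
    gaps[k] = np.abs(x - xi).max()
delta = gaps.mean()                               # estimate of E max_i |X_i - xi_i|
print(delta, 2 * np.sqrt(delta))  # second number bounds the BL distance of the laws
```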

The following gives a concrete example of this approach.


Example 7 (Example (6) cont'd: Causal MA(∞)) For simplicity, consider the one dimensional case. Suppose $X_i = \sum_{j=0}^\infty a_j\varepsilon_{i-j}$, where $(\varepsilon_i)_{i\in\mathbb{Z}}$ are iid with values in $\mathbb{R}$ and finite $s$-th moment ($s \ge 1$), and $\sum_{j=0}^\infty |a_j| < \infty$. Then it is simple to see that Condition (1) holds. However, we want an explicit bound. Construct
$$\xi_i = \sum_{j=0}^{q-1} a_j\varepsilon_{i-j} + \sum_{j\ge q} a_j\tilde\varepsilon_{i-j}$$
with
$$\mathrm{Law}\left((\tilde\varepsilon_i)_{i\in\mathbb{Z}}, (\varepsilon_i)_{i\in\mathbb{Z}}\right) = \mathrm{Law}\left((\tilde\varepsilon_i)_{i\in\mathbb{Z}}\right)\times\mathrm{Law}\left((\varepsilon_i)_{i\in\mathbb{Z}}\right), \qquad \mathrm{Law}\left((\tilde\varepsilon_i)_{i\in\mathbb{Z}}\right) = \mathrm{Law}\left((\varepsilon_i)_{i\in\mathbb{Z}}\right).$$
Then we have the bound
$$E\max_{i\in H^r}|X_i - \xi_i| \le 2E\max_{i\in H^r}\Big|\sum_{j\ge q} a_j\tilde\varepsilon_{i-j}\Big| \le 2n^{1/s}\max_{i\in H^r}\left(E\Big|\sum_{j\ge q} a_j\tilde\varepsilon_{i-j}\Big|^s\right)^{1/s},$$
which, by Example (6), provides an upper bound for $\psi_n(p,q)$ in terms of the $L_s$ norm ($s \ge 1$). However, using Lemma 2.2.2 in van der Vaart and Wellner (2000),
$$E\max_{i\in H^r}|X_i - \xi_i| \lesssim \ln(1+n)\max_{i\in H^r}\|X_i - \xi_i\|_{\psi_2},$$
where $\|\cdot\|_{\psi_2}$ is the Orlicz norm with function $\psi_2(x) := \exp(x^2) - 1$ (e.g. van der Vaart and Wellner, p. 95). Then we need to bound $\|X_i - \xi_i\|_{\psi_2} \le 2\big\|\sum_{j\ge q} a_j\tilde\varepsilon_{i-j}\big\|_{\psi_2}$. Now suppose that the $\varepsilon_i$'s are bounded with range $\Delta_\varepsilon$. Using Hoeffding's inequality,
$$\Pr\left(\Big|\sum_{j\ge q} a_j\tilde\varepsilon_{i-j}\Big| \ge t\right) \le 2\exp\left(-\frac{t^2}{\Delta_\varepsilon^2\sum_{j\ge q} a_j^2}\right),$$
so that Lemma 2.2.1 in van der Vaart and Wellner (2000) implies
$$\Big\|\sum_{j\ge q} a_j\tilde\varepsilon_{i-j}\Big\|_{\psi_2} \le \sqrt{3}\,\Delta_\varepsilon\Big(\sum_{j\ge q} a_j^2\Big)^{1/2},$$
and in the previous example we can define $\delta := 2\sqrt{3}\,\ln(1+n)\,\Delta_\varepsilon\big(\sum_{j\ge q} a_j^2\big)^{1/2}$.
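For instance, with geometric coefficients $a_j = \theta^j$ (a hypothetical choice, not from the paper) the tail sum is available in closed form, so $\delta$ and the bound of Example (6) can be computed directly:

```python
# Sketch: explicit delta for a causal MA(infinity) with a_j = theta**j.
import numpy as np

def psi_bound(n, q, theta=0.5, range_eps=1.0):
    tail = np.sqrt(theta**(2 * q) / (1 - theta**2))     # (sum_{j >= q} a_j^2)^(1/2)
    delta = 2 * np.sqrt(3) * np.log(1 + n) * range_eps * tail
    return 2 * np.sqrt(delta)                           # bound on psi_n(p,q), via Example (6)

print(psi_bound(n=10**4, q=20))   # decays geometrically in q, up to the log(1 + n) factor
```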

1.2 Remarks on the Theorem

The bound in Theorem (2) is obtained using the recently developed approach of Be. As remarked in Be, it is worth noticing the slow rate of growth of the bound with respect to the dimension $K$. The bound is in terms of $\beta_{p,i}$. This quantity can be bounded by means of Rosenthal type inequalities for dependent sequences (e.g. Rio, 2000, Doukhan and Louhichi, 1999, Dedecker and Doukhan, 2003).

Example 8 (Strong Mixing cont'd) Suppose $(X_i)_{i\in\mathbb{N}}$ is a centered sequence of strongly mixing random variables with values in $\mathbb{R}^K$ and with mixing coefficients $\alpha := (\alpha_i)_{i\in\mathbb{N}}$. Define $S_{H_i,k} := \sum_{j\in H_i} X_{jk}$. By convexity of norms,
$$\beta_{p,i} = E|B_p^{-1}S_{H_i}|^3 = E\Big(\sum_{k=1}^K (S_{H_i,k}/B_p)^2\Big)^{3/2} \le (K/B_p^2)^{3/2}\max_k\big(E|S_{H_i,k}|^4\big)^{3/4}.$$
To get a tidy bound for $(E|S_{H_i,k}|^4)^{3/4}$, suppose $X_{jk}$ is bounded $\forall j,k$. Then, from a special case of Theorem 3 in Doukhan and Louhichi (1999),
$$\big(E|S_{H_i,k}|^4\big)^{3/4} \lesssim \Big(p\sum_{i=0}^{p-1}\alpha_i^{1/2}\Big)^{3/2} \lesssim p^{3/2},$$
if $\alpha_i \asymp i^{-c}$, $c > 2$, so that $\beta_{p,i} \lesssim K^{3/2}$ (recall that $B_p^2 \asymp p$). From Example (5), $\psi_n(p,q) \lesssim r\alpha_q$. Setting $r \asymp n^{1-a}$, $p \asymp n^a$, $q \asymp n^{ab}$ with $(a,b) \in (0,1)^2$, and $\alpha_q \asymp q^{-c} \asymp n^{-abc}$,
$$\rho \lesssim n^{-(1-a)/2}K^{3/2}\ln(K+1) + K^{1/2}n^{a(b-1)/2} + n^{1-a}n^{-abc}.$$
If $K$ does not change with $n$, we simplify the bound by equating the exponents. Therefore, $\rho \lesssim n^{-c/(3+4c)}$.
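The exponent algebra can be checked symbolically; the following sketch (illustrative, with $K$ held fixed) equates the three exponents and recovers the rate $n^{-c/(3+4c)}$:

```python
# Sketch: equating the exponents of n in the three terms of the bound.
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
e1 = -(1 - a) / 2        # exponent of n in n^{-(1-a)/2}
e2 = a * (b - 1) / 2     # exponent of n in n^{a(b-1)/2}
e3 = 1 - a - a * b * c   # exponent of n in n^{1-a} n^{-abc}
sol = sp.solve([sp.Eq(e1, e2), sp.Eq(e1, e3)], [a, b], dict=True)[0]
print(sol)                        # a = (2c + 3)/(4c + 3), b = 3/(2c + 3)
print(sp.simplify(e1.subs(sol)))  # -c/(4c + 3), i.e. rho ≲ n^{-c/(3+4c)}
```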

Example 9 (Kernel Density Estimation) Suppose $(Z_i)_{i\in\mathbb{N}}$ is a centered sequence of strongly mixing random variables with values in $\mathbb{R}$ and with mixing coefficients $\alpha := (\alpha_i)_{i\in\mathbb{N}}$. Define $\hat f(t) = \sum_{i=1}^n \eta_h(Z_i - t)/n$ to be the kernel density estimator of $f(t)$, $t \in T$, a compact subset of $\mathbb{R}$, where $\eta_h(x) := \eta(x/h)/h$ and $\eta$ is a continuous bounded density function. To apply the functional central limit theorem to $\hat f(t)$, on top of equicontinuity in probability, we need to establish finite dimensional weak convergence to a normal distribution and control the rate of convergence in terms of $h \to 0$ as $n \to \infty$. Let $T_K := \{t_1,\dots,t_K\}$ be a finite partition of $T$, so that $h^{-1} \asymp K$. Then set $X_i = (\eta_h(Z_i - t_1),\dots,\eta_h(Z_i - t_K))$. For $h > 0$, by continuity of $\eta$, $(Z_i)_{i\in\mathbb{N}}$ and $(X_i)_{i\in\mathbb{N}}$ have the same mixing coefficients. Hence, from Example (5), $\psi_n(p,q) \lesssim r\alpha_q$. Setting $r, p, q, \alpha_q$ as in the previous example, $h \asymp n^{-d}$, $d > 0$, and $\epsilon \asymp \ln(\ln(n^d)+1)/\ln(n)$, equating the exponents,
$$\rho \lesssim n^{[-c(1-4d-2\epsilon)+3d+2\epsilon]/(3+4c)}.$$
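A short sketch of the construction (hypothetical choices: Gaussian kernel $\eta$, $T = [0,1]$, Beta-distributed $Z_i$) builds the vectors $X_i$ whose coordinates are $\eta_h(Z_i - t_k)$:

```python
# Sketch: the K-dimensional vectors of Example 9 and the implied KDE on the grid.
import numpy as np

def kde_vectors(Z, K):
    h = 1.0 / K                                   # bandwidth, h^{-1} ≍ K
    t = np.linspace(0.0, 1.0, K)                  # finite partition t_1, ..., t_K of T
    u = (Z[:, None] - t[None, :]) / h
    return np.exp(-u**2 / 2) / (np.sqrt(2 * np.pi) * h)   # row i is X_i

Z = np.random.default_rng(0).beta(2.0, 2.0, size=1000)
X = kde_vectors(Z, K=25)
f_hat = X.mean(axis=0)    # f^(t_k) = sum_i eta_h(Z_i - t_k) / n at each grid point
```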

2 Rate of Convergence Under the Bounded Lipschitz Metric

The proof of the Theorem is kept as concise as possible by referring to the proof in Be wherever possible. Some changes require technical results, which are contained in the next subsection. Since we take Taylor expansions, the following notation will be used: for $f \in \bigotimes_{i=1}^m \mathbb{R}^{K_i}$ and $x_i \in \mathbb{R}^{K_i}$, we write $f x_1 \cdots x_m$ for the $m$-linear form, or simply $f x^m$ if $x_1 = \dots = x_m$.

Proof of Theorem (2). Let $N_n$ be a random vector with law $\mathcal{N}_n$. We find a bound for $\sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n) - E\varphi(N_n)|$. Write $S_n = S_n' + R_n$, where
$$S_n' := \sum_{j=1}^r \sum_{i\in H_j} X_i, \qquad R_n := \sum_{j=1}^r \sum_{i\in H_j'} X_i,$$
and set $q \asymp p^b$, $b \in (0,1)$. Then,
$$\sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n) - E\varphi(N_n)| \le \sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n) - E\varphi(B_n^{-1}S_n')| + \sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n') - E\varphi(N_n)|$$
$$\le E|B_n^{-1}R_n| + \sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n') - E\varphi(N_n)|.$$
By the conditions of the Theorem and Lemma (13), $B_n$ is regularly varying of index $1/2$; hence $E|B_n^{-1}R_n| \le (E|B_n^{-1}R_n|^2)^{1/2} \lesssim (Kq/p)^{1/2}$. Define $\tilde S_r := \sum_{j=1}^r W_j$, $W_j := \sum_{i\in H_j} B_p^{-1}\xi_i$, where $(\xi_i)_{i\in H^r}$ are random variables such that
$$\mathrm{Law}\big((\xi_i)_{i\in H^r}\big) = \mathrm{Law}\big((X_i)_{i\in H_1}\big) \times \cdots \times \mathrm{Law}\big((X_i)_{i\in H_r}\big).$$

Then,
$$\sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n') - E\varphi(N_n)| \le \sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n') - E\varphi(B_r^{-1}\tilde S_r)| + \sup_{\varphi\in BL_1}|E\varphi(B_r^{-1}\tilde S_r) - E\varphi(N_n)|.$$
Write $X_i := F_i^{-1}(U_i)$ and $\xi_i := F_i^{-1}(V_i)$, where $U_i := \tilde F_i(X_i,\tau_i)$, $V_i := \tilde F_i(\xi_i,\tau_i)$, and $(\tau_i)_{i\in\mathbb{Z}}$ are iid $[0,1]$ uniform. By a change of variables to $[0,1]^{Kn}$,
$$\sup_{\varphi\in BL_1}\big|E\varphi(B_n^{-1}S_n') - E\varphi(B_r^{-1}\tilde S_r)\big| = \sup_{\varphi\in BL_1}\Big|E\varphi\Big(B_n^{-1}\sum_{i\in H^r}F_i^{-1}(U_i)\Big) - E\varphi\Big(B_n^{-1}\sum_{i\in H^r}F_i^{-1}(V_i)\Big)\Big| \lesssim \psi_n(p,q),$$
using Condition (1). Therefore,
$$\sup_{\varphi\in BL_1}|E\varphi(B_n^{-1}S_n) - E\varphi(N_n)| \lesssim \sup_{\varphi\in BL_1}|E\varphi(B_r^{-1}\tilde S_r) - E\varphi(N_n)| + (Kq/p)^{1/2} + \psi_n(p,q), \tag{4}$$
and we only need to establish the Theorem for the sum of independent nonidentically distributed random variables $(W_i)_{i\in[1,r]\cap\mathbb{N}}$. Define
$$\Delta := \sup_{\varphi\in BL_1}|E\varphi(B_r^{-1}\tilde S_r) - E\varphi(N_n)|, \qquad \beta_{p,i} := E|W_i|^3, \qquad \beta_p := B_r^{-2}\sum_{i=1}^r \beta_{p,i}.$$
From the proof of Theorem 3.2 in Bentkus (2003) it suffices to prove (eq. 3.7 in Be) that there exists an absolute constant $c_0$ such that
$$\Delta \le c_0 B_r^{-1}\beta_p + 2\delta K^{1/2} + c_0 B_r^{-1}\beta_p\ln\big(1 + \delta^{-2}\sin^2\gamma\big) + \Delta c_0 B_r^{-1}\beta_p/\sin\gamma,$$
for arbitrary $\delta > 0$ and $\gamma \in (0,\pi/2)$. This result follows in our case under minor modification of the proof of Theorem 3.2 in Bentkus (2003) to account for nonidentically distributed random vectors. We shall sketch these changes. Suppose $(Y_i)_{i\in\{1,\dots,r\}}$ are mean zero Gaussian random vectors such that $\mathrm{var}(Y_i) = \mathrm{var}(W_i)$, and define $Z_r := \sum_{i=1}^r Y_i$. Suppose $Y_{r0}$ is a mean zero Gaussian random vector, independent of all other random variables, such that $\mathrm{var}(Y_{r0}) = B_r^{-2}\sum_{i=1}^r \mathrm{var}(Y_i)$. Then,
$$\Delta \le 2\delta c_Y\sqrt{K} + \sup_{\varphi\in BL_1}|\Delta(\varphi)|,$$
for $\delta > 0$, where $c_Y := \big(K^{-1}B_r^{-2}\sum_{i=1}^r E|Y_i|^2\big)^{1/2} \asymp 1$ $\forall r$, and
$$\Delta(\varphi) := E\varphi\big(B_r^{-1}\tilde S_r + \delta Y_{r0}\big) - E\varphi\big(B_r^{-1}Z_r + \delta Y_{r0}\big).$$

Define $W_i(\alpha) := W_i\cos\alpha + Y_i\sin\alpha$ and $\tilde S_r(\alpha,i) := \sum_{1\le j\ne i\le r} W_j(\alpha)$, and let $g_i := B_r^{-1}\tilde S_r(\alpha,i) + \delta Y_{r0}$. By Lemma (11),
$$\Delta(\varphi) \lesssim \Big|\sum_{i=1}^r E\varphi'\big(B_r^{-1}\tilde S_r(\alpha,i) + B_r^{-1}W_i(\alpha) + \delta Y_{r0}\big)B_r^{-1}W_i'(\alpha)\Big| = \Big|\sum_{i=1}^r E\varphi'\big(g_i + B_r^{-1}W_i(\alpha)\big)B_r^{-1}W_i'(\alpha)\Big| = \Big|\sum_{i=1}^r(\Delta_{1,i} + \Delta_{2,i})\Big|,$$
where
$$\Delta_{1,i} = -E\varphi'\big(g_i + B_r^{-1}W_i(\alpha)\big)B_r^{-1}W_i\alpha_s, \qquad \Delta_{2,i} = E\varphi'\big(g_i + B_r^{-1}W_i(\alpha)\big)B_r^{-1}Y_i\alpha_c,$$
and $\alpha_c := \cos\alpha$, $\alpha_s := \sin\alpha$. To ease notation, we drop the $i$ subscript and only look at the $i$-th term. Following Be we take a Taylor expansion. In our case we need to notice that $E\varphi''(g)\big(B_r^{-1}Y\alpha_{cs}\big)^2$ is uniformly integrable by Lemma (12). Therefore, it is valid to use $B_r^{-1}\tilde S_r$ in place of $B_n^{-1}S_n'$ for any $\delta > 0$, and eq. 3.17 in Be is true in our case as well. Hence, defining $h := \tau B_r^{-1}W\alpha_c + B_r^{-1}Y\alpha_s$, $f := B_r^{-1}W\alpha_c + \tau B_r^{-1}Y\alpha_s$ and $\alpha_{cs} := \sin\alpha\cos\alpha$, with $\tau$ and $\tau_1$ independent $[0,1]$ uniform random variables (as in Be), the calculations in Be give the following bound:
$$\Delta_1 + \Delta_2 = -\frac{1}{2}E(1-\tau_1)\varphi'''(g+\tau_1 h)\big(B_r^{-1}W\big)h^2\alpha_{cs} + \frac{1}{2}E(1-\tau_1)\varphi'''(g+\tau_1 f)\big(B_r^{-1}Y\big)f^2\alpha_{cs} =: \Delta_3 + \Delta_4.$$

Now write
$$g = g_i \overset{w}{=} B_r^{-1}\sum_{1\le j\ne i\le r}(W_j\cos\alpha + Y_j\sin\alpha) + \delta B_r^{-1}\tilde Y_i + \delta\tilde Y_{r,-i},$$
where $\tilde Y_i$ and $\tilde Y_{r,-i}$ are two independent mean zero Gaussian random variables, independent of all other random variables, such that $\mathrm{var}(\tilde Y_i) = \mathrm{var}(Y_i)$ and $\mathrm{var}(\tilde Y_{r,-i}) = \mathrm{var}(Y_{r0}) - \mathrm{var}(B_r^{-1}\tilde Y_i)$. Define
$$v := B_r^{-1}\sum_{1\le j\ne i\le r}W_j\cos\alpha + \delta B_r^{-1}\tilde Y_i, \qquad \varrho Y_{r,-i} := \delta\tilde Y_{r,-i} + B_r^{-1}\sum_{1\le j\ne i\le r}Y_j\sin\alpha,$$
where $\varrho := (\delta^2 + \sin^2\alpha)^{1/2}$ and $\mathrm{var}(Y_{r,-i}) = \mathrm{var}(\tilde Y_{r,-i})$. Therefore, $g \overset{w}{=} v + \varrho Y_{r,-i}$, and given these minor changes, we can follow the end of the proof in Be to bound $\Delta_3$ and $\Delta_4$. The only difference is the extra term $\delta B_r^{-1}\tilde Y_i$ in $v$, but this causes no impediment to the argument. For example, the second display below eq. 3.30 in Be can be replaced by
$$B_r^{-1}\sum_{1\le j\ne i\le r}\tilde Y_j\cos\alpha + \delta B_r^{-1}\tilde Y_i + \varrho Y_{r,-i} \overset{w}{=} B_r^{-1}\sum_{1\le j\ne i\le r}\tilde Y_j\cos\alpha + \delta B_r^{-1}\tilde Y_i + \delta\tilde Y_{r,-i} + B_r^{-1}\sum_{1\le j\ne i\le r}Y_j\sin\alpha \overset{w}{=} B_r^{-1}\sum_{1\le j\ne i\le r}\tilde Y_j + \delta\tilde Y_{r0},$$
where $\tilde Y_j$, $\tilde Y_{r0}$ are Gaussian random vectors independent of each other and of all other variables, with $\tilde Y_{r0} \overset{w}{=} Y_{r0}$, where $Y_{r0}$ was defined above. $\square$

Remark 10 We could replace $W_i$ (as defined in the proof of the Theorem) with $W_i^* := W_i I_{\{|W_i|\le K^{1/2}M\}}$ for some $M < \infty$. In the proof, the additional error in (4) due to this substitution is
$$\sup_{\varphi\in BL_1}\Big|E\varphi\big(B_r^{-1}\tilde S_r\big) - E\varphi\Big(B_r^{-1}\sum_{i=1}^r W_i^*\Big)\Big| \le \kappa(M) := \max_{1\le i\le r}\big\{E|W_i|^2 I\{|W_i| > K^{1/2}M\}\big\}^{1/2},$$
using the Lipschitz condition on $\varphi$ and independence of $(W_i)_{i\in\{1,\dots,r\}}$. Then $E|W_i^*|^3 \le K^{1/2}M E|W_i^*|^2$, so that in the statement of the Theorem, $\beta_{p,i}$ can be replaced by $K^{1/2}M E|S_{H_i}/B_p|^2 + \kappa(M)$. By uniform integrability of $|S_{H_i}/B_p|^2/K$, as implied by the conditions of the Theorem, $\kappa(M)/K^{1/2}$ can be made arbitrarily small by a suitable choice of $M$.

2.1 Technical Lemmata

The following is derived in Bentkus (2003, eq. 1.3 and 1.5).

Lemma 11 Let $\varphi : \mathbb{R}^K \to \mathbb{R}$ be a differentiable function. Suppose $(X_i)_{i\in\mathbb{Z}}$ and $(Y_i)_{i\in\mathbb{Z}}$ are sequences of random variables with values in $\mathbb{R}^K$, independent of each other. Define $S := \sum_{i=1}^n X_i$, $Z := \sum_{i=1}^n Y_i$, $X_i(\alpha) := X_i\cos\alpha + Y_i\sin\alpha$, $S(\alpha) := S\cos\alpha + Z\sin\alpha$, and $S(\alpha,i) := \sum_{1\le j\ne i\le n} X_j(\alpha)$, where $\alpha$ is uniformly distributed in $[0,\pi/2]$ and independent of everything else, and $X_i'(\alpha) := -X_i\sin\alpha + Y_i\cos\alpha$ denotes the derivative in $\alpha$. Then,
$$E\varphi(Z) - E\varphi(S) = \frac{\pi}{2}\sum_{i=1}^n E\varphi'\big(S(\alpha,i) + X_i(\alpha)\big)X_i'(\alpha),$$
and if $EX_i = EY_i = 0$ and $\mathrm{var}(X_i) = \mathrm{var}(Y_i)$, then $EX_i(\alpha) = 0$ and $E\langle X_i(\alpha), x\rangle\langle X_i'(\alpha), y\rangle = 0$ $\forall x, y \in \mathbb{R}^K$.

Proof. See Bentkus (2003). $\square$
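The identity is easy to check by simulation; the sketch below (illustrative, with $n = 1$ so that $S(\alpha,i) = 0$, $\varphi = \cos$, Rademacher $X$ and standard normal $Y$) compares the two sides by Monte Carlo.

```python
# Sketch: Monte Carlo check of Lemma 11 in dimension K = 1 with a single summand.
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
X = rng.choice([-1.0, 1.0], size=N)               # Rademacher, var(X) = 1 = var(Y)
Y = rng.standard_normal(N)
alpha = rng.uniform(0.0, np.pi / 2, size=N)
Xa = X * np.cos(alpha) + Y * np.sin(alpha)        # X(alpha)
dXa = -X * np.sin(alpha) + Y * np.cos(alpha)      # X'(alpha)
lhs = np.cos(Y).mean() - np.cos(X).mean()         # E phi(Z) - E phi(S)
rhs = (np.pi / 2) * (-np.sin(Xa) * dXa).mean()    # phi'(x) = -sin(x)
print(lhs, rhs)  # both ≈ exp(-1/2) - cos(1) ≈ 0.0662 up to Monte Carlo error
```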


Lemma 12 Suppose $\varphi : \mathbb{R}^K \to \mathbb{R}$ is a bounded function which is twice differentiable and such that $\lim_{|x|\to\infty}|\varphi'(x)|\exp(-|x|^2) = 0$. Suppose $X$ is a random variable with values in $\mathbb{R}^K$ such that $X^2$ is uniformly integrable, and $Y$ is a Gaussian random variable with values in $\mathbb{R}^K$. Further suppose that $X$ and $Y$ are independent. Then, writing $E_Y$ for expectation with respect to $Y$, $E_Y\varphi''(X+Y)X^2$ is uniformly integrable.

Proof. Since $X^2$ is uniformly integrable, it is sufficient to check that $E_Y\varphi''(X+Y)$ is bounded. Integrating by parts twice, using the domination condition of the Lemma and the well known relation between derivatives of the Gaussian density and Hermite polynomials,
$$E_Y\varphi''(X+Y) = E_Y\varphi'(X+Y)Y = E_Y\varphi(X+Y)\big(Y^2-1\big) \le |\varphi|_\infty E\big|Y^2-1\big| < \infty. \qquad\square$$

Lemma 13 Suppose there exists a sequence $B_n$ such that $|B_n^{-1}S_n|^2$ is uniformly integrable. Then $B_n^2$ is regularly varying of index 1 (with respect to $n$) if $(X_i)_{i\in\mathbb{Z}}$ satisfies Condition (1).

Proof. Choose $B_n$ such that $B_n^2 \asymp E|S_n|^2/K$. Use the notation in Condition (1). Write $S_n = S_{n_1} + S_{n_2}$, $n_1 + n_2 = n$, and define $\zeta_{n_1} := B_{n_1}^{-1}S_{n_1}$, $\zeta_{n_2} := B_{n_2}^{-1}S_{n_2}$, $\zeta_n := B_n^{-1}S_n$. Uniform integrability implies that $B_n^{-1}S_n$ is associated to a sequence of tight measures; hence Prohorov's Theorem (e.g. van der Vaart and Wellner, 2000) implies that these measures are relatively compact. This holds for all subsequences as well. Therefore, along subsequences, $\zeta_n \overset{w}{\to} \zeta$, $\zeta_{n_1} \overset{w}{\to} \zeta_1$, $\zeta_{n_2} \overset{w}{\to} \zeta_2$. By uniform integrability, $E|B_n^{-1}S_{rq}|^2 = B_n^{-2}B_{rq}^2\,E|B_{rq}^{-1}S_{rq}|^2 \lesssim KB_n^{-2}B_{rq}^2$. Since $B_n \to \infty$ and $rq/n \to 0$, $|B_n^{-1}S_{rq}| \overset{p}{\to} 0$ by Chebyshev's inequality. Therefore, we can peel off the small blocks in the sums $S_n$, $S_{n_1}$ and $S_{n_2}$, and Condition (1) applies. Using an almost sure construction, by Condition (1) we have random variables $\xi_{n_1}$, $\xi_{n_2}$ independent of each other and such that $\xi_{n_1} \overset{w}{\sim} \zeta_{n_1} + o(1)$, $\xi_{n_2} \overset{w}{\sim} \zeta_{n_2} + o(1)$. Using the same arguments as in the sufficiency proof of Theorem 1 in Grin' (2003), we have that $\big(B_{n_1}^2 + B_{n_2}^2\big)B_{n_1+n_2}^{-2} \to 1$ along any subsequences $n_1, n_2$. Therefore $B_n^2$ must be regularly varying of index 1. $\square$

Acknowledgement 14 I would like to thank Marco Scarsini for invaluable discussions related to the copula function.

References

[1] Andrews, D.W.K. (1984) Non-strong Mixing Autoregressive Processes. Journal of Applied Probability 21, 930-934.

[2] Bentkus, V. (2003) On Normal Approximations, Approximations of Semigroups of Operators, and Approximations by Accompanying Laws. Available at: http://www.mathematik.uni-bielefeld.de/fgweb/Preprints/fg03035.pdf.

[3] Billingsley, P. (1968) Convergence of Probability Measures. New York: Wiley.

[4] Bradley, R.C. (1986) Basic Properties of Strong Mixing Conditions. In: Dependence in Probability and Statistics, Progress in Probability and Statistics 11, 165-192. Boston: Birkhäuser.

[5] Dedecker, J. and P. Doukhan (2003) A New Covariance Inequality and Applications. Stochastic Processes and their Applications 106, 63-80.

[6] Doukhan, P. and S. Louhichi (1999) A New Weak Dependence Condition and Applications to Moment Inequalities. Stochastic Processes and their Applications 84, 313-342.

[7] Dudley, R.M. (2002) Real Analysis and Probability. Cambridge: Cambridge University Press.

[8] Grin', A.G. (2003) On the Minimal Condition of Weak Dependency in the Central Limit Theorem for Stationary Sequences. Theory of Probability and its Applications 47, 506-510.

[9] Joe, H. (1997) Multivariate Models and Dependence Concepts. London: Chapman and Hall.

[10] Rio, E. (2000) Théorie Asymptotique des Processus Aléatoires Faiblement Dépendants. Paris: Springer.

[11] Rüschendorf, L. and V. de Valk (1993) On Regression Representation of Stochastic Processes. Stochastic Processes and their Applications 46, 183-198.

[12] Sancetta, A. (2004) Decoupling and Convergence to Independence with Applications to Functional Limit Theorems. Submitted.

[13] Sklar, A. (1973) Random Variables, Joint Distribution Functions, and Copulas. Kybernetika 9, 449-460.

[14] Van der Vaart, A.W. and J.A. Wellner (2000) Weak Convergence and Empirical Processes. Springer Series in Statistics. New York: Springer.
