Acta Math Vietnam (2015) 40:205–230 DOI 10.1007/s40306-015-0141-0

Stein Meets Malliavin in Normal Approximation

Louis H. Y. Chen

Received: 1 March 2015 / Revised: 6 May 2015 / Accepted: 18 May 2015 / Published online: 1 July 2015 © Institute of Mathematics, Vietnam Academy of Science and Technology (VAST) and Springer Science+Business Media Singapore 2015

Abstract Stein's method is a method of probability approximation which hinges on the solution of a functional equation. For normal approximation, the functional equation is a first-order differential equation. Malliavin calculus is an infinite-dimensional differential calculus whose operators act on functionals of general Gaussian processes. Nourdin and Peccati (Probab. Theory Relat. Fields 145(1-2), 75-118, 2009) established a fundamental connection between Stein's method for normal approximation and Malliavin calculus through integration by parts. This connection is exploited to obtain error bounds in total variation in central limit theorems for functionals of general Gaussian processes. Of particular interest is the fourth moment theorem, which provides error bounds of the order $\sqrt{E(F_n^4) - 3}$ in the central limit theorem for elements $\{F_n\}_{n \ge 1}$ of Wiener chaos of any fixed order such that $E(F_n^2) = 1$. This paper is an exposition of the work of Nourdin and Peccati with a brief introduction to Stein's method and Malliavin calculus. It is based on a lecture delivered at the Annual Meeting of the Vietnam Institute for Advanced Study in Mathematics in July 2014.

Keywords Normal approximation · Stein's method · Malliavin calculus · Berry-Esseen theorem · Multiple Wiener-Itô integrals · Wiener chaos · Functionals of Gaussian process · Fourth moment theorem · Breuer-Major theorem · Fractional Brownian motion

Mathematics Subject Classification (2010) 60F05 · 60G15 · 60H05 · 60H07

Lecture at the Annual Meeting 2014 of the Vietnam Institute for Advanced Study in Mathematics.

Louis H. Y. Chen, [email protected]
National University of Singapore, 10 Lower Kent Ridge Road, Singapore 119076, Singapore

1 Introduction

Stein's method was invented by Charles Stein in the 1960s when he used his own approach in class to prove a combinatorial central limit theorem of Wald and Wolfowitz [40] and of


Hoeffding [22]. Malliavin calculus was developed by Paul Malliavin [25] in 1976 to provide a probabilistic proof of the Hörmander criterion [23] of hypoellipticity. Although the initial goals of Stein's method and Malliavin calculus are different, they are both built on some integration by parts techniques. This connection was exploited by Nourdin and Peccati [28] to develop a theory of normal approximation on infinite-dimensional Gaussian spaces. They were motivated by a remarkable discovery of Nualart and Peccati [34], who proved that a sequence of random variables in a Wiener chaos of a fixed order converges in distribution to a Gaussian random variable if and only if their second and fourth moments converge to the corresponding moments of the limiting random variable. By combining Stein's method and Malliavin calculus, Nourdin and Peccati [28] obtained a general total variation bound in the normal approximation for functionals of Gaussian processes. They also proved that for $\{F_n\}$ in a Wiener chaos of fixed order such that $E(F_n^2) = 1$, the error bound is of the order $\sqrt{E(F_n^4) - 3}$, thus providing an elegant rate of convergence for the remarkable result of Nualart and Peccati [34]. We call this result of Nourdin and Peccati [28] the fourth moment theorem.

The work of Nourdin and Peccati [28] has added a new dimension to Stein's method. Their approach of combining Stein's method with Malliavin calculus has led to improvements and refinements of many results in probability theory, such as the Breuer-Major theorem [7]. More recently, this approach has been successfully used to obtain central limit theorems in stochastic geometry, stochastic calculus, statistical physics, and for zeros of random polynomials. It has also been extended to different settings, as in non-commutative probability and Poisson chaos.
Of particular interest is the connection between the Nourdin-Peccati analysis and information theory, which was recently revealed in Ledoux, Nourdin and Peccati [24] and in Nourdin, Peccati and Swan [32].

This paper is an exposition on the connection between Stein's method and Malliavin calculus, and on how this connection is exploited to obtain a general error bound in the normal approximation for functionals of Gaussian processes, leading to the proof of the fourth moment theorem with some applications. It is an expanded version of the first four sections and of part of Section 5 of Chen and Poly [13], with most parts rewritten and new subsections added.

2 Stein's Method

2.1 A General Framework

Stein's method is a method of probability approximation introduced by Charles Stein [38] in 1972. It does not involve Fourier analysis but hinges on the solution of a functional equation. Although Stein's 1972 paper was on normal approximation, his ideas were general and applicable to other probability approximations.

In a nutshell, Stein's method can be described as follows. Let W and Z be random elements taking values in a space S, and let X and Y be some classes of real-valued functions defined on S. In approximating the distribution L(W) of W by the distribution L(Z) of Z, we write $Eh(W) - Eh(Z) = E L f_h(W)$ for a test function $h \in Y$, where L is a linear operator (Stein operator) from X into Y and $f_h \in X$ is a solution of the equation

$Lf = h - Eh(Z)$  (Stein equation). (2.1)


The error $E L f_h(W)$ can then be bounded by studying the solution $f_h$ and exploiting the probabilistic properties of W. The operator L characterizes L(Z) in the sense that L(W) = L(Z) if and only if, for a sufficiently large class of functions f, we have

$E L f(W) = 0$  (Stein identity). (2.2)

In normal approximation, where L(Z) is the standard normal distribution, the operator used by Stein [38] is given by $Lf(w) = f'(w) - wf(w)$ for $w \in \mathbb{R}$, and in Poisson approximation, where L(Z) is the Poisson distribution with mean $\lambda > 0$, the operator L used by Chen [10] is given by $Lf(w) = \lambda f(w+1) - wf(w)$ for $w \in \mathbb{Z}_+$. However, the operator L is not unique even for the same approximating distribution but depends on the problem at hand. For example, for normal approximation, L can also be taken to be the generator of the Ornstein-Uhlenbeck process, that is, $Lf(w) = f''(w) - wf'(w)$, and for Poisson approximation, L can be taken to be the generator of an immigration-death process, that is, $Lf(w) = \lambda[f(w+1) - f(w)] + w[f(w-1) - f(w)]$. This generator approach, which is due to Barbour [2], allows extensions to multivariate and process settings. Indeed, for multivariate normal approximation, $Lf(w) = \Delta f(w) - w \cdot \nabla f(w)$, where f is defined on the Euclidean space; see Barbour [3] and Götze [21].

Examples of expository articles and books on Stein's method for normal, Poisson and other probability approximations are Arratia, Goldstein and Gordon [1], Chatterjee, Diaconis and Meckes [9], Barbour and Chen [4], Barbour, Holst and Janson [5], Chen, Goldstein and Shao [12], Chen and Röllin [14], Diaconis and Holmes [19], and Ross [37].
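As a quick numerical sanity check (our illustration, not from the paper), the characterizing identities $ELf(W) = 0$ for the normal and Poisson operators can be tested directly; the test functions (tanh and $k \mapsto 1/(k+1)$), the uniform comparison law, and the Poisson mean $\lambda = 2$ are arbitrary choices.

```python
import math

# Quadrature check of the Stein identities E L f(W) = 0 from Section 2.1,
# using plain trapezoid sums and a truncated Poisson series.

def trapz(fn, lo, hi, n=20000):
    h = (hi - lo) / n
    s = 0.5 * (fn(lo) + fn(hi))
    for i in range(1, n):
        s += fn(lo + i * h)
    return s * h

f = math.tanh                                  # bounded test function
fp = lambda w: 1.0 - math.tanh(w) ** 2         # its derivative

# Normal case: E[f'(Z) - Z f(Z)] = 0 for Z ~ N(0,1).
phi = lambda w: math.exp(-w * w / 2.0) / math.sqrt(2.0 * math.pi)
normal_val = trapz(lambda w: (fp(w) - w * f(w)) * phi(w), -10.0, 10.0)

# Non-normal case: the same functional is visibly non-zero for a
# standardized uniform W on [-sqrt(3), sqrt(3)].
a = math.sqrt(3.0)
unif_val = trapz(lambda w: (fp(w) - w * f(w)) / (2.0 * a), -a, a)

# Poisson case: E[lam g(N+1) - N g(N)] = 0 for N ~ Poisson(lam).
lam, g = 2.0, (lambda k: 1.0 / (k + 1.0))
pois_val = sum((lam * g(k + 1) - k * g(k)) * math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(80))

print(normal_val, unif_val, pois_val)
```

The normal and Poisson values vanish (up to quadrature and truncation error), while the uniform value does not, in line with the characterization property (2.2).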

2.2 Normal Approximation

In his 1986 monograph [39], Stein proved the following characterization of the normal distribution.

Proposition 2.1 The following are equivalent.

(i) $W \sim N(0, 1)$;
(ii) $E[f'(W) - Wf(W)] = 0$ for all $f \in C_B^1$.

Proof By integration by parts, (i) implies (ii). If (ii) holds, solve

$f'(w) - wf(w) = h(w) - Eh(Z),$ (2.3)

where $h \in C_B$ and $Z \sim N(0, 1)$. Its solution $f_h$ is given by

$f_h(w) = -e^{w^2/2} \int_w^\infty e^{-t^2/2}[h(t) - Eh(Z)]\,dt = e^{w^2/2} \int_{-\infty}^w e^{-t^2/2}[h(t) - Eh(Z)]\,dt.$ (2.4)

Using $\int_w^\infty e^{-t^2/2}\,dt \le w^{-1} e^{-w^2/2}$ for $w > 0$, we can show that $f_h \in C_B^1$ with $\|f_h\|_\infty \le \sqrt{2\pi}\,e\,\|h\|_\infty$ and $\|f_h'\|_\infty \le 4\|h\|_\infty$. Substituting $f_h$ for f in (ii) leads to $Eh(W) = Eh(Z)$ for $h \in C_B$. This proves (i).
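The solution formula (2.4) can also be evaluated numerically. The sketch below (our construction, with $h = \cos$ as an arbitrary bounded test function, so $Eh(Z) = e^{-1/2}$) approximates $f_h$ by quadrature and checks that it satisfies the Stein equation (2.3) pointwise.

```python
import math

# Evaluate the solution (2.4) of the Stein equation (2.3) for h = cos and
# verify f'(w) - w f(w) = h(w) - E h(Z) at a few points.

h = math.cos
Eh = math.exp(-0.5)   # E cos(Z) for Z ~ N(0,1)

def f_h(w, lo=-12.0, n=40000):
    # trapezoid rule for e^{w^2/2} * int_{lo}^{w} e^{-t^2/2} (h(t) - Eh) dt
    dt = (w - lo) / n
    s = 0.5 * (math.exp(-lo * lo / 2) * (h(lo) - Eh)
               + math.exp(-w * w / 2) * (h(w) - Eh))
    for i in range(1, n):
        t = lo + i * dt
        s += math.exp(-t * t / 2) * (h(t) - Eh)
    return math.exp(w * w / 2) * s * dt

def residual(w, eps=1e-3):
    # central-difference derivative minus the right side of (2.3)
    fprime = (f_h(w + eps) - f_h(w - eps)) / (2 * eps)
    return fprime - w * f_h(w) - (h(w) - Eh)

print([residual(w) for w in (-1.5, 0.3, 1.0)])
```

All residuals are numerically zero, as the closed form (2.4) predicts.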


The proof of Proposition 2.1 shows that the Stein operator L for normal approximation, which is given by $Lf(w) = f'(w) - wf(w)$, is obtained by integration by parts. Assume $EW = 0$ and $\mathrm{Var}(W) = B^2 > 0$. By Fubini's theorem, for f absolutely continuous for which the expectations exist, we have

$EWf(W) = \int_{-\infty}^{\infty} f'(x) E[W I(W > x)]\,dx = B^2 E f'(W^*),$

where $L(W^*)$ is absolutely continuous with density given by $B^{-2} E[W I(W > x)]$. The distribution $L(W^*)$ is called W-zero-biased. The notion of zero-biased distribution was introduced by Goldstein and Reinert [20].

Now assume $\mathrm{Var}(W) = 1$. By Proposition 2.1, $L(W) = N(0, 1)$ if and only if $L(W^*) = L(W)$. Heuristically, this suggests that $L(W^*)$ is "close" to $L(W)$ if and only if $L(W)$ is "close" to $N(0, 1)$. It is therefore natural to ask whether we can couple $W^*$ with W in such a way that $E|W^* - W|$ provides a good measure of the distance between $L(W)$ and $N(0, 1)$.

There are three distances commonly used for normal approximation.

Definition 2.2 Let $Z \sim N(0, 1)$, $F(x) = P(W \le x)$ and $\Phi(x) = P(Z \le x)$.

(i) The Wasserstein distance between $L(W)$ and $N(0, 1)$ is defined by

$d_{\mathrm{W}}(L(W), N(0, 1)) := \sup_{|h(x) - h(y)| \le |x - y|} |Eh(W) - Eh(Z)|.$

(ii) The Kolmogorov distance between $L(W)$ and $N(0, 1)$ is defined by

$d_{\mathrm{K}}(L(W), N(0, 1)) := \sup_{x \in \mathbb{R}} |F(x) - \Phi(x)|.$

(iii) The total variation distance between $L(W)$ and $N(0, 1)$ is defined by

$d_{\mathrm{TV}}(L(W), N(0, 1)) := \sup_{A \in \mathcal{B}(\mathbb{R})} |P(W \in A) - P(Z \in A)| = \frac{1}{2} \sup_{|h| \le 1} |Eh(W) - Eh(Z)|.$

Note that $Eh(W) - Eh(Z) = E(h(W) - h(0)) - E(h(Z) - h(0))$. So

$d_{\mathrm{W}}(L(W), N(0, 1)) = \sup_{|h(x) - h(y)| \le |x - y|,\ h(0) = 0} |Eh(W) - Eh(Z)|.$

Also note that $|h(w)| \le |h(w) - h(0)| + |h(0)| \le |w| + |h(0)|$, so $|h(x) - h(y)| \le |x - y|$ implies that h grows at most linearly. Since the $C^1$ functions h with $\|h'\|_\infty \le 1$ are dense, in the sup norm, in the class of functions h with $|h(x) - h(y)| \le |x - y|$, we also have

$d_{\mathrm{W}}(L(W), N(0, 1)) = \sup_{h \in C^1,\ \|h'\|_\infty \le 1} |Eh(W) - Eh(Z)|.$

By an application of Lusin's theorem,

$\sup_{|h| \le 1} |Eh(W) - Eh(Z)| = \sup_{h \in C,\ |h| \le 1} |Eh(W) - Eh(Z)|.$

Therefore,

$d_{\mathrm{TV}}(L(W), N(0, 1)) = \frac{1}{2} \sup_{h \in C,\ |h| \le 1} |Eh(W) - Eh(Z)|.$

The proposition below concerns the boundedness properties of the solution $f_h$, given by (2.4), of the Stein equation (2.3), for h either bounded or absolutely continuous with bounded $h'$. These boundedness properties are crucial for bounding the distances defined in Definition 2.2.

Proposition 2.3 Let $f_h$ be the unique solution, given by (2.4), of the Stein equation (2.3), where h is either bounded or absolutely continuous.

1. If h is bounded, then

$\|f_h\|_\infty \le \sqrt{2\pi}\,\|h\|_\infty, \quad \|f_h'\|_\infty \le 4\|h\|_\infty.$ (2.5)

2. If h is absolutely continuous with bounded $h'$, then

$\|f_h\|_\infty \le 2\|h'\|_\infty, \quad \|f_h'\|_\infty \le \sqrt{2/\pi}\,\|h'\|_\infty, \quad \|f_h''\|_\infty \le 2\|h'\|_\infty.$ (2.6)

3. If $h = I_{(-\infty, x]}$ where $x \in \mathbb{R}$, then, writing $f_h$ as $f_x$,

$0 < f_x(w) \le \sqrt{2\pi}/4, \quad |wf_x(w)| \le 1, \quad |f_x'(w)| \le 1,$ (2.7)

and for all $w, u, v \in \mathbb{R}$,

$|f_x'(w) - f_x'(v)| \le 1,$ (2.8)

$|(w + u)f_x(w + u) - (w + v)f_x(w + v)| \le \big(|w| + \sqrt{2\pi}/4\big)(|u| + |v|).$ (2.9)

The bounds in the proposition and their proofs can be found in Lemmas 2.3 and 2.4 of Chen, Goldstein and Shao [12].

In the case where $W^*$ can be coupled with W, that is, there is a zero-bias coupling, we have the following result.

Theorem 2.4 Assume that $EW = 0$ and $\mathrm{Var}(W) = 1$ and that $W^*$ and W are defined on the same probability space. Then

$d_{\mathrm{W}}(L(W), N(0, 1)) \le 2E|W^* - W|.$ (2.10)

Proof Let h be absolutely continuous with $\|h'\|_\infty \le 1$. Then by the definition of the zero-biased distribution and by (2.6),

$|Eh(W) - Eh(Z)| = |E[f_h'(W) - Wf_h(W)]| = |E[f_h'(W) - f_h'(W^*)]| = \Big|E\int_0^{W^* - W} f_h''(W + t)\,dt\Big| \le \|f_h''\|_\infty E|W^* - W| \le 2\|h'\|_\infty E|W^* - W| \le 2E|W^* - W|.$

This proves the theorem.

Theorem 2.4 shows that $E|W^* - W|$ provides an upper bound on the Wasserstein distance. We now construct a zero-bias coupling in the case where W is a sum of independent random variables and show that $E|W^* - W|$ indeed gives an optimal bound. Let $X_1, \ldots, X_n$ be independent random variables with $EX_i = 0$, $\mathrm{Var}(X_i) = \sigma_i^2 > 0$ and $E|X_i|^3 < \infty$. Let $W = \sum_{i=1}^n X_i$ and $W^{(i)} = W - X_i$. Assume $\mathrm{Var}(W) = 1$ (so that $\sum_{i=1}^n \sigma_i^2 = 1$). Define

(i) I to be such that $P(I = i) = \sigma_i^2$ for $i = 1, \cdots, n$;
(ii) $X_i^*$ to be $X_i$-zero-biased, $i = 1, \cdots, n$;
(iii) $I, X_1^*, \cdots, X_n^*, X_1, \cdots, X_n$ to be independent.


Then for absolutely continuous f such that $\|f\|_\infty < \infty$ and $\|f'\|_\infty < \infty$,

$EWf(W) = \sum_{i=1}^n EX_i f(W^{(i)} + X_i) = \sum_{i=1}^n \sigma_i^2 Ef'(W^{(i)} + X_i^*) = Ef'(W^{(I)} + X_I^*) = Ef'(W^*).$ (2.11)

So $W^*$ is coupled with W and $W^* - W = X_I^* - X_I$. Note that the density of $X_i^*$ is given by $\sigma_i^{-2} EX_i I(X_i > x)$. Straightforward calculations yield $E|X_I| \le \sum_{i=1}^n E|X_i|^3$ and $E|X_I^*| \le \frac{1}{2} \sum_{i=1}^n E|X_i|^3$. Therefore,

$E|W^* - W| = E|X_I^* - X_I| \le E|X_I^*| + E|X_I| \le \frac{3}{2} \sum_{i=1}^n E|X_i|^3.$ (2.12)

We immediately have the following corollary of Theorem 2.4.

Corollary 2.5 Let $X_1, \cdots, X_n$ be independent with $EX_i = 0$, $\mathrm{Var}(X_i) = \sigma_i^2$, and $E|X_i|^3 < \infty$, $i = 1, \cdots, n$. Let $W = \sum_{i=1}^n X_i$ and assume that $\mathrm{Var}(W) = 1$. Then

$d_{\mathrm{W}}(L(W), N(0, 1)) \le 3 \sum_{i=1}^n E|X_i|^3.$ (2.13)
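As a simulation sketch of this coupling (our toy setup, not from the paper), take $X_i = \pm 1/\sqrt{n}$ with equal probability. For $X = \pm a$, the zero-bias density $a^{-2}E[X I(X > x)]$ is uniform on $[-a, a]$, so $X^*$ can be sampled directly; here $E|W^* - W| = 1/\sqrt{n}$, within the bound $(3/2)\sum E|X_i|^3 = 1.5/\sqrt{n}$ of (2.12), and (2.13) gives $d_{\mathrm{W}} \le 0.75$ for $n = 16$.

```python
import math, random
from statistics import NormalDist

# Zero-bias coupling (2.11) for W a standardized sum of n Rademacher
# variables X_i = +/- 1/sqrt(n); for such X_i the zero-biased X_i* is
# uniform on [-1/sqrt(n), 1/sqrt(n)].

random.seed(7)
n, m = 16, 20000
a = 1.0 / math.sqrt(n)

def sample():
    xs = [random.choice((-a, a)) for _ in range(n)]
    w = sum(xs)
    i = random.randrange(n)                  # P(I = i) = sigma_i^2 = 1/n here
    w_star = w - xs[i] + random.uniform(-a, a)
    return w, w_star

pairs = [sample() for _ in range(m)]
mean_gap = sum(abs(ws - w) for w, ws in pairs) / m   # estimates E|W* - W|

# Empirical Wasserstein-1 distance between W and N(0,1) via quantiles.
z = NormalDist()
ws = sorted(w for w, _ in pairs)
d_w = sum(abs(ws[i] - z.inv_cdf((i + 0.5) / m)) for i in range(m)) / m

print(mean_gap, 1.5 / math.sqrt(n), d_w)
```

The empirical Wasserstein distance is well under the Corollary 2.5 bound, and the coupling gap matches its exact value $1/\sqrt{n} = 0.25$.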

It is much more difficult to obtain an optimal bound on the Kolmogorov distance between L(W) and N(0, 1). Such a bound can be obtained by induction or by the use of a concentration inequality. For induction, see Bolthausen [6]. For the use of a concentration inequality, see Chen [11] and Chen and Shao [16] for sums of independent random variables, and Chen and Shao [17] for sums of locally dependent random variables. See also Chen, Goldstein and Shao [12]. For sums of independent random variables, Chen and Shao [16] obtained a bound of $4.1 \sum E|X_i|^3$ on the Kolmogorov distance. In the next subsection, we will give a proof of an optimal bound on the Kolmogorov distance using the concentration inequality approach.

In general, it is difficult to construct zero-bias couplings such that $E|W^* - W|$ is small for normal approximation. However, by other methods, one can construct an equation of the form

$EWf(W) = E[T_1 f'(W + T_2)],$ (2.14)

where $T_1$ and $T_2$ are some random variables defined on the same probability space as W, and f is an absolutely continuous function for which the expectations in (2.14) exist. Heuristically, in view of Proposition 2.1, L(W) is "close" to N(0, 1) if $T_1$ is "close" to 1 and $T_2$ is "close" to 0. Examples of W satisfying this equation include sums of locally dependent random variables as considered in Chen and Shao [17] and exchangeable pairs as defined in Stein [39]. More generally, a random variable W satisfies (2.14) if there is a Stein coupling $(W, W', G)$, where $W, W', G$ are defined on a common probability space such that $EWf(W) = E[Gf(W') - Gf(W)]$ for absolutely continuous functions f for which the expectations exist (see Chen and Röllin [15]). In all cases, it is assumed that $EW = 0$ and $\mathrm{Var}(W) = 1$. Letting $f(w) = w$, we have $1 = EW^2 = ET_1$. The case of zero-bias coupling corresponds to $T_1 = 1$.

As an illustration, let $(W, W')$ be an exchangeable pair of random variables, that is, $(W, W')$ has the same distribution as $(W', W)$. Assume that $EW = 0$ and $\mathrm{Var}(W) = 1$




and that $E[W' - W \mid W] = -\lambda W$ for some $\lambda > 0$. Since the function $(w, w') \mapsto (w' - w)(f(w') + f(w))$ is anti-symmetric, the exchangeability of $(W, W')$ implies

$E[(W' - W)(f(W') + f(W))] = 0.$

From this, we obtain

$EWf(W) = \frac{1}{2\lambda} E[(W' - W)(f(W') - f(W))] = \frac{1}{2\lambda} E\Big[(W' - W)^2 \int_0^1 f'(W + (W' - W)t)\,dt\Big] = E[T_1 f'(W + T_2)],$

where $T_1 = \frac{1}{2\lambda}(W' - W)^2$, $T_2 = (W' - W)U$, and U is uniformly distributed on [0, 1] and independent of $(W, W')$.

The notion of an exchangeable pair is central to Stein's method and has been used extensively in the literature. Here is a simple example of an exchangeable pair. Let $X_1, \cdots, X_n$ be independent random variables such that $EX_i = 0$ and $\mathrm{Var}(W) = 1$, where $W = \sum_{i=1}^n X_i$. Let $X_1', \cdots, X_n'$ be an independent copy of $X_1, \cdots, X_n$ and let $W' = W - X_I + X_I'$, where I is uniformly distributed on $\{1, \cdots, n\}$ and independent of $\{X_i, X_i', 1 \le i \le n\}$. Then $(W, W')$ is an exchangeable pair and $E[W' - W \mid W] = -\frac{1}{n}W$.

Assume that $EW = 0$ and $\mathrm{Var}(W) = 1$. From (2.3) and (2.14),

$Eh(W) - Eh(Z) = E[f_h'(W) - T_1 f_h'(W + T_2)] = E[T_1(f_h'(W) - f_h'(W + T_2))] + E[(1 - T_1)f_h'(W)].$ (2.15)

Different techniques have been developed for bounding the error terms on the right-hand side of (2.15). Apart from zero-bias coupling, which corresponds to $T_1 = 1$, we will focus on the case where $T_2 = 0$. This is the case if W is a functional of independent Gaussian random variables as considered by Chatterjee [8] or a functional of Gaussian random fields as considered by Nourdin and Peccati [28]. In this case, (2.15) becomes

$Eh(W) - Eh(Z) = E[(1 - T_1)f_h'(W)] = E[(1 - E[T_1 \mid W])f_h'(W)].$

Let h be such that $|h| \le 1$. Then, by Proposition 2.3, we obtain the following bound on the total variation distance between L(W) and N(0, 1):

$d_{\mathrm{TV}}(L(W), N(0, 1)) := \frac{1}{2} \sup_{|h| \le 1} |Eh(W) - Eh(Z)| \le \frac{1}{2}\|f_h'\|_\infty E|1 - E[T_1 \mid W]| \le 2\|h\|_\infty E|1 - E[T_1 \mid W]| \le 2\sqrt{\mathrm{Var}(E[T_1 \mid W])},$

where for the last inequality it is assumed that $E[T_1 \mid W]$ is square-integrable. While Chatterjee [8] developed second-order Poincaré inequalities to bound $2\sqrt{\mathrm{Var}(E[T_1 \mid W])}$, Nourdin and Peccati [28] deployed Malliavin calculus. In Sections 3 and 4, we will discuss how Malliavin calculus is used to bound $2\sqrt{\mathrm{Var}(E[T_1 \mid W])}$.
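To make the bound concrete, here is a simulation sketch (ours, not from the paper). For $W = \sum_{i=1}^n (N_i^2 - 1)/\sqrt{2n}$ with $N_i$ i.i.d. standard normal, a standard second-chaos computation of the type carried out in Section 4 gives $T_1 = \langle DF, -DL^{-1}F\rangle = \sum_i N_i^2/n$; we treat this formula as an assumption and check the identity $EWf(W) = E[T_1 f'(W)]$ empirically with $f(w) = w^2$, then report $2\sqrt{\mathrm{Var}(T_1)} \approx 2\sqrt{2/n}$, which dominates $2\sqrt{\mathrm{Var}(E[T_1 \mid W])}$.

```python
import math, random

# W = sum (N_i^2 - 1)/sqrt(2n); assumed T1 = sum N_i^2 / n (Section 4 type
# computation).  Check E[T1] = 1 and E[W f(W)] = E[T1 f'(W)] for f(w) = w^2.

random.seed(11)
n, m = 50, 40000
W, T1 = [], []
for _ in range(m):
    ns = [random.gauss(0.0, 1.0) for _ in range(n)]
    W.append(sum(x * x - 1.0 for x in ns) / math.sqrt(2.0 * n))
    T1.append(sum(x * x for x in ns) / n)

mean_T1 = sum(T1) / m                                  # should be near 1
lhs = sum(w ** 3 for w in W) / m                       # E[W f(W)], f(w) = w^2
rhs = sum(2.0 * t * w for t, w in zip(T1, W)) / m      # E[T1 f'(W)]
var_T1 = sum((t - mean_T1) ** 2 for t in T1) / m       # Var(T1) = 2/n here
print(mean_T1, lhs, rhs, 2.0 * math.sqrt(var_T1))
```

The printed bound $2\sqrt{2/n} \approx 0.4$ shrinks at rate $n^{-1/2}$, consistent with W approaching normality as n grows.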

2.3 Berry-Esseen Theorem

In this subsection, we will give a proof of the Berry-Esseen theorem for sums of independent random variables using zero-bias coupling and a concentration inequality.


Theorem 2.6 (Berry-Esseen) Let $X_1, \cdots, X_n$ be independent random variables with $EX_i = 0$, $\mathrm{Var}(X_i) = \sigma_i^2$, and $E|X_i|^3 = \gamma_i < \infty$. Let $W = \sum_{i=1}^n X_i$ and assume $\mathrm{Var}(W) = 1$. Then

$d_{\mathrm{K}}(L(W), N(0, 1)) \le 7.1 \sum_{i=1}^n \gamma_i.$ (2.16)

We first prove two propositions using the same notation as in Theorem 2.6. Let $W^*$ be W-zero-biased and assume that it is coupled with W as given in (2.11). Let $\Phi$ denote the distribution function of N(0, 1).

Proposition 2.7 For $x \in \mathbb{R}$,

$|P(W^* \le x) - \Phi(x)| \le 2.44 \sum_{i=1}^n \gamma_i.$ (2.17)

Proof Let $f_x$ be the unique bounded solution of the Stein equation

$f'(w) - wf(w) = I(w \le x) - \Phi(x),$ (2.18)

where $x \in \mathbb{R}$. The solution $f_x$ is given by (2.4) with $h(w) = I(w \le x)$. From this equation and by (2.9),

$|P(W^* \le x) - \Phi(x)| = |E[f_x'(W^*) - W^* f_x(W^*)]| = |E[(W^{(I)} + X_I)f_x(W^{(I)} + X_I) - (W^{(I)} + X_I^*)f_x(W^{(I)} + X_I^*)]| \le E\Big[\Big(|W^{(I)}| + \frac{\sqrt{2\pi}}{4}\Big)(|X_I| + |X_I^*|)\Big] \le \Big(1 + \frac{\sqrt{2\pi}}{4}\Big)\frac{3}{2}\sum_{i=1}^n \gamma_i \le 2.44 \sum_{i=1}^n \gamma_i.$

This proves Proposition 2.7.

Next, we prove a concentration inequality.

Proposition 2.8 For $i = 1, \ldots, n$ and for $a \le b$, $a, b \in \mathbb{R}$, we have

$P(a \le W^{(i)} \le b) \le \frac{2\sqrt{2}}{3}(b - a) + \frac{4(\sqrt{2} + 1)}{3} \sum_{i=1}^n \gamma_i.$ (2.19)

Proof This proof is a slight variation of that of Lemma 3.1 in Chen, Goldstein and Shao [12]. Let $\delta > 0$ and let f be given by $f((a + b)/2) = 0$ and $f'(w) = I(a - \delta \le w \le b + \delta)$.


Then $|f| \le (b - a + 2\delta)/2$. Since $X_j$ is independent of $W^{(i)} - X_j$ for $j \ne i$, $X_i$ is independent of $W^{(i)}$, and since $EX_j = 0$ for $j = 1, \ldots, n$, we have

$EW^{(i)} f(W^{(i)}) - EX_i f(W^{(i)} - X_i)$
$= \sum_{j=1}^n EX_j [f(W^{(i)}) - f(W^{(i)} - X_j)]$
$= \sum_{j=1}^n EX_j \int_{-X_j}^0 f'(W^{(i)} + t)\,dt$
$= \sum_{j=1}^n EX_j \int_{-X_j}^0 I(a - \delta \le W^{(i)} + t \le b + \delta)\,dt$
$\ge \sum_{j=1}^n EX_j \int_{-X_j}^0 I(a - \delta \le W^{(i)} + t \le b + \delta) I(|t| \le \delta)\,dt$
$\ge E\Big[I(a \le W^{(i)} \le b) \sum_{j=1}^n X_j \int_{-X_j}^0 I(|t| \le \delta)\,dt\Big]$
$= E\Big[I(a \le W^{(i)} \le b) \sum_{j=1}^n |X_j| \min(|X_j|, \delta)\Big]$
$\ge P(a \le W^{(i)} \le b) \sum_{j=1}^n E|X_j| \min(|X_j|, \delta) - E\Big[I(a \le W^{(i)} \le b)\Big|\sum_{j=1}^n [|X_j| \min(|X_j|, \delta) - E|X_j| \min(|X_j|, \delta)]\Big|\Big]$
$= R_1 - R_2,$ (2.20)

where, for the inequalities in (2.20), we used the fact that

$X_j \int_{-X_j}^0 I(a - \delta \le W^{(i)} + t \le b + \delta)\,dt \ge 0 \quad \text{for } j = 1, \cdots, n.$

Using the inequality $\min(a, b) \ge a - \frac{a^2}{4b}$ for $a, b > 0$, we obtain

$R_1 \ge P(a \le W^{(i)} \le b)\Big\{\sum_{j=1}^n EX_j^2 - \frac{1}{4\delta}\sum_{j=1}^n E|X_j|^3\Big\} = P(a \le W^{(i)} \le b)\Big\{1 - \frac{1}{4\delta}\sum_{j=1}^n E|X_j|^3\Big\}.$ (2.21)


We also have

$R_2 \le E\Big|\sum_{j=1}^n [|X_j| \min(|X_j|, \delta) - E|X_j| \min(|X_j|, \delta)]\Big| \le \Big[\mathrm{Var}\Big(\sum_{j=1}^n |X_j| \min(|X_j|, \delta)\Big)\Big]^{1/2} \le \Big[\sum_{j=1}^n EX_j^2 \min(|X_j|, \delta)^2\Big]^{1/2} \le \delta\Big[\sum_{j=1}^n EX_j^2\Big]^{1/2} = \delta.$ (2.22)

Bounding the left-hand side of (2.20), we obtain

$EW^{(i)} f(W^{(i)}) - EX_i f(W^{(i)} - X_i) \le \frac{1}{2}(b - a + 2\delta)\big(E|W^{(i)}| + E|X_i|\big) \le \frac{1}{\sqrt{2}}(b - a + 2\delta)\big[(E|W^{(i)}|)^2 + (E|X_i|)^2\big]^{1/2} \le \frac{1}{\sqrt{2}}(b - a + 2\delta)\big[E(W^{(i)})^2 + EX_i^2\big]^{1/2} = \frac{1}{\sqrt{2}}(b - a + 2\delta).$ (2.23)

The proof of Proposition 2.8 is completed by letting $\delta = \sum_{j=1}^n E|X_j|^3$ and combining (2.20), (2.21), (2.22), and (2.23).

We now prove Theorem 2.6. By Proposition 2.7, we have

$|P(W \le x) - \Phi(x)| \le |P(W \le x) - P(W^{(I)} + X_I^* \le x)| + 2.44 \sum_{i=1}^n \gamma_i \le E I(x - X_I \vee X_I^* \le W^{(I)} \le x - X_I \wedge X_I^*) + 2.44 \sum_{i=1}^n \gamma_i.$ (2.24)

Using the independence between I and $\{X_i, X_i^*, 1 \le i \le n\}$, we have

$E I(x - X_I \vee X_I^* \le W^{(I)} \le x - X_I \wedge X_I^*) = \sum_{i=1}^n \sigma_i^2 E I(x - X_i \vee X_i^* \le W^{(i)} \le x - X_i \wedge X_i^*) = \sum_{i=1}^n \sigma_i^2 E P(x - X_i \vee X_i^* \le W^{(i)} \le x - X_i \wedge X_i^* \mid X_i, X_i^*).$


Since $W^{(i)}$ is independent of $(X_i, X_i^*)$ for $i = 1, \cdots, n$, it follows from (2.19) that

$E I(x - X_I \vee X_I^* \le W^{(I)} \le x - X_I \wedge X_I^*) \le \sum_{i=1}^n \sigma_i^2 E\Big[\frac{2\sqrt{2}}{3}(|X_i| + |X_i^*|)\Big] + \frac{4(\sqrt{2} + 1)}{3}\sum_{i=1}^n \gamma_i \le \Big(\sqrt{2} + \frac{4(\sqrt{2} + 1)}{3}\Big)\sum_{i=1}^n \gamma_i \le 4.65 \sum_{i=1}^n \gamma_i.$ (2.25)

The proof of Theorem 2.6 is completed by combining (2.24) and (2.25).
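Theorem 2.6 can be sanity-checked numerically (our illustration). For W a standardized sum of n symmetric $\pm 1$ variables, the Kolmogorov distance is computable exactly from the binomial distribution and can be compared with the bound $7.1\sum\gamma_i = 7.1/\sqrt{n}$; the choice $n = 25$ is arbitrary.

```python
import math
from statistics import NormalDist

# Exact d_K for W = (sum of n Rademacher variables)/sqrt(n) versus the
# Berry-Esseen bound 7.1 * sum E|X_i|^3 = 7.1/sqrt(n) of Theorem 2.6.

Phi = NormalDist().cdf
n = 25
gamma_sum = 1.0 / math.sqrt(n)

# W takes values (2k - n)/sqrt(n) with Binomial(n, 1/2) probabilities.
atoms = [(2 * k - n) / math.sqrt(n) for k in range(n + 1)]
pmf = [math.comb(n, k) * 0.5 ** n for k in range(n + 1)]

d_k, cdf = 0.0, 0.0
for x, p in zip(atoms, pmf):
    d_k = max(d_k, abs(cdf - Phi(x)))   # left limit at the atom
    cdf += p
    d_k = max(d_k, abs(cdf - Phi(x)))   # value at the atom

print(d_k, 7.1 * gamma_sum)
```

Since F is constant between atoms and $\Phi$ is monotone, the supremum over all x is attained at the atoms (from the left or the right), so the loop above computes $d_{\mathrm{K}}$ exactly.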

3 Malliavin Calculus

3.1 Preamble

In this paper, the work of Nourdin and Peccati will be presented in the context of the Gaussian process $X = \{\int_0^\infty f(t)\,dB_t : f \in L^2(\mathbb{R}_+)\}$, where $(B_t)_{t \in \mathbb{R}_+}$ is a standard Brownian motion on a complete probability space $(\Omega, \mathcal{F}, P)$, where $\mathcal{F}$ is generated by $(B_t)_{t \in \mathbb{R}_+}$, and $L^2(\mathbb{R}_+)$ is the separable Hilbert space of square-integrable real-valued functions with respect to the Lebesgue measure on $\mathbb{R}_+$. This Gaussian process is a centered Gaussian family of random variables with covariance given by

$E\Big[\int_0^\infty f(t)\,dB_t \int_0^\infty g(t)\,dB_t\Big] = \langle f, g \rangle_{L^2(\mathbb{R}_+)}.$

There will be no loss of generality, since problems of interest are of a distributional nature and through an isometry these problems can be transferred to X. More specifically, let $Y = \{Y(h) : h \in \mathfrak{H}\}$ be a centered Gaussian process over a real separable Hilbert space $\mathfrak{H}$ with covariance given by $E[Y(h_1)Y(h_2)] = \langle h_1, h_2 \rangle_{\mathfrak{H}}$. Let $\psi : \mathfrak{H} \to L^2(\mathbb{R}_+)$ be an isometry and let $f_1 = \psi(h_1)$ and $f_2 = \psi(h_2)$ for $h_1, h_2 \in \mathfrak{H}$. Then

$E\Big[\int_0^\infty f_1(t)\,dB_t \int_0^\infty f_2(t)\,dB_t\Big] = E[Y(h_1)Y(h_2)].$

This implies that L(X) = L(Y), and problems of a distributional nature on Y can be transferred to X. The material in this section can be found in Nourdin [27] and Nourdin and Peccati [29].

3.2 Multiple Wiener-Itô Integrals and Wiener Chaos

Let $B = (B_t)_{t \in \mathbb{R}_+}$ be a standard Brownian motion on a complete probability space $(\Omega, \mathcal{F}, P)$, where $\mathcal{F}$ is generated by $(B_t)_{t \in \mathbb{R}_+}$, and let $f \in L^2(\mathbb{R}_+^p)$, where p is a positive integer. We define

$I_p(f) = \sum_\sigma \int_0^\infty dB_{t_1} \int_0^{t_1} dB_{t_2} \cdots \int_0^{t_{p-1}} dB_{t_p}\, f(t_{\sigma(1)}, t_{\sigma(2)}, \ldots, t_{\sigma(p)}),$ (3.1)


where the sum is over all permutations $\sigma$ of $\{1, 2, \ldots, p\}$. The random variable $I_p(f)$ is called the pth multiple Wiener-Itô integral. The closed linear subspace $\mathcal{H}_p$ of $L^2(\Omega)$ generated by $I_p(f)$, $f \in L^2(\mathbb{R}_+^p)$, is called the pth Wiener chaos of B. We use the convention that $\mathcal{H}_0 = \mathbb{R}$.

If f is symmetric, that is, $f(t_1, \ldots, t_p) = f(t_{\sigma(1)}, \ldots, t_{\sigma(p)})$ for any permutation $\sigma$ of $\{1, \ldots, p\}$, then

$I_p(f) = p! \int_0^\infty dB_{t_1} \int_0^{t_1} dB_{t_2} \cdots \int_0^{t_{p-1}} dB_{t_p}\, f(t_1, t_2, \ldots, t_p).$

We define the symmetrization of $f \in L^2(\mathbb{R}_+^p)$ by

$\tilde{f}(t_1, \ldots, t_p) = \frac{1}{p!} \sum_\sigma f(t_{\sigma(1)}, \ldots, t_{\sigma(p)}),$ (3.2)

where the sum is over all permutations $\sigma$ of $\{1, \ldots, p\}$. Let $L_s^2(\mathbb{R}_+^p)$ be the closed subspace of $L^2(\mathbb{R}_+^p)$ of symmetric functions. By the triangle inequality, $\|\tilde{f}\|_{L^2(\mathbb{R}_+^p)} \le \|f\|_{L^2(\mathbb{R}_+^p)}$, so we see that $f \in L^2(\mathbb{R}_+^p)$ implies $\tilde{f} \in L_s^2(\mathbb{R}_+^p)$. The following properties of the stochastic integrals $I_p(\cdot)$ can be easily verified:

(i) $EI_p(f) = 0$ and $I_p(f) = I_p(\tilde{f})$ for all $f \in L^2(\mathbb{R}_+^p)$.

(ii) For all $f \in L^2(\mathbb{R}_+^p)$ and $g \in L^2(\mathbb{R}_+^q)$,

$E[I_p(f)I_q(g)] = \begin{cases} 0 & \text{for } p \ne q, \\ p!\langle \tilde{f}, \tilde{g} \rangle_{L^2(\mathbb{R}_+^p)} & \text{for } p = q. \end{cases}$ (3.3)

(iii) The mapping $f \mapsto I_p(f)$ from $L^2(\mathbb{R}_+^p)$ to $L^2(\Omega)$ is linear.

The multiple Wiener-Itô integrals are infinite-dimensional generalizations of the Hermite polynomials. The kth Hermite polynomial $H_k$ is defined by

$H_k(x) = (-1)^k e^{x^2/2} \frac{d^k}{dx^k} e^{-x^2/2}, \quad x \in \mathbb{R}.$

If $f \in L^2(\mathbb{R}_+)$ is such that $\|f\|_{L^2(\mathbb{R}_+)} = 1$, it can be shown that

$I_k(f^{\otimes k}) = H_k\Big(\int_0^\infty f(t)\,dB_t\Big),$ (3.4)

where $f^{\otimes k} \in L^2(\mathbb{R}_+^k)$ is the kth tensor product of f with itself, defined by $f^{\otimes k}(t_1, \ldots, t_k) = f(t_1)\cdots f(t_k)$. If $\phi = f_1^{\otimes k_1} \otimes \cdots \otimes f_p^{\otimes k_p}$ with $(f_i)_{1 \le i \le p}$ an orthonormal system in $L^2(\mathbb{R}_+)$ and $k_1 + \cdots + k_p = k$, then (3.4) can be extended to

$I_k(\phi) = \prod_{i=1}^p H_{k_i}\Big(\int_0^\infty f_i(t)\,dB_t\Big).$ (3.5)

As in one dimension, where the Hermite polynomials form an orthogonal basis for $L^2(\mathbb{R}, \frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx)$, the space $L^2(\Omega)$ can be decomposed into an infinite orthogonal sum of the closed subspaces $\mathcal{H}_p$. We state this fundamental fact about Gaussian spaces as a theorem below.


Theorem 3.1 Any random variable $F \in L^2(\Omega)$ admits an orthogonal decomposition of the form

$F = \sum_{k=0}^\infty I_k(f_k),$ (3.6)

where $I_0(f_0) = E[F]$ and the $f_k \in L^2(\mathbb{R}_+^k)$ are symmetric and uniquely determined by F.

Applying the orthogonality relation (3.3) to the symmetric kernels $f_k$ of F in the Wiener chaos expansion (3.6), we obtain

$\|F\|^2_{L^2(\Omega)} = \sum_{k=0}^\infty k!\,\|f_k\|^2_{L^2(\mathbb{R}_+^k)}.$ (3.7)
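The norm identity (3.7) can be illustrated by Monte Carlo (our sketch): for $F = I_2(f^{\otimes 2}) + I_3(f^{\otimes 3})$ with $\|f\| = 1$, (3.4) gives $F = H_2(N) + H_3(N)$ with $N \sim N(0,1)$, and (3.7) gives $E[F^2] = 2! + 3! = 8$.

```python
import math, random

# Monte Carlo illustration of (3.7): F = H_2(N) + H_3(N) for N ~ N(0,1)
# has E[F^2] = 2! + 3! = 8 by chaos orthogonality.

random.seed(5)
m = 400000
acc = 0.0
for _ in range(m):
    x = random.gauss(0.0, 1.0)
    F = (x * x - 1.0) + (x ** 3 - 3.0 * x)   # H_2(x) + H_3(x)
    acc += F * F
est = acc / m
print(est)   # close to 8
```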

The random variables $I_k(f_k)$ inherit some properties from the algebraic structure of the Hermite polynomials, such as the product formula (3.8) below. To state it, we need the definition of a contraction.

Definition 3.2 Let $p, q \ge 1$ and let $f \in L^2(\mathbb{R}_+^p)$ and $g \in L^2(\mathbb{R}_+^q)$ be two symmetric functions. For $r \in \{1, \ldots, p \wedge q\}$, the rth contraction of f and g, denoted by $f \otimes_r g$, is defined by

$f \otimes_r g(x_1, \ldots, x_{p-r}, y_1, \cdots, y_{q-r}) = \int_{\mathbb{R}_+^r} f(x_1, \cdots, x_{p-r}, t_1, \cdots, t_r) g(y_1, \cdots, y_{q-r}, t_1, \cdots, t_r)\,dt_1 \cdots dt_r.$

By convention, $f \otimes_0 g = f \otimes g$.

The contraction $f \otimes_r g$ is not necessarily symmetric, and we denote by $f \tilde{\otimes}_r g$ its symmetrization. Note that, by the Cauchy-Schwarz inequality,

$\|f \otimes_r g\|_{L^2(\mathbb{R}_+^{p+q-2r})} \le \|f\|_{L^2(\mathbb{R}_+^p)} \|g\|_{L^2(\mathbb{R}_+^q)} \quad \text{for } r = 0, 1, \ldots, p \wedge q,$

and that $f \otimes_p g = \langle f, g \rangle_{L^2(\mathbb{R}_+^p)}$ when $p = q$. We state the product formula between two multiple Wiener-Itô integrals in the next theorem.

Theorem 3.3 Let $p, q \ge 1$ and let $f \in L^2(\mathbb{R}_+^p)$ and $g \in L^2(\mathbb{R}_+^q)$ be two symmetric functions. Then

$I_p(f) I_q(g) = \sum_{r=0}^{p \wedge q} r! \binom{p}{r}\binom{q}{r} I_{p+q-2r}(f \tilde{\otimes}_r g).$ (3.8)
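Via (3.4) and (3.5), the product formula (3.8) for kernels built from a single unit-norm f reduces to the classical linearization identity for Hermite polynomials, $H_p(x)H_q(x) = \sum_r r!\binom{p}{r}\binom{q}{r} H_{p+q-2r}(x)$, which can be checked exactly:

```python
import math

# One-dimensional instance of the product formula (3.8), as a Hermite
# polynomial identity, checked pointwise.

def hermite(k, x):
    # probabilists' Hermite: H_0 = 1, H_1 = x, H_{k+1} = x H_k - k H_{k-1}
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def product_formula(p, q, x):
    return sum(math.factorial(r) * math.comb(p, r) * math.comb(q, r)
               * hermite(p + q - 2 * r, x) for r in range(min(p, q) + 1))

for p in range(5):
    for q in range(5):
        for x in (-1.7, 0.0, 0.4, 2.3):
            assert abs(hermite(p, x) * hermite(q, x) - product_formula(p, q, x)) < 1e-9

print("product formula verified")
```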

3.3 Malliavin Derivatives

Let $B = (B_t)_{t \in \mathbb{R}_+}$ be a standard Brownian motion on a complete probability space $(\Omega, \mathcal{F}, P)$, where $\mathcal{F}$ is generated by B, and let $X = \{X(h), h \in L^2(\mathbb{R}_+)\}$, where $X(h) = \int_0^\infty h(t)\,dB_t$. The set X is a centered Gaussian family of random variables defined on $(\Omega, \mathcal{F}, P)$, with covariance given by $E[X(h)X(g)] = \langle h, g \rangle_{L^2(\mathbb{R}_+)}$ for $h, g \in L^2(\mathbb{R}_+)$. Such a Gaussian family is called an isonormal Gaussian process over $L^2(\mathbb{R}_+)$.


Let $\mathcal{S}$ be the set of all cylindrical random variables of the form

$F = g(X(\phi_1), \ldots, X(\phi_n)),$ (3.9)

where $n \ge 1$, $g : \mathbb{R}^n \to \mathbb{R}$ is an infinitely differentiable function whose partial derivatives have polynomial growth, and $\phi_i \in L^2(\mathbb{R}_+)$, $i = 1, \ldots, n$. It can be shown that the set $\mathcal{S}$ is dense in $L^2(\Omega)$. The Malliavin derivative of $F \in \mathcal{S}$ with respect to X is the element of $L^2(\Omega, L^2(\mathbb{R}_+))$ defined as

$DF = \sum_{i=1}^n \frac{\partial g}{\partial x_i}(X(\phi_1), \ldots, X(\phi_n))\,\phi_i.$ (3.10)
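In finite dimensions, (3.10) is a chain rule for shifts of the underlying Gaussian process: $\langle DF, h \rangle$ equals the derivative of F when each $X(\phi_i)$ is shifted by $\varepsilon\langle \phi_i, h \rangle$. The sketch below (our discretization of $L^2(\mathbb{R}_+)$ by dt-weighted grid vectors; g, $\phi_i$, h are arbitrary illustrative choices) checks this numerically.

```python
import math

# Check <DF, h> from (3.10) against a directional finite difference of
# F = g(X(phi1), X(phi2)) along the shift induced by h.

dt = 0.01
grid = [i * dt for i in range(500)]              # [0, 5)
phi1 = [math.exp(-t) for t in grid]
phi2 = [math.sin(t) for t in grid]
h    = [1.0 if t < 2.0 else 0.0 for t in grid]

inner = lambda u, v: sum(a * b for a, b in zip(u, v)) * dt   # L^2 inner product

g  = lambda x1, x2: x1 * x1 * x2                 # smooth functional
g1 = lambda x1, x2: 2.0 * x1 * x2                # dg/dx1
g2 = lambda x1, x2: x1 * x1                      # dg/dx2

x1, x2 = 0.7, -1.3        # a fixed realization of (X(phi1), X(phi2))
DF_h = g1(x1, x2) * inner(phi1, h) + g2(x1, x2) * inner(phi2, h)

eps = 1e-6
shifted = g(x1 + eps * inner(phi1, h), x2 + eps * inner(phi2, h))
numeric = (shifted - g(x1, x2)) / eps

print(DF_h, numeric)
```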

In particular, $DX(h) = h$ for every $h \in L^2(\mathbb{R}_+)$. By iteration, one can define the mth derivative $D^m F$, which is an element of $L^2(\Omega, L^2(\mathbb{R}_+^m))$ for every $m \ge 2$, as follows:

$D^m F = \sum_{i_1, \cdots, i_m = 1}^n \frac{\partial^m g}{\partial x_{i_1} \cdots \partial x_{i_m}}(X(\phi_1), \cdots, X(\phi_n))\,\phi_{i_1} \otimes \cdots \otimes \phi_{i_m}.$ (3.11)

The Hilbert space $L^2(\Omega, L^2(\mathbb{R}_+^m))$ of $L^2(\mathbb{R}_+^m)$-valued functionals of B is endowed with the inner product

$\langle u, v \rangle_{L^2(\Omega, L^2(\mathbb{R}_+^m))} = E\langle u, v \rangle_{L^2(\mathbb{R}_+^m)}.$

For $m \ge 1$, it can be shown that $D^m$ is closable from $\mathcal{S}$ to $L^2(\Omega, L^2(\mathbb{R}_+^m))$. So the domain of $D^m$ can be extended to $\mathbb{D}^{m,2}$, the closure of $\mathcal{S}$ with respect to the norm $\|\cdot\|_{m,2}$ defined by

$\|F\|^2_{m,2} = E[F^2] + \sum_{i=1}^m E\|D^i F\|^2_{L^2(\mathbb{R}_+^i)}.$

A random variable $F \in L^2(\Omega)$ having the Wiener chaos expansion (3.6) is an element of $\mathbb{D}^{m,2}$ if and only if the kernels $f_k$, $k = 1, 2, \ldots$, satisfy

$\sum_{k=1}^\infty k^m k!\,\|f_k\|^2_{L^2(\mathbb{R}_+^k)} < \infty,$

in which case

$E\|D^m F\|^2_{L^2(\mathbb{R}_+^m)} = \sum_{k=m}^\infty (k)_m k!\,\|f_k\|^2_{L^2(\mathbb{R}_+^k)},$

where $(k)_m$ is the falling factorial. In particular, any F having a finite Wiener chaos expansion is an element of $\mathbb{D}^{m,2}$ for all $m \ge 1$.

The Malliavin derivative D, defined in (3.10), obeys the following chain rule. If $g : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable with bounded partial derivatives and if $F = (F_1, \ldots, F_n)$ is such that $F_i \in \mathbb{D}^{1,2}$ for $i = 1, \ldots, n$, then $g(F) \in \mathbb{D}^{1,2}$ and

$Dg(F) = \sum_{i=1}^n \frac{\partial g}{\partial x_i}(F)\,DF_i.$ (3.12)

The domain $\mathbb{D}^{1,2}$ can be described in terms of the Wiener chaos decomposition as

$\mathbb{D}^{1,2} = \Big\{F \in L^2(\Omega) : \sum_{k=1}^\infty k\,\|I_k(f_k)\|^2_{L^2(\Omega)} < \infty\Big\}.$ (3.13)


The derivative of $F \in \mathbb{D}^{1,2}$, where F is of the form (3.6), can be identified with the element of $L^2(\mathbb{R}_+ \times \Omega)$ given by

$D_t F = \sum_{k=1}^\infty k I_{k-1}(f_k(\cdot, t)), \quad t \in \mathbb{R}_+.$ (3.14)

Here, $I_{k-1}(f_k(\cdot, t))$ denotes the Wiener-Itô integral of order $k - 1$ with respect to the $k - 1$ remaining coordinates after holding t fixed. Since the $f_k$ are symmetric, the choice of the coordinate held fixed does not matter.

The Ornstein-Uhlenbeck operator L is defined by the relation

$L(F) = \sum_{k=0}^\infty -k I_k(f_k),$ (3.15)

for F represented by (3.6). It expresses the fact that L is diagonalizable with spectrum $-\mathbb{N}$ and the Wiener chaoses as eigenspaces. The domain of L is

$\mathrm{Dom}(L) = \Big\{F \in L^2(\Omega) : \sum_{k=1}^\infty k^2 k!\,\|f_k\|^2_{L^2(\mathbb{R}_+^k)} < \infty\Big\} = \mathbb{D}^{2,2}.$ (3.16)

If $F = g(X(h_1), \cdots, X(h_n))$, where $g \in C^2(\mathbb{R}^n)$ with bounded first and second partial derivatives, it can be shown that

$L(F) = \sum_{i,j=1}^n \frac{\partial^2 g}{\partial x_i \partial x_j}(X(h_1), \cdots, X(h_n))\,\langle h_i, h_j \rangle_{L^2(\mathbb{R}_+)} - \sum_{i=1}^n X(h_i)\,\frac{\partial g}{\partial x_i}(X(h_1), \cdots, X(h_n)).$ (3.17)

The operator $L^{-1}$, which is called the pseudo-inverse of L, is defined as follows:

$L^{-1}(F) = L^{-1}[F - E[F]] = \sum_{k=1}^\infty -\frac{1}{k} I_k(f_k),$ (3.18)

for F represented by (3.6). The domain of $L^{-1}$ is $\mathrm{Dom}(L^{-1}) = L^2(\Omega)$. It is obvious that for any $F \in L^2(\Omega)$, we have $L^{-1}F \in \mathbb{D}^{2,2}$ and

$LL^{-1}F = F - E[F].$ (3.19)
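Since L and $L^{-1}$ act diagonally on the expansion (3.6), relation (3.19) can be illustrated with a toy finite expansion, representing each term $I_k(f_k)$ by a single scalar coefficient (our schematic encoding):

```python
# L multiplies the kth chaos term by -k (3.15); L^{-1} multiplies it by -1/k
# for k >= 1 and kills the constant term (3.18).  Then LL^{-1}F = F - E[F].

def L(c):        # c[k] = coefficient of the kth chaos term
    return [-k * ck for k, ck in enumerate(c)]

def L_inv(c):    # pseudo-inverse (3.18)
    return [0.0] + [-ck / k for k, ck in enumerate(c) if k >= 1]

F = [2.5, 1.0, -0.5, 3.0]          # E[F] = c_0 = 2.5
assert L(L_inv(F)) == [0.0, 1.0, -0.5, 3.0]   # = F - E[F], i.e. (3.19)
print(L(L_inv(F)))
```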

A crucial property of L is the following integration by parts formula. For $F \in \mathbb{D}^{2,2}$ and $G \in \mathbb{D}^{1,2}$, we have

$E[LF \times G] = -E\langle DF, DG \rangle_{L^2(\mathbb{R}_+)}.$ (3.20)

By the bilinearity of the inner product and the Wiener chaos expansion (3.6), it suffices to prove (3.20) for $F = I_p(f)$ and $G = I_q(g)$ with $p, q \ge 1$ and $f \in L^2(\mathbb{R}_+^p)$, $g \in L^2(\mathbb{R}_+^q)$ symmetric. When $p \ne q$, we have

$E[LF \times G] = -pE[I_p(f)I_q(g)] = 0$

and

$E\langle DF, DG \rangle_{L^2(\mathbb{R}_+)} = pq \int_0^\infty E[I_{p-1}(f(\cdot, t)) I_{q-1}(g(\cdot, t))]\,dt = 0.$

So (3.20) holds in this case. When $p = q$, we have

$E[LF \times G] = -pE[I_p(f)I_q(g)] = -p \cdot p!\,\langle f, g \rangle_{L^2(\mathbb{R}_+^p)}$


and

$E\langle DF, DG \rangle_{L^2(\mathbb{R}_+)} = p^2 \int_0^\infty E[I_{p-1}(f(\cdot, t)) I_{p-1}(g(\cdot, t))]\,dt = p^2(p-1)! \int_0^\infty \langle f(\cdot, t), g(\cdot, t) \rangle_{L^2(\mathbb{R}_+^{p-1})}\,dt = p \cdot p!\,\langle f, g \rangle_{L^2(\mathbb{R}_+^p)}.$

So (3.20) also holds in this case. This completes the proof of (3.20).

Since $L^{-1}(F) \in \mathrm{Dom}(L) = \mathbb{D}^{2,2} \subset \mathbb{D}^{1,2}$, the quantity $\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}$ is well defined for any $F \in \mathbb{D}^{1,2}$. As we will see in the next section, $\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}$ plays a key role in the normal approximation for functionals of Gaussian processes.

In this section, we have presented only those aspects of Malliavin calculus that will be needed for our exposition of the work of Nourdin and Peccati. An extensive treatment of Malliavin calculus can be found in the book by Nualart [33].
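As a Monte Carlo sanity check of (3.20) (our illustration), take $F = G = I_2(f \otimes f)$ with $\|f\| = 1$, so that $F = N^2 - 1$ for $N = \int f\,dB \sim N(0,1)$ by (3.4). Then $LF = -2F$ and $\langle DF, DF \rangle = 4N^2\|f\|^2$, so (3.20) reads $E[-2(N^2-1)^2] = -E[4N^2] = -4$.

```python
import random

# Monte Carlo check of the integration by parts formula (3.20) for
# F = G = N^2 - 1 in the second Wiener chaos.

random.seed(3)
m = 200000
lhs = rhs = 0.0
for _ in range(m):
    nval = random.gauss(0.0, 1.0)
    F = nval * nval - 1.0
    lhs += -2.0 * F * F            # E[LF * G] with LF = -2F
    rhs += -(4.0 * nval * nval)    # -E[<DF, DG>] with <DF, DF> = 4 N^2
lhs /= m
rhs /= m

print(lhs, rhs)   # both close to -4
```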

4 Connecting Stein's Method with Malliavin Calculus

As is discussed in Section 2, the Stein operator $L$ for normal approximation is given by $Lf(w) = f'(w) - wf(w)$, and the equation

$$E[f'(W) - Wf(W)] = 0 \qquad (4.1)$$

holds for all $f \in C_B^1$ if and only if $W \sim N(0,1)$. It is also remarked there that if $W \sim N(0,1)$, (4.1) is a simple consequence of integration by parts. Since there is the integration by parts formula of Malliavin calculus for functionals of general Gaussian processes, there is a natural connection between Stein's method and Malliavin calculus. Indeed, integration by parts has been used in less general situations to construct the equation

$$E[Wf(W)] = E[Tf'(W)], \qquad (4.2)$$

which is a special case of (2.14). We provide two examples below.

Example 1 Assume $E[W] = 0$ and $\mathrm{Var}(W) = 1$. Then we have $E[T] = 1$. If $W$ has a density $\rho > 0$ with respect to the Lebesgue measure, then by integration by parts, $W$ satisfies (4.2) with $T = h(W)$, where

$$h(x) = \frac{\int_x^\infty y \rho(y)\, dy}{\rho(x)}.$$

If $\rho$ is the density of $N(0,1)$, then $h(x) = 1$ and (4.2) reduces to (4.1).

Example 2 Let $X = (X_1, \ldots, X_d)$ be a vector of independent Gaussian random variables and let $g : \mathbb{R}^d \to \mathbb{R}$ be an absolutely continuous function. Let $W = g(X)$. Chatterjee in [8] used Gaussian interpolation and integration by parts to show that $W$ satisfies (4.2) with $T = h(X)$, where

$$h(x) = \sum_{i=1}^d \int_0^1 \frac{1}{2\sqrt{t}}\, \frac{\partial g}{\partial x_i}(x)\, E\Big[\frac{\partial g}{\partial x_i}\big(\sqrt{t}\, x + \sqrt{1-t}\, X\big)\Big]\, dt.$$


If $d = 1$ and $g$ is the identity function, then $W \sim N(0,1)$, $h(x) = 1$, and again (4.2) reduces to (4.1).

As the previous example shows (see Chatterjee [8] for details), it is possible to construct the function $h$ when one deals with sufficiently smooth functionals of a Gaussian vector. This is part of a general phenomenon discovered by Nourdin and Peccati in [28]. Indeed, consider a functional $F$ of an isonormal Gaussian process $X = \{X(h), h \in L^2(\mathbb{R}_+)\}$ over $L^2(\mathbb{R}_+)$. Assume $F \in \mathbb{D}^{1,2}$, $E[F] = 0$ and $\mathrm{Var}(F) = 1$. Let $f : \mathbb{R} \to \mathbb{R}$ be a bounded $C^1$ function having a bounded derivative. Since $L^{-1}F \in \mathrm{Dom}(L) = \mathbb{D}^{2,2}$, by (3.19) and $E[F] = 0$, we have $F = LL^{-1}F$. Therefore, by the integration by parts formula (3.20),

$$E[Ff(F)] = E\big[LL^{-1}F \times f(F)\big] = E\big[\langle Df(F), -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big],$$

and by the chain rule,

$$E\big[\langle Df(F), -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big] = E\big[f'(F)\, \langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big].$$

Hence,

$$E[Ff(F)] = E\big[f'(F)\, \langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big] \qquad (4.3)$$

and $F$ satisfies (4.2) with $T = \langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}$.

Suppose $F$ is standard normal, that is, $F = I_1(\psi) = \int_0^\infty \psi(t)\, dB_t$ where $\psi = I_{[0,1]}$. Then $DF = I_{[0,1]}$ and, by (3.18), $L^{-1}F = -I_1(\psi) = -F$. So

$$\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} = \langle I_{[0,1]}, DF \rangle_{L^2(\mathbb{R}_+)} = \langle I_{[0,1]}, I_{[0,1]} \rangle_{L^2(\mathbb{R}_+)} = 1. \qquad (4.4)$$

This and (4.3) give $E[Ff(F)] = E[f'(F)]$, which is the characterization equation for the standard normal distribution.

Now, let $f_h$ be the unique bounded solution of the Stein equation (2.3), where $h : \mathbb{R} \to \mathbb{R}$ is continuous and $|h| \le 1$. Then $f_h \in C^1$ and $\|f_h'\|_\infty \le 4\|h\|_\infty \le 4$, and we have

$$E[h(F)] - E[h(Z)] = E\big[f_h'(F)\big(1 - \langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big)\big] = E\big[f_h'(F)\big(1 - E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big)\big].$$

Therefore,

$$\sup_{h \in \mathcal{C},\, |h| \le 1} |E[h(F)] - E[h(Z)]| \le \|f_h'\|_\infty\, E\big|1 - E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big| \le 4\, E\big|1 - E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big|.$$


It follows that

$$d_{TV}(\mathscr{L}(F), N(0,1)) := \frac{1}{2} \sup_{|h| \le 1} |E[h(F)] - E[h(Z)]| = \frac{1}{2} \sup_{h \in \mathcal{C},\, |h| \le 1} |E[h(F)] - E[h(Z)]| \le 2\, E\big|1 - E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big|.$$

If, in addition, $F \in \mathbb{D}^{1,4}$, then $\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}$ is square-integrable and

$$E\big|1 - E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big| \le \sqrt{\mathrm{Var}\big[E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big]}$$

by the Cauchy-Schwarz inequality, since $E\big[E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big] = E[F^2] = 1$. Thus, we have the following theorem of Nourdin and Peccati [28].

Theorem 4.1 Let $F \in \mathbb{D}^{1,2}$ be such that $E[F] = 0$ and $\mathrm{Var}(F) = 1$. Then

$$d_{TV}(\mathscr{L}(F), N(0,1)) \le 2\, E\big|1 - E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big|. \qquad (4.5)$$

If, in addition, $F \in \mathbb{D}^{1,4}$, then

$$d_{TV}(\mathscr{L}(F), N(0,1)) \le 2 \sqrt{\mathrm{Var}\big[E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big]}. \qquad (4.6)$$

If $F$ is standard normal, (4.4) implies that the upper bound in (4.5) is zero. This shows that the bound is tight.
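To see Example 1 in action, here is a numeric sketch of ours (not from the paper): take $W$ uniform on $[-\sqrt{3}, \sqrt{3}]$, which is centered with unit variance, so $\rho(x) = 1/(2\sqrt{3})$ on that interval and the Stein kernel evaluates in closed form to $h(x) = (3 - x^2)/2$. The code checks $E[Wf(W)] = E[h(W)f'(W)]$ for $f = \sin$ by a midpoint rule, along with $E[T] = E[h(W)] = 1$.

```python
import numpy as np

a = np.sqrt(3.0)          # W ~ Uniform[-sqrt(3), sqrt(3)]: mean 0, variance 1
rho = 1.0 / (2.0 * a)     # density of W on [-a, a]

def h(x):
    # Stein kernel of Example 1: (1/rho(x)) * int_x^a y*rho(y) dy = (3 - x^2)/2
    return (3.0 - x ** 2) / 2.0

# Midpoint rule on [-a, a]
n = 200_000
x = -a + (np.arange(n) + 0.5) * (2 * a / n)
dx = 2 * a / n

f, fprime = np.sin, np.cos
lhs = np.sum(x * f(x) * rho) * dx           # E[W f(W)]
rhs = np.sum(h(x) * fprime(x) * rho) * dx   # E[h(W) f'(W)]
mean_T = np.sum(h(x) * rho) * dx            # E[T] = E[h(W)], should be 1
```

Both sides of (4.2) agree to within the quadrature error, and `mean_T` is 1, as Example 1 predicts for any centered unit-variance $W$.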

5 The Fourth Moment Theorem

5.1 The Fourth Moment Phenomenon

The so-called fourth moment phenomenon was first discovered by Nualart and Peccati [34], who proved that for a sequence of multiple Wiener-Itô integrals $\{F_n\}$ of fixed order such that $E[F_n^2] \to 1$, the following are equivalent:

(i) $\mathscr{L}(F_n) \to N(0,1)$;
(ii) $E[F_n^4] \to 3$.

Combining Stein's method with Malliavin calculus, Nourdin and Peccati [28] obtained an elegant bound on the rate of convergence, which we will call the fourth moment theorem.

Theorem 5.1 Let $F$ belong to the $k$th Wiener chaos of $B$ for $k \ge 2$ and be such that $E[F^2] = 1$. Then

$$d_{TV}(\mathscr{L}(F), N(0,1)) \le 2\sqrt{\frac{k-1}{3k}}\,\sqrt{E[F^4] - 3}. \qquad (5.1)$$

Proof This proof is taken from Nourdin [27]. Write $F = I_k(f_k)$, where $f_k \in L^2(\mathbb{R}_+^k)$ is symmetric. By (3.7), $E[F^2] = k!\, \|f_k\|^2_{L^2(\mathbb{R}_+^k)}$. By the equation (3.14), we have $D_t F = D_t I_k(f_k) = k I_{k-1}(f_k(\cdot,t))$. Applying the product formula (3.8) for multiple integrals, we obtain

$$\begin{aligned}
\frac{1}{k}\,\|DF\|^2_{L^2(\mathbb{R}_+)} &= \frac{1}{k}\,\langle DF, DF \rangle_{L^2(\mathbb{R}_+)} = k \int_0^\infty I_{k-1}(f_k(\cdot,t))^2\, dt \\
&= k \int_0^\infty \sum_{r=0}^{k-1} r! \binom{k-1}{r}^2 I_{2k-2-2r}\big(f_k(\cdot,t)\, \widetilde{\otimes}_r f_k(\cdot,t)\big)\, dt \\
&= k \sum_{r=0}^{k-1} r! \binom{k-1}{r}^2 I_{2k-2-2r}\Big(\int_0^\infty f_k(\cdot,t)\, \widetilde{\otimes}_r f_k(\cdot,t)\, dt\Big) \\
&= k \sum_{r=0}^{k-1} r! \binom{k-1}{r}^2 I_{2k-2-2r}\big(f_k\, \widetilde{\otimes}_{r+1} f_k\big) \\
&= k \sum_{r=1}^{k} (r-1)! \binom{k-1}{r-1}^2 I_{2k-2r}\big(f_k\, \widetilde{\otimes}_{r} f_k\big) \\
&= k \sum_{r=1}^{k-1} (r-1)! \binom{k-1}{r-1}^2 I_{2k-2r}\big(f_k\, \widetilde{\otimes}_{r} f_k\big) + k!\, \|f_k\|^2_{L^2(\mathbb{R}_+^k)} \\
&= k \sum_{r=1}^{k-1} (r-1)! \binom{k-1}{r-1}^2 I_{2k-2r}\big(f_k\, \widetilde{\otimes}_{r} f_k\big) + E[F^2]. \qquad (5.2)
\end{aligned}$$

Note that since $F = I_k(f_k)$ and $E[F] = 0$, we have $L^{-1}F = -\frac{1}{k}F$. So

$$\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} = \frac{1}{k}\, \langle DF, DF \rangle_{L^2(\mathbb{R}_+)} = \frac{1}{k}\, \|DF\|^2_{L^2(\mathbb{R}_+)}.$$

Letting $f(F) = F$ in the Stein identity (4.3), we obtain

$$E\big[\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big] = E[F^2].$$

Applying the orthogonality of the Wiener chaoses and the formula (3.3),

$$\mathrm{Var}\big[\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big] = \mathrm{Var}\Big[\frac{1}{k}\, \|DF\|^2_{L^2(\mathbb{R}_+)}\Big] = \frac{1}{k^2} \sum_{r=1}^{k-1} r^2 (r!)^2 \binom{k}{r}^4 (2k-2r)!\, \|f_k\, \widetilde{\otimes}_r f_k\|^2_{L^2(\mathbb{R}_+^{2k-2r})}. \qquad (5.3)$$

By the product formula (3.8) again, we have

$$F^2 = \sum_{r=0}^{k} r! \binom{k}{r}^2 I_{2k-2r}\big(f_k\, \widetilde{\otimes}_r f_k\big). \qquad (5.4)$$

Applying the Stein identity (4.3), we have

$$E[F^4] = E[F \times F^3] = 3E\big[F^2 \times \langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big] = 3E\Big[F^2 \times \frac{1}{k}\, \|DF\|^2_{L^2(\mathbb{R}_+)}\Big]. \qquad (5.5)$$


This together with (5.2), (5.4), and the formula (3.3) yields

$$E[F^4] = 3\big(E[F^2]\big)^2 + \frac{3}{k} \sum_{r=1}^{k-1} r (r!)^2 \binom{k}{r}^4 (2k-2r)!\, \|f_k\, \widetilde{\otimes}_r f_k\|^2_{L^2(\mathbb{R}_+^{2k-2r})} = 3 + \frac{3}{k} \sum_{r=1}^{k-1} r (r!)^2 \binom{k}{r}^4 (2k-2r)!\, \|f_k\, \widetilde{\otimes}_r f_k\|^2_{L^2(\mathbb{R}_+^{2k-2r})}. \qquad (5.6)$$

Comparing (5.3) and (5.6) leads to

$$\mathrm{Var}\big[\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big] \le \frac{k-1}{3k}\, \big(E[F^4] - 3\big). \qquad (5.7)$$

Since $\mathrm{Var}\big[E(\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)} \mid F)\big] \le \mathrm{Var}\big[\langle DF, -DL^{-1}F \rangle_{L^2(\mathbb{R}_+)}\big]$, Theorem 5.1 follows from (4.6).

As one can see from (5.6), $E[F^4] \ge 3$ whenever $F$ is a multiple Wiener-Itô integral with variance 1.

Theorem 5.1 also implies the result of Nualart and Peccati [34] mentioned above. Without loss of generality, we assume that $E[F_n^2] = 1$. The part (ii) $\Longrightarrow$ (i) follows immediately from (5.1). For the part (i) $\Longrightarrow$ (ii) (which actually is independent of Theorem 5.1), we observe that by the continuous mapping theorem, we have $\mathscr{L}(F_n^4) \to \mathscr{L}(Z^4)$, where $Z \sim N(0,1)$. Write $F_n = I_k(f_n)$. By the hypercontractivity inequality (Nelson [26]),

$$E[|I_k(f)|^r] \le \big((r-1)^k\, k!\, \|f\|^2_{L^2(\mathbb{R}_+^k)}\big)^{r/2} \quad \text{for } k \ge 1,\ r \ge 2,\ f \in L^2(\mathbb{R}_+^k),$$

and the given condition that $k!\, \|f_n\|^2_{L^2(\mathbb{R}_+^k)} = E[I_k(f_n)^2] = E[F_n^2] = 1$, we have $\sup_n E[|F_n|^r] < \infty$ for $r > 4$. This implies that $\{F_n^4\}$ is uniformly integrable and therefore $E[F_n^4] \to E[Z^4] = 3$, and (ii) follows.

From (5.6), we observe that (ii) is equivalent to $\|f_n\, \widetilde{\otimes}_r f_n\|^2_{L^2(\mathbb{R}_+^{2k-2r})} \to 0$ for $r = 1, \cdots, k-1$. This fact is also contained in the theorem of Nualart and Peccati [34]. The equation (5.6) also shows that the calculation of $E[F^4] - 3$ depends on that of $\|f_k\, \widetilde{\otimes}_r f_k\|^2_{L^2(\mathbb{R}_+^{2k-2r})}$ for $r = 1, \cdots, k-1$.

In more recent work, Nourdin and Peccati [30] proved the following optimal fourth moment theorem, which improves Theorem 5.1.

Theorem 5.2 Let $\{F_n\}$ be a sequence of random variables living in a Wiener chaos of fixed order such that $E[F_n^2] = 1$. Assume that $F_n$ converges in distribution to $Z \sim N(0,1)$, in which case $E[F_n^3] \to 0$ and $E[F_n^4] \to 3$. Then there exist two finite constants $0 < c < C$, possibly depending on the order of the Wiener chaos and on the sequence $\{F_n\}$, but not on $n$, such that

$$c\, M(F_n) \le d_{TV}(\mathscr{L}(F_n), N(0,1)) \le C\, M(F_n), \qquad (5.8)$$

where $M(F_n) = \max\{E[F_n^4] - 3,\ |E[F_n^3]|\}$.

This shows that the bound in (5.1) is optimal if and only if $\sqrt{E[F_n^4] - 3}$ and $|E[F_n^3]|$ are of the same order (typically $\frac{1}{\sqrt n}$).
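A concrete second-chaos example of ours (not from the paper) where everything in Theorem 5.1 is explicit: $F_n = (2n)^{-1/2} \sum_{k=1}^n (Z_k^2 - 1)$ with $Z_k$ i.i.d. $N(0,1)$ lies in the 2nd Wiener chaos with $E[F_n^2] = 1$, and the moments of $Z^2$ give $E[F_n^4] - 3 = 12/n$ exactly, so (5.1) with $k = 2$ reads $d_{TV} \le 2\sqrt{(12/n)/6} = 2\sqrt{2}/\sqrt{n}$. A sketch with exact arithmetic:

```python
import math
from fractions import Fraction

# Y = Z^2 - 1 with Z ~ N(0,1):  E[Y] = 0, E[Y^2] = 2, E[Y^4] = 60.
EY2, EY4 = 2, 60

def excess_fourth_moment(n):
    """E[F_n^4] - 3 for F_n = (2n)^(-1/2) * (Y_1 + ... + Y_n), computed exactly.

    E[(Y_1+...+Y_n)^4] = n*E[Y^4] + 3*n*(n-1)*E[Y^2]^2; every other term
    contains a lone factor Y_k and vanishes since E[Y] = 0.
    """
    m4 = n * EY4 + 3 * n * (n - 1) * EY2 ** 2
    return Fraction(m4, (2 * n) ** 2) - 3   # Var(Y_1+...+Y_n) = 2n

def tv_bound(n):
    """Right-hand side of (5.1) with k = 2: 2*sqrt(((k-1)/(3k))*(E[F^4]-3))."""
    return 2.0 * math.sqrt(float(excess_fourth_moment(n)) / 6.0)
```

For example, `excess_fourth_moment(10)` is exactly $6/5 = 12/10$, and `tv_bound(n)` simplifies to $2\sqrt{2}/\sqrt{n}$, so the total variation distance vanishes at rate $n^{-1/2}$, consistent with the fourth moment theorem.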


5.2 Breuer-Major Theorem

In this subsection, we show how the fourth moment theorem, that is, Theorem 5.1, can be applied to prove the Breuer-Major theorem [7]. We begin by first introducing the notion of Hermite rank of a function. It is well known that every $\phi \in L^2\big(\mathbb{R}, \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx\big)$ can be expanded in a unique way in terms of the Hermite polynomials as follows:

$$\phi(x) = \sum_{q=0}^\infty a_q H_q(x). \qquad (5.9)$$

We call $d$ the Hermite rank of $\phi$ if $d$ is the first integer $q \ge 0$ such that $a_q \ne 0$. We now state the Breuer-Major theorem.

Theorem 5.3 Let $\{X_k\}_{k \ge 1}$ be a centered stationary Gaussian sequence, where each $X_k \sim N(0,1)$, and let $\phi \in L^2\big(\mathbb{R}, \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx\big)$ be given by (5.9). Assume that $a_0 = E[\phi(X_1)] = 0$ and that $\sum_{k \in \mathbb{Z}} |\rho(k)|^d < \infty$, where $\rho$ is the covariance function of $\{X_k\}_{k \ge 1}$ and $d$ the Hermite rank of $\phi$. Let $V_n = \frac{1}{\sqrt n} \sum_{k=1}^n \phi(X_k)$. Then, as $n \to \infty$, we have

$$\mathscr{L}(V_n) \to N(0, \sigma^2), \qquad (5.10)$$

where $\sigma^2 \in [0, \infty)$ and is given by

$$\sigma^2 = \sum_{q=d}^\infty q!\, a_q^2 \sum_{k \in \mathbb{Z}} \rho(k)^q. \qquad (5.11)$$
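A small numeric illustration of (5.11) (our own example, not from the paper): take $\phi = H_2$ (Hermite rank $d = 2$, with $2!\, a_2^2 = 2$) and the AR(1)-type covariance $\rho(r) = 0.5^{|r|}$, for which $\sum_k |\rho(k)|^2 < \infty$ and $\sigma^2 = 2 \sum_{k \in \mathbb{Z}} \rho(k)^2 = 10/3$. The finite-$n$ variance of $V_n$, computed from the formula (5.12) appearing in the proof below, converges to this limit:

```python
def rho(r):
    """An illustrative summable covariance (AR(1)-type, a valid Gaussian covariance)."""
    return 0.5 ** abs(r)

def var_Vn(n):
    """E[V_n^2] via (5.12) for phi = H_2:  2 * sum_{|r|<n} rho(r)^2 * (1 - |r|/n)."""
    return 2.0 * sum(rho(r) ** 2 * (1 - abs(r) / n) for r in range(-n + 1, n))

def sigma2():
    """Limit (5.11): 2 * sum_{k in Z} rho(k)^2 = 2 * (1 + 2*(1/4)/(3/4)) = 10/3."""
    return 2.0 * (1 + 2 * 0.25 / 0.75)
```

For instance, `var_Vn(2000)` already agrees with `sigma2()` to about three decimal places, and `var_Vn(n) < sigma2()` for every $n$, since each summand in (5.12) is damped by the factor $1 - |r|/n$.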

The original proof of Theorem 5.3 uses the method of moments, by which one has to compute all the moments of $V_n$ and show that they converge to the corresponding moments of the limiting distribution. The fourth moment theorem offers a much simpler approach, by which we only need to deal with the fourth moment of $V_n$. We will give a sketch of the proof here that applies the fourth moment theorem. A detailed proof can be found in Nourdin [27].

Proof First we show that

$$\mathrm{Var}(V_n) = E[V_n^2] = \sum_{q=d}^\infty q!\, a_q^2 \sum_{r \in \mathbb{Z}} \rho(r)^q \Big(1 - \frac{|r|}{n}\Big) I(|r| < n). \qquad (5.12)$$

Since

$$q!\, a_q^2\, |\rho(r)|^q \Big(1 - \frac{|r|}{n}\Big) I(|r| < n) \le q!\, a_q^2\, |\rho(r)|^q \le q!\, a_q^2\, |\rho(r)|^d$$

(using $|\rho(r)| \le \rho(0) = 1$ and $q \ge d$) and

$$\sum_{q=d}^\infty \sum_{r \in \mathbb{Z}} q!\, a_q^2\, |\rho(r)|^d = E[\phi^2(X_1)] \sum_{r \in \mathbb{Z}} |\rho(r)|^d < \infty,$$

it follows by an application of the dominated convergence theorem that $E[V_n^2] \to \sigma^2$, where $\sigma^2 \in [0, \infty)$ and is given by (5.11).

If $\sigma^2 = 0$, then there is nothing to prove. So we assume that $\sigma^2 > 0$. The proof of (5.10) can be divided into three parts, in increasing generality of $\phi$: (i) $\phi$ is a Hermite polynomial, (ii) $\phi$ is a real polynomial, and (iii) $\phi \in L^2\big(\mathbb{R}, \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx\big)$. We sketch the proof of part (i). Let $\mathcal{H}$ be the real separable Hilbert space generated by $\{X_k\}_{k \ge 1}$ and let $\psi : \mathcal{H} \to L^2(\mathbb{R}_+)$ be an isometry. Define $h_k = \psi(X_k)$ for $k \ge 1$. Then we have

$$\int_0^\infty h_k(x) h_l(x)\, dx = E[X_k X_l] = \rho(k-l).$$

Therefore,

$$\mathscr{L}\big(\{X_k : k \in \mathbb{N}\}\big) = \mathscr{L}\Big(\Big\{\int_0^\infty h_k(t)\, dB_t : k \in \mathbb{N}\Big\}\Big),$$

where $B = (B_t)_{t \ge 0}$ is a standard Brownian motion. Note that for each $k \ge 1$, $\|h_k\|^2_{L^2(\mathbb{R}_+)} = E[X_k^2] = 1$. Since $\phi = H_q$ for some $q \ge 1$, we have

$$V_n = \frac{1}{\sqrt n} \sum_{k=1}^n H_q(X_k) \overset{\mathscr{L}}{=} \frac{1}{\sqrt n} \sum_{k=1}^n H_q\Big(\int_0^\infty h_k(t)\, dB_t\Big) = \frac{1}{\sqrt n} \sum_{k=1}^n I_q\big(h_k^{\otimes q}\big) = I_q(f_{n,q}),$$

where

$$f_{n,q} = \frac{1}{\sqrt n} \sum_{k=1}^n h_k^{\otimes q}.$$

It can be shown (see Nourdin [27] for details) that $\|f_{n,q}\, \widetilde{\otimes}_r f_{n,q}\|^2_{L^2(\mathbb{R}_+^{2q-2r})} \to 0$ as $n \to \infty$ for $r = 1, \cdots, q-1$. By Theorem 5.1 and (5.6), taking into account an appropriate scaling, part (i) is proved.

Part (ii) follows from part (i) by writing a polynomial as a linear combination of Hermite polynomials and then applying a theorem of Peccati and Tudor [36], which concerns the equivalence between marginal and joint convergence in distribution of multiple Wiener-Itô integrals to the normal distributions. For part (iii), write

$$V_n = \frac{1}{\sqrt n} \sum_{k=1}^n \sum_{q=1}^N a_q H_q(X_k) + \frac{1}{\sqrt n} \sum_{k=1}^n \sum_{q=N+1}^\infty a_q H_q(X_k) = V_{n,N} + R_{n,N}.$$

Then apply part (ii) to $V_{n,N}$ and show that $\sup_{n \ge 1} E\big[R_{n,N}^2\big] \to 0$ as $N \to \infty$. This completes the proof of Theorem 5.3.

Bounds on the rate of convergence in the Breuer-Major theorem have been obtained by Nourdin, Peccati, and Podolskij [31], who considered random variables of the form $S_n = \frac{1}{\sqrt n} \sum_{k=1}^n [f(X_k) - E f(X_k)]$, $n \ge 1$, where $\{X_k\}_{k \in \mathbb{Z}}$ is a $d$-dimensional stationary Gaussian process and $f : \mathbb{R}^d \to \mathbb{R}$ a measurable function. They obtained explicit bounds on $|E h(S_n) - E h(S)|$, where $S$ is a normal random variable and $h$ a sufficiently smooth function. Their results both generalize and refine the Breuer-Major theorem and some other central limit theorems in the literature. The methods they used are based on Malliavin calculus, interpolation techniques, and Stein's method.

5.3 Quadratic Variation of Fractional Brownian Motion

In this subsection, we consider another application of Theorem 5.1, and also of Theorem 5.2. Let $B^H = (B_t^H)_{t \ge 0}$ be a fractional Brownian motion with Hurst index $H \in (0,1)$, that is, $B^H$ is a centered Gaussian process with covariance function given by

$$E\big[B_t^H B_s^H\big] = \frac{1}{2}\big(t^{2H} + s^{2H} - |t-s|^{2H}\big), \quad s, t \ge 0.$$

This $B^H$ is self-similar of index $H$ and has stationary increments. Consider the sum of squares of increments,

$$F_{n,H} = \frac{1}{\sigma_n} \sum_{k=1}^n \Big[\big(B_k^H - B_{k-1}^H\big)^2 - 1\Big] = \frac{1}{\sigma_n} \sum_{k=1}^n H_2\big(B_k^H - B_{k-1}^H\big), \qquad (5.13)$$

where $H_2$ is the 2nd Hermite polynomial and $\sigma_n > 0$ is such that $E[F_{n,H}^2] = 1$. An application of the Breuer-Major theorem shows that for $0 < H \le \frac{3}{4}$,

$$\mathscr{L}(F_{n,H}) \to N(0,1) \quad \text{as } n \to \infty.$$

Nourdin and Peccati [29] applied Theorem 5.1 to prove the following theorem, which provides the rates of convergence for different values of the Hurst index $H$.

Theorem 5.4 Let $F_{n,H}$ be as defined in (5.13). Then

$$d_{TV}(\mathscr{L}(F_{n,H}), N(0,1)) \le c_H \begin{cases} \dfrac{1}{\sqrt n} & \text{if } H \in \big(0, \tfrac{5}{8}\big) \\[4pt] \dfrac{(\log n)^{3/2}}{\sqrt n} & \text{if } H = \tfrac{5}{8} \\[4pt] n^{4H-3} & \text{if } H \in \big(\tfrac{5}{8}, \tfrac{3}{4}\big) \\[4pt] \dfrac{1}{\log n} & \text{if } H = \tfrac{3}{4}. \end{cases} \qquad (5.14)$$

Proof We will give a sketch of the proof in Nourdin [27]. Consider the closed linear subspace $\mathcal{H}$ of $L^2(\Omega)$ generated by $(B_k^H)_{k \in \mathbb{N}}$. As it is a real separable Hilbert space, there exists an isometry $\psi : \mathcal{H} \to L^2(\mathbb{R}_+)$. For any $k \in \mathbb{N}$, define $h_k = \psi(B_k^H - B_{k-1}^H)$. Then for $k, l \in \mathbb{N}$, we have

$$\int_0^\infty h_k(x) h_l(x)\, dx = E\big[\big(B_k^H - B_{k-1}^H\big)\big(B_l^H - B_{l-1}^H\big)\big] = \rho(k-l), \qquad (5.15)$$

where

$$\rho(r) = \frac{1}{2}\big(|r+1|^{2H} + |r-1|^{2H} - 2|r|^{2H}\big). \qquad (5.16)$$

Therefore,

$$\mathscr{L}\big(\big\{B_k^H - B_{k-1}^H : k \in \mathbb{N}\big\}\big) = \mathscr{L}\Big(\Big\{\int_0^\infty h_k(t)\, dB_t : k \in \mathbb{N}\Big\}\Big),$$

where $B = (B_t)_{t \ge 0}$ is a standard Brownian motion. Consequently, without loss of generality, we can regard $F_{n,H}$ as

$$F_n = \frac{1}{\sigma_n} \sum_{k=1}^n H_2\Big(\int_0^\infty h_k(t)\, dB_t\Big).$$

Since for $k \in \mathbb{N}$, $\|h_k\|^2_{L^2(\mathbb{R}_+)} = \rho(0) = 1$ (by (5.15) and (5.16)), we have

$$F_n = \frac{1}{\sigma_n} \sum_{k=1}^n I_2(h_k \otimes h_k) = I_2(f_n), \qquad (5.17)$$

where $I_p$, $p \ge 1$, is the $p$th multiple Wiener-Itô integral with respect to $B$, and

$$f_n = \frac{1}{\sigma_n} \sum_{k=1}^n h_k \otimes h_k.$$

Now straightforward calculations yield

$$\sigma_n^2 = 2 \sum_{k,l=1}^n \rho^2(k-l) = 2 \sum_{|r| < n} (n - |r|)\, \rho^2(r).$$

It can be shown that for $H < \frac{3}{4}$, we have $\sum_{r \in \mathbb{Z}} \rho^2(r) < \infty$ and

$$\lim_{n \to \infty} \frac{\sigma_n^2}{n} = 2 \sum_{r \in \mathbb{Z}} \rho^2(r), \qquad (5.18)$$

and for $H = \frac{3}{4}$, we have

$$\lim_{n \to \infty} \frac{\sigma_n^2}{n \log n} = \frac{9}{16}. \qquad (5.19)$$

Now we come to calculating the bound $\sqrt{E[F_n^4] - 3}$ in Theorem 5.1. We first note that $f_n \otimes_1 f_n$ is symmetric, and so $f_n\, \widetilde{\otimes}_1 f_n = f_n \otimes_1 f_n$. Therefore, by (5.6), we have

$$E[F_n^4] - 3 = 48\, \|f_n\, \widetilde{\otimes}_1 f_n\|^2_{L^2(\mathbb{R}_+^2)} = 48\, \|f_n \otimes_1 f_n\|^2_{L^2(\mathbb{R}_+^2)} = \frac{48}{\sigma_n^4} \sum_{i,j,k,l=1}^n \rho(k-l)\, \rho(i-j)\, \rho(k-i)\, \rho(l-j). \qquad (5.20)$$

By bounding the extreme right of (5.20) (see Nourdin [27] for details), we obtain

$$E[F_n^4] - 3 \le \frac{48 n}{\sigma_n^4} \Big( \sum_{|k| < n} |\rho(k)|^{4/3} \Big)^3. \qquad (5.21)$$

From the asymptotic behavior of $\rho(k)$ as $|k| \to \infty$, we can show that

$$\sum_{|k| < n} |\rho(k)|^{4/3} = \begin{cases} O(1) & \text{if } H \in \big(0, \tfrac{5}{8}\big) \\ O(\log n) & \text{if } H = \tfrac{5}{8} \\ O\big(n^{(8H-5)/3}\big) & \text{if } H \in \big(\tfrac{5}{8}, \tfrac{3}{4}\big]. \end{cases} \qquad (5.22)$$

This, together with (5.18) and (5.21), implies

$$\sqrt{E[F_n^4] - 3} \le c_H \begin{cases} \dfrac{1}{\sqrt n} & \text{if } H \in \big(0, \tfrac{5}{8}\big) \\[4pt] \dfrac{(\log n)^{3/2}}{\sqrt n} & \text{if } H = \tfrac{5}{8} \\[4pt] n^{4H-3} & \text{if } H \in \big(\tfrac{5}{8}, \tfrac{3}{4}\big). \end{cases}$$

For $H = \frac{3}{4}$, combining (5.19), (5.21), and (5.22) gives

$$\sqrt{E[F_n^4] - 3} = O\Big(\frac{1}{\log n}\Big).$$

This proves Theorem 5.4.
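The quantities in this proof can be computed exactly for moderate $n$ (our illustration, not from the paper; the reduction of the quadruple sum in (5.20) to a matrix trace uses the symmetry of $\rho$): with $R$ the $n \times n$ matrix $R_{kl} = \rho(k-l)$, one has $\sigma_n^2 = 2\, \mathrm{tr}(R^2)$, and the sum in (5.20) equals $\mathrm{tr}(R^4)$. For $H = \frac{1}{2}$ the increments are independent, $\rho(r) = I(r = 0)$, and the formula returns $12/n$ exactly; for $H < \frac{5}{8}$ the excess fourth moment visibly decays like $1/n$.

```python
import numpy as np

def rho(r, H):
    """Covariance (5.16) of the increment sequence of fBm with Hurst index H."""
    r = abs(r)
    return 0.5 * (abs(r + 1) ** (2 * H) + abs(r - 1) ** (2 * H) - 2 * r ** (2 * H))

def excess_fourth_moment(n, H):
    """E[F_{n,H}^4] - 3 via (5.20): the quadruple sum is trace(R^4),
    and sigma_n^2 = 2 * trace(R^2), with R[k, l] = rho(k - l)."""
    idx = np.arange(n)
    R = rho(idx[:, None] - idx[None, :], H)
    sigma2 = 2.0 * np.trace(R @ R)
    return 48.0 * np.trace(R @ R @ R @ R) / sigma2 ** 2
```

For example, `excess_fourth_moment(50, 0.5)` returns exactly $12/50 = 0.24$, matching the independent-increment computation, while for $H = 0.3$ the value shrinks roughly fourfold as $n$ goes from 50 to 200, consistent with the $1/\sqrt{n}$ rate in (5.14).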


In Nourdin and Peccati [30], the bounds in (5.8) are applied to obtain the following improvement of (5.14) for $H \in (0, \frac{3}{4})$.

Theorem 5.5 Let $F_{n,H}$ be as defined in (5.13). Then

$$d_{TV}(\mathscr{L}(F_{n,H}), N(0,1)) \propto \begin{cases} \dfrac{1}{\sqrt n} & \text{if } H \in \big(0, \tfrac{2}{3}\big) \\[4pt] \dfrac{(\log n)^2}{\sqrt n} & \text{if } H = \tfrac{2}{3} \\[4pt] n^{6H - \frac{9}{2}} & \text{if } H \in \big(\tfrac{2}{3}, \tfrac{3}{4}\big), \end{cases}$$

where for nonnegative sequences $(u_n)$ and $(v_n)$, we write $v_n \propto u_n$ to mean $0 < \liminf v_n/u_n \le \limsup v_n/u_n < \infty$.

For $H > \frac{3}{4}$, $F_{n,H}$ does not converge to a Gaussian distribution. Instead, it converges to the so-called Rosenblatt distribution, which belongs to the second Wiener chaos and is therefore not Gaussian.

The expository paper by Nourdin [27], the survey paper by Peccati [35] with an emphasis on more recent results, and the book by Nourdin and Peccati [29] cover many topics and give a detailed development of this new area of normal approximation.

Acknowledgments I would like to thank Ivan Nourdin for some very helpful discussions during the course of writing this paper, and for reading the drafts of this paper and giving very helpful comments. This work is partially supported by Grant C-146-000-034-001 and Grant R-146-000-182-112 from the National University of Singapore.

References

1. Arratia, R., Goldstein, L., Gordon, L.: Poisson approximation and the Chen-Stein method. Statist. Sci. 5, 403–434 (1990). With comments and a rejoinder by the authors
2. Barbour, A.D.: Stein's method and Poisson process convergence. J. Appl. Probab. 25A, 175–184 (1988)
3. Barbour, A.D.: Stein's method for diffusion approximations. Probab. Theory Relat. Fields 84, 297–322 (1990)
4. Barbour, A.D., Chen, L.H.Y. (eds.): An Introduction to Stein's Method. Lecture Notes Series No. 4, Institute for Mathematical Sciences, National University of Singapore, Singapore University Press and World Scientific Publishing (2005)
5. Barbour, A.D., Holst, L., Janson, S.: Poisson Approximation. Oxford Studies in Probability No. 2. Oxford University Press, New York (1992)
6. Bolthausen, E.: An estimate of the remainder in a combinatorial central limit theorem. Z. Wahrsch. Verw. Gebiete 66, 379–386 (1984)
7. Breuer, P., Major, P.: Central limit theorems for nonlinear functionals of Gaussian fields. J. Multivariate Anal. 13(3), 425–441 (1983)
8. Chatterjee, S.: Fluctuations of eigenvalues and second order Poincaré inequalities. Probab. Theory Relat. Fields 143, 1–40 (2009)
9. Chatterjee, S., Diaconis, P., Meckes, E.: Exchangeable pairs and Poisson approximation. Probab. Surv. 2, 64–106 (2005)
10. Chen, L.H.Y.: Poisson approximation for dependent trials. Ann. Probab. 3, 534–545 (1975)
11. Chen, L.H.Y.: Stein's method: some perspectives with applications. In: Accardi, L., Heyde, C.C. (eds.) Probability Towards 2000. Lecture Notes in Statistics No. 128, pp. 97–122. Springer, New York (1998)
12. Chen, L.H.Y., Goldstein, L., Shao, Q.M.: Normal Approximation by Stein's Method. Probability and its Applications. Springer, Heidelberg (2011)
13. Chen, L.H.Y., Poly, G.: Stein's method, Malliavin calculus, Dirichlet forms and the fourth moment theorem. In: Chen, Z.-Q., Jacob, N., Takeda, M., Uemura, T. (eds.) Festschrift Masatoshi Fukushima. Interdisciplinary Mathematical Sciences, vol. 17, pp. 107–130. World Scientific (2015)
14. Chen, L.H.Y., Röllin, A.: Approximating dependent rare events. Bernoulli 19, 1243–1267 (2013)
15. Chen, L.H.Y., Röllin, A.: Stein couplings for normal approximation. Preprint (2013)
16. Chen, L.H.Y., Shao, Q.M.: A non-uniform Berry-Esseen bound via Stein's method. Probab. Theory Relat. Fields 120(3), 236–254 (2001)
17. Chen, L.H.Y., Shao, Q.M.: Normal approximation under local dependence. Ann. Probab. 32(3), 1727–2303 (2004)
18. Chen, L.H.Y., Shao, Q.M.: Stein's method for normal approximation. In: Barbour, A.D., Chen, L.H.Y. (eds.) An Introduction to Stein's Method. Lecture Notes Series No. 4, pp. 1–59. Institute for Mathematical Sciences, National University of Singapore, Singapore University Press and World Scientific (2005)
19. Diaconis, P., Holmes, S.: Stein's Method: Expository Lectures and Applications. IMS Lecture Notes Monogr. Ser. 46, Inst. Math. Statist., Beachwood, OH (2004)
20. Goldstein, L., Reinert, G.: Stein's method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7(4), 837–1139 (1997)
21. Götze, F.: On the rate of convergence in the multivariate CLT. Ann. Probab. 19, 724–739 (1991)
22. Hoeffding, W.: A combinatorial central limit theorem. Ann. Math. Stat. 22, 558–566 (1951)
23. Hörmander, L.: Hypoelliptic second order differential equations. Acta Math. 119, 147–171 (1967)
24. Ledoux, M., Nourdin, I., Peccati, G.: Stein's method, logarithmic Sobolev and transport inequalities. Geom. Funct. Anal. 25(1), 256–306 (2015)
25. Malliavin, P.: Stochastic calculus of variations and hypoelliptic operators. In: Proc. Int. Symp. on Stoch. Diff. Equations, Kyoto 1976, pp. 195–263. Wiley (1978)
26. Nelson, E.: The free Markoff field. J. Funct. Anal. 12, 211–227 (1973)
27. Nourdin, I.: Lectures on Gaussian approximations with Malliavin calculus. Sém. Probab. XLV, pp. 3–89. Springer, Berlin (2013)
28. Nourdin, I., Peccati, G.: Stein's method on Wiener chaos. Probab. Theory Relat. Fields 145(1–2), 75–118 (2009)
29. Nourdin, I., Peccati, G.: Normal Approximation with Malliavin Calculus: From Stein's Method to Universality. Cambridge Tracts in Mathematics, vol. 192. Cambridge University Press, Cambridge (2012)
30. Nourdin, I., Peccati, G.: The optimal fourth moment theorem. Proc. Am. Math. Soc., to appear (2013)
31. Nourdin, I., Peccati, G., Podolskij, M.: Quantitative Breuer-Major theorems. Stoch. Proc. Appl. 121(4), 793–812 (2011)
32. Nourdin, I., Peccati, G., Swan, Y.: Entropy and the fourth moment phenomenon. J. Funct. Anal. 266, 3170–3207 (2013)
33. Nualart, D.: The Malliavin Calculus and Related Topics, 2nd edn. Springer, Berlin (2006)
34. Nualart, D., Peccati, G.: Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33, 177–193 (2005)
35. Peccati, G.: Quantitative CLTs on a Gaussian space: a survey of recent developments. ESAIM Proc. Surv. 44, 61–78 (2014)
36. Peccati, G., Tudor, C.A.: Gaussian limits for vector-valued multiple stochastic integrals. Sém. Probab. XXXVIII, pp. 247–262. Springer, Berlin (2005)
37. Ross, N.: Fundamentals of Stein's method. Probab. Surv. 8, 210–293 (2011)
38. Stein, C.: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. II: Probability Theory, pp. 583–602. University of California Press, Berkeley (1972)
39. Stein, C.: Approximate Computation of Expectations. IMS Lecture Notes Monogr. Ser. 7. Inst. Math. Statist., Hayward (1986)
40. Wald, A., Wolfowitz, J.: Statistical tests based on permutations of the observations. Ann. Math. Stat. 15, 358–372 (1944)
