Methods of Mathematical Physics
Dr. M. G. Worster
Michælmas 1997

These notes are maintained by Paul Metcalfe. Comments and corrections to [email protected].

Revision: 2.6 Date: 2004/08/23 07:14:43

The following people have maintained these notes:

Paul Metcalfe

Contents

Introduction

1 Complex Variables
  1.1 Conventions
  1.2 Cauchy Principal Value
  1.3 Analytic Continuation
  1.4 Multivalued functions
      1.4.1 Branch cut integrals
      1.4.2 Riemann surfaces

2 Special Functions
  2.1 The Gamma Function
  2.2 The Beta function
  2.3 The Riemann zeta function
      2.3.1 Applications to number theory

3 Second order linear ODEs
  3.1 Method of Frobenius
      3.1.1 Bessel's Equation
  3.2 Classification of equations by singularities
      3.2.1 Equations with no regular singular points
      3.2.2 Equations with one regular singular point
      3.2.3 Equations with two regular singular points
      3.2.4 Equations with three regular singular points
  3.3 Integral representation of solutions

4 Asymptotic Expansions
  4.1 Motivation
  4.2 Definitions and examples
      4.2.1 Manipulations
  4.3 Stokes Phenomenon
  4.4 Asymptotic Approximation of Integrals
      4.4.1 Integration by parts
      4.4.2 Watson's Lemma
      4.4.3 Laplace's Method
      4.4.4 The method of stationary phase
      4.4.5 Method of Steepest Descents
  4.5 Liouville-Green Functions
      4.5.1 Connection formulae

5 Laplace Transforms
  5.1 Definition and simple properties
      5.1.1 Asymptotic Limits
      5.1.2 Convolutions
  5.2 Inversion
  5.3 Application to differential equations
      5.3.1 Ordinary differential equations
      5.3.2 Partial differential equations

Introduction

These notes are based on the course "Methods of Mathematical Physics" given by Dr. M. G. Worster in Cambridge in the Michælmas Term 1997. These typeset notes are totally unconnected with Dr. Worster. Recommended books will be discussed at the end.

Other sets of notes are available for different courses. At the time of typing these courses were: Probability, Analysis, Methods, Fluid Dynamics 1, Geometry, Foundations of QM, Methods of Math. Phys, Waves (etc.), General Relativity, Combinatorics, Discrete Mathematics, Further Analysis, Quantum Mechanics, Quadratic Mathematics, Dynamics of D.E.'s, Electrodynamics, Fluid Dynamics 2, Statistical Physics, Dynamical Systems, Bifurcations in Nonlinear Convection.

They may be downloaded from http://www.istari.ucam.org/maths/.


Chapter 1

Complex Variables

1.1 Conventions

Various people use different meanings for analytic, regular, etc. We will use these:

Definition. A function is analytic at a point iff there exists an open neighbourhood of the point in which the function is complex differentiable. This is true iff the function has a Taylor expansion about that point.

Definition. A function is analytic in a domain iff it is analytic at every point in the domain and single valued in the domain.

Definition. A function is singular at a point iff it is not analytic at the point.

Definition. A function has an isolated singularity at a point iff it is analytic in some punctured ball about the point, or iff it has a Laurent expansion about the point.

1.2 Cauchy Principal Value

The integral $\int_{-1}^{2}\frac{dx}{x}$ does not exist. If we consider
$$I(\eta,\xi) = \int_{-1}^{-\eta}\frac{dx}{x} + \int_{\xi}^{2}\frac{dx}{x} = \log\frac{2\eta}{\xi}$$
then we see that $\lim_{\eta,\xi\to 0} I(\eta,\xi)$ can be made to do anything we want. The particular choice $\eta = \xi$ gives a limit of $\log 2$, and this is the Cauchy principal value of the original integral. More formally: if $f(x)$ has a simple pole at $x = c$ with $a < c < b$ then the Cauchy principal value of $\int_a^b f(x)\,dx$ is defined to be
$$\lim_{\epsilon\to 0}\left[\int_a^{c-\epsilon} f(x)\,dx + \int_{c+\epsilon}^{b} f(x)\,dx\right].$$
It is written $P\!\int_a^b f(x)\,dx$. For instance, $P\!\int_{-1}^{1}\frac{dx}{x} = 0$.
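A quick numerical sanity check, not part of the original notes: the symmetric-limit definition can be evaluated directly, and an asymmetric excision changes the answer exactly as the argument above warns. The helper `principal_value` is a hypothetical name of our own.

```python
import numpy as np
from scipy.integrate import quad

def principal_value(f, a, b, c, eps=1e-6):
    """Symmetric-limit (Cauchy principal value) integral of f over [a, b], simple pole at c."""
    return quad(f, a, c - eps)[0] + quad(f, c + eps, b)[0]

f = lambda x: 1.0 / x
print(principal_value(f, -1, 2, 0), np.log(2))    # P int_{-1}^{2} dx/x = log 2
print(principal_value(f, -1, 1, 0))               # P int_{-1}^{1} dx/x = 0
# an asymmetric excision (eta = 2*xi) gives log 4 instead, as I(eta, xi) = log(2*eta/xi) predicts:
print(quad(f, -1, -2e-6)[0] + quad(f, 1e-6, 2)[0], np.log(4))
```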


Consider the complex contour $\Gamma$ shown (the real axis from $a$ to $b$, indented by a small semicircle $C_\epsilon$ below the pole) and let $I = \int_\Gamma f(z)\,dz$, and let $f$ be analytic except for a simple pole at $c$. By Cauchy's theorem, $\int_\Gamma = \int_{\Gamma'}$. In the limit $\epsilon\to 0$, we get
$$\int_{\Gamma'} f(z)\,dz = P\!\int_a^b f(x)\,dx + \lim_{\epsilon\to 0}\int_{C_\epsilon} f(z)\,dz.$$
On $C_\epsilon$, $z = c + \epsilon e^{ı\theta}$ for $\pi < \theta < 2\pi$. Since $f$ has only a simple pole at $z = c$ we get $f(z) = \frac{\operatorname{Res}}{z-c} + a_0 + \dots$. Then $\lim_{\epsilon\to 0}\int_{C_\epsilon} f(z)\,dz = \pi ı\operatorname{Res}$ by Cauchy's theorem. Thus finally
$$\int_\Gamma f(z)\,dz = P\!\int_a^b f(x)\,dx + \pi ı\operatorname{Res},$$
where, as the name suggests, $\operatorname{Res}$ is the residue of $f$ at $z = c$. Similarly, going the other way round,
$$\int_\Gamma f(z)\,dz = P\!\int_a^b f(x)\,dx - \pi ı\operatorname{Res}.$$

We can extend this idea to more general complex contours, such as the indented contours $\Gamma_1$ and $\Gamma_2$ sketched, to get
$$\int_{\Gamma_1} f(z)\,dz = P\!\int f(x)\,dx - \pi ı\operatorname{Res} \quad\text{and}\quad \int_{\Gamma_2} f(z)\,dz = P\!\int f(x)\,dx + \pi ı\operatorname{Res}.$$

Example. Find $\int_\Gamma \cot z\,dz$.

Solution.
$$\int_\Gamma \cot z\,dz = P\!\int_{-\infty}^{\infty}\cot x\,dx - \pi ı = 0 - \pi ı$$
by symmetry.

Example. Find $\int_{-\infty}^{\infty}\frac{1-\cos x}{x^2}\,dx$.

Method 1. As the integrand is analytic we can deform the real axis into the contour $\Gamma$, thus
$$\int_{-\infty}^{\infty}\frac{1-\cos x}{x^2}\,dx = \int_\Gamma\frac{1-\cos z}{z^2}\,dz.$$
Now we consider $\int_{\Gamma\cup C_R}\frac{1-e^{ız}}{z^2}\,dz$ and take the real part. This integral is $2\pi ı\operatorname{Res}$.

Method 2. We consider
$$\int_\Gamma\frac{1-e^{ız}}{z^2}\,dz = P\!\int_{-\infty}^{\infty}\frac{1-e^{ıx}}{x^2}\,dx - ı\pi\operatorname{Res} = P\!\int_{-\infty}^{\infty}\frac{1-e^{ıx}}{x^2}\,dx - \pi = 0.$$
Thus $P\!\int_{-\infty}^{\infty}\frac{1-\cos x}{x^2}\,dx = \pi$, but this is the actual integral $\int_{-\infty}^{\infty}\frac{1-\cos x}{x^2}\,dx$, as the integrand has no singularity at 0.
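A short numerical check, not in the original notes, that the example integral really is $\pi$; since the integrand is bounded at the origin, no principal value is needed, exactly as the argument concludes.

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # (1 - cos x)/x^2 is bounded at the origin (limit 1/2), so no principal value is needed
    return 0.5 if x == 0 else (1 - np.cos(x)) / x**2

R = 2000.0
approx = 2 * quad(f, 0, R, limit=2000)[0]   # even integrand; the neglected tail is O(1/R)
print(approx, np.pi)
```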

Singularities at Infinity

If the integral diverges at $\infty$ define
$$P\!\int_{-\infty}^{\infty} f(x)\,dx = \lim_{R\to\infty}\int_{-R}^{R} f(x)\,dx.$$
For instance, $P\!\int_{-\infty}^{\infty}\frac{dx}{x-ı} = \lim_{R\to\infty}\log\frac{R-ı}{-R-ı} = ı\pi$ on the principal branch.

Theorem. The function $f(z) = \int_{t_1}^{t_2} g(z,t)\,dt$ is analytic in some domain $D\subset\mathbb{C}$ if $g(z,t)$ is analytic in $z$ for each $t\in(t_1,t_2)$. Furthermore $\frac{df}{dz} = \int_{t_1}^{t_2}\frac{\partial g}{\partial z}(z,t)\,dt$.

Proof. Omitted; see Copson page 108.

If either $t_1$ or $t_2$ is infinite then the convergence of the integral must be uniform for $z\in D$. This result extends to
$$f(z) = \int_\Gamma g(z,\zeta)\,d\zeta$$
simply by parametrizing $\Gamma$.
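A quick numerical illustration, not in the notes, of the theorem's conclusion for a concrete integrand of our own choosing, with $z$ taken real for the finite-difference comparison.

```python
import numpy as np
from scipy.integrate import quad

# f(z) = int_0^1 e^{z t} dt; the theorem gives df/dz = int_0^1 t e^{z t} dt.
z, h = 0.7, 1e-5
f = lambda zz: quad(lambda t: np.exp(zz * t), 0, 1)[0]
dfdz_theorem = quad(lambda t: t * np.exp(z * t), 0, 1)[0]
dfdz_numeric = (f(z + h) - f(z - h)) / (2 * h)      # centred finite difference of f
print(dfdz_theorem, dfdz_numeric)                    # agree to high accuracy
```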

1.3 Analytic Continuation

Theorem. Suppose $D_1$ and $D_2$ are disjoint simply connected domains which share a piece of common boundary $\Gamma$, with $D = D_1\cup D_2\cup\Gamma$ simply connected as well. Let $f_1(z)$ be analytic on $D_1$ and continuous on $D_1\cup\Gamma$, and similarly let $f_2(z)$ be analytic on $D_2$ and continuous on $D_2\cup\Gamma$. Suppose further that $f_1 = f_2$ on $\Gamma$. Then if we define
$$g(z) = \begin{cases} f_1(z) & z\in D_1\cup\Gamma \\ f_2(z) & z\in D_2 \end{cases}$$
$g$ is analytic on $D$. $g$ is called the analytic continuation of $f_1$ from $D_1$ into $D$.


Proof. Consider $I = \oint_C g(z)\,dz$ for some contour $C\subset D$. Now $I = 0$ if $C\subset D_1$ or $C\subset D_2$, so we just need to consider the sketched case, where $C$ crosses $\Gamma$. Then $\oint_C = \oint_{C_1} + \oint_{C_2} = 0$ and thus $g$ is analytic by Morera's theorem.

The analytic continuation is unique (if it exists) by the following theorem.

Theorem. If $f$ is analytic in $D$ and has an infinite sequence of zeroes with a limit point in $D$ then $f\equiv 0$ on $D$.

Proof. Let the limit point be at $a$; then $f(a) = 0$ by continuity. Either $f\equiv 0$ or $f(z) = (z-a)^m\phi(z)$ with $\phi$ analytic and $\phi(a)\neq 0$. Now $\phi$ is continuous and so there is a neighbourhood of $a$ on which $\phi\neq 0$. Thus there is a punctured neighbourhood of $a$ on which $f$ is nonzero, contradicting the fact that the zeroes accumulate at $a$.

If $g_1$ and $g_2$ are both analytic continuations then $g_1 - g_2 = 0$ on $D_1$ and so $g_1\equiv g_2$ on $D$.

Continuation of power series

Suppose (for instance) that by hook or by crook we have obtained the power series expansion $f(z) = \sum_n z^n$. This is analytic in $|z| < 1$. Then we can form a new series by Taylor expansion about some other point $z_0$ with $|z_0| < 1$. Hopefully this new power series has a circle of convergence part of which lies outside the original domain. We can continue this process to try to cover $\mathbb{C}$, but we may run into singularities or branch cuts.

Functions defined by integrals

Suppose we have
$$f(z) = \int_{-\infty}^{\infty}\frac{e^{-t^2}}{z-t}\,dt$$
defined for $\Im z\neq 0$. Can we find an analytic continuation of $f_1(z) = f(z)$ for $\Im z > 0$ into $\mathbb{C}$? Define $g(z) = \int_\Gamma\frac{e^{-\zeta^2}}{z-\zeta}\,d\zeta$, with $\Gamma$ chosen to lie below $\zeta = z$. Then $g(z)$ is analytic.

1. If $\Im z > 0$ we can deform $\Gamma$ into $\mathbb{R}$ to get $g(z) = f_1(z)$.

2. If $\Im z = 0$ we get $g(z) = P\!\int_{-\infty}^{\infty}\frac{e^{-t^2}}{z-t}\,dt + \pi ı\operatorname{Res}$.

3. If $\Im z < 0$ we use the $\Gamma$ sketched (dipping below $z$) to get $g(z) = f(z) + 2\pi ı\operatorname{Res}$.

There are functions defined by integrals which we cannot continue, for example
$$f(z) = \int_{-\infty}^{\infty}\frac{e^{-t^2}}{z^2+t^2}\,dt$$
cannot be continued from $\Im z > 0$ into $\mathbb{C}$ — there are two “pinching” singularities which prevent deformation of the contour as above.
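A numerical illustration, not part of the original notes, of why the continuation in item 3 needs the residue term: the boundary values of $f$ taken from just above and just below the real axis differ by $2\pi ı$ times the residue $-e^{-z^2}$. The grid and the test point $x = 0.5$ below are arbitrary choices.

```python
import numpy as np

# f(z) = int e^{-t^2}/(z - t) dt evaluated just above and just below the real axis.
# The jump f(x + i*d) - f(x - i*d) tends to -2*pi*i*exp(-x^2) as d -> 0, i.e. 2*pi*i
# times the residue of e^{-zeta^2}/(z - zeta) at zeta = z, which is -e^{-z^2}.
t = np.linspace(-10.0, 10.0, 400001)
x, d = 0.5, 1e-3

def f(z):
    return np.trapz(np.exp(-t**2) / (z - t), t)

print(f(x + 1j * d) - f(x - 1j * d))
print(-2j * np.pi * np.exp(-x**2))
```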

1.4 Multivalued functions

The usual example is $f(z) = z^{1/2}$. If we take $z = re^{ı\theta}$ then $f(z) = \sqrt{r}\,e^{ı\theta/2}$. If we trace $f(z)$ as $z$ moves around a closed curve not encircling the origin we find that $\theta$ returns to its original value and $f$ is continuous. If the curve encircles the origin then $\theta$ increases by $2\pi$ and $f(z)\to -\sqrt{r}\,e^{ı\theta/2}$. $f$ is singular at $z = 0$ — it is not analytic because it is not single valued in any neighbourhood of 0. $f(z)$ has neither a Laurent expansion nor a residue at 0. 0 is called a branch point. $f$ has a Taylor expansion about $z = 1$ (for example) with circle of convergence $|z-1| < 1$. We can extend the Taylor expansion by analytic continuation.

The continued function is discontinuous across a curve (or ray) from the origin to infinity. This curve is called a branch cut, and $f$ is continued analytically to a simply-connected domain which excludes the branch cut. Another favourite example is $f(z) = \log z = \log r + ı\theta$. Now $\Im f$ increases by $2\pi$ on any trip around the origin and so $z = 0$ is a branch point.

A slightly more complicated example is $f(z) = (z^2-1)^{1/2}$, which has branch points at $z = \pm 1$. A useful way of doing it if we want $f$ in the neighbourhood of the origin is to cut from $+1$ to $+\infty$ and from $-1$ to $-\infty$ along the real axis, but if we want to consider $|z|\gg 1$ we may prefer to send both cuts away along the negative real axis, as shown on the left of the sketch. It is easy to see that this is equivalent to the single branch cut from $-1$ to $+1$ shown on the right (which is why it is useful for $|z|$ large).


1.4.1 Branch cut integrals

These can be considered simplest by example. We thus look at
$$I = \oint_C\left(z^2-1\right)^{1/2}dz,$$
where $C$ is any closed curve encircling the origin outside $|z| = 1$. We introduce a branch cut on the real axis between $\pm 1$, and as the integrand is analytic in the cut plane we can deform the contour onto the cut. It is easy to see that the contributions from the small circles about $z = \pm 1$ tend to zero. By fiddling some more we get that $f(x) = (1-x^2)^{1/2}e^{ı\pi/2}$ on $\Gamma_1$ and $f(x) = (1-x^2)^{1/2}e^{-ı\pi/2}$ on $\Gamma_2$. Thus
$$I = \left(e^{-ı\pi/2} - e^{ı\pi/2}\right)\int_{-1}^{1}\left(1-x^2\right)^{1/2}dx = -\pi ı.$$

Another example, where we deliberately introduce a branch cut, is $I = \int_0^\infty f(x)\,dx$ with $f$ not even. We consider $\oint f(z)\log z\,dz$ around a keyhole contour about the positive real axis, as follows. If $f$ is sufficiently nice then $\int_{C_R}\to 0$ as $R\to\infty$. We have
$$\int_{\Gamma_1} = \int_0^\infty f(x)\log x\,dx \quad\text{and}\quad \int_{\Gamma_2} = -\int_0^\infty f(x)\left(\log x + 2\pi ı\right)dx.$$
Adding these two (and applying the residue theorem to the closed contour) gives
$$\int_0^\infty f(x)\,dx = -\sum\operatorname{Res}\left(f(z)\log z\right).$$
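A numerical check of the first example, not in the notes. The assumption made here is that writing the integrand as the product of principal square roots realises a branch whose only cut is $[-1,1]$; the predicted value $-\pi ı$ then comes from the $-1/(2z)$ term in the large-$z$ expansion.

```python
import numpy as np

# Contour C: circle of radius 2.  sqrt(z-1)*sqrt(z+1) (principal square roots) is
# continuous across (-inf, -1) because the two factors jump sign together, so the
# only cut is [-1, 1]; for large z it behaves like z - 1/(2z) + ...
theta = np.linspace(0.0, 2.0 * np.pi, 200001)
z = 2.0 * np.exp(1j * theta)
integrand = np.sqrt(z - 1) * np.sqrt(z + 1) * 2j * np.exp(1j * theta)   # f(z(theta)) * dz/dtheta
print(np.trapz(integrand, theta), -np.pi * 1j)
```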

1.4.2 Riemann surfaces

We consider $f_1(z) = z^{1/2}$ with $f_1(x) = \sqrt{x}$ for $x\in\mathbb{R}$ positive, and $f_2(z) = z^{1/2}$ with $f_2(x) = -\sqrt{x}$ for $x\in\mathbb{R}$ positive. We continue $f_1$ around the origin from the positive real axis. At $z = re^{ı\pi}$ we can continue $f_1$ onto a copy of the complex plane where it becomes $f_2$. If we follow $f_2$ around again until its branch cut on the copy of the negative real axis we find that we can jump back onto our original complex plane. We have a function which is analytic everywhere in an enlarged space with two “Riemann sheets”. Closed curves in this space encircle the origin an even number of times. Another example is $f(z) = \log z$, which has Riemann sheets in the form of an infinite spiral ramp.

Chapter 2

Special Functions

This chapter deals mainly with the gamma function and its relatives. Other special functions are encountered in the next chapter.

2.1 The Gamma Function

This is an analytic continuation of the factorial function from the positive integers into $\mathbb{C}$. We define
$$\Gamma(z) = \int_0^\infty t^{z-1}e^{-t}\,dt. \tag{2.1}$$
This integral is well defined for $\Re z > 0$. For $\Re z > 1$ we integrate by parts to get the recurrence $\Gamma(z) = (z-1)\Gamma(z-1)$, which we use to continue $\Gamma$ into $\Re z \le 0$; the continued function has simple poles at $z = 0, -1, -2, \dots$.

Note that (by change of variable in (2.1))
$$\Gamma(m) = 2\int_0^\infty x^{2m-1}e^{-x^2}\,dx. \tag{2.2}$$
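A short numerical check, not in the original notes, of (2.1), the recurrence, and the change-of-variable form (2.2); scipy is used for the quadrature and the test values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def gamma_21(z):
    # Direct evaluation of (2.1); fine for real z >= 1 here.
    return quad(lambda t: t**(z - 1) * np.exp(-t), 0, np.inf)[0]

def gamma_22(m):
    # The change-of-variable form (2.2), t = x^2.
    return 2 * quad(lambda x: x**(2 * m - 1) * np.exp(-x**2), 0, np.inf)[0]

print(gamma_21(5.0), 4 * 3 * 2 * 1)           # Gamma(5) = 4!
print(gamma_21(3.7), 2.7 * gamma_21(2.7))     # recurrence Gamma(z) = (z-1) Gamma(z-1)
print(gamma_22(0.5)**2, np.pi)                # Gamma(1/2)^2 = pi, cf. section 2.2
```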


2.2 The Beta function

We define
$$B(m,n) = \int_0^1 t^{m-1}(1-t)^{n-1}\,dt. \tag{2.3}$$
This is well-defined for $\Re m, \Re n > 0$. We now derive a formula for the beta function in terms of the gamma function, using equation (2.2):
$$\Gamma(m)\Gamma(n) = 4\int_0^\infty\!\int_0^\infty x^{2m-1}y^{2n-1}e^{-(x^2+y^2)}\,dx\,dy.$$
Changing to polar co-ordinates we obtain
$$\Gamma(m)\Gamma(n) = 4\int_0^\infty r^{2(m+n)-1}e^{-r^2}\,dr\int_0^{\pi/2}\cos^{2m-1}\theta\,\sin^{2n-1}\theta\,d\theta = \Gamma(m+n)\int_0^{\pi/2} 2\cos^{2m-1}\theta\,\sin^{2n-1}\theta\,d\theta.$$
Putting $\tau = \cos^2\theta$ gives
$$\Gamma(m)\Gamma(n) = \Gamma(m+n)\int_0^1 \tau^{m-1}(1-\tau)^{n-1}\,d\tau = \Gamma(m+n)B(m,n).$$
Thus we have the required formula,
$$B(m,n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)} \tag{2.4}$$
and another integral representation of the beta function
$$B(m,n) = \int_0^{\pi/2} 2\cos^{2m-1}\theta\,\sin^{2n-1}\theta\,d\theta. \tag{2.5}$$
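Another quick check, not in the notes: the definitions (2.3) and (2.5) and the formula (2.4) agree numerically; the values $m = 2.3$, $n = 1.7$ are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

m, n = 2.3, 1.7
beta_def = quad(lambda t: t**(m - 1) * (1 - t)**(n - 1), 0, 1)[0]                                  # (2.3)
beta_trig = quad(lambda th: 2 * np.cos(th)**(2*m - 1) * np.sin(th)**(2*n - 1), 0, np.pi / 2)[0]    # (2.5)
print(beta_def, beta_trig, gamma(m) * gamma(n) / gamma(m + n))                                     # all equal by (2.4)
```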

Special cases

Putting $m = n = \tfrac12$ into (2.4) and (2.5) we get $\pi = \Gamma(\tfrac12)^2$, which ought to be familiar, although perhaps not in quite this form.

If $m = z$ and $n = 1-z$ we require $0 < \Re z < 1$. Then
$$B(z, 1-z) = \int_0^1 t^{z-1}(1-t)^{-z}\,dt = I \quad\text{say}.$$
Putting $t = \frac{1}{1+s}$ we get
$$I = \int_0^\infty\frac{s^{-z}}{1+s}\,ds.$$
Evaluating this as a branch cut integral gives $I\left(1 - e^{-2\pi ı z}\right) = 2\pi ı e^{-\pi ı z}$ and hence
$$I = \frac{\pi}{\sin\pi z}.$$
Thus $B(z, 1-z) = \frac{\pi}{\sin\pi z}$ for $0 < \Re z < 1$, and by analytic continuation
$$\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin\pi z} \tag{2.6}$$
on $\mathbb{C}\setminus\mathbb{Z}$. In fact it is true on the integers as well — if interpreted correctly!

We now want $B(z,z)$ with $\Re z > 0$. (2.3) gives
$$B(z,z) = \int_0^1\left(t-t^2\right)^{z-1}dt = 2\int_{1/2}^{1}\left(t-t^2\right)^{z-1}dt,$$
to avoid a branch cut on putting $s = (2t-1)^2$, whence
$$B(z,z) = 2^{1-2z}\int_0^1 s^{-1/2}(1-s)^{z-1}\,ds = 2^{1-2z}B(\tfrac12, z).$$
We relate this to the gamma function using (2.4) to get Legendre's duplication formula
$$\Gamma(z)\Gamma(z+\tfrac12) = \pi^{1/2}\,2^{1-2z}\,\Gamma(2z). \tag{2.7}$$

Now we do
$$B(z, n+1) = \frac{\Gamma(z)\Gamma(n+1)}{\Gamma(z+n+1)} = \int_0^1 t^{z-1}(1-t)^n\,dt = n^{-z}\int_0^n \tau^{z-1}\left(1-\frac{\tau}{n}\right)^{n}d\tau.$$
We take the limit as $n\to\infty$ to get
$$\lim_{n\to\infty}\frac{\Gamma(z)\Gamma(n+1)\,n^z}{\Gamma(z+n+1)} = \int_0^\infty \tau^{z-1}e^{-\tau}\,d\tau = \Gamma(z). \tag{2.8}$$
We can rearrange this to get Euler's limit for the gamma function:
$$\Gamma(z) = \lim_{n\to\infty}\frac{n^z\,n!}{z(z+1)\cdots(z+n)}, \tag{2.9}$$
which can be thought of as showing the poles at $\{0,-1,-2,\dots\}$. We can use (2.8) to get
$$\lim_{n\to\infty}\frac{n!\,n^z}{(n+z)!} = 1. \tag{2.10}$$
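A small numerical illustration, not in the notes, of how Euler's limit (2.9) approaches $\Gamma(z)$; the running-product form below is only to avoid overflowing $n!$, and $z = \tfrac12$ is an arbitrary test value.

```python
from math import gamma

def euler_limit(z, n):
    # n^z * n! / (z (z+1) ... (z+n)), written as a running product to avoid overflow.
    value = n**z / z
    for k in range(1, n + 1):
        value *= k / (z + k)
    return value

for n in (10, 100, 1000, 10000):
    print(n, euler_limit(0.5, n))
print("Gamma(1/2) =", gamma(0.5))
```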

We introduce the Hankel contour shown here: it comes in from $-\infty$ below the negative real axis, encircles the origin once anticlockwise, and returns to $-\infty$ above the axis.

Consider $I(z) = \int_C e^t t^{-z}\,dt$, which is also written $\int_{-\infty}^{(0+)}$. On the lower straight portion $\Gamma_1$ put $t = re^{-ı\pi}$ and on the upper portion $\Gamma_2$ put $t = re^{ı\pi}$, so that
$$\int_{\Gamma_1} = \int_\infty^0 e^{-r}r^{-z}e^{ı\pi z}e^{-ı\pi}\,dr = e^{ı\pi z}\int_0^\infty e^{-r}r^{-z}\,dr, \qquad \int_{\Gamma_2} = -e^{-ı\pi z}\int_0^\infty e^{-r}r^{-z}\,dr.$$
The contribution from the small circle about the origin is $\sim\epsilon^{1-z}$, which tends to zero if $\Re z < 1$. Collecting these results, $I(z) = \left(e^{ı\pi z}-e^{-ı\pi z}\right)\Gamma(1-z) = 2ı\sin(\pi z)\,\Gamma(1-z)$, so by the reflection formula (2.6)
$$\frac{1}{\Gamma(z)} = \frac{1}{2\pi ı}\int_{-\infty}^{(0+)} e^t t^{-z}\,dt. \tag{2.11}$$
We proved this for $\Re z < 1$, but both sides are entire, so the result holds for all $z$ by analytic continuation.
2.3 The Riemann zeta function

The Riemann zeta function is defined for $\Re z > 1$ by
$$\zeta(z) = \sum_{n=1}^{\infty} n^{-z}. \tag{2.12}$$
Some “famous” results are $\zeta(2) = \frac{\pi^2}{6}$ and $\zeta(4) = \frac{\pi^4}{90}$. We use the Hankel representation of the gamma function (2.11) to get
$$\frac{2\pi ı\, n^{-z}}{\Gamma(1-z)} = \int_{-\infty}^{(0+)} e^{n\tau}\tau^{z-1}\,d\tau$$
and so
$$\zeta(z) = \frac{\Gamma(1-z)}{2\pi ı}\int_{-\infty}^{(0+)}\frac{\tau^{z-1}}{e^{-\tau}-1}\,d\tau.$$
The integral is entire in $z$, so $\zeta(z)$ can only be singular at the poles of $\Gamma(1-z)$, that is at $z = 1, 2, \dots$; but the sum (2.12) shows $\zeta$ is finite at $z = 2, 3, \dots$, thus the only singularity is at $z = 1$. We therefore have the analytic continuation of $\zeta$:
$$\zeta(z) = \frac{\Gamma(1-z)}{2\pi ı}\int_{-\infty}^{(0+)}\frac{\tau^{z-1}}{e^{-\tau}-1}\,d\tau. \tag{2.13}$$

Something which we do not prove is the reflection formula
$$\zeta(1-z) = 2^{1-z}\pi^{-z}\cos\left(\tfrac12\pi z\right)\zeta(z)\Gamma(z), \tag{2.14}$$
which shows us that $\zeta(1-z) = 0$ at $z = 2n+1$ for $n = 1, 2, \dots$; that is, $\zeta$ vanishes at the negative even integers.

By noting that $2^{-z}\zeta(z) = \frac{1}{2^z} + \frac{1}{4^z} + \dots$ we see that $\left(1-2^{-z}\right)\zeta(z) = \frac{1}{1^z} + \frac{1}{3^z} + \dots$. Continuing this process with the rest of the primes we obtain the Euler product for $\zeta$:
$$\zeta(z) = \prod_{m=1}^{\infty}\left(1 - p_m^{-z}\right)^{-1}, \tag{2.15}$$
where $p_m$ is the $m$th prime. This is the reasoning behind the following (starred) section.
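A quick check, not in the notes, that a truncated Euler product and a truncated Dirichlet series head towards the same value at $z = 2$ (where the answer is the “famous” $\pi^2/6$).

```python
import math

# Compare the Dirichlet series (2.12) with a truncated Euler product (2.15) at z = 2.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
z = 2.0
series = sum(n**-z for n in range(1, 100001))
product = 1.0
for p in primes:
    product *= 1.0 / (1.0 - p**-z)
print(series, product, math.pi**2 / 6)   # both tend to pi^2/6 as more terms / more primes are taken
```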

2.3.1 Applications to number theory

Let $\pi(x)$ be the number of primes less than or equal to the real number $x$. Then from the Euler product (2.15) we have
$$\begin{aligned}
\log\zeta(z) &= -\sum_{m=1}^{\infty}\log\left(1-p_m^{-z}\right)\\
&= -\sum_{m=2}^{\infty}\left(\pi(m)-\pi(m-1)\right)\log\left(1-m^{-z}\right)\\
&= -\sum_{m=2}^{\infty}\pi(m)\left[\log\left(1-m^{-z}\right)-\log\left(1-(m+1)^{-z}\right)\right]\\
&= \sum_{m=2}^{\infty}\pi(m)\int_m^{m+1}\frac{z\,dx}{x\left(x^z-1\right)}
\end{aligned}$$
and so
$$\log\zeta(z) = \int_2^{\infty}\frac{z\,\pi(x)}{x\left(x^z-1\right)}\,dx.$$
This looks like some kind of transform of $\pi(x)$. We will see later that we can find approximations of $\pi(x)$ from the locations of the singularities of $\log\zeta(z)$ (or zeroes of $\zeta(z)$). In 1896 Hadamard proved that $\zeta$ has no zeroes on the line $\Re z = 1$, from which the prime number theorem $\pi(x)\sim x/\log x$ follows.

Chapter 3

Second order linear differential equations

We shall discuss equations of the general form
$$w'' + p(z)w' + q(z)w = 0. \tag{3.1}$$
The form of the solutions of this equation can be determined by the location and nature of the singularities of $p$ and $q$ in $\mathbb{C}$.

Ordinary points

$z_0$ is an ordinary point (or regular point) of (3.1) if $p$ and $q$ are both analytic at $z_0$. The behaviour of $w$ near $z = z_0$ is determined by the leading order terms of the Taylor expansions of $p$ and $q$ about $z_0$. If $p = \sum p_n(z-z_0)^n$ and $q = \sum q_n(z-z_0)^n$, and either $p_0\neq 0$ or $q_0\neq 0$, then we get
$$w'' + p_0 w' + q_0 w \sim 0. \tag{3.2}$$
This has solutions $e^{m(z-z_0)}$, which are analytic at $z = z_0$. This is enough to show that the solution of (3.1) is analytic; the argument carries over to the case $p_0 = q_0 = 0$, and shows that $w = \sum a_n(z-z_0)^n$. We can determine the coefficients $a_n$ by substitution, and the series converges at least out to the nearest singularity of $p$ or $q$ in $\mathbb{C}$.

We can see this with Legendre's equation of order 1,
$$(1-z^2)w'' - 2zw' + 2w = 0. \tag{3.3}$$

We see that $p$ and $q$ both have singularities at $z = \pm 1$, but are analytic at $z = 0$. We expand $w$ about $z = 0$, and equating coefficients gives $a_n = \frac{n-3}{n-1}a_{n-2}$. We thus get two series solutions, one of which terminates: $w = a_0\left(1 - z^2 - \tfrac13 z^4 + \dots\right)$ and $w = a_1 z$. The series has (unsurprisingly) a radius of convergence 1. $p$ and $q$ are both singular at $z = \pm 1$, but we wish to know if we can analytically continue the series around these singularities.
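A small symbolic check, not in the original notes, of the two solutions just quoted; the extra $-z^6/5$ term below is supplied by the recurrence $a_n = \frac{n-3}{n-1}a_{n-2}$, and truncating pushes the residual out to $O(z^6)$.

```python
import sympy as sp

z = sp.symbols('z')

def legendre1(w):
    """Left-hand side of Legendre's equation of order 1, (3.3)."""
    return sp.expand((1 - z**2) * sp.diff(w, z, 2) - 2 * z * sp.diff(w, z) + 2 * w)

print(legendre1(z))                                   # 0: the terminating solution w = a1*z
print(legendre1(1 - z**2 - z**4/3 - z**6/5))          # 8*z**6: residual of the truncated series
```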


Singular points

$z = a$ is a singular point if either $p$ or $q$ is singular at $z = a$. We restrict to isolated singularities (when $p$ and $q$ have Laurent expansions). Let $z = a$ be an isolated singularity and choose $R$ such that $p$ and $q$ are analytic in $D = \{z\in\mathbb{C} : 0 < |z-a| < R\}$. Let $C$ be the circle $\{z\in\mathbb{C} : |z-a| = \rho = \tfrac{R}{2}\}$ and take $z_0\in C$. We can construct two independent solutions (by series substitution, say) $w_1$ and $w_2$ about $z = z_0$, which both have radius of convergence $\rho$. We then choose $z_1\in C$ with $|z_1-z_0| < \rho$. Repeat (about 8 times) until we get back to a circle containing $z_0$. We have obtained solutions $w_1^*$ and $w_2^*$, which are linear combinations of $w_1$ and $w_2$:
$$\begin{pmatrix} w_1^* \\ w_2^* \end{pmatrix} = (\alpha_{ij})\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}. \tag{3.4}$$

The matrix (αij ) is called the continuation matrix, which must be invertible, as we can continue the solutions backwards. We now examine the eigenvalues of (αij ) to see what happens. We first consider the distinct eigenvalue case, say λ1 and λ2 . Therefore, w1∗ = σ λ1 w1 and w2∗ = λ2 w2 . We write λi = e2πıσi , and write wk (z) = (z − a) k vk (z). σk ∗ Then wk = λk (z − a) vk (z). Thus vk is single-valued around the circle, and therefore has at worst an isolated singularity at a, and so has a Laurent expansion. If the Laurent expansion terminates below then we can write σk

wk (z) = (z − a)

∞ X

n

cn,k (z − a)

n=0

(redefining σk if necessary). This is a Frobenius expansion and in this case we call z = a a regular singular point. If we have two identical eigenvalues, λ1 = λ2 = λ (say), there are two distinct cases. If we can diagonalise (αij ) then the results above hold. If we can’t diagonalise (αij ) then we can put (αij ) in a Jordan Normal Form  (αij ) =

λ 1

 0 . λ

Then the analysis for w1 is as before, and we look for w2 (z) = u(z)w1 (z). Then −1 u∗ w1∗ = (1 + λu) w1 . So we write u = λ2πı log (z − a)+s(z), so s(z) is single-valued and has a Laurent expansion. Putting all this together we get w2 =

λ−1 σ (z − a) 1 (v1 log (z − a) + v2 (z)) , 2πı

where v1 and v2 both have Laurent expansions.

3.1

Method of Frobenius

Theorem. If z = 0 is a singular point of (3.1) then it is a regular singular point (Laurent expansions terminate below) iff zp(z) and z 2 q(z) are both analytic at z = 0.

15

3.1. METHOD OF FROBENIUS

P The method of Frobenius is to propose an infinite series w(z) = n an z σ+n . We substitute this into (3.1) and take the coefficient of the lowest power of z. This is the indicial equation which determines two values of σ, σ1 and σ2 say. If σ1 − σ2 ∈ / Z then we have two Frobenius series for w. If σ1 = σ2 then we must insert a logarithm, ! ∞ ∞ X X w(z) = z σ (A + B log z) an z n + B bn z n . n=0

n=0

If have σ1 − σ2 a positive integer thenPthere is always a Frobenius series Pwe ∞ ∞ z σ1 n=0 an z n . Either (by some miracle), z σ2 n=0 bn z n is also a solution, or we need to insert a logarithm to get w2 (z) = z σ1 (A + B log z)

∞ X

an z n + z σ2

n=0

∞ X

bn z n .

n=0

The point at infinity We set ζ =

1 z

and examine what happens as ζ → 0. (3.1) becomes   d2 w 2 p dw 1 + − 2 + 4 qw = 0. dζ 2 ζ ζ dζ ζ

(3.5)

We then apply all our previous results to equation (3.5). We find that the point at infinity is a regular singular point if 2 − zp(z) and z 2 q(z) are regular at infinity.

3.1.1

Bessel’s Equation

We apply our theory to Bessel’s equation  1 w00 + w0 + 1 − z

ν2 z2



w = 0.

(3.6)

This arises frequently in cylindrical geometries. ν is a constant parameter. (3.6) has a regular singular point at z = 0 and an irregular singular point at infinity. The indicial equation (at z = 0) is σ 2 = ν 2 . We look for Frobenius series solutions of the form ∞ X w = zσ an z n , a0 6= 0. n=0

We get a recurrence for an , an n(n + 2σ) = −an−2 . We thus split our study of the form of the solutions of (3.6) according to ν. Case 1. 2ν ∈ / N. Then σ1 − σ2 = 2ν ∈ / N and we get two series solutions. The coefficients of the equation can be determined by the recurrence and, on choosing a0 appropriately, we get the standard Bessel function Jν (z). Jν (z) =

ν 1 2z

∞ X k=0

k − 41 z 2 k!Γ(k + ν + 1)

(3.7)

The series is clearly entire, the only finite singularity being the branch point at z = 0 from the z ν factor. We get the linearly independent solution J−ν by replacing ν with −ν in (3.7).

16

CHAPTER 3. SECOND ORDER LINEAR ODEs

Case 2. ν = 0. Then σ1 = σ2 = 0. We get the solution J0 (z) (ν = 0 in (3.7)) and a second solution with a logarithmic singularity at z = 0, w2 (z) = J0 (z) log z +

∞ X

bn z n .

n=0

Case 3. 2ν ∈ N. This has two subcases: Case 3a. ν ∈ N. Then we get Jn as before and a second logarithmic solution. Case 3b. 2ν is odd. This is one of the “black magic” cases referred to earlier. The first solution with σ = ν > 0 works to give Jν (it always works). The recurrence with σ = −ν could potentially go wrong, but it just jumps over the trouble and we get J−ν . If you wish to persuade yourself of this just take a specific case (say ν = 23 ) and play with it. The integer plus a half Bessel functions Jn+ 12 (z) are all expressible in terms of elementary functions, for instance J 12 (z) = √2πz sin z and J− 21 (z) = √2πz cos z.

3.2

Classification of equations by singularities

We will only consider second order equations with at most three regular singular points (including at infinity). This class, although it seems restrictive, in fact covers most of the differential equations of mathematical physics. It will be convenient to ensure that our singularities are in nice places. The M¨obius transform is useful, and we write it as z 7→

(z − α)(β − γ) . (β − α)(z − γ)

(3.8)

This maps α 7→ 0, β 7→ 1 and γ 7→ ∞, and this seems a good place to point out that our discussion will be on the complex sphere C∞ (or the complex plane with the point at infinity attached).

3.2.1

Equations with no regular singular points

This will be a rather brief discussion. Proposition. There are no second order linear differential equations with no regular singular points. Proof. Since there are no finite singularities p and q are entire. We must have p ∼ as z → ∞, so p is bounded and thus constant. We now have a contradiction.

3.2.2

2 z

Equations with one regular singular point

Without loss of generality we can assume that this point is at z = 0. Thus p = A(z) z and q = B(z) , with A(z) and B(z) entire functions. As the equation is regular at ∞ we 2 z must have A = 2 and B = 0. Thus the only second order linear differential equation with one regular singular point is w00 + z2 w0 = 0. This has a general solution w(z) =

α z

+ β.

(3.9)

17

3.2. CLASSIFICATION OF EQUATIONS BY SINGULARITIES

3.2.3

Equations with two regular singular points

WLOG we can put these singular points at 0 and ∞, so we must have p = A(z) and z B(z) 2 q = z2 with A and B entire. As z → ∞ we must have zp(z) and z q(z) bounded and thus A and B are bounded entire functions and therefore constant. Thus the most general equation with two regular singular points at 0 and ∞ is z 2 w00 + Azw0 + Bw = 0.

(3.10)

This is a homogenous equation and so ( αz σ1 + βz σ2 σ1 6= σ2 w(z) = z σ (α + β log z) σ1 = σ2 = σ. We can work backwards to find A and B in terms of σ1 and σ2 ; we find A = 1 − σ1 − σ2 and B = σ1 σ2 . Confluence of singularities We map z 7→ z + α and let α → ∞ and define λi = ασi . Then we get the equation w00 + (λ1 + λ2 )w0 + λ1 λ2 w = 0,

(3.11)

which has the general solution w(z) = αeλ1 z + βeλ2 z . (3.11) has an irregular singular point at infinity and the solution has an essential singularity there.

3.2.4

Equations with three regular singular points

Or, a User’s Guide to the hypergeometric equation. This section is only vaguely on the edge of the Schedules. We can assume that the singular points are at 0, 1 and ∞, and let the indices at 0 be 0, 1 − c, at 1 be 0, c − a − b and at infinity be a, b respectively. We get the hypergeometric equation z(z − 1)w00 + [(a + b − 1) z + c] w0 + abw = 0.

(3.12)

There is one solution which is regular at the origin. It is written F (a, b, c; z) and F (a, b, c; z) = 1 +

ab a(a + 1)b(b + 1) z 2 z+ + .... c c(c + 1) 2!

(3.13)

This series is convergent for |z| < 1. The other solution is z 1−c F (1 + a − c, 1 + b − c, 2 − c; z) if c ∈ / Z. If c ∈ Z the second solution is logarithmic. Transformations to hypergeometric form Suppose we have a second order linear differential equation of the form w00 + pw0 + qw = 0 with three regular singular points at z = A, z = B and z = C. We transform this into hypergeometric form by applying a M¨obius transform on the independent variable taking (A, B, C) 7→ (0, 1, ∞) to put the singularities in the right place and applying a transform of the form wold = (z − A)ξ (z − B)η (z − C)ζ wnew on the

18

CHAPTER 3. SECOND ORDER LINEAR ODEs

dependent variable to give the correct indices at the singularities. We illustrate this with an example, Legendre’s equation: m2 (1 − z )w − 2zw + n(n + 1) − 1 − z2 2

00



0

 = 0,

(3.14)

which has regular singularities at ±1 and ∞. The indices at ±1 are ± m 2 and the indices at infinity are −n and n + 1. The transform m

m

w(z) = (1 − z) 2 (1 + z) 2 f (z) gives f indices of 0, −m at z = ±1 and m − n, m + n + 1 at z = ∞. and we also put ζ = 1−z 2 to move the singularities at ±1 to 0 and 1 respectively. The coefficients a, b and c of the hypergeometric equation are therefore a = m − n, b = m + n + 1 and c = m + 1 and so w(z) = 1 − z 2

 m2

F (m − n, m + n + 1, m + 1; 1−z 2 )

is a solution of (3.14). Integral representation The point of departure is the series for F (a, b, c; z), (3.13). We have F (a, b, c; z) =

∞ X Γ(k + a) Γ(k + b) k=0

Γ(a)

Γ(b)

Γ(c) z k Γ(k + c) k!



=

X Γ(k + a) Γ(k + b)Γ(c − b) z k Γ(c) Γ(b)Γ(c − b) Γ(a) Γ(k + c) k!

Γ(k + a) B(b + k, c − b) Γ(a) k=0 Z ∞ X Γ(c) z k Γ(k + a) 1 b+k−1 = t (1 − t)c−b−1 dt Γ(b)Γ(c − b) k! Γ(a) 0 k=0 Z 1 Γ(c) = tb−1 (1 − t)c−b−1 (1 − tz)−a dt. Γ(b)Γ(c − b) 0 =

Γ(c) Γ(b)Γ(c − b)

k=0 ∞ X

We thus get the final result

F (a, b, c; z) =

Γ(c) Γ(b)Γ(c − b)

Z

1

tb−1 (1 − t)c−b−1 (1 − tz)−a dt.

(3.15)

0

Confluent hypergeometric equation We move the singularity at z = 1 to z = b using z 7→ bz and then let b → ∞. We get the confluent hypergeometric equation zw00 + (c − z)w0 − aw = 0.

(3.16)

19

3.3. INTEGRAL REPRESENTATION OF SOLUTIONS

This has a regular singular point at z = 0 with indices 0 and 1 − c and an irregular singular point at infinity. The regular solution is a a(a + 1) z 2 Φ(a, c; z) = 1 + z + + ... c c(c + 1) 2!

(3.17)

and the other solution is z 1−c Φ(1+a−c, 2−c; z) if c ∈ / Z. The series representation (3.17) is entire. If a = c we get Φ(a, a; z) = ez . If −a ∈ N the series terminates to give the Laguerre polynomials. Hermite’s equation w00 − 2zw0 + 2nw = 0

(3.18)

1 2

3 2 has solutions Φ(− 12 n, − 12 ; z 2 ) and z Φ( 1−n 2 , − 2 ; z ). After some work we can get the Bessel functions

Jν (z) ∝ z ν e−ız Φ(ν + 12 , 2ν + 1; 2ız).

(3.19)

Triple confluence 2πı

This can be done in a symmetric way by placing the singularities at K, Ke 3 and 4πı 3 Ke 3 with indices 16 ± 13 K 2 and letting K → ∞. This results in Airy’s equation w00 − zw = 0,

(3.20)

which has no singularities in the finite complex plane but a really evil singularity at infinity. 3 1 If we let ζ = 23 z 2 and define W (ζ) = z − 2 w(z) we get   1 0 1 00 W + W + 1 − 2 W = 0. 3 9ζ 1

3

2 This is Bessel’s equation (3.6) for W (ıζ), so we get w(z) = z 2 J± 13 ( 2ı 3 z ) as a solution of (3.20).

3.3

Integral representation of solutions

In general, look for a solution of the form Z w(z) = K(z, t)f (t) dt,

(3.21)

Γ

where we have freedom to choose K, f and Γ so as to satisfy the differential equations. K(z, t) is known as the kernel. Some (famous?) kernels are: 1. Laplace kernel: K(z, t) = ezt . This is used in Laplace transforms in the form e−zt with Γ = R+ . 2. Fourier kernel: K(z, t) = eızt . This is used in Fourier transforms with Γ = R. 3. Euler kernel: (t − z)µ . 4. Mellin kernel: t−z .

20

CHAPTER 3. SECOND ORDER LINEAR ODEs

The Laplace kernel and Fourier kernel amount to the same thing, the choice between them just influences Γ. We will only examine this kernel. Use is best illustrated by example. Consider Airy’s equation (3.20), and look for a solution of the form Z w(z) = ezt f (t) dt, Γ

where Γ and f are to be determined. We substitute into (3.20) to get Z 0 = (t2 − z)ezt f (t) dt ZΓ    = integrating by parts. t2 f (t) + f 0 (t) ezt dt − ezt f (t) Γ Γ

So if by hook or by crook we can find f such that the integrand is zero and [ezt f (t)]Γ = 0 we have found a solution to (3.20). We choose f such that f 0 +t2 f = 0, t3

which gives f (t) = Ae− 3 and Z w(z) =

t3

ezt e− 3 dt.

(3.22)

Γ

h i t3 We now have to choose Γ such that ezt e− 3 = 0. As we are dealing with an Γ analytic function this is true on any closed Γ, but in this case (3.22) gives the true but not-very-useful solution w(z) ≡ 0 (by Cauchy’s theorem). We can get this if we integrate over an infinite range and the integrand tends to zero at infinity. This happens iff − π2 < arg t3 < π2 , which is in the shaded regions of the t-plane.

Contours starting and ending in the same sector give w ≡ 0, so we have three choices of contour, Γ1 , Γ2 and Γ3 . We note that Z Z Z = − Γ3

Γ1

Γ2

and so we only have two linearly independent solutions. One choice is Z t3 w1,2 = ezt e− 3 dt. Γ1,2

Although this seems on the face of it to not be overly helpful we will see later that this can be approximately evaluated when |z|  1, which is usually the physical case we are interested in.

21

3.3. INTEGRAL REPRESENTATION OF SOLUTIONS

For another example we try the confluent hypergeometric equation (3.16). We try as before Z w(z) = ezt f (t) dt Γ

and find that this works if  zt 2  e (t − t)f (t) Γ −

Z  Γ

d  2 (t − t)f − ctf + af dt



ezt dt = 0.

As before we choose f (t) to make the integrand zero, which gives f (t) = (t − 1)c−a−1 ta−1 and then choose Γ to make [ezt ta (1 − t)c−a ]Γ = 0. Choosing Γ depends on the particular ranges of a, c and a − c, and given a range of a, c and a − c it is not difficult to find Γ such that [ezt ta (1 − t)c−a ]Γ = 0 and the integral for w(z) does not give the trivial zero solution.

22

CHAPTER 3. SECOND ORDER LINEAR ODEs

Chapter 4

Asymptotic Expansions 4.1

Motivation

We will motivate this discussion with an example. Suppose we wish to evaluate the error function Z z 2 2 erf z = √ e−s ds. (4.1) π 0 This occurs throughout statistics and mathematical physics, for instance as a solu∂2T x √ tion to the diffusion equation ∂T ∂t = κ ∂x2 (put z = 2 κt to get an ODE for T (z)). One 2

naive approach is to expand e−s as an infinite sum and integrate termwise, which is 2 certainly analytically permissible. As e−s is entire then the series we obtain for erf z will have an infinite radius of convergence. The series is erf z =

√2 z π

1 − 13 z 2 +

1 4 10 z



1 6 42 z

 + ... .

(4.2)

If we evaluate this at z = 1 we need eight terms to get an accuracy of 10−5 . If we evaluate at z = 2 we need 16 terms and if we evaluate at z = 5 we need 75 terms. Although the series is convergent, the terms of the series get quite large before eventually tending to zero. At z = 5 the largest term is approximately 7 × 108 , so although a computer (say) can perform the sum of 75 terms in no time at all it will converge to something which is incorrect even in the first significant digit. For large |z| a better approach is to obtain an asymptotic expansion for erf z. We know that erf z → 1 as z → ∞ and so we write Z ∞ 2 2 erf z = 1 − √ e−s ds π z Z ∞ 2 1 2 =1− √ se−s ds s π z Z ∞ −s2 2 1 e−z 1 e +√ =1− √ ds integrating by parts. s2 π z π z We can continue this to get the asymptotic series for erf z 2

e−z erf z = 1 − √ z π



 1 1×3 1×3×5 − +R . 1− 2 + 2z (2z 2 )2 (2z 2 )3 23

(4.3)

24

CHAPTER 4. ASYMPTOTIC EXPANSIONS

If we apply the ratio test we see that this series is convergent nowhere. However, if we consider the remainder term R we see that Z ∞ 2 105 te−t R= dt 16 t9 z 105 e−z ≤ 32 z 9

2

and so the remainder term tends to zero very rapidly as z → ∞. At z = 2.5 only three terms of the series are needed for an accuracy of 10−5 and at z = 3 two terms will do. The truncated series is an asymptotic expansion of erf z as z → ∞. Note that the Taylor expansion is an asymptotic expansion of erf z as z → 0.

4.2

Definitions and examples

PN Definition. The sum n=1 fn (z) is an asymptotic expansion of f (z) in the limit z → ∞ if ∀M ≤ N we have PM f (z) − n=1 fn (z) → 0 as z → ∞. fM (z) We can state this informally as “the remainder is smaller than the last included term” and a similiar definition holds in any limit z → c. The property of asymptoticness may depend on arg(z − c). Definition. A sequence of function {φn (z)}∞ n=0 is an asymptotic sequence as z → c φn+1 (z) in some sector if ∀n, φn (z) → 0 as z → c in that sector. For instance φn (z) = z −n is asymptotic as z → ∞ with any argument. φn (z) = n e is asymptotic as z → ∞ for − π2 < arg z < π2 . φn (z) = (sin z) is asymptotic as z → 0 for any argument. −nz

Definition. If for a given asymptotic sequence {φn (z)} there exist constants {an } PN such that for all n, f (z) = 0 an φn (z) = o(φN (z)) as z → c in some sector then we write ∞ X an φn (z) as z → c. f (z) ∼ 0

These infinite asymptotic expansions go against the spirit of their use, but they are conceptually useful as they allow us to show f (z) =

N X

an φn (z) + aN +1 φN +1 (z) + o(φN +1 ) =

n=0

N X

an φn (z) + O(φN +1 ).

n=0

For a given asymptotic sequence {φn } the coefficients an are unique and can be found recursively from aN = lim

z→c

f (z) −

PN −1 0

φN

an φn

,

25

4.3. STOKES PHENOMENON

remembering that possibly z → c in some sector. A given function can have different asymptotic expansions in terms of different asymptotic sequences: tan z ∼ z + 13 z 3 + ∼ sin z +

1 2

2 5 15 z 3

(sin z) +

3 8

5

(sin z) ,

both as z → 0 for any arg z.

4.2.1

Manipulations

Asymptotic expansions can be naively added, subtracted, multiplied and divided to form new asymptotic expansions, but perhaps in terms of a new asymptotic series. The size of terms must be checked. Obviously, if we have an asymptotic expansion of f1 about c confined to a sector S1 and an asymptotic expansion of f2 about c confined to a sector S2 then the asymptotic expansion obtained by multiplication is only valid in the sector S1 ∩ S2 . Asymptotic expansions can be integrated termwise but cannot in general be differentiated. However if f (z) is analytic in a sector and differentiable in the sector at c (some kind of one-sided limit) then the asymptotic expansion can be differentiated termwise in that sector.

4.3

Stokes Phenomenon

P∞ Suppose f (z) ∼ N an z −n as z → ∞ for all arg z. Let f be analytic in Pa∞punctured neighbourhood of infinity. Then f has a (convergent) Laurent expansion −∞ bn z −n . This is asymptotic so by uniqueness bn = 0 for n < N and an = bn for n ≥ N and the asymptotic expansion is convergent. Conversely if the asymptotic expansion is divergent then it cannot be valid for all arg z. Divergent asymptotic expansions are associated with essential singularities of f . −z 2

For instance we have seen that for real positive z we have erf z ∼ 1 − ez√π as z → ∞. R∞ 2 In deriving this we considered z e−s ds, and for more general arg z this integral can 2 be shifted onto the original contour provided we’re in the sector such that e−s → 0 as s → ∞; thus the asymptotic expansion is valid for − π4 < arg z < π4 . Noting that erf −z 2

is an odd function of z we see that erf z ∼ −1 − ez√π as z → ∞ for

3π 4

< arg z <

−z 2

5π 4 .

An alternative method gives that erf z ∼ − ez√π as z → ∞ for π4 < arg z < 3π 4 7π and 5π < arg z < . The lines separating these sectors are called Stokes lines; 4 4 asymptotic expansions have jump discontinuities across Stokes lines.

4.4 4.4.1

Asymptotic Approximation of Integrals Integration by parts

This is used if the independent variable is in a limit of the integral. For instance, consider the exponential integral (with z real and positive at first)

26

CHAPTER 4. ASYMPTOTIC EXPANSIONS



e−s ds s z  −s ∞ Z ∞ −s e e = − ds − s z s2 z   Z ∞ −s e−z 1 e = ds 1− +2 z z s3 z  −z  We can bound the remainder term with z23 e−z = o ez2 and so   e−z 1 E1 (z) ∼ 1− as z → ∞ for positive real z. z z Z

E1 (z) =

For complex z we can see that the above method works if e−z → 0 as z → ∞, that is for 0. The result is in fact true for all arguments of z, but we need another method to cope with that. As another example we will find asymptotic expansions as z → ∞ for the Fresnel integrals (at first for positive real z) Z z Z z c(z) = cos t2 dt s(z) = sin t2 dt. 0

0

We will consider Z f (z) =

z

2

eıt dt

Z0 ∞ =

ıt2

e



Z

0

2

eıt dt.

dt − z



The first of these integrals can be done as a standard contour integral to give As for the second: Z



2

eıt dt =

z

Z

π ıπ 4 2 e .

2



2ıteıt dt 2ıt #∞ 2

"z eıt = 2ıt

+ O(z −2 ) z

ız 2

=−

e + O(z −2 ). 2ız

The evaluation of this second integral carries over to negative real z. We need to change the first integral and we get √

2

π ıπ eız f (z) ∼ ± e 4 + 2 2ız

as z → ±∞ ∈ R. 2

We ask if we can extend these results into more of C. The key point is the eız term, which must decay as z → ∞. This restricts the series to the regions 0 ≤ arg z ≤ π2 and π ≤ arg z ≤ 3π 2 respectively.

4.4. ASYMPTOTIC APPROXIMATION OF INTEGRALS

27

In the quadrants with exponential growth the exponential term of the series dom2

inates and we get f (z) ∼ Stokes lines.

4.4.2

eız 2ız

as z → ∞. The real and imaginary axes are clearly

Watson’s Lemma

This applies to integrals of the form Z

A

e−zt g(t) dt

(4.4)

0

and relies on the fact that (in this case), e−zt decays rapidly as |z| → ∞ with 0. The integral is dominated by a neighbourhood of t = 0. Lemma (Watson’s Lemma). Suppose g(t) has an asymptotic expansion in a sector S, g(t) ∼ a0 tα0 + · · · + an tαn as t → 0 with α0 > −1. Then the integral (4.4) can be evaluated termwise and Z A Z “∞” e−zt g(t) dt ∼ e−zt (a0 tα0 + · · · + an tαn ) dt 0

0



n X

ak z −αk −1 Γ(αk + 1)

0

as z → ∞ with z

−1

and A in S.

We do not prove this but give examples of its use. Consider (again) the exponential integral Z z



e−s ds = e−z s ∼ e−z

Z



Z0 ∞

e−zt dt t+1

 e−zt 1 − t + t2 + . . . dt 0  1 −z ∼e 1− z

as z → ∞, and as the Taylor series for (1 + t)−1 we used is asymptotic as t → 0 for any argument the end result we get for E1 is valid for any argument. Using similar artifice we can do the same sort of thing for the error function, although it is easier to work with the complementary error function erfc z = 1 − erf z.

4.4.3

Laplace’s Method

This applies to integrals of the form Z

β

I(x) =

g(t)exh(t) dt

(4.5)

α

in the asymptotic limit x → ∞ (which is understood to mean x  1). The integrand is largest where h has its maximum. If the maximum is at an endpoint (say at

28

CHAPTER 4. ASYMPTOTIC EXPANSIONS

α) with h0 (α) < 0 then by expanding h(t) = h(α) + (t − α)h0 (α) + O(t − α)2 and expanding g(t) = g(α) + O(t − α) we get I(x) ∼ −

exh(α) g(α) . xh0 (α)

The remainder term is O(x−2 ) and so this an asymptotic expansion. We get the leading order term easily but the higher order terms are unpleasant. If there is an interior maximum (at t0 ) then only the highest maximum contributes to the leading order term. Then h(t) = h0 + 21 h2 (t − t0 )2 + O(t − t0 )3 with h2 < 0. Now

I = exh0

Z

β

ex( 2 h2 (t−t0 ) 1

2

+O(t−t0 )3 )

g(t) dt

α

Z

“∞”

x

e 2 h2 u



2

  1 + O(u4 x) g0 + O(u2 ) du

−“∞”

∼ g0 eh0 x



2π −h2 x

 12

3

+ g0 eh0 x O(x− 2 ).

Note that the O(u3 x) and O(ux) terms are lost (integrating an odd function). gives us an easy way to derive Stirling’s formula for x!. Recall that x! = R ∞ This x −t t e dt. Put t = xτ to get 0 ∞

Z

ex(log x+log τ ) e−xτ x dτ Z ∞ x = xx ex(log τ −τ ) dτ.

x! =

0

0

The maximum of log τ − τ is at τ = 1 and we apply the formula developed above √ to get x! ∼ 2πxxx e−x as x → ∞. A harder example (which comes from scattering cross-sections in fusion reactions) is    1 Z ∞ t a 2 − dt. I(a, b) = exp − t b 0 The integrand is peaked at



ab2 4

 13

. The integral is locally dominated by this re1 1 gion. We pick up the major contribution by rescaling t = ab2 3 τ . We put x = ab 3 and let this tend to infinity. This gives Z ∞ “ ” 1 1 2 −x τ − 2 +τ I(a, b) = a 3 b 3 e dτ 0

and is now in a suitable form for the application of Laplace’s method. Applying this gives (eventually)  I(a, b) ∼

16π 3 ab5 27

 61

 exp −

27a 4b

 31 .

29

4.4. ASYMPTOTIC APPROXIMATION OF INTEGRALS

4.4.4

The method of stationary phase

We will need the Riemann-Lebesgue lemma, which is stated but not proved. Lemma (Riemann-Lebesgue). If f (t) is continuous in [a, b] then b

Z

f (t)eıxt dt → 0 as x → ∞.

a

We can see intuitively that when x becomes large the exponential term is oscillating faster and faster and getting more and more cancellation. The method of stationary phase applies to integrals of the form Z b f (x) = eıxh(t) g(t) dt x  1, a

where x, g and h are all real. First we note that if h is strictly monotonic in some subinterval [α, β] then Z

β

eıxh(t) g(t) dt =

β

Z

α

eıxh

α

g(t(h)) dh h0 (t(h))

which tends to 0 as x → ∞ (by the Riemann-Lebesgue lemma). This change of variables also suggests that the dominant contributions to f are when h0 (t) = 0. Near a stationary point t0 of h we expand h(t) = h0 + 21 (t − t0 )2 h2 + O(t − t0 )3 and then Z

“∞”

eıh0 x g(t0 )eıxτ

f (x) ∼

2

h2



−“∞” ıh0 x

“∞”

Z



∼ g(t0 )e

exp −“∞”



2π ∼ xh00 (t0 )

 12

1 ıh2 xτ 2 2

 dτ

π

g(t0 )eıxh(t0 )+ı 4 .

As an example we will consider the Airy function for large negative x.   Z 1 ∞ 1 3 Ai(−x) = cos ω − xω dω. π 0 3 This is most easily approached with Z ∞ “ 3 ” ı ω −xω I(x) = e 3 dω. 0

Let ω =



3 2

xt and λ = x . We get I(x) =



Z x



“ 3 ” ıλ t3 −t

e

dt,

0

which is now in the correct form for the method of stationary phase. Applying the theory gives    3 − 12 2 3 π 2 2 cos x − as x → ∞. Ai(−x) ∼ πx 3 4

30

CHAPTER 4. ASYMPTOTIC EXPANSIONS

Another (more physical) example is that of group velocity. Many waves are dispersive — different wavelengths travel at different speeds. If the waves are linear they can be superposed and Fourier analysis used to obtain Z ∞ f (x, t) = F (k)eı(kx−ω(k)t) dk (4.6) −∞

given an initial disturbance Z



f (x, 0) =

F (k)eıkx dk.

−∞

Suppose the initial disturbance is compact to |x| < a. What does one see at large distances from the initial disturbance? We also need t to be large, else no waves reach x. The method of stationary phase gives the asymptotic behaviour of (4.6) as dominated ∂ (kx − ω(k)t) = 0, or in other words xt = ∂ω by the point where ∂k ∂k . For a given large ∂ω x x, t find k0 such that ∂k k0 = t . Then the dominant waveform is F (k0 )eı(k0 x−ω(k0 )t) . asymptotically Conversely, given k0 then an observer moving with speed ∂ω ∂k k0 ∂ω sees waves of wavenumber k0 . This speed ∂k k0 is called the group velocity and is the physical quantity: the speed√at which energy is transferred. p For water waves ω = gk and along a ray xt = cg = 12 kg . The method of stationary phase gives √ π 1 1 −3 f (x, t) ∼ 2 2πk0 4 t− 2 g − 4 F (k0 )eı(k0 x−ω(k0 )t+ 4 ) .

4.4.5

Method of Steepest Descents

In this section we generalise the method of stationary phase (or equivalently Laplace’s method) into the complex plane. We consider integrals of the form Z f (z) = g(ζ)ezh(ζ) dζ γ

with h and g analytic on γ. To begin with we consider z real and positive and split h into real and imaginary parts, h = u + ıv. We cannot naively apply the previous methods as if the maximum of u(z) on γ is not a saddle point of h then v is varying monotonically and in the limit a large amount of cancellation occurs. We can deform γ into γ 0 to pass through a saddle point of h along a contour of v. In this case u has its maximum on γ 0 at the saddle point, so the integral is locally dominated and v is stationary so no rapid cancellation occurs. By letting γ 0 be tangent to a contour of v at the saddle point we get u decreasing most rapidly either side of the saddle and ezh is most strongly peaked. This is the method of steepest descents. If only the leading order term is required then any descending path through the saddle point will do — this is the saddle point method. As an example we consider the Airy function Z 1 3 1 Ai(z) = ezt− 3 t dt 2πı C1 1

with z real and positive. We write t = z 2 τ and then ” 1 Z 3“ 3 z2 z 2 τ − τ3 Ai(z) = e dτ. 2πı C1

31

4.4. ASYMPTOTIC APPROXIMATION OF INTEGRALS We now consider the integral Z

“ ” 3 x τ − τ3

f (x) =

e



C1 3

3

for x = z 2  1 and h(τ ) = τ − τ3 , which has saddles at τ = ±1. Although τ = 1 is the higher saddle we will go through τ = −1 so that
Z



2 2 ex(− 3 −ζ ) ıdζ −∞ Z ∞ 2 − 32 x e−xζ dζ ∼ ıe −∞ r π −2x ∼ı e 3 . x

f (x) ∼

1

3

2 z− √4 e− 3 z 2 2 π

Putting all this together Ai(z) ∼

.

We also want to know how far we can generalise this result for different values of arg z. Suppose arg z = α; this causes rotation of the steepest descent paths of < (zh(τ )) by α. For small α this doesn’t matter — we get the same asymptotic expansion. For larger α other saddles come into view. As the steepest descent path jumps from one saddle to another we get Stokes’ phenomenon. For the Airy function the asymptotic expansion we found is valid for |arg z| < π.

The Hankel functions Hν(1,2) =

The Bessel function Jν (z) =

1 2

1 πı



Z

ez sinh t−νt dt.

C1,2

 (1) (2) Hν (z) + Hν (z) . We will seek an asymptotic

(1)

−νt approximation for Hν (z), and use the method of steepest descent with  g(t) = e 1 and h(t) = sinh t. This has saddles where cosh t = 0, or t = n + 2 ıπ. We deform C1 to go through the saddle at t0 = ıπ 2 .

g(t0 ) = e−ı

πν 2

2

, h(t0 ) = ı and h(t0 ) = ı. Then h ∼ ı + 2ı (t − t0 ) and put π steepest descent has t−t0 = reıθ . Then h ∼ ı− 21 r2 eı(2θ− 2 ) . It is clear that the path of   ı(2θ− π π π ) 2 θ = 4 . However, any θ such that 0 < θ < 2 , since in this range < e > 0. If π we put 2α = 2θ − 2 we have

32

CHAPTER 4. ASYMPTOTIC EXPANSIONS

Hν(1) (z) ∼

1 −ı πν ız e 2 e πı

Z

1

e− 2 zr

2 2ıα

e

eıθ dr

 1 1 −ı πν ız 2π 2 −ıα ıθ e e ∼ e 2 e πı z   12 πν π 2 ∼ eı(z− 2 − 4 ) πz

4.5

Liouville-Green Functions

This area has a number of names associated with it; it is usually called WKB theory1 . We return to equations of the form w00 + p(z)w0 + q(z)w = 0 and by putting R − 21 z p(s)ds we convert into the standard form w00 + q(z)w = 0. w(z) = W (z)e If q is a constant then we can write down a solution of this equation; w(z) = Aeıθ 1 where θ = q 2 z. When q > 0 solutions are oscillatory with wavelength proportional to − 12 q . The WKBJ method can be applied to problems in which q varies slowly, that is ∆q q 1

is small when ∆z = O(q − 2 ), or alternatively “the fractional change in q is small over This derivation is starred — one wavelength”. dq the final result isn’t. Take   1 and q = q(z) so that dz = q 0 = O(). We expect solutions with a slowly varying amplitude a = a(z) and a slowly varying phase θ = −1 φ(z). The factor −1 ensures the wavelength is O(1) to leading order. Then eıθ ∼ eı( ∼e

−1

ıφ(0) 

φ(0)+φ0 (0)z+ 12 φ00 (0)z 2 +... ) 0

eıφ (0)z .

We propose a solution w(z) = a(z)e tion we get

ıφ(z) 

. Substituting into the governing equa-

−aφ02 + qa = 0 2a0 φ0 + aφ00 = 0.

O(1) : O() :

Rz 1 1 The O(1) equation gives φ = ± q 2 dz and the O() equation gives a = q − 4 (integrating and using the O(1) equation). We have the Liouville-Green approximate solutions to the differential equation:   Z z   Z z  1 1 − 14 2 2 w∼q A exp ı q dz + B exp −ı q dz . (4.7) If q < 0 a more convenient form is − 14

w ∼ (−q)

 Z a exp

1 and in Cambridge, WKBJ theory.

z

1

(−q) 2 dz



Z + b exp

z

1

(−q) 2 dz

 .

(4.8)

W, K, B and J are Wentzel, Kramers, Brillouin and Jeffrey respectively.

33

4.5. LIOUVILLE-GREEN FUNCTIONS

These give asymptotic solutions for large z under certain conditions: n n Suppose q ∼ z n as z → ∞. Then a ∼ z − 4 and φ ∼ z 2 +1 . Recalling the derivation we see that we can neglect the term corresponding to the 2 term provided a00  a0 φ0 as z → ∞, or substituting in we get n > −2. This is the converse of the condition for the point at ∞ to be a regular singular point. The WKBJ method thus works when the point at infinity is an irregular singular point. As an example we consider Airy’s equation w00 − zw = 0 (again). Thus q = −z 1 R √ 3 1 q dz = ±ı 23 z 2 and so and q 2 = ±ız 2 .  3 1 w(z) ∼ z − 4 exp ±z 2   1 cos 2 32 4 ∼ |z| |z| sin 3

z → +∞ z → −∞.

We do not get the constants from this method.

4.5.1

Connection formulae

These Liouville-Green functions (4.7) and (4.8) work well where q > 0 or q < 0 but do not work where q passes through zero, at which points the frequency is zero (q is no longer slowly varying on scales of the wavelength, which becomes infinite) and the amplitude is infinite. Points at which q = 0 are called turning points and the equation. WLOG consider q < 0 for z < 0 and q > 0 for z > 0

In regions 1 and 3 we have the Liouville-Green solutions

w(z) ∼

1 1 4

 φ  Ae + Be−φ

(−q) 1 w(z) ∼ 1 [a cos θ + b sin θ] q4

Z

z

φ(z) =

1

(−q) 2 dz Z

θ(z) =

z

1

q 2 dz.

We have four unknown constants. We get two equations from the boundary conditions at ±∞ and two others from the connection formulae across region 2. As z → 0, q(z) ∼ q1 z (since q(0) = 0) and the above expansions become w∼ w∼

 φ  Ae + Be−φ

φ(z) =

3 2 12 q (−z) 2 3 1

[a cos θ + b sin θ]

θ(z) =

2 12 3 q z2, 3 1

1 (−q1 z) 1 (q1 z)

1 4

1 4

(4.9) (4.10)

valid for z → 0− in region 1 and z → 0+ in region 3 respectively. They need to be matched across the intermediate region 2. In this region we approximate the 1 differential equation as w00 + q1 zw ∼ 0 and on letting τ = − (q1 ) 3 z we get Airy’s

34

CHAPTER 4. ASYMPTOTIC EXPANSIONS

equation w00 − τ w ∼ 0. This has solutions w ∼ αAi(τ ) + βBi(τ ). We can use steepest descents (for instance) to find asymptotic expressions for this inner solution as z → ±“∞”. As z → −“∞”, τ → +∞ and we get the asymptotic expression   1 φ 1 −φ w∼ √ 1 αe + βe . (4.11) 1 πq112 (−z) 4 2 As z → “∞” we have τ → −∞ and w∼ √

1 1

1

2πq 12 z 4

{(α − β) sin θ + (α + β) cos θ} .

(4.12)

We now need to match coefficients between (4.11) and (4.9); we get

α= β=

√ 2 π 1

q6 √1 π 1

A

B.

q16 Doing the same thing with (4.12) and (4.10) we get 1

q6 a = √1 (α + β) 2π 1

q6 b = √1 (α − β) . 2π Eliminating α and β we obtain the connection formulae a+b √ 2 2 a−b B= √ . 2 A=

(4.13)

As an example of the use of the connection formulae we seek to find approximate  energy eigenvalues for the non-dimensional Schr¨odinger equation ψ 00 + E − z 2 ψ = 0 with the boundary conditions ψ → 0 as z → ±∞ (quantum harmonic oscillator). In 1 particular we will consider E  1. We see that there are oscillations of frequency E 2 1 (and so the wavelength is proportional to E − 2 ). q = E − z 2 is varying on a scale of 1 E 2 and so is slowly varying on the scale of the oscillations. We can therefore apply WKBJ theory and the connection formulae. 1 1 There are turning points at ±E 2 and exponentially decaying solutions in |z| > E 2 . 1 In z < −E 2 we have √ B = 0 and WLOG A = 1. The connection formulae at 1 1 z = −E 2 give a = b = 2 and so in |z| < E 2 we have ψ∼ ∼

√

1 1

(E − z 2 ) 4 2

2 sin θ +

 π sin θ + 4 (E − z 2 ) 1 4



2 cos θ



35

4.5. LIOUVILLE-GREEN FUNCTIONS

1 Rz where θ = −√E E − z 2 2 dz (the lower limit puts the origin at 0 for the con√ nection formulae). Near z = E we write √

ψ∼ ∼

2 1

q4 2 1

q4

Z sin

E

√ − E

√ 1 2

Z

q dz − z

E

π q dz + 4

!

1 2

{sin α cos θ0 + cos α sin θ0 } ,

√ Rz R √E 1 1 where α = −√E q 2 dz + π4 and θ0 = √E q 2 dz. This moves the origin to z = E and we can√ now apply the connection formulae (4.13). In z > E we have A0 = 0 (for exponential decay) and so ψ = B 0 e−φ . Therefore we obtain B0 B0 2 cos α = √ 2 sin α = − √ . 2 2 This implies that tan α = −1, or that α = nπ − π4 . We can easily do the integral for α, and we obtain En = n − 21 for n ∈ N. We have (coincidentally) obtained the exact eigenvalues, and it is clear that this method can be used to find approximate eigenvalues of more complicated potentials.

36

CHAPTER 4. ASYMPTOTIC EXPANSIONS

Chapter 5

Laplace Transforms 5.1

Definition and simple properties

The Laplace transform of a function f (t) is defined by Z F (p) = L [f (t)] =



e−pt f (t) dt.

(5.1)

0

The variable p may be complex but we must have <(p) > γ where γ is sufficiently large to permit convergence. A greater class of functions have Laplace transforms than have Fourier transforms, due to the exponential attenuation at large t. Since the integral’s range is [0, ∞) we lose all knowledge of the function for t < 0 and the inversion of L [f (t)] is H(t)f (t). The following properties are both trivial to prove and very useful in both evaluating and inverting Laplace transforms. • L [λf + µg] = λL [f ] + µL [g] • shifting: L [eat f (t)] = F (p − a) • L [H(t − a)f (t − a)] = e−ap F (p) • change of scale: L [f (αt)] = • L

h

• L

hR

df dt t 0

i

p 1 αF(α)

= −f (0) + pF (p)

i f (u) du =

F (p) p n

d • L [tn f (t)] = (−1)n dp n F (p)

These properties often give the best way to calculate Laplace transforms and to guess inverses. We can now, starting from L [1] = p1 obtain Laplace transforms for a reasonably useful class of functions. 37

    f           L[f]
    H(t − α)    e^{−αp}/p
    t^n         n!/p^{n+1}
    e^{αt}      1/(p − α)
    cos αt      p/(p^2 + α^2)
    sin αt      α/(p^2 + α^2)
    cosh αt     p/(p^2 − α^2)
    sinh αt     α/(p^2 − α^2)

Table 5.1: Simple Laplace transforms
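The entries of Table 5.1 can be checked mechanically if a computer algebra system is to hand. A minimal sketch, assuming SymPy; the e^{αt} row is checked with a negative exponent so that the defining integral converges for every positive p, and n = 3 stands in for general n.

```python
# Symbolic check of a few entries of Table 5.1 (sketch; assumes SymPy).
import sympy as sp

t, p, alpha = sp.symbols('t p alpha', positive=True)

examples = [
    (sp.exp(-alpha * t), 1 / (p + alpha)),      # e^{alpha t} row with alpha -> -alpha
    (sp.cos(alpha * t), p / (p**2 + alpha**2)),
    (sp.sin(alpha * t), alpha / (p**2 + alpha**2)),
    (t**3, sp.factorial(3) / p**4),             # t^n row with n = 3
]
for f, expected in examples:
    F = sp.laplace_transform(f, t, p, noconds=True)
    assert sp.simplify(F - expected) == 0, (f, F)
    print(f, '->', F)
```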

5.1.1 Asymptotic Limits

Using Watson's Lemma as p → ∞ we get

    F(p) ∼ Σ_{n=0}^{∞} f^{(n)}(0) / p^{n+1}

and so lim_{p→∞} pF(p) = f(0). From the properties of the Laplace transform above we have that

    pF(p) = f(0) + ∫_0^∞ (df/dt) e^{−pt} dt

and so letting p → 0 we get lim_{p→0} pF(p) = lim_{t→∞} f(t) (if both limits exist). This raises the obvious question: how do we know that both limits exist? If all the singularities of F(p) lie in {z ∈ ℂ : ℜ(z) < 0} then f(t) → 0 as t → ∞ and both limits exist.
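A quick symbolic illustration of the two limits, assuming SymPy: take f(t) = 1 − e^{−t}, so that f(0) = 0 and f(t) → 1 as t → ∞.

```python
# Initial- and final-value limits for f(t) = 1 - exp(-t)  (sketch; assumes SymPy).
import sympy as sp

t, p = sp.symbols('t p', positive=True)
f = 1 - sp.exp(-t)
F = sp.laplace_transform(f, t, p, noconds=True)   # 1/p - 1/(p + 1)

print(sp.limit(p * F, p, sp.oo))   # -> 0, which is f(0)
print(sp.limit(p * F, p, 0))       # -> 1, which is lim_{t -> oo} f(t)
```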
5.1.2 Convolutions

We define

    f ∗ g = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ

and since f(y) = g(y) = 0 for y < 0 we have that

    f ∗ g = ∫_0^t f(τ) g(t − τ) dτ.

Now

    L[f ∗ g] = ∫_0^∞ ∫_0^t e^{−pt} f(τ) g(t − τ) dτ dt
             = ∫_0^∞ ∫_τ^∞ e^{−pt} f(τ) g(t − τ) dt dτ
             = ∫_0^∞ e^{−pτ} f(τ) dτ ∫_0^∞ e^{−pu} g(u) du
             = L[f] L[g],        (5.2)

exchanging the order of integration in the second line and substituting u = t − τ in the third.
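The convolution theorem (5.2) can be checked on a concrete pair; a minimal symbolic sketch, assuming SymPy, with f = e^{−t} and g = sin t (both choices arbitrary):

```python
# Check L[f * g] = L[f] L[g] for f = exp(-t), g = sin(t)  (sketch; assumes SymPy).
import sympy as sp

t, tau, p = sp.symbols('t tau p', positive=True)
f = sp.exp(-t)
g = sp.sin(t)

conv = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))
lhs = sp.laplace_transform(conv, t, p, noconds=True)
rhs = (sp.laplace_transform(f, t, p, noconds=True)
       * sp.laplace_transform(g, t, p, noconds=True))

print(sp.simplify(lhs - rhs))   # -> 0
```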


5.2 Inversion

Consider

    I = ∫_{γ−ı∞}^{γ+ı∞} e^{pt} F(p) dp,

where the contour lies to the right of all of the singularities of F(p). At this stage you should be scenting waffle; recall that the Laplace transform was defined only for p with sufficiently large real part. What we mean by F(p) is the analytic continuation of the Laplace transform into the whole of ℂ. We now evaluate I (recall that we insisted that f(t) = 0 for t < 0):

    I = ∫_{γ−ı∞}^{γ+ı∞} e^{pt} ∫_0^∞ f(τ) e^{−pτ} dτ dp
      = ∫_{γ−ı∞}^{γ+ı∞} ∫_{−∞}^{∞} e^{p(t−τ)} f(τ) dτ dp
      = ı e^{γt} ∫_{y=−∞}^{∞} ∫_{τ=−∞}^{∞} e^{ıy(t−τ)} e^{−γτ} f(τ) dτ dy
      = 2πı e^{γt} ∫_{−∞}^{∞} δ(t − τ) e^{−γτ} f(τ) dτ
      = 2πı f(t),

writing p = γ + ıy in the third line and using ∫_{−∞}^{∞} e^{ıy(t−τ)} dy = 2πδ(t − τ) in the fourth. We thus obtain the Bromwich inversion formula:

    f(t) = (1/2πı) ∫_{γ−ı∞}^{γ+ı∞} F(p) e^{pt} dp.        (5.3)

Since γ is chosen so that the contour of integration lies to the right of all the singularities, we can close the contour in the right half-plane for t < 0 and use Cauchy's theorem to get f(t) = 0 for t < 0. Note that if F(p) is meromorphic then, closing the contour in the left half-plane for t > 0,

    f(t) = Σ residues of F(p) e^{pt}.

It is usually much easier to invert Laplace transforms by knowing the answer than by using the inversion formula.
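The residue form of the inversion is easy to drive symbolically; a minimal sketch, assuming SymPy, for the (arbitrarily chosen) meromorphic transform F(p) = 1/((p + 1)(p + 2)):

```python
# Invert F(p) = 1/((p+1)(p+2)) by summing residues of F(p) e^{pt}  (sketch; assumes SymPy).
import sympy as sp

t = sp.symbols('t', positive=True)
p = sp.symbols('p')

F = 1 / ((p + 1) * (p + 2))
poles = sp.roots(sp.denom(F), p)                       # {-1: 1, -2: 1}
f = sum(sp.residue(F * sp.exp(p * t), p, p0) for p0 in poles)

print(sp.simplify(f))                                  # exp(-t) - exp(-2*t)
print(sp.inverse_laplace_transform(F, p, t))           # the same, times Heaviside(t)
```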

5.3 Application to differential equations

5.3.1 Ordinary differential equations

We will illustrate this with an example. Suppose we have

    ẍ − 3ẋ + 2x = 4e^{2t},        x(0) = −3, ẋ(0) = 5.        (5.4)

Incidentally, Laplace transforms are overkill for this problem; it can be solved easily by using the methods learnt at A-level or in Part 1A. Generally we use Laplace transforms when we have an initial value problem, not a boundary value problem.


We write X(p) for the Laplace transform of x(t) and on transforming (5.4) we get

    X (p^2 − 3p + 2) + 3p − 14 = 4/(p − 2),

which has the solution

    X(p) = (14 − 3p) / ((p − 2)(p − 1)) + 4 / ((p − 2)^2 (p − 1)).        (5.5)

To invert this we convert to partial fractions:

    X(p) = 4/(p − 2)^2 + 4/(p − 2) − 7/(p − 1),

which can be inverted (trivially) to get x(t) = 4te^{2t} + 4e^{2t} − 7e^t.

Asymptotic behaviour as t → ∞

Deform the Bromwich contour so that it folds back around the singularity of F(p) with largest real part (say at p = p_0). Then the integral ∫_Γ F(p) e^{pt} dp is dominated by the neighbourhood of p_0. We can approximate this integral by forming an asymptotic expansion of F(p) about p = p_0. In our example (using (5.5)) we see that p_0 = 2 and on writing p = 2 + η we have

    X(2 + η) = (4/η^2)(1 + η)^{−1} + ((8 − 3η)/η)(1 + η)^{−1} ∼ 4/η^2 + 4/η + analytic terms.

Thus

    x(t) ∼ (e^{2t}/2πı) ∫_{γ−2−ı∞}^{γ−2+ı∞} X(2 + η) e^{ηt} dη ∼ e^{2t} (4t + 4).

The −7e^t term in the exact solution is exponentially smaller than this asymptotic solution and so does not feature in the asymptotic expansion.

Green's functions

For our example we wish to solve

    g̈ − 3ġ + 2g = δ(t_+),        g(0) = ġ(0) = 0,

where δ(t_+) is such that ∫_0^∞ δ(t_+) dt = 1. Laplace transforming the problem we get G(p) = 1/(p − 2) − 1/(p − 1) and so g(t) = e^{2t} − e^t. For the general problem

    ẍ − 3ẋ + 2x = f(t),        x(0) = x_0, ẋ(0) = x_1,

we find X(p) = G(p) {F(p) + p x_0 + x_1 − 3x_0} and so

    x(t) = g(t) ∗ {f(t) + x_0 δ'(t_+) + (x_1 − 3x_0) δ(t_+)}.
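For completeness, the worked example (5.4) can be pushed through a computer algebra system from start to finish; a minimal sketch, assuming SymPy:

```python
# Solve x'' - 3x' + 2x = 4 e^{2t}, x(0) = -3, x'(0) = 5 by Laplace transform
# (sketch; assumes SymPy; X stands for the transform of x).
import sympy as sp

t = sp.symbols('t', positive=True)
p = sp.symbols('p')
X = sp.Symbol('X')

x0, x1 = -3, 5
rhs = sp.laplace_transform(4 * sp.exp(2 * t), t, p, noconds=True)   # 4/(p - 2)

# L[x''] = p^2 X - p x(0) - x'(0)  and  L[x'] = p X - x(0)
eqn = sp.Eq((p**2 * X - p * x0 - x1) - 3 * (p * X - x0) + 2 * X, rhs)
Xp = sp.solve(eqn, X)[0]

x = sp.inverse_laplace_transform(Xp, p, t)
print(sp.simplify(x))   # (4*t*exp(2*t) + 4*exp(2*t) - 7*exp(t)) * Heaviside(t)
```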


5.3.2 Partial differential equations

Consider the problem

    ∂φ/∂t = κ ∂^2φ/∂x^2

subject to the boundary conditions φ(x, 0) = 0, φ(x, t) → 0 as x → ∞ and φ(0, t) = φ_0. This models (say) the concentration of diffusing salt in a semi-infinite tube. Laplace transforming the diffusion equation (with respect to t) we get (in obvious notation) pφ̃ = κφ̃_xx. Using the boundary conditions on φ we get the solution φ̃ = (φ_0/p) e^{−√(p/κ) x}.

We can find out a lot about φ without doing the inversion. Suppose we wish to evaluate

    Φ(t) = ∫_0^∞ φ(x, t) dx,

which could be the total amount of salt in the tube. Then

    Φ̃ = ∫_0^∞ φ̃(x, p) dx = φ_0 √(κ/p^3).

We can now inverse transform this to get

    Φ(t) = 2φ_0 √(κt/π).

For the asymptotic behaviour of φ as t → ∞ we find an asymptotic expansion for φ̃ about the largest singularity, which in this case is at p = 0:

    φ̃ ∼ (φ_0/p) (1 − √(p/κ) x + · · ·),

thus

    φ ∼ φ_0 (1 − x/√(πκt) + · · ·),

valid when x ≪ √(κt). This approximation shows that ∂φ/∂x |_{x=0} = −φ_0/√(πκt).

We now do the full inversion:

    φ(x, t) = (1/2πı) ∫_{γ−ı∞}^{γ+ı∞} (φ_0/p) e^{pt − √(p/κ) x} dp
            = (φ_0/2πı) ∫_{−∞}^{(0+)} (1/p) e^{pt − √(p/κ) x} dp
            = (φ_0/πı) ∫_{−∞}^{∞} (1/y) e^{−y^2 t − ıyx/√κ} dy,

collapsing the contour onto the branch cut along the negative real axis and then substituting p = −y^2. Thus

    ∂φ/∂x = −(φ_0/(π√κ)) ∫_{−∞}^{∞} e^{−y^2 t − ıyx/√κ} dy = −(φ_0/√(πκt)) e^{−x^2/(4κt)},

and so

    φ(x, t) = ∫_x^∞ (φ_0/√(πκt)) e^{−x'^2/(4κt)} dx' = (2φ_0/√π) ∫_{x/(2√(κt))}^∞ e^{−η^2} dη = φ_0 erfc( x/(2√(κt)) ).
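The closed form φ = φ_0 erfc(x/(2√(κt))) is easy to check against the diffusion equation and the boundary conditions; a minimal sketch, assuming SymPy:

```python
# Verify phi = phi0 * erfc(x / (2*sqrt(kappa*t))) satisfies phi_t = kappa * phi_xx
# together with the boundary and initial conditions  (sketch; assumes SymPy).
import sympy as sp

x, t, kappa, phi0 = sp.symbols('x t kappa phi0', positive=True)
phi = phi0 * sp.erfc(x / (2 * sp.sqrt(kappa * t)))

pde = sp.diff(phi, t) - kappa * sp.diff(phi, x, 2)
print(sp.simplify(pde))            # -> 0
print(phi.subs(x, 0))              # -> phi0, the boundary value at x = 0
print(sp.limit(phi, x, sp.oo))     # -> 0 as x -> infinity
print(sp.limit(phi, t, 0, '+'))    # -> 0 for x > 0, the initial condition
```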


References ◦ Arfken and Weber, Mathematical Methods for Physicists, Fourth ed., Academic Press, 1995. Quite a lot of people sing Arfken’s praises; I am not one of them. Although it is useful I think it tries to do too much. If nothing else though, it could be used to kill small mammals.

◦ E.J. Hinch, Perturbation Methods, CUP, 1991. Useful for the asymptotic expansions part of the course and with enough other theory to make it interesting. Rather more rigorous than the approach taken in this course and much less of a threat to wildlife than Arfken.

◦ H.A. Priestley, Introduction to Complex Analysis, Revised ed., OUP, 1990. An excellent introduction to contour integrals and the complex analysis needed in this course. Some discussion of transforms but not enough to justify it as a textbook on transforms.

This is an area which appears to be very well served with textbooks; if you know of good ones, please recommend them to me (with brief reviews).

