A Higher Order Local Linearization Method for Solving Ordinary Differential Equations

H. De la Cruz∗, R. J. Biscay†, F. Carbonell†, T. Ozaki‡, J. C. Jimenez†

∗ Universidad de Granma, Departamento de Matematica y Computacion, Bayamo MN, Cuba. email: [email protected]
† Instituto de Cibernetica, Matematica y Fisica, Calle 15, No. 551, e/ C y D, Vedado, La Habana 4, C.P. 10400, Cuba. email: {biscay,felix,jcarlos}@icmf.inf.cu
‡ The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-Ku, Tokyo, Japan.

Abstract The Local Linearization (LL) method for the integration of ordinary differential equations is an explicit one-step method that has a number of suitable dynamical properties. However, a major drawback of the LL integrator is that its order of convergence is only two. The present paper overcomes this limitation by introducing a new class of numerical integrators, called the LLT method, which is based on the addition of a correction term to the LL approximation. In this way an arbitrary order of convergence can be achieved while retaining the dynamical properties of the LL method. In particular, it is proved that the LLT method reproduces correctly the phase portrait of a dynamical system near hyperbolic stationary points up to the order of convergence. The performance of the introduced method is further illustrated through computer simulations.

Keywords: Numerical integrators; Exponential methods; Local linearization; Variation of constants formula; Hyperbolic stationary points

1 Introduction

A number of existing numerical integrators for ordinary differential equations (ODEs), which will be referred to as exponential methods, have in common the explicit use of exponentials in obtaining an approximate solution. Some examples are the methods known as exponential fitting [1], exponential integrating factor [2], exponential integrators ([3], [4]), exponential time differencing ([5], [6]), truncated Magnus expansion ([7], [8]), truncated Fer expansion [9] (also called exponential of iterated commutators in [10]), local linearization (see, e.g., [4], [11], [12], [13], and references therein), quadrature schemes based on versions of the variation of constants formula (e.g., [14], [15], [16], [17], [18]), and exponential Runge-Kutta methods ([19], [20]). The development of exponential methods has been greatly stimulated by their success in preserving stability properties of the underlying systems in cases in which nonlinear implicit schemes are very expensive to implement and standard explicit schemes require extremely small time steps in order to be stable. Furthermore, several algorithmic advances have contributed to make them more feasible even for large systems of differential equations. A number of efficient and stable procedures are now available for computing the matrix exponential, e.g., through the Schur decomposition, stable Padé approximations with the scaling and squaring method, Krylov subspace methods and Lie-group based algorithms (see, e.g., [3], [4], [21], [22], [23], [24], [25]). Fast algorithms for the computation of some integrals that involve the matrix exponential have also been elaborated ([26], [27]).

In particular, Local Linearization (LL) for autonomous ODEs is an exponential method based on the following strategy: the vector field of the equation is locally (piecewise) approximated through a first-order Taylor expansion at each time step, so obtaining successive linear equations that are explicitly integrated. This approach has also appeared in the literature under other names, such as the matricial exponentially fitted method [28], the exponential Euler method [29], the piecewise linearized method [12] and the exponentially fitted Euler method [4]. Theoretical and practical results have shown that the LL method has a number of convenient dynamical properties. These include A-stability, lack of spurious equilibrium points under quite general conditions, and conservation of the dynamics of the exact solution around hyperbolic equilibrium points and cycles (see, e.g., [13]). Another appealing feature of the LL approach is its flexibility to be extended to more general kinds of equations, e.g., stochastic differential equations ([11], [30], [31]), random differential equations [32] and delay differential equations [33].

However, a major drawback of the standard LL method is its low order of convergence, namely two ([12], [13]). To overcome this, a class of higher order local linearization methods was developed in [34]. These are based on adding a correction term to the LL approximation, which is computed by solving an auxiliary ODE by means of standard explicit integrators. In the present paper an alternative approach is introduced that will be referred to as the Local Linearization-Taylor (LLT) method. It also adds a correction term to the LL approximation, but now the correction is determined through a truncated Taylor expansion. It was partially inspired by the purpose of combining the LL method with the well-known Ito-Taylor expansions for the numerical integration of stochastic differential equations [35]. In this sense, the performance of the LLT method for ODEs is also significant as a limit case of its stochastic counterpart when the noise variance goes to zero. Similarly to the methods in [5], [14], [15], [18], [19] and [20], LLT is an A-stable integrator that is derived by inserting a polynomial approximation into the remainder term of the variation of constants formula. Also, like some other integrators based on versions of this formula (e.g., [15], [17]) and classical multiderivative integrators, it requires knowledge of higher order derivatives of the exact solution. On this basis, said polynomial is optimally provided by a truncated Taylor expansion, but at the expense of computing higher derivatives. For this, there are known recursive algorithms (see, e.g., [36], [37], [38]), and the use of symbolic computing packages may be helpful. The practical scope of the LLT method focuses on systems of equations for which such a computation is feasible with affordable effort.
In comparison with related approaches, the LLT method has a number of distinctive features: i) Its linear term is locally determined at each time step (contrary to the global decomposition into linear and nonlinear parts involved in exponential time differencing [5] and exponential Runge-Kutta methods [19], [20]). ii) Said approximating polynomial is obtained through a truncated Taylor expansion, not by means of multistep interpolation or multistage algorithms as in [5], [14], [15], [18], [19], [20]. iii) As a consequence, LLT reduces to the standard LL method when the degree of such a polynomial is zero. iv) Some simplifications in computing this polynomial arise from the fact that its first two coefficients are null. v) LLT is a one-step method, so it avoids the inconvenience of determining starting values (in contrast with the multistep integrators in [5], [14], [15]). vi) It is non-iterative (unlike the method introduced in [18]). vii) It does not require the previous computation of a first approximation by some other A-stable method (unlike the approach in [17]). viii) It retains the equilibrium points of the underlying system and does not introduce spurious fixed points when the time step is sufficiently small. Note that without this condition the meaning of the A-stability analysis would not be clear. ix) Remarkably, all the numerical work required for computing the integrals of exponential functions involved in the LLT scheme is reduced to evaluating just one matrix exponential. The dimension q of this exponential is slightly greater than the dimension d of the system; namely, q = d + p, where p is the desired order of global convergence of the LLT approximation.

The paper is organized as follows. Section 2 reviews the standard, low order LL method, as well as some discretization schemes that have been proposed for its numerical implementation. Section 3 introduces the LLT method and discusses its computational aspects. Section 4 deals with its orders of local and global convergence, while Section 5 studies some of its dynamic properties. It is proved that the LLT approach makes it possible to achieve an arbitrary order of convergence without losing important dynamic features of the standard LL method. Finally, Section 6 illustrates the performance of the introduced method through some computer simulations.

2 Standard Local Linearization

2.1 Local Linearization method

Let $D \subset \mathbb{R}^d$ be an open set and $f \in C^1(D, \mathbb{R}^d)$. Consider the (autonomous) initial-value problem
\[
x'(t) = f(x(t)), \qquad t_0 \le t \le T, \tag{1}
\]
\[
x(t_0) = x_0, \tag{2}
\]
where $t_0, T \in \mathbb{R}$, $x_0 \in D$. Let $t_0 < t_1 < \ldots < t_N = T$ be a time discretization, and denote $h = \max(t_{n+1} - t_n : n = 0, \ldots, N-1)$, $h_n = t_{n+1} - t_n$, $\Lambda_n = [t_n, t_{n+1}]$.
The LL method can be derived as follows. Associated with (1), define the local problems
\[
x'(t) = f(x(t)), \qquad t \in \Lambda_n, \qquad x(t_n) = x_{t_n}, \tag{3}
\]
for given constants $x_{t_n} \in D$, $n = 0, \ldots, N-1$. These equations are approximated by linear ones on the basis of the first-order Taylor expansion of $f$ around $x_{t_n}$:
\[
x'(t) = L_n x(t) + a_n, \qquad t \in \Lambda_n, \qquad x(t_n) = x_{t_n}, \tag{4}
\]
where
\[
L_n = f'(x_{t_n}), \qquad a_n = f(x_{t_n}) - f'(x_{t_n})\, x_{t_n}, \tag{5}
\]

and $f'(x)$ is the derivative of $f$ evaluated at $x$. Equation (4) has a solution, say $y_n(t)$, that is explicit in terms of the fundamental matrix $\Phi_n(t) = \Phi(t, t_n) = \exp((t - t_n) L_n)$ of the associated homogeneous linear system
\[
x'(t) = L_n x(t), \qquad t \in \Lambda_n.
\]
Namely,
\[
y_n(t) = \Phi_n(t)\, x_{t_n} + \int_{t_n}^{t} \Phi_n(t)\, \Phi_n^{-1}(s)\, a_n \, ds = x_{t_n} + (t - t_n)\, \varphi((t - t_n) L_n)\, f(x_{t_n}), \tag{6}
\]
where the matrix function
\[
\varphi((t - t_n) L_n) = \frac{1}{t - t_n} \int_{0}^{t - t_n} e^{(t - t_n - s) L_n} \, ds \tag{7}
\]
is determined by the analytic function
\[
\varphi(z) = (e^{z} - 1)/z = \sum_{j=0}^{\infty} \frac{z^{j}}{(j+1)!}, \qquad z \in \mathbb{C}. \tag{8}
\]
The (continuous-time) LL approximation $y^{LL}(t)$ to the solution of (1)-(2) is defined by concatenating the solutions (6) of said local linear problems:
\[
y^{LL}(t) = y^{LL}(t_n) + (t - t_n)\, \varphi((t - t_n) L_n)\, f\bigl(y^{LL}(t_n)\bigr), \qquad t \in \Lambda_n, \qquad L_n = f'\bigl(y^{LL}(t_n)\bigr), \tag{9}
\]
starting at $y^{LL}(t_0) = x_0$. Correspondingly, the LL discretization is defined by evaluating this approximation at the discrete times $t = t_n$:
\[
y^{LL}_{t_{n+1}} = y^{LL}(t_{n+1}) = y^{LL}_{t_n} + h_n\, \varphi(h_n L_n)\, f\bigl(y^{LL}_{t_n}\bigr), \qquad n = 0, 1, \ldots, N-1, \tag{10}
\]
starting at $y^{LL}_{t_0} = x_0$.

2.2 Local Linearization schemes

A number of schemes have been proposed for the numerical implementation of the LL discretization (10). For completeness, a summary of them is presented below.

LL scheme I) If the matrix $L_n$ is not singular, then integration by parts of (6) results in
\[
y^{LL}_{t_{n+1}} = y^{LL}_{t_n} + L_n^{-1}\bigl(e^{h_n L_n} - I\bigr)\, f\bigl(y^{LL}_{t_n}\bigr). \tag{11}
\]
This expression has been used to compute the LL discretization by some authors ([11], [28], [29]). However, it is known that formula (11) suffers from cancellation error for nearly singular matrices $(t_{n+1} - t_n) L_n$ (see, e.g., comments in [6] and references therein). Another drawback of (11) is that it requires the computation of matrix inverses in addition to the matrix exponential.

LL scheme II) (10) can be rewritten as
\[
y^{LL}_{t_{n+1}} = y^{LL}_{t_n} + r_0(L_n, h_n)\, f\bigl(y^{LL}_{t_n}\bigr), \tag{12}
\]
where
\[
r_j(M, h) = \int_{0}^{h} e^{sM} s^{j}\, ds
\]
for any matrix $M \in \mathbb{R}^{d \times d}$, $h \ge 0$ and $j = 0, 1, \ldots$.
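As a quick numerical sanity check of this definition, the following minimal Python sketch compares $r_0(M, h)$ computed by a composite trapezoidal rule with the closed form $M^{-1}(e^{hM} - I)$ that underlies (11); the matrix, step size and node count are illustrative choices, not data from the paper.

```python
import numpy as np
from scipy.linalg import expm

def r_j_quad(M, h, j, nodes=201):
    """Approximate r_j(M, h) = int_0^h e^{sM} s^j ds by the composite trapezoidal rule."""
    s = np.linspace(0.0, h, nodes)
    vals = [expm(si * M) * si**j for si in s]     # integrand at the quadrature nodes
    acc = 0.5 * (vals[0] + vals[-1]) + sum(vals[1:-1])
    return (s[1] - s[0]) * acc

# Illustrative 2x2 example.
M = np.array([[-2.0, 1.0], [0.5, -3.0]])
h = 0.1
r0_quad = r_j_quad(M, h, j=0)
r0_closed = np.linalg.solve(M, expm(h * M) - np.eye(2))   # M^{-1}(e^{hM} - I), cf. (11)
print(np.max(np.abs(r0_quad - r0_closed)))                # small, limited by quadrature error
```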

Take $m \in \mathbb{N}$ such that $\tau L_n$ is small enough for $\tau = h_n/2^m$. Then, $r_0(L_n, \tau)$ can be approximated with good accuracy by means of the truncation of its Taylor series expansion
\[
r_0(L_n, \tau) = \sum_{k=1}^{\infty} \frac{\tau^{k}}{k!}\, L_n^{k-1},
\]
and in turn $r_0(L_n, h_n)$ can be obtained from $r_0(L_n, \tau)$ through $m$ applications of the recursion
\[
r_0(L_n, 2\tau) = 2\, r_0(L_n, \tau) + L_n\, r_0^{2}(L_n, \tau).
\]
This algorithm is proposed in [16]. A limitation of this implementation is that Taylor polynomials are not stable approximations to the matrix exponential ([39]). Another one is that a very large number $m$ of scaling steps may be necessary in order to achieve a desired accuracy.

LL scheme III) An algorithm to compute $r_j(M, h)$, $j \ge 0$, on the basis of the Schur decomposition of $M$ is elaborated in [30]. The resulting value of $r_0(L_n, h_n)$ can be inserted into (12) to obtain $y^{LL}_{t_{n+1}}$. Accuracy and stability properties of this implementation remain to be studied.

LL scheme IV) Taking a contour $\Gamma \subset \mathbb{C}$ that encloses the eigenvalues of $h_n L_n$, $\varphi(h_n L_n)$ in (10) can be expressed as
\[
\varphi(h_n L_n) = \frac{1}{2\pi i} \int_{\Gamma} \varphi(z)\, (zI - h_n L_n)^{-1}\, dz = \frac{1}{2\pi i} \int_{\Gamma} \frac{e^{z} - 1}{z}\, (zI - h_n L_n)^{-1}\, dz.
\]
In the context of implementing exponential time differencing methods, it is proposed in [6] to take $\Gamma$ as a suitable circle and to compute this integral by the trapezoidal rule with equally spaced points. Simulation results have shown that the resulting scheme overcomes the poor accuracy of (11) when $h_n L_n$ is nearly singular. However, this is achieved at the expense of computing inverses of a large number of matrices $z_j I - h_n L_n$ corresponding to the quadrature nodes $z_j$. Further effort is involved in ensuring that $\Gamma$ encloses all the eigenvalues of $h_n L_n$. On the other hand, stability properties of this implementation remain to be studied.

LL scheme V) For systems of large dimension $d \gg 1$, Krylov subspace methods for approximating the matrix exponential have been introduced in [3]. For any $A \in \mathbb{R}^{d \times d}$, $v \in \mathbb{R}^{d}$, $m \in \mathbb{N}$ and any analytic function $g$, they reduce the problem of approximating $g(A)v$ to computing $g(H_m)e_1$, where $H_m \in \mathbb{R}^{m \times m}$ is the Hessenberg matrix corresponding to the Krylov subspace generated by $v, Av, \ldots, A^{m-1}v$, and $e_1$ is the first unit vector in $\mathbb{R}^{m}$. High accuracy is achieved even by choosing $m \ll d$, which makes the computation of $g(H_m)e_1$ much easier than that of $g(A)v$. This makes it feasible to obtain a precise approximation to $\varphi(h_n L_n) f\bigl(y^{LL}_{t_n}\bigr)$ in (10) even for high dimensional systems. Error bounds of this approach are studied in [3].

LL scheme VI) Denote $y^{LL}_{t_{n+1}} = y^{LL}_{t_n} + g_n$, where

\[
g_n = h_n\, \varphi(h_n L_n)\, f\bigl(y^{LL}_{t_n}\bigr).
\]
Then, from Theorem 1 in [26] (see [40]) it follows that the vector $g_n$ can be obtained from the identity
\[
\begin{pmatrix} e^{h_n L_n} & g_n \\ 0_{1 \times d} & 1 \end{pmatrix} = \exp(h_n C_n),
\]
where
\[
C_n = \begin{pmatrix} L_n & f\bigl(y^{LL}_{t_n}\bigr) \\ 0_{1 \times d} & 0 \end{pmatrix} \in \mathbb{R}^{(d+1) \times (d+1)}.
\]
In this way the numerical implementation of the LL method is reduced to computing the matrix exponential $\exp(h_n C_n)$. For this, there are a number of available algorithms, e.g., those based on the Schur decomposition, stable Padé approximations with scaling and squaring strategies or Krylov subspace methods ([3], [4], [21], [22], [23], [24], [25]).

In addition to the LL schemes just reviewed, other ones could be elaborated by applying any A-stable numerical integrator to solve the linear problems (4). Since standard implicit methods turn out to be explicit for linear equations, the resulting LL schemes would be A-stable with moderate computational cost. Notice also that (4) can be equivalently expressed as the homogeneous linear problem
\[
u'(t) = C_n u(t), \qquad t \in \Lambda_n, \qquad u(t_n) = (x_{t_n}^{T}, 1)^{T},
\]
with the matrix $C_n$ just defined above and $u(t) = (x^{T}(t), z(t))^{T} \in \mathbb{R}^{d+1}$, where $z(t)$ is an auxiliary variable. This system has the exact solution
\[
u(t_{n+1}) = \exp(h_n C_n)\, u(t_n).
\]
Thus, the application of any integrator to (4) is equivalent to an approximation to $\exp(h_n C_n)$, as in LL scheme VI above. In particular, implicit Runge-Kutta (IRK) methods lead to stable Padé approximations to the matrix exponential [39]. However, from a practical viewpoint it should be taken into account that algorithms specifically designed to compute the matrix exponential usually incorporate additional processing routines to improve accuracy and numerical stability, e.g., squaring and scaling, balancing, etc. (see [25] and references therein). For further information on LL schemes and their error analysis we refer to [41].
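As a concrete illustration of LL scheme VI, here is a minimal Python sketch of one LL step; the names ll_step, f and jac are illustrative and not from the paper, and the matrix exponential is delegated to scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

def ll_step(y, h, f, jac):
    """One step of the LL discretization (10), computed as in LL scheme VI:
    build C_n = [[L_n, f(y)], [0, 0]] and read g_n = h*phi(h L_n) f(y) from exp(h C_n)."""
    d = y.size
    Cn = np.zeros((d + 1, d + 1))
    Cn[:d, :d] = jac(y)          # L_n = f'(y_{t_n})
    Cn[:d, d] = f(y)
    gn = expm(h * Cn)[:d, d]     # top block of the last column of exp(h C_n)
    return y + gn                # y_{t_{n+1}} = y_{t_n} + g_n
```

When $f$ is affine, the local linear problem (4) coincides with (1), so this step reproduces the exact solution (up to the accuracy of the matrix exponential), which is consistent with the A-stability of the LL discretization noted above.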

3 Local Linearization with higher order correction

3.1 Local Linearization-Taylor method

In order to improve the accuracy of the standard LL method reviewed above, consider the addition of a correction term $z(t)$ to obtain a better approximation to the solution of (1)-(2):
\[
y(t) = y^{LL}(t) + z(t). \tag{13}
\]
From the variation of constants formula (see, e.g., [42]) it follows that for each $n = 0, \ldots, N-1$ the exact solution $x_n(t)$ of (3) can be written as $x_n(t) = y_n^{LL}(t) + r_n(t)$, where $y_n^{LL}(t)$ is the solution (6) of the local linear problem (4), and for $t \in \Lambda_n$,
\[
r_n(t) = \Phi_n(t) \int_{t_n}^{t} \Phi_n^{-1}(s)\, M_n(x_n(s))\, ds = \int_{0}^{t - t_n} e^{(t - t_n - s) L_n}\, M_n(x_n(s + t_n))\, ds, \tag{14}
\]
\[
M_n(x) = f(x) - (L_n x + a_n).
\]
Here, $L_n$ and $a_n$ are defined by (5).

Suppose $D \subset \mathbb{R}^d$ is open, $p \ge 3$ and $f \in C^p(D, \mathbb{R}^d)$. For arbitrary $\xi \in D$ and $a \in \mathbb{R}$, denote by $x(t, a, \xi)$ a solution of $x'(t) = f(x(t))$ in $D$ on some interval $[a, a + \Delta_{a,\xi}]$, $\Delta_{a,\xi} > 0$, with initial value $x(a, a, \xi) = \xi$. From $f \in C^1(D, \mathbb{R}^d)$, the usual local existence-uniqueness theorem (see, e.g., [43]) ensures the existence of $x(t, a, \xi)$. Furthermore, $f \in C^p(D, \mathbb{R}^d)$ implies that $x(\cdot, a, \xi) \in C^{p+1}([a, a + \Delta_{a,\xi}], \mathbb{R}^d)$. In addition, since $x(t, a, \xi)$ is the solution of an autonomous system, its temporal derivatives at $t = a$ do not depend on $a$. Thus, for all $\xi \in D$, $2 \le j \le p-1$, the following is well-defined:
\[
c_j(\xi) = \left. \left( \frac{\partial^{j+1}}{\partial t^{j+1}} x(t, a, \xi) - f'(\xi)\, \frac{\partial^{j}}{\partial t^{j}} x(t, a, \xi) \right) \right|_{t=a}. \tag{15}
\]
Given $p \ge 2$, the Taylor expansion of $M_n(x_n(t))$ in (14) up to order $p-1$ at $t = t_n$ results in approximating $x_n(t)$ by
\[
y_n(t) = y_n^{LL}(t) + z_n(t),
\]
where
\[
z_n(t) = \sum_{j=0}^{p-1} \int_{0}^{t - t_n} e^{(t - t_n - s) L_n}\, \frac{s^{j}}{j!}\, c_j(x_{t_n})\, ds,
\]
$c_0(x_{t_n}) = c_1(x_{t_n}) = 0$, and $c_j(x_{t_n})$ are defined by (15) for $j \ge 2$.
This motivates the definition of the following approximation $y(t)$ to the solution of (1)-(2), which will be referred to as the Local Linearization-Taylor (LLT) approximation: for $n = 0, \ldots, N-1$, $t \in \Lambda_n$,
\[
y(t) = y(t_n) + (t - t_n)\, \varphi((t - t_n) L_n)\, f(y(t_n)) + \sum_{j=2}^{p-1} \int_{0}^{t - t_n} e^{(t - t_n - s) L_n}\, \frac{s^{j}}{j!}\, c_j(y(t_n))\, ds, \tag{16}
\]
starting at $y(t_0) = x_0$, where $L_n = f'(y(t_n))$. Taking into consideration (7), this can also be written as
\[
y(t) = y(t_n) + (t - t_n)\, \varphi_{LLT}(t - t_n, y(t_n)), \tag{17}
\]
where
\[
\varphi_{LLT}(h, \xi) = \frac{1}{h} \int_{0}^{h} e^{(h-s) f'(\xi)} \left( f(\xi) + \sum_{j=2}^{p-1} \frac{s^{j}}{j!}\, c_j(\xi) \right) ds \tag{18}
\]
for all $\xi \in D$, $h \ge 0$. Correspondingly, the LLT discretization is defined by evaluation at the times $t = t_n$, $n = 0, 1, \ldots, N-1$:
\[
y_{t_{n+1}} = y_{t_n} + h_n\, \varphi_{LLT}(h_n, y_{t_n}), \tag{19}
\]
starting at $y_{t_0} = x_{t_0}$. Given $p \ge 2$, the notation LLTp will be used to make explicit that the summation in (16)-(19) is up to $p-1$. This will be referred to as the LLT method of order $p$. Notice that for $p = 2$ this reduces to the LL discretization, which has (global) order of convergence 2. Details on the computation of the coefficients $c_j(y_{t_n})$ and the exponential integrals involved in the LLT discretization (19) are discussed in the next section.

Remark 1 The LLT method just described can be straightforwardly extended to the non-autonomous ODE $x'(t) = f(t, x(t))$ simply by substituting $f(t_n, y(t_n))$ for $f(y(t_n))$ and $f'(t_n, y(t_n))$ for $L_n = f'(y(t_n))$ in (16). A-stability and an arbitrary order of convergence can be achieved in this way. However, A-stability is a rather weak concept for non-autonomous systems. Time-varying linear parts $L_n(t)$ must be considered in the local linearization if stronger stability properties are desired, such as AN-stability.

3.2 Computational aspects

Given $p \ge 3$, the LLTp discretization (19) requires the computation of the vectors $c_j(y_{t_n})$ defined by (15) for $2 \le j \le p-1$, which involve temporal derivatives up to order $p$ of the exact solution $x(t, t_n, y_{t_n})$ evaluated at $t = t_n$. These in turn can be written in terms of derivatives of $f$ evaluated at $y_{t_n}$. Some useful simplifications arise from the fact that $c_0(y_{t_n}) = c_1(y_{t_n}) = 0$.

The implementation of the LLT discretization also requires the evaluation of integrals involving the matrix exponential in (19). Remarkably, this can be done exactly by computing just one matrix exponential, as in the implementation of LL scheme VI discussed in Section 2 (see also [26] and [27] for related results). For this, define $z_n(s) = y(t_n + s) - y(t_n)$ for $0 \le s \le h_n$, where $y(t)$ is the LLT approximation (17). Then,
\[
z_n(s) = \int_{0}^{s} e^{(s-v) L_n} \left( f(y(t_n)) + \sum_{j=2}^{p-1} \frac{v^{j}}{j!}\, c_j(y(t_n)) \right) dv,
\]
and hence the variation of constants formula implies that
\[
z_n'(s) = L_n z_n(s) + f(y(t_n)) + \sum_{j=2}^{p-1} \frac{1}{j!}\, s^{j} c_j, \qquad 0 \le s \le h_n,
\]
where $c_j = c_j(y(t_n))$ and $z_n(0) = 0$. Define the auxiliary variable $w(s) = (s^{p-1}/(p-1)!, \ldots, s/1!, 1)^{T}$. Then, $u_n(s) = (z_n^{T}(s), w^{T}(s))^{T} \in \mathbb{R}^{d+p}$ satisfies the homogeneous linear system
\[
u_n'(s) = A_n u_n(s), \qquad 0 \le s \le h_n,
\]
starting at $u_n(0) = (0, \ldots, 0, 1)^{T}$, where
\[
A_n = \begin{pmatrix}
L_n & c_{p-1} & c_{p-2} & \cdots & c_2 & 0_{d \times 1} & f(y(t_n)) \\
0_{1 \times d} & 0 & 1 & & & & \\
0_{1 \times d} & & 0 & 1 & & & \\
\vdots & & & \ddots & \ddots & & \\
0_{1 \times d} & & & & 0 & 1 & \\
0_{1 \times d} & & & & & 0 & 1 \\
0_{1 \times d} & & & & & & 0
\end{pmatrix} \in \mathbb{R}^{(d+p) \times (d+p)}.
\]
Define the block matrices $B_n \in \mathbb{R}^{d \times (d+p-1)}$, $g_n \in \mathbb{R}^{d \times 1}$, $C_n \in \mathbb{R}^{p \times (d+p-1)}$, $v_n \in \mathbb{R}^{p \times 1}$ through the partition
\[
\begin{pmatrix} B_n & g_n \\ C_n & v_n \end{pmatrix} = \exp(h_n A_n).
\]

From $u_n(h_n) = \exp(h_n A_n)\, u_n(0) = \exp(h_n A_n)\, (0, \ldots, 0, 1)^{T}$, it follows that $z_n(h_n) = g_n$. Thus, the LLT discretization can be obtained as
\[
y_{t_{n+1}} = y_{t_n} + z_n(h_n) = y_{t_n} + g_n. \tag{20}
\]
In this way the implementation of the LLTp discretization is reduced to computing just the exponential of a matrix $h_n A_n$ of dimension $(d+p) \times (d+p)$.
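To make this construction concrete, the following Python sketch performs one LLTp step (20) for user-supplied coefficient vectors $c_2, \ldots, c_{p-1}$; the names llt_step, f, jac and c_list are illustrative, and the matrix exponential is delegated to scipy.linalg.expm rather than to any particular algorithm of the paper.

```python
import numpy as np
from scipy.linalg import expm

def llt_step(y, h, f, jac, c_list):
    """One LLTp step (20): y_{n+1} = y_n + g_n, with g_n read from exp(h*A_n).

    c_list = [c_2, ..., c_{p-1}] are the coefficient vectors (15) at y; p = len(c_list) + 2.
    """
    d = y.size
    p = len(c_list) + 2
    A = np.zeros((d + p, d + p))
    A[:d, :d] = jac(y)                        # L_n
    # top block row: [L_n, c_{p-1}, ..., c_2, 0_{d x 1}, f(y)]
    for k, j in enumerate(range(p - 1, 1, -1)):
        A[:d, d + k] = c_list[j - 2]          # column associated with c_j
    A[:d, d + p - 1] = f(y)                   # last column: f(y(t_n))
    # bottom-right p x p block: ones on the superdiagonal
    for k in range(p - 1):
        A[d + k, d + k + 1] = 1.0
    gn = expm(h * A)[:d, -1]                  # g_n = top block of the last column
    return y + gn
```

For p = 2 (an empty c_list) this reduces to the LL step of scheme VI, consistent with the remark after (19).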

4 Order of convergence

In order to analyze the error of the LLTp method for given $p \ge 3$, the following usual assumptions will be referred to:

H1) $D \subset \mathbb{R}^d$ is an open set, and $f \in C^p(D, \mathbb{R}^d)$.

H2) $x_0 \in D$ and $t_0, T \in \mathbb{R}$ are such that there exists a solution $x(t)$ in $D$ to the initial-value problem (1)-(2) on the interval $t_0 \le t \le T$.

The following basic lemmas will be useful.

Lemma 2 Under assumption (H1), $c_j(\cdot) \in C^1(D, \mathbb{R}^d)$ for $j = 2, \ldots, p-1$, where $c_j$ is defined by (15).

Proof. (H1) implies that for all $\xi \in D$ and $j = 2, \ldots, p-1$, $c_j(\xi)$ given by (15) is well-defined. By the chain rule of derivation, $c_j(\xi)$ can be represented through products and sums of the derivatives $f^{(r)}(\xi)$, $r = 0, 1, \ldots, j$. Thus, the thesis follows from $f^{(r)} \in C^1(D, \mathbb{R}^d)$ for $r \le p-1$.

Lemma 3 If (H1) holds, then the function $\varphi_{LLT}(s, \xi)$ defined by (18) satisfies $\frac{\partial^{r}}{\partial s^{r}} \varphi_{LLT}(\cdot, \cdot) \in C(\mathbb{R}_+ \times D)$ and $\frac{\partial}{\partial \xi} \frac{\partial^{r}}{\partial s^{r}} \varphi_{LLT}(\cdot, \cdot) \in C(\mathbb{R}_+ \times D)$ for $r \in \mathbb{Z}_+$.

Proof. For arbitrary $s \in \mathbb{R}_+$ and $\xi \in D$, $\varphi_{LLT}$ can be expressed as
\[
\varphi_{LLT}(s, \xi) = \varphi_1(s f'(\xi))\, f(\xi) + \sum_{j=2}^{p-1} s^{j}\, \varphi_{j+1}(s f'(\xi))\, c_j(\xi), \tag{21}
\]
where for $s \in \mathbb{R}_+$, $M \in \mathbb{R}^{d \times d}$ and $j \ge 1$,
\[
\varphi_j(sM) = \frac{1}{(j-1)!\, s^{j}} \int_{0}^{s} e^{(s-v)M}\, v^{j-1}\, dv. \tag{22}
\]
Setting $\varphi_0(z) = e^{z}$, the functions $\varphi_j(z)$, $z \in \mathbb{C}$, satisfy $\varphi_j(0) = 1/j!$ and the recursion $\varphi_{j+1}(z) = \bigl(\varphi_j(z) - 1/j!\bigr)/z$. Thus, they are analytic on $\mathbb{C}$,
\[
\varphi_j(z) = \sum_{r=j}^{\infty} \frac{z^{r-j}}{r!},
\]

and the corresponding matrix functions $\varphi_j(M)$ are continuous and have continuous derivatives on $\mathbb{R}^{d \times d}$. Thus, the theorem follows from (21), Lemma 2 and the assumed continuity of $f$ and $f'$ on $D$.

For simplicity, uniform discrete times are assumed henceforth: $t_n = t_0 + nh \in [t_0, T]$, $n = 0, 1, \ldots, N$, $h = (T - t_0)/N$. The local error of the LLTp approximation to the exact solution $x(t)$ at time $t_{n+1}$ is
\[
l_{n+1} = \| x(t_{n+1}) - x(t_n) - h\, \varphi_{LLT}(h, x(t_n)) \|.
\]

Theorem 4 Assume that hypotheses (H1)-(H2) hold. Then, there exists a constant $C > 0$, not depending on $h$, such that the local error satisfies $l_{n+1} \le C h^{p+1}$ for all $n = 0, \ldots, N-1$.

Proof. Fix $t_n = t_0 + nh \in [t_0, T]$. For $0 \le s \le h$, denote $v(s, x(t_n)) = x(t_n) + s\, \varphi_{LLT}(s, x(t_n))$ and $z(s, x(t_n)) = v(s, x(t_n)) - x(t_n)$. From (18) it follows that for $0 \le s \le h$,
\[
z(s, x(t_n)) = \int_{0}^{s} e^{(s-v) f'(x(t_n))} \left( f(x(t_n)) + \sum_{j=2}^{p-1} \frac{v^{j}}{j!}\, c_j(x(t_n)) \right) dv. \tag{23}
\]
This and the variation of constants formula imply that
\[
\frac{\partial}{\partial s} z(s, x(t_n)) = f'(x(t_n))\, z(s, x(t_n)) + f(x(t_n)) + \sum_{j=2}^{p-1} \frac{1}{j!}\, s^{j} c_j(x(t_n)), \qquad 0 \le s \le h, \tag{24}
\]
with $z(0, x(t_n)) = 0$. Assumptions (H1)-(H2) ensure that $x(\cdot) \in C^{p+1}([t_0, T], \mathbb{R}^d)$, and hence (15) leads to
\[
c_j(x(t_n)) = x^{(j+1)}(t_n) - f'(x(t_n))\, x^{(j)}(t_n) \tag{25}
\]
for $j = 2, \ldots, p-1$. Recursive computation of derivatives using (24) and (25) shows that
\[
\frac{\partial^{r}}{\partial s^{r}} v(0, x(t_n)) = x^{(r)}(t_n)
\]
for $r = 0, 1, \ldots, p$. Since in addition $x(\cdot) \in C^{p+1}([t_0, T], \mathbb{R}^d)$ and $v(\cdot, x(t_n)) \in C^{p+1}([0, h], \mathbb{R}^d)$, the Taylor formula for increments gives
\[
l_{n+1} = \| x(t_n + h) - v(h, x(t_n)) \| \le \left( \sup \| x^{(p+1)}(a) \| + \sup \left\| \frac{\partial^{p+1}}{\partial s^{p+1}} v(s, x(a)) \right\| \right) h^{p+1}, \tag{26}
\]
where the sup is over $s \in [0, h]$, $a \in [t_0, T]$. The first sup in (26) is finite due to $x(\cdot) \in C^{p+1}([t_0, T], \mathbb{R}^d)$. Lemma 3 and the continuity of $x(\cdot)$ guarantee that $\varphi_{LLT}(s, x(a))$ is bounded on $[0, h] \times [t_0, T]$, and so $v(s, x(a))$ is bounded too. Induction using (24) shows that $\frac{\partial^{r}}{\partial s^{r}} v(s, x(a))$ is bounded on $(s, a) \in [0, h] \times [t_0, T]$ for all $r \ge 0$. Hence the second sup in (26) is finite too.

Theorem 5 Suppose that conditions (H1)-(H2) hold. Then, there exist constants $h_0, C > 0$ such that, for $0 < h \le h_0$, the LLTp discretization $y_{t_{n+1}}$ given by (19) is well-defined for all times $t_{n+1} = t_0 + nh$, $n = 0, \ldots, N-1$, and the global error satisfies
\[
e_{n+1} = \| x(t_{n+1}) - y_{t_{n+1}} \| \le C h^{p}.
\]

Proof. Denote $\mathcal{X} = \{ x(t) : t \in [t_0, T] \}$. Since $\mathcal{X}$ is a compact set contained in the open set $D \subset \mathbb{R}^d$, there exists $\varepsilon > 0$ such that the compact set
\[
A_{\varepsilon} = \left\{ \xi \in \mathbb{R}^d : \min_{x(t) \in \mathcal{X}} \| \xi - x(t) \| \le \varepsilon \right\}
\]
is contained in $D$. By construction, $y_{t_0} = x_{t_0} = x(t_0) \in A_{\varepsilon}$. Assume $y_{t_0}, \ldots, y_{t_n} \in A_{\varepsilon}$ for some $0 \le n \le N-1$. Then, $y_{t_{n+1}}$ is well-defined, and
\[
e_{n+1} = \| x(t_{n+1}) - y_{t_n} - h\, \varphi_{LLT}(h, y_{t_n}) \| \le l_{n+1} + e_n + h R_n, \tag{27}
\]
where $l_{n+1} = \| x(t_{n+1}) - x(t_n) - h\, \varphi_{LLT}(h, x(t_n)) \|$ is the local error at time $t_{n+1}$, and $R_n = \| \varphi_{LLT}(h, x(t_n)) - \varphi_{LLT}(h, y_{t_n}) \|$. Lemma 3 ensures that $\frac{\partial}{\partial \xi} \varphi_{LLT}(h, \xi)$ is continuous on $(h, \xi) \in \mathbb{R}_+ \times D$. From this, given that $\mathbb{R}_+ \times D$ is open and $[0, T - t_0] \times A_{\varepsilon} \subset \mathbb{R}_+ \times D$ is compact, $\varphi_{LLT}(h, \xi)$ restricted to $[0, T - t_0] \times A_{\varepsilon}$ satisfies a Lipschitz condition with respect to $\xi$ (see, e.g., Lemma 2, p. 92, in [43]). Thus, for some constant $K > 0$,
\[
R_m \le K \| x(t_m) - y_{t_m} \|
\]
for $m = 1, \ldots, n$. Furthermore, by Theorem 4 there is a constant $C_0 > 0$ such that for all $h \ge 0$ and $1 \le m \le N$, $l_m \le C_0 h^{p+1}$. Thus, $e_{n+1} \le C_0 h^{p+1} + (1 + hK)\, e_n$, and hence
\[
e_{n+1} \le \left( 1 + (1 + hK) + \ldots + (1 + hK)^{n-1} \right) C_0 h^{p+1} \le \frac{\exp(nhK) - 1}{K}\, C_0 h^{p} \le C h^{p}, \tag{28}
\]
where $C = [\exp((T - t_0) K) - 1]\, C_0 / K$. Therefore, in order to guarantee $y_{t_n} \in A_{\varepsilon}$ for all $n = 0, \ldots, N-1$, and so that the LLT discretization is well-defined, it is sufficient that $0 < h \le h_0$, where $h_0$ is chosen in such a way that $C h_0^{p} \le \varepsilon$. This also ensures that inequality (28) holds.

5 Dynamics near stationary points

Our main purpose in what follows is to study the dynamical behavior of the LLT method near stationary points. The LLT map $\Psi_h(y) = y + h\, \varphi_{LLT}(h, y)$ inherits the regularity and A-stability of the LL discretization [13] because the former reduces to the latter in the case of linear systems. For general nonlinear systems the next theorem shows that the LLT discretization has no spurious fixed points for step sizes sufficiently small.

Theorem 6 Suppose hypothesis (H1) of Section 4 holds. If the step size is small enough, then a fixed point of the LLT map is an equilibrium point of the continuous-time system, and vice versa.

Proof. The chain rule of differential calculus implies that the coefficients $c_j(y)$ in the definition of the LLTp discretization (18)-(19) are linear functions of $f(y)$, say $c_j(y) = M_j(y) f(y)$ for some matrices $M_j(y)$. Thus, using (21) the LLTp map can be written as $\Psi_h(y) = y + h\, B(h, y)$, where
\[
B(h, y) = \left( \varphi_1(h f'(y)) + \sum_{j=2}^{p-1} h^{j}\, \varphi_{j+1}(h f'(y))\, M_j(y) \right) f(y)
\]
and the $\varphi_j$ are defined by (22). Since the $\varphi_j$ are continuous functions and $\varphi_j(0) = I/j!$, $B(h, y)$ is a continuous function of $h$ and $B(0, y) = I$. Therefore, there exists a vicinity of $h = 0$ such that $B(h, y)$ is an invertible matrix. Thus, for $h > 0$ small enough, $\Psi_h(y) = y$ if and only if $f(y) = 0$.

We will study the capability of the LLT discretization for approximating the local stable and unstable manifolds at a stationary point following techniques introduced in [44]. For this, some notations are first defined below. Suppose the following conditions are fulfilled:

H3) $D \subset \mathbb{R}^d$ is an open set, $0 \in D$, $f \in C^{p+1}(D, \mathbb{R}^d)$.

H4) $f(0) = 0$, and all the eigenvalues of $f'(0)$ have non-null real parts.

Then $0 \in \mathbb{R}^d$ is a hyperbolic stationary point of the system $x' = f(x)$. Let $x(t, x_0)$ be the solution of this equation with $x(0, x_0) = x_0$ and maximal interval of existence $[0, T(x_0))$. Given $p \ge 2$ and $h > 0$, let $y(t_n, x_0) \in D$ be the LLTp discretization starting at $y(0, x_0) = x_0$ and maximal grid of existence $J^{h}(x_0) = \{ t_n = nh : n = 0, \ldots, N \}$, $N \in \mathbb{N} \cup \{ \infty \}$.

Let $X_s$, $X_u$ be the stable and unstable subspaces for the hyperbolic linear vector field $f'(0)$, such that $\mathbb{R}^d = X_s \oplus X_u$, $(x_s, x_u) = x \in \mathbb{R}^d$, $\| x \| = \max(\| x_s \|, \| x_u \|)$. For $\varepsilon > 0$ let $K_{\varepsilon} = \{ x \in \mathbb{R}^d : \| x \| \le \varepsilon \}$. The local stable and unstable manifolds at $0$ with respect to some $\varepsilon_0 \ge \varepsilon > 0$ are defined by
\[
M_s = \{ x_0 \in K_{\varepsilon} : x(t, x_0) \in K_{\varepsilon_0} \ \forall t \ge 0,\ x(t, x_0) \to 0 \text{ as } t \to \infty \},
\]
\[
M_u = \{ x_0 \in K_{\varepsilon} : x(t, x_0) \in K_{\varepsilon_0} \ \forall t \le 0,\ x(t, x_0) \to 0 \text{ as } t \to -\infty \}.
\]
Here, the solution $x(t, x_0)$ for $t < 0$ is defined via the time-reverse system $x' = \widetilde{f}(x) = -f(x)$. It is known that these manifolds can be represented as graphs
\[
M_s = \{ (x_s, p(x_s)) : x_s \in K_{\varepsilon,s} \}, \qquad M_u = \{ (q(x_u), x_u) : x_u \in K_{\varepsilon,u} \},
\]
where $K_{\varepsilon,s} = K_{\varepsilon} \cap X_s$, $K_{\varepsilon,u} = K_{\varepsilon} \cap X_u$, and the functions $p : K_{\varepsilon,s} \to K_{\varepsilon,u}$, $q : K_{\varepsilon,u} \to K_{\varepsilon,s}$ are as smooth as $f$. Likewise, the local stable and unstable manifolds at $0$ of the time discretization map are defined by
\[
M_s^{h} = \{ x_0 \in K_{\varepsilon} : y(t_n, x_0) \in K_{\varepsilon_0} \ \forall n \in \mathbb{N},\ y(t_n, x_0) \to 0 \text{ as } n \to \infty \},
\]
\[
M_u^{h} = \{ x_0 \in K_{\varepsilon} : y(t_n, x_0) \in K_{\varepsilon_0} \ \forall n \in -\mathbb{N},\ y(t_n, x_0) \to 0 \text{ as } n \to -\infty \}.
\]
The points $y(t_n, x_0)$ for $n \in -\mathbb{N}$ are defined via the time-reverse system corresponding to the LLT map (assuming $h$ small enough to ensure that this map is invertible).

The behavior of the LLT method near stationary points is summarized in the next theorem. It shows that the phase portrait of a dynamical system near a hyperbolic stationary point is reproduced correctly by the LLT method. In particular, the stable and unstable manifolds of the LLTp discretization converge to their continuous counterparts with an error of order $p$. This makes it possible to improve the approximation to the local manifolds provided by the standard LL method, which is of order only 2 [13].

Theorem 7 Suppose that conditions (H3) and (H4) hold. Then, there exist constants $C, \varepsilon, \varepsilon_0, h_0 > 0$ such that the local stable manifold $M_s^{h}$ and unstable manifold $M_u^{h}$ of the LLTp map at $0$ ($h \le h_0$) are of the form
\[
M_s^{h} = \{ (x_s, p^{h}(x_s)) : x_s \in K_{\varepsilon,s} \}, \qquad M_u^{h} = \{ (q^{h}(x_u), x_u) : x_u \in K_{\varepsilon,u} \},
\]
where $p^{h} = p + O(h^{p})$ uniformly in $K_{\varepsilon,s}$, and $q^{h} = q + O(h^{p})$ uniformly in $K_{\varepsilon,u}$. Moreover, for any $x_0 \in K_{\varepsilon}$ and $h \le h_0$, there exists $z_0 = z_0(x_0, h) \in K_{\varepsilon_0}$ satisfying
\[
\sup \{ \| x(t_n, x_0) - y(t_n, z_0) \| : x(t, x_0) \in K_{\varepsilon} \text{ for } t \in [t_0, t_n] \} \le C h^{p}.
\]
Correspondingly, for any $z_0 \in K_{\varepsilon}$ and $h \le h_0$, there exists $x_0 = x_0(z_0, h) \in K_{\varepsilon_0}$ such that
\[
\sup \{ \| x(t_n, x_0) - y(t_n, z_0) \| : y(t_m, z_0) \in K_{\varepsilon} \text{ for } m = 0, \ldots, n \} \le C h^{p}.
\]
The proof of this theorem will be based on the following lemma.

Lemma 8 Assume that (H3) holds. Then, for some $\delta > 0$ the map $\varphi_{LLT} : \mathbb{R}_+ \times D \to \mathbb{R}^d$ of the LLTp discretization (18) satisfies:

a) $\varphi_{LLT}(h, \cdot) \to f(\cdot)$ and $\frac{\partial}{\partial \xi} \varphi_{LLT}(h, \cdot) \to f'(\cdot)$ as $h \to 0$, uniformly in $K_{\delta}$.

b) $\left\| \dfrac{x(h, \xi) - \xi}{h} - \varphi_{LLT}(h, \xi) \right\| = O(h^{p})$ uniformly in $\xi \in K_{\delta}$.

Proof. Part (a) follows from Lemma 3, $\varphi_{LLT}(0, \xi) = f(\xi)$ and $\frac{\partial}{\partial \xi} \varphi_{LLT}(0, \xi) = f'(\xi)$. Under (H3), the standard existence-uniqueness theorem ensures that for some $T > 0$ there exists the solution $x(t, 0)$ defined on $t \in [0, T]$. Theorem 1, p. 80, in [43] states that there are constants $\delta > 0$, $T > 0$, such that for any $\xi \in K_{\delta}$ there exists a (unique) solution $x(t, \xi)$ of (1) on $[0, T]$ with $x(0, \xi) = \xi$, and furthermore $x(\cdot, \cdot) \in C^{p+1}([0, T] \times K_{\delta})$. Let $\mathcal{X}$ be the compact set $\{ x(t, \xi) : t \in [0, T], \xi \in K_{\delta} \} \subset D$. Given $\xi \in K_{\delta}$, $h \in [0, T]$, for $0 \le s \le h$ denote
\[
v(s, \xi) = \xi + s\, \varphi_{LLT}(s, \xi).
\]
As in the proof of Theorem 4 it can be shown that, for $r = 0, \ldots, p$,
\[
\frac{\partial^{r}}{\partial s^{r}} v(0, \xi) = \frac{\partial^{r}}{\partial t^{r}} x(0, \xi).
\]
For $0 \le r \le p+1$, $\frac{\partial^{r}}{\partial s^{r}} \varphi_{LLT}(\cdot, \cdot) \in C(\mathbb{R}_+ \times D)$ (Lemma 3), and so $\frac{\partial^{p+1}}{\partial s^{p+1}} v(\cdot, \cdot)$ is bounded on the compact set $[0, h] \times \mathcal{X}$. Also $\frac{\partial^{p+1}}{\partial t^{p+1}} x(\cdot, \cdot)$ is bounded on $[0, T] \times K_{\delta}$. Thus, the application of the Taylor formula leads to
\[
\| x(h, \xi) - v(h, \xi) \| \le C h^{p+1}
\]
for some constant $C > 0$ that does not depend on $(h, \xi) \in [0, T] \times K_{\delta}$. This completes the proof of (b).

Proof of Theorem 7. Parts (a) and (b) of Lemma 8 and the fact that the LLT method retains the stationary points of the continuous system are, respectively, the hypotheses (3.1), (3.2) and (3.3) of Theorem 3.1 in [44], from which the present theorem directly follows.

6 Numerical examples

In this section the performance of the LLT3 method (order $p = 3$) is illustrated by some computer simulations. Following the algorithm described in Section 3.2, the implementation of the LLT3 discretization reduces to computing $y_{t_{n+1}}$ by (20), taking in this equation the vector $g_n \in \mathbb{R}^d$ from
\[
\begin{pmatrix} B_n & g_n \\ C_n & v_n \end{pmatrix} = \exp(h_n A_n),
\]
where the matrix $A_n \in \mathbb{R}^{(d+3) \times (d+3)}$ is
\[
A_n = \begin{pmatrix}
L_n & c_2 & 0_{d \times 1} & f(y_{t_n}) \\
0_{1 \times d} & 0 & 1 & 0 \\
0_{1 \times d} & 0 & 0 & 1 \\
0_{1 \times d} & 0 & 0 & 0
\end{pmatrix}.
\]
Using the rules of matrix differential calculus [45] it can be shown that
\[
c_2 = \bigl( I_{d \times d} \otimes f^{T}(y_n) \bigr)\, f''(y_n)\, f(y_n),
\]
where $f''$ denotes the Hessian matrix of $f$. The diagonal Padé approximation with the scaling and squaring strategy was used to compute $\exp(h_n A_n)$ (see Algorithm 11.3.1 in [21] for more details).
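For reference, a minimal Python sketch of this LLT3 step is given below (the names c2_fd and llt3_step are illustrative, not from the paper). It relies on the fact that the Kronecker-product expression above evaluates, in coordinates, to the second directional derivative of $f$ along $f(y)$, i.e., $c_2(y) = f''(y)[f(y), f(y)]$, which is approximated here by a central finite difference; scipy.linalg.expm, which also uses a scaling-and-squaring Padé algorithm, stands in for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def c2_fd(f, y, eps=1e-5):
    """c_2(y) = f''(y)[f(y), f(y)], approximated by a central finite difference.
    eps is an illustrative choice; an analytic c_2 is preferable when available."""
    v = f(y)
    return (f(y + eps * v) - 2.0 * f(y) + f(y - eps * v)) / eps**2

def llt3_step(y, h, f, jac):
    """One LLT3 step (p = 3): build A_n as displayed above and take g_n from exp(h*A_n)."""
    d = y.size
    A = np.zeros((d + 3, d + 3))
    A[:d, :d] = jac(y)          # L_n
    A[:d, d] = c2_fd(f, y)      # c_2;  the column after it stays zero
    A[:d, d + 2] = f(y)         # f(y_{t_n})
    A[d, d + 1] = 1.0
    A[d + 1, d + 2] = 1.0
    gn = expm(h * A)[:d, -1]
    return y + gn
```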

The following example is taken from [44] in order to illustrate not only convergence issues of the LLT discretization but also its dynamics around hyperbolic stationary points.

Example 9
\[
x_1' = -2x_1 + x_2 + 1 - \mu f(x_1, \lambda),
\]
\[
x_2' = x_1 - 2x_2 + 1 - \mu f(x_2, \lambda),
\]
where $f(x, \lambda) = x\, (1 + x + \lambda x^2)^{-1}$. For $\mu = 15$, $\lambda = 57$, this system has two stable stationary points and one unstable stationary point in the region $0 \le x_1, x_2 \le 1$. There is a nontrivial stable manifold for the unstable point which separates the basins of attraction of the two stable points (see [44]).

Figure 1 shows the performance of the LLT3 scheme in this example. For reference, Figure 1(a) presents the phase portrait obtained by the LLT scheme with a very small step size ($h = 2^{-13}$), which can be regarded as the exact solution of the underlying system for visualization purposes. For comparison, Figures 1(b)-(c)-(d) show the phase portraits obtained by a third order explicit Runge-Kutta (RK3), the LL and the LLT3 methods with step size $h = 2^{-2}$. It is observed in Figure 1 that the RK3 discretization fails to reproduce correctly the phase portrait of the underlying system near one of the point attractors. On the contrary, the exact phase portrait is adequately approximated near both point attractors by the LL and LLT3 methods, the latter showing better accuracy. This is in accordance with Theorem 7, which states that the LL and LLT3 integrators approximate correctly the local manifolds with errors $O(h^2)$ and $O(h^3)$, respectively. Furthermore, notice that the RK3 and LL discretizations do not approximate adequately the basins of attraction in the region shown in Figure 1. For instance, RK3 trajectories starting near $(0, 0.5)$ and LL trajectories starting near $(0, 0.6)$ go toward the wrong point attractors in comparison with the exact trajectories. In contrast, the attracting sets are much better represented by the phase portrait of the LLT3 method. This demonstrates that the larger accuracy of the LLTp method in comparison with the LL method can in practice lead to an appreciable improvement in dynamical performance even for moderate values of the order $p$.

For illustrating the order of convergence, the LLT3 discretization $y(t_n)$ of this example was computed for different step sizes $h_i = 2^{-(i+4)}$, $i = 1, \ldots, 6$, in the interval $[t_0, t_N] = [0, 10]$ with initial condition $(0.2, 0.1)$. The errors
\[
e(h_i) = \max_{t_0 \le t_n \le t_N} \| x(t_n) - y(t_n) \|
\]
were used for numerically estimating the global order of convergence of the LLT3 discretization. The estimated order is the slope of the straight line fitted to the set of points $(\log_2(h_i), \log_2(e(h_i)))$, $i = 1, \ldots, 6$. Fig. 2 shows these points and the straight line fitted to them. As expected according to the results in Section 4, the fitted line has a slope close to $p = 3$.
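A sketch of this order-estimation experiment is given below, reusing the llt3_step function from the earlier sketch; the field and Jacobian of Example 9 are written out explicitly, the reference trajectory is produced with the very small step $h = 2^{-13}$ as described above, and all names are illustrative rather than code from the paper.

```python
import numpy as np

mu, lam = 15.0, 57.0
g  = lambda x: x / (1.0 + x + lam * x**2)                      # scalar nonlinearity f(x, lambda)
dg = lambda x: (1.0 - lam * x**2) / (1.0 + x + lam * x**2)**2  # its derivative

def field(x):
    return np.array([-2*x[0] + x[1] + 1 - mu*g(x[0]),
                      x[0] - 2*x[1] + 1 - mu*g(x[1])])

def jac(x):
    return np.array([[-2 - mu*dg(x[0]), 1.0],
                     [1.0, -2 - mu*dg(x[1])]])

def llt3_path(y0, h, T):
    y, path = np.array(y0, float), [np.array(y0, float)]
    for _ in range(int(round(T / h))):
        y = llt3_step(y, h, field, jac)      # llt3_step: see the earlier sketch
        path.append(y)
    return np.array(path)

T, y0, h_ref = 10.0, (0.2, 0.1), 2.0**-13
ref = llt3_path(y0, h_ref, T)                # reference ("exact") trajectory

hs, errs = [], []
for i in range(1, 7):
    h = 2.0**-(i + 4)
    y = llt3_path(y0, h, T)
    stride = int(round(h / h_ref))           # reference points at the same times
    errs.append(np.max(np.linalg.norm(ref[::stride][:len(y)] - y, axis=1)))
    hs.append(h)

slope = np.polyfit(np.log2(hs), np.log2(errs), 1)[0]
print(slope)                                 # should be close to p = 3
```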

The next example illustrates the performance of the LLT method in a well-known stiff system that is frequently used to test the dynamical behavior of numerical integrators, namely the Van der Pol equation (see, e.g., [39], [46]).

Example 10
\[
x_1' = x_2,
\]
\[
x_2' = E \bigl( (1 - x_1^2)\, x_2 - x_1 \bigr),
\]
where $E = 10^3$. Figure 3 shows the approximations obtained for this example by a fourth order Runge-Kutta (RK4), the LL and the LLT3 methods for several step sizes, starting at $(2, 0)$. For large step sizes the Runge-Kutta discretization is explosive due to the lack of stability of the numerical approximation. The LL method does not explode but shows poor accuracy. Note in particular that the LL trajectory has a range quite outside the range of the exact solution. In contrast, even for the larger step sizes the LLT3 method achieves a more precise approximation, reconstructing the underlying dynamics associated with the limit cycle.
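For completeness, the field and Jacobian of Example 10 can be plugged into the same llt3_step sketch; the step sizes below are those quoted in the legend of Figure 3, while the integration horizon is an illustrative choice (and for a system this stiff an analytically computed c_2 would be preferable to the finite-difference one used in the sketch).

```python
import numpy as np

E = 1.0e3

def field_vdp(x):
    return np.array([x[1], E * ((1.0 - x[0]**2) * x[1] - x[0])])

def jac_vdp(x):
    return np.array([[0.0, 1.0],
                     [E * (-2.0 * x[0] * x[1] - 1.0), E * (1.0 - x[0]**2)]])

y = np.array([2.0, 0.0])
h = 0.001                                    # also h = 0.000977 and h = 0.000244 in Figure 3
for _ in range(int(20.0 / h)):               # illustrative horizon
    y = llt3_step(y, h, field_vdp, jac_vdp)  # llt3_step: see the earlier sketch
```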

References

[1] Liniger, W. and Willoughby, R. A. Efficient integration methods for stiff systems of ordinary differential equations, SIAM J. Numer. Anal. 7 (1970) 47-66.
[2] Lawson, J. D. Generalized Runge-Kutta processes for stable systems with large Lipschitz constants, SIAM J. Numer. Anal. 4 (1967) 372-380.
[3] Hochbruck, M. and Lubich, C. On Krylov subspace approximations to the matrix exponential operator, SIAM J. Numer. Anal. 34 (1997) 1911-1925.
[4] Hochbruck, M., Lubich, C. and Selhofer, H. Exponential integrators for large systems of differential equations, SIAM J. Sci. Comput. 19 (1998) 1552-1574.
[5] Cox, S. M. and Matthews, P. C. Exponential time differencing for stiff systems, Journal of Computational Physics 176 (2002) 430-455.
[6] Kassam, A.-K. and Trefethen, L. N. Fourth-order time stepping for stiff PDEs, SIAM J. Sci. Comput. 26 (2005) 1214-1233.
[7] Iserles, A., Marthinsen, A. and Nørsett, S. P. On the implementation of the method of Magnus series for linear differential equations, BIT 39 (1999) 281-304.
[8] Blanes, S., Casas, F. and Ros, J. Improved high order integrators based on the Magnus expansion, BIT 40 (2000) 434-450.
[9] Zanna, A. Collocation and relaxed collocation for the Fer and the Magnus expansions, SIAM J. Numer. Anal. 36 (1999) 1145-1182.
[10] Iserles, A. Solving linear ordinary differential equations by exponentials of iterated commutators, Numer. Math. 45 (1984) 183-199.
[11] Ozaki, T. A bridge between nonlinear time series models and nonlinear stochastic dynamical systems: a local linearization approach, Statistica Sinica 2 (1992) 113-135.
[12] Ramos, J. I. and Garcia-Lopez, C. M. Piecewise-linearized methods for initial-value problems, Appl. Math. Comput. 82 (1997) 273-302.
[13] Jimenez, J. C., Biscay, R. J., Mora, C. M. and Rodriguez, L. M. Dynamic properties of the Local Linearization method for initial-value problems, Applied Mathematics and Computation 126 (2002) 63-81.
[14] Nørsett, S. P. An A-stable modification of the Adams-Bashforth method, Conference on the Numerical Solution of Differential Equations, Dundee, Scotland, Springer: Berlin (1969) 214-219.
[15] Jain, R. K. Some A-stable methods for stiff ordinary differential equations, Mathematics of Computation 26 (1972) 71-77.
[16] Pavlov, B. V. and Rodionova, O. E. The method of local linearization in the numerical solution of stiff systems of ordinary differential equations, U.S.S.R. Comput. Maths. Math. Phys. 27 (1987) 30-38.
[17] Iserles, A. Quadrature methods for stiff ordinary differential systems, Mathematics of Computation 36 (1981) 171-182.
[18] Friesner, R. A., Tuckerman, L. S., Dornblaser, B. C. and Russo, T. V. A method for exponential propagation of large systems of stiff nonlinear differential equations, Journal of Scientific Computing 4 (1989) 327-354.
[19] Hochbruck, M. and Ostermann, A. Explicit exponential Runge-Kutta methods for semilinear parabolic problems, Preprint (2004).
[20] Hochbruck, M. and Ostermann, A. Exponential Runge-Kutta methods for parabolic problems, Applied Numerical Mathematics 53 (2005) 323-339.
[21] Golub, G. H. and Van Loan, C. F. Matrix Computations, 2nd ed., The Johns Hopkins University Press, Baltimore and London, 1999.
[22] Sidje, R. B. EXPOKIT: a software package for computing matrix exponentials, ACM Trans. Math. Software 24 (1998) 130-156.
[23] Dieci, L. and Papini, A. Padé approximations for the exponential of a block triangular matrix, Linear Algebra Appl. 308 (2000) 103-202.
[24] Celledoni, E. and Iserles, A. Methods for the approximation of matrix exponentials in a Lie-algebraic setting, IMA J. Numer. Anal. 21 (2001) 463-488.
[25] Higham, N. J. The scaling and squaring method for the matrix exponential revisited, Numerical Analysis Report 452, Manchester Centre for Computational Mathematics, 2004.
[26] Van Loan, C. F. Computing integrals involving the matrix exponential, IEEE Trans. Autom. Control AC-23 (1978) 395-404.
[27] Carbonell, F., Jimenez, J. C. and Pedroso, L. M. Computing multiple integrals involving matrix exponentials, Technical Report 2005-36, Instituto de Cibernetica, Matematica y Fisica, La Habana, 2005.
[28] Carroll, J. A matricial exponentially fitted scheme for the numerical solution of stiff initial-value problems, Computers Math. Applic. 26 (1993) 57-64.
[29] Bower, J. M. and Beeman, D. The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System, Springer-Verlag, 1995.
[30] Biscay, R. J., Jimenez, J. C., Riera, J. and Valdes, P. Local linearization method for the numerical solution of stochastic differential equations, Annals of the Institute of Statistical Mathematics 48 (1996) 631-644.
[31] Jimenez, J. C. and Biscay, R. J. Approximation of continuous time stochastic processes by the Local Linearization method revisited, Stochast. Anal. Appl. 20 (2002) 105-121.
[32] Carbonell, F., Jimenez, J. C., Biscay, R. J. and de la Cruz, H. The local linearization method for the numerical integration of random differential equations, BIT Numerical Mathematics 45 (2005) 1-14.
[33] Jimenez, J. C., Pedroso, L. M., Carbonell, F. and Hernandez, V. Local linearization method for numerical integration of delay differential equations, Technical Report 2003-241, Instituto de Cibernetica, Matematica y Fisica, La Habana, 2003.
[34] de la Cruz, H., Biscay, R. J., Carbonell, F. M., Jimenez, J. C. and Ozaki, T. Local Linearization-Runge Kutta (LLRK) methods for solving ordinary differential equations, Lecture Notes in Computer Science 3991, Springer-Verlag (2006) 132-139.
[35] Kloeden, P. E. and Platen, E. Numerical Solution of Stochastic Differential Equations, 2nd ed., Springer-Verlag, Berlin, 1995.
[36] Butcher, J. C. The Numerical Analysis of Ordinary Differential Equations: Runge-Kutta and General Linear Methods, John Wiley & Sons: Chichester, 1987.
[37] Hairer, E., Nørsett, S. P. and Wanner, G. Solving Ordinary Differential Equations I, 2nd ed., Springer-Verlag: Berlin, 1993.
[38] Chang, Y. F. and Corliss, G. ATOMFT: Solving ODEs and DAEs using Taylor series, Computers Math. Applic. 28 (1994) 209-233.
[39] Hairer, E. and Wanner, G. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 3rd ed., Springer-Verlag: Berlin, 1996.
[40] Jimenez, J. C. A simple algebraic expression to evaluate the Local Linearization schemes for stochastic differential equations, Applied Mathematics Letters 15 (2002) 775-780.
[41] Jimenez, J. C. and Carbonell, F. Rate of convergence of local linearization schemes for initial-value problems, Applied Mathematics and Computation 171 (2005) 1282-1295.
[42] Lakshmikantham, V., Leela, S. and Martynyuk, A. A. Stability Analysis of Nonlinear Systems, Marcel Dekker, Inc.: New York, 1989.
[43] Perko, L. Differential Equations and Dynamical Systems, Springer: New York, 2001.
[44] Beyn, W.-J. On the numerical approximation of phase portraits near stationary points, SIAM J. Numer. Anal. 24 (1987) 1095-1113.
[45] Magnus, J. R. and Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley, New York, 1988.
[46] Enright, W. H., Hull, T. E. and Lindberg, B. Comparing numerical methods for stiff systems of ODEs, BIT 15 (1975) 10-48.

LEGENDS OF FIGURES

Figure 1. (Example 9). a) Phase portrait obtained by the LLT3 scheme with a very small step size, $h = 2^{-13}$ (which can be thought of as the exact solution for visualization purposes). b), c), d) Phase portraits obtained, respectively, by a third order Runge-Kutta (RK3), the LL and the LLT3 methods with step size $h = 2^{-2}$ (continuous line). For reference, the exact trajectories are also shown in each case as dashed lines.

Figure 2. The points represent (in logarithmic scale) the error of the LLT3 method in Example 9 versus step size, i.e., $(\log_2(h_i), \log_2(e(h_i)))$ for $h_i = 2^{-(i+4)}$, $i = 1, \ldots, 6$. The straight line fitted to these points is also shown; its slope (3.01) is close to the order of convergence $p = 3$.

Figure 3. Time discretizations obtained for Example 10 by a fourth order Runge-Kutta (RK4), the LL and the LLT3 methods for step sizes a) $h_1 = 0.001000$, b) $h_2 = 0.000977$, and c) $h_3 = 0.000244$.

[Figure 1 here: phase portraits of the True, RK3, LL and LLT3 solutions; see the legend above.]

[Figure 2 here: $\log_2 e(h)$ versus $\log_2 h$ with the fitted straight line; see the legend above.]

[Figure 3 here: RK4, LL and LLT3 discretizations for the step sizes a), b), c); see the legend above.]
