Linear multistep method

Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used.

Definitions

Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form

 y' = f(t,y), \quad y(t_0) = y_0.

The result is approximations for the value of y(t) at discrete times t_i:

 t_i = t_0 + ih, \qquad y_i \approx y(t_i), \qquad f_i = f(t_i, y_i),

where h is the time step (sometimes referred to as Δt).

A linear multistep method uses a linear combination of the y_i and f_i to calculate the value of y at the desired current step.

A multistep method uses the previous s steps to calculate the next value. Consequently, the desired value at the current processing stage is y_{n+s}.

A linear multistep method is a method of the form

 \begin{align}
& y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align}

where h denotes the step size and f the right-hand side of the differential equation. The coefficients  a_0, \ldots, a_{s-1} and  b_0, \ldots, b_s determine the method. The designer of the method chooses the coefficients; often, many coefficients are zero. Typically, the coefficients are chosen so that the method is exact whenever the solution y(t) is a polynomial up to a certain degree, which then determines the order of the method.
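
To make the general form concrete, here is a minimal Python sketch (the function name and argument layout are illustrative, not from any particular library) of one step of an explicit method, i.e. one with b_s = 0 as defined below:

    def explicit_lmm_step(f, ts, ys, a, b, h):
        """Advance an explicit linear multistep method by one step.

        ts, ys : the s most recent times t_n, ..., t_{n+s-1} and values
        a      : coefficients a_0, ..., a_{s-1}
        b      : coefficients b_0, ..., b_{s-1} (b_s = 0, so explicit)
        Returns y_{n+s} = -sum_k a_k y_{n+k} + h sum_k b_k f(t_{n+k}, y_{n+k}).
        """
        s = len(a)
        return (-sum(a[k] * ys[k] for k in range(s))
                + h * sum(b[k] * f(ts[k], ys[k]) for k in range(s)))

    # Two-step Adams-Bashforth (see below): a = (0, -1), b = (-1/2, 3/2)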

If b_s = 0, then the method is called "explicit", since the formula can compute y_{n+s} directly.

If  b_s \ne 0 , then the method is called "implicit", since the value of y_{n+s} depends on the value of f(t_{n+s}, y_{n+s}), and the equation must be solved for y_{n+s}. Iterative methods such as Newton's method are often used to solve the implicit formula.
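
For instance, when h is small enough the implicit equation can be solved by simple fixed-point iteration. A minimal Python sketch for the one-step implicit case y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}) (the backward Euler method; the function name and tolerances are illustrative):

    def implicit_step_fixed_point(f, t_next, y_prev, h, tol=1e-12, max_iter=100):
        # Solve y = y_prev + h*f(t_next, y) by fixed-point iteration;
        # this converges when h is small enough that the map is a contraction.
        y = y_prev + h * f(t_next, y_prev)   # explicit Euler as initial guess
        for _ in range(max_iter):
            y_new = y_prev + h * f(t_next, y)
            if abs(y_new - y) < tol:
                break
            y = y_new
        return y_new

    # Example: y' = y, y(0) = 1; one step of size 0.5 gives 1/(1 - 0.5) = 2
    print(implicit_step_fixed_point(lambda t, y: y, 0.5, 1.0, 0.5))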

Sometimes an explicit multistep method is used to "predict" the value of y_{n+s}. That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method.
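
A minimal sketch of such a pair in Python, with the two-step Adams–Bashforth method as predictor and the trapezoidal rule (the one-step Adams–Moulton method) applied once as corrector, a so-called PECE scheme (the function name is illustrative):

    def pece_step(f, t, y, f_prev, h):
        # Predict with the explicit two-step Adams-Bashforth formula
        f_curr = f(t, y)
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)
        # Evaluate at the prediction, then correct with the trapezoidal
        # formula, avoiding an exact solve of the implicit equation
        y_corr = y + 0.5 * h * (f_curr + f(t + h, y_pred))
        return y_corr, f_curr   # f_curr is reused as f_prev on the next step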

Examples

As an example, consider the problem

 y' = y, \quad y(0) = 1.

The exact solution is y(t) = \mathrm{e}^t.

One-Step Euler

A simple numerical method is Euler's method:

 y_{n+1} = y_n + hf(t_n, y_n). \,

Euler's method can be viewed as an explicit multistep method for the degenerate case of one step.

This method, applied with step size  h = \tfrac12 on the problem y' = y, gives the following results:

 \begin{align}
  y_1 &= y_0 + hf(t_0, y_0) = 1 + \tfrac12\cdot1 = 1.5, \\
  y_2 &= y_1 + hf(t_1, y_1) = 1.5 + \tfrac12\cdot1.5 = 2.25, \\
  y_3 &= y_2 + hf(t_2, y_2) = 2.25 + \tfrac12\cdot2.25 = 3.375, \\
  y_4 &= y_3 + hf(t_3, y_3) = 3.375 + \tfrac12\cdot3.375 = 5.0625.
\end{align}
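
These four steps can be reproduced by a direct transcription of the recursion in Python:

    f = lambda t, y: y       # right-hand side of y' = y
    h, t, y = 0.5, 0.0, 1.0
    for n in range(4):
        y = y + h * f(t, y)  # Euler step
        t = t + h
        print(y)             # 1.5, 2.25, 3.375, 5.0625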

Two-Step Adams–Bashforth

Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method

 y_{n+2} = y_{n+1} + \tfrac32 hf(t_{n+1},y_{n+1}) - \tfrac12 hf(t_n,y_n).

This method needs two values, y_{n+1} and y_n, to compute the next value, y_{n+2}. However, the initial value problem provides only one value, y_0 = 1. One possibility to resolve this issue is to use the value y_1 = 1.5 computed by Euler's method above as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):

 \begin{align}
  y_2 &= y_1 + \tfrac32 hf(t_1, y_1) - \tfrac12 hf(t_0, y_0) = 1.5 + \tfrac32\cdot\tfrac12\cdot1.5 - \tfrac12\cdot\tfrac12\cdot1 = 2.375, \\
  y_3 &= y_2 + \tfrac32 hf(t_2, y_2) - \tfrac12 hf(t_1, y_1) = 2.375 + \tfrac32\cdot\tfrac12\cdot2.375 - \tfrac12\cdot\tfrac12\cdot1.5 = 3.7812, \\
  y_4 &= y_3 + \tfrac32 hf(t_3, y_3) - \tfrac12 hf(t_2, y_2) = 3.7812 + \tfrac32\cdot\tfrac12\cdot3.7812 - \tfrac12\cdot\tfrac12\cdot2.375 = 6.0234.
\end{align}

The exact solution at t = t4 = 2 is  \mathrm{e}^2 = 7.3891\ldots , so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough.
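
The computation can be checked with a short Python transcription, bootstrapping y_1 with an Euler step as described above:

    from math import exp

    f = lambda t, y: y                        # right-hand side of y' = y
    h = 0.5
    t = [0.0, 0.5]
    y = [1.0, 1.0 + h * f(0.0, 1.0)]          # y_1 = 1.5 from Euler's method
    for n in range(3):                        # compute y_2, y_3, y_4
        y.append(y[-1] + h * (1.5 * f(t[-1], y[-1]) - 0.5 * f(t[-2], y[-2])))
        t.append(t[-1] + h)
    print(y[-1], exp(2.0))                    # 6.0234375 vs 7.38905...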

Multistep Method Families

Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs).

Adams–Bashforth methods

The Adams–Bashforth methods are explicit methods. The coefficients are a_{s-1} = -1 and a_{s-2} = \cdots = a_0 = 0, while the b_j are chosen such that the method has order s (this determines the method uniquely).

The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1; Butcher 2003, p. 103):

  •  y_{n+1} = y_n + hf(t_n, y_n) \, —this is simply the Euler method;
  •  y_{n+2} = y_{n+1} + h\left( \tfrac32 f(t_{n+1}, y_{n+1}) - \tfrac12 f(t_n, y_n)\right);
  •  y_{n+3} = y_{n+2} + h\left( \tfrac{23}{12} f(t_{n+2}, y_{n+2}) - \tfrac43 f(t_{n+1}, y_{n+1}) + \tfrac{5}{12}f(t_n, y_n)\right);
  •  y_{n+4} = y_{n+3} + h\left( \tfrac{55}{24} f(t_{n+3}, y_{n+3}) - \tfrac{59}{24} f(t_{n+2}, y_{n+2}) + \tfrac{37}{24} f(t_{n+1}, y_{n+1}) - \tfrac38 f(t_n, y_n) \right);
  •  \begin{align} y_{n+5} = y_{n+4} + h\bigl( &\tfrac{1901}{720} f(t_{n+4}, y_{n+4}) - \tfrac{1387}{360} f(t_{n+3}, y_{n+3}) \\
&+ \tfrac{109}{30} f(t_{n+2}, y_{n+2}) - \tfrac{637}{360} f(t_{n+1}, y_{n+1}) + \tfrac{251}{720} f(t_n, y_n) \bigr). \end{align}

The coefficients bj can be determined as follows. Use polynomial interpolation to find the polynomial p of degree s − 1 such that

 p(t_{n+i}) = f(t_{n+i}, y_{n+i}), \qquad \text{for } i=0,\ldots,s-1.

The Lagrange formula for polynomial interpolation yields

 p(t) = \sum_{j=0}^{s-1} \frac{(-1)^{s-j-1}f(t_{n+j}, y_{n+j})}{j!(s-j-1)!h^{s-1}} \prod_{i=0 \atop i\ne j}^{s-1} (t-t_{n+i}).

The polynomial p is locally a good approximation of the right-hand side of the differential equation y' = f(t,y) that is to be solved, so consider the equation y' = p(t) instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking

 y_{n+s} = y_{n+s-1} + \int_{t_{n+s-1}}^{t_{n+s}} p(t)\,dt.

The Adams–Bashforth method arises when the formula for p is substituted. The coefficients bj turn out to be given by

 b_{s-j-1} = \frac{(-1)^j}{j!(s-j-1)!} \int_0^1 \prod_{i=0 \atop i\ne j}^{s-1} (u+i) \,du, \qquad \text{for } j=0,\ldots,s-1.

Replacing f(t, y) by its interpolant p incurs an error of order h^s, and it follows that the s-step Adams–Bashforth method indeed has order s (Iserles 1996, §2.1).
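
The integral formula for the coefficients can be checked symbolically. The following sketch, assuming SymPy is available, recovers the coefficients of the methods listed above:

    import sympy as sp

    u = sp.symbols('u')

    def adams_bashforth_coeffs(s):
        # b_{s-j-1} = (-1)^j/(j!(s-j-1)!) * integral_0^1 prod_{i != j} (u+i) du
        b = [None] * s
        for j in range(s):
            poly = sp.prod([u + i for i in range(s) if i != j])
            b[s - j - 1] = (sp.Integer(-1)**j * sp.integrate(poly, (u, 0, 1))
                            / (sp.factorial(j) * sp.factorial(s - j - 1)))
        return b   # coefficients b_0, ..., b_{s-1}

    print(adams_bashforth_coeffs(2))   # [-1/2, 3/2]
    print(adams_bashforth_coeffs(3))   # [5/12, -4/3, 23/12]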

The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action, a problem posed by Francis Bashforth. Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977).

Adams–Moulton methods

The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a_{s-1} = -1 and a_{s-2} = \cdots = a_0 = 0. Again the b coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that b_s = 0, an s-step Adams–Moulton method can reach order s + 1, while an s-step Adams–Bashforth method has only order s.

The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are (Hairer, Nørsett & Wanner 1993, §III.1; Quarteroni, Sacco & Saleri 2000):

  •  y_n = y_{n-1} + hf(t_n, y_n) \, (this is the backward Euler method);
  •  y_{n+1} = y_n + \tfrac12 h\left( f(t_{n+1}, y_{n+1}) + f(t_n, y_n)\right) (this is the trapezoidal rule);
  •  y_{n+2} = y_{n+1} + h\left( \tfrac{5}{12} f(t_{n+2}, y_{n+2}) + \tfrac23 f(t_{n+1}, y_{n+1}) - \tfrac{1}{12} f(t_n, y_n)\right);
  •  y_{n+3} = y_{n+2} + h\left( \tfrac38 f(t_{n+3}, y_{n+3}) + \tfrac{19}{24} f(t_{n+2}, y_{n+2}) - \tfrac{5}{24} f(t_{n+1}, y_{n+1}) + \tfrac{1}{24} f(t_n, y_n)\right);
  •  \begin{align} y_{n+4} = y_{n+3} + h\bigl( &\tfrac{251}{720} f(t_{n+4}, y_{n+4}) + \tfrac{646}{720} f(t_{n+3}, y_{n+3}) \\
&- \tfrac{264}{720} f(t_{n+2}, y_{n+2}) + \tfrac{106}{720} f(t_{n+1}, y_{n+1}) - \tfrac{19}{720} f(t_n, y_n) \bigr). \end{align}

The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth methods; however, the interpolating polynomial uses not only the points t_n, \ldots, t_{n+s-1}, as above, but also t_{n+s}. The coefficients are given by

 b_{s-j} = \frac{(-1)^j}{j!(s-j)!} \int_0^1 \prod_{i=0 \atop i\ne j}^{s} (u+i-1) \,du, \qquad \text{for } j=0,\ldots,s.
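
The same symbolic check as for the Adams–Bashforth coefficients applies here, again assuming SymPy is available:

    import sympy as sp

    u = sp.symbols('u')

    def adams_moulton_coeffs(s):
        # b_{s-j} = (-1)^j/(j!(s-j)!) * integral_0^1 prod_{i != j} (u+i-1) du
        b = [None] * (s + 1)
        for j in range(s + 1):
            poly = sp.prod([u + i - 1 for i in range(s + 1) if i != j])
            b[s - j] = (sp.Integer(-1)**j * sp.integrate(poly, (u, 0, 1))
                        / (sp.factorial(j) * sp.factorial(s - j)))
        return b   # coefficients b_0, ..., b_s

    print(adams_moulton_coeffs(1))   # [1/2, 1/2], the trapezoidal rule
    print(adams_moulton_coeffs(2))   # [-1/12, 2/3, 5/12]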

The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1).

Analysis

The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability.

The first question is whether the method is consistent: is the difference equation

 \begin{align}
& y_{n+s} + a_{s-1} y_{n+s-1} + a_{s-2} y_{n+s-2} + \cdots + a_0 y_n \\
& \qquad {} = h \bigl( b_s f(t_{n+s},y_{n+s}) + b_{s-1} f(t_{n+s-1},y_{n+s-1}) + \cdots + b_0 f(t_n,y_n) \bigr),
\end{align}

a good approximation of the differential equation y' = f(t,y)? More precisely, a multistep method is consistent if the local error goes to zero as the step size h goes to zero, where the local error is defined as the difference between the result y_{n+s} of the method, assuming that all the previous values y_{n+s-1}, \ldots, y_n are exact, and the exact solution of the equation at time t_{n+s}, divided by h. A computation using Taylor series shows that a linear multistep method is consistent if and only if

 \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad \sum_{k=0}^s b_k = s + \sum_{k=0}^{s-1} ka_k.

All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2).

If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order p if the local error defined above is of order O(h^p) as h goes to zero. This is equivalent to the following condition on the coefficients of the method:

 \sum_{k=0}^{s-1} a_k = -1 \quad\text{and}\quad q  \sum_{k=0}^s k^{q-1} b_k  = s^q + \sum_{k=0}^{s-1} k^q a_k \text{ for } q=1,\ldots,p.

The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has order s + 1 (Hairer, Nørsett & Wanner 1993, §III.2).
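
These conditions can be verified directly with exact rational arithmetic. As a sketch, for the three-step Adams–Bashforth method (a_2 = -1, b = (5/12, -4/3, 23/12, 0)):

    from fractions import Fraction as F

    s = 3
    a = [F(0), F(0), F(-1)]                    # a_0, a_1, a_2
    b = [F(5, 12), F(-4, 3), F(23, 12), F(0)]  # b_0, ..., b_3 (b_3 = 0)

    print(sum(a) == -1)   # consistency condition: True
    for q in range(1, 5):
        lhs = q * sum(k**(q - 1) * b[k] for k in range(s + 1))
        rhs = s**q + sum(k**q * a[k] for k in range(s))
        print(q, lhs == rhs)   # True for q = 1, 2, 3; False for q = 4
    # The conditions hold up to q = 3 but not q = 4, so the order is 3.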

These conditions are often formulated using the characteristic polynomials

 \rho(z) = z^s + \sum_{k=0}^{s-1} a_k z^k \quad\text{and}\quad \sigma(z) = \sum_{k=0}^s b_k z^k.

In terms of these polynomials, the above condition for the method to have order p becomes

 \rho(\mathrm{e}^h) - h\sigma(\mathrm{e}^h) = O(h^{p+1}) \quad \text{as } h\to 0.

In particular, the method is consistent if it has order at least one, which is the case if ρ(1) = 0 and ρ'(1) = σ(1).
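
This criterion is easy to check with a symbolic series expansion. For the two-step Adams–Bashforth method, ρ(z) = z^2 - z and σ(z) = (3/2)z - 1/2, and the following sketch (assuming SymPy is available) confirms order p = 2:

    import sympy as sp

    h = sp.symbols('h')
    z = sp.exp(h)
    rho = z**2 - z                                    # rho(z)
    sigma = sp.Rational(3, 2)*z - sp.Rational(1, 2)   # sigma(z)
    print(sp.series(rho - h*sigma, h, 0, 5))
    # 5*h**3/12 + 3*h**4/8 + O(h**5), i.e. O(h^{p+1}) with p = 2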

If the roots of the characteristic polynomial ρ all have modulus less than or equal to one and the roots of modulus one have multiplicity one, we say that the root condition is satisfied; a method satisfying it is called zero-stable. The method is convergent if and only if it is consistent and the root condition is satisfied. Consequently, for a consistent method, convergence is equivalent to zero-stability.

Furthermore, a stable method is said to be strongly stable if z = 1 is the only root of modulus one. If the method is stable and has more than one root of modulus one (all of them simple), it is said to be relatively stable. Note that 1 must be a root of ρ for a consistent method (since ρ(1) = 0); thus stable consistent methods are always one of these two.

Example

Consider the Adams–Bashforth three-step method

 y_{n+1} = y_n + h\left( \tfrac{23}{12} f(t_n, y_n) - \tfrac{16}{12} f(t_{n-1}, y_{n-1}) + \tfrac{5}{12} f(t_{n-2}, y_{n-2})\right).

The first characteristic polynomial is thus

 \rho(z) = z^3 - z^2 = z^2(z-1),

which has the simple root z = 1 and the double root z = 0, so the root condition is satisfied. As z = 1 is the only root of modulus one, the method is strongly stable.
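
The root condition can also be checked numerically, for example with NumPy (a small sketch, assuming NumPy is available):

    import numpy as np

    # rho(z) = z^3 - z^2 for the three-step Adams-Bashforth method
    roots = np.roots([1, -1, 0, 0])         # roots 1, 0, 0 (in some order)
    print(np.all(np.abs(roots) <= 1.0))     # True: root condition holds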

First and second Dahlquist barriers

The following two results, proved by Germund Dahlquist, give important bounds on the order of convergence and on the A-stability of a linear multistep method.

First Dahlquist barrier

A zero-stable linear q-step multistep method cannot attain an order of convergence greater than q + 1 if q is odd, or greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm III.3.5).

Second Dahlquist barrier

There are no explicit A-stable linear multistep methods. Implicit A-stable linear multistep methods have order of convergence at most 2 (Hairer & Wanner 1996, Thm V.1.4).

See also

  • Digital energy gain

References

  • Bashforth, Francis (1883), An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams, Cambridge.
  • Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, John Wiley, ISBN 978-0-471-96758-3.
  • Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag, ISBN 978-0-387-90277-7.
  • Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving Ordinary Differential Equations I: Nonstiff Problems (2nd ed.), Berlin: Springer-Verlag, ISBN 978-3-540-56670-0.
  • Hairer, Ernst; Wanner, Gerhard (1996), Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5.
  • Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.
  • Milne, W. E. (1926), "Numerical integration of ordinary differential equations", American Mathematical Monthly (Mathematical Association of America) 33 (9): 455–460, doi:10.2307/2299609, JSTOR 2299609.
  • Moulton, Forest R. (1926), New Methods in Exterior Ballistics, University of Chicago Press.
  • Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000), Matematica Numerica, Springer-Verlag, ISBN 978-8-847-00077-3.
