If the equation can be solved in closed form, we should do that. But what if we have an equation that cannot be solved in closed form? What if we want to find the value of the solution at some particular \(x\text{?}\) Or perhaps we want to produce a graph of the solution to inspect the behavior. In this section we will learn about the basics of numerical approximation of solutions.
The simplest method for approximating a solution is Euler’s method, named after the Swiss mathematician Leonhard Paul Euler (1707–1783); the correct pronunciation of the name sounds more like “oiler.” It works as follows: Take \(x_0\) and \(y_0\) and compute the slope \(k = f(x_0,y_0)\text{.}\) The slope is the change in \(y\) per unit change in \(x\text{.}\) Follow the line for an interval of length \(h\) on the \(x\)-axis. Hence if \(y = y_0\) at \(x_0\text{,}\) then we say that \(y_1\text{,}\) the approximate value of \(y\) at \(x_1 = x_0 + h\text{,}\) is \(y_1 = y_0 + h k\text{.}\) Rinse, repeat! Let \(k = f(x_1,y_1)\text{,}\) and then compute \(x_2 = x_1 + h\text{,}\) and \(y_2 = y_1 + h k\text{.}\) Now compute \(x_3\) and \(y_3\) using \(x_2\) and \(y_2\text{,}\) etc. Consider the equation \(y' = \nicefrac{y^2}{3}\text{,}\) \(y(0)=1\text{,}\) and \(h=1\text{.}\) Then \(x_0=0\) and \(y_0 = 1\text{.}\) We compute
\begin{align*}
x_1 &= x_0 + h = 0 + 1 = 1 , & y_1 &= y_0 + h f(x_0,y_0) = 1 + 1 \cdot \nicefrac{1^2}{3} = \nicefrac{4}{3} \approx 1.333 , \\
x_2 &= x_1 + h = 1 + 1 = 2 , & y_2 &= y_1 + h f(x_1,y_1) = \nicefrac{4}{3} + 1 \cdot \frac{{(\nicefrac{4}{3})}^2}{3} = \nicefrac{52}{27} \approx 1.926 .
\end{align*}
We then draw an approximate graph of the solution by connecting the points \((x_0,y_0)\text{,}\) \((x_1,y_1)\text{,}\) \((x_2,y_2)\text{,}\) \(\ldots\text{.}\) See Figure 1.16 for the first two steps of the method.
More abstractly, for any \(i=0,1,2,3,\ldots\text{,}\) we compute
\begin{align*}
x_{i+1} &= x_i + h , \\
y_{i+1} &= y_i + h\, f(x_i,y_i) .
\end{align*}
The line segments we get are an approximate graph of the solution. Generally it is not exactly the solution. See Figure 1.17 for the plot of the real solution and the approximation.
We continue with the equation \(y' = \nicefrac{y^2}{3}\text{,}\) \(y(0)=1\text{.}\) Let us try to approximate \(y(2)\) using Euler’s method. In Figures 1.16 and 1.17 we have graphically approximated \(y(2)\) with step size 1. With step size 1, we have \(y(2) \approx 1.926\text{.}\) The real answer is 3. We are approximately 1.074 off. Let us halve the step size. Computing \(y_4\) with \(h=0.5\text{,}\) we find that \(y(2) \approx 2.209\text{,}\) so an error of about 0.791. Table 1.1 gives the values computed for various parameters.
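To make the recipe concrete, here is a minimal Python sketch of Euler’s method (our own illustration; the function name `euler` and its signature are not from the text). It reproduces the two approximations of \(y(2)\) above.

```python
def euler(f, x0, y0, h, steps):
    """Euler's method for y' = f(x, y), y(x0) = y0: take `steps` steps of size h."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)  # follow the line of slope k = f(x, y) for a step of length h
        x += h
    return y

f = lambda x, y: y**2 / 3
print(euler(f, 0, 1, 1.0, 2))  # 1.9259...  (h = 1, two steps to reach x = 2)
print(euler(f, 0, 1, 0.5, 4))  # 2.2086...  (h = 0.5, four steps)
```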
Exercise 1.7.1.
Solve this equation exactly and show that \(y(2) = 3\text{.}\)
The difference between the actual solution and the approximate solution is called the error. We usually talk about the size of the error and we do not care much about its sign.
\begin{equation*}
\text{Error} = \bigl\lvert \text{Actual } y - \text{Approximate } y \bigr\rvert .
\end{equation*}
The trouble is, we do not know the real solution, so we cannot compute this error exactly. And if we knew the error exactly, we would know the actual solution ... so what would be the point of doing the approximation in the first place?
Table 1.1. Euler’s method approximation of \(y(2)\), where \(y'=\nicefrac{y^2}{3}\text{,}\) \(y(0)=1\text{.}\)

\(h\)         Approximate \(y(2)\)   Error     \(\frac{\text{Error}}{\text{Previous error}}\)
1             1.92593                1.07407
0.5           2.20861                0.79139   0.73681
0.25          2.47250                0.52751   0.66656
0.125         2.68034                0.31966   0.60599
0.0625        2.82040                0.17960   0.56184
0.03125       2.90412                0.09588   0.53385
0.015625      2.95035                0.04965   0.51779
0.0078125     2.97472                0.02528   0.50913
Note that except for the first few times, each time we halve \(h\text{,}\) the error approximately halves. Halving of the error is a general feature of Euler’s method, as it is a first order method. A simple improvement of the Euler method, see the exercises, produces a second order method. A second order method reduces the error to approximately one quarter every time we halve the interval. The order being “second” means the squaring in \(\nicefrac{1}{4} = \nicefrac{1}{2} \times \nicefrac{1}{2} = {(\nicefrac{1}{2})}^2\text{.}\)
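This behavior is easy to check numerically. The following short Python loop (our own sketch, not from the text) recomputes the error column of Table 1.1, using the exact value \(y(2)=3\) quoted above, and prints the ratio of successive errors, which settles near \(\nicefrac{1}{2}\text{.}\)

```python
# Recompute the error column of Table 1.1; the exact value is y(2) = 3.
f = lambda x, y: y**2 / 3

prev_err = None
h = 1.0
for _ in range(8):  # h = 1, 0.5, ..., 0.0078125
    x, y = 0.0, 1.0
    for _ in range(round(2 / h)):  # step from x = 0 to x = 2
        y += h * f(x, y)
        x += h
    err = abs(3 - y)
    print(h, err, None if prev_err is None else err / prev_err)
    prev_err = err
    h /= 2
```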
To get the error to be within 0.1 of the answer, we had to do 64 steps. To get it to within 0.01, we would have to halve another three or four times, meaning doing 512 to 1024 steps. The improved Euler method from the exercises should quarter the error every time we halve the interval, so we would have to do (approximately) half as many “halvings” to get the same error. This reduction can be a big deal. With 10 halvings (starting at \(h=1\)) we have 1024 steps, whereas with 5 halvings we only have to do 32 steps, assuming that the error was comparable to start with. A computer may not care about this difference for a problem this simple, but suppose each step would take a second to compute (the function may be substantially more difficult to compute than \(\nicefrac{y^2}{3}\)). Then the difference is 32 seconds versus about 17 minutes. We are not being altogether fair; a second order method would probably double the time to do each step. Even so, it is 1 minute versus 17 minutes. Next, suppose that we have to repeat such a calculation for different parameters a thousand times. You get the idea.
In practice, we do not know how large the error is! How do we know what the right step size is? Well, essentially, we keep halving the interval, and if we are lucky, we can estimate the error from a few of these calculations and the assumption that the error goes down by a factor of one half each time (if we are using standard Euler).
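As a rough illustration of this idea (our own sketch, not a procedure from the text): if the error of the finer approximation is about half that of the coarser one, then \(\text{exact} - y_h \approx y_h - y_{2h}\text{,}\) so the difference between two successive approximations estimates the error of the finer one.

```python
# Rough error estimate from two successive approximations, assuming first order
# behavior: exact - y_h ≈ (1/2)(exact - y_2h), hence exact - y_h ≈ y_h - y_2h.
y_2h, y_h = 2.47250, 2.68034  # the h = 0.25 and h = 0.125 rows of Table 1.1
print(abs(y_h - y_2h))        # ≈ 0.208; crude, but the right order (true error 0.31966)
```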
Exercise 1.7.2.
In the table above, suppose you do not know the error. Take the approximate values of the function in the last two lines, and assume that the error goes down by a factor of 2 each time we halve. Can you estimate the error in the last line from this? Does it (approximately) agree with the table? Now do the same for the first two rows. Does this agree with the table?
Let us talk a little bit more about the example \(y' = \nicefrac{y^2}{3}\text{,}\) \(y(0) = 1\text{.}\) Suppose that instead of \(y(2)\) we wish to find \(y(3)\text{.}\) Table 1.2 lists the results of this effort for successive halvings of \(h\text{.}\) What is going on here? Well, you should solve the equation exactly and you will notice that the solution does not exist at \(x=3\text{.}\) In fact, the solution goes to infinity when you approach \(x=3\text{.}\)
Table 1.2. Attempts to use Euler’s method to approximate \(y(3)\), where \(y'=\nicefrac{y^2}{3}\text{,}\) \(y(0)=1\text{.}\)

\(h\)         Approximate \(y(3)\)
1             3.16232
0.5           4.54329
0.25          6.86079
0.125         10.80321
0.0625        17.59893
0.03125       29.46004
0.015625      50.40121
0.0078125     87.75769
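The blow-up is easy to observe numerically. Here is a small Python sketch (our own illustration, not from the text) that reproduces Table 1.2; as \(h\) shrinks, the approximations of \(y(3)\) grow without bound instead of settling down.

```python
# Euler approximations of y(3) for y' = y^2/3, y(0) = 1.  The solution blows up
# at x = 3, so the approximate values keep growing as the step size shrinks.
f = lambda x, y: y**2 / 3

h = 1.0
for _ in range(8):  # h = 1, 0.5, ..., 0.0078125
    x, y = 0.0, 1.0
    for _ in range(round(3 / h)):  # step from x = 0 to x = 3
        y += h * f(x, y)
        x += h
    print(h, y)
    h /= 2
```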
Another case where things go bad is if the solution oscillates wildly near some point. The solution may exist at all points, but even a much better numerical method than Euler would need an insanely small step size to approximate the solution with reasonable precision. And computers might not be able to easily handle such a small step size.
In real applications we would not use a simple method such as Euler’s. The simplest method that would probably be used in a real application is the standard Runge–Kutta method (see exercises). That is a fourth order method, meaning that if we halve the interval, the error generally goes down by a factor of 16 (it is fourth order as \(\nicefrac{1}{16} = \nicefrac{1}{2} \times \nicefrac{1}{2} \times \nicefrac{1}{2} \times \nicefrac{1}{2}\)).
Choosing the right method to use and the right step size can be very tricky. There are several competing factors to consider.
Computational time: Each step takes computer time. Even if the function \(f\) is simple to compute, we do it many times over. Large step size means faster computation, but perhaps not the right precision.
Roundoff errors: Computers only compute with a certain number of significant digits. Errors introduced by rounding numbers off during our computations become noticeable when the step size becomes too small relative to the quantities we are working with. So reducing step size may in fact make errors worse. There is a certain optimum step size such that the precision increases as we approach it, but then starts getting worse as we make our step size smaller still. Trouble is: this optimum may be hard to find.
Stability: Certain equations may be numerically unstable. What may happen is that the numbers never seem to stabilize no matter how many times we halve the interval. We may need a ridiculously small interval size, which may not be practical due to roundoff errors or computational time considerations. Such problems are sometimes called stiff. In the worst case, the numerical computations might be giving us bogus numbers that look like a correct answer. Just because the numbers seem to have stabilized after successive halving does not mean that we have the right answer.
We have seen just the beginnings of the challenges that appear in real applications. Numerical approximation of solutions to differential equations is an active research area for engineers and mathematicians. For example, the general purpose method used for the ODE solver in Matlab and Octave (as of this writing) is a method that appeared in the literature only in the 1980s.
Exercises
1.7.3.
Consider \(\dfrac{dx}{dt} = {(2t-x)}^2\text{,}\) \(x(0)=2\text{.}\) Use Euler’s method with step size \(h=0.5\) to approximate \(x(1)\text{.}\)
Use Euler’s method with step sizes \(h = 1, \nicefrac{1}{2}, \nicefrac{1}{4}, \nicefrac{1}{8}\) to approximate \(x(1)\text{.}\)
Solve the equation exactly.
Describe what happens to the errors for each \(h\) you used. That is, find the factor by which the error changed each time you halved the interval.
1.7.5.
Approximate the value of \(e\) by looking at the initial value problem \(y'=y\) with \(y(0)=1\) and approximating \(y(1)\) using Euler’s method with a step size of \(0.2\text{.}\)
1.7.6.
Example of numerical instability: Take \(y' = -5y\text{,}\) \(y(0) = 1\text{.}\) We know that the solution should decay to zero as \(x\) grows. Using Euler’s method, start with \(h=1\) and compute \(y_1, y_2, y_3, y_4\) to try to approximate \(y(4)\text{.}\) What happened? Now halve the interval. Keep halving the interval and approximating \(y(4)\) until the numbers you are getting start to stabilize (that is, until they start going towards zero). Note: You might want to use a calculator.
The simplest method used in practice is the Runge–Kutta method. Consider \(\frac{dy}{dx}=f(x,y)\text{,}\) \(y(x_0) = y_0\text{,}\) and a step size \(h\text{.}\) Everything is the same as in Euler’s method, except the computation of \(y_{i+1}\) and \(x_{i+1}\text{:}\)
\begin{align*}
k_1 &= f(x_i,y_i) , & & \\
k_2 &= f\bigl(x_i + \nicefrac{h}{2},\, y_i + k_1 \nicefrac{h}{2}\bigr) , & x_{i+1} &= x_i + h , \\
k_3 &= f\bigl(x_i + \nicefrac{h}{2},\, y_i + k_2 \nicefrac{h}{2}\bigr) , & y_{i+1} &= y_i + \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}\, h , \\
k_4 &= f(x_i + h,\, y_i + k_3 h) . & &
\end{align*}
Use Runge–Kutta (see above) with step sizes \(h=1\) and \(h=\nicefrac{1}{2}\) to approximate \(y(1)\text{.}\)
Use Euler’s method with \(h=1\) and \(h=\nicefrac{1}{2}\text{.}\)
Solve exactly, find the exact value of \(y(1)\text{,}\) and compare.
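For reference, the Runge–Kutta formulas above translate directly into code. A minimal Python sketch of a single step (our own illustration; the name `rk4_step` is ours):

```python
def rk4_step(f, x, y, h):
    """One step of the classical fourth order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + k1 * h / 2)
    k3 = f(x + h / 2, y + k2 * h / 2)
    k4 = f(x + h, y + k3 * h)
    return x + h, y + (k1 + 2 * k2 + 2 * k3 + k4) / 6 * h
```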
1.7.101.
Let \(x' = \sin(xt)\text{,}\) and \(x(0)=1\text{.}\) Approximate \(x(1)\) using Euler’s method with step sizes 1, 0.5, 0.25. Use a calculator and compute up to 4 decimal digits.
Answer.
Approximately: 1.0000, 1.2397, 1.3829
1.7.102.
Let \(x' = 2t\text{,}\) and \(x(0)=0\text{.}\)
Approximate \(x(4)\) using Euler’s method with step sizes 4, 2, and 1.
Solve exactly, and compute the errors.
Compute the factor by which the errors changed.
Answer.
a) 0, 8, 12 b) \(x(4) = 16\text{,}\) so errors are: 16, 8, 4. c) Factors are 0.5, 0.5.
1.7.103.
Let \(x' = x e^{xt+1}\text{,}\) and \(x(0)=0\text{.}\)
Approximate \(x(4)\) using Euler’s method with step sizes 4, 2, and 1.
Guess an exact solution based on part a) and compute the errors.
Answer.
a) 0, 0, 0 b) \(x=0\) is a solution so errors are: 0, 0, 0.
There is a simple way to improve Euler’s method to make it a second order method by doing just one extra step. Consider \(\frac{dy}{dx}=f(x,y)\text{,}\) \(y(x_0) = y_0\text{,}\) and a step size \(h\text{.}\) What we do is to pretend we compute the next step as in Euler, that is, we start with \((x_i,y_i)\text{,}\) we compute a slope \(k_1 = f(x_i,y_i)\text{,}\) and then look at the point \((x_i+h,y_i + k_1h)\text{.}\) Instead of letting our new point be \((x_i+h,y_i + k_1h)\text{,}\) we compute the slope at that point, call it \(k_2\text{,}\) and then take the average of \(k_1\) and \(k_2\text{,}\) hoping that the average is going to be closer to the actual slope on the interval from \(x_i\) to \(x_i+h\text{.}\) And we are correct: if we halve the step, the error should go down by a factor of \(2^2 = 4\text{.}\) To summarize, the setup is the same as for regular Euler, except the computation of \(y_{i+1}\) and \(x_{i+1}\text{:}\)
\begin{align*}
k_1 &= f(x_i,y_i) , & x_{i+1} &= x_i + h , \\
k_2 &= f(x_i + h,\, y_i + k_1 h) , & y_{i+1} &= y_i + \frac{k_1+k_2}{2}\, h .
\end{align*}
Consider \(\frac{dy}{dx} = x + y\text{,}\) \(y(0) = 1\text{.}\) Use the improved Euler’s method (see above) with step sizes \(h=\nicefrac{1}{4}\) and \(h=\nicefrac{1}{8}\) to approximate \(y(1)\text{.}\)
Use Euler’s method with \(h=\nicefrac{1}{4}\) and \(h=\nicefrac{1}{8}\text{.}\)
Solve exactly, find the exact value of \(y(1)\text{.}\)
Compute the errors, and the factors by which the errors changed.
Answer.
a) Improved Euler: \(y(1) \approx 3.3897\) for \(h=\nicefrac{1}{4}\text{,}\) \(y(1) \approx 3.4237\) for \(h=\nicefrac{1}{8}\text{,}\) b) Standard Euler: \(y(1) \approx 2.8828\) for \(h=\nicefrac{1}{4}\text{,}\) \(y(1) \approx 3.1316\) for \(h=\nicefrac{1}{8}\text{,}\) c) \(y = 2e^x-x-1\text{,}\) so \(y(1)\) is approximately \(3.4366\text{.}\) d) Approximate errors for improved Euler: \(0.046852\) for \(h=\nicefrac{1}{4}\text{,}\) and \(0.012881\) for \(h=\nicefrac{1}{8}\text{.}\) For standard Euler: \(0.55375\) for \(h=\nicefrac{1}{4}\text{,}\) and \(0.30499\) for \(h=\nicefrac{1}{8}\text{.}\) Factor is approximately \(0.27\) for improved Euler, and \(0.55\) for standard Euler.
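A minimal Python sketch of the improved Euler step described above (our own illustration; the name `improved_euler_step` is ours), applied to this equation to reproduce the \(h=\nicefrac{1}{4}\) value from the answer:

```python
def improved_euler_step(f, x, y, h):
    """One improved Euler step: average the slopes at both ends of the interval."""
    k1 = f(x, y)
    k2 = f(x + h, y + k1 * h)
    return x + h, y + (k1 + k2) / 2 * h

f = lambda x, y: x + y
x, y = 0.0, 1.0
for _ in range(4):  # four steps of size h = 1/4, from x = 0 to x = 1
    x, y = improved_euler_step(f, x, y, 0.25)
print(y)  # 3.3897..., matching part a)
```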