## Numerical solution of ordinary differential equations (ODEs) in Python

The **scipy.integrate** module provides two functions, **ode()** and **odeint()**, for solving systems of first-order ordinary differential equations (ODEs) with initial conditions given at a single point (i.e., initial value, or Cauchy, problems).

The **ode()** function is more general, while **odeint()** (ODE integrator) has a simpler interface and handles most problems well.

The **odeint()** function takes three required arguments and accepts many options. It has the following format:
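The original call format was not preserved in this copy; as a minimal sketch, the three required arguments are the right-hand-side function, the initial condition, and the array of time points (the test equation \(y' = -y\) here is an illustrative assumption):

```python
import numpy as np
from scipy.integrate import odeint

def func(y, t):
    """Right-hand side dy/dt = f(y, t) of the ODE system."""
    return -y

y0 = 1.0                    # initial condition y(0)
t = np.linspace(0, 5, 51)   # time points at which to return the solution

# odeint(func, y0, t) returns an array of shape (len(t), len(y0))
sol = odeint(func, y0, t)
```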

### Solving a single ODE

Suppose we need to solve a first-order differential equation

The result looks something like this:
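The equation and solution code were lost in extraction; here is a minimal sketch with an assumed example equation \(y' = -2y\), \(y(0) = 1\), whose exact solution is \(e^{-2t}\):

```python
import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    return -2.0 * y              # assumed example equation y' = -2y

t = np.linspace(0, 2, 41)
sol = odeint(rhs, 1.0, t)        # initial condition y(0) = 1

# compare against the exact solution exp(-2t)
err = np.max(np.abs(sol[:, 0] - np.exp(-2 * t)))
```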

### Solving a system of ODEs

Now suppose we want to solve an (autonomous) system of first-order differential equations
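The original system was not preserved; as a sketch, take the harmonic oscillator \(x'' = -x\) rewritten as a first-order system (an assumed example, not the original):

```python
import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    # assumed example: x'' = -x as the first-order system x' = v, v' = -x
    x, v = y
    return [v, -x]

t = np.linspace(0, 2 * np.pi, 101)
sol = odeint(rhs, [1.0, 0.0], t)   # x(0) = 1, v(0) = 0; exact x(t) = cos(t)
```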

## Integration (scipy.integrate)

The scipy.integrate sub-package provides several integration techniques including an ordinary differential equation integrator. An overview of the module is provided by the help command:
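The overview mentioned above can be printed like this:

```python
from scipy import integrate

help(integrate)   # prints an overview of the routines in scipy.integrate
```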

### General integration (quad)

The function quad is provided to integrate a function of one variable between two points. The points can be \(\pm\infty\) ( \(\pm\) inf ) to indicate infinite limits. For example, suppose you wish to integrate a Bessel function jv(2.5, x) along the interval \([0, 4.5]\).

This could be computed using quad :
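A sketch of that quad call, following the SciPy tutorial:

```python
import scipy.integrate as integrate
import scipy.special as special

# integrate the Bessel function jv(2.5, x) over [0, 4.5]
result = integrate.quad(lambda x: special.jv(2.5, x), 0, 4.5)
# result is a tuple: (integral estimate, upper bound on the error)
```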

The first argument to quad is a “callable” Python object (i.e., a function, method, or class instance). Notice the use of a lambda function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error. Notice that in this case, the true value of this integral is

\[I=\sqrt{\frac{2}{\pi}}\left(\frac{18}{27}\sqrt{2}\cos\left(4.5\right)-\frac{4}{27}\sqrt{2}\sin\left(4.5\right)+\sqrt{2\pi}\,\textrm{Si}\left(\frac{3}{\sqrt{\pi}}\right)\right),\]

where

\[\textrm{Si}\left(x\right)=\int_{0}^{x}\sin\left(\frac{\pi}{2}t^{2}\right)dt\]

is the Fresnel sine integral. Note that the numerically-computed integral is within \(1.04\times10^{-11}\) of the exact result, well below the reported error bound.

If the function to integrate takes additional parameters, they can be provided in the *args* argument. Suppose that the following integral shall be calculated:

\[I(a,b)=\int_{0}^{1} ax^{2}+b \, dx\]

This integral can be evaluated by using the following code:
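A sketch of the evaluation, assuming the integral \(I(a,b)=\int_{0}^{1}(ax^{2}+b)\,dx\) from the SciPy tutorial:

```python
from scipy.integrate import quad

def integrand(x, a, b):
    return a * x**2 + b

a, b = 2, 1
# extra parameters are forwarded to the integrand via args
I = quad(integrand, 0, 1, args=(a, b))
# exact value: a/3 + b
```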

Infinite inputs are also allowed in quad by using \(\pm\) inf as one of the arguments. For example, suppose that a numerical value for the exponential integral:

\[E_{n}\left(x\right)=\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\,dt\]

is desired (and the fact that this integral can be computed as special.expn(n,x) is forgotten). The functionality of the function special.expn can be replicated by defining a new function vec_expint based on the routine quad :
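A sketch of vec_expint built on quad, following the tutorial's description:

```python
import numpy as np
from scipy.integrate import quad
import scipy.special as special

def integrand(t, n, x):
    return np.exp(-x * t) / t**n

def expint(n, x):
    # E_n(x) = integral from 1 to infinity of exp(-x*t) / t**n dt
    return quad(integrand, 1, np.inf, args=(n, x))[0]

vec_expint = np.vectorize(expint)

xs = np.arange(1.0, 4.0, 0.5)
approx = vec_expint(3, xs)        # our quad-based version
exact = special.expn(3, xs)       # the library routine it replicates
```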

The function which is integrated can even use the quad argument (though the error bound may underestimate the error due to possible numerical error in the integrand from the use of quad ). The integral in this case is

\[I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\,dt\,dx=\frac{1}{n}.\]

This last example shows that multiple integration can be handled using repeated calls to quad .
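A sketch of the repeated-quad approach: the outer quad integrates a function that itself calls quad.

```python
import numpy as np
from scipy.integrate import quad

def integrand(t, n, x):
    return np.exp(-x * t) / t**n

def expint(n, x):
    # inner integral over t
    return quad(integrand, 1, np.inf, args=(n, x))[0]

n = 4
# outer integral over x; I_n = 1/n
result = quad(lambda x: expint(n, x), 0, np.inf)
```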

Numerical integration algorithms sample the integrand at a finite number of points. Consequently, they cannot guarantee accurate results (or accuracy estimates) for arbitrary integrands and limits of integration. Consider the Gaussian integral, for example:

\[\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}\]

Since the integrand is nearly zero except near the origin, we would expect large but finite limits of integration to yield the same result. However:
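A sketch of this failure mode (the specific finite limits \(\pm 10000\) are an assumption for illustration):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)

good = quad(f, -np.inf, np.inf)   # close to sqrt(pi)
# with very large finite limits, the adaptive sampling misses
# the narrow peak near the origin and returns a near-zero value
bad = quad(f, -10000, 10000)
```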

This happens because the adaptive quadrature routine implemented in quad , while working as designed, does not notice the small, important part of the function within such a large, finite interval. For best results, consider using integration limits that tightly surround the important part of the integrand.

Integrands with several important regions can be broken into pieces as necessary.

### General multiple integration (dblquad, tplquad, nquad)

The mechanics for double and triple integration have been wrapped up into the functions dblquad and tplquad . These functions take the function to integrate and four or six arguments, respectively. The limits of all inner integrals must be defined as functions.

An example of using double integration to compute several values of \(I_{n}\) is shown below:
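A sketch of \(I_{n}=\int_{0}^{\infty}\int_{1}^{\infty} e^{-xt}/t^{n}\,dt\,dx = 1/n\) with dblquad; note that the first argument of the integrand is the inner variable t:

```python
import numpy as np
from scipy.integrate import dblquad

def I(n):
    # outer variable x in [0, inf); inner variable t in [1, inf)
    return dblquad(lambda t, x: np.exp(-x * t) / t**n,
                   0, np.inf, lambda x: 1, lambda x: np.inf)

val4, err4 = I(4)   # should be close to 1/4
val3, err3 = I(3)   # should be close to 1/3
```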

As an example of non-constant limits, consider the integral

\[I=\int_{y=0}^{1/2}\int_{x=0}^{1-2y} xy \,dx\,dy=\frac{1}{96}.\]

This integral can be evaluated using the expression below (note the use of non-constant lambda functions for the upper limit of the inner integral):
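A sketch, assuming the integral \(\int_{y=0}^{1/2}\int_{x=0}^{1-2y} xy\,dx\,dy\) from the SciPy tutorial:

```python
from scipy.integrate import dblquad

# outer variable y in [0, 1/2]; inner variable x in [0, 1 - 2y]
area = dblquad(lambda x, y: x * y, 0, 0.5,
               lambda y: 0, lambda y: 1 - 2 * y)
# exact value is 1/96
```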

For n-fold integration, scipy provides the function nquad . The integration bounds are an iterable object: either a list of constant bounds, or a list of functions for the non-constant integration bounds. The order of integration (and therefore the bounds) is from the innermost integral to the outermost one.

The integral from above

\[I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\,dt\,dx=\frac{1}{n}\]

can be calculated as

Note that the order of arguments for *f* must match the order of the integration bounds; i.e., the inner integral with respect to \(t\) is on the interval \([1, \infty]\) and the outer integral with respect to \(x\) is on the interval \([0, \infty]\) .
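A sketch with nquad; the bounds list runs from the innermost integral (over t) to the outermost (over x):

```python
import numpy as np
from scipy.integrate import nquad

n = 5
result = nquad(lambda t, x: np.exp(-x * t) / t**n,
               [[1, np.inf],    # bounds of the inner integral over t
                [0, np.inf]])   # bounds of the outer integral over x
# I_n = 1/n
```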

Non-constant integration bounds can be treated in a similar manner; the example from above

can be evaluated by means of

which is the same result as before.
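A sketch of the non-constant-bounds version with nquad, assuming the \(\int xy\) example above; bound functions for inner integrals receive the outer variables as arguments:

```python
from scipy.integrate import nquad

def f(x, y):
    return x * y

def bounds_x(y):
    # inner bounds may depend on the outer variable y
    return [0, 1 - 2 * y]

def bounds_y():
    return [0, 0.5]

result = nquad(f, [bounds_x, bounds_y])   # same integral as before, 1/96
```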

### Gaussian quadrature

A few functions are also provided in order to perform simple Gaussian quadrature over a fixed interval. The first is fixed_quad , which performs fixed-order Gaussian quadrature. The second function is quadrature , which performs Gaussian quadrature of multiple orders until the difference in the integral estimate is beneath some tolerance supplied by the user. These functions both use the module scipy.special.orthogonal , which can calculate the roots and quadrature weights of a large variety of orthogonal polynomials (the polynomials themselves are available as special functions returning instances of the polynomial class — e.g., special.legendre ).
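A minimal sketch of fixed-order Gaussian quadrature with fixed_quad (the integrand \(\cos x\) on \([0, \pi/2]\) is an assumed example):

```python
import numpy as np
from scipy.integrate import fixed_quad

# 5-point Gauss-Legendre quadrature of cos(x) on [0, pi/2];
# the exact value is sin(pi/2) - sin(0) = 1
val, _ = fixed_quad(np.cos, 0.0, np.pi / 2, n=5)
```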

### Romberg Integration

Romberg’s method [WPR] is another method for numerically evaluating an integral. See the help function for romberg for further details.

### Integrating using Samples

If the samples are equally spaced and the number of samples available is \(2^{k}+1\) for some integer \(k\), then Romberg integration (romb) can be used to obtain high-precision estimates of the integral using the available samples.

In the case of arbitrarily spaced samples, the two functions trapezoid and simpson are available. They use Newton-Cotes formulas of order 1 and 2 respectively to perform integration. The trapezoidal rule approximates the function as a straight line between adjacent points, while Simpson’s rule approximates the function between three adjacent points as a parabola.

For an odd number of equally spaced samples, Simpson’s rule is exact if the function is a polynomial of order 3 or less. If the samples are not equally spaced, the result is exact only if the function is a polynomial of order 2 or less.
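A sketch illustrating both sample-based rules (the cubic integrand is an assumed example chosen so that Simpson's rule is exact):

```python
import numpy as np
from scipy.integrate import trapezoid, simpson

x = np.linspace(0, 2, 9)   # 9 equally spaced samples (odd count)
y = x**3                   # integral of x^3 over [0, 2] is exactly 4

t_est = trapezoid(y, x=x)  # order-1 rule: an approximation
s_est = simpson(y, x=x)    # order-2 rule: exact for a cubic here
```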

## Differential Equations with SciPy – odeint or solve_ivp

SciPy features two different interfaces for solving differential equations: odeint and solve_ivp . The newer one is solve_ivp, and it is the recommended interface, but odeint is still widespread, probably because of its simplicity. Here I will go through the differences between the two, with a focus on moving to the more modern solve_ivp interface. The primary advantage is that solve_ivp offers several methods for solving differential equations, whereas odeint is restricted to one. We get started by setting up our system of differential equations and some parameters of the simulation.

UPDATE 07.02.2021: Note that I am focusing here on usage of the different interfaces and not on benchmarking accuracy or performance. Faruk Krecinic has contacted me and noted that assessing accuracy would require a system with a known solution as a benchmark. This is beyond the scope of this blog post. He also pointed out to me that the hmax parameter of odeint is important for the extent to which both interfaces give similar results. You can learn more about the parameters of odeint in the docs.

We will be using the Lorenz system. We can move directly on to solving the system with both odeint and solve_ivp .
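The setup and solver calls were not preserved in this copy; here is a sketch of the Lorenz system solved with both interfaces (the parameter values sigma=10, beta=8/3, rho=28, the initial condition, and the 4000-point time grid are assumptions consistent with the text below):

```python
import numpy as np
from scipy.integrate import odeint, solve_ivp

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def lorenz(t, state):
    # time-first signature, the convention expected by solve_ivp
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

y0 = [1.0, 1.0, 1.0]
t = np.linspace(0, 40, 4000)

# odeint needs tfirst=True to accept the time-first signature
sol_odeint = odeint(lorenz, y0, t, tfirst=True)

# solve_ivp takes a (start, end) span and picks its own time points
sol_ivp = solve_ivp(lorenz, t_span=(0, 40), y0=y0)
```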

Simulation results from odeint and solve_ivp. Note that the solve_ivp result looks very different, primarily because of the default temporal resolution that is applied. Changing the temporal resolution and getting very similar results to odeint is easy and shown below.

The first thing that sticks out is that the solve_ivp solution is less smooth. That is because it is calculated at fewer time points, which in turn has to do with the difference between t_span and t . The odeint interface expects t , an array of time points for which we want to calculate the solution. The temporal resolution of the system is given by the interval between time points. The solve_ivp interface on the other hand expects t_span , a tuple that gives the start and end of the simulation interval. solve_ivp determines the temporal resolution by itself, depending on the integration method and the desired accuracy of the solution. We can confirm that the temporal resolution of solve_ivp is lower in this example by inspecting the output of both functions.

The t array has 4000 elements and therefore the result of odeint has 4000 rows, each row being a time point defined by t. The result of solve_ivp is different. It has its own time array as an attribute and it has 1989 elements. This tells us that solve_ivp indeed calculated fewer time points, reducing the temporal resolution. So how can we increase the number of time points in solve_ivp ? There are three ways: 1. We can manually define the time points to integrate, similar to odeint . 2. We can decrease the error of the solution we are willing to tolerate. 3. We can change to a more accurate integration method. We will first change the integration method. The default integration method of solve_ivp is RK45 and we will compare it to the default method of odeint , which is LSODA .

Comparison between RK45 and LSODA integration methods of solve_ivp.

The LSODA method is already more accurate, but it is still not as accurate as the solution we got from odeint . That is because we made odeint solve at an even higher temporal resolution when we passed it t. To get the exact same result from solve_ivp that we got from odeint , we must pass it the exact time points we want to solve for with the t_eval parameter.
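A sketch of matching odeint's method and time points in solve_ivp (the Lorenz setup repeats the assumptions from earlier):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 40, 4000)

# use odeint's method (LSODA) and evaluate at odeint's time points
sol = solve_ivp(lorenz, t_span=(0, 40), y0=[1.0, 1.0, 1.0],
                method='LSODA', t_eval=t)
```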

odeint and solve_ivp with identical integration method and temporal resolution.

Now both solutions have identical temporal resolution. But their solution is still not identical. I was unable to confirm why that is the case, but I suspect very small floating point errors. The Lorenz attractor is a chaotic system and even small errors can make it diverge. The following plot shows the first variable of the system for odeint and solve_ivp from the above simulation. It is consistent with my suspicion that floating point accuracy is to blame, but I was unable to confirm the source.

Solutions from odeint and solve_ivp diverge even for identical temporal resolution and integration method.

There is one more way to make the solution smoother: decrease the tolerated error. We will not go into that here because it does not relate to the difference between odeint and solve_ivp ; it can be used in both interfaces. There are some more subtle differences. For example, by default, odeint expects the first parameter of the problem to be the state and the second parameter to be time t . This is the other way around for solve_ivp . To make odeint accept a problem definition where time is the first parameter, we set the parameter tfirst=True .
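A minimal sketch of the argument-order difference (the decay equation is an assumed example):

```python
import numpy as np
from scipy.integrate import odeint, solve_ivp

def rhs(t, y):
    # time-first signature, the solve_ivp convention
    return -0.5 * y

t = np.linspace(0, 10, 101)

# odeint accepts a time-first problem only with tfirst=True
sol_a = odeint(rhs, [1.0], t, tfirst=True)

# solve_ivp expects time first by default
sol_b = solve_ivp(rhs, (0, 10), [1.0], t_eval=t)
```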

In summary, solve_ivp offers several integration methods while odeint only uses LSODA .

## scipy.integrate.odeint

Integrate a system of ordinary differential equations.

For new code, use scipy.integrate.solve_ivp to solve a differential equation.

Solve a system of ordinary differential equations using lsoda from the FORTRAN library odepack.

Solves the initial value problem for stiff or non-stiff systems of first-order ODEs:

    dy/dt = func(y, t, ...)  [or func(t, y, ...)]

where y can be a vector.

By default, the required order of the first two arguments of *func* is the opposite of the argument order in the system definition function used by the scipy.integrate.ode class and the function scipy.integrate.solve_ivp . To use a function with the signature func(t, y, ...) , the argument *tfirst* must be set to True .

**func** callable(y, t, …) or callable(t, y, …)

Computes the derivative of y at t. If the signature is callable(t, y, ...) , then the argument *tfirst* must be set True .

**y0** array

Initial condition on y (can be a vector).

**t** array

A sequence of time points for which to solve for y. The initial value point should be the first element of this sequence. This sequence must be monotonically increasing or monotonically decreasing; repeated values are allowed.

**args** tuple, optional

Extra arguments to pass to function.

**Dfun** callable(y, t, …) or callable(t, y, …)

Gradient (Jacobian) of *func*. If the signature is callable(t, y, ...) , then the argument *tfirst* must be set True .

**col_deriv** bool, optional

True if *Dfun* defines derivatives down columns (faster), otherwise *Dfun* should define derivatives across rows.

**full_output** bool, optional

If True, return a dictionary of optional outputs as the second output.

**printmessg** bool, optional

Whether to print the convergence message

**tfirst** bool, optional

If True, the first two arguments of *func* (and *Dfun*, if given) must be t, y instead of the default y, t .

New in version 1.1.0.

**y** array, shape (len(t), len(y0))

Array containing the value of y for each desired time in t, with the initial value *y0* in the first row.

**infodict** dict, only returned if full_output == True

Dictionary containing additional output information

- `'hu'`: vector of step sizes successfully used for each time step
- `'tcur'`: vector with the value of t reached for each time step (will always be at least as large as the input times)
- `'tolsf'`: vector of tolerance scale factors, greater than 1.0, computed when a request for too much accuracy was detected
- `'tsw'`: value of t at the time of the last method switch (given for each time step)
- `'nst'`: cumulative number of time steps
- `'nfe'`: cumulative number of function evaluations for each time step
- `'nje'`: cumulative number of jacobian evaluations for each time step
- `'nqu'`: a vector of method orders for each successful step
- `'imxer'`: index of the component of largest magnitude in the weighted local error vector (e / ewt) on an error return, -1 otherwise
- `'lenrw'`: the length of the double work array required
- `'leniw'`: the length of the integer work array required
- `'mused'`: a vector of method indicators for each successful time step: 1: adams (nonstiff), 2: bdf (stiff)

**ml, mu** int, optional

If either of these are not None or non-negative, then the Jacobian is assumed to be banded. These give the number of lower and upper non-zero diagonals in this banded matrix. For the banded case, *Dfun* should return a matrix whose rows contain the non-zero bands (starting with the lowest diagonal). Thus, the return matrix *jac* from *Dfun* should have shape (ml + mu + 1, len(y0)) when ml >= 0 or mu >= 0 . The data in *jac* must be stored such that jac[i - j + mu, j] holds the derivative of the `i`th equation with respect to the `j`th state variable. If *col_deriv* is True, the transpose of this *jac* must be returned.

**rtol, atol** float, optional

The input parameters *rtol* and *atol* determine the error control performed by the solver. The solver will control the vector, e, of estimated local errors in y, according to an inequality of the form max-norm of (e / ewt) <= 1 , where ewt is a vector of positive error weights computed as ewt = rtol * abs(y) + atol . rtol and atol can be either vectors the same length as y or scalars. Defaults to 1.49012e-8.

**tcrit** ndarray, optional

Vector of critical points (e.g., singularities) where integration care should be taken.

**h0** float, (0: solver-determined), optional

The step size to be attempted on the first step.

**hmax** float, (0: solver-determined), optional

The maximum absolute step size allowed.

**hmin** float, (0: solver-determined), optional

The minimum absolute step size allowed.

**ixpr** bool, optional

Whether to generate extra printing at method switches.

**mxstep** int, (0: solver-determined), optional

Maximum number of (internally defined) steps allowed for each integration point in t.

**mxhnil** int, (0: solver-determined), optional

Maximum number of messages printed.

**mxordn** int, (0: solver-determined), optional

Maximum order to be allowed for the non-stiff (Adams) method.

**mxords** int, (0: solver-determined), optional

Maximum order to be allowed for the stiff (BDF) method.

**solve_ivp** : solve an initial value problem for a system of ODEs

**ode** : a more object-oriented integrator based on VODE

**quad** : for finding the area under a curve

The second order differential equation for the angle *theta* of a pendulum acted on by gravity with friction can be written:

    theta''(t) + b*theta'(t) + c*sin(theta(t)) = 0

where *b* and *c* are positive constants, and a prime (') denotes a derivative. To solve this equation with odeint , we must first convert it to a system of first order equations. By defining the angular velocity omega(t) = theta'(t) , we obtain the system:

    theta'(t) = omega(t)
    omega'(t) = -b*omega(t) - c*sin(theta(t))

Let *y* be the vector [*theta*, *omega*]. We implement this system in Python as:
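A sketch of that implementation, matching the system just described:

```python
import numpy as np

def pend(y, t, b, c):
    # y = [theta, omega]; returns [theta', omega']
    theta, omega = y
    dydt = [omega, -b * omega - c * np.sin(theta)]
    return dydt
```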

We assume the constants are *b* = 0.25 and *c* = 5.0:

For initial conditions, we assume the pendulum is nearly vertical with *theta(0)* = *pi* - 0.1, and is initially at rest, so *omega(0)* = 0. Then the vector of initial conditions is

We will generate a solution at 101 evenly spaced samples in the interval 0 <= *t* <= 10. So our array of times is:

Call odeint to generate the solution. To pass the parameters *b* and *c* to *pend*, we give them to odeint using the *args* argument.
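Putting the pieces above together, a sketch of the full call:

```python
import numpy as np
from scipy.integrate import odeint

def pend(y, t, b, c):
    theta, omega = y
    return [omega, -b * omega - c * np.sin(theta)]

b, c = 0.25, 5.0
y0 = [np.pi - 0.1, 0.0]        # nearly vertical, initially at rest
t = np.linspace(0, 10, 101)    # 101 evenly spaced samples on [0, 10]

# b and c are forwarded to pend via the args argument
sol = odeint(pend, y0, t, args=(b, c))
```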

The solution is an array with shape (101, 2). The first column is *theta(t)*, and the second is *omega(t)*. The following code plots both components.