Article Open Access

Effective solution of nonlinear DAEs problems using Taylor series method

Published/Copyright: January 20, 2026

Abstract

Most methods for solving ordinary differential equations use a fixed, limited order to calculate the results. The higher-order method based on the Taylor series presented in this article can use as many Taylor series terms as necessary to obtain a stable and accurate solution. When solving nonlinear problems with this method, the calculation can become complicated. This paper introduces a new approach to solving nonlinear problems using the explicit Taylor series method and compares this new approach against the previously used approach and the state-of-the-art solvers in MATLAB software.

MSC 2020: 34A34; 34-04; 34A30

1 Introduction

The vast majority of real-world technical problems are nonlinear. Such systems can be described by nonlinear ordinary differential equations (ODEs). In the paper presented at the 2024 IEEE 17th International Scientific Conference on Informatics [1], several methods for representing the division operation, which commonly appears in nonlinear problems, were introduced, and the basic principle of the calculation was demonstrated on a simple example. This article extends that paper by introducing a new method for calculating multiplication operations and by comparing the most promising approaches on a complex, real-world technical problem. The article also builds upon [2], where the nonlinear formulation of the Kepler problem was introduced; this article introduces a different method to solve the problem.

The first implementation of the Modern Taylor Series Method (MTSM) was the TKSL/386 software (TKSL stands for Taylor-Kunovsky Simulation Language) [3], [4]. So far, the MTSM has been implemented and tested in MATLAB software [5] and in the C/C++ language (FOS [6] and TKSL/C software [7]). Some experiments with the solution of stiff systems of ODEs and an analysis of the implicit scheme of the MTSM have been performed; see, e.g., [8], [9] and references therein. Additionally, the method can be effectively implemented in hardware [10].

Similar works and implementations of the Taylor series method in a variable order and variable step context were presented by different authors, e.g., TIDES software [11], and TAYLOR [12], which includes a detailed description of a variable-step-size version. Other implementations based on Taylor series include ATOMF [13], COSY INFINITY [14], and DAETS [15]. The variable step-size variable-order scheme is also described in refs. [16], [17], [18], where simulations on a parallel computer are shown. An approach based on an approximate formulation of the Taylor methods can be found in ref. [19]. For example, further research is being done in ref. [20], which describes the generalised implementation of the Taylor series-based method with its order limited to three. The MTSM allows for computation with arbitrary accuracy and step size if variable-precision arithmetic and higher-order of the method are used. The article [21] focuses on the Open Multi-Processing (OpenMP) parallelisation of a multiple precision Taylor series method using one computational node. The model problem is the chaotic dynamic system – the classical Lorenz system. The article also briefly mentions the Clean Numerical Simulation (CNS) concept originally published in ref. [22]. The CNS provides reliable chaotic trajectories in a long enough interval of time.

This paper is divided into several sections. Section 2 introduces the variable-step, variable-order method and commonly used terms and abbreviations. Section 3 describes the solution of linear and nonlinear problems. Particularly, it discusses the solution of problems with just function multiplication and the solution of differential-algebraic equations (DAEs) for both operations: division and multiplication. Section 4 introduces the selected example problem – Kepler problem and several systems of ODEs and DAEs that can be used to solve it. The simulation results are summarized in Section 5, where the solution using the method is compared against the state-of-the-art solvers ode23 and ode45 in MATLAB software [23]. The comparison is also performed against the approach used in ref. [2] to show the better effectiveness of the approach using DAEs systems.

2 Method based on the Taylor series

This section introduces the developed method – the Modern Taylor Series Method (MTSM). More information, including details about other research in this area, can be found in refs. [2], [3], [24], [25]. An ordinary differential equation with an initial condition

(1) $y'(t) = F(t, y(t)), \quad y(0) = y_0,$

is called an initial value problem (IVP). The best-known and most accurate approximation of the solution of (1) is the Taylor series in the form

(2) $y_{i+1} = y_i + h F(t_i, y_i) + \cdots + \frac{h^n}{n!} F^{[n-1]}(t_i, y_i),$

where $h$ is the integration step size, $F^{[n-1]}$ is the $(n-1)$th derivative of $F$, $y_i \approx y(t_i)$ is the approximation of the current value, and $y_{i+1} \approx y(t_i + h)$ is the approximation of the next value of the function $y(t)$ [26].

MTSM very effectively implements the variable-step-size, variable-order numerical solution of IVPs using the Taylor series expansion [3]. It is based on a recurrent calculation of the Taylor series terms for each integration step. Therefore, the complicated calculation of higher-order derivatives is not necessary. Rather, the value of each Taylor series term can be numerically calculated. Equation (2) can then be rewritten in the form

(3) $y_{i+1} = p(0) + p(1) + p(2) + \cdots + p(n),$

where $p(j)$, $j = 0, \ldots, n$, denotes the Taylor series terms. MTSM transforms the input problem into a system of autonomous ODEs, allowing for the recurrent calculation of the Taylor series terms. The Taylor series in each time step is truncated when the stopping rule for the last $\sigma$ Taylor series terms is met,

(4) $\sum_{j=n-\sigma}^{n} |p(j)| \le \varepsilon,$

where ɛ is the required accuracy of the numerical solution using MTSM. In this article, we consider σ = 3.

An important part of the method is the automatic setting of the integration order, i.e., using as many Taylor series terms as the required accuracy demands. Let $P$ denote the function that changes during the computation and defines the number of Taylor series terms used in the current integration step, $P_{i+1} = n$ in (3). More information about the method and an additional comparison with state-of-the-art methods can be found in ref. [2].
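The interplay of the term recurrence (3) and the stopping rule (4) can be sketched in a few lines of Python. This is a hedged illustration, not the authors' MATLAB implementation: for the scalar IVP $y' = \lambda y$ the Taylor series terms satisfy $p(j) = (h\lambda/j)\,p(j-1)$, and the step is truncated once the last $\sigma = 3$ consecutive terms fall below $\varepsilon$. The function name `mtsm_step` and the parameter defaults are illustrative.

```python
import math

def mtsm_step(y, lam, h, eps=1e-10, sigma=3, n_max=64):
    """One MTSM step for the scalar IVP y' = lam*y.

    Taylor terms follow the recurrence p(j) = (h*lam/j) * p(j-1);
    the series is truncated when the last `sigma` consecutive terms
    are all below `eps`, mirroring stopping rule (4)."""
    term = y                        # p(0) = y_i
    total = term
    small = 0                       # consecutive terms below eps
    order = 0
    for j in range(1, n_max + 1):
        term = term * h * lam / j   # recurrent Taylor series term p(j)
        total += term
        order = j                   # order P_{i+1} actually used
        small = small + 1 if abs(term) < eps else 0
        if small >= sigma:          # stopping rule (4)
            break
    return total, order

y1, order = mtsm_step(1.0, lam=-1.0, h=0.1)
# y1 approximates exp(-0.1); `order` is the automatically chosen order
```

The automatically selected order grows with the step size and the required accuracy, which is exactly the behaviour of the $P$ function described above.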

3 Numerical solution of ODEs

In this section, the numerical solution of ODEs is introduced for both linear and nonlinear problems.

For linear systems of ODEs in the form $y'(t) = A y(t) + b$ with the initial condition $y(0) = y_0$, the Taylor series expansion (2) can be constructed using matrix-vector notation as

(5) $y_{i+1} = y_i + h (A y_i + b) + \frac{h^2}{2!} A (A y_i + b) + \cdots + \frac{h^n}{n!} A^{n-1} (A y_i + b),$

where A is the constant Jacobian matrix and b is the constant right-hand side. Moreover, Taylor series terms in (3) can be computed recurrently using

(6) $p(0) = y_i, \qquad p(1) = h (A y_i + b), \qquad p(j) = \frac{h}{j} A\, p(j-1), \quad j = 2, \ldots, n.$
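As a sketch, recurrence (6) turns one integration step for a linear system into a loop of matrix-vector products. The Python/NumPy fragment below (a stand-in for the authors' MATLAB code) applies the recurrence to the harmonic oscillator, a hypothetical test case with a known exact solution:

```python
import numpy as np

def mtsm_linear_step(A, b, y, h, eps=1e-10, sigma=3, n_max=64):
    """One step of recurrence (6): p(1) = h*(A y + b), p(j) = (h/j)*A p(j-1)."""
    p = h * (A @ y + b)          # p(1)
    y_next = y + p               # p(0) + p(1)
    small = 0
    for j in range(2, n_max + 1):
        p = (h / j) * (A @ p)    # recurrent term p(j)
        y_next = y_next + p
        small = small + 1 if np.linalg.norm(p) < eps else 0
        if small >= sigma:       # stopping rule (4)
            break
    return y_next

# harmonic oscillator y1' = y2, y2' = -y1; exact solution (cos t, -sin t)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = mtsm_linear_step(A, np.zeros(2), np.array([1.0, 0.0]), h=0.5)
```

With $h = 0.5$ the loop stops after roughly a dozen terms, and the result matches $(\cos 0.5, -\sin 0.5)$ to machine precision.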

The next subsections introduce the ways to solve nonlinear ODEs using the method presented in Section 2.

3.1 ODEs with operation multiplication

For nonlinear problems with operation multiplication, IVP (1) has the following form

(7) $y' = A y + B_1 y_{jk} + B_2 y_{jkl} + \cdots + b, \quad y(0) = y_0,$

where $A \in \mathbb{R}^{ne \times ne}$ is the constant matrix for the linear part of the system, and $B_1 \in \mathbb{R}^{ne \times nm_{jk}}$, $B_2 \in \mathbb{R}^{ne \times nm_{jkl}}$ are the constant matrices for the nonlinear part of the system. The vector $b \in \mathbb{R}^{ne}$ is the right-hand side for the forces entering the system, $y_0$ is the vector of initial conditions, and the symbol $ne$ stands for the number of equations of the system of ODEs. Symbols $nm_{jk}$ and $nm_{jkl}$ represent the number of two- and three-function multiplications, respectively.

The unknown function $y_{jk} \in \mathbb{R}^{nm_{jk}}$ represents the vector of two-term multiplications $y_j \odot y_k$, and similarly $y_{jkl} \in \mathbb{R}^{nm_{jkl}}$ represents the vector of three-term multiplications $y_{jj} \odot y_{kk} \odot y_{ll}$, where the vectors of indices $j, k, jj, kk, ll \in \{1, \ldots, ne\}$ come from the indices of the function $y$ in the multiplication terms in (7). The operation $\odot$ stands for element-by-element multiplication. For simplicity, the matrices $A, B_1, B_2, \ldots$ and the vector $b$ are constant. The higher derivatives of the terms $B_1 y_{jk}$, $B_2 y_{jkl}$ in (7) can be included in a recurrent calculation of the Taylor series terms $p_{B1}$ and $p_{B2}$ in explicit form

(8)
$$
\begin{aligned}
p_A(1) &= A y + b, \qquad p_{B1}(1) = B_1 y_{jk}, \qquad p_{B2}(1) = B_2 y_{jkl},\\
p_A(r) &= A\, p(r-1), \qquad p_{B1}(r) = B_1 \sum_{a=1}^{r} p_j(a-1) \odot p_k(r-a),\\
p_{B2}(r) &= B_2 \sum_{a=0}^{r-1} p_{jj}(a) \odot \sum_{b=1}^{r-a} p_{kk}(b-1) \odot p_{ll}(r-a-b),
\end{aligned}
$$

where r = 2, …, n. The vectors of Taylor series terms p j , p k and p jj , p kk , p ll have the same meaning for the element-by-element multiplication of two-terms and three-terms in (7), respectively. Finally, the Taylor series terms are calculated as a sum of linear and nonlinear terms

(9) $p(s) = \frac{h}{s} \left( p_A(s) + p_{B1}(s) + p_{B2}(s) \right), \quad s = 1, \ldots, n,$

where $r$ and $s$ are the current indices of the Taylor series terms, $a$ and $b$ are auxiliary indices for the summation of the two- and three-term multiplications in the nonlinear part of the Taylor series, and $p_A(s)$ is the linear term computed using the recurrent calculation for the linear part of the ODE system. The next value of the function can be calculated using

(10) $y_{i+1} = p(0)_i + p(1)_i + p(2)_i + \cdots + p(n)_i,$

where $p(0)_i$ is the value of the function $y_i$, and $p(1)_i, \ldots, p(n)_i$ are the Taylor series terms calculated at time $t = t_i$ using (9). Multiplication terms of the Taylor series for additional multiplications $p_{B3}, p_{B4}, \ldots$ can be calculated recurrently in a similar way. More information can be found in refs. [2], [27].
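For a scalar instance of (7)-(10), $y' = y^2$ (so $A = 0$, $b = 0$, $B_1 = 1$), the Cauchy-product recurrence can be sketched in Python; the exact solution $1/(1-t)$ has Taylor terms $p(j) = h^j$, which makes the recurrence easy to verify. The function name and term count are illustrative, not part of the authors' implementation.

```python
def mtsm_mult_step(y, h, n=30):
    """One MTSM step for the scalar IVP y' = y*y using (8)-(9):
    p(r) = (h/r) * sum_{a=1}^{r} p(a-1) * p(r-a)."""
    p = [y]                                     # p(0) = y_i
    for r in range(1, n + 1):
        # Cauchy product of the term sequence with itself
        conv = sum(p[a - 1] * p[r - a] for a in range(1, r + 1))
        p.append(h / r * conv)                  # nonlinear Taylor term p(r)
    return sum(p)                               # next value, as in (10)

# y' = y^2, y(0) = 1 has exact solution 1/(1 - t)
y_next = mtsm_mult_step(1.0, h=0.1)             # close to 1/0.9
```

With $y(0) = 1$ every term equals $h^r$ (e.g. $p(2) = \frac{h}{2}(p(0)p(1) + p(1)p(0)) = h^2$), so the partial sums converge geometrically for $|h| < 1$.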

In the conference article [1], we have presented several approaches that can be used to solve nonlinear ODEs with operation division. The most promising was Method 3, which used an implicit scheme to transform a system of nonlinear ODEs into the system of DAEs.

3.2 DAEs with operation division

Let us analyse the IVP

(11) $y' = z, \quad y(0) = y_0,$

and

(12) $z = \frac{f}{g}.$

The Taylor series terms for functions f and g in (12) can be computed recurrently

(13)
$$
\begin{aligned}
p_f(1) &= h f', & p_g(1) &= h g',\\
p_f(j) &= \frac{h}{j}\, p_{f'}(j-1), & p_g(j) &= \frac{h}{j}\, p_{g'}(j-1), \qquad j = 2, \ldots, n.
\end{aligned}
$$

Functions f(t i+1) and g(t i+1) can be then approximated using Taylor series expansions

(14) $f_{i+1} = f_i + \sum_{j=1}^{n} p_f(j)_i, \qquad g_{i+1} = g_i + \sum_{j=1}^{n} p_g(j)_i.$

The Taylor series terms for the function $y$ in (11) can be computed recurrently in the form

(15) $p_y(1) = h z, \qquad p_y(j) = \frac{h}{j}\, p_z(j-1), \qquad j = 2, \ldots, n.$

Based on [1], the $n$-th Taylor series term for the function $z$ can be computed as

(16) $p_z(n) = \frac{1}{g} \left( p_f(n) - \sum_{j=1}^{n} p_z(n-j)\, p_g(j) \right),$

where p z (0) = z. Finally, the functions y(t i+1) and z(t i+1) can be approximated using Taylor series terms computed in (15) and (16), respectively

(17) $y_{i+1} = y_i + \sum_{j=1}^{n} p_y(j)_i, \qquad z_{i+1} = z_i + \sum_{j=1}^{n} p_z(j)_i.$

As (16) shows, the computation of the Taylor series terms $p_z(j)$ for the auxiliary function $z$ is in implicit form: all the Taylor series terms $p_f(j)$, $p_g(j)$, and $p_y(j)$, $j = 1, \ldots, n$, for the functions $f$, $g$, and $y$ have to be computed beforehand. This substitution leads to the solution of DAEs, where the algebraic equations (in our simple case, the function $z$) are substituted into the original system of ODEs and solved.
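Given the Taylor series terms of $f$ and $g$, recurrence (16) produces the terms of $z = f/g$ without ever differentiating the quotient. A hedged Python sketch follows; the function name and the test series are illustrative:

```python
def divide_series(pf, pg):
    """Taylor terms of z = f/g per recurrence (16):
    p_z(n) = (p_f(n) - sum_{j=1}^{n} p_z(n-j)*p_g(j)) / g,
    where g = pg[0] is the current value of the denominator."""
    pz = [pf[0] / pg[0]]                        # p_z(0) = z = f/g
    for r in range(1, len(pf)):
        s = sum(pz[r - j] * pg[j] for j in range(1, r + 1))
        pz.append((pf[r] - s) / pg[0])
    return pz

# f(t) = 1 and g(t) = 1 + t at t = 0 with step h:
# the scaled terms of z = 1/(1 + t) must come out as (-h)^j
h = 0.5
pz = divide_series([1.0, 0.0, 0.0, 0.0], [1.0, h, 0.0, 0.0])
```

Note that the only division performed is by the scalar value $g$; the rest of the recurrence is additions and multiplications of already known terms, which is what makes the DAE formulation attractive for the MTSM.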

Note that IVP (11) with the algebraic equation (12) can be rewritten in matrix-vector notation in the form

(18) $y_1' = A y, \quad y(0) = y_0, \qquad y_2 = B_d\, y_j,$

where $A = (1)$, $B_d = (1/g)$, $y_1 = y$, $y_2 = z$, $y_j = f$, and $y = (y_1^T, y_2^T)^T$.

The DAE system (18) again comprises a linear part (matrix $A$) and a nonlinear part (matrix $B_d$). The linear part of the system (Taylor series terms $p_A$) is solved using (6). Equation (16), used for the computation of the Taylor series terms in the nonlinear part, can be expressed as

(19) $p_{B_d}(n) = B_d \left( p_j(n) - \sum_{i=1}^{n} p_k(n-i) \odot p_l(i) \right),$

where $p_k(0) = y_k(0)$, the symbol $\odot$ has the same meaning (element-by-element multiplication) as in (8), and the vectors of indices $j, k, l \in \{1, \ldots, ne\}$ in the terms $p_j$, $p_k$, $p_l$ define the indices of the terms in the division operation, namely the numerator, quotient, and denominator, respectively.

The Taylor series terms are in the form $p(j) = (p_A(j)^T, p_{B_d}(j)^T)^T$, $j = 1, \ldots, n$. Finally, $y(t_{i+1})$ can be approximated using the Taylor series expansion

(20) $y_{i+1} = y_i + \sum_{j=1}^{n} p(j)_i.$

3.3 DAEs with operation multiplication

Similarly, the multiplication operation in ODEs can be transformed into a system of DAEs. Let us analyse IVP (11), where

(21) $z = f g.$

The Taylor series terms for functions f, g, and y can be computed recurrently using (13) and (15), respectively. Generally, the n-th Taylor series term for function z can be computed

(22) $p_z(n) = \sum_{j=1}^{n+1} p_f(n-j+1)\, p_g(j-1),$

where p f (0) = f and p g (0) = g. Finally, the functions y(t i+1) and z(t i+1) in (17) can be approximated using Taylor series terms computed in (15) and (22).
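Recurrence (22) is a plain Cauchy product of the two term sequences. A small Python sketch (the function name is illustrative; the check multiplies the series of $1 + t$ by itself):

```python
def multiply_series(pf, pg):
    """Taylor terms of z = f*g per the Cauchy product (22):
    p_z(n) = sum_{j=1}^{n+1} p_f(n-j+1) * p_g(j-1),
    with p_f(0) = f and p_g(0) = g."""
    return [sum(pf[r - j + 1] * pg[j - 1] for j in range(1, r + 2))
            for r in range(len(pf))]

# scaled terms of f = g = 1 + t at t = 0 with h = 0.5 are [1, h, 0];
# the product (1 + t)^2 = 1 + 2t + t^2 has scaled terms [1, 2h, h^2]
pz = multiply_series([1.0, 0.5, 0.0], [1.0, 0.5, 0.0])
```

Unlike the division recurrence (16), no term of $z$ appears on the right-hand side, so the terms of the product can be evaluated in a single pass once $p_f$ and $p_g$ are known.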

As (22) again shows, the computation of the Taylor series terms $p_z(j)$ for the auxiliary function $z$ with the multiplication operation is in implicit form: all the Taylor series terms $p_f(j)$ and $p_g(j)$ in (13), $j = 1, \ldots, n$, for the functions $f$ and $g$ have to be computed beforehand. Note that IVP (11) with the algebraic equation (21) can be rewritten in matrix-vector notation in the form

(23) $y_1' = A y, \quad y(0) = y_0, \qquad y_2 = y_{jk},$

where $A = (1)$, $y_1 = y$, $y_2 = z$, $y_{jk} = f g$, and $y = (y_1^T, y_2^T)^T$.

The DAE system (23) again comprises a linear part (matrix $A$) and a nonlinear part (vector $y_2$), which contains the multiplication of two functions. The linear part of the system (Taylor series terms $p_A$) is solved using (6). Equation (22), used for the computation of the Taylor series terms in the nonlinear part, can be expressed as

(24) $p_m(n) = \sum_{i=1}^{n+1} p_j(n-i+1) \odot p_k(i-1),$

where $p_j(0) = y_j(0)$ and $p_k(0) = y_k(0)$. The symbol $\odot$ has the same meaning (element-by-element multiplication) as in (8), and the vectors of indices $j, k \in \{1, \ldots, ne\}$ in the terms $p_j$, $p_k$ define the indices of the terms in the multiplication operation, namely the multiplier and the multiplicand, respectively.

The Taylor series terms are in the form $p(j) = (p_A(j)^T, p_m(j)^T)^T$, $j = 1, \ldots, n$. Finally, $y(t_{i+1})$ can be approximated using the Taylor series expansion (20).

4 Kepler problem

To show how the presented approach works, let us consider the Kepler problem [28]. We have analyzed the problem thoroughly in ref. [2] using the approach described in Section 3.1. Let us compare the results with the newly proposed solution of nonlinear ODEs using transformation to DAEs systems (formulation of such problems is described in Sections 3.2 and 3.3).

The Kepler problem is defined using the following system of nonlinear ODEs

(25)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -\frac{y_1}{r^3}, & y_3(0) &= 0,\\
y_4' &= -\frac{y_2}{r^3}, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},
\end{aligned}
$$

where $e$ is the eccentricity of the orbiting body and $r = \sqrt{y_1^2 + y_2^2}$. The analytical solution of (25) is defined by

(26) $\mathcal{K}_a(y_1, y_2, e) := (y_1 + e)^2 + \frac{y_2^2}{1 - e^2} - 1 = 0.$

The solution of (25) for different e is shown in Figure 1. Some possibilities of the transformation of the original problem (25) will be introduced in the next part of this section. Note that all presented Examples 1–5 are equivalent to the original Kepler problem (25).
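The invariant (26) gives a direct correctness check for any numerical solution of (25). The sketch below integrates the right-hand side of (25) with a classical RK4 stand-in (not the MTSM) and evaluates $\mathcal{K}_a$ afterwards; the step size and step count are illustrative choices.

```python
import math

def kepler_rhs(y):
    """Right-hand side of the Kepler system (25)."""
    y1, y2, y3, y4 = y
    r3 = (y1 * y1 + y2 * y2) ** 1.5
    return [y3, y4, -y1 / r3, -y2 / r3]

def rk4(y, h, steps):
    """Classical RK4 stand-in for an IVP solver, used only to check (26)."""
    for _ in range(steps):
        k1 = kepler_rhs(y)
        k2 = kepler_rhs([yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = kepler_rhs([yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = kepler_rhs([yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    return y

e = 0.5
y0 = [1 - e, 0.0, 0.0, math.sqrt((1 + e) / (1 - e))]
y = rk4(y0, h=1e-3, steps=2000)
# residual of the ellipse equation (26); should stay near zero
K = (y[0] + e) ** 2 + y[1] ** 2 / (1 - e * e) - 1
```

The residual $\mathcal{K}_a$ evaluated along the trajectory is what the error measure (err) in Section 5 is based on.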

Figure 1: Solution for e = 0.25 (left), e = 0.5 (right), e = 0.75 (bottom).

4.1 Example 1

To solve (25) using the presented method, the term $r^3 = (y_1^2 + y_2^2)^{\frac{3}{2}}$ has to be removed using the following transformation

(27)
$$
\begin{aligned}
y_5 &= (y_1^2 + y_2^2)^{\frac{3}{2}},\\
y_5' &= \frac{3}{2} (y_1^2 + y_2^2)^{\frac{1}{2}} (2 y_1 y_1' + 2 y_2 y_2') = 3 (y_1^2 + y_2^2)^{\frac{1}{2}} (y_1 y_3 + y_2 y_4).
\end{aligned}
$$

The term $(y_1^2 + y_2^2)^{\frac{1}{2}}$ has to be handled next in a similar manner

(28)
$$
\begin{aligned}
y_6 &= (y_1^2 + y_2^2)^{\frac{1}{2}},\\
y_6' &= \frac{1}{2} (y_1^2 + y_2^2)^{-\frac{1}{2}} (2 y_1 y_1' + 2 y_2 y_2') = (y_1^2 + y_2^2)^{-\frac{1}{2}} (y_1 y_3 + y_2 y_4) = \frac{y_1 y_3 + y_2 y_4}{(y_1^2 + y_2^2)^{\frac{1}{2}}} = \frac{y_1 y_3 + y_2 y_4}{y_6}.
\end{aligned}
$$

The system of ODEs representing the problem without the term $r^3 = (y_1^2 + y_2^2)^{\frac{3}{2}}$ can be written as

(29)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -\frac{y_1}{y_5}, & y_3(0) &= 0,\\
y_4' &= -\frac{y_2}{y_5}, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= 3 (y_6 y_1 y_3 + y_6 y_2 y_4), & y_5(0) &= r(0)^3,\\
y_6' &= \frac{y_1 y_3 + y_2 y_4}{y_6}, & y_6(0) &= r(0).
\end{aligned}
$$

The same system of ODEs was analyzed in ref. [2]. To solve this system using the newly proposed DAE solver, the algebraic equations for multiplication and division have to be derived

(30)
$$
\begin{aligned}
y_7 &= y_1 y_3, & y_{11} &= \frac{y_1}{y_5},\\
y_8 &= y_2 y_4, & y_{12} &= \frac{y_2}{y_5},\\
y_9 &= y_6 y_7, & y_{13} &= \frac{y_7}{y_6},\\
y_{10} &= y_6 y_8, & y_{14} &= \frac{y_8}{y_6}.
\end{aligned}
$$

The full DAEs system can be written as follows

(31)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -y_{11}, & y_3(0) &= 0,\\
y_4' &= -y_{12}, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= 3 y_9 + 3 y_{10}, & y_5(0) &= r(0)^3,\\
y_6' &= y_{13} + y_{14}, & y_6(0) &= r(0),\\
y_7 &= y_1 y_3, & y_{11} &= \frac{y_1}{y_5},\\
y_8 &= y_2 y_4, & y_{12} &= \frac{y_2}{y_5},\\
y_9 &= y_6 y_7, & y_{13} &= \frac{y_7}{y_6},\\
y_{10} &= y_6 y_8, & y_{14} &= \frac{y_8}{y_6}.
\end{aligned}
$$

The DAEs system (31) can be rewritten in matrix-vector notation using (18) and (23) in the form

(32) $y_1' = A y, \quad y(0) = y_0, \qquad y_2 = y_{jk}, \qquad y_3 = B_d\, y_j,$

where the constant matrix $A$ of size $6 \times 14$ has eight nonzero elements $A(1,3) = 1$, $A(2,4) = 1$, $A(3,11) = -1$, $A(4,12) = -1$, $A(5,9) = 3$, $A(5,10) = 3$, $A(6,13) = 1$, $A(6,14) = 1$, and the square matrix $B_d$ of size $4$ contains nonzero elements only on the main diagonal

$\mathrm{diag}(B_d) = (1/y_5,\ 1/y_5,\ 1/y_6,\ 1/y_6)$

and vectors

$$
\begin{aligned}
y_1 &= (y_1, y_2, y_3, y_4, y_5, y_6)^T, & y_2 &= (y_7, y_8, y_9, y_{10})^T, & y_{jk} &= (y_1 y_3,\ y_2 y_4,\ y_6 y_7,\ y_6 y_8)^T,\\
y_3 &= (y_{11}, y_{12}, y_{13}, y_{14})^T, & y_j &= (y_1, y_2, y_7, y_8)^T.
\end{aligned}
$$

4.2 Example 2

The system (29) was used in ref. [2]. In the original article, it was not possible to solve divisions directly; they had to be transformed into the multiplication form (7).

The new approach introduced in this article allows us to derive the problem differently. The term $r = \sqrt{y_1^2 + y_2^2}$ from (25) can be generated using the auxiliary function $y_5$ and its derivative in the form

(33)
$$
\begin{aligned}
y_5 &= (y_1^2 + y_2^2)^{\frac{1}{2}},\\
y_5' &= \frac{1}{2} (y_1^2 + y_2^2)^{-\frac{1}{2}} (2 y_1 y_1' + 2 y_2 y_2') = (y_1^2 + y_2^2)^{-\frac{1}{2}} (y_1 y_3 + y_2 y_4) = \frac{y_1 y_3 + y_2 y_4}{(y_1^2 + y_2^2)^{\frac{1}{2}}} = \frac{y_1 y_3 + y_2 y_4}{y_5}.
\end{aligned}
$$

The system of ODEs can then be rewritten as

(34)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -\frac{y_1}{r^3}, & y_3(0) &= 0,\\
y_4' &= -\frac{y_2}{r^3}, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= \frac{y_1 y_3 + y_2 y_4}{y_5}, & y_5(0) &= r(0).
\end{aligned}
$$

After substituting algebraic equations for multiplication and division operations to the system of ODEs (34), the final system of DAEs is in the form

(35)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -y_{12}, & y_3(0) &= 0,\\
y_4' &= -y_{13}, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= y_{10} + y_{11}, & y_5(0) &= r(0),\\
y_6 &= y_1 y_3, & y_{10} &= \frac{y_6}{y_5},\\
y_7 &= y_2 y_4, & y_{11} &= \frac{y_7}{y_5},\\
y_8 &= y_5 y_5, & y_{12} &= \frac{y_1}{y_9},\\
y_9 &= y_8 y_5, & y_{13} &= \frac{y_2}{y_9}.
\end{aligned}
$$

System (35) can be written in the matrix-vector notation (32), where the constant matrix $A$ of size $5 \times 13$ has six nonzero elements $A(1,3) = 1$, $A(2,4) = 1$, $A(3,12) = -1$, $A(4,13) = -1$, $A(5,10) = 1$, $A(5,11) = 1$, and the square matrix $B_d$ of size $4$ contains nonzero elements only on the main diagonal

$\mathrm{diag}(B_d) = (1/y_5,\ 1/y_5,\ 1/y_9,\ 1/y_9)$

and vectors

$$
\begin{aligned}
y_1 &= (y_1, y_2, y_3, y_4, y_5)^T, & y_2 &= (y_6, y_7, y_8, y_9)^T, & y_{jk} &= (y_1 y_3,\ y_2 y_4,\ y_5 y_5,\ y_8 y_5)^T,\\
y_3 &= (y_{10}, y_{11}, y_{12}, y_{13})^T, & y_j &= (y_6, y_7, y_1, y_2)^T.
\end{aligned}
$$

Examples 1 and 2 were not presented in ref. [2]. The following examples compare the formulation of the systems using the approach from the previous article with the newly proposed one.

4.3 Example 3

The first system compared with [2] is the most complicated variant, with five function multiplications

(36)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -y_7 y_1, & y_3(0) &= 0,\\
y_4' &= -y_7 y_2, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= 3 y_6 (y_1 y_3 + y_2 y_4), & y_5(0) &= r(0)^3,\\
y_6' &= y_8 (y_1 y_3 + y_2 y_4), & y_6(0) &= r(0),\\
y_7' &= -3 y_6 y_7^2 (y_1 y_3 + y_2 y_4), & y_7(0) &= \frac{1}{y_5(0)},\\
y_8' &= -y_8^3 (y_1 y_3 + y_2 y_4), & y_8(0) &= \frac{1}{y_6(0)}.
\end{aligned}
$$

This system can be transformed into DAEs as follows:

(37)
$$
\begin{aligned}
y_1' &= y_3, & y_5' &= 3 y_{15} + 3 y_{16},\\
y_2' &= y_4, & y_6' &= y_{17} + y_{18},\\
y_3' &= -y_9, & y_7' &= -3 y_{21} - 3 y_{22},\\
y_4' &= -y_{10}, & y_8' &= -y_{23} - y_{24},\\
y_9 &= y_1 y_7, & y_{17} &= y_8 y_{11},\\
y_{10} &= y_2 y_7, & y_{18} &= y_8 y_{12},\\
y_{11} &= y_1 y_3, & y_{19} &= y_6 y_{13},\\
y_{12} &= y_2 y_4, & y_{20} &= y_8 y_{14},\\
y_{13} &= y_7 y_7, & y_{21} &= y_{11} y_{19},\\
y_{14} &= y_8 y_8, & y_{22} &= y_{12} y_{19},\\
y_{15} &= y_6 y_{11}, & y_{23} &= y_{11} y_{20},\\
y_{16} &= y_6 y_{12}, & y_{24} &= y_{12} y_{20}.
\end{aligned}
$$

This system with the same initial conditions as (36) can again be rewritten in the matrix-vector notation (32). The 8 × 24 matrix A has twelve nonzero elements A(1, 3) = 1, A(2, 4) = 1, A(3, 9) = −1, A(4, 10) = −1, A(5, 15) = 3, A(5, 16) = 3, A(6, 17) = 1, A(6, 18) = 1, A(7, 21) = −3, A(7, 22) = −3, A(8, 23) = −1, A(8, 24) = −1 and

(38)
$$
\begin{aligned}
y_1 &= (y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8)^T,\\
y_2 &= (y_9, y_{10}, y_{11}, y_{12}, y_{13}, y_{14}, y_{15}, y_{16}, y_{17}, y_{18}, y_{19}, y_{20}, y_{21}, y_{22}, y_{23}, y_{24})^T,\\
y_{jk} &= (y_1 y_7,\ y_2 y_7,\ y_1 y_3,\ y_2 y_4,\ y_7 y_7,\ y_8 y_8,\ y_6 y_{11},\ y_6 y_{12},\ y_8 y_{11},\ y_8 y_{12},\ y_6 y_{13},\ y_8 y_{14},\ y_{11} y_{19},\ y_{12} y_{19},\ y_{11} y_{20},\ y_{12} y_{20})^T,\\
y_3 &= 0, \quad y_j = 0, \quad B_d = 0.
\end{aligned}
$$

4.4 Example 4

The next system from [2] has nine ODEs and four function multiplications

(39)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -y_7 y_1, & y_3(0) &= 0,\\
y_4' &= -y_7 y_2, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= 3 y_6 y_9, & y_5(0) &= r(0)^3,\\
y_6' &= y_8 y_9, & y_6(0) &= r(0),\\
y_7' &= -3 y_6 y_7^2 y_9, & y_7(0) &= \frac{1}{y_5(0)},\\
y_8' &= -y_8^3 y_9, & y_8(0) &= \frac{1}{y_6(0)},\\
y_9' &= y_3^2 + y_4^2 - y_7 (y_1^2 + y_2^2), & y_9(0) &= y_1(0) y_3(0) + y_2(0) y_4(0).
\end{aligned}
$$

This system can be transformed into DAEs as follows

(40)
$$
\begin{aligned}
y_1' &= y_3, & y_6' &= y_{13},\\
y_2' &= y_4, & y_7' &= -3 y_{20},\\
y_3' &= -y_{10}, & y_8' &= -y_{21},\\
y_4' &= -y_{11}, & y_9' &= y_{16} + y_{17} - y_{22} - y_{23},\\
y_5' &= 3 y_{12},\\
y_{10} &= y_1 y_7, & y_{17} &= y_4 y_4,\\
y_{11} &= y_2 y_7, & y_{18} &= y_1 y_1,\\
y_{12} &= y_6 y_9, & y_{19} &= y_2 y_2,\\
y_{13} &= y_8 y_9, & y_{20} &= y_{12} y_{14},\\
y_{14} &= y_7 y_7, & y_{21} &= y_{13} y_{15},\\
y_{15} &= y_8 y_8, & y_{22} &= y_7 y_{18},\\
y_{16} &= y_3 y_3, & y_{23} &= y_7 y_{19}.
\end{aligned}
$$

This system with the same initial conditions as (39) can be again rewritten in the matrix-vector notation (32). The 9 × 23 matrix A has twelve nonzero elements A(1, 3) = 1, A(2, 4) = 1, A(3, 10) = −1, A(4, 11) = −1, A(5, 12) = 3, A(6, 13) = 1, A(7, 20) = −3, A(8, 21) = −1, A(9, 16) = 1, A(9, 17) = 1, A(9, 22) = −1, A(9, 23) = −1 and

(41)
$$
\begin{aligned}
y_1 &= (y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8, y_9)^T,\\
y_2 &= (y_{10}, y_{11}, y_{12}, y_{13}, y_{14}, y_{15}, y_{16}, y_{17}, y_{18}, y_{19}, y_{20}, y_{21}, y_{22}, y_{23})^T,\\
y_{jk} &= (y_1 y_7,\ y_2 y_7,\ y_6 y_9,\ y_8 y_9,\ y_7 y_7,\ y_8 y_8,\ y_3 y_3,\ y_4 y_4,\ y_1 y_1,\ y_2 y_2,\ y_{12} y_{14},\ y_{13} y_{15},\ y_7 y_{18},\ y_7 y_{19})^T,\\
y_3 &= 0, \quad y_j = 0, \quad B_d = 0.
\end{aligned}
$$

4.5 Example 5

Finally, the last system from [2] contains twelve ODEs and three function multiplications

(42)
$$
\begin{aligned}
y_1' &= y_3, & y_1(0) &= 1 - e,\\
y_2' &= y_4, & y_2(0) &= 0,\\
y_3' &= -y_7 y_1, & y_3(0) &= 0,\\
y_4' &= -y_7 y_2, & y_4(0) &= \sqrt{\frac{1+e}{1-e}},\\
y_5' &= 3 y_{10}, & y_5(0) &= r(0)^3,\\
y_6' &= y_{11}, & y_6(0) &= r(0),\\
y_7' &= -3 y_7^2 y_{10}, & y_7(0) &= \frac{1}{y_5(0)},\\
y_8' &= -y_8^2 y_{11}, & y_8(0) &= \frac{1}{y_6(0)},\\
y_9' &= y_3^2 + y_4^2 - y_7 y_{12}, & y_9(0) &= y_1(0) y_3(0) + y_2(0) y_4(0),\\
y_{10}' &= y_9 y_{11} + y_6 y_3^2 + y_6 y_4^2 - y_6 y_7 y_{12}, & y_{10}(0) &= y_6(0) y_9(0),\\
y_{11}' &= -y_8 y_{11}^2 + y_8 y_3^2 + y_8 y_4^2 - y_8 y_7 y_{12}, & y_{11}(0) &= y_8(0) y_9(0),\\
y_{12}' &= 2 y_1 y_3 + 2 y_2 y_4, & y_{12}(0) &= y_1(0)^2 + y_2(0)^2.
\end{aligned}
$$

The IVP (42) can be transformed into DAEs as follows:

(43)
$$
\begin{aligned}
y_1' &= y_3, & y_7' &= -3 y_{24},\\
y_2' &= y_4, & y_8' &= -y_{25},\\
y_3' &= -y_{13}, & y_9' &= y_{17} + y_{18} - y_{19},\\
y_4' &= -y_{14}, & y_{10}' &= y_{20} + y_{26} + y_{27} - y_{28},\\
y_5' &= 3 y_{10}, & y_{11}' &= -y_{29} + y_{30} + y_{31} - y_{32},\\
y_6' &= y_{11}, & y_{12}' &= 2 y_{22} + 2 y_{23},\\
y_{13} &= y_1 y_7, & y_{14} &= y_2 y_7,\\
y_{15} &= y_7 y_7, & y_{16} &= y_8 y_8,\\
y_{17} &= y_3 y_3, & y_{18} &= y_4 y_4,\\
y_{19} &= y_7 y_{12}, & y_{20} &= y_9 y_{11},\\
y_{21} &= y_{11} y_{11}, & y_{22} &= y_1 y_3,\\
y_{23} &= y_2 y_4, & y_{24} &= y_{10} y_{15},\\
y_{25} &= y_{11} y_{16}, & y_{26} &= y_6 y_{17},\\
y_{27} &= y_6 y_{18}, & y_{28} &= y_6 y_{19},\\
y_{29} &= y_8 y_{21}, & y_{30} &= y_8 y_{17},\\
y_{31} &= y_8 y_{18}, & y_{32} &= y_8 y_{19},
\end{aligned}
$$

with the same initial conditions as (42). This system can again be rewritten in the matrix-vector notation (32). The 12 × 32 matrix A has 21 nonzero elements A(1, 3) = 1, A(2, 4) = 1, A(3, 13) = −1, A(4, 14) = −1, A(5, 10) = 3, A(6, 11) = 1, A(7, 24) = −3, A(8, 25) = −1, A(9, 17) = 1, A(9, 18) = 1, A(9, 19) = −1, A(10, 20) = 1, A(10, 26) = 1, A(10, 27) = 1, A(10, 28) = −1, A(11, 29) = −1, A(11, 30) = 1, A(11, 31) = 1, A(11, 32) = −1, A(12, 22) = 2, A(12, 23) = 2 and

(44)
$$
\begin{aligned}
y_1 &= (y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8, y_9, y_{10}, y_{11}, y_{12})^T,\\
y_2 &= (y_{13}, y_{14}, y_{15}, y_{16}, y_{17}, y_{18}, y_{19}, y_{20}, y_{21}, y_{22}, y_{23}, y_{24}, y_{25}, y_{26}, y_{27}, y_{28}, y_{29}, y_{30}, y_{31}, y_{32})^T,\\
y_{jk} &= (y_1 y_7,\ y_2 y_7,\ y_7 y_7,\ y_8 y_8,\ y_3 y_3,\ y_4 y_4,\ y_7 y_{12},\ y_9 y_{11},\ y_{11} y_{11},\ y_1 y_3,\ y_2 y_4,\ y_{10} y_{15},\ y_{11} y_{16},\ y_6 y_{17},\ y_6 y_{18},\ y_6 y_{19},\ y_8 y_{21},\ y_8 y_{17},\ y_8 y_{18},\ y_8 y_{19})^T,\\
y_3 &= 0, \quad y_j = 0, \quad B_d = 0.
\end{aligned}
$$

5 Experimental results

Section 4 defined the systems to be solved using the proposed method. This section presents the results of the performed experiments. The experiments were run in MATLAB 2021a on a six-core AMD Ryzen 5 3600XT CPU with 32 GB of RAM, under Windows 11.

Each experiment was repeated 100 times. The median computation time ($t_m$) was calculated from the computation times of the individual runs of the solvers. The speedup-against ($s_a$) ratio of a solver is defined as

(45) $s_a = \frac{t_{m1}}{t_{m2}},$

where $t_{m1}$ denotes the median calculation time of one of the MATLAB ode solvers from Table 1 and $t_{m2}$ denotes the median calculation time of the MTSM solvers listed in the same table. A value $s_a \gg 1$ indicates a significantly faster computation using the MTSM solvers (bold in the following tables). The error (err) is calculated as the norm of the analytical-solution residual $\mathcal{K}_a$ in (26) over all time steps.

Table 1 lists the state-of-the-art solvers that are part of MATLAB and implemented MTSM solvers for MATLAB used in this section.

Table 1:

MATLAB numerical methods.

Solver Method
ode23 Bogacki-Shampine [29]
ode45 Dormand-Prince [30]
MTSM M Nonlinear MTSM solver (Section 3.1)
MTSM D Nonlinear MTSM solver for DAE systems (Sections 3.2 and 3.3)

The parameters for all used solvers and solved problems are summarized in Table 2.

Table 2:

Parameters for simulation experiments.

Parameter Value
MTSM accuracy ɛ = 1 × 10−10
Relative, absolute tolerances relTOL = absTOL = 1 × 10−13
Number of rotations n r = 2
Maximum simulation time T = 2πn r  [s]
Eccentricity e 1 = 0.25, e 2 = 0.5, e 3 = 0.75
Step size for MTSM solvers h_1 = 2π/20 s, h_2 = 2π/50 s, h_3 = 2π/100 s
Maximum order for MTSM solvers P max = 64

Note that the error (err) is similar for all of the following experiments (around 1 × 10−10), based on the set parameters in Table 2.

The results of the simulation experiments are summarized in the following tables. Table 3 shows the results for (31). The results show that the newly proposed approach outperforms state-of-the-art methods in nearly all cases.

Table 3:

Results for Example 1.

e ode23 s_a ode45 s_a MTSM D t_m [s]
0.25 111.67 1.76 0.009
0.5 76.43 1.10 0.016
0.75 55.43 0.79 0.029

Similarly, Table 4 shows the results for (35).

Table 4:

Results for Example 2.

e ode23 s_a ode45 s_a MTSM D t_m [s]
0.25 127.86 2.05 0.0073
0.5 86.12 1.09 0.014
0.75 62.76 0.8 0.0252

Tables 3 and 4 show that the systems behave similarly.

The following tables contain comparisons between the newly proposed DAEs formulation (MTSM D ), the previously used formulation (MTSM M ), and the MATLAB solvers. Table 5 shows the results for (37). The solver MTSM D is the fastest one for this problem by a wide margin.

Table 5:

Results for Example 3.

e MTSM M s_a ode23 s_a ode45 s_a MTSM D t_m [s]
0.25 2.12 115.17 2.17 0.0085
0.5 2.09 80.53 1.58 0.016
0.75 2.08 63.48 1.30 0.030

Table 6 again compares the new DAE formulation against the previous one and the state-of-the-art for (40). The solver MTSM D is again the fastest one for this problem by a wide margin.

Table 6:

Results for Example 4.

e MTSM M s_a ode23 s_a ode45 s_a MTSM D t_m [s]
0.25 2.18 162.7 3.46 0.0059
0.5 2.13 108.86 2.23 0.012
0.75 2.08 86.34 1.80 0.022

The results for (43) are in Table 7. They again show that the solver MTSM D is the fastest one by a wide margin.

Table 7:

Results for Example 5.

e MTSM M s_a ode23 s_a ode45 s_a MTSM D t_m [s]
0.25 1.16 161.16 4.09 0.0061
0.5 1.11 97.83 2.18 0.013
0.75 1.08 85.57 1.81 0.023

The number of steps for one of the problems is in Table 8. Table 8 shows that the MTSM solvers perform the smallest number of steps.

Table 8:

Number of steps for (37).

Number of steps
e ode23 ode45 MTSM M MTSM D
0.25 155,054 9,601 40 40
0.5 216,172 13,901 100 100
0.75 313,721 21,217 200 200

Figure 1 shows the solution of the problem for all values of e with the parameters specified in Table 2.

Figures 2 and 3 show the difference between the behaviour of the MTSM solvers and state-of-the-art solvers during calculation. MTSM-based solvers can change the number of used Taylor series terms (P function) based on the required accuracy. The state-of-the-art solvers have fixed order (not considering error correction), and thus, they must increase the number of steps to maintain the required accuracy.

Figure 2: MTSM order for e = 0.25 (left), e = 0.5 (right), e = 0.75 (bottom).

Figure 3: Integration step size for e = 0.75.

Finally, Figure 4 shows that the MTSM methods calculate a more stable solution in time than the state-of-the-art solvers for the following parameters: n_r = 10, e = 0.5, relTOL = absTOL = ɛ = 1 × 10−4.

Figure 4: Numerical solution of (43) for e = 0.5.

Note that the MTSM solvers achieve higher accuracy even with the same settings as the state-of-the-art solvers due to the higher-order stopping rule. The accuracy is shown in Figure 5. The ode23 and ode45 solvers are unable to complete the calculation up to time T = 20π s, while the MTSM solvers keep the set accuracy.

Figure 5: $\mathcal{K}_a$ function (26) for e = 0.5.

6 Conclusions

The main aim of this paper was to introduce a new approach to solving nonlinear problems using a high-order numerical integration method. This approach uses systems of DAEs that can contain equations with multiplication and division operations. The results of Examples 1 and 2 demonstrate that the solution of DAE systems with division is faster than that of the state-of-the-art solvers in almost all cases. The second set of examples (3, 4, and 5) compares the approach using DAEs with the approach used in the previously published papers. The results show that the solution obtained using the newly proposed approach with the MTSM solvers is faster than that obtained with the state-of-the-art solvers in all cases. Further analysis of the new approach using DAEs will be the aim of future research.


Corresponding author: Petr Veigend, Faculty of Information Technology, Brno University of Technology, Božetěchova 2, 612 66, Brno, Czech Republic, E-mail: 

Funding source: BUT FIT

Award Identifier / Grant number: FIT-S-23-8151

Acknowledgments

The article includes the results of the internal BUT FIT project FIT-S-23-8151.

  1. Conflict of Interest: The authors state no conflict of interest.

  2. Research funding: This work was supported by BUT FIT (FIT-S-23-8151).

References

[1] P. Veigend, G. Nečasová, and V. Šátek, Taylor series based numerical integration method for solution of nonlinear problems with division, 2024 IEEE 17th International Scientific Conference on Informatics Proceedings, Institute of Electrical and Electronics Engineers, Poprad, Slovakia, 2024, pp. 432–437, https://doi.org/10.1109/Informatics62280.2024.10900903.

[2] P. Veigend, G. Nečasová, and V. Šátek, Taylor series based numerical integration method, Open Comput. Sci. 11 (2021), no. 1, 60–69, https://doi.org/10.1515/comp-2020-0163.

[3] J. Kunovský, Modern Taylor Series Method, Habilitation thesis, Faculty of Electrical Engineering and Computer Science, Brno University of Technology, Brno, 1994.

[4] J. Kunovský, TKSL. Available: https://www.fit.vut.cz/research/product/51/.en

[5] MathWorks, MATLAB and Simulink software. Available: http://www.mathworks.com

[6] F. Kocina, FOS. Available: http://www.fit.vutbr.cz/iveigend/fos

[7] V. Šátek, High Performance Computing Research Group. Available: https://www.fit.vut.cz/research/group/hpc/.en

[8] V. Šátek, Stiff systems analysis, Inf. Sci. Technol. Bull. ACM Slovakia 4 (2012), no. 3, 1–11.

[9] V. Šátek, J. Kunovský, and J. Kopřiva, Advanced stiff systems detection, Acta Electrotech. Inform. 11 (2012), no. 4, 66–71, https://doi.org/10.2478/v10198-011-0045-4.

[10] F. Kocina, J. Kunovský, G. Nečasová, V. Šátek, and P. Veigend, Parallel solution of higher order differential equations, Proceedings of the 2016 International Conference on High Performance Computing & Simulation (HPCS 2016), Institute of Electrical and Electronics Engineers, Innsbruck, Austria, 2016, pp. 302–309, https://doi.org/10.1109/HPCSim.2016.7568350.

[11] R. Barrio, M. Rodríguez, A. Abad, and F. Blesa, TIDES: a free software based on the Taylor series method, Monogr. Real Acad. Cienc. Zaragoza 35 (2011), 83–95.

[12] A. Jorba and M. Zou, A software package for the numerical integration of ODE by means of high-order Taylor methods, Exp. Math. 14 (2005), 99–117, https://doi.org/10.1080/10586458.2005.10128904.

[13] Y. F. Chang and G. Corliss, ATOMF: solving ODEs and DAEs using Taylor series, Comput. Math. Appl. 28 (1994), 209–233, https://doi.org/10.1016/0898-1221(94)00193-6.

[14] K. Makino and M. Berz, COSY INFINITY version 9, Nucl. Instrum. Methods Phys. Res. A 558 (2006), no. 1, 346–350, https://doi.org/10.1016/j.nima.2005.11.109.

[15] N. S. Nedialkov and J. Pryce, Solving differential algebraic equations by Taylor series III. The DAETS code, JNAIAM J. Numer. Anal. Ind. Appl. Math. 3 (2008), 61–80.

[16] R. Barrio, F. Blesa, and M. Lara, VSVO formulation of the Taylor method for the numerical solution of ODEs, Comput. Math. Appl. 50 (2005), 93–111, https://doi.org/10.1016/j.camwa.2005.02.010.

[17] R. Barrio, Performance of the Taylor series method for ODEs/DAEs, Appl. Math. Comput. 163 (2005), 525–545, https://doi.org/10.1016/j.amc.2004.02.015.

[18] P. Mohazzabi and J. L. Becker, Numerical solution of differential equations by direct Taylor expansion, J. Appl. Math. Phys. 5 (2017), no. 3, 623–630, https://doi.org/10.4236/jamp.2017.53053.

[19] A. Baeza, S. Boscarino, P. Mulet, G. Russo, and D. Zorío, Approximate Taylor methods for ODEs, Comput. Fluids 159 (2017), 156–166, https://doi.org/10.1016/j.compfluid.2017.10.001.

[20] P. Amodio, F. Iavernaro, F. Mazzia, M. S. Mukhametzhanov, and Y. D. Sergeyev, A generalized Taylor method of order three for the solution of initial value problems in standard and infinity floating-point arithmetic, Math. Comput. Simulation 141 (2017), 24–39, https://doi.org/10.1016/j.matcom.2016.03.007.

[21] S. Dimova, I. Hristov, R. Hristova, I. Puzynin, T. Puzynina, Z. Sharipov, et al., OpenMP parallelization of multiple precision Taylor series method, arXiv (2019), arXiv:1908.09301.

[22] S. Liao, On the reliability of computed chaotic solutions of non-linear differential equations, Tellus A Dyn. Meteorol. 61 (2008), no. 4, 550–564, https://doi.org/10.1111/j.1600-0870.2009.00402.x.

[23] MathWorks, MATLAB documentation – Solve nonstiff ODEs. Available: https://se.mathworks.com/help/matlab/math/solve-nonstiff-odes.html

[24] P. Veigend, G. Nečasová, and V. Šátek, High order numerical integration method and its applications – the first 36 years of MTSM, 2019 IEEE 15th International Scientific Conference on Informatics, Institute of Electrical and Electronics Engineers, Poprad, Slovakia, 2019, pp. 25–30, https://doi.org/10.1109/Informatics47936.2019.9119258.

[25] P. Veigend, G. Nečasová, and V. Šátek, Taylor series method in numerical integration: linear and nonlinear problems, 2022 IEEE 16th International Scientific Conference on Informatics, Informatics 2022 – Proceedings, IEEE Communications Society, Poprad, Slovakia, 2023, pp. 239–244, https://doi.org/10.1109/Informatics57926.2022.10083462.

[26] E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, Springer-Verlag, Berlin, Heidelberg, 1987, https://doi.org/10.1007/978-3-662-12607-3.

[27] P. Veigend, G. Nečasová, and V. Šátek, Solving linear and nonlinear problems using Taylor series method, Open Comput. Sci. 14 (2024), no. 1, 1–15, https://doi.org/10.1515/comp-2024-0005.

[28] J. C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, Hoboken, NJ, 2016, https://doi.org/10.1002/9781119121534.

[29] L. F. Shampine and M. W. Reichelt, The MATLAB ODE suite, SIAM J. Sci. Comput. 18 (1997), no. 1, 1–22, https://doi.org/10.1137/s1064827594276424.

[30] J. R. Dormand and P. J. Prince, A family of embedded Runge-Kutta formulae, J. Comput. Appl. Math. 6 (1980), no. 1, 19–26, https://doi.org/10.1016/0771-050x(80)90013-3.

Received: 2025-03-19
Accepted: 2025-12-12
Published Online: 2026-01-20

© 2026 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
