A linear homogeneous system of \(n\) differential equations with constant coefficients has the form:

\[ {\mathbf{X}'\left( t \right) = A\mathbf{X}\left( t \right),\;\;}\kern-0.3pt {\mathbf{X}\left( t \right) = \left[ {\begin{array}{*{20}{c}} {{x_1}\left( t \right)}\\ {{x_2}\left( t \right)}\\ \vdots \\ {{x_n}\left( t \right)} \end{array}} \right],\;\;}\kern-0.3pt {A = \left[ {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}}\\ {{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2n}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}} \end{array}} \right].} \]

Here \(\mathbf{X}\left( t \right)\) is an \(n\)-dimensional vector function, and \(A\) is a square \(n \times n\) matrix with constant entries.

Next, we describe a general algorithm for solving this system and consider specific cases where the solution is constructed by the method of undetermined coefficients.

We seek a solution of the given equation in the form of vector functions

\[\mathbf{X}\left( t \right) = {e^{\lambda t}}\mathbf{V},\]

where \(\lambda\) is an eigenvalue of the matrix \(A,\) and \(\mathbf{V}\) is an eigenvector associated with this eigenvalue.

The eigenvalues \({\lambda _i}\) are found from the characteristic equation

\[\det \left( {A - \lambda I} \right) = 0,\]

where \(I\) is the identity matrix.
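To make this step concrete, here is a minimal SymPy sketch that forms \(\det(A - \lambda I)\) and solves the characteristic equation. The \(2 \times 2\) matrix is a hypothetical example chosen for illustration, not one taken from the text:

```python
import sympy as sp

lam = sp.symbols('lambda')
# Hypothetical example matrix (chosen for illustration only)
A = sp.Matrix([[4, 1],
               [2, 3]])

# Characteristic equation: det(A - lambda*I) = 0
char_poly = (A - lam * sp.eye(2)).det()
eigenvalues = sp.solve(sp.Eq(char_poly, 0), lam)

print(sp.expand(char_poly))   # lambda**2 - 7*lambda + 10
print(sorted(eigenvalues))    # [2, 5]
```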

Since some of the roots \({\lambda _i}\) can be repeated, in the general case of an \(n\)th order system the characteristic equation has the form:

\[{{\left( { - 1} \right)^n}{\left( {\lambda - {\lambda _1}} \right)^{{k_1}}}{\left( {\lambda - {\lambda _2}} \right)^{{k_2}}} \cdots}\kern0pt{ {\left( {\lambda - {\lambda _m}} \right)^{{k_m}}} }={ 0.}\]

Here the following condition holds:

\[{k_1} + {k_2} + \cdots + {k_m} = n.\]

The power \({k_i}\) of the factor \(\left( {\lambda - {\lambda _i}} \right)\) is called the algebraic multiplicity of the eigenvalue \({\lambda _i}.\)

For each eigenvalue \({\lambda _i},\) we can find the associated eigenvector (or, in the case of a repeated \({\lambda _i},\) possibly several eigenvectors) from the equation

\[\left( {A - {\lambda _i}I} \right){\mathbf{V}_i} = \mathbf{0}.\]

The number of linearly independent eigenvectors associated with the eigenvalue \({\lambda _i}\) is called the geometric multiplicity of \({\lambda _i}\) (we denote it by \({s_i}\)). Thus, the eigenvalue \({\lambda _i}\) is characterized by two quantities: the algebraic multiplicity \({k_i}\) and the geometric multiplicity \({s_i}.\) The following relationship holds:

\[0 \lt {s_i} \le {k_i},\]

i.e., the geometric multiplicity \({s_i}\) (the number of linearly independent eigenvectors) does not exceed the algebraic multiplicity \({k_i}\) of the eigenvalue \({\lambda _i}.\)
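The inequality between the two multiplicities can be checked computationally. In this hypothetical SymPy sketch, two matrices share the repeated eigenvalue \(\lambda = 3\) with \(k = 2,\) but one supplies two eigenvectors while the other supplies only one:

```python
import sympy as sp

# Two hypothetical matrices with the same repeated eigenvalue lambda = 3
diagonalizable = sp.Matrix([[3, 0],
                            [0, 3]])   # k = 2, s = 2
defective = sp.Matrix([[3, 1],
                       [0, 3]])        # k = 2, s = 1

for A in (diagonalizable, defective):
    # eigenvects() returns (eigenvalue, algebraic multiplicity, eigenvectors)
    lam_i, k_i, vecs = A.eigenvects()[0]
    s_i = len(vecs)                    # geometric multiplicity
    print(lam_i, k_i, s_i)             # prints "3 2 2", then "3 2 1"
    assert 0 < s_i <= k_i
```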

A fundamental system of solutions and, hence, the general solution of the system depend strongly on the algebraic and geometric multiplicities of the eigenvalues \({\lambda _i}.\) In the simplest case \({s_i} = {k_i} = 1,\) when the eigenvalues \({\lambda _i}\) of the matrix \(A\) are distinct and each \({\lambda _i}\) is associated with one eigenvector \({\mathbf{V}_i},\) the fundamental system of solutions consists of the functions

\[{{e^{{\lambda _1}t}}{\mathbf{V}_1},\;}\kern-0.3pt{{e^{{\lambda _2}t}}{\mathbf{V}_2}, \;\ldots,\;}\kern-0.3pt{ {e^{{\lambda _n}t}}{\mathbf{V}_n}.}\]

In this case, the general solution is written as

\[ {\mathbf{X}\left( t \right) = \left[ {\begin{array}{*{20}{c}} {{x_1}\left( t \right)}\\ {{x_2}\left( t \right)}\\ \vdots \\ {{x_n}\left( t \right)} \end{array}} \right] } = {{C_1}{e^{{\lambda _1}t}}{\mathbf{V}_1} }+{ {C_2}{e^{{\lambda _2}t}}{\mathbf{V}_2} + \cdots } + {{C_n}{e^{{\lambda _n}t}}{\mathbf{V}_n} } = {\sum\limits_{i = 1}^n {{C_i}{e^{{\lambda _i}t}}{\mathbf{V}_i}} ,} \]

where \({C_i}\) are arbitrary constants.
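For the distinct-eigenvalue case, this formula can be verified directly. A minimal SymPy sketch with a hypothetical matrix assembles \(\sum {C_i}{e^{{\lambda _i}t}}{\mathbf{V}_i}\) and checks that it satisfies \(\mathbf{X}' = A\mathbf{X}\):

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
# Hypothetical matrix with distinct eigenvalues 2 and 5
A = sp.Matrix([[4, 1],
               [2, 3]])

# General solution X(t) = C1*exp(lam1*t)*V1 + C2*exp(lam2*t)*V2
X = sp.zeros(2, 1)
for (lam_i, k_i, vecs), C in zip(A.eigenvects(), (C1, C2)):
    X += C * sp.exp(lam_i * t) * vecs[0]

# Residual of X' = A*X must vanish identically
print(sp.simplify(X.diff(t) - A * X))  # Matrix([[0], [0]])
```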

Let's discuss the case of complex roots of the characteristic equation. If all the coefficients in the equations are real numbers, the complex roots occur in pairs of complex conjugate numbers \(\alpha \pm i\beta .\) To construct the solutions associated with such a pair, it is enough to take one of the numbers, for example \(\alpha + i\beta,\) and find for it the eigenvector \(\mathbf{V},\) which may also have complex coordinates. The solution is then represented by the complex-valued vector function \({e^{\left( {\alpha + i\beta } \right)t}}\mathbf{V}.\) The exponential function can be expanded by Euler's formula:

\[{{e^{\left( {\alpha + i\beta } \right)t}} }={ {e^{\alpha t}}{e^{i\beta t}} } = {{e^{\alpha t}}\left( {\cos \beta t + i\sin \beta t} \right).}\]

As a result, the part of the general solution corresponding to the pair of eigenvalues \(\alpha \pm i\beta\) can be written in the form

\[ {\mathbf{X}\left( t \right) = {e^{\alpha t}}\left( {\cos \beta t + i\sin \beta t} \right)\left( {{\mathbf{V}_\text{Re}} + i{\mathbf{V}_\text{Im}}} \right) } = {{e^{\alpha t}}\left[ {\cos \left( {\beta t} \right){\mathbf{V}_\text{Re}} - \sin \left( {\beta t} \right){\mathbf{V}_\text{Im}}} \right] } + {i{e^{\alpha t}}\left[ {\cos \left( {\beta t} \right){\mathbf{V}_\text{Im}} + \sin \left( {\beta t} \right){\mathbf{V}_\text{Re}}} \right] } = {{\mathbf{X}^{\left( 1 \right)}}\left( t \right) + i{\mathbf{X}^{\left( 2 \right)}}\left( t \right),} \]

where \(\mathbf{V} = {\mathbf{V}_\text{Re}} + i{\mathbf{V}_\text{Im}}\) is the complex eigenvector. The vector functions \({\mathbf{X}^{\left( 1 \right)}}\) and \({\mathbf{X}^{\left( 2 \right)}},\) the real and imaginary parts of the resulting expression, form two linearly independent real solutions.

As can be seen, the solution for a pair of complex conjugate eigenvalues is constructed in the same manner as for real eigenvalues. It is only necessary to separate the real and imaginary parts of the vector function at the end of the transformations.
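The extraction of two real solutions from one complex eigenvector can be illustrated in SymPy. The matrix below is a hypothetical example with eigenvalues \(1 \pm i\) (so \(\alpha = \beta = 1\)); both real solutions are checked against \(\mathbf{X}' = A\mathbf{X}\):

```python
import sympy as sp

t = sp.symbols('t', real=True)
# Hypothetical matrix with complex conjugate eigenvalues 1 +/- i
A = sp.Matrix([[1, -1],
               [1,  1]])

alpha, beta = 1, 1
lam = alpha + sp.I * beta                     # take one member of the pair
V = (A - lam * sp.eye(2)).nullspace()[0]      # complex eigenvector
V_re = V.applyfunc(sp.re)
V_im = V.applyfunc(sp.im)

# Real and imaginary parts of exp(lam*t)*V give two real solutions
X1 = sp.exp(alpha*t) * (sp.cos(beta*t)*V_re - sp.sin(beta*t)*V_im)
X2 = sp.exp(alpha*t) * (sp.cos(beta*t)*V_im + sp.sin(beta*t)*V_re)

for X in (X1, X2):
    assert sp.simplify(X.diff(t) - A * X) == sp.zeros(2, 1)
```

The nonvanishing Wronskian of `X1` and `X2` confirms they are linearly independent, so together they span the part of the general solution belonging to this conjugate pair.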

Now consider the case of repeated roots \({\lambda _i}.\) For simplicity, we assume them to be real numbers. Here again, the solution process splits into two scenarios.

If the algebraic multiplicity \({k_i}\) and geometric multiplicity \({s_i}\) of an eigenvalue \({\lambda _i}\) coincide \(\left( {{k_i} = {s_i} \gt 1} \right),\) there exist \({k_i}\) linearly independent eigenvectors for this value \({\lambda _i}.\) As a result, the eigenvalue \({\lambda _i}\) is associated with \({k_i}\) linearly independent solutions of the form

\[{{e^{{\lambda _i}t}}\mathbf{V}_i^{\left( 1 \right)},\;}\kern-0.3pt{{e^{{\lambda _i}t}}\mathbf{V}_i^{\left( 2 \right)},\; \ldots ,\;}\kern-0.3pt{{e^{{\lambda _i}t}}\mathbf{V}_i^{\left( {{k_i}} \right)}.}\]

In this case, the system of \(n\) equations has a total of \(n\) linearly independent solutions of this form, which constitute a fundamental system. Examples of such systems are given on the web page Method of Eigenvalues and Eigenvectors.

The most interesting case is that of a repeated \({\lambda _i}\) whose geometric multiplicity \({s_i}\) is less than its algebraic multiplicity \({k_i}.\) This means that there are only \({s_i}\) \(\left( {{s_i} \lt {k_i}} \right)\) eigenvectors associated with \({\lambda _i}.\) The number of eigenvectors \({s_i}\) is given by the formula

\[{{s_i} }={ n - \text{rank}\left( {A - {\lambda _i}I} \right),}\]

where \(\text{rank}\left( {A - {\lambda _i}I} \right)\) denotes the rank of the matrix \({A - {\lambda _i}I}\) evaluated at the given value of \({\lambda _i}.\)
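A quick sanity check of this rank formula in SymPy, with a hypothetical \(3 \times 3\) matrix whose eigenvalue \(\lambda = 2\) has algebraic multiplicity \(k = 3\) but only \(s = 2\) eigenvectors:

```python
import sympy as sp

# Hypothetical matrix: lambda = 2 with algebraic multiplicity k = 3
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])
n = A.rows
lam_i = 2

# s_i = n - rank(A - lam_i*I)
s_i = n - (A - lam_i * sp.eye(n)).rank()
print(s_i)  # 2

# Cross-check: the nullspace of A - lam_i*I holds exactly s_i eigenvectors
assert len((A - lam_i * sp.eye(n)).nullspace()) == s_i
```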

The solution corresponding to \({\lambda _i}\) can be sought as the product of a polynomial of degree \({k_i} - {s_i}\) and the exponential function \({e^{{\lambda _i}t}}:\)

\[ {{\mathbf{X}_i}\left( t \right) = {\mathbf{P}_{{k_i} - {s_i}}}\left( t \right){e^{{\lambda _i}t}},\;\;}\kern-0.3pt{ \text{where}}\;\; {{\mathbf{P}_{{k_i} - {s_i}}}\left( t \right) }={ {\mathbf{A}_0} + {\mathbf{A}_1}t + \cdots }+{ {\mathbf{A}_{{k_i} - {s_i}}}{t^{{k_i} - {s_i}}}.} \]

Here \({\mathbf{P}_{{k_i} - {s_i}}}\left( t \right)\) is a vector polynomial, i.e., each of its \(n\) coordinates is a polynomial of degree \({k_i} - {s_i}\) with coefficients to be determined.

In fact, the method of undetermined coefficients is needed only in the case of multiple roots \({\lambda _i},\) when the number of linearly independent eigenvectors is less than the algebraic multiplicity of the root \({\lambda _i}.\)

To find the vectors \({\mathbf{A}_0},\) \({\mathbf{A}_1}, \ldots ,\) \({\mathbf{A}_{{k_i} - {s_i}}}\) for each such eigenvalue \({\lambda _i},\) one substitutes the vector function \({\mathbf{X}_i}\left( t \right)\) into the original system of equations. Equating the coefficients of the terms with the same powers of \(t\) on the left- and right-hand sides of each equation, we obtain an algebraic system of equations for the unknown vectors \({\mathbf{A}_0},\) \({\mathbf{A}_1}, \ldots ,\) \({\mathbf{A}_{{k_i} - {s_i}}}.\)
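The procedure just described can be sketched in SymPy for a hypothetical defective matrix with \(\lambda = 2,\) \(k = 2,\) \(s = 1,\) so the ansatz is \(\mathbf{X}\left( t \right) = \left( {\mathbf{A}_0} + {\mathbf{A}_1}t \right){e^{2t}}\) with a polynomial of degree \(k - s = 1\):

```python
import sympy as sp

t = sp.symbols('t')
# Hypothetical defective matrix: lambda = 2, k = 2, s = 1
A = sp.Matrix([[2, 1],
               [0, 2]])
a01, a02, a11, a12 = sp.symbols('a01 a02 a11 a12')
A0 = sp.Matrix([a01, a02])    # undetermined coefficient vectors
A1 = sp.Matrix([a11, a12])
X = (A0 + A1 * t) * sp.exp(2 * t)

# Substitute into X' = A*X; after cancelling exp(2t), the residual is a
# polynomial in t whose coefficients must all vanish
residual = (X.diff(t) - A * X).applyfunc(
    lambda e: sp.expand(e / sp.exp(2 * t)))
eqs = []
for entry in residual:
    eqs.extend(sp.Poly(entry, t).all_coeffs())

# Solve for the dependent coefficients; a01 and a11 stay free (they play
# the role of the arbitrary constants C1, C2)
sol = sp.solve(eqs, [a02, a12], dict=True)[0]
print(sol)  # a12 = 0 and a02 = a11
```

With \(a01 = {C_1}\) and \(a11 = {C_2}\) this reproduces the general solution \(\mathbf{X}\left( t \right) = {e^{2t}}\left( {C_1} + {C_2}t,\; {C_2} \right)^T\) for this particular matrix.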

This method for constructing the general solution of a system of differential equations is sometimes referred to as the Euler method.

## Solved Problems
