Latest revision as of 03:04, 19 November 2017

Systems of ODEs

A slecture by Yijia Wen

4.0 Concept

Similar to systems of ordinary algebraic equations, several ODEs can also form a system. A typical system of $ n $

coupled first-order ODEs looks like:

$ \frac{dx_1}{dt}=f_1(t,x_1,x_2,...x_n) $

$ \frac{dx_2}{dt}=f_2(t,x_1,x_2,...x_n) $

...

$ \frac{dx_n}{dt}=f_n(t,x_1,x_2,...x_n) $

To solve them, we introduce matrices; the idea is similar to using matrix operations to solve systems of linear equations (e.g. by Gaussian elimination). There is an essential theorem for this. If $ \frac{d\bold{x}}{dt}=A\bold{x} $, and the $ n×n $ matrix $ A $ has $ n $ distinct real eigenvalues with corresponding eigenvectors, then the general solution is $ \bold{x}=C_1 e^{\lambda_1 t} \bold{v_1}+C_2 e^{\lambda_2 t} \bold{v_2}+...+C_n e^{\lambda_n t} \bold{v_n} $, where the $ \lambda_n $ are the eigenvalues, the $ \bold{v_n} $ are the corresponding eigenvectors, and the $ C_n $ are arbitrary constants. Strictly, the theorem is derived from the matrix exponential, i.e. the power series for $ e^{At} $; we do not prove it here, but use a more intuitive analogy instead.


In one-dimensional space, a single linear ODE $ \frac{dx}{dt}=\lambda x $ has the solution $ x=Ae^{\lambda t} $, where $ \lambda $ is a constant. Similarly, in two- (or higher-)dimensional space, a linear ODE system $ \frac{d\bold{x}}{dt}=A\bold{x} $ has solutions of the form $ \bold{x}=e^{\lambda t} \bold{v} $, where $ A $ is a constant matrix, $ \lambda $ is a constant and $ \bold{v} $ is a constant vector. For a matrix, the natural choices of such a constant and constant vector are its eigenvalues and eigenvectors. In this tutorial, we work with systems of two ODEs (hence $ 2×2 $ matrices) as examples.
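The claim that $ \bold{x}=e^{\lambda t} \bold{v} $ solves $ \frac{d\bold{x}}{dt}=A\bold{x} $ for an eigenpair $ (\lambda, \bold{v}) $ can be checked numerically. The sketch below is an addition, not part of the original tutorial; it assumes NumPy and uses a hypothetical symmetric $ 2×2 $ matrix, comparing a finite-difference derivative of $ e^{\lambda t}\bold{v} $ against $ A\bold{x}(t) $:

```python
import numpy as np

# Hypothetical 2x2 example matrix (chosen for illustration, not from the text).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and (unit) eigenvectors of A.
lams, vecs = np.linalg.eig(A)
lam, v = lams[0], vecs[:, 0]

# Candidate solution x(t) = e^{lam t} v of dx/dt = A x.
def x(t):
    return np.exp(lam * t) * v

# Check dx/dt = A x at t = 0.7 with a central finite difference.
t, h = 0.7, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dxdt, A @ x(t), atol=1e-4)
```

The same check fails for a generic (non-eigen) vector $ \bold{v} $, which is exactly why eigenvectors are the right building blocks here.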


First of all, we should be familiar with how to convert a system of linear equations to the matrix form. The same idea is used to convert a system of linear ODEs to the matrix form. For example, consider the system of linear ODEs

$ \frac{dx}{dt}=8x+2y $,

$ \frac{dy}{dt}=2x+5y $.

We separate the variables and their coefficients to get the matrix form $ \begin{bmatrix} \frac{dx}{dt}\\ \frac{dy}{dt} \end{bmatrix} = \begin{bmatrix} 8 & 2\\ 2 & 5 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $

From here we can start our journey.


4.1 ODE Systems with Real Eigenvalues

When we are given a matrix, the first thing to do is to find its identifying properties, the things that distinguish it from any other matrix. The most intrinsic properties of a matrix are its eigenvalues and eigenvectors.

Consider the theorem and system from 4.0, $ \begin{bmatrix} \frac{dx}{dt}\\ \frac{dy}{dt} \end{bmatrix} = \begin{bmatrix} 8 & 2\\ 2 & 5 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $. We can easily calculate the eigenvalues $ \lambda_1=4 $, $ \lambda_2=9 $, and the corresponding eigenvectors $ \bold{v_1}=\begin{bmatrix} 1\\ -2 \end{bmatrix} $, $ \bold{v_2}=\begin{bmatrix} 2\\ 1 \end{bmatrix} $. Plugging them into the standard form of the general solution in 4.0, the general solution to this system of linear ODEs is $ \bold{x}=C_1 e^{4t} \begin{bmatrix} 1\\ -2 \end{bmatrix} + C_2 e^{9t} \begin{bmatrix} 2\\ 1 \end{bmatrix} $, where $ \bold{x}=\begin{bmatrix} x\\ y \end{bmatrix} $.
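The hand computation above can be double-checked numerically. This sketch assumes NumPy (which the original does not use); note that `numpy.linalg.eig` returns unit-length eigenvectors in an arbitrary order, so we sort and compare directions up to sign:

```python
import numpy as np

# Coefficient matrix of the example system from 4.0/4.1.
A = np.array([[8.0, 2.0],
              [2.0, 5.0]])

lams, vecs = np.linalg.eig(A)
order = np.argsort(lams)            # sort so lambda_1 = 4 comes first
lams, vecs = lams[order], vecs[:, order]

assert np.allclose(lams, [4.0, 9.0])

# Compare eigenvector *directions*: normalise and fix the sign so that the
# largest-magnitude component is positive.
def unit(v):
    v = v / np.linalg.norm(v)
    return v if v[np.argmax(np.abs(v))] > 0 else -v

assert np.allclose(unit(vecs[:, 0]), unit(np.array([1.0, -2.0])))
assert np.allclose(unit(vecs[:, 1]), unit(np.array([2.0, 1.0])))
```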

If initial values are given, we can plug them in and solve for the constants $ C_1 $ and $ C_2 $ to get an explicit solution.
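For instance, with a hypothetical initial value $ \bold{x}(0)=\begin{bmatrix} 3\\ -1 \end{bmatrix} $ (chosen for illustration, not from the original), finding $ C_1 $ and $ C_2 $ amounts to solving a $ 2×2 $ linear system whose coefficient matrix has the eigenvectors as columns. A sketch assuming NumPy:

```python
import numpy as np

# Eigenvectors of the 4.1 example as columns, and a hypothetical
# initial value x(0) (not from the text).
V = np.array([[1.0, 2.0],
              [-2.0, 1.0]])
x0 = np.array([3.0, -1.0])

# x(0) = C1*v1 + C2*v2  <=>  V @ [C1, C2] = x0
C = np.linalg.solve(V, x0)
print(C)   # -> [1. 1.]
```

So for this initial value the explicit solution would be $ \bold{x}=e^{4t}\bold{v_1}+e^{9t}\bold{v_2} $.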


Refer to http://tutorial.math.lamar.edu/Classes/DE/RealEigenvalues.aspx for further explanation of the phase portrait, an understanding from the geometrical perspective.

Sometimes the eigenvalues will be repeated; refer to http://tutorial.math.lamar.edu/Classes/DE/RepeatedEigenvalues.aspx for a solution to that case, as it explains this more clearly than I could. :)


4.2 ODE Systems with Complex Eigenvalues

Sometimes complex numbers come up when solving the eigenvalue problem for the matrix. We know that the complex roots of a polynomial equation with all real coefficients always come in conjugate pairs. Similarly, in a system of linear ODEs with all real coefficients (so all entries of the matrix are real), complex eigenvalues also occur in conjugate pairs, and they correspond to pairs of complex conjugate eigenvectors as well. We still put them into the standard form of the solution as usual first, but more work is needed, since we want real solutions to the ODE system.


Consider a linear system of two ODEs $ \begin{bmatrix} \frac{dx}{dt}\\ \frac{dy}{dt} \end{bmatrix} = \begin{bmatrix} 5 & 2\\ -4 & 1 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $. It is easy to find its eigenvalues $ \lambda_1=3+2i $, $ \lambda_2=3-2i $. Their corresponding eigenvectors are $ \bold{v_1}=\begin{bmatrix} 1\\ -1+i \end{bmatrix} $, $ \bold{v_2}=\begin{bmatrix} 1\\ -1-i \end{bmatrix} $. Plug them into the standard form in 4.0 to get the general solution $ \bold{x}=C_1 e^{(3+2i)t} \begin{bmatrix} 1\\ -1+i \end{bmatrix} + C_2 e^{(3-2i)t} \begin{bmatrix} 1\\ -1-i \end{bmatrix} $, where $ \bold{x}=\begin{bmatrix} x\\ y \end{bmatrix} $.
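These complex eigenpairs can be cross-checked numerically. A sketch assuming NumPy (the original computes them by hand); for a real matrix, `numpy.linalg.eig` returns the conjugate eigenvalues together with conjugate eigenvector columns:

```python
import numpy as np

# Coefficient matrix of the 4.2 example.
A = np.array([[5.0, 2.0],
              [-4.0, 1.0]])

lams, vecs = np.linalg.eig(A)

# The eigenvalues form the conjugate pair 3 +/- 2i.
assert np.allclose(sorted(lams, key=lambda z: z.imag), [3 - 2j, 3 + 2j])

# Each returned column really is an eigenvector: A v = lambda v.
for lam, v in zip(lams, vecs.T):
    assert np.allclose(A @ v, lam * v)
```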


Now we simplify further to remove the complex part of the solution by taking the real part only. Since the eigenvalues and eigenvectors both come in conjugate pairs, the two terms of the solution are complex conjugates of each other. Given the initial value $ \bold{x}(0)=\begin{bmatrix} 2\\ 5 \end{bmatrix} $, we work out the constants $ C_1=1-\frac{7}{2}i $, $ C_2=1+\frac{7}{2}i $, which again form a conjugate pair. Hence, $ \bold{x}=(1-\frac{7}{2}i) e^{(3+2i)t} \begin{bmatrix} 1\\ -1+i \end{bmatrix} + (1+\frac{7}{2}i) e^{(3-2i)t} \begin{bmatrix} 1\\ -1-i \end{bmatrix} $,
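The constants can also be found numerically. As a sketch (assuming NumPy, not part of the original), solving $ V\bold{c}=\bold{x}(0) $ with the complex eigenvectors as the columns of $ V $ reproduces the conjugate pair:

```python
import numpy as np

# Complex eigenvectors of the 4.2 example as columns.
V = np.array([[1.0, 1.0],
              [-1 + 1j, -1 - 1j]])
x0 = np.array([2.0, 5.0])

# x(0) = C1*v1 + C2*v2  <=>  V @ [C1, C2] = x0
C = np.linalg.solve(V, x0)
assert np.allclose(C, [1 - 3.5j, 1 + 3.5j])   # conjugate pair, as claimed
```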

$ =2 Re [(1-\frac{7}{2}i) e^{(3+2i)t} \begin{bmatrix} 1\\ -1+i \end{bmatrix}] $, where "Re" denotes the real part: the two terms are complex conjugates, so their sum is twice the real part of either one,

$ =2 Re [(1-\frac{7}{2}i) e^{3t} e^{2ti} \begin{bmatrix} 1\\ -1+i \end{bmatrix}] $, by the property of exponents,

$ =2e^{3t} Re [(1-\frac{7}{2}i) e^{2ti} \begin{bmatrix} 1\\ -1+i \end{bmatrix}] $, as $ e^{3t} $ is real,

$ =2e^{3t} Re [(1-\frac{7}{2}i) (cos2t+isin2t) \begin{bmatrix} \begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix} \end{bmatrix}] $, by Euler's formula $ e^{\theta i}=cos\theta + i sin\theta $ and splitting the eigenvector into its real and imaginary parts, $ \begin{bmatrix} 1\\ -1+i \end{bmatrix} = \begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix} $,

$ =2e^{3t} Re[((cos2t+isin2t)-\frac{7}{2}i cos2t+\frac{7}{2}sin2t) \begin{bmatrix} \begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix} \end{bmatrix}] $, by expanding the product,

$ =2e^{3t} Re[((cos2t+\frac{7}{2}sin2t)+i(sin2t-\frac{7}{2}cos2t)) \begin{bmatrix} \begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix} \end{bmatrix}] $, by collecting the real and imaginary parts,

$ =2e^{3t} [(cos2t+\frac{7}{2}sin2t) \begin{bmatrix} 1\\ -1 \end{bmatrix} -(sin2t-\frac{7}{2}cos2t) \begin{bmatrix} 0\\ 1 \end{bmatrix}] $, by expanding the product and discarding the imaginary part.


Here we finally have the explicit solution to the linear system with complex eigenvalues. Refer to http://tutorial.math.lamar.edu/Classes/DE/ComplexEigenvalues.aspx for further explanation from the geometrical perspective, for a more complete understanding.
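As a final sanity check (a sketch assuming NumPy, not part of the original), the real solution just derived satisfies both the initial condition and the ODE system:

```python
import numpy as np

# Coefficient matrix of the 4.2 example.
A = np.array([[5.0, 2.0],
              [-4.0, 1.0]])

def x(t):
    # The explicit real solution derived above.
    a = np.cos(2*t) + 3.5*np.sin(2*t)
    b = np.sin(2*t) - 3.5*np.cos(2*t)
    return 2*np.exp(3*t) * (a*np.array([1.0, -1.0]) - b*np.array([0.0, 1.0]))

# It satisfies the initial condition x(0) = [2, 5] ...
assert np.allclose(x(0), [2.0, 5.0])

# ... and the ODE dx/dt = A x (central finite difference at t = 0.3).
t, h = 0.3, 1e-6
assert np.allclose((x(t + h) - x(t - h)) / (2*h), A @ x(t), rtol=1e-5)
```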


4.3 Exercises


4.4 References

Faculty of Mathematics, University of North Carolina at Chapel Hill. (2016). Linear Systems of Differential Equations. Chapel Hill, NC., USA.

Institute of Natural and Mathematical Science, Massey University. (2017). 160.204 Differential Equations I: Course materials. Auckland, New Zealand.

Robinson, J. C. (2003). An introduction to ordinary differential equations. New York, NY., USA: Cambridge University Press.
