Latest revision as of 01:36, 6 December 2020


== Steady-State Vectors ==

A transition matrix is regular if some power of it has only positive entries; in particular, a matrix whose entries are all positive is regular. This is the case in our example: the transition matrix has only positive entries, so it must be regular. A regular Markov chain will eventually approach a fixed vector $ q $, known as the steady-state vector of the Markov chain. A steady-state vector is a vector that does not change when multiplied by the transition matrix, satisfying the equation

$ Pq = q $

This can also be written as

$ (I - P)q = 0 $

where $ I $ is the identity matrix. To demonstrate this with our weather problem,

$ \left(\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right) - \left(\begin{array}{ccc}0.4&0.1&0.3\\0.2&0.7&0.1\\0.4&0.2&0.6\end{array}\right)\right)\left(\begin{array}{c}q_{1}\\q_{2}\\q_{3}\end{array}\right) = 0 $
$ \left(\begin{array}{ccc}0.6&-0.1&-0.3\\-0.2&0.3&-0.1\\-0.4&-0.2&0.4\end{array}\right)\left(\begin{array}{c}q_{1}\\q_{2}\\q_{3}\end{array}\right) = 0 $

Here, we set up the equation to fit our example.
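Before solving the system exactly, it is worth seeing the fixed-point property numerically: repeatedly applying $ P $ to an arbitrary starting distribution drives it toward the steady state. This is a minimal plain-Python sketch (the matrix and the thirty iterations are taken from the example; the helper name <code>step</code> is ours):

```python
# Column-stochastic transition matrix from the weather example.
P = [[0.4, 0.1, 0.3],
     [0.2, 0.7, 0.1],
     [0.4, 0.2, 0.6]]

def step(P, q):
    """Multiply the matrix P by the column vector q."""
    return [sum(P[i][j] * q[j] for j in range(3)) for i in range(3)]

# Start from an arbitrary probability vector and iterate.
q = [1.0, 0.0, 0.0]
for _ in range(30):
    q = step(P, q)

print([round(x, 4) for x in q])  # approaches the steady-state vector
```

Because the chain is regular, any starting distribution converges to the same limit, which is exactly the vector the algebra below produces.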

$ \left(\begin{array}{ccc}1&0&-\frac{5}{8}\\0&1&-\frac{3}{4}\\0&0&0\end{array}\right)\left(\begin{array}{c}q_{1}\\q_{2}\\q_{3}\end{array}\right) = 0 $

Next, we reduce the matrix $ I - P $ to reduced row echelon form (RREF). The reader is encouraged to refer to the additional readings section to learn more about this process. Simply put, the rows are manipulated so that each nonzero row has a leading one, with each subsequent row's leading one lying to the right of the previous row's.
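The row reduction can be reproduced exactly with Python's standard-library fractions module. The <code>rref</code> helper below is a generic Gauss-Jordan elimination sketch written for this example, not a library routine:

```python
from fractions import Fraction as F

# I - P for the weather example, with exact rational entries.
A = [[F(6, 10), F(-1, 10), F(-3, 10)],
     [F(-2, 10), F(3, 10), F(-1, 10)],
     [F(-4, 10), F(-2, 10), F(4, 10)]]

def rref(M):
    """Reduce M to reduced row echelon form via Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            break
        # Find a row with a nonzero pivot in the current column.
        i = r
        while M[i][lead] == 0:
            i += 1
            if i == rows:
                i = r
                lead += 1
                if lead == cols:
                    return M
        M[r], M[i] = M[i], M[r]
        # Scale the pivot row to get a leading one, then clear the column.
        pivot = M[r][lead]
        M[r] = [x / pivot for x in M[r]]
        for i in range(rows):
            if i != r:
                M[i] = [a - M[i][lead] * b for a, b in zip(M[i], M[r])]
        lead += 1
    return M

R = rref(A)
for row in R:
    print(row)
```

Using exact fractions rather than floats avoids rounding noise, so the bottom row comes out identically zero, matching the matrix above.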

$ 1q_{1} + 0q_{2} - \frac{5}{8}q_{3} = 0, q_{1} = \frac{5}{8}q_{3} \\ 0q_{1} + 1q_{2} - \frac{3}{4}q_{3} = 0, q_{2} = \frac{3}{4}q_{3} \\ 0q_{1} + 0q_{2} + 0q_{3} = 0, q_{3} = s $
$ q = s\left(\begin{array}{c}\frac{5}{8}\\\frac{3}{4}\\1\end{array}\right) $

The matrix equation can be viewed as a system of equations, where each row is a separate equation and each column corresponds to a variable. Then, each equation is solved. In the first two rows, $ q_{1} $ and $ q_{2} $ are solved in terms of $ q_{3} $. However, the third equation places no constraint on $ q_{3} $, since all of its coefficients are zero. Therefore, we set $ q_{3} $ equal to a free parameter $ s $. The new $ q $ is shown above.

Since the entries of $ q $ are probabilities, they must sum to 1. To solve for $ s $,

$ s = \frac{1}{\frac{5}{8} + \frac{3}{4} + 1} = \frac{8}{19} $
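The normalization can be checked in exact arithmetic with the fractions module (a quick sketch; the variable names mirror the symbols above):

```python
from fractions import Fraction as F

# Entries of q in terms of the free parameter s, from the RREF solution.
v = [F(5, 8), F(3, 4), F(1)]

# The probabilities must sum to 1, so s is the reciprocal of the sum.
s = 1 / sum(v)
print(s)  # 8/19

q = [s * x for x in v]
assert sum(q) == 1
```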

Plugging $ s $ back into $ q $, we get

$ q = \left(\begin{array}{c}\frac{5}{19}\\\frac{6}{19}\\\frac{8}{19}\end{array}\right) \approx \left(\begin{array}{c}0.2632\\0.3158\\0.4211\end{array}\right) $
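As a final sanity check, multiplying the exact fractions back through $ P $ should reproduce $ q $ with no rounding error at all:

```python
from fractions import Fraction as F

# Transition matrix and candidate steady-state vector, in exact arithmetic.
P = [[F(4, 10), F(1, 10), F(3, 10)],
     [F(2, 10), F(7, 10), F(1, 10)],
     [F(4, 10), F(2, 10), F(6, 10)]]
q = [F(5, 19), F(6, 19), F(8, 19)]

# Pq should equal q exactly, confirming the steady state.
Pq = [sum(P[i][j] * q[j] for j in range(3)) for i in range(3)]
print(Pq == q)  # True
```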

Notice that the values of $ q_{1} $, $ q_{2} $, and $ q_{3} $ are identical to the values in the state vector from the Python demonstration after thirty iterations. The steady-state vector provides the long-run probabilities for the Markov chain.

[[ Walther MA271 Fall2020 topic2|Back to Markov Chains]]

[[Category:MA271Fall2020Walther]]
