
What is a Markov chain/process? What can you do with it? And why are lots of people talking about them?

Back to MA375 Spring 2014


Outline of the Project
A) What is a Markov chain?
-Stochastic process
-Definition of a Markov chain
-Concrete examples and several properties
B) What can we do with it?
-Transition probabilities and the transition matrix
-Computation with the transition matrix
-Matrix limits and Markov chains
C) Why is it useful?
-Applications in real life, with examples and computation


1. What is a Markov chain?
To answer this question, we should first recall the definition of a stochastic process, because a Markov chain is a special type of stochastic process. Definition of a stochastic process:
“A sequence of random variables X1, X2… is called a stochastic process with discrete time parameter” [1]

Explanation: We may think of X1, X2, … as outcomes of an experiment. In a stochastic process, they represent the states of the process. X1 is called the initial state, and Xn represents the state at time n. In an introductory probability course, we are familiar with models in which X1, …, Xn are independent and identically distributed (iid) random variables. However, note that the states of a stochastic process do not have to be independent and identically distributed. This note is especially important for eliminating confusion for first-time learners of Markov chains.


Definition of a Markov chain
A stochastic process is a Markov chain if, for each time n, the conditional distribution of Xn+j given X1, …, Xn depends only on Xn and not on the earlier states X1, …, Xn−1. In most basic cases, we consider j = 1. That is, the probability distribution of the random variable Xn+1 is determined by the value of the present state Xn alone. This means the distribution of the future state depends only on the present state and has nothing to do with the past states.
*Note: In a Markov chain, the value of the present state only determines the "probability distribution of the future state"; it does not determine the exact value of the future state.
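
In symbols, the defining property for j = 1 reads:

$ P(X_{n+1}=x_{n+1}\mid X_1=x_1,\ldots,X_n=x_n)=P(X_{n+1}=x_{n+1}\mid X_n=x_n) $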


A concrete example
Let's consider a person who has two choices of drink for breakfast every morning: milk or coffee. We let Xi = 1 if the person chooses to drink milk on the ith morning, and Xi = 2 if the person chooses to drink coffee on the ith morning. In this case, the sequence of random variables X1, X2, … is a stochastic process with two possible states at each time. To make it a Markov chain, suppose that this person usually prefers switching to repeating, and that he prefers milk a little more than coffee. Thus, assume the probability that the person chooses milk today given that he chose milk yesterday is 0.3, and the probability that he chooses coffee given that he chose coffee yesterday is 0.2. Then this sequence of states is a Markov chain.


$ P(X_{n+1}=1\mid X_n=1)=0.3,\qquad P(X_{n+1}=2\mid X_n=1)=1-0.3=0.7 $

$ P(X_{n+1}=1\mid X_n=2)=1-0.2=0.8,\qquad P(X_{n+1}=2\mid X_n=2)=0.2 $

[[Image:Milk and Coffee.PNG]]
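
To make the example concrete in code, here is a minimal simulation sketch in Python (ours, not part of the original page; the function name simulate and the dictionary encoding of the transition probabilities are our own choices):

import random

# Transition probabilities from the example: state 1 = milk, state 2 = coffee.
# P[i][j] is the probability of moving from state i to state j.
P = {1: {1: 0.3, 2: 0.7},
     2: {1: 0.8, 2: 0.2}}

def simulate(n_mornings, start=1):
    # Simulate the breakfast chain for n_mornings, starting in state `start`.
    state = start
    history = [state]
    for _ in range(n_mornings - 1):
        # The next state is drawn from the row of P for the current state only;
        # this is exactly the Markov property.
        state = 1 if random.random() < P[state][1] else 2
        history.append(state)
    return history

print(simulate(10))  # one possible run, e.g. [1, 2, 1, 2, 1, 1, 2, 1, 2, 1]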
2. What can we do with it?
Transition probabilities and the transition matrix
We define $ P_{ij} $ to be the probability that the future state is j given that the present state is i:

$ P(X_{n+1}=j\mid X_n=i)=P_{ij} $

These are called the transition distributions. If a Markov chain's transition distributions do not depend on n (as in the above example), then we say the Markov chain has stationary transition distributions.
In particular, a Markov chain with stationary transition distributions can be represented by a transition matrix. For instance, the transition matrix for the above example is:

$ \begin{pmatrix} 0.3 & 0.7 \\ 0.8 & 0.2 \end{pmatrix} $
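
Powers of the transition matrix give multi-step transition probabilities (this is the "computation with the transition matrix" mentioned in the outline). For example, squaring the matrix above gives the two-step (two-morning) transition probabilities:

$ P^2=\begin{pmatrix} 0.3 & 0.7 \\ 0.8 & 0.2 \end{pmatrix}\begin{pmatrix} 0.3 & 0.7 \\ 0.8 & 0.2 \end{pmatrix}=\begin{pmatrix} 0.65 & 0.35 \\ 0.40 & 0.60 \end{pmatrix} $

So, for instance, the probability that the person drinks milk two mornings from now, given milk today, is 0.65.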

The long-run behavior of the chain is captured by the limit of the powers of the transition matrix A:

$ \lim_{n \to \infty}A^n $

For a chain like the one above, all rows of this limit are equal to the same probability vector, which gives the long-run fraction of mornings the person spends on each drink.
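
As a quick numerical check, here is a minimal NumPy sketch (ours, not from the original page) that raises the example's transition matrix to a high power; both rows converge to approximately (0.533, 0.467), i.e. (8/15, 7/15):

import numpy as np

# Transition matrix from the milk/coffee example.
A = np.array([[0.3, 0.7],
              [0.8, 0.2]])

# For this chain, the rows of A^n converge to the stationary
# distribution (8/15, 7/15) ≈ (0.5333, 0.4667) as n grows.
print(np.linalg.matrix_power(A, 50))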

More generally, for a Markov chain with k states, the transition matrix is: $ P=\begin{bmatrix} P_{11} & \cdots & P_{1k} \\ \vdots & \ddots & \vdots \\ P_{k1} & \cdots & P_{kk} \end{bmatrix} $
Note that each row of a transition matrix sums to 1, since from any present state i the chain must move to some state j.
