## Markov chains and steady state vectors

In my data management class we are studying matrices. We have broken the whole unit into sections, and in small groups we each have to teach the rest of the class one section of the matrix unit. Our section is Markov chains. I understand it myself, but I don't know if I'll be able to explain/teach it to others very well.
Could someone go over the key points and put them in simple terms, so I can best explain it to the rest of my class?

Thanks
This might be fun (though I haven't read through it myself):
http://www.pankin.com/markov/intro.htm
http://cs.bilgi.edu.tr/~bulent/MarkovChains.html
www.mathworld.com

Markov chains are given by P_N = P_0^N, where P_0 is a probability matrix, right? And N is the number of generations?
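Roughly: if P_0 is the one-step transition matrix, then P_0^N holds the N-step transition probabilities, and multiplying an initial distribution by it gives the distribution after N generations. A quick pure-Python check (the 2-state matrix and numbers are made up purely for illustration) that stepping one generation at a time matches taking the matrix power:

```python
# Made-up 2-state transition matrix (rows sum to 1) and initial
# distribution Q0 -- illustrative numbers only.
P = [[0.9, 0.1],
     [0.5, 0.5]]
Q0 = [1.0, 0.0]

def vec_mat(q, M):
    """Row vector times matrix."""
    return [sum(q[i] * M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

def mat_mat(A, B):
    """Matrix product."""
    return [vec_mat(row, B) for row in A]

# Distribution after 3 generations, one step at a time
Q = Q0
for _ in range(3):
    Q = vec_mat(Q, P)

# Same distribution via the matrix power: Q0 * P^3
P3 = mat_mat(mat_mat(P, P), P)
Q_power = vec_mat(Q0, P3)

print(Q)        # step-by-step result
print(Q_power)  # matrix-power result -- the same up to rounding
```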


A Markov chain is a sequence of random variables X_1, ..., X_n, where the probability that the system is in state x_n at time t_n depends only on the state x_(n-1) that the system occupied at time t_(n-1).

Very informally, we can say that this is a record of n occurrences, where each occurrence depends only on the occurrence immediately before it and on no earlier one. (If you are a systems control person, you can think of it as a first-order linear/nonlinear system.)

A convenient way to picture this is to draw a state space diagram, with each link from one state to another labelled with the probability of that transition.

These transition probabilities can be recorded in the form of a matrix (say P). If Q_0 denotes the initial probability vector (the probabilities with which the system starts in each state), then the further development of the system can be predicted as
Q_1 = Q_0 P
Q_2 = Q_1 P
and so on, so that in general Q_n = Q_0 P^n.
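The updates above are just repeated vector-matrix products. A tiny pure-Python sketch, with a hypothetical 2-state matrix chosen only for illustration:

```python
# Made-up 2-state example: rows of P are "from" states, columns "to"
# states, so each row sums to 1.  Q0 starts the chain in state 0.
P = [[0.7, 0.3],
     [0.4, 0.6]]
Q0 = [1.0, 0.0]

def times(q, M):
    """One step of the chain: row vector q times matrix M."""
    return [sum(q[i] * M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

Q1 = times(Q0, P)  # [0.7, 0.3]
Q2 = times(Q1, P)  # [0.7*0.7 + 0.3*0.4, 0.7*0.3 + 0.3*0.6] = [0.61, 0.39]
print(Q1)
print(Q2)
```

Each entry of Q_n stays a probability, and each Q_n still sums to 1, because every row of P sums to 1.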

Note that this is just a mathematical model, and a great deal of mathematical work has been built on it. What this means is: if you ever find a real-world problem that follows this model, you can directly apply all of the known results about Markov models to your real-world problem.

Interestingly, this simple model is highly useful in many places. Many real-world engineering problems are of first-order linear/nonlinear type, and if there is probabilistic behaviour associated with them, they become perfect candidates for a Markov model.
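On the "steady state vectors" half of the thread title: a steady state vector Q* is one the chain no longer changes, Q* = Q* P, and for many chains repeatedly multiplying any starting distribution by P converges to it. A minimal sketch with a made-up 2-state matrix (for this particular P the steady state works out algebraically to (5/6, 1/6)):

```python
# Made-up 2-state transition matrix; rows sum to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(q, M):
    """One Markov step: row vector q times matrix M."""
    return [sum(q[i] * M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

q = [0.5, 0.5]        # any starting distribution works here
for _ in range(200):  # iterate Q_{n+1} = Q_n * P until it settles
    q = step(q, P)

print(q)              # approaches the steady state (5/6, 1/6)
print(step(q, P))     # applying P again no longer changes it
```

Solving Q* = Q* P together with the condition that the entries sum to 1 gives the same answer by hand, which is the usual classroom method.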

-- AI
