
Markov chains and steady state vectors

  1. Jun 1, 2005 #1
    In my data management class we are studying matrices. We have broken the whole unit up into sections, and in small groups we each have to teach the rest of the class one section of the matrix unit. Our section is Markov chains. I understand it myself, but I don't know if I'll be able to explain/teach it to others very well.
    Could someone go over the key points and put them in simple terms, so I can best explain it to the rest of my class?

  3. Jun 1, 2005 #2
  4. Jun 1, 2005 #3

    Markov chains are Q_N = Q_0 * P^N, where Q_0 is the initial probability vector and P is the transition matrix, right? And N is the number of generations?
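    That's the idea: applying the transition matrix one step at a time for N steps gives the same result as multiplying the initial vector by the N-th matrix power. A small sketch with NumPy, using a made-up 2x2 transition matrix (the numbers are assumptions for illustration, not from the thread):

    ```python
    import numpy as np
    from numpy.linalg import matrix_power

    # Assumed example: rows of P sum to 1 (row i gives the
    # probabilities of moving from state i to each state).
    P = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
    Q0 = np.array([0.6, 0.4])   # initial probability vector

    N = 4
    # Evolve one generation at a time: Q_{n+1} = Q_n * P
    Q = Q0.copy()
    for _ in range(N):
        Q = Q @ P

    # ...which matches the N-th matrix power applied once:
    QN = Q0 @ matrix_power(P, N)
    print(np.allclose(Q, QN))   # the two routes agree
    ```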
  5. Jun 2, 2005 #4
    A Markov chain is a sequence of random variables X_1, ..., X_n, where the probability that the system is in state x_n at time t_n depends exclusively on the state x_(n-1) of the system at time t_(n-1).

    In a very informal way we can say that this is a record of n occurrences, where each occurrence depends only on the occurrence that happened just before it and on no earlier occurrence. (If you are a control-systems person, you can think of it as a first-order linear/nonlinear system.)

    A more convenient way to think of this is to draw a state space diagram with every link from one state to other state denoting the probability of transition.

    These transition probabilities can be recorded in the form of a matrix (say P). If Q_0 denotes the initial probability vector (the probabilities with which the system starts in each state), then the further development of the system can be predicted as
    Q_1 = Q_0*P
    Q_2 = Q_1*P
    and so on.
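    Iterating Q_{n+1} = Q_n*P long enough usually drives Q_n toward a steady-state vector pi satisfying pi = pi*P, which is where the "steady state vectors" in the thread title come in. A minimal sketch, assuming a hypothetical 2-state example matrix (sunny/rainy is just an illustration):

    ```python
    import numpy as np

    # Assumed 2-state chain (state 0 = sunny, state 1 = rainy).
    # Row i holds the probabilities of moving from state i.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    Q = np.array([1.0, 0.0])  # start certain the system is in state 0
    for _ in range(50):
        Q = Q @ P             # Q_{n+1} = Q_n * P

    # For this matrix the steady-state vector is pi = (5/6, 1/6),
    # found by solving pi = pi*P with pi summing to 1; the iteration
    # converges to it regardless of the starting vector.
    print(Q)
    ```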

    Note that this is just a mathematical model, and there is a large body of mathematical work built on it. What this means is that if you ever find a real-world problem that follows this model, you can directly apply all the known results for Markov models to your problem.

    Interestingly, this simple model is highly useful in many places. Many real-world engineering problems are of first-order linear/nonlinear type, and if there is a probabilistic element associated with one, it becomes a perfect candidate for a Markov model.

    -- AI