Physics Forums

steven187 Oct13-06 03:32 AM

Markov process
 
Hi all,

I'm currently researching stochastic processes. The Gaussian process wasn't hard to tackle; however, I don't understand the Markov process. I understand that a stochastic process is a family of random variables indexed by another variable (usually time), but with a Markov process I can't see this family of random variables the way I can in the Gaussian process. How could I understand it graphically?

I also realise that we need two sets of information: is it the initial distribution (or an initial point) and the transition probabilities?

Another thing I don't understand: if these stochastic processes are indexed by time, how are we supposed to know the distribution at a particular point in time if only one outcome can occur at a particular point in time?

Please help,

Regards

Steven

steven187 Oct15-06 04:52 AM

Hi all,

To answer my own question: the family of random variables is essentially the same in both the Markov and the Gaussian process; what differs is how the densities are calculated, and for the Markov process it's quite remarkable. In terms of understanding it graphically, we have an initial distribution, then after one time step we move from one state to another, and it's the probability of this change of state that is used to find the density function of such a process.
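
If I code up that mental picture (a minimal sketch in Python with a made-up 2-state chain; the states and probabilities are purely illustrative):

import random

# Hypothetical transition probabilities: T[s][t] = Pr(next state t | current state s).
T = {"A": {"A": 0.9, "B": 0.1},
     "B": {"A": 0.5, "B": 0.5}}

# Draw the starting state from a (made-up) initial distribution.
state = random.choices(["A", "B"], weights=[0.7, 0.3])[0]
path = [state]
for _ in range(10):  # one time step = one possible change of state
    nxt = T[state]
    state = random.choices(list(nxt.keys()), weights=list(nxt.values()))[0]
    path.append(state)
print(path)  # one realisation of the process, e.g. ['A', 'A', 'B', ...]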

And yes, it isn't an initial point, it's an initial distribution.

Please correct me if I'm wrong.

However, I still don't understand how we are supposed to know the distribution at a particular point in time if only one outcome can occur at a particular point in time. I believe these stochastic processes are not that realistic, as gaining knowledge of the distribution at each point in time seems impossible unless we make a number of assumptions.

Regards

Steven

Cincinnatus Oct15-06 01:52 PM

To compute the probability of the Markov chain being in a given state i after one step, we take the probability of being in each initial state j, multiply it by the probability of going from j to i, and sum these contributions over all initial states j.

That is,

Pr(X1 = i | X0 has initial distribution d) = d(1)P(1,i) + d(2)P(2,i) + ... + d(n)P(n,i)

where d(j) is the probability of starting in state j and P(j,i) is the probability of going from state j to state i.
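
In code, that sum looks like this (a minimal sketch with a made-up 3-state chain; the numbers in d and T are purely illustrative):

# d[j] is the probability of starting in state j;
# T[j][i] is the probability of going from state j to state i.
d = [0.5, 0.3, 0.2]
T = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]  # each row sums to 1

n = len(d)
d1 = [sum(d[j] * T[j][i] for j in range(n)) for i in range(n)]
print(d1)  # roughly [0.51, 0.32, 0.17]; the entries still sum to 1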

The same equation can also be written as:

Pr(X1 = i | X0 has initial distribution d) = (dP)(i)

where P is the matrix of transition probabilities (the matrix whose (j,i) entry is P(j,i)) and d is the initial distribution written as a row vector (the k-th entry of d is the probability of starting in state k). Note the ordering: with d as a row vector it is dP, so that entry i picks up the sum over j of d(j)P(j,i) above.

So, to find the distribution after k steps we just apply the same procedure k times. That is, we find the distribution after 1 step, then use it as the initial distribution to find the distribution after 2 steps and so on.
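
Explicitly, the repeated update is just a loop (same hypothetical chain as in the previous sketch):

d = [0.5, 0.3, 0.2]
T = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]

k = 5
for _ in range(k):  # each pass turns the distribution after m steps into the one after m+1
    d = [sum(d[j] * T[j][i] for j in range(len(T))) for i in range(len(T))]
print(d)  # distribution after k steps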

Now we can calculate the probability of a Markov chain being in state i after k steps as follows:

Pr(Xk = i | X0 has initial distribution d) = (dP^k)(i)
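
With numpy, dP^k is a single vector-matrix product (same made-up chain as before; matrix_power does the repeated multiplication):

import numpy as np

P = np.array([[0.9, 0.1, 0.0],   # row j holds the transition
              [0.2, 0.7, 0.1],   # probabilities out of state j
              [0.0, 0.3, 0.7]])
d = np.array([0.5, 0.3, 0.2])    # initial distribution as a row vector

k = 5
dk = d @ np.linalg.matrix_power(P, k)  # distribution after k steps
print(dk, dk.sum())  # entries still sum to 1

This matches applying the one-step update k times.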

Does this answer your question?

steven187 Oct15-06 02:17 PM

Hi there,

Thanks for your response, it makes a lot more sense now. It seems like a simple probability problem, just on a larger scale. I now get how we find the distribution at each step; however, in terms of the initial distribution, how do we work out such a distribution? I mean, to be realistic and actually apply this process we would need to know how to figure out the initial distribution. Is there a way, or is it subjective?

Cincinnatus Oct15-06 03:35 PM

It depends on what you are trying to model. Often you know the state the process will start in, in which case you'd use the initial distribution vector that has a 1 in the entry corresponding to that state and 0s everywhere else.

Also, a certain class of Markov processes turns out to have a long-term probability distribution (the limit of the distribution as the number of steps goes to infinity) that is independent of the initial distribution you use. So, depending on what you are doing, the initial distribution might not matter a whole lot.
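
For example (a toy 2-state chain I made up; the class in question includes the irreducible aperiodic chains):

import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two very different initial distributions...
for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    print(d @ np.linalg.matrix_power(P, 50))
# ...both land on (approximately) the same long-term distribution,
# [5/6, 1/6], which solves pi = pi P.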

