Markov process

  • Thread starter steven187
  • #1
steven187
Hi all,

I'm currently researching stochastic processes. The Gaussian process wasn't hard to tackle; however, I don't understand the Markov process. I understand that a stochastic process is a family of random variables indexed by another variable, but with a Markov process I can't see this family of random variables the way I see it in the Gaussian process. How could I understand it graphically?

I also realize that we need two pieces of information. One is the transition probabilities; is the other the initial distribution or just an initial point?

Another thing I don't understand: if these stochastic processes are indexed by time, how are we supposed to know the distribution at a particular point in time when only one outcome can actually occur at that point?

Please help,

Regards

Steven
 

Answers and Replies

  • #2
steven187
Hi all,

To answer my own question: the family of random variables is essentially the same in both the Markov and the Gaussian process; what differs is how the densities are calculated, and for the Markov process it's quite remarkable. In terms of understanding it graphically, we have an initial distribution, then after one time step we move from one state to another, and it's the probability of this change of state that is used to find the density function of such a process.

And yes, it isn't an initial point, it's an initial distribution.

Please correct me if I'm wrong.

However, I still don't understand how we are supposed to know the distribution at a particular point in time if only one outcome can occur at that point in time. I believe these stochastic processes are not that realistic, as I find gaining knowledge of the distribution at each point in time impossible unless we make a number of assumptions.

Regards

Steven
 
  • #3
Cincinnatus
To compute the probability of the Markov chain being in a given state i after one step, we multiply the probability of starting in each state j by the probability of going from j to i, and then sum these contributions over all starting states j.

That is,
Pr(X1 = i) = d(1)P(1,i) + d(2)P(2,i) + ... + d(n)P(n,i)

where d(j) is the probability of starting in state j (the initial distribution of X0) and P(j,i) is the probability of going from state j to state i.
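
As a concrete sketch of that sum in Python (the two-state chain and all its numbers are invented for illustration):

Code:
# One-step update: Pr(X1 = i) = sum over j of d(j) * P(j, i).
d = [0.5, 0.5]            # initial distribution over states 0 and 1
P = [[0.9, 0.1],          # P[j][i] = probability of moving from state j to state i
     [0.4, 0.6]]

n = len(d)
d1 = [sum(d[j] * P[j][i] for j in range(n)) for i in range(n)]
print(d1)                 # approximately [0.65, 0.35]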

The same equation can also be written in matrix form:

Pr(X1 = i) = (d*P)(i)

where P is the matrix of transition probabilities (the matrix with (i,j)th element equal to P(i,j)) and d is the initial distribution written as a row vector (the kth element of d is the probability of starting in state k). In other words, the whole distribution after one step is the vector d*P.

So, to find the distribution after k steps we just apply the same procedure k times. That is, we find the distribution after 1 step, then use it as the initial distribution to find the distribution after 2 steps and so on.

Now we can calculate the probability of the Markov chain being in state i after k steps as follows:

Pr(Xk = i) = (d*P^k)(i)

that is, the distribution after k steps is the vector d*P^k.
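
As a quick numerical sketch of this (same invented two-state chain, assuming numpy is available):

Code:
import numpy as np

d = np.array([0.5, 0.5])        # initial distribution as a row vector
P = np.array([[0.9, 0.1],       # P[j, i] = probability of going from j to i
              [0.4, 0.6]])

print(d @ P)                              # distribution after 1 step
print(d @ np.linalg.matrix_power(P, 3))   # distribution after 3 steps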

Does this answer your question?
 
  • #4
steven187
Hi there,

Thanks for your response, it makes a lot more sense now. It seems like a simple probability problem, just on a much larger scale. I now get how we obtain the distribution at each step, but what about the initial distribution: how do we work out such a distribution? I mean, to be realistic and actually apply this process, we would need to know how to determine the initial distribution. Is there a way, or is it subjective?
 
  • #5
Cincinnatus
It depends on what you are trying to model. Often you know the state the process will start in, in which case you'd use the initial distribution vector that has a 1 as the element corresponding to that state and 0s everywhere else.

Also, a certain class of Markov processes (irreducible, aperiodic chains) turns out to have a long-term probability distribution (the limit of the distribution as the number of steps goes to infinity) that is independent of the initial distribution you use. So, depending on what you are doing, the initial distribution might not matter a whole lot.
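
For example (invented numbers again, assuming numpy; this little chain is irreducible and aperiodic, so the limit exists):

Code:
import numpy as np

P = np.array([[0.9, 0.1],       # made-up transition matrix
              [0.4, 0.6]])

# Two opposite starting distributions end up in the same place.
for d0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    print(d0 @ np.linalg.matrix_power(P, 100))

# Both print approximately [0.8, 0.2]: the stationary distribution pi,
# which satisfies pi = pi*P.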
 
