Understanding Markov Processes: Steven's Questions

AI Thread Summary
The discussion focuses on understanding Markov processes in the context of stochastic processes, particularly comparing them to Gaussian processes. Key points include the clarification that while both processes involve families of random variables, the calculation of densities differs, with Markov processes relying on transition probabilities from an initial distribution. Participants explain how to compute the probability of transitioning between states and how to derive distributions over multiple steps. Additionally, they address the challenge of determining the initial distribution, noting that it can often be based on known starting states, and mention that some Markov processes may converge to a long-term distribution that is independent of the initial conditions. Overall, the conversation emphasizes the complexities and applications of Markov processes in modeling.
steven187
Hi all,

I'm currently researching stochastic processes. The Gaussian process wasn't hard to tackle; however, I don't understand the Markov process. I understand that a stochastic process is a family of random variables indexed by, and dependent on, another variable, but with a Markov process I can't see this family of random variables the way I see it in the Gaussian process. How could I understand this graphically?

I also realize that we need two sets of information: the transition probabilities, plus either the initial distribution or an initial point, but which of the two is it?

Another thing I don't understand: if these stochastic processes are indexed by time, how are we supposed to know the distribution at a particular point in time if only one outcome can actually occur at that time?

Please help,

Regards

Steven
 
Hi all,

To answer my own question: the family of random variables is essentially the same in both the Markov and the Gaussian process; what differs is how the densities are calculated, and for the Markov process it's quite remarkable. In terms of understanding it graphically: we have an initial distribution, then after one time step the process moves from one state to another, and it is the probability of this change of state that is used to find the density function of such a process.
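
If it helps to see this picture concretely, here is a minimal simulation sketch in Python; the two-state chain and all its numbers are invented purely for illustration:

```python
import numpy as np

# Hypothetical two-state chain (states 0 and 1), made up for illustration.
# P[j, i] is the probability of moving from state j to state i in one step.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
d0 = np.array([0.5, 0.5])  # initial distribution over the two states

rng = np.random.default_rng(0)

# Draw the starting state from the initial distribution,
# then repeatedly jump according to the current state's row of P.
state = rng.choice(2, p=d0)
path = [state]
for _ in range(10):
    state = rng.choice(2, p=P[state])
    path.append(state)

print(path)  # one realised trajectory (sample path) of the chain
```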

And yes, it isn't an initial point, it's an initial distribution.

Please correct me if I'm wrong.

However, I still don't understand how we are supposed to know the distribution at a particular point in time if only one thing can occur at that point in time. I find these stochastic processes hard to see as realistic, since gaining knowledge of the distribution at each point in time seems impossible unless we make a number of assumptions.

Regards

Steven
 
To compute the probability of the Markov chain being in any given state i after one step, we multiply the probability of starting in each state j by the probability of going from j to i, and then sum these contributions over all initial states j.

That is,
$$\Pr(X_1 = i \mid X_0 \sim d) = d(1)P(1,i) + d(2)P(2,i) + \dots + d(n)P(n,i)$$

where ##d(j)## is the probability of starting in state j and ##P(j,i)## is the probability of going from state j to state i.
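
As a quick numerical sketch of that sum (the two-state chain below is made up for illustration):

```python
import numpy as np

# Hypothetical chain: P[j, i] = probability of going from state j to state i,
# d[j] = probability of starting in state j.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
d = np.array([0.5, 0.5])

i = 0  # target state
# Pr(X1 = i) = sum over j of d(j) * P(j, i)
prob = sum(d[j] * P[j, i] for j in range(len(d)))
print(prob)  # 0.5*0.9 + 0.5*0.4 = 0.65
```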

The same calculation can be written in matrix form:

$$d^{(1)} = P^\top d$$

where ##P## is the matrix of transition probabilities (the matrix whose (j,i)th element is ##P(j,i)##) and ##d## is the initial distribution written as a column vector (the kth element of ##d## is the probability of starting in state k); ##\Pr(X_1 = i)## is then the ith entry of ##d^{(1)}##. Note the transpose: with this convention each row of ##P## sums to 1, so the next distribution is ##P^\top d## (equivalently, treating ##d## as a row vector, ##d^{(1)} = dP##).

So, to find the distribution after k steps we just apply the same procedure k times. That is, we find the distribution after 1 step, then use it as the initial distribution to find the distribution after 2 steps and so on.

Now we can calculate the distribution of the Markov chain after k steps in one go:

$$d^{(k)} = (P^\top)^k d, \qquad \Pr(X_k = i \mid X_0 \sim d) = d^{(k)}(i)$$
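
A minimal NumPy sketch of the k-step formula, reusing the made-up two-state chain from the example above (in row-vector form this is ##d P^k##, the same as ##(P^\top)^k d## in column-vector form):

```python
import numpy as np

# Same hypothetical chain as before.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
d = np.array([0.5, 0.5])  # initial distribution

k = 3
# Distribution after k steps: d @ P^k (row-vector convention),
# equivalent to (P^T)^k d with d as a column vector.
dk = d @ np.linalg.matrix_power(P, k)
print(dk)     # full distribution over states after k steps
print(dk[0])  # Pr(X_k = 0)
```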

Does this answer your question?
 
Hi there,

Thanks for your response, it makes a lot more sense now. It seems like a simple probability problem, just on a much larger scale. I now see how we get the distribution at each step, but how do we work out the initial distribution? To apply this process realistically we would need to know it. Is there a way to determine it, or is it subjective?
 
It depends on what you are trying to model. Often you know the state the process will start in, in which case you'd use the initial distribution vector that has a 1 in the element corresponding to that state and 0s everywhere else.

Also, a certain class of Markov processes (irreducible, aperiodic chains, for example) turns out to have a long-term probability distribution (the limit of the distribution as the number of steps goes to infinity) that is independent of the initial distribution you use. So, depending on what you are doing, the initial distribution might not matter a whole lot.
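
To illustrate that independence numerically, here is a small sketch (same invented two-state chain as above); two very different initial distributions end up at essentially the same limit:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Two very different starting points: definitely in state 0 vs. state 1.
d_a = np.array([1.0, 0.0])
d_b = np.array([0.0, 1.0])

Pk = np.linalg.matrix_power(P, 50)  # run the chain for many steps
print(d_a @ Pk)  # both print approximately [0.8, 0.2]:
print(d_b @ Pk)  # the chain forgets its initial distribution
```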
 