Understanding Markov Processes: Steven's Questions

In summary, a stochastic process is a family of random variables indexed by another variable, often time, and the Gaussian process is a simple example. With a Markov process it is harder at first to see this family of random variables the way one sees it in the Gaussian process; instead, the process is specified by an initial distribution together with transition probabilities, and from these we can find the probability of a Markov chain being in a given state after any number of steps.
  • #1
steven187
Hi all,

I'm currently researching stochastic processes. The Gaussian process wasn't hard to tackle; however, I don't understand the Markov process. I understand that a stochastic process is a family of random variables indexed by another variable. But with a Markov process I can't see this family of random variables the way I see it in the Gaussian process. How could I understand this graphically?

I also realize that we need two sets of information: is it the initial distribution (or is it an initial point?) together with the transition probabilities?

Another thing I don't understand: if these stochastic processes are related to time, how are we supposed to know the distribution at a particular point in time when only one thing can occur at a particular point in time?

Please help,

Regards

Steven
 
  • #2
Hi all,

To answer my own question: the family of random variables is essentially the same in both the Markov and Gaussian processes; what differs is how the densities are calculated, and for the Markov process it's quite remarkable. In terms of understanding it graphically, we have an initial distribution, and after each time step we may change from one state to another; it's the probability of this change of state that is used to find the density function of such a process.
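For example (a made-up two-state chain, just to have concrete numbers): say the states are 1 and 2, the initial distribution gives Pr(start in 1) = 0.3 and Pr(start in 2) = 0.7, the chance of moving from state 1 to state 2 in one step is 0.1, and the chance of moving from state 2 to state 1 is 0.5. Then after one time step,

Pr(X1 = 1) = 0.3(0.9) + 0.7(0.5) = 0.62
Pr(X1 = 2) = 0.3(0.1) + 0.7(0.5) = 0.38

so the new distribution (0.62, 0.38) is built entirely from the initial distribution and the change-of-state probabilities.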

And yes, it isn't an initial point, it's an initial distribution.

Please correct me if I am wrong.

However, I still don't understand how we are supposed to know the distribution at a particular point in time if only one thing can occur at a particular point in time. I believe these stochastic processes are not that realistic, as gaining knowledge of the distribution at each point in time seems impossible unless we make a number of assumptions.

Regards

Steven
 
  • #3
To compute the probability of the Markov chain being in a given state i after one step, we multiply the probability of being in each initial state j by the probability of going from j to i, and sum these products over all initial states.

That is,
Pr(X1=i | X0 has initial distribution d) = d(1)P(1,i) + d(2)P(2,i) + ... + d(n)P(n,i)

where d(j) is the probability of starting in state j and P(j,i) is the probability of going from state j to state i.

The same equation can also be written in matrix form:

Pr(X1=i | X0 has initial distribution d) = (dP)_i

where P is the matrix of transition probabilities (the matrix whose (j,i)th element is P(j,i)) and d is the initial distribution written as a row vector (the kth element of d is the probability of starting in state k). In other words, the distribution after one step is the row vector dP.

So, to find the distribution after k steps we just apply the same procedure k times. That is, we find the distribution after 1 step, then use it as the initial distribution to find the distribution after 2 steps and so on.

Now we can calculate the probability of a Markov chain being in state i after k steps as follows:

Pr(Xk=i | X0 has initial distribution d) = (dP^k)_i
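
To make this concrete, here is a minimal sketch in Python using NumPy (the two-state chain and its numbers are invented purely for illustration) that computes dP^k by repeated vector-matrix multiplication, matching the formulas above:

```python
import numpy as np

# Hypothetical two-state chain, for illustration only.
# P[j, i] = probability of going from state j to state i,
# so each row of P sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Initial distribution as a row vector:
# d[k] = probability of starting in state k.
d = np.array([0.3, 0.7])

def distribution_after(d, P, k):
    """Return the distribution after k steps, i.e. d P^k."""
    for _ in range(k):
        d = d @ P  # one step: new d[i] = sum_j d[j] * P[j, i]
    return d

print(distribution_after(d, P, 1))   # distribution after one step
print(distribution_after(d, P, 10))  # distribution after ten steps
```

Equivalently, one could form np.linalg.matrix_power(P, k) once and multiply; the loop above just mirrors the "use the result as the next initial distribution" procedure described above.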

Does this answer your question?
 
  • #4
Hi there,

Thanks for your response; it makes a lot more sense now. It seems like a simple probability problem, except it's a lot bigger. I now get how we obtain the distribution at each step. However, in terms of the initial distribution, how do we work out such a distribution? I mean, to be realistic and actually apply this process, we would need to know how to figure out the initial distribution. Is there a way, or is it subjective?
 
  • #5
It depends on what you are trying to model. Often you know the state the process will start in, in which case you'd use the initial distribution vector that has a 1 as the element corresponding to that state and 0s everywhere else.

Also, a certain class of Markov processes turns out to have a long-term probability distribution (the limit of the distribution as the number of steps goes to infinity) that is independent of the initial distribution you use. So, depending on what you are doing, the initial distribution might not matter a whole lot.
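
Here's a quick numerical check of that (same made-up two-state chain as in the sketch above; nothing here is from the thread itself): iterating d ← dP from two completely different starting distributions drives both toward the same limit.

```python
import numpy as np

# Same hypothetical two-state chain as in the earlier sketch.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two one-hot initial distributions: start surely in state 0,
# or start surely in state 1.
for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    for _ in range(50):
        d = d @ P  # one step of the chain
    print(d)  # both print approximately [0.833, 0.167]
```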
 

1. What is a Markov process?

A Markov process is a mathematical model used to describe the probability of transitioning from one state to another over a series of time steps. It follows the Markov property, which states that the future state of the system depends only on its current state and not on any previous states.
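
A minimal simulation sketch makes the Markov property concrete (the states and probabilities below are invented for the example): drawing the next state never requires anything but the current state.

```python
import random

# Hypothetical two-state "weather" chain: the distribution of the
# next state depends only on the current state, never on the path
# taken to reach it -- that is the Markov property.
transitions = {
    "sunny": (["sunny", "rainy"], [0.9, 0.1]),
    "rainy": (["sunny", "rainy"], [0.5, 0.5]),
}

def step(state):
    """Sample the next state using only the current state."""
    states, weights = transitions[state]
    return random.choices(states, weights=weights)[0]

state = "sunny"
for _ in range(10):
    state = step(state)  # the past path is never consulted
print(state)
```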

2. How are Markov processes used in science?

Markov processes are used in various fields of science, including biology, physics, finance, and computer science. They are useful for modeling systems that involve random or probabilistic events, such as biological processes, financial markets, and computer algorithms.

3. What are the key components of a Markov process?

The key components of a Markov process include the state space, transition probabilities, and initial state distribution. The state space is the set of all possible states that the system can be in. The transition probabilities describe the likelihood of moving from one state to another. The initial state distribution represents the probability of the system being in each state at the beginning of the process.
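
As a sketch, these three components could be written down in code like this (the state names and numbers are illustrative, not from the thread):

```python
import numpy as np

# State space: all possible states of the system.
states = ["A", "B", "C"]

# Transition probabilities: row j gives the distribution of the
# next state when the chain is currently in states[j].
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

# Initial state distribution: probability of being in each state
# at the beginning of the process.
d0 = np.array([1.0, 0.0, 0.0])

# Sanity checks: each row of P and the vector d0 must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
assert np.isclose(d0.sum(), 1.0)
```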

4. What are some real-world applications of Markov processes?

Markov processes have a wide range of applications in various fields. For example, they are used in biology to model population dynamics and genetic drift, in physics to model particle interactions, in finance to model stock prices and market trends, and in natural language processing to model language patterns and generate text.

5. What are some limitations of Markov processes?

Markov processes have some limitations. They assume that the future state of the system depends only on its current state, taking no account of external factors or events. They also assume that the transition probabilities remain constant over time, which may not be the case in some real-world situations. Additionally, Markov processes may not be suitable for modeling complex systems with a very large number of states.
