Why Is My Markov Chain Simulation Producing Unexpected Output?

In summary, a few errors in the code explain the unexpected output: the for and while keywords must sit on the same line as their loop expressions, the update line contains the typo nor * B(:,i), 1) where norm(A*B(:,i), 1) is intended, the counter must start at i = 1 because MATLAB indexing begins at 1 (so B(:,0) is invalid), and the steady-state vector is the last column of B, obtained with X = B(:,end). Also, dim is not a MATLAB function; use [~, c] = max(X) to find the most frequently occupied state.
  • #1
Jamin2112

Homework Statement




[Attached screenshots of the problem statement: screen-capture-1-19.png and screen-capture-2-10.png]


Homework Equations



Seems pretty straightforward ...

The Attempt at a Solution



Here's what I put:


P = zeros(10,10);
P(1,2) = 1;
P(2,1) = 1/2;
P(2,3) = 1/2;
P(3,1) = 1/2;
P(3,4) = 1/2;
P(4,1) = 1/3;
P(4,2) = 1/3;
P(4,5) = 1/3;
P(5,1) = 1/2;
P(5,6) = 1/2;
P(6,1) = 1/4;
P(6,2) = 1/4;
P(6,3) = 1/4;
P(6,7) = 1/4;
P(7,1) = 1/2;
P(7,8) = 1/2;
P(8,1) = 1/4;
P(8,2) = 1/4;
P(8,4) = 1/4;
P(8,9) = 1/4;
P(9,1) = 1/3;
P(9,3) = 1/3;
P(9,10) = 1/3;
P(10,1) = 1/3;
P(10,2) = 1/3;
P(10,5) = 1/3;

A = P';
save A.dat A -ASCII

x0 = [.1 .1 .1 .1 .1 .1 .1 .1 .1 .1]';

for
i = 1:5
Q(:,i) = A^i * x0;
end

for i = 1:4
p10(i,1) = Q(10,i);
end

save p10.dat p10 -ASCII

b0 = x0;
i = 0;
tol = 10^(-8);

B(:,1) = A * b0 / norm(A*b0, 1);

while
norm(B(:,i+1)-B(:,i), 1) / norm(B(:,i+1)) > tol
i = i+1
B(:,i+1) = A * B(:,i) / nor * B(:,i), 1);
end

X = B(:,i);
save ssVect.dat X -ASCII

c = dim(max(B(:,i)))
save mostOften.dat c -ASCII
save counter.dat i -ASCII




But I'm getting weird output like

B =
0.0333 0.0083 0.0042 0.0010 0.0005
0.0083 0 0 0 0
0.0042 0 0 0 0
0.0010 0 0 0 0
0.0005 0 0 0 0

When B is supposed to just be a vector ... What's going on here?
 
  • #2


Hi there! A few things are going wrong in your code.

First, in MATLAB the for and while keywords must be on the same line as the loop expression (or the line must be continued with "..."). As written, "for" on a line by itself followed by "i = 1:5" is an incomplete statement, and the same problem applies to your while loop.

Second, the update line inside the while loop has a typo: "nor * B(:,i), 1)" should be "norm(A*B(:,i), 1)". While you're at it, make the denominator of the convergence test a 1-norm as well, "norm(B(:,i+1), 1)", so you're comparing like with like.

Third, you initialize i = 0, so the very first evaluation of the loop condition asks for B(:,0). MATLAB indexing starts at 1, so that's an error. Compute the second iterate before entering the loop and start the counter at 1:

i = 1;
B(:,1) = A*b0 / norm(A*b0, 1);
B(:,2) = A*B(:,1) / norm(A*B(:,1), 1);
while norm(B(:,i+1) - B(:,i), 1) / norm(B(:,i+1), 1) > tol
    i = i + 1;
    B(:,i+1) = A*B(:,i) / norm(A*B(:,i), 1);
end

Fourth, B is meant to grow one column per iteration, so the steady-state vector is its last column: use X = B(:,end) rather than X = B(:,i).

Finally, dim is not a MATLAB function. To find which state the system occupies most often, you want the index of the largest entry of the steady-state vector: [~, c] = max(X);

I hope this helps! Let me know if you have any other questions.
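For anyone who wants to sanity-check the power-iteration part outside MATLAB, here is a minimal sketch in plain Python on a small hypothetical 3-state chain. The matrix P below is an illustration, not the 10-state chain from the assignment, and the code uses the row-vector convention x <- x*P, which is equivalent to the A = P' column convention above:

```python
def steady_state(P, x0, tol=1e-8, max_iter=10000):
    # Power iteration: repeatedly apply x <- x * P (row-vector convention),
    # renormalising in the 1-norm, until successive iterates agree to tol.
    n = len(P)
    x = x0[:]
    for _ in range(max_iter):
        y = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
        s = sum(y)
        y = [v / s for v in y]
        if sum(abs(y[k] - x[k]) for k in range(n)) < tol:
            return y
        x = y
    return x

# Hypothetical 3-state chain; P[i][j] = probability of moving i -> j.
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
pi = steady_state(P, [1/3, 1/3, 1/3])
# Analytically, pi = (1/3, 4/9, 2/9) for this chain, so state 2
# (index 1) is the most frequently occupied state.
```

Replacing P with the assignment's 10-state matrix and x0 with the uniform start reproduces exactly what the corrected MATLAB loop computes.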
 

Related to Why Is My Markov Chain Simulation Producing Unexpected Output?

1. Why is my Markov chain not converging?

There could be several reasons why your Markov chain is not converging. One possibility is that your transition matrix is not well-defined or is not appropriate for your data. Another reason could be that your initial state distribution is not representative of the true distribution. It is also possible that your chain is too short or that there is a lack of mixing between states. It is important to carefully examine and troubleshoot each of these components in order to identify the issue.
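A quick way to rule out the first cause is to verify that every row of the transition matrix is a probability distribution. The is_stochastic helper below is an illustrative sketch, not a library routine:

```python
def is_stochastic(P, tol=1e-12):
    # A valid transition matrix needs non-negative entries
    # and rows that each sum to 1.
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

good = [[0.5, 0.5], [0.25, 0.75]]
bad  = [[0.5, 0.6], [0.25, 0.75]]   # first row sums to 1.1
```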

2. How do I know if my Markov chain is mixing well?

One way to assess the mixing of your Markov chain is by looking at the autocorrelation plot of the chain's states. If the plot shows a rapid decrease in autocorrelation as lag increases, it indicates good mixing. Another method is to compare the distribution of states from different starting points in the chain - if they are similar, it suggests good mixing. Additionally, you can use statistical tests such as the Gelman-Rubin diagnostic to evaluate convergence and mixing.
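The sample autocorrelation mentioned above can be sketched in a few lines of Python; the autocorr helper is illustrative, not from any particular library:

```python
def autocorr(xs, lag):
    # Sample autocorrelation of a numeric sequence at a given lag:
    # covariance between xs[t] and xs[t+lag], divided by the variance.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean)
              for t in range(n - lag)) / n
    return cov / var

# A perfectly alternating sequence is strongly anti-correlated at lag 1
# and positively correlated at lag 2 -- the opposite of good mixing.
seq = [0, 1] * 50
```

For a well-mixing chain, autocorr(states, k) should fall toward zero quickly as k grows.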

3. Can I use a Markov chain for non-stationary data?

Markov chains are typically used for stationary data, where the transition probabilities do not change over time. However, there are methods for incorporating non-stationarity into Markov chains, such as using time-varying transition matrices or adapting the chain's parameters over time. It is important to carefully consider the nature of your data and whether a Markov chain is appropriate for your specific scenario.
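A time-varying chain can be sketched by simply applying a different transition matrix at each step; the propagate helper below is a hypothetical illustration of that idea:

```python
def propagate(x, Ps):
    # Push an initial distribution x through a sequence of (possibly
    # different) row-stochastic transition matrices P_1, P_2, ...
    for P in Ps:
        n = len(P)
        x = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
    return x

swap = [[0.0, 1.0], [1.0, 0.0]]   # deterministically swap the two states
eye  = [[1.0, 0.0], [0.0, 1.0]]   # identity: stay put
```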

4. How can I improve the performance of my Markov chain?

There are several ways to improve the performance of a Markov chain. One approach is to run the chain for longer: a longer run reduces bias and improves mixing. Another is to choose an initial state distribution closer to the target distribution, which shortens the burn-in period. For Markov chain Monte Carlo specifically, the proposal mechanism matters a great deal; samplers such as Metropolis-Hastings can be tuned (for example, by adjusting the proposal step size), and annealing-style schedules can help the chain escape local modes.
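As a minimal sketch of Metropolis-Hastings, here is a symmetric random walk on a ring of discrete states with a target distribution proportional to the given weights. This is a toy illustration under stated assumptions, not a tuned sampler:

```python
import random

def metropolis_hastings(weights, steps, seed=0):
    # Symmetric random-walk Metropolis on states 0..n-1 arranged in a
    # ring, targeting the distribution proportional to `weights`.
    rng = random.Random(seed)
    n = len(weights)
    state = 0
    counts = [0] * n
    for _ in range(steps):
        # Propose a neighbour on the ring (symmetric proposal).
        proposal = (state + rng.choice([-1, 1])) % n
        # Accept with probability min(1, w(proposal) / w(state)).
        if rng.random() < weights[proposal] / weights[state]:
            state = proposal
        counts[state] += 1
    return [c / steps for c in counts]

# Target proportional to (1, 2, 3), i.e. (1/6, 2/6, 3/6).
freqs = metropolis_hastings([1.0, 2.0, 3.0], 200000)
```

With enough steps, the visit frequencies approach the target distribution, which is the sense in which the chain "converges".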

5. What are some common applications of Markov chains?

Markov chains have many practical applications in fields such as finance, natural language processing, and biology. Some common uses include predictive modeling, time series analysis, and text generation. Markov chains are also frequently used in Monte Carlo simulations for complex systems. Overall, they are a versatile and powerful tool for modeling and analyzing sequential data with probabilistic dependencies.
