Graduate Markov Chain as a function of dimensions

Summary:
An animation of a Markov chain of order 50 was created to explore the behavior of clustering and flattening in higher dimensions. The discussion raises questions about whether the observed clustering is due to data spreading in larger dimensions or if induced correlations in specific coordinates maintain cluster integrity. Despite increasing dimensions, clusters appear to reach a steady state without significant spreading. The incremental changes in a Euclidean metric are analyzed, questioning the relationship between dimensionality and data distribution. Reference is made to a paper by Gareth Roberts and Jeff Rosenthal that discusses limit theorems and mixing properties in Markov Chains, which may provide insights into these phenomena.
FallenApple


Here is an animation I created in R.

I built this Markov chain of order 50 by correlating the information in one of the coordinates while randomly varying the rest. Is there an explanation for the clustering and the flattening out as the dimension of the vector space increases? Is it because the data become spread out over larger dimensions?

But that doesn't explain why the clusters themselves do not spread out, or why other clusters condense. I've run this for much larger dimensions and the clusters seem to reach a steady state.

The plot shows the incremental changes in the Euclidean metric versus the input, so I don't know whether viewing the data as extremely spread out in higher-dimensional space would carry over to this plot.

Does this mean that the correlation I induced in that coordinate is strong enough to keep the cluster together regardless of how high the dimension is?
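
For reference, here is a minimal R sketch (not the original code) of the kind of chain described above, under the assumption that one coordinate follows an AR(1) process with autocorrelation rho = 0.9 while the other 49 coordinates are refreshed with independent noise; the plotted quantity is the Euclidean increment between successive states:

# Minimal sketch, assuming an AR(1) correlation in coordinate 1
# and independent noise in the remaining coordinates.
set.seed(1)

n_steps <- 500   # number of iterations of the chain
d       <- 50    # dimension of the state vector
rho     <- 0.9   # assumed autocorrelation in the first coordinate

x <- matrix(0, nrow = n_steps, ncol = d)
for (t in 2:n_steps) {
  x[t, 1]  <- rho * x[t - 1, 1] + rnorm(1)   # correlated coordinate
  x[t, -1] <- rnorm(d - 1)                   # independent coordinates
}

# Incremental Euclidean changes between successive states
increments <- sqrt(rowSums((x[-1, ] - x[-n_steps, ])^2))

plot(increments, type = "l",
     xlab = "step", ylab = "Euclidean increment")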
 
I'm not sure if this would answer your question, but statisticians Gareth Roberts (of Lancaster University, later University of Warwick, UK) and Jeff Rosenthal (of the University of Toronto, and a former professor of mine when I was in grad school) wrote a paper summarizing limit theorems for Markov Chains in the context of MCMC algorithms.

https://arxiv.org/abs/math/0404033

I believe the contents of the paper will explain the convergence of Markov chains and the property of "mixing" (i.e., the time it takes a Markov chain to get "close" to its stationary distribution).
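
As a rough illustration of what "mixing" means (this example is mine, not taken from the paper), here is a small R sketch that tracks the total-variation distance between the distribution of a simple two-state chain and its stationary distribution as the chain runs:

# Illustration: total-variation distance to the stationary distribution
# for a two-state chain, showing how quickly the chain "mixes".
P <- matrix(c(0.9, 0.1,
              0.2, 0.8), nrow = 2, byrow = TRUE)   # transition matrix
pi_stat <- c(2/3, 1/3)                             # stationary distribution of P

mu <- c(1, 0)                                      # start in state 1
tv <- numeric(30)
for (n in 1:30) {
  mu    <- as.vector(mu %*% P)                     # distribution after n steps
  tv[n] <- 0.5 * sum(abs(mu - pi_stat))            # total-variation distance
}

plot(tv, type = "b", xlab = "step", ylab = "TV distance")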
 
