Autocorrelation of a Bernoulli Coin Flipping Experiment

SUMMARY

The forum discussion centers on the autocorrelation of a Bernoulli coin-flipping experiment, analyzing the random variable x[n] that records the outcome of flip n. The mean of x[n] is (2p - 1), where p is the probability of HEADS. The autocorrelation satisfies E{x[n+m]x[n]} = 1 when m = 0 and (2p - 1)² when m ≠ 0: each flip is perfectly correlated with itself (m = 0), while flips at different times (m ≠ 0) are independent, so their second moment factors into the product of the means.
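In symbols, the result under discussion in the thread below is
$$E\{x[n+m]\,x[n]\} =
\begin{cases}
1, & m = 0,\\
(2p-1)^2, & m \neq 0.
\end{cases}$$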

PREREQUISITES
  • Understanding of Bernoulli processes and probability theory
  • Familiarity with random variables and their properties
  • Knowledge of autocorrelation and covariance concepts
  • Basic proficiency in statistical notation and calculations
NEXT STEPS
  • Study the properties of Bernoulli processes in depth
  • Learn about the derivation of autocorrelation functions in stochastic processes
  • Explore the implications of covariance and correlation in random variables
  • Investigate the concept of independence in probability theory
USEFUL FOR

Statisticians, data scientists, and anyone interested in understanding the mathematical foundations of stochastic processes and their applications in real-world scenarios.

tworitdash
I am confused at one point. The coin-flipping Bernoulli process has a probability p of getting HEADS and a probability 1 - p of getting TAILS. Let's define a random variable x[n] that takes the value +1 for HEADS and -1 for TAILS. The mean of x[n] is (2p - 1), and I can derive it by summing each value weighted by its probability: (+1)(p) + (-1)(1 - p) = 2p - 1. However, for the autocorrelation, E{x[n+m]x[n]} = 1 when m = 0, and when m is not equal to 0 it becomes (2p - 1)^2. I basically get it when I try to understand the physical meaning of it, but mathematically how is this calculation done? Because the autocorrelation is a second-order property it becomes (2p - 1)^2, but why not for m = 0? Why and how?

$$E\{x[n]\} = 2p - 1$$
$$E\{x[n+m]\,x[n]\} = 1 \quad \text{for } m = 0$$
$$E\{x[n+m]\,x[n]\} = (2p-1)^2 \quad \text{for } m \neq 0$$
 
If p = 0.5, then 2p - 1 = 0. That means that a flip is correlated with itself (##m=0##) but any two different flips (##m\neq 0##) are completely uncorrelated. That's what you'd expect. What is confusing you about that?
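To make the ##p = 0.5## case concrete (a specialization of the formulas above, not spelled out in the reply): substituting ##2p - 1 = 0## gives
$$E\{x[n+m]\,x[n]\} = \begin{cases} 1, & m = 0,\\ 0, & m \neq 0,\end{cases}$$
so for a fair coin every pair of distinct flips has zero correlation, while each flip remains perfectly correlated with itself.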
 
tworitdash said:
E{x[n+m]x[n]} = 1 when m = 0, and when m is not equal to 0 it becomes (2p - 1)^2. I basically get it when I try to understand the physical meaning of it, but mathematically how is this calculation done?
Presumably, you calculated the probabilities that x[n]=1 and x[n+m]=1; x[n]=1 and x[n+m]=-1; x[n]=-1 and x[n+m]=1; and x[n]=-1 and x[n+m]=-1. Are those probabilities the same when ##m=0## and ##m\ne 0##?
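Working through that hint for ##m \neq 0## (a spelled-out step, assuming the flips are independent): the four joint probabilities factor as ##p^2##, ##p(1-p)##, ##(1-p)p##, and ##(1-p)^2##, so
$$E\{x[n+m]\,x[n]\} = (+1)p^2 + (-1)p(1-p) + (-1)(1-p)p + (+1)(1-p)^2 = \big(p - (1-p)\big)^2 = (2p-1)^2.$$
For ##m = 0## the two outcomes cannot differ, so the cross terms have probability zero and the joint distribution is no longer a product.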
 
vela said:
Presumably, you calculated the probabilities that x[n]=1 and x[n+m]=1; x[n]=1 and x[n+m]=-1; x[n]=-1 and x[n+m]=1; and x[n]=-1 and x[n+m]=-1. Are those probabilities the same when ##m=0## and ##m\ne 0##?
Yes, it should be the same when m = 0, shouldn't it? But is it an expression of p, or just 1?
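The zero-lag value is exactly 1 for every ##p##, not an expression in ##p## (a step not spelled out in the replies): since ##x[n]## is either ##+1## or ##-1##, the product ##x[n]\,x[n] = x[n]^2 = 1## with probability one, so
$$E\{x[n]^2\} = (+1)^2\,p + (-1)^2\,(1-p) = p + (1-p) = 1.$$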
 
tworitdash said:
However, for the autocorrelation, E{x[n+m]x[n]} = 1 when m = 0, and when m is not equal to 0 it becomes (2p - 1)^2. ... mathematically how is this calculation done? ... Why and how?

If the successive flips are "independent", the ##X[n]## results are uncorrelated; that is, the auto-covariance is
$$\text{Cov}(X[n], X[k]) = 0 \; \text{for} \; n \neq k.$$ Since
$$\text{Cor}(X[n],X[k]) \equiv \frac{\text{Cov}(X[n],X[k])}{\sigma_{X[n]} \sigma_{X[k]}}, $$
(where ##\sigma_X## is the standard deviation of ##X##) it follows that the correlation is zero as well.
Remember: the covariance ##\text{Cov}## is defined as
$$\text{Cov}(X[n],X[k]) \equiv E[ (X[n]-E X[n]) (X[k] - E X[k])], $$
and this evaluates to
$$\text{Cov}(X[n],X[k]) = E (X[n] X[k] ) - (E X[n]) (E X[k]).$$ For independent flips with ##n \neq k##, ##E(X[n] X[k]) = (E X[n])(E X[k])##, so the covariance is zero; equivalently, ##E(X[n] X[k]) = (2p-1)^2## for ##n \neq k##, which is the result you quoted. For ##n = k## the covariance reduces to the variance:
$$ \text{Var} (X[n]) = E( X[n]- E X[n])^2 = E(X^2[n]) - (E X[n])^2 = 1 - (2p-1)^2 = 4p(1-p),$$ since ##X^2[n] = 1## for every outcome, so ##E(X^2[n]) = 1##.
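As a sanity check on these formulas, here is a minimal simulation sketch (not from the thread; it uses NumPy, and the values of p and N are arbitrary choices for illustration):

Code:
import numpy as np

# Simulate N coin flips mapped to +/-1 and estimate E{x[n+m] x[n]}.
rng = np.random.default_rng(0)
p = 0.7                       # probability of HEADS (illustrative choice)
N = 1_000_000                 # number of flips
x = np.where(rng.random(N) < p, 1.0, -1.0)

for m in range(4):
    if m == 0:
        r_hat = np.mean(x * x)           # zero lag: x[n]^2 = 1 for every n
    else:
        r_hat = np.mean(x[m:] * x[:-m])  # average of x[n+m]*x[n] over n
    print(m, r_hat)

The m = 0 estimate is exactly 1, and the m ≠ 0 estimates cluster around (2p - 1)^2 = 0.16 for p = 0.7, matching the formulas above.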
 
