Difficult computational statistics problem

SUMMARY

The discussion centers on estimating the probabilities of heads for a penny (p) and a dime (d), as well as the switching probability (s), in a coin-tossing scenario modeled as a Hidden Markov Model (HMM). The underlying state of the coin (penny or dime) follows a Markov chain with transition probability matrix \mathbb{P} = \begin{pmatrix} 1-s & s \\ s & 1-s \end{pmatrix}. The outcomes of the tosses (heads or tails) are observable, while the coin used remains hidden. Several online tutorials are recommended for further reading on HMMs, including resources from the University of Beira Interior and UBC.

PREREQUISITES
  • Understanding of Hidden Markov Models (HMM)
  • Familiarity with Markov chains and transition probability matrices
  • Basic knowledge of probability theory
  • Experience with statistical estimation techniques
NEXT STEPS
  • Study the principles of Hidden Markov Models in detail
  • Learn about the Expectation-Maximization (EM) algorithm for parameter estimation in HMMs
  • Explore the tutorial on HMMs from the University of Beira Interior
  • Review the illustrative example provided in the UBC tutorial on Bayesian methods
USEFUL FOR

Statisticians, data scientists, and researchers working on computational statistics problems, particularly those involving Hidden Markov Models and probabilistic estimation.

Bazzinga said:
I've got a tricky computational statistics problem and I was wondering if anyone could help me solve it.

Okay, so in your left pocket is a penny and in your right pocket is a dime. On a fair toss, the probability of showing a head is p for the penny and d for the dime. You randomly choose a coin to begin, toss it, and report the outcome (heads or tails) without revealing which coin was tossed. Then you decide whether to use the same coin for the next toss or to switch to the other coin. You switch coins with probability s and keep the same coin with probability (1 - s). The outcome of the second toss is reported, again without revealing the coin used.

I have a sequence of heads and tails data based on these flips, so how would I go about estimating p, d, and s?
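The process described above is easy to simulate, which is useful for sanity-checking any estimator against data with known parameters. A minimal sketch (the function name and the chosen parameter values are illustrative, not from the thread):

```python
import random

def simulate_tosses(p, d, s, n, seed=0):
    """Simulate n tosses of the hidden-coin process.

    State 0 = penny (heads probability p), state 1 = dime (heads
    probability d). After each toss the coin is switched with
    probability s; the initial coin is chosen at random.
    """
    rng = random.Random(seed)
    state = rng.randint(0, 1)          # randomly chosen starting coin
    outcomes = []
    for _ in range(n):
        heads_prob = p if state == 0 else d
        outcomes.append('H' if rng.random() < heads_prob else 'T')
        if rng.random() < s:           # switch coins with probability s
            state = 1 - state
    return outcomes

seq = simulate_tosses(p=0.3, d=0.8, s=0.1, n=20)
print(''.join(seq))
```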
 
Ray Vickson said:

What you are describing is a so-called Hidden Markov Model. Here, the underlying state (dime or penny) follows a Markov chain with transition probability matrix
\mathbb{P} = \begin{pmatrix} 1-s & s \\ s & 1-s \end{pmatrix}
However, the state is not observable; only the outcomes (H or T) of tossing the coins can be observed.

There are several useful tutorials available online; see, e.g.,
http://di.ubi.pt/~jpaulo/competence/tutorials/hmm-tutorial-1.pdf or
http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf

This last source has a brief treatment of your problem, as an illustrative example.
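As a concrete starting point, the likelihood of an observed H/T sequence under this two-state HMM can be computed with the forward algorithm (with per-step rescaling to avoid numerical underflow); p, d, and s can then be estimated by maximizing this likelihood numerically, or with the Baum-Welch EM updates described in the Rabiner tutorial. A sketch under those assumptions, with a uniform distribution over the starting coin:

```python
import math

def log_likelihood(obs, p, d, s):
    """Forward-algorithm log-likelihood for the two-coin HMM.

    obs: sequence of 'H' and 'T'.
    State 0 = penny with P(H) = p, state 1 = dime with P(H) = d;
    the coin switches with probability s between tosses, and the
    initial coin is chosen uniformly at random.
    """
    def emit(state, o):
        q = p if state == 0 else d
        return q if o == 'H' else 1.0 - q

    # alpha[k] is proportional to P(observations so far, coin = k),
    # rescaled at every step so the recursion does not underflow.
    alpha = [0.5 * emit(0, obs[0]), 0.5 * emit(1, obs[0])]
    c = alpha[0] + alpha[1]
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        new = [(alpha[0] * (1 - s) + alpha[1] * s) * emit(0, o),
               (alpha[0] * s + alpha[1] * (1 - s)) * emit(1, o)]
        c = new[0] + new[1]
        ll += math.log(c)
        alpha = [a / c for a in new]
    return ll

# Sanity check: with identical coins (p = d), every toss has
# probability 1/2 regardless of s.
print(log_likelihood('HTHHT', 0.5, 0.5, 0.2))  # 5 * log(0.5), about -3.4657
```

Maximizing this function over (p, d, s), for instance with `scipy.optimize.minimize` on the negative log-likelihood, gives the maximum-likelihood estimates; Baum-Welch is the standard EM alternative and is covered in the tutorials above.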
 
Bazzinga said:

Great I'll take a look at those! Thanks!
 
