
Coin toss with changing probability

  1. Dec 17, 2016 #1
    Hello, I'm trying to solve the following problem: a coin is tossed with probability 1/2 of giving heads, but if it lands heads the probability of heads rises to .9. If it gives tails, or if the number of heads in a row is greater than or equal to 5, the probability falls back to .5.

    I wrote a program running 100000000 trials and found the probability of heads:
    Code (C):

    #include<stdio.h>
    #include<time.h>
    #include<stdlib.h>

    /* uniform pseudo-random number in [0,1) */
    double rnd()
    {
       int maxv=1<<24;
       int val=rand()%maxv;
       return((double)val/(double)maxv);
    }

    int main(void)
    {
       int iter=100000000;
       int numpile=0;      /* number of heads tossed so far */
       double p=.5;        /* probability of heads on the next toss */
       int pileserie=0;    /* current run of consecutive heads */

       srand(time(0));

       for(int i=0;i<iter;i++)
       {
           if(rnd()<p)   /* head */
           {
               numpile++;
               pileserie++;
               if(pileserie>=5)   /* 5 or more heads in a row: back to fair */
                   p=.5;
               else
                   p=.9;
           }
           else          /* tail: reset the run and the probability */
           {
               pileserie=0;
               p=.5;
           }
       }

       printf("%lf\n",(double)numpile/(double)iter);   /* fraction of heads */
       return 0;
    }
     
    The result was near 0.703.

    I would like to calculate this probability exactly. I started with an empirical ansatz $$p(h)=\underbrace{\big(.9-(.9-.5)e^{-5a}\big)}_{p_1-(p_1-p_0)e^{-an}}\,p(h)+p_0\,p(t),$$

    with $$p(h)+p(t)=1.$$
    I don't understand how to find an exact formula for the probabilities from the problem statement, since the parameter ##a## above would have to be fitted to the numerical results.
     
    Last edited: Dec 17, 2016
  3. Dec 17, 2016 #2

    FactChecker

    Science Advisor
    Gold Member

    You are dealing with a discrete-time ergodic (irreducible) Markov chain with 5 states and an associated transition matrix. Suppose the states are labeled I = initial state, h1 = 1 head, ..., h4 = 4 heads. You can make a transition matrix of the probabilities of transitioning from one state to another and study the results. Let T designate the transition matrix. You can find the "stationary distribution", d, of T such that dT = d (see http://www.mast.queensu.ca/~stat455/lecturenotes/set3.pdf ). That is what you want. For an example of solving for the stationary distribution using MATLAB, see http://stackoverflow.com/questions/...ov-chain-stationary-distribution-solving-eqns
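    In symbols (just restating the condition above, with ##d## a row vector of long-run state probabilities):
    $$d\,T = d, \qquad \sum_i d_i = 1,$$
    and the long-run probability of heads is then the total stationary probability of the "head" states.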
     
  4. Dec 17, 2016 #3
    I just read a bit about Markov chains, but I don't understand two things:

    1) Isn't this a two-state chain (heads and tails)?
    2) Is this a process with memory? I read on Wikipedia that Markov processes are supposed to be memoryless.
     
  5. Dec 17, 2016 #4

    FactChecker

    Science Advisor
    Gold Member

    In this problem, you can embed the memory into different states. One state, h0, can be the initial state with equal H and T probabilities. A second state, h1, can be after the first head. A third state, h2, can be after two heads in a row; h3 after 3 heads in a row. The last state, h4, is after 4 heads in a row. If the system is in state hn (n = 0..3) and a head is tossed, it transitions to state hn+1. If the system is in state h4 and a head is tossed, it transitions to state h0. In any state, if a tail is tossed, it transitions to state h0. These states include all the past history that is needed to determine the transition probabilities, so no memory is needed beyond what is in the definition of the state. It is a Markov process.

    In each state, it is easy to specify the probabilities of transitioning to the other states. That will give you the transition matrix.
     
  6. Dec 17, 2016 #5
    So is it: $$\begin{pmatrix} .5 & .5 & 0 & 0 & 0 & 0\\ .1 & 0 & .9 & 0 & 0 & 0\\ .1 & 0 & 0 & .9 & 0 & 0\\ .1 & 0 & 0 & 0 & .9 & 0\\ .1 & 0 & 0 & 0 & 0 & .9\\ .5 & 0 & 0 & 0 & 0 & .5 \end{pmatrix}$$ for the column vector of states ##(t, h_1, h_2, h_3, h_4, h_5)##?

    Or should the last row be .5 .5 0 0 0 0?

    Then the invariant density would be the eigenvector for eigenvalue 1, and ##p(h)=p(h_1)+p(h_2)+p(h_3)+p(h_4)+p(h_5)##.
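    For illustration, a minimal power-iteration sketch (assuming the row order ##(t, h_1, \dots, h_5)## and the first version of the last row, where a further head stays in ##h_5##) could look like the following; it should print a value close to the 0.703 found by the simulation.
    Code (C):

    #include <stdio.h>

    int main(void)
    {
        /* Transition matrix: rows are the current state (t, h1, h2, h3, h4, h5). */
        double T[6][6] = {
            {.5, .5, 0, 0, 0, 0},
            {.1, 0, .9, 0, 0, 0},
            {.1, 0, 0, .9, 0, 0},
            {.1, 0, 0, 0, .9, 0},
            {.1, 0, 0, 0, 0, .9},
            {.5, 0, 0, 0, 0, .5}
        };
        double d[6] = {1, 0, 0, 0, 0, 0};   /* start in state t */

        for (int it = 0; it < 1000; it++)   /* iterate d <- d T until it settles */
        {
            double next[6] = {0};
            for (int i = 0; i < 6; i++)
                for (int j = 0; j < 6; j++)
                    next[j] += d[i] * T[i][j];
            for (int j = 0; j < 6; j++)
                d[j] = next[j];
        }

        /* p(head) = stationary probability of being in any head state h1..h5 */
        printf("p(head) = %lf\n", d[1] + d[2] + d[3] + d[4] + d[5]);
        return 0;
    }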
     
  7. Dec 17, 2016 #6

    FactChecker

    Science Advisor
    Gold Member

    That is close to what I was thinking, but I am afraid I was a little sloppy in my thinking. What you have might be better. You need to be able to get the probability that the last toss was heads by totaling some state probabilities, so I think your addition of the last row for 5 heads (or more) in a row is needed. That is a state I had not listed.
    Your first version of the last row says that a 5th head returns the probability to 0.5: the matrix leaves the chain in the same state, and the next head will not raise the probability to 0.9. Whether that is right depends on the problem statement: after 5 heads, should a sixth head have probability 0.9 or 0.5?
    Your alternative last row (.5 .5 0 0 0 0) would make a 5th head go to state h1, as though there was just 1 head in a row. That also depends on the problem statement: should 6 heads in a row be treated exactly like 1 head in a row, or like 4 heads in a row?
     
  8. Dec 17, 2016 #7
    As soon as you reach 5 heads in a row, the probabilities are .5 to head or tail, and they stay at .5 (for a 6th head, a 7th, and so on) until a tail occurs.

    So it's not the same as coming back to h1, so I suppose my first table was right.
     
  9. Dec 18, 2016 #8
    One more question: is there a theorem showing that the eigenvalue 1 exists and that the elements of its eigenvector all have the same sign?
     
  10. Dec 18, 2016 #9

    FactChecker

    Science Advisor
    Gold Member

    Yes. In the reference above ( http://www.mast.queensu.ca/~stat455/lecturenotes/set3.pdf ) at the bottom of page 121 it says "We will show that for an irreducible Markov chain, a stationary distribution exists if and only if all states are positive recurrent, and in this case the stationary distribution is unique."
     
  11. Dec 24, 2016 #10
    So we can model processes with this kind of memory with a Markov chain. But could we model the following:

    A fair coin is tossed, but:

    - If the two previous tosses were the same: the probability of that outcome is .9
    - Else: the probability is .5?

    I think it's not possible, since the probability is path dependent.
     
  12. Dec 24, 2016 #11

    FactChecker

    Science Advisor
    Gold Member

    I assume you mean the probability of a third toss of the same type, not of two more of the same. And the probability of what, exactly? The same type as the prior toss?
    The "path" information seems easy to define as a state. Only the prior two tosses are significant, so there are not many states.

    Although I have not analyzed them mathematically as Markov processes, there are state transition diagrams that are huge compared to these examples. They can have hundreds of states that encapsulate a complicated series of historical events.
     
  13. Dec 24, 2016 #12
    I mean something like

    H1->H2 .5
    H1->T2 .5
    H2->H3 .9
    H2->T3 .1
    T1->H2 .5
    T1->T2 .5
    ...

    But in fact it does not work, since it depends on the series: for example, in T1->H2->T3 every transition has probability .5, but in H1->H2->T3 the transitions have probabilities .5 and .1.

    We could also imagine a random generator giving H or T where, as soon as one outcome occurs, it repeats with probability .9. Then the long-term probability would still be 1/2. However, the average length of a run of identical results would be 10, compared to 2 when the repeat probability is .5.
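    As a quick check of that run-length figure (assuming each repeat occurs independently with probability ##p##, so the run length ##L## is geometric):
    $$E[L]=\sum_{k\ge 1} k\,p^{\,k-1}(1-p)=\frac{1}{1-p},$$
    which gives ##E[L]=10## for ##p=0.9## and ##E[L]=2## for ##p=0.5##.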
     
    Last edited: Dec 24, 2016
  14. Dec 24, 2016 #13

    FactChecker

    Science Advisor
    Gold Member

    You can use states that are defined by the prior two results: SHH, SHT, STH, STT. With those states you can make a state transition matrix of the probabilities of transitioning between the states on the next coin toss. It is a Markov chain.
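    For instance, with the states ordered (SHH, SHT, STH, STT), rows read as the current state, and a repeat of the common outcome having probability .9 as described above, the transition matrix would be (a sketch)
    $$\begin{pmatrix} .9 & .1 & 0 & 0\\ 0 & 0 & .5 & .5\\ .5 & .5 & 0 & 0\\ 0 & 0 & .1 & .9 \end{pmatrix},$$
    and by the heads/tails symmetry its stationary distribution gives a long-run head probability of 1/2.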
     
  15. Dec 24, 2016 #14

    PeroK

    Science Advisor
    Homework Helper
    Gold Member

    Here's an alternative. Consider each sequence as a game that ends with a tail.

    0.5 of games have one play: a single tail (T).

    0.5(0.1) of games have two plays: HT.

    0.5(0.9)(0.1) of games have three plays: HHT.

    Etc.

    Note that after 5 heads the game becomes 50-50 again.

    This gives a sequence of probabilities for the length of a game with always one tail and a variable number of heads.

    From this you can calculate the expected number of tails per game (which is 1) and the expected number of heads per game, which I get to be about 2.38.

    This gives a probability of almost exactly 0.7 that any particular toss is a head.
     
  16. Dec 24, 2016 #15

    PeroK

    Science Advisor
    Homework Helper
    Gold Member

    To generalise: If the temporary probability of a head is ##a## then the expected number of heads per game is:

    ##E(H) = \frac12 [\frac{1-a^5}{1-a} + a^4]##

    And the probability of a head is

    ##p(H) = \frac{E(H)}{E(H)+1}##

    With ##a=0.9## this gives ##p(H) = 0.704##
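    For reference, evaluating this step by step at ##a=0.9##:
    $$E(H)=\frac12\left[\frac{1-0.9^5}{1-0.9}+0.9^4\right]=\frac12\left[4.0951+0.6561\right]\approx 2.376, \qquad p(H)=\frac{2.376}{3.376}\approx 0.704,$$
    which agrees with the simulation result of about 0.703.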
     
  17. Dec 30, 2016 #16
    If one considers the symmetric rule, where the probability of remaining on the same outcome is ##a## for both heads and tails, then the run lengths would increase, but in the long run both probabilities would be 1/2?
     
  18. Dec 30, 2016 #17

    PeroK

    Science Advisor
    Homework Helper
    Gold Member

    If the game is symmetric in heads and tails, then the probability of a head must be 1/2 in the long run.

    A harder problem would be: after 1 to ##n## heads in a row, the probability of a head is ##a##, and after 1 to ##m## tails in a row the probability of a tail is ##b##.

    In your original problem ##n =4, a=0.9, m=1, b = 0.5##.
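    A minimal simulation sketch for this generalised rule (assuming, as in the original problem, that the coin reverts to fair once a run exceeds ##n## heads or ##m## tails) could look like this; with ##n=4, a=0.9, m=1, b=0.5## it should reproduce the ≈ 0.704 above.
    Code (C):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const int    iter = 10000000;
        const int    n = 4;       /* runs of 1..n heads boost P(head) */
        const int    m = 1;       /* runs of 1..m tails boost P(tail) */
        const double a = 0.9;     /* boosted P(head) */
        const double b = 0.5;     /* boosted P(tail) */

        int heads = 0;
        int run = 0;              /* run > 0: heads streak length, run < 0: tails streak length */
        srand((unsigned)time(0));

        for (int i = 0; i < iter; i++)
        {
            double ph = 0.5;                        /* default: fair coin */
            if (run > 0 && run <= n)  ph = a;       /* inside a short head run */
            if (run < 0 && -run <= m) ph = 1.0 - b; /* inside a short tail run */

            if ((double)rand() / ((double)RAND_MAX + 1.0) < ph)
            {
                heads++;
                run = (run > 0) ? run + 1 : 1;
            }
            else
            {
                run = (run < 0) ? run - 1 : -1;
            }
        }
        printf("p(head) ~ %f\n", (double)heads / iter);
        return 0;
    }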
     