
Need help with basic probability

  1. Feb 3, 2012 #1
    Hello everyone,

    I am trying to figure out some overall possibilities of some states for a certain configuration and need a bit of help with probability theory.

    So, assume that there is an object that can take 3 states {0, 1, 2}, and the object can only change state one jump at a time: from 0 to 1, and then from 1 to 2 (I think it is called the Ising model)...

    Initially the object is in state 0 and an event happens and the object can either stay in state 0 or move to state 1.

    So, we have P_1(0) and P_1(1) and their sum is 1.

    Now, consider the following scenario: object stayed in state (0). We have another event:

    Again, we have a new set of probabilities
    P_2(0) and P_2(1) again their sum is 1.

    Now, after these two events, what is the overall probability that the object stays in state 0 or state 1?

    Now, consider another scenario: object jumps to state (1) after the first event. Now, in the second event, we have probabilities

    P_2(1) and P_2(2). So, it can either stay in state 1 or move to state 2.

    So, overall what should be the probability of it being in state (0), (1) or (2)... is there a simple way to think about this that can encapsulate all these scenarios.

    The background is that I am trying to code a small program that can track all the probabilities of these state jumps for this object, and the problem is that there can be more than 3 states... more like 20! So I want to figure out an algorithm and, of course, also understand how these probabilities evolve.

    Would appreciate any help you can give me.

    Thanks,
    Luca
     
  3. Feb 3, 2012 #2

    jedishrfu

    Staff: Mentor

    So is this kind of like an electron model, where the electron can only be in one of a number of possible states at a time, with the caveat that it can only go up or down one state?
     
  4. Feb 3, 2012 #3
    Yes, I guess it is. However, I must confess that I am not a physicist and I barely know high-school-level math. I recently started learning this stuff on my own and have been toying around with some discrete optimization models, and what I am trying to do is quantify a sort of confidence value for estimated solutions from optimization algorithms.

    Cheers,
    Luca
     
  5. Feb 3, 2012 #4

    jedishrfu

    Staff: Mentor

    So you're in a given state n and you have three choices:

    1) move up one state
    2) stay where you are
    3) move down one state

    with a 33% chance of doing each of these three actions, right?

    And when you're in state 0 or state 19 (i.e. the twentieth state) you only have
    two choices:

    - stay where you are
    - move to a neighboring state
     
  6. Feb 3, 2012 #5
    Hi there,

    Yes, except the object never actually goes down a state. So it just either stays in its current state or goes up. However, the two outcomes are not equally likely, so it's not 50-50.

    So, after event 1, the probability of going from state 0 to state 1 could be low. However, after event 2, the same probability could be high.

    So, after 2 events, what is the probability of seeing the object in state(0), state (1) or state (2)?

    Thanks,
    Luca
     
  7. Feb 3, 2012 #6

    jedishrfu

    Staff: Mentor

    Given a 50/50 probability of moving or staying in a given state, after two events I count the paths through the state tree:
    1 path ends in state 0 (stay, stay),
    2 paths end in state 1 (stay then move, or move then stay), and
    1 path ends in state 2 (move, move)

    and since each two-step path has probability 1/4, I get
    1/4 in state 0 for 25%,
    2/4 in state 1 for 50% and
    1/4 in state 2 for 25%

    my state change diagram

    0
    | \
    0 1
    | \ \
    0 1 2
     
  8. Feb 4, 2012 #7

    chiro

    User Avatar
    Science Advisor

    Hey pamparana.

    What you are describing is a Markov chain. There is a very well developed theory for Markov processes that lets you find, among other things, the steady-state probabilities, which tell you what the probability is of ending up in a particular state in the long run.

    How much background do you have in mathematics and probability/statistics?
     
  9. Feb 4, 2012 #8
    Hello guys,

    Thanks for replying. I do not have much background in any of these things unfortunately. But I am willing to learn!

    So basically, what I was looking for was a sort of a general formula/algorithm for this that can be efficiently coded up. I will look into this in a bit more detail now that you mention it might be a Markov Chain.

    Thanks,
    Luca
     
  10. Feb 4, 2012 #9
    This is a Markov chain. The transition probabilities are described by the transition probability matrix. In your case, each row of the matrix has only two non-zero entries: the probability of staying in the current state and the probability of moving to the next state in a single step. For multiple steps, the Chapman-Kolmogorov equations give the probability of finding yourself in a given state after n steps, by what is essentially matrix multiplication. The only trick here, if I understand your question, is that the transition probabilities are different for each current state, but that poses no real problem. Try "Introduction to Probability Models" by Sheldon Ross.
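    To make the matrix-multiplication idea concrete, here is a minimal Python sketch. The stay/jump numbers are made up for illustration, and the last state is assumed to be absorbing (once there, the object never leaves):

    ```python
    # Hypothetical per-state "stay" probabilities for a 3-state chain;
    # the last state is absorbing (the object can never leave it).
    stay = [0.6, 0.75, 1.0]
    n = len(stay)

    # Build the transition matrix P, where P[i][j] is the probability
    # of going from state i to state j in a single step.
    P = [[0.0] * n for _ in range(n)]
    for i, s in enumerate(stay):
        P[i][i] = s                 # stay in state i
        if i + 1 < n:
            P[i][i + 1] = 1.0 - s   # jump up to state i + 1

    def step(dist, P):
        """One Chapman-Kolmogorov step: new[j] = sum over i of dist[i] * P[i][j]."""
        return [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(dist))]

    dist = [1.0, 0.0, 0.0]   # certainly in state 0 before any event
    for _ in range(2):       # apply two events
        dist = step(dist, P)

    print([round(p, 4) for p in dist])   # -> [0.36, 0.54, 0.1]
    ```

    The same loop works unchanged for 20 states: just extend the `stay` list and run `step` once per event.
    
    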
     
  11. Feb 4, 2012 #10
    That is great. Thanks for the advice :)

    Luca
     
  12. Feb 4, 2012 #11
    You are welcome. Maybe I can clarify with a simple example. Let Pij equal the probability of transitioning from state i to state j. So P01=probability of transitioning from 0 to 1. So suppose P00=0.6 and P01=0.4. Note that the sum is necessarily 1 because you only have 2 choices. Now after 2 steps you can be in any of 3 states, namely 0, 1, or 2. Let's suppose P11=0.75 and P12=0.25 since you said the transition probabilities can change from state to state.

    So after 2 steps:

    P02=(P01)(P12)=0.1 because there is only one way to get from 0 to 2 in two steps, namely stepping up on each step.

    P01=(P00)(P01)+(P01)(P11)=(0.6)(0.4)+(0.4)(0.75)=0.54 because there are two ways to get from state 0 to state 1 in two steps, each of which involves staying put on exactly one of the two steps.

    P00=(0.6)(0.6)=0.36 because you stay where you are twice.

    Notice that 0.1+0.54+0.36=1 because you must be in state 0,1,or 2 after 2 steps.

    Now after 3 steps you could be in any of the states 0, 1, 2, or 3.

    There is a quick visual trick for figuring out how many terms you must have and what they are. Suppose + means that you step and - means that you don't. Then to move k steps in n time steps you need to know the number of ways to take k in n. For example, I want to find P02 after 3 time steps. That means that I need ++-, +-+, or -++ to get 2 steps in 3 time steps. (Usually the number of steps is a superscript, the positions ij are subscripts).

    From my little pictures I know that the transition probability P02 after 3 time steps contains 3 terms, one for each picture. So

    P02=(P01)(P12)(P22)+(P01)(P11)(P12)+(P00)(P01)(P12).

    Each term in the sum is represented by one of my ++- pictures, involving moving twice and staying once in 3 time steps.

    Try it for larger n. You can check yourself by adding probabilities. For example, after 3 time steps, you can be in any of states 0, 1, 2, or 3. So the sum of these 4 probabilities, P00, P01, P02, P03, must equal 1. Using pictures, P00=(---), P01=(+--)+(-+-)+(--+),
    P02=(++-)+(+-+)+(-++), and P03=(+++).
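    The +/- picture-counting can be automated by brute force: enumerate every stay/step sequence, multiply the per-step probabilities along each path, and bucket the paths by where they end. A rough Python sketch, where `p_up` holds the hypothetical jump probabilities from the example above (state 2 is assumed to never move up):

    ```python
    from itertools import product

    # Hypothetical one-step jump probabilities, indexed by current state:
    # p_up[i] is the chance of jumping from state i to i + 1, and
    # 1 - p_up[i] is the chance of staying put.
    p_up = {0: 0.4, 1: 0.25, 2: 0.0}   # state 2 never moves up here

    def path_probabilities(steps):
        """Sum the product of probabilities over every stay/step path."""
        totals = {}
        # 1 means "+" (step up), 0 means "-" (stay)
        for path in product([0, 1], repeat=steps):
            state, prob = 0, 1.0
            for move in path:
                prob *= p_up[state] if move else (1.0 - p_up[state])
                state += move
            totals[state] = totals.get(state, 0.0) + prob
        return totals

    print({s: round(p, 4) for s, p in path_probabilities(2).items()})
    # -> {0: 0.36, 1: 0.54, 2: 0.1}
    ```

    This enumerates 2^n paths, so it only scales to small n; the matrix-multiplication route mentioned in this thread avoids that blow-up, but the brute-force version is a handy cross-check.
    
    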

    Hope this helps. Let me know if you have questions.
     
  14. Feb 4, 2012 #13
    Thank you so much for this detailed explanation. It is much clearer to me now. So, it is the sum of the products of the probabilities along each path.

    Just to clarify one thing: there is no need to normalize these probabilities at any point?

    So,

    P(0, 1) = (P00)(P01)+(P01)(P11), and there is no need to divide this by 2 or by the sum of the individual probabilities.

    Thank you very much.

    Luca
     
  15. Feb 4, 2012 #14
    You are welcome. No, you don't need any normalization, because each path is different and we overcount nothing. I should note that the number of paths gets big really fast, but if you track these indices you will find that you can do the whole thing numerically using matrix multiplication.
     
  16. Feb 4, 2012 #15
    Thank you. Yes, I have been reading up on these Markov chains and have a good idea of how to implement this on the computer.

    Thanks again everybody :) Much appreciated. You are all awesome :)

    Luca
     