Question about probability with absorption

  • Context: Graduate
  • Thread starter: chuy52506
  • Tags: Absorption, Probability

Discussion Overview

The discussion revolves around a probability problem involving a Markov chain with absorption states. Participants explore the probabilities of reaching certain states (specifically N=10) and the expected number of steps before absorption, starting from N=2. The context includes theoretical aspects of probability and Markov processes.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that the probability of absorption is 1 regardless of the starting point, while others challenge this by discussing the specific probabilities of reaching N=10 or N=1.
  • There is a suggestion that the expected number of steps before absorption can be calculated, although the exact method is not agreed upon.
  • One participant mentions the need to set up a Markov chain and calculate the steady-state distribution to analyze long-term probabilities.
  • Another participant emphasizes that the probability of moving toward the upper absorbing state is consistently 0.69 and toward the lower state is 0.31.
  • Some participants express uncertainty about the implications of reaching the absorbing states and the conditions under which these probabilities hold.
  • There is a reference to the "gambler's ruin problem" as a related concept, indicating a potential framework for understanding the situation.
  • Participants note the importance of considering the Markov property in determining future states based on the current state.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the exact probabilities or methods for calculating expected steps before absorption. Multiple competing views remain regarding the interpretation of the probabilities and the implications of the Markov process.

Contextual Notes

Some assumptions about the transition probabilities and the nature of absorption states may not be fully articulated, leading to varying interpretations of the problem. The discussion reflects differing understandings of the Markov chain setup and its implications for the probabilities involved.

chuy52506
Say we have the integers 0-10. We start with N, and the probability that N grows by 1 is .69; the probability that N decreases by 1 is .31.
Thus the probability of obtaining N+1 after one time step is .69 and, similarly, of obtaining N-1 is .31.
Once N reaches 0 or 10 it is absorbed and can't move from there.
My question is: what is the probability that N eventually ends at 10 if we start at N=2? Also, what is the probability of absorption if we start at N=2?
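The thread never writes the answer down explicitly, but the first question has a standard closed form: for a ±1 walk with up-probability p absorbed at 0 or at the top barrier, the chance of ending at the top is given by the gambler's-ruin formula. A minimal sketch (the function name is my own, not from the thread):

```python
# Gambler's-ruin result: for a +/-1 walk with up-probability p, absorbed
# at 0 or `top`, the chance of ending at `top` starting from `start` is
#   (1 - r**start) / (1 - r**top),  where r = q/p and q = 1 - p.
# A minimal sketch; the function name is my own.

def prob_absorbed_at_top(start, top, p):
    q = 1.0 - p
    if abs(p - q) < 1e-12:          # symmetric walk: linear in start
        return start / top
    r = q / p
    return (1 - r**start) / (1 - r**top)

print(prob_absorbed_at_top(2, 10, 0.69))  # roughly 0.80
```

With p = 0.69 and start = 2 this comes out near 0.80, and since the walk cannot stay in the interior forever, the total probability of absorption (at 0 or 10) is 1.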
 
chuy52506 said:
Say we have the integers 1-11. We start with N, and the probability that N grows by 1 is .69; the probability that N decreases by 1 is .31.
Thus the probability of obtaining N+1 after one time step is .69 and, similarly, of obtaining N-1 is .31.
Once N reaches 1 or 11 it is absorbed and can't move from there.
My question is: what is the probability that N is eventually at 10 if we start at N=2? Also, what is the probability of absorption if we start at N=2?

Unless I'm missing something, the probability of absorption is 1 no matter where you start. The probability that N=10 is 0.69 since there is a 0.31 probability that N=1 on the first step.
 
I'm sorry, I meant to ask:
Starting from N=2, what is the expected number of steps before absorption?
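This follow-up question is never resolved in the thread. One standard route is first-step analysis: the expected time E[k] from state k satisfies E[k] = 1 + p·E[k+1] + q·E[k−1] with E[0] = E[10] = 0. A sketch that solves this small linear system by repeated sweeps (function name and sweep count are my own choices):

```python
# First-step analysis for the expected number of steps to absorption:
#   E[k] = 1 + p*E[k+1] + q*E[k-1],  with E[0] = E[top] = 0.
# Solved here by simple Gauss-Seidel-style sweeps (a sketch, names my own).

def expected_steps(start, top, p, sweeps=5000):
    q = 1.0 - p
    E = [0.0] * (top + 1)          # boundary values E[0] = E[top] = 0
    for _ in range(sweeps):
        for k in range(1, top):
            E[k] = 1.0 + p * E[k + 1] + q * E[k - 1]
    return E[start]

print(round(expected_steps(2, 10, 0.69), 2))
```

For p = 0.69 starting from N = 2 this converges to roughly 15.7 steps; as a sanity check, the symmetric case p = 0.5 from the middle gives the classical value start·(top − start) = 25.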
 
I changed the problem a bit so it makes more sense.
 
Google for "gambler's ruin problem"
 
chuy52506 said:
say we have integers 0-10. We start with N and the probability that N grows by 1 is .69. The probability that N decreases by 1 is .31.
Thus obtaining N+1 after one time step is .69 and similar obtaining N-1 is .31.
Once N reaches 0 or 10 it is absorbed and can't move from there.
My question is what is the probability that N is eventually at 10 if we start at N=2? also what is the probability of absorption if we start at N=2?

Hey chuy52506.

You will need to set up a Markov chain and calculate the steady-state distribution for the long-term probabilities of ending at 1 versus ending at 10. If there is a non-zero chance of always getting to N=10, then the long-term probability will be 1, since N=10 is an absorbing state; judging from your post, I think this is the case.

If you want to figure it out for a finite number of transitions, construct your transition matrix M and find M^n for n transitions. Then multiply M^n by your initial probability vector, a column vector with a 1 in its second entry; the resulting column gives the probability of being in each state from N=1 to N=10.

If you want to prove long-term absorption (as n -> infinity), you will need to use the definition of absorption in Markovian systems.
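The finite-horizon calculation described above can be sketched concretely. This uses the 0-10 version of the problem from the original post, a row-stochastic matrix with left multiplication, and a horizon of n = 200 steps (the horizon and variable names are my own choices):

```python
import numpy as np

# Build the 11-state transition matrix (states 0..10, with 0 and 10
# absorbing), raise it to the n-th power, and read off the distribution
# after n steps starting from N = 2. A sketch; n = 200 is my own choice.
p, q = 0.69, 0.31
M = np.zeros((11, 11))
M[0, 0] = M[10, 10] = 1.0          # absorbing barriers
for k in range(1, 10):
    M[k, k + 1] = p                # step up with probability p
    M[k, k - 1] = q                # step down with probability q

start = np.zeros(11)
start[2] = 1.0                     # begin at N = 2

dist = start @ np.linalg.matrix_power(M, 200)
print(dist[10])                    # probability of having been absorbed at 10
```

By n = 200 essentially all the probability mass has been absorbed, and dist[10] agrees with the gambler's-ruin value of roughly 0.80, with the remaining mass at 0.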
 
chiro said:
Hey chuy52506.

You will need to set up a Markov chain and calculate the steady-state distribution for the long-term probabilities of ending at 1 versus ending at 10. If there is a non-zero chance of always getting to N=10, then the long-term probability will be 1, since N=10 is an absorbing state; judging from your post, I think this is the case.

If you want to figure it out for a finite number of transitions, construct your transition matrix M and find M^n for n transitions. Then multiply M^n by your initial probability vector, a column vector with a 1 in its second entry; the resulting column gives the probability of being in each state from N=1 to N=10.

If you want to prove long-term absorption (as n -> infinity), you will need to use the definition of absorption in Markovian systems.

By the definition of the Markov property, the probability of a future state depends only on the present state. The probability of moving toward the upper absorbing state is always 0.69, and that of moving toward the lower absorbing state is always 0.31. I assume the process continues to termination (absorption) with p=1. Therefore the probability that the process terminates at N=11 is 0.69 and at N=1, 0.31. If the process terminates at N=11 and starts at N=2, then the marker must visit N=10, with probability 0.69.

(We are not talking about consecutive moves in one direction. If that were true, the probability of 9 consecutive moves from N=2 to N=11 would be 0.69^9 ≈ 0.035.)
 
SW VandeCarr said:
By the definition of the Markov property, the probability of a future state depends only on the present state. The probability of moving toward the upper absorbing state is always 0.69, and that of moving toward the lower absorbing state is always 0.31. I assume the process continues to termination (absorption) with p=1. Therefore the probability that the process terminates at N=11 is 0.69 and at N=1, 0.31. If the process terminates at N=11 and starts at N=2, then the marker must visit N=10, with probability 0.69.

(We are not talking about consecutive moves in one direction. If that were true, the probability of 9 consecutive moves from N=2 to N=11 would be 0.69^9 ≈ 0.035.)

I actually missed the criterion that it gets stuck at 0, which threw off my analysis.

So yeah this is going to be what micromass said: i.e. a two-absorption state system.
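Since the thread leaves the numerical answer contested, the absorption probabilities can also be checked empirically. A Monte Carlo sketch (mine, not from the thread), using the 0-10 version of the problem and starting from N = 2:

```python
import random

# Monte Carlo estimate of where the walk is absorbed, starting from N = 2
# in the 0-10 version of the problem (a sketch; seed and trial count are
# my own choices).
random.seed(1)

def run_once(start=2, top=10, p=0.69):
    n = start
    while 0 < n < top:
        n += 1 if random.random() < p else -1
    return n

trials = 100_000
hits = sum(run_once() == 10 for _ in range(trials))
print(hits / trials)               # fraction of runs absorbed at 10
```

The estimate lands near 0.80, matching the gambler's-ruin formula for the two-absorbing-state system rather than the single-step probability 0.69.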
 
