MHB Showing a probability is of order (Δt)^2

DinkyDoe
Hi. I want to prove the following:

Suppose we want to calculate the probability that the transitions $C\to O\to C$ occur in the time interval $[0,\Delta t]$, where $C,O$ stand for different states and we start in state $C$. We have $T^+ \sim \exp(\lambda_+)$ and $T^- \sim \exp(\lambda_-)$, where $T^+$ is the waiting time for a transition $C\to O$ and $T^-$ the waiting time for a transition $O\to C$. I want to show that this probability is of order $(\Delta t)^2$, so we need not know the probability explicitly. But suppose we try to do this explicitly...

So I guess we need a triple integral for this. If we want to do it explicitly, we want to integrate over all transition times $0<s_1<s_2\leq \Delta t$, where $s_1$ is the time of the transition $C\to O$ and $s_2$ the time of the transition $O\to C$, so that

$T^+ = s_1$,
$T^- = s_2-s_1$,
and the next waiting time in $C$ (again $\sim \exp(\lambda_+)$) exceeds $\Delta t-s_2$ (since after $C\to O \to C$ we don't want it to flip back to state $O$ within $[0,\Delta t]$).

And we simply assume independence... so I guess we can multiply the density functions and set up a triple integral with the correct boundaries. But I'm really confusing myself with this...
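Here $p_+$ and $p_-$ denote the exponential densities with rates $\lambda_+$ and $\lambda_-$:
$$p_+(x)=\lambda_+e^{-\lambda_+x},\qquad p_-(x)=\lambda_-e^{-\lambda_-x},\qquad x\geq 0.$$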

Using the probability densities, we should end up with some triple integral $\int\!\int\!\int \cdots \,dx\,ds_2\,ds_1$.

Can someone see through my confusion, and point me in the right direction? I guess I'm not such a big calculus-star.

Thanks :)

---------- Post added at 01:50 PM ---------- Previous post was at 01:39 PM ----------

Could it be that this is the correct integral? $\int_0^{\Delta t}p_+(s_1)\left(\int_{s_1}^{\Delta t}p_-(s_2-s_1)\left(\int _{\Delta t-s_2}^{\infty}p_+(x)\,dx\right)ds_2\right)ds_1$

---------- Post added at 02:55 PM ---------- Previous post was at 01:50 PM ----------

edit 2: sorry...think i solved it
 
CaptainBlack
It is very simple: you are asking for the asymptotic form (as \( \Delta t \to 0\)) of the cumulative distribution \( F_{T^++T^-}(\Delta t) \), where we know the density:

\[f_{T^++T^-}(z)=\int_0^z \lambda_+e^{-\lambda_+(z-y)} \lambda_-e^{-\lambda_-y}dy\]

Both of the required integrals are elementary; then a power series expansion of \( F_{T^++T^-}(\Delta t) \) about \( \Delta t=0\) and you are done.

That is, the key idea here is that the density of the sum of two independent random variables is the convolution of their individual densities.
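For concreteness, and assuming \( \lambda_+\neq\lambda_- \) so that the convolution has a simple closed form, carrying out the two integrals gives

\[f_{T^++T^-}(z)=\frac{\lambda_+\lambda_-}{\lambda_+-\lambda_-}\left(e^{-\lambda_- z}-e^{-\lambda_+ z}\right),\qquad F_{T^++T^-}(\Delta t)=\frac{\lambda_+\lambda_-}{\lambda_+-\lambda_-}\left(\frac{1-e^{-\lambda_-\Delta t}}{\lambda_-}-\frac{1-e^{-\lambda_+\Delta t}}{\lambda_+}\right)=\frac{\lambda_+\lambda_-}{2}(\Delta t)^2+O\!\left((\Delta t)^3\right),\]

so the probability of completing \( C\to O\to C \) within \( [0,\Delta t] \) is indeed of order \( (\Delta t)^2 \).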

Now if you also want the probability that there is no further transition in the interval, it does become a bit more complicated... but an application of Bayes' theorem should do it.
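(A quick way to see that this extra requirement does not change the leading order: given that the two sojourn times sum to \( s\leq\Delta t \), the probability of a further transition before \( \Delta t \) is \( 1-e^{-\lambda_+(\Delta t-s)}\leq\lambda_+\Delta t \), so imposing it only changes the answer by \( O\!\left((\Delta t)^3\right) \).)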

CB
 
DinkyDoe
Thank you. Sounds cool; you're actually using more advanced machinery there. I did it in a more elementary way... I think I solved it correctly. I worked it out in LaTeX:

So we want to calculate the probability that we have the transitions $C\to O \to C$ within the time interval $[0,\Delta t]$. Here we denote by $T^+$ the waiting time for the transition $C\to O$ and by $T^-$ the waiting time for the transition $O\to C$, where $T^+\sim \exp(\lambda_+)$ and $T^-\sim \exp(\lambda_-)$. Denote by $p_+(x)$ and $p_-(x)$ the corresponding density functions. Observe that we want to integrate over all transition times $0<s_1<s_2\leq \Delta t$, where $s_1$ is the time of the transition $C\to O$ and $s_2$ the time of the transition $O\to C$, and we require that the next sojourn in $C$ exceeds $\Delta t-s_2$, since we don't want a flip back $C\to O$ after $C\to O\to C$ within the time interval $[0,\Delta t]$. This leads to the following integral $$ \int_0^{\Delta t}p_+(s_1)\left(\int_{s_1}^{\Delta t}p_-(s_2-s_1)\left(\int _{\Delta t-s_2}^{\infty}p_+(x)\,dx\right)ds_2\right)ds_1$$
Calculating the integral gives us $$\frac{\lambda_+\lambda_-}{(\lambda_+-\lambda_-)^2}\left(e^{-\lambda_-\Delta t}-e^{-\lambda_+\Delta t}\right)-\frac{\lambda_+\lambda_-}{\lambda_+-\lambda_-}\,\Delta t\,e^{-\lambda_+\Delta t}$$Using the Taylor expansion of $e^x$, we show that the above expression is $O(\Delta t^2)$. That is, we show that in its Taylor expansion the coefficients of $\Delta t^0$ and $\Delta t$ are zero, and the coefficient of $\Delta t^2$ is nonzero. The case $\Delta t^0$ is obvious: both terms vanish at $\Delta t=0$. We check that the coefficient of $\Delta t$ equals zero. Namely, observe that $$\frac{\lambda_+\lambda_-}{(\lambda_+-\lambda_-)^2}\,(\lambda_+-\lambda_-)\Delta t-\frac{\lambda_+\lambda_-}{\lambda_+-\lambda_-}\,\Delta t = \frac{\lambda_+\lambda_-}{\lambda_+-\lambda_-}\,\Delta t-\frac{\lambda_+\lambda_-}{\lambda_+-\lambda_-}\,\Delta t =0$$
After another straightforward verification, we find that the coefficient of $\Delta t^2$ equals $\frac{\lambda_+\lambda_-}{2}\neq 0$. Therefore, for small intervals $[0,\Delta t]$ the probability of the event $[C\to O\to C]$ is $O(\Delta t^2)$.
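As a quick numerical sanity check of the $(\Delta t)^2$ scaling, here is a minimal Monte Carlo sketch (the rates, sample size and function name are just illustrative choices):

```python
import numpy as np

def prob_c_o_c(lam_plus, lam_minus, dt, n=1_000_000, seed=0):
    """Estimate P(C -> O -> C within [0, dt], with no further flip back to O)."""
    rng = np.random.default_rng(seed)
    t1 = rng.exponential(1.0 / lam_plus, n)   # waiting time in C before C -> O
    t2 = rng.exponential(1.0 / lam_minus, n)  # waiting time in O before O -> C
    t3 = rng.exponential(1.0 / lam_plus, n)   # next waiting time in C
    # event: second transition happens by dt, third does not
    return np.mean((t1 + t2 <= dt) & (t1 + t2 + t3 > dt))

lam_plus, lam_minus = 2.0, 3.0
for dt in (0.1, 0.05, 0.025):
    p = prob_c_o_c(lam_plus, lam_minus, dt)
    # p / dt**2 should approach lam_plus * lam_minus / 2 = 3.0 as dt -> 0
    print(f"dt={dt}: p={p:.6f}, p/dt^2={p / dt ** 2:.3f}")
```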
 