Random process derived from Markov process

Mubeena
I have a query about a random process derived from a Markov process. I have been stuck on this problem for more than two weeks.
Let r(t) be a finite-state Markov jump process described by
\begin{alignat*}{1}
\lim_{dt\rightarrow 0}\frac{Pr\{r(t+dt)=j/r(t)=i\}}{dt} & =q_{ij}
\end{alignat*}
when i \ne j, where q_{ij} is the transition rate, i.e. the probability per unit time that r(t) makes a transition from state i to state j. Now let r(\rho(t)) be a random process derived from r(t) via a time change \rho(t), which is defined by
\begin{alignat*}{1}
\frac{d}{dt}\rho(t)=f(r(\rho(t))),\qquad\rho(0)=0
\end{alignat*}
Here f(\cdot) is a piecewise continuous, real-valued function of the state r(\rho(t)). In this case, can we describe the random process r(\rho(t)) by
\begin{alignat*}{1}
\lim_{dt\rightarrow 0}\frac{\mathrm{Pr}\{r(\rho(t+dt))=j/r(\rho(t))=i\}}{\rho(t+dt)-\rho(t)} =q_{ij},\qquad i\ne j\\
\end{alignat*}
 
I'll try to fix up the question a little:

Let r(t) be a finite-state Markov jump process described by
\begin{alignat*}{1}
\lim_{dt\rightarrow 0}\frac{Pr\{r(t+dt)=j \ | \ r(t)=i\}}{dt} & =q_{ij}
\end{alignat*} when i \ne j, and where q_{ij} is the transition rate and represents the probability per unit time that r(t) makes a transition from state i to state j.

For a given real valued piecewise continuous function f(), define \rho(t) by
\begin{alignat*}{1}
\frac{d}{dt}\rho(t)=f(r(\rho(t))),\qquad\rho(0)=0
\end{alignat*}

Does the random process r(\rho(t)) satisfy the following?

\begin{alignat*}{1}
\lim_{dt\rightarrow 0}\frac{\mathrm{Pr}\{r(\rho(t+dt))=j \ | \ r(\rho(t))=i\}}{\rho(t+dt)-\rho(t)} =q_{ij},\qquad i\ne j\\
\end{alignat*}
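To make the construction concrete, here is a rough simulation sketch (the two-state generator Q, the speed function f, and the values of t0 and dt are made up purely for illustration, and it assumes f > 0 so that \rho is strictly increasing). While the chain sits in state i, \rho grows at constant rate f(i), so an internal holding time ds costs ds/f(i) units of external time:

import numpy as np

rng = np.random.default_rng(0)

# Made-up two-state generator (off-diagonal entries are the rates q_ij).
Q = np.array([[-2.0,  2.0],
              [ 3.0, -3.0]])
# Made-up positive speed function f(i) > 0, so rho is strictly increasing.
f = np.array([1.0, 0.5])

def simulate(T):
    """Return jump times (in external time t) and the states entered.

    The chain r is simulated in its own internal time s; while it sits in
    state i, rho grows at rate f(i), so an internal holding time ds costs
    ds / f(i) units of external time t."""
    t, state = 0.0, 0
    times, states = [0.0], [0]
    while t < T:
        rate = -Q[state, state]
        ds = rng.exponential(1.0 / rate)   # internal holding time
        t += ds / f[state]                 # external time spent in this state
        state = 1 - state                  # with two states, always the other one
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

# Crude check of the proposed limit at t0 = 1 with a small increment dt.
t0, dt, n_paths = 1.0, 0.01, 50_000
n_i = n_ij = 0
for _ in range(n_paths):
    times, states = simulate(t0 + 2 * dt)
    s0 = states[np.searchsorted(times, t0, side="right") - 1]
    s1 = states[np.searchsorted(times, t0 + dt, side="right") - 1]
    if s0 == 0:
        n_i += 1
        n_ij += int(s1 == 1)

# Given r(rho(t0)) = 0, rho(t0+dt) - rho(t0) is approximately f(0)*dt.
print("estimate:", (n_ij / n_i) / (f[0] * dt), "  q_01 =", Q[0, 1])

For small dt, the printed estimate should land near q_{01} = 2 if the proposed description of r(\rho(t)) is right.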
 
Hey Mubeena and welcome to the forums.

For this proposition, I have a gut feeling it is true, but only if you have specific conditions on the monotonic behavior of ρ(t).

If ρ(t) goes up and down, then essentially you are messing up the ordering in the conditional statement: the Markov property relates time t to time t + dt, and if ρ(t) starts decreasing, that forward direction in time for the conditional distribution gets "reversed".

In short, if ρ(t) is decreasing, then ρ(t+dt) < ρ(t).

If my reasoning holds, then my best guess is that you can show that the Markovian condition fails because of the above.
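To spell that out a bit (just a sketch): if ρ is decreasing on [t, t+dt], then ρ(t+dt) < ρ(t), so the numerator
\begin{alignat*}{1}
\Pr\{r(\rho(t+dt))=j \ | \ r(\rho(t))=i\}
\end{alignat*}
conditions the chain at an earlier internal time on its value at a later internal time, which is a statement about the time-reversed chain, and the denominator \rho(t+dt)-\rho(t) is negative. So there is no reason to expect that ratio to converge to q_{ij}.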
 
Hi Stephen and Chiro,
Thank you very much for your help.
By your arguments, if I assume ρ(t) to be monotonically increasing, by assuming f(r(ρ(t))) > 0 for all t, then the last equality, \lim_{dt\rightarrow 0}\Pr\{r(\rho(t+dt))=j \ | \ r(\rho(t))=i\}/(\rho(t+dt)-\rho(t))=q_{ij}, holds, right?
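Heuristically (just a sketch, and ignoring the fact that ρ itself depends on the path of r), I would argue: conditioned on r(\rho(t))=i and with f(i)>0, for small dt,
\begin{alignat*}{1}
\rho(t+dt)-\rho(t) & = f(i)\,dt+o(dt)>0,\\
\Pr\{r(\rho(t+dt))=j \ | \ r(\rho(t))=i\} & = q_{ij}\bigl(\rho(t+dt)-\rho(t)\bigr)+o\bigl(\rho(t+dt)-\rho(t)\bigr),
\end{alignat*}
so dividing and letting dt\rightarrow 0 gives q_{ij}.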
 
If you want to prove it, you will need to show that the limit has the same form when the function is monotonic.

Once you formalize this in definitions you should be OK (I think).
 