
Rate from one state to another state in a Markov chain

  1. Nov 27, 2016 #1
    upload_2016-11-27_13-23-43.png
    Hi. If you look at the first question, it asks for the rate at which production goes from up to down. If this is a rate, I would expect the result to carry a time unit, but the solution consists only of numbers (a product of probabilities) with no time unit. Could you explain why this is correct? Since a rate is the number of times something happens within a certain period, why is there no time unit?
     
  3. Nov 27, 2016 #2

    FactChecker (Science Advisor, Gold Member)

    This is a process that is happening in steps. The rate would be from one step to the next. What units the steps are taken in is not defined. It could be from one hour to the next, from one second to the next, or it might not have anything to do with time. It could be from one position on a Z axis to the next or from one computer run to the next, etc. etc. It could even be going backward in time if you were trying to analyze how something happened and were analyzing it step-by-step backward in time. All the calculations can be done and make sense without specifying what the steps mean.
     
    Last edited: Nov 27, 2016
  4. Nov 27, 2016 #3
    upload_2016-11-27_15-11-36.png
    Ok, thanks for the reply, but another question: the third part of the question asks us to find the proportion of up time. As far as I know, to find a proportion of time we use the formula $$\frac {\pi_i \mu_i} {\sum_{j=0}^n \pi_j \mu_j},$$ where ##\mu_i## is the expected amount of time spent in state ##i## before a transition. But the solution does not use the expected amount of time. Why not?
     


  5. Nov 27, 2016 #4

    FactChecker

    I don't know why you say that. ##\pi_i## is already the long-run proportion of time in state ##i##. Why would you care where it transitioned to ##i## from, or how long it was in that prior state ##j##?
     
  6. Nov 27, 2016 #5
    Ok then, could you clarify for me the difference between $$\frac {\pi_i \mu_i} {\sum_{j=0}^n \pi_j \mu_j}$$ and ##\pi_i##? I think I have some confusion here. I know $$\frac {\pi_i \mu_i} {\sum_{j=0}^n \pi_j \mu_j}$$ as a proportion of time, and that makes sense: the numerator is the expected amount of time spent in state ##i## and the denominator is the total expected time, so dividing them should give the proportion of time spent in state ##i##. Nevertheless, I know ##\pi_i## as ##N_i(m)/m##, where ##m## is the number of transitions (taken to infinity) and ##N_i(m)## is the number of visits to state ##i## in ##m## transitions. I have seen this definition in many books, and from it we can deduce that ##\pi_i## is actually not the proportion of time spent in state ##i##. What can you say about that?
     
    Last edited: Nov 27, 2016
  7. Nov 27, 2016 #6

    FactChecker

    I would interpret ##\pi_i## as already including any necessary calculations; it is already the long-run proportion of time in state ##i##. What more do you want? It's not clear to me that your ##\mu_i## is any different from ##\pi_i##. In that case, it would be wrong to do your calculation using ##\mu_i \pi_i##, because ##i## does not always transition to ##i##. Your equation might be saying something about the odds of staying in exactly the same state for one step (although I don't think that is right either, because neither factor is a transition probability). Is that what you want?
     
  8. Nov 27, 2016 #7
    upload_2016-11-27_21-57-15.png
    I cut this out of Ross's stochastic processes book, and I really wonder how you read it. I understand ##P_i## to be the proportion of time in state ##i##, and ##\pi_i## to be something different here.
     
  9. Nov 27, 2016 #8

    FactChecker

    Hard to tell without all the definitions of the symbols, but the first line of the solution in post #1 says that the ##\pi_i## are the "long-run proportions", so I would take that as given. It is possible that the meanings of the symbols are not the same throughout, but I doubt it. In Theorem 4.8.3 it's not clear to me what the exact definitions of ##P_i##, ##\pi_i##, and ##\mu_i## are.
     
    Last edited: Nov 28, 2016
  10. Nov 28, 2016 #9
    upload_2016-11-28_12-34-48.png
    By the way, @FactChecker, I also found the definition of ##\pi_i## in the same book I mentioned (Ross, stochastic processes); I hope it clarifies something for you. I still cannot see why the proportion of up time in post #1 is written just in terms of ##\pi_i##, whereas in post #7 we also use ##\mu_i##.
     
  12. Nov 28, 2016 #11

    FactChecker

    I am still struggling with the definitions of the notation. I don't see any post that defines ##\mu_i##. I have several other questions about the parts you have posted, and I don't think that I can help you further. About the only thing that seems unambiguous to me is the first line of the solution in post #1, which says that the ##\pi_i## are the "long-run proportions". But that seems to be all that is needed for that problem.
     
    Last edited: Nov 28, 2016
  13. Nov 28, 2016 #12
    upload_2016-11-27_21-57-15-png.109532.png

    upload_2016-11-28_20-13-3.png
    @FactChecker, I want to share one more attachment which may help you understand the definition of ##\mu_i##; it is the continuation of the proof in post #7. To make it easier to follow, I have included both the new attachment and the one from post #7 above. So let me repeat my question: why is the proportion of up time in post #1 written just in terms of ##\pi_i##, whereas in post #7 ##\mu_i## is also used to find the proportion of time?
     
  14. Nov 28, 2016 #13

    FactChecker

    I don't know. There are two statements of what the ##\pi_i## are:
    1) In the Example 4.21 solution, they are described as the "long-run proportions in state ##i##".
    If you accept this meaning, do you agree with the solution? I think it makes sense.

    2) In Theorem 4.8.3, they are described as ##1/E(\text{number of transitions between visits to } i)##.
    Is this different from the "long-run proportions in state ##i##"?
    And then there is ##P_i##, which again seems to me like it is the same as the "long-run proportion in state ##i##", although there are a lot of definitions to wade through.
    I will have to "tap out" and leave this to smarter people than me.
     
  15. Nov 28, 2016 #14

    Ray Vickson (Science Advisor, Homework Helper)

    If you have a discrete-time Markov chain, the times are t = 0,1,2,..., and ##P_{ij}= P\{X(t+1) = j | X(t) = i \}##. A transition takes place at every single unit of time, although it may be a transition from a state to itself (so not really a change of state at all). So, for example, if the time unit is 1 day, the rate of going (from the "up" states ##U## to the "down" states ##D##) is
    $$R_{U \to D} = \sum_{j \in D} \sum_{i \in U} \pi_i P_{ij} $$
    This will be a "dimensionless" fractional number < 1: the expected number of up-to-down transitions per step (here, per day). For example, if ##R_{U \to D} = 1/20##, that means that, over the long run, there is about a 5% chance of a breakdown in an operating day.
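    As a numerical sketch of this formula (all numbers invented: a hypothetical 3-state chain with states 0 and 1 "up" and state 2 "down"), one can compute ##\pi## as the left eigenvector of ##P## for eigenvalue 1 and then form ##R_{U \to D} = \sum_{j \in D} \sum_{i \in U} \pi_i P_{ij}##:

    ```python
    import numpy as np

    # Hypothetical 3-state transition matrix (numbers invented):
    # states 0, 1 are "up", state 2 is "down"
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.6, 0.0, 0.4]])
    U, D = [0, 1], [2]

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()

    # Long-run per-step rate of up -> down transitions (dimensionless)
    R = sum(pi[i] * P[i, j] for i in U for j in D)
    print(R)
    ```

    A useful sanity check: by global balance, the long-run rate of up-to-down transitions must equal the rate of down-to-up transitions (##\pi_2 P_{20}## here).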

    If you have a continuous-time Markov chain, the times are real number t ≥ 0. Now the system remains in a state ##i## for an exponentially-distributed random amount of time ##T_i##, then jumps to another state ##j \neq i## with some probability ##q_{ij}##. If ##\mu_i = E(T_i)##, the transition rate from state ##i## to state ##j \neq i## is ##a_{ij} = q_{ij}/\mu_i## and the diagonal element of the transition matrix is the negative quantity ##a_{ii} = - 1/\mu_i##. In this case the long-run average rate of transition from up states ##U## to down states ##D## is
    $$R_{U \to D} = \sum_{j \in D} \sum_{i \in U} \pi_i a_{ij} $$
    This is not a dimensionless number now, but has dimensions of a ##\text{rate} = 1/\text{time}##.
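    A corresponding sketch for the continuous-time case (generator entries invented, rates in 1/hour) solves ##\pi A = 0## with ##\sum_i \pi_i = 1## and then forms ##R_{U \to D} = \pi_{up}\, a_{up,down}##:

    ```python
    import numpy as np

    # Hypothetical 2-state generator matrix A (rates in 1/hour, invented):
    # state 0 = "up", state 1 = "down"
    A = np.array([[-0.1,  0.1],   # leaves "up" at rate 0.1 per hour
                  [ 2.0, -2.0]])  # repaired at rate 2.0 per hour

    # Stationary distribution solves pi @ A = 0 together with sum(pi) = 1
    M = np.vstack([A.T, np.ones(2)])
    pi, *_ = np.linalg.lstsq(M, np.array([0.0, 0.0, 1.0]), rcond=None)

    # Long-run rate of up -> down transitions, in units of 1/hour
    R = pi[0] * A[0, 1]
    print(R)
    ```

    Unlike the discrete-time quantity, this ##R## changes if the time unit changes (e.g. hours to days), which is exactly the dimensional behavior described above.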

    As for proportions of time up or down, there is a difference between the discrete-time and continuous-time cases. Look first at the discrete-time case. Over a large number N of discrete time periods 1,2,..., N, we might observe the system at each time and get a record of results something like
    uuudduuuuuuddddduduudduuuuuddduuuuuuddddddduuuu,
    where u = "up" and d = "down". If ##N_u## and ##N_d## are the numbers of u's and d's (with ##N_u + N_d = N##), then the observed fraction of time the system is up is ##f_u(N)= N_u/N##. Of course, ##N_u## and ##f_u(N)## are random for any fixed, finite ##N##, but in the limit ##N \to \infty## the fraction ##f_u(N)## converges to ##f_u##, a definite, non-random quantity. That is just the long-run proportion of time the system is in one of the states in ##U##, so is just ##f_u = \sum_{i \in U} \pi_i##.
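    This convergence can be illustrated by simulation. The sketch below (with an invented two-state transition matrix, for which balance gives ##\pi_{up} = 5/6##) records the observed fraction of time "up" over many steps:

    ```python
    import numpy as np

    # Invented two-state chain: state 0 = "up", state 1 = "down"
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    rng = np.random.default_rng(42)
    state, N_u, N = 0, 0, 200_000
    for _ in range(N):
        N_u += (state == 0)                  # count time steps spent "up"
        state = rng.choice(2, p=P[state])    # one Markov step

    f_u = N_u / N   # observed fraction of time up; should approach pi_up = 5/6
    print(f_u)
    ```

    For this ##P##, balance gives ##\pi_{up} \cdot 0.1 = \pi_{down} \cdot 0.5##, hence ##\pi_{up} = 5/6 \approx 0.833##, and the simulated ##f_u(N)## lands close to that value.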

    In the continuous-time case the situation is different. Now, over a long time interval ##[0,T]##, the fraction of time the system is up depends both on how often it visits up states and on how long it stays in them per visit. That is why you would get ##f_u = \sum_{i \in U} \pi_i \mu_i / \sum_{j} \pi_j \mu_j## in the continuous-time case, with ##\pi_i## now the embedded-chain visit proportions.
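    A one-line numerical sketch of that time-weighted proportion (all numbers invented: embedded-chain visit proportions ##\pi_i## and mean holding times ##\mu_i##, with states 0 and 1 "up"):

    ```python
    import numpy as np

    # Invented embedded-chain visit proportions and mean holding times (hours)
    pi = np.array([0.5, 0.3, 0.2])
    mu = np.array([4.0, 2.0, 0.5])
    U = [0, 1]  # "up" states

    weights = pi * mu                          # expected time contributed per visit frequency
    f_u = weights[U].sum() / weights.sum()     # proportion of *time* spent up
    print(f_u)
    ```

    Note that ##\sum_{i \in U} \pi_i = 0.8## here, while the time proportion ##f_u## is larger, because the up states have the longer holding times; this is exactly the distinction the thread is about.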
     