mertcan said:
Hi, first I would like to point out that the first question asks for the rate at which production goes from up to down, and if there is a rate I think the result should include a time unit. But when I look at the solution, the result consists only of numbers (a product of probabilities); there is no time unit. Could you explain why this is correct? Since the definition of a rate is the number of times something happens within a certain period, why don't we have any time unit?
If you have a discrete-time Markov chain, the times are ##t = 0,1,2,\ldots##, and ##P_{ij}= P\{X(t+1) = j \mid X(t) = i \}##. A transition takes place at every unit of time, although it may be a transition from a state to itself (so not really a change of state at all). So, for example, if the time unit is 1 day, the rate of going from the "up" states ##U## to the "down" states ##D## is
$$R_{U \to D} = \sum_{j \in D} \sum_{i \in U} \pi_i P_{ij} $$
This will be a "dimensionless" fraction less than 1, to be read as a rate per time step, here per day. For example, if ##R_{U \to D} = 1/20##, that means that, over the long run, there is about a 5% chance of a breakdown on any given operating day.
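As a concrete illustration, here is a minimal numerical sketch. The 3-state machine, its transition probabilities, and the split into up/down states are all made up for the example; NumPy is used only for the linear algebra.

```python
import numpy as np

# Hypothetical 3-state machine: states 0, 1 are "up", state 2 is "down".
# One-step (daily) transition matrix P; rows sum to 1. Numbers are made up.
P = np.array([
    [0.90, 0.07, 0.03],   # up (good) -> up (good) / up (worn) / down
    [0.00, 0.85, 0.15],   # up (worn) -> stays worn or breaks down
    [0.60, 0.00, 0.40],   # down      -> repaired to good, or still down
])

# Stationary distribution: solve pi P = pi with sum(pi) = 1,
# i.e. take the left eigenvector of P for eigenvalue 1 and normalize it.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

up, down = [0, 1], [2]

# Long-run rate of up -> down transitions (a dimensionless fraction per day).
R_up_down = sum(pi[i] * P[i, j] for i in up for j in down)
print("pi =", pi)
print("R_{U->D} =", R_up_down)
```

The number itself only has meaning relative to the chosen time step: the same chain written with 1-hour steps would give a (different) fraction per hour.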
If you have a continuous-time Markov chain, the times are real numbers ##t \geq 0##. Now the system remains in a state ##i## for an exponentially distributed random amount of time ##T_i##, then jumps to another state ##j \neq i## with some probability ##q_{ij}##. If ##\mu_i = E(T_i)##, the transition rate from state ##i## to state ##j \neq i## is ##a_{ij} = q_{ij}/\mu_i##, and the diagonal element of the transition-rate matrix (the generator) is the negative quantity ##a_{ii} = - 1/\mu_i##. In this case the long-run average rate of transition from up states ##U## to down states ##D## is
$$R_{U \to D} = \sum_{j \in D} \sum_{i \in U} \pi_i a_{ij} $$
This is no longer a dimensionless number; it has the dimensions of a ##\text{rate} = 1/\text{time}##.
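Here is the analogous sketch for the continuous-time case, again with made-up numbers. The generator below uses days as the time unit, so the resulting rate comes out in breakdowns per day.

```python
import numpy as np

# Hypothetical 3-state continuous-time version: states 0, 1 "up", 2 "down".
# Generator (transition-rate) matrix A in units of 1/day; rows sum to 0,
# with a_ii = -1/mu_i and a_ij = q_ij / mu_i for j != i. Numbers are made up.
A = np.array([
    [-0.10,  0.07,  0.03],   # leaves "good" at total rate 0.10 per day
    [ 0.00, -0.20,  0.20],   # leaves "worn" at total rate 0.20 per day
    [ 2.00,  0.00, -2.00],   # repaired at rate 2.00 per day
])

# Stationary distribution: solve pi A = 0 with sum(pi) = 1.
n = A.shape[0]
M = np.vstack([A.T, np.ones(n)])            # append the normalization row
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(M, b, rcond=None)

up, down = [0, 1], [2]

# Long-run rate of up -> down transitions; this has units of 1/day.
R_up_down = sum(pi[i] * A[i, j] for i in up for j in down)
print("pi =", pi)
print("R_{U->D} =", R_up_down, "breakdowns per day (long-run average)")
```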
As for proportions of time up or down, there is a difference between the discrete-time and continuous-time cases. Look first at the discrete-time case. Over a large number ##N## of discrete time periods ##1,2,\ldots, N##, we might observe the system at each time and get a record of results something like
uuudduuuuuuddddduduudduuuuuddduuuuuuddddddduuuu,
where u = "up" and d = "down". If ##N_u## and ##N_d## are the numbers of u's and d's (with ##N_u + N_d = N##), then the observed fraction of time the system is up is ##f_u(N)= N_u/N##. Of course, ##N_u## and ##f_u(N)## are random for any fixed, finite ##N##, but in the limit ##N \to \infty## the fraction ##f_u(N)## converges to ##f_u##, a definite, non-random quantity. That is just the long-run proportion of time the system is in one of the states in ##U##, so ##f_u = \sum_{i \in U} \pi_i##.
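A quick simulation sketch (reusing the made-up ##P## from the first example) produces exactly this kind of u/d record; for large ##N## the observed fraction should settle near ##\sum_{i \in U} \pi_i##.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same made-up discrete-time transition matrix and up/down split as above.
P = np.array([[0.90, 0.07, 0.03],
              [0.00, 0.85, 0.15],
              [0.60, 0.00, 0.40]])
up = {0, 1}

# Simulate N daily steps and record the observed fraction of "up" days.
N, state, n_up = 200_000, 0, 0
for _ in range(N):
    state = int(rng.choice(3, p=P[state]))   # one step of the chain
    n_up += state in up

print("observed f_u(N) =", n_up / N)   # approaches sum of pi_i over up states
```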
In the continuous-time case the situation is different. Now, over a long time interval ##[0,T]##, the fraction of time the system is up depends both on how often the system visits the up states and on how long it stays there on each visit. If ##\pi_i## now denotes the stationary distribution of the embedded jump chain, the visit probabilities must be weighted by the mean holding times ##\mu_i## and normalized, giving
$$f_u = \frac{\sum_{i \in U} \pi_i \mu_i}{\sum_{j} \pi_j \mu_j}$$
in the continuous-time case.
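In code, using the made-up example once more (now specified through the jump probabilities ##q_{ij}## and mean holding times ##\mu_i##, chosen to match the generator in the earlier sketch), the weighting looks like this:

```python
import numpy as np

# Mean holding times mu_i (days) and embedded jump chain Q (q_ii = 0).
# These match the generator A used above: a_ii = -1/mu_i, a_ij = q_ij/mu_i.
mu = np.array([10.0, 5.0, 0.5])
Q = np.array([[0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# Stationary distribution of the embedded jump chain: pi Q = pi, sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(Q.T)
pi_jump = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi_jump = pi_jump / pi_jump.sum()

up = [0, 1]

# Long-run fraction of *time* up: visit probabilities weighted by holding
# times, normalized over all states.
f_u = pi_jump[up] @ mu[up] / (pi_jump @ mu)
print("f_u =", f_u)
```

This agrees with summing the time-stationary distribution of the continuous-time chain (the ##\pi## used in the rate formula above) over the up states; the two parameterizations describe the same chain.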