Can you combine two transition probability matrices?

AI Thread Summary
Combining two transition probability matrices is mathematically feasible by averaging their elements, but care must be taken regarding the data behind the matrices. Each matrix must adhere to the properties of a valid transition matrix, where each row sums to one. If one matrix has zero occurrences for a state, it cannot be considered valid, and the resulting average may not sum to one. A better approach is to use a weighted average based on the amount of data each matrix represents, ensuring that matrices with more data have a greater influence on the final result. This method provides a more accurate representation of the underlying stochastic process.
bradyj7
Hello,

How would you combine (or is it possible to combine) two transition probability matrices?

Say for example I have 2 matrices that both measure the probability of moving from one state to another (the states are 1, 2 and 3)

Here is a picture

http://dl.dropbox.com/u/54057365/All/TPM1and2.JPG

Can these be combined into one probability matrix? Do you just take the average of each element?

To be more specific

Say for example I record the speed of a car each second on a road. Say I do this twice and get two TPMs with speed as the state. Each TPM has different probabilities.

So I would have two matrices like so

http://dl.dropbox.com/u/54057365/All/tpmbig.JPG

Can you combine them into one and assume that the resulting TPM is the "average" TPM for speed along that stretch of road?

Thank you

Many thanks

John
 
Hey bradyj7 and welcome to the forums.

The main requirement for a Markov chain is that you follow the basic probability axioms. Apart from that, there are really no restrictions on your transition matrix (every row has to sum to 1 and the matrix must be square).

As for things related to the process, as long as the process exhibits first-order conditional independence (the Markov property), then you are set.

Mathematically there is no problem with finding the scalar average of the two matrices: in other words, set your "average" transition matrix to 1/2 x (A+B), where A is the first matrix and B is the second. Since this process creates a matrix that preserves the row property (sum equals 1) as well as the other Kolmogorov axioms, the resulting matrix is a proper transition matrix.
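As a minimal sketch (with two made-up 3-state matrices, not the ones from the linked image), you can check numerically that the scalar average preserves the row sums:

Code:
import numpy as np

# Two made-up 3-state transition matrices (each row sums to 1).
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])
B = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.2, 0.3, 0.5]])

avg = 0.5 * (A + B)      # the scalar average described above
print(avg.sum(axis=1))   # [1. 1. 1.] -- each row still sums to 1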

Having said the above though, you need to be very careful about doing something like this because you need a deep understanding of the process to see what kinds of things you can do to get what you call an 'average' transition matrix.

It might be helpful to know a bit more about the process before you do something like this.

Also it would actually be wiser to create a system that is conditional on the car type and then take it from there.

This means that you have to formulate a probability model that is conditional on the car type, from which you should get a distribution for your new stochastic process based on the above information.

How much background do you have with probability/statistics?
 
Hello Chiro,

Thank you for taking the time to understand the question and for your detailed response.

I'm currently doing a diploma in statistics and have an undergraduate degree in engineering/maths so I would have an understanding of the basic concepts.

Mathematically there is no problem with finding the scalar average of the two matrices: in other words, set your "average" transition matrix to 1/2 x (A+B), where A is the first matrix and B is the second. Since this process creates a matrix that preserves the row property (sum equals 1) as well as the other Kolmogorov axioms, the resulting matrix is a proper transition matrix.

I thought that this would be OK. However, when I did it, the rows in the resulting 'average' matrix do not sum to 1. Take for example, these two matrices

http://dl.dropbox.com/u/54057365/All/TPM1and2.JPG

In the second matrix, no speed value of 2 was recorded, hence that row (row 2) sums to 0. The reason is that the speed is recorded every second, and for this particular trip the recorded value went from 1 directly to 3.

So if I take the average of these two matrices, I would get

0.65 0.05 0.3
0.25 0.25 0 ...Row 2 does not sum to 1?
0.1 0.25 0.85

Essentially, what I am doing is driving a car along a street and recording the speed every second. Then creating frequency and transition probability matrices. Then generating a speed profile of a car on the street using a markov chain.
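As a minimal sketch of that pipeline (with hypothetical speed states and data, not the actual recordings), the frequency matrix, the TPM, and a simulated profile might be built like this:

Code:
import numpy as np

# Hypothetical per-second speed states (1..3), not the real recordings.
speeds = [1, 1, 2, 3, 3, 2, 1, 2, 3, 3]
n_states = 3

# Frequency matrix: count transitions between consecutive seconds.
freq = np.zeros((n_states, n_states))
for s, t in zip(speeds, speeds[1:]):
    freq[s - 1, t - 1] += 1

# TPM: divide each row by its total (rows with no data stay all zero).
row_totals = freq.sum(axis=1, keepdims=True)
tpm = np.divide(freq, row_totals, out=np.zeros_like(freq), where=row_totals > 0)

# Generate a speed profile by simulating the Markov chain.
rng = np.random.default_rng(0)
state = speeds[0] - 1
profile = [state + 1]
for _ in range(20):
    state = rng.choice(n_states, p=tpm[state])
    profile.append(state + 1)
print(profile)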

The same car drives the route at the same time each day. Would this cover this requirement?

This means that you have to formulate a probability model that is conditional on the car type, from which you should get a distribution for your new stochastic process based on the above information.

Thank you

John
 
The matrix with the zero row (row 2) is not a valid transition matrix.
 
Hello,

I was thinking that this was the reason.

I have prepared a little spread sheet of my method if you wouldn't mind looking at it?

http://dl.dropbox.com/u/54057365/All/tpm3.JPG

A sample of recorded values is in Col A, and you will see a frequency matrix and a transition probability matrix.

As you pointed out row 2 is incorrect.

However, the next time the car drives the route, a value of 2 could be recorded. And in order to take the average of the two matrices, they would need to be the same size, correct?

Could you suggest a solution to this?

Many thanks

John
 
Hi Chiro,

Would you have any further thoughts on this?

Kind regards

John
 
bradyj7 said:
Hi Chiro,

Would you have any further thoughts on this?

Kind regards

John

Hey bradyj7.

Sorry dude, the timezone here in Australia is different and I forgot you made another reply (didn't come up in my inbox).

Whenever you have the case where you don't have any probability data, basically you stick with the approach of getting frequency information for each 'atomic' event, which in this case corresponds to each 'cell' in your matrix.

Once you get the frequency information you just divide by the relevant total and that is your empirical probability based on data.

If it's not based on data, then it will just be a probability that obeys the Kolmogorov axioms (i.e. each probability between 0 and 1, and all probabilities in each row adding up to 1).

After reading your experiment there is something I want to say about it and about the idea of 'averaging' probabilities:

From what you have said, the averaging idea is a good one, but only if there is an equal amount of data in each transition matrix. Let me explain:

Since all of the data is talking about the exact same process, then each matrix represents an 'independent' collection of probability information for the same process.

This means that you can do a special kind of average to combine the two matrices and based on what they represent, this is not only ok but expected.

Let's illustrate the point with a simple stochastic process (not Markov, but the same principles apply):

Let's have experiment information A = {1,2,3,4,5} with A_N = 15 and B = {3,6,11,14,15} with B_N = 49. Now they are all 'measurements' from the same experiment, but they are partitioned into sets A and B.

Now if we wanted to figure out the probability info taking into account both A and B we would calculate P(x) = [A(x) + B(x)]/(A_N + B_N).

But what if we had P_A(x) that represented a probability for data in A and P_B(x) that represents data in B?

We know P_A(x) = A(x)/A_N and P_B(x) = B(x)/B_N. Now our goal is to find P(x) in terms of P_A(x) and P_B(x) (which in your case is the two transition matrices).

P(x) = [A(x) + B(x)]/(A_N + B_N) = A(x)/[A_N + B_N] + B(x)/[A_N + B_N]

We now use the substitution A(x)/[A_N + B_N] = A(x)/A_N x [1 - B_N/(A_N+B_N)] and B(x)/[A_N+B_N] = B(x)/B_N x [1 - A_N/(A_N+B_N)].

But we have A(x)/A_N as our P_A(x) and B(x)/B_N as our P_B(x) which means that we can calculate P(x) as:

P(x) = P_A(x) x [1 - B_N/(A_N+B_N)] + P_B(x) x [1 - A_N/(A_N+B_N)]

This is what you have to do. Basically your A_N and B_N are the 'total histogram counts' for each row and your P_A(x) and P_B(x) correspond to probability information in each cell of your matrix.

The way this works is that if one matrix has a lot of data compared to the other, it will be weighted more which if you think about it makes sense: imagine if one matrix had 10000 bits of data and your second matrix had 2.

The above is not only the how but the why: because all the data is part of the same process, all I am doing is showing how to go from partitioned data (your two matrices) to unpartitioned data (your 'averaged' matrix).
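A minimal sketch of this combination rule, applied row by row (combine_two is a hypothetical helper name; the per-row counts stand in for A_N and B_N in the derivation above):

Code:
import numpy as np

def combine_two(P_A, P_B, counts_A, counts_B):
    """Count-weighted combination of two transition matrices, row by row.
    counts_A[r] and counts_B[r] are the total transition counts for row r
    (the A_N and B_N of the derivation above)."""
    P = np.zeros_like(P_A, dtype=float)
    for r in range(P_A.shape[0]):
        total = counts_A[r] + counts_B[r]
        if total == 0:
            continue  # no data for this row in either matrix
        w_A = 1 - counts_B[r] / total   # = counts_A[r] / total
        w_B = 1 - counts_A[r] / total   # = counts_B[r] / total
        P[r] = w_A * P_A[r] + w_B * P_B[r]
    return P

A row with data in only one matrix gets all of its weight from that matrix, which handles the zero-row case from earlier in the thread.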

If you have any other questions I'll do my best to answer them.
 
Hello Chiro,

It's cool, I'm based in Ireland so we are in opposite time zones.

Thank you for replying again and for your detailed explanation.

You said that

From what you have said, the averaging idea is a good idea but only if there is an equal amount of data in each transition matrix.

Do you mean by this that each matrix would need to have the same number of transition states, i.e. 1,2,3,4,5, or that each matrix would need to be created from the same number of 'recorded values'?

I think that you refer to this again here

The way this works is that if one matrix has a lot of data compared to the other, it will be weighted more which if you think about it makes sense: imagine if one matrix had 10000 bits of data and your second matrix had 2.

Am I correct?

Whichever the case may be, can I just say that for each experiment there will always be 20 recorded values (if this is what you are referring to). But I cannot say for certain that each experiment will contain at least one occurrence of the states 1,2,3,4,5. These are speed values and are dependent on how fast an individual accelerates. For instance, somebody's speed value could go from 1 to 2 to 3, but for somebody who accelerates fast it may go from 1 to 3.

Just so I understand what you're proposing, I put together another two examples. Both examples have 20 recorded values. Example 1 has at least one occurrence of each state and all probabilities sum to 1. Example 2 has no occurrences of state 4.

http://dl.dropbox.com/u/54057365/All/TPM1.JPG
http://dl.dropbox.com/u/54057365/All/TPM4.JPG

So that I can be certain that I understand you correctly would you mind posting what you would calculate as being the 'average' of these two matrices based on what you explained in the previous post?

Many thanks

John
 
I think I may have babbled on a lot more than usual. I'll say the specifics:

Consider your excel spreadsheets:

http://dl.dropbox.com/u/54057365/All/TPM1.JPG
http://dl.dropbox.com/u/54057365/All/TPM4.JPG

Now consider the formula we derived for calculating the final probability transition matrix:

P(x) = P_A(x) x [1 - B_N/(A_N+B_N)] + P_B(x) x [1 - A_N/(A_N+B_N)]

In your particular problem: P(x) represents the probability in your given row that you are working on where x is the cell in that row.

P_A(x) represents the probability value in the same row at cell x for your first matrix.

P_B(x) is the same thing for your second matrix.

A_N is the total number of counts for the same row we are dealing with in matrix A

B_N is the total number of counts for the same row we are dealing with in matrix B

Now as an example of actual values:

Let's look at the first row of both matrices:

A_N = 6, B_N = 2, P_A(1) = 0.17, P_B(1) = 0

P(1) for this row = 0.17 x (1 - 2/(6+2)) + 0 = 0.17 x 3/4 = 0.1275

Because the first matrix has more data it is weighted more.

You just apply the same formula for each cell in the given row. You can use a spreadsheet to do this, and everything will be calculated instantly.

If you study the formula for a little bit you will see that it is basically a 'weighted average' of the two matrices where the one that has the most data is given a bigger weight (closer to 1).

If both matrices have the exact same amount of data the weight will be exactly half since B_N/(A_N+B_N) = A_N/(A_N+B_N) = 1/2 and (1 - 1/2) = 1/2.
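A quick check of that worked cell in code (same numbers as above):

Code:
A_N, B_N = 6, 2           # row totals for the first row of each matrix
P_A1, P_B1 = 0.17, 0.0    # first-row, first-column probabilities
P1 = P_A1 * (1 - B_N / (A_N + B_N)) + P_B1 * (1 - A_N / (A_N + B_N))
print(P1)                 # 0.1275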
 
  • #10
Hello Chiro,

Thanks very much for the help, I understand it all now!

Cheers

John
 
  • #11
Hello Chiro,

I have one further question about the formula for calculating the weighted average of the matrices, if you don't mind?

What would the formula be for 3 matrices?

I've attempted it below, but I'm unsure what goes in the place that I've marked with **?**: is it P_A(x) or P_B(x), or does it matter? Or is the equation incomplete?

P(x) = P_A(x) x [1 - B_N/(A_N+B_N+C_N)] + P_B(x) x [1 - A_N/(A_N+B_N+C_N)] + P_C(x) x [1 - **?**/(A_N+B_N+C_N)]

Thank you

John
 
  • #12
bradyj7 said:
Hello Chiro,

I have one further question about the formula for calculating the weighted average of the matrices, if you don't mind?

What would the formula be for 3 matrices?

I've attempted it below, but I'm unsure what goes in the place that I've marked with **?**: is it P_A(x) or P_B(x), or does it matter? Or is the equation incomplete?


P(x) = P_A(x) x [1 - B_N/(A_N+B_N+C_N)] + P_B(x) x [1 - A_N/(A_N+B_N+C_N)] + P_C(x) x [1 - **?**/(A_N+B_N+C_N)]

Thank you

John

Hey John.

Do you want the basic idea for N matrices or just the special case of N = 3?

I'd recommend discussing the general formula, although there will be a few extra symbols here and there.

The idea is exactly the same, in that you calculate the 'weights' of the data of each matrix with respect to the total data across all the matrices. Because we had two matrices, we know that w1 + w2 = 1, which is why we have the (1 - blah).

In the general sense we just use a simple formula to calculate weights and then multiply the P_blah(x) terms by the weights.

I'll wait for your answer before I go any further.
 
  • #13
Hello Chiro,

Yes, it would be great if you could describe the basic formula for N matrices. That way I'll be able to apply it to different numbers of matrices, and I'll also have a better understanding.

Thank you

John
 
  • #14
Well, the basic idea is that you weight everything by the number of data points in one row of one matrix relative to the total for that row across all the matrices.

Let's define a bit of notation.

Let N be the number of matrices you have.

Let N_(i,j) be the number of data points for the ith matrix for the jth row.

Let S_(i,j)(x) be the frequency count for the ith matrix in the jth row for the xth element.

From this we have P_(i,j)(x), the probability for the ith matrix for the jth row at the xth column. We can write this as:

P_(i,j)(x) = S_(i,j)(x) / N_(i,j)

Now we have all of this information at hand (exactly like in your spreadsheet).

Let P(r,x) be the 'combined' probability matrix for row r and column x corresponding to our final probability we wish to calculate.

This is given as:

P(r,x) = Sum (i = 1 to N) w_i P(i,r,x)

and our job is to determine w_i.

Note that we do this on a row-by-row basis and not on a matrix-by-matrix basis, unless each row in each matrix has the exact same amount of data; otherwise it's important to make sure you do this row by row.

Now it turns out calculating w_i is really easy.

All we have to do is consider the weight of the data of one matrix with respect to all the data in all the matrices.

Let's introduce the following variable T, where:

T = Sum (i = 1 to N) N_(i,r)

where r is the same row we are dealing with in our calculation.

w_i = N_(i,r) / T.

That is it!

All I have done is figured out the weight of the data of one matrix with respect to everything in total for a particular row and used that to denote the combination of how the matrix contributes to the overall data set.

Now take these formulas apply them across all matrices and all rows and you will get a final matrix that represents all the data you have collected.
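A minimal sketch of these formulas (hypothetical helper name). Note that weighting each row by w_i = N_(i,r)/T is equivalent to simply summing the raw frequency matrices and re-normalizing each row, which is the easiest way to implement it:

Code:
import numpy as np

def combine_matrices(freq_matrices):
    """Combine N frequency (count) matrices into one transition matrix.
    Summing the counts and normalizing each row gives exactly the
    count-weighted average with w_i = N_(i,r) / T described above."""
    total = sum(freq_matrices)                     # element-wise sum of counts
    row_totals = total.sum(axis=1, keepdims=True)  # T for each row r
    return np.divide(total, row_totals,
                     out=np.zeros_like(total, dtype=float),
                     where=row_totals > 0)

The equivalence follows because Sum_i [N_(i,r)/T] x [S_(i,r)(x)/N_(i,r)] = Sum_i S_(i,r)(x)/T, i.e. the per-matrix counts cancel.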
 
  • #15
Hello Chiro,

Thank you for this. I understand the theory behind what you are saying, but I'm struggling to see how you derive a specific formula for N = some number from this.

Would you mind posting the formula for N=3 so I can compare it to N=2?

I really appreciate your help.

John
 
  • #16
bradyj7 said:
Hello Chiro,

Thank you for this. I understand the theory behind what you are saying, but I'm struggling to see how you derive a specific formula for N = some number from this.

Would you mind posting the formula for N=3 so I can compare it to N=2?

I really appreciate your help.

John

Ok let's see how we go.

Let's assume we are using the standard conventions that have been used in this thread for the N's, P(x)'s and so on.

First let's look at the N = 2 case.

If we have N_A, N_B, P_A(x), P_B(x), and let's say that our row is fixed, with T = N_A + N_B. Now we have

w_1 = N_A/(N_A+N_B) and w_2 = N_B/(N_A+N_B)

But we know that N_A/(N_A+N_B) + N_B/(N_A+N_B) = 1 using some simple algebra.

This means that w_1 = 1 - N_B/(N_A+N_B) and w_2 = 1 - N_A/(N_A+N_B) which gives our formula for the N = 2 case. The derivation was a little different and was in a different form, but this is why it was the case.

Now for N = 3.

Let's define N_A, N_B, and N_C in the usual way, P_A(x), P_B(x) and P_C(x) in the usual way, and T = N_A + N_B + N_C.

w_1 = N_A/T, w_2 = N_B/T, w_3 = N_C/T

Now if you wanted the same 'form' as the earlier derivation, we do the same thing by writing things in terms of 1 - blah, in the same way as for N = 2.

So we know N_A/T + N_B/T + N_C/T = 1 which means we can use the following substitutions:

w_1 = 1 - (N_B + N_C)/T, w_2 = 1 - (N_A + N_C)/T, w_3 = 1 - (N_A + N_B)/T and then use those weights like we did in the very first derivation of N = 2 case.

So basically the link is this: in the first derivation we looked at things in terms of complementary weights, using the fact that all weights sum to 1. In the second case we just use the simple idea of calculating a weight as the fraction of the data in one matrix for one row relative to the total data for all matrices in that row. By using the algebraic trick of writing weight_blah = 1 - (rest of weights), we get different expressions that mean the same thing, mathematically and intuitively.
 
  • #17
Hello Chiro,

Thank you for explaining this to me. It took me a few times going through it to understand it, but I get it now and can write the formula for any value of N. Result.

Could I ask you another question, if you don't mind? It is in relation to Markov Chain states.

You have already seen from my earlier posts that I am recording the speed of a car as it travels down a street, creating a transition probability matrix of the probability of moving from one speed to another, and modelling this with a Markov chain. The output is ultimately a speed-time profile of the car.

I've created a very simple spreadsheet to illustrate the process (These are not real values, I made this up to keep the values small)

http://dl.dropbox.com/u/54057365/All/forum1.JPG

I understand all this.

My question is to do with the following:

I've found a paper online where somebody is doing the same thing (i.e. modelling the speed-time profile of a car).

However, they have used speed AND acceleration as the states in the Markov chain.

As far as I know, they compute the acceleration directly from the recorded speed values.

Here are two figures illustrating their process.

http://dl.dropbox.com/u/54057365/All/mw1.JPG
http://dl.dropbox.com/u/54057365/All/mw2.JPG

So my question is, if they did compute the acceleration directly from the recorded speed values, would that not make one of the states redundant, as you calculate one directly from the other? They are recorded at fixed time intervals (i.e. 1 sec) and they're both scalar quantities. What benefit, if any, would using acceleration as a state bring to the model?

I hope that this is enough information for you to understand it. I would be grateful for comments that you could make on this.

Thank you

John
 
  • #18
Hey bradyj7.

There really is no difference between your way and their way.

Instead of you having before and after velocities, they have a before velocity and an acceleration, in which the after velocity is implied through the relationship after velocity = before velocity + acceleration * delta_time.

If the delta_time was known and constant, you could convert your matrix to their matrix very easily using a spreadsheet, and if you used the same kind of resolution as they did, then it would be more or less the same kind of system (the resolution they use is very high, since they use a massive matrix).

They are not creating any excess redundancy, but yes, in terms of the information given in the transition matrix, having two velocities is exactly the same as having a fixed time delta, one velocity and an acceleration for that time delta.

In terms of benefit, that is a good question.

I can't really answer this to be honest. I don't really know anything about the domain of these kinds of experiments and what people are trying to do.

Since you are doing the experiment, it would help if you think about what these kinds of experiments are trying to achieve.

For example, let's say we wanted to do this to get an idea of fuel use in a car. Now fuel use might have a direct correlation with acceleration: if you accelerate a lot then you tend to use a lot more fuel than if you just coast at a certain velocity. In this case, having things in terms of acceleration might help you build an inferential statement to test.

The above is just a guess and I really don't know anything about cars in any major detail, but I think the statement should give you an idea that could help you come up with your own answer.
 
  • #19
Hello Chiro,

You helped me out a lot with a probability question a few months ago.

I was wondering if I could ask you another question?

I'm trying to work out a conditional probability.

I have hundreds of measurements of two variables (1) Start Time and (2) Journey time.

I've created a frequency table.

https://dl.dropbox.com/u/54057365/All/forum.JPG

The frequency table shows the number of occurrences for each variable pair in my data set. For example there were 5 journeys that started at 8am and lasted 5 minutes.

How would I work out the probable Journey time given a start time?

P(JT | ST) = P(JT ∩ ST)/P(ST)

For example 9am

P(JT | ST) = P(JT ∩ ST)/P(ST)

P(JT | 9am) = ?

Thanks for your help

John
 
  • #20
bradyj7 said:
Hello Chiro,

You helped me out a lot with a probability question a few months ago.

I was wondering if I could ask you another question?

I'm trying to work out a conditional probability.

I have hundreds of measurements of two variables (1) Start Time and (2) Journey time.

I've created a frequency table.

https://dl.dropbox.com/u/54057365/All/forum.JPG

The frequency table shows the number of occurrences for each variable pair in my data set. For example there were 5 journeys that started at 8am and lasted 5 minutes.

How would I work out the probable Journey time given a start time?

P(JT | ST) = P(JT ∩ ST)/P(ST)

For example 9am

P(JT | ST) = P(JT ∩ ST)/P(ST)

P(JT | 9am) = ?

Thanks for your help

John

Hey bradyj7.

In terms of P(JT and ST), you simply divide the appropriate cell by the total cell count (which in this case is 1528, in the bottom right-hand corner).

To calculate P(ST), you look at the sum on the right-hand side and divide it by the total cell count (again, 1528).

To generate the actual distribution, you do this for all cells of JT and ST. You don't have to use individual cells: you can group cells together. If you wish to do this, then you need to make sure you include all the information and that you don't use overlapping cells. If you use overlapping cells then your events will not be mutually exclusive, and it will screw up your distribution (you could possibly get an invalid probability).

So once you have picked your divisions for JT and ST, calculate P(JT=y|ST=x) = P(JT=y,ST=x)/P(ST=x) and then store this result somewhere.

For your example, the y value can be any JT value, but in reality we want JT to be more constrained. If you don't constrain JT=y to some specific y, you'll get a probability of 1, and the reason is that if you allow all possible events of JT then you are calculating P(JT|ST=x) = P(JT,ST=x)/P(ST=x) = P(U,ST=x)/P(ST=x) = P(ST=x)/P(ST=x) = 1.

For now let's assume y corresponds to JT < 4. We calculate this as:

P(JT<4|ST=9am) = P(JT<4,ST=9am)/P(ST=9am). Now P(JT<4,ST=9am) = (5+18)/1528 and P(ST=9am) = 121/1528. This implies P(JT<4|ST=9am) = 23/121.

In terms of the P(X,Y) part (i.e. P(X AND Y)), you simply add up all the corresponding frequency cells for these events and divide by the total. The reason is that X AND Y corresponds to both X and Y being satisfied. In terms of P(X=x), you look at the appropriate marginal frequency count with respect to the total frequency count.
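A minimal sketch of that calculation (the JT bin labels are made up; only the 5, 18 and 121 figures come from the example above):

Code:
# Row of the frequency table for ST = 9am; bin labels are hypothetical,
# but the 5 and 18 counts (JT < 4) and the row total 121 match the example.
freq_9am = {"JT 0-2": 5, "JT 2-4": 18, "JT 4-6": 40, "JT 6+": 58}

st_total = sum(freq_9am.values())                        # 121 journeys at 9am
p = (freq_9am["JT 0-2"] + freq_9am["JT 2-4"]) / st_total
print(p)                                                 # 23/121, about 0.19

Note that the grand total of 1528 cancels in the ratio, which is why only the row counts appear.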
 
  • #21
Hello Chiro,

Thank you for your reply and for taking the time to explain this to me.

I'm just trying to understand the theory.

Is this type of problem a conditional expectation problem, as described here: http://en.wikipedia.org/wiki/Conditional_expectation ?

Can I expand the problem a little?

Say I wanted to calculate the expected journey time given Journey Start Time and Distance travelled.

P(JT | ST, D)

I've expanded the frequency table with actual measurements.

For example, there were 25 journeys that started at 8am, were 3 miles long, and had a journey time of 5 minutes. The journey times are rounded to the nearest 5 minutes.

https://dl.dropbox.com/u/54057365/All/forum.1JPG.JPG

To calculate the expected journey time given that the start time is 8am and the distance is 3 miles, would the calculation be the following (taking the midpoint of each bin to minimize the error)?

(5/2*3 + (5 + 5/2)*25 + (10+5/2)*33 + (15+5/2)*7 + (20+5/2)*2 ) / ( 3 + 25 + 33 + 7 + 2)

This appears to give a reasonable answer but I'm not sure if it is correct.

I understand most of your explanation below, but would you mind doing a sample calculation using the method that you have described?

The total number of observations is 17,711.

Thanks very much

John
 
  • #22
bradyj7 said:
Hello Chiro,

Thank you for your reply and for taking the time to explain this to me.

I'm just trying to understand the theory.

Is this type of problem a conditional expectation problem, as described here: http://en.wikipedia.org/wiki/Conditional_expectation ?

Can I expand the problem a little?

Say I wanted to calculate the expected journey time given Journey Start Time and Distance travelled.

P(JT | ST, D)

I've expanded the frequency table with actual measurements.

For example, there were 25 journeys that started at 8am, were 3 miles long, and had a journey time of 5 minutes. The journey times are rounded to the nearest 5 minutes.

https://dl.dropbox.com/u/54057365/All/forum.1JPG.JPG

To calculate the expected journey time given that the start time is 8am and the distance is 3 miles, would the calculation be the following (taking the midpoint of each bin to minimize the error)?

(5/2*3 + (5 + 5/2)*25 + (10+5/2)*33 + (15+5/2)*7 + (20+5/2)*2 ) / ( 3 + 25 + 33 + 7 + 2)

This appears to give a reasonable answer but I'm not sure if it is correct.

I understand most of your explanation below, but would you mind doing a sample calculation using the method that you have described?

The total number of observations is 17,711.

Thanks very much

John

The problem you posed in the response before the one quoted above is simply a calculation of conditional probability. In other words, to figure out, say, P(A), we use #TimesAHappens/#TotalNumberOfTimesAllUp.

Conditional expectation has the same interpretation and definition as ordinary expectation, except that you are dealing with a conditional probability distribution as opposed to a non-conditional one. It's still an ordinary probability distribution, but what conditioning does is limit the probability to a subset of the whole probability space rather than the whole space.

To understand this, consider P(A|U) = P(A and U)/P(U) = P(A)/1 = P(A). If instead of U we used some smaller subset B, we would be considering A with respect to B instead of U. In other words, we are considering an event with respect to an existing subset, and you can draw Venn diagrams to appreciate this in more detail.

In terms of the conditional expectation, you will have to set ST and D to some specific values. They can be multiple values of ST and D (for example ST < 4), but they have to be specific.

You use the above method in my last response to calculate the distribution.

Once you have this distribution, use the normal expectation methods. In other words, you will have a probability for every value of JT, and then, just like a normal expectation, you calculate Sigma j(t)p(t), where j(t) corresponds to the JT value at t and p(t) is the probability for that value of JT, calculated as above.

Think of it in terms that you have P(JT = x| ST,D) and then you are adding P(JT=1|ST,D)*1 + P(JT=2|ST,D)*2 + ... + and so on. This will give you a conditional expectation for JT given the ST and D restrictions.
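A minimal sketch of that sum (made-up JT values and conditional probabilities, purely for illustration):

Code:
# Hypothetical conditional distribution P(JT = t | ST, D).
jt_values = [5, 10, 15, 20]          # journey times (minutes)
probs     = [0.1, 0.5, 0.3, 0.1]     # conditional probabilities, sum to 1

e_jt = sum(t * p for t, p in zip(jt_values, probs))
print(e_jt)                          # 12.0 minutes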
 
  • #23
Hi Chiro,

You have helped me a lot with a few previous questions on probability. Thanks for that. Would you have time to look at another question that I have? I've been trying to figure it out myself but have not had much luck.

Thanks

John

Hi there,

I have a question about probability, conditional expectations, copula functions and their use in a Monte carlo simulation. I'd appreciate any help or comments that you can offer.

I'll describe briefly what I am trying to simulate and I'll ask the specific question at the end.

I am trying to create a Monte Carlo simulation of the travel patterns of electric vehicles for an electricity grid impact analysis.

I have recorded the following variables for a fleet of cars over 6 months.

1. Journey Start Time
2. Journey End time
3. Journey Time
4. Journey Distance
5. The parking time between journeys

From the data above, I have extracted the following variables.

1. Departure time from Home (the first journey of the day)
2. Arrival Time Home (the last journey of the day)
3. The number of journeys made during the day (sum of the journeys)
4. The total distance traveled (sum of the individual journeys)

I found that these variables were correlated. I modeled the distributions of the variables and the dependence structure between them using a normal copula function. http://en.wikipedia.org/wiki/Copula_(probability_theory)

So the simulation begins by generating the 4 random variables above.

I also want to simulate the journeys during the day (this simulation above only generates the first and last journey time).

So in order to do this, I created two tables using all the data in the database as follows:

1. The Expected journey time given journey start time and journey distance. E(Journey Time | Start Time, Distance)

2. The expected parking time given journey stop time and the time already parked during the day: E(Parking Time | Stop Time, Time already parked during the day)

The reason why I am using the "time already parked during the day" is because this prevents the parking time going over 24 hours.

This is a lot to take in, so I have made a picture illustrating an example of the entire process.

https://dl.dropbox.com/u/54057365/All/pic%20sim.JPG

I've put my question in the green box. The question is: theoretically speaking, should the simulated arrival time home equal the arrival time home obtained by adding the expected journey times and parking times throughout the day to the simulated departure time in the morning?

I am finding that they are not equal, and I don't know if they should be. My initial thoughts are that they probably won't be, because of the expected journey time and parking time tables.

I hope that I have explained this well enough.

I'd appreciate any help and comments.

Thanks

John
 
  • #24
Hey bradyj7.

I'm just wondering if you could write down your question in mathematical form. It seems like you want to find some kind of expectation or sums of expectations.

Also you might want to look at this result:

http://en.wikipedia.org/wiki/Law_of_total_expectation

I got to take off now, but I'll take a closer look later and wait for your response.
 
  • #25
Hi Chiro,

Thank you for taking the time to look at the problem.

I'm not sure exactly how to express the question in mathematical form, but you are right in saying that it is a sum of expectations problem.

Can I just say, if you need me to provide more detail on any part of the simulation, I can, I just didn't want to clog up the thread with unnecessary information.

As I described, the simulation begins by generating the 4 random variables (modelled with the copula function) as demonstrated in part 1 of the example

https://dl.dropbox.com/u/54057365/All/pic%20sim.JPG

The key thing is that this part generates the "Departure time from Home in the morning" and the "Arrival Time Home in the evening". It also generates the number of journeys during the day and the total distance traveled during the day.

In part 2, the distances of the individual journeys are generated from distributions. I believe the method is called the 'stick-breaking construction' in probability theory.

Now the hard part. The simulation constructs the start times and end times of the journeys during the day. It does this using the initially simulated 'departure time from home' value and two expectation tables that I have created using the collected data.

1. The expected journey time given journey start time and distance E(JT | ST, D)
2. The expected parking time given journey end time and the time already parked during the day, E(PT | Stop Time, Time already parked during the day)

So taking the example that I prepared, I presume the formula looks like this:

Given 8am departure time, 3 journey distances (12, 10, 18 miles).

So,

8am + E(JT | 8am, 12 miles) + E(PT | 8.30am, 0) + E(JT | 11am, 10 miles) + E(PT | 11.30am, 2.5 hours) + E(JT | 6pm, 18 miles) = ?

So to recap, my question is: theoretically, should the sum of the expectations above plus the 'departure time in the morning' equal the generated 'arrival time home in the evening' (i.e. 7pm)?

The problem that I am having is that they are not equal, but I'm not sure if they should be.

I appreciate your advice.

Thanks

John
 
  • #26
It might be helpful if you can tell me the criteria that you ultimately want to find.

I'm assuming it has to do with the parking time since you want it to be less than 24 hours, but maybe you could just outline the criteria for parking (between certain hours, constraints on a parking "session", etc).

Typically if you have a specific distribution (even if it's a complicated multi-variable joint distribution) and you want to, say, calculate a probability or expectation under a constraint, the first thing is to outline what the constraint is in the simplest terms and then work towards the definition rather than the reverse.

So I recommend you just outline your end goal for your current problem so we can work backwards from there to the mathematical constraints.
 
  • #27
This sounds like a good idea.

Basically my end goal is to develop a Monte Carlo simulation approach for modelling the power demands of electric cars. Obviously, the times during the day when the cars will be plugged in are when they are parked. The whole purpose of the simulation is to simulate the travel patterns of cars in order to determine when they are parked and likely to charge.

The reason why it is a Monte Carlo simulation is that vehicle travel behaviour is stochastic in nature, and I wanted a tool to generate stochastic travel patterns so that realistic demands on the grid can be determined. The reason why I used the copula function is that the 4 variables (departure time from home, arrival time home, the number of journeys, and the total distance travelled during the day) are correlated.

Here is the correlation matrix:

https://dl.dropbox.com/u/54057365/All/corr.JPG

Departure time and arrival time refer to the departure from and arrival at the home. As an example, depart time on day 1 is negatively correlated with distance on day 1 (-0.38); this implies that the later a person leaves home, the shorter the distance travelled.

I am happy with the theory up to this point.

Basically now I'm trying to work out a way in which to determine the start times and end time of journeys during the day and the parking times between the journeys.

So at this point I have the following:
1. The start time of the first journey
2. The number of journeys
3. The total distance travelled

So next it simulates journey distances as described in the previous post.

I thought that creating expectation tables for journey times and parking times, using the large number of actual observations in the dataset would be a logical approach to "piecing together" the movements during the day.

The expected journey times are reasonable given the journey start time and distance to travel.

It is difficult to determine the factors that influence the parking time of a car, so I thought the end time of a journey and the time already parked during the day were reasonable. For example, if the journey ends at night then one would assume that the parking time will be long. The reason why I am using the 'time already parked during the day' is that this prevents the parking time going over 24 hours. The expected parking times become very large (>24 hours) if you do not use this constraint.

So that is it. I'm open to any changes in the structure that you could suggest.

One option that I was thinking of was, at the beginning of the simulation, not generating an 'arrival time home'. Instead, take the arrival time home as being the sum of the expected journey times and parking times during the day. Can you do this?

Another option would be to scrap the expected parking time table and just generate random parking times depending on certain inputs?

I have a very large database of travel patterns, so I have the capability of generating all sorts of distributions, expectations, etc.

Thanks for your time

John
 
  • #28
I will have a closer look later on, but one thing you definitely want to think about is whether journey times affect subsequent parking times and whether parking times affect subsequent journey times.

We assume that the time of day affects parking times and journey times (and you have data for it), but if the actual journey times can be considered independent of the parking times, then basically what you can do is add up independent random variables corresponding to the different parking and journey times and then look at the properties of that total random variable (where you can look at the expectation).

However, if they are not independent, then you will need a more complex Markovian-style model, where the next random variable (so if you were travelling, you are now parked, and if you were parked, you are now travelling) depends on the state of the previous one. This can be done with the right Markov model, in which you can look at a distribution not only for so many parking and journey events, but also for a restricted period of time (so basically at some point you have an event where the parking or journey time, or both, are 0).

The independent case would mean that for finding expectations you simply sum them, but if they have strong dependencies, then this will not work.
 
  • #29
Hi Chiro,

Thank you for your advice; I had not thought of the problem like that. I honestly don't know whether or not they should be treated as independent or dependent events. One would assume that they are dependent. What is your opinion?

So are you saying that if we assume that they are independent events, then the current sum-of-expectations method is okay provided that they are not strongly correlated? And that the arrival time home calculated using the sum-of-expectations method does not theoretically have to equal the arrival time home that was generated using the copula function?

This might sound like a stupid question, but to investigate whether they are correlated, would I just calculate the Pearson correlation between the journey times and the subsequent parking times?

I don't quite follow this bit
then basically what you can do is add up independent random variables corresponding to the different parking and journey times and then look at the properties of that total random variable (where you can look at the expectation).

Have I not done this by creating the expected journey time and parking time expectation tables?

With regards to the second option, I believe that they are actually dependent events, and I would be extremely interested in learning more about this method, if you would care to explain it in more detail. Personally, I think that this is the better option. I would like it to be as realistic as possible.

I only know the basics of Markov chains, but I'm sure I could follow a complex model.

Just in relation to the expected journey time and parking time tables: I think that the expected journey times are quite reasonable given the journey start time and distance (obviously journey time is related to distance). The expected parking times are not very reasonable.

I have the start times, end times, dates, distances, and parking times of many, many journeys, so I can generate pretty much any distribution.

Look forward to hearing your suggestion.

Thanks for your time

John
 
  • #30
Basically, you want to think about a continuous-time chain to model the probabilities involved in switching between two main states: parking to journey and journey to parking (parking to parking and journey to journey are just zero probabilities).

But the other issue is that you will need to look at how to factor in the other parameters, like the time of day you start an event and its duration.

From your data and your knowledge of the subject, do you have any rules of thumb for assumptions about what some of the basic conditional distributions should look like?

Are you aware of the setup of continuous-time markov chains?
 
  • #31
Hi Chiro,

Thanks for the advice.

Yes, I do know the basics of continuous-time Markov chains. I find this free Excel add-in very good for simulating Markov chains, mind you I have only used it for discrete problems.

http://www.me.utexas.edu/~jensen/ORMM/excel/markov.html

In terms of the transitions, would they not be 1 because parking has to follow a journey and vice versa?

How would you suggest factoring in the starting time of an event and its duration? I don't really know how you would do this with a Markov chain.

Apologies, when you say "basic conditional distributions", I'm not sure what you mean. I could investigate certain conditional distributions? I know the conditional expected journey times and parking times.

So just to recap: the 4 variables generated from the copula function are okay, then the distances of the journeys are worked out, and then the Markov chain would work out the start times and durations of journeys and parking times?

Thank you

John
 
  • #32
Hello Chiro,

I have been thinking further about this problem and I'm going to investigate further the relationship between journey times and parking times. I am also going to look at the relationship between successive journey times (the 2nd journey could be a return journey).

To be honest, what I am struggling with at this point is which relationship to look at. Would you look at conditional expected values or conditional probability distributions?

Would you look at conditional probability distributions? For example, given journey time = x and journey end time = y, the distribution of parking times? And then generate a random number from that distribution for the parking time?

Would this mean that I would have "a lot" of conditional distributions, one for every possible x and y value above?

You can't look at conditional expected values if they are dependent right?

Apologies if these are basic questions.

Thanks

John
 
  • #33
Basically what you would have to do is break it up into a small number of intervals (which you have done) and then consider all the branches to get a complete set of conditional distributions.

So instead of making your conditional distribution based on a continuous variable, you make it based on a discrete one.

So in other words you restrict your parking and journey times to fit into "bins" and then you look at each conditional distribution for each of the branches.

For example if you allow the smallest time interval to be ten minutes: then you consider conditional distributions for total times in terms of lumps of these intervals.

So if you have n of these intervals, you will get 2^n branches. Some branches may have zero probabilities, but in general you will have 2^n individual branches corresponding to all the possibilities that you can take.

So an example might be P(Total Journey Time = 30 minutes| First 10 = Travel, Second 10 = Travel, Last Ten = Park) and any other attributes you need.

To count up total journey times, you basically sum up all the positive branches (i.e. all the intervals where you have a journey), and for parking you do the same with those.

Basically, what this will look like is a dependent binomial variable, and what you do is estimate these probabilities from your sample. From this you will have a distribution for n intervals given a history of what you did, and by considering whatever subset of these probabilities you wish, you can find things like the expectation.
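A minimal sketch of estimating these branch probabilities from data (hypothetical records; 1 = travelling in that interval, 0 = parked):

Code:
from collections import defaultdict

# Hypothetical day records: one state per interval (1 = travel, 0 = park).
records = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 1, 0],
]

# For each observed history (branch point), count what happens next.
counts = defaultdict(lambda: [0, 0])   # history -> [# next=park, # next=travel]
for day in records:
    for k in range(1, len(day)):
        counts[tuple(day[:k])][day[k]] += 1

# Estimated branch probabilities P(travel next | history).
for history, (n_park, n_travel) in sorted(counts.items()):
    print(history, n_travel / (n_park + n_travel))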
 
  • #34
Thanks for the advice,

In relation to your previous post could you elaborate on your suggestion of using a continuous time markov chain? What would you suggest using this for?

Thanks
 
  • #35
It's basically the same as a markov chain for discrete time except that the transition probabilities are based on continuous time.

So if you had say three events in continuous time, a continuous time markov model would be a 3x3 matrix (in terms of its transition matrix) but each entry would be a function of some continuous variable instead of a constant.

In terms of what it's used for: it's basically used for continuous time parameters instead of discrete time. That's the major difference, and if you had phenomena based on continuous time and you could solve for the right transition matrix (where its form is not too complicated), then it would be a lot better to use that.

But your model is not Markovian, which means that you can't really use this framework. One reason why I suggest not going down the continuous path is that a non-Markovian continuous-time model will be really, really complicated: so the suggestion was made to use a discrete binary branch tree where the number of intervals is low enough to keep it simple, but high enough to make the model useful as an analytic tool.
 
  • #36
Okay, thank you for explaining that and for your help.
 
  • #37
Hi Chiro,

A bit off topic, but just in relation to discrete Markov chains: is there a statistical test for the sample size of measurements to ensure that the transition probability matrix you construct significantly represents the process that you are trying to model?
 
  • #38
There is a general area of statistics that concerns itself with sample size and power calculation in relation to a variety of statistical tests.

But with regard to probabilities, that is a little different, because you are more interested in whether the distribution based on your sample is good enough to be used as some kind of population model, and typically most statistical tests are concerned with estimation of specific parameters (like, say, the mean) or non-parametric ones (i.e. distribution-invariant ones, like the median).

One thing you can do is to treat your probabilities as proportions and then consider an interval that corresponds to that probability given a sample size.

So your probabilities are proportions, just like in a Bernoulli trial, and you can then look at the power and sample-size calculations for getting one particular proportion in some region correctly or incorrectly, and then look at how that applies to the distribution and the individual probabilities.
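As a sketch, a standard normal-approximation confidence interval for one such proportion (a textbook frequentist formula, not something specific to this thread) could be computed as:

Code:
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% CI for a proportion (large-n assumption)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# e.g. a transition observed 23 times out of 121 opportunities
print(proportion_ci(23, 121))   # roughly (0.12, 0.26)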
 
  • #39
chiro said:
Basically what you would have to do is break it up into a small number of intervals (which you have done) and then consider all the branches to get a complete set of conditional distributions.

So instead of making your conditional distribution based on a continuous variable, you make it based on a discrete one.

So in other words you restrict your parking and journey times to fit into "bins" and then you look at each conditional distribution for each of the branches.

For example if you allow the smallest time interval to be ten minutes: then you consider conditional distributions for total times in terms of lumps of these intervals.

So if you have n of these intervals, you will get 2^n branches. Some branches may have zero probabilities, but in general you will have 2^n individual branches corresponding to all the possibilities that you can take.

So an example might be P(Total Journey Time = 30 minutes| First 10 = Travel, Second 10 = Travel, Last Ten = Park) and any other attributes you need.

To count up total journey times, you basically sum up all the positive branches (i.e. all the intervals where you have a journey), and for parking you do the same with those.

Basically, what this will look like is a dependent binomial variable, and what you do is estimate these probabilities from your sample. From this you will have a distribution for n intervals given a history of what you did, and by considering whatever subset of these probabilities you wish, you can find things like the expectation.

Hello Chiro,

Can I ask you a couple of questions regarding this post?

To help visualise the problem, here is a sample of raw data.

https://dl.dropbox.com/u/54057365/All/rawdata.JPG

So the first thing is to bin the journey and parking times. Say I select 5 minutes as the bin size; I'll have 288 bins. This is a lot of bins, but I presume 5 minutes is reasonable for short journeys and short parking times (a trip to the shop, etc.).

Apologies, I don't want to come across as stupid, but I don't fully understand what you mean by branches. And 2^288 is a lot of branches?

You provided an example

So an example might be P(Total Journey Time = 30 minutes| First 10 = Travel, Second 10 = Travel, Last Ten = Park) and any other attributes you need.

Could you explain what you mean by "travel" and "park"? A car would be parked during a journey??

From the sample data above, I have "binned" the journeys with a journey time of 30-35 minutes.

https://dl.dropbox.com/u/54057365/All/rawdata1.JPG

Would it be possible to show me by way of example what you mean by branches, and an example of a probability calculation? I would really appreciate it and it would help me understand it better.

I just can't visualise how to calculate (or generate) a journey time or parking time now that we are assuming that they are dependent events. Assuming independence was easier!

Are we generating or calculating a journey time given start time, distance, and the previous journey time and parking time?

I like the idea of calculating the probabilities first and then getting the expected values, but I just can't grasp which probabilities to calculate and how to do it. I know basic stuff!

Thank you for your time, I know that I have asked a lot of questions on this forum and I am grateful for the help.

John
 
  • #40
So by branches, I mean that for each outcome, chronologically, you have two options (i.e. park next or move next, based on what you're doing now), where there is a chronological element.

Think of it like modelling whether a stock goes up or down given its current state and its complete history.

In terms of estimating probabilities and getting confidence intervals, what you are doing is estimating a probability distribution with n probabilities, and this is equivalent to estimating the parameters of a binomial if you only have two outcomes per branch point: park or keep moving.

If you can only park or keep moving, then there are only two choices, which means you only have one probability to estimate. You can use the standard frequentist results to get the variance of the estimator, and also use programs to calculate the power and sample size for a given significance level, amongst other things. Something like this:

http://biostat.mc.vanderbilt.edu/wiki/Main/PowerSampleSize

If you have a distribution with more than two outcomes, you will want to use the multinomial.

Basically, the way I am suggesting you start off is to consider the branch model and then simplify as much as possible without losing the actual nature of your model (i.e. don't make it so simple that its accuracy degrades to the point of being useless).

So with your binned data, I'm a little confused, since you have both parking and journey data in a trip: I was under the impression you could only travel or be parked in one indivisible event, so maybe you could clarify what is going on.

The parameters like journey and parking times will be based on your branch distribution, but you can throw in some kind of residual term to take care of the finer details (like say if your branch size is 30 minutes, but you want to take into account variation to get probabilities for say 35 minutes or 46 minutes given that the time is between 30 and 60 minutes).

As for the distance, it would probably make sense for this to be a function of your actual times (even if it's an approximation), and if you had more data (like location: for example, whether you are in a busy city or a wide-open space), then you would incorporate this as well.

So now we have to look at calculating a branch.

Basically, if you have the data divided up into the branches, then it's just a matter of getting the mean and variance of the Bernoulli data, and this becomes the basic frequentist interval, for which you can use a normal distribution if your sample size is big enough.

So let's say you have a branch for the time t = [120,150) minutes, where your branch size is 30 minutes. You will have quite a few possibilities leading up to this time level (if a 30-minute interval is assumed, you will have 2^4 or 16 possibilities), so you will have 16 different sets of data saying how frequently you will park or move in those next 30 minutes.

You can calculate this stuff from your data to generate a table full of 1's and 0's and these are used to estimate your probabilities.

Now, as you have mentioned, this will be a lot of data, but if you have 30-minute intervals for 12 hours in total, that's just under 17 million probabilities for the entire distribution, which isn't actually too bad (a computer will calculate and deal with this data pretty quickly).

For fine grained stuff, as mentioned above you can add a distribution for getting values in that particular range.

With this technique you can adjust these coarse and fine-grained distributions (i.e. change the interval from 30 to 40 minutes but then change the fine-grained one to something with more complexity).
 
  • #41
Hello Chiro,

Thank you for explaining this. I've spent the day reading examples of binomial distributions, and this is a good method. I understand what you're suggesting, well most of it :)

Just to clarify the issue with the data. Each row represents the travel for one car per day.

https://dl.dropbox.com/u/54057365/All/rawdata.JPG

So the journey time is the length of the journey (minutes), and the parking time (minutes) is the time between the end of journey 1 and the start of journey 2. The 1st and 2nd rows in the sample only made 2 journeys that day, whereas the 3rd row had a 3rd journey. It had a 4th journey too, but I left it out so it would fit on the screen.

Altogether I have 20,000 rows (days of travel).

In my mind, I am treating a journey as an event, and naturally I am thinking that at the end of the event there is only one option, which is to park. Are you considering a journey as an event here? I'm not sure if I'm following you correctly.

So when you say bin the data, do you mean bin the journey times (for example, sort the rows from small to large), work out the probabilities, and then do the same for the parking times?

Should I keep the order of the journeys as they are (Journey 1, Journey 2, Journey 3, etc.), or just treat them as one list? Like this:

https://dl.dropbox.com/u/54057365/All/rawdatalist.JPG

and then bin them (by journey time first) like this (the journey time column is sorted)?

https://dl.dropbox.com/u/54057365/All/rawdatalistsorted.JPG

If you treat them as one list, won't you lose the ability to capture the relationships between successive journeys? For example, a 2nd journey could be the return leg of a trip, and so be of similar length and duration.

I have a couple of questions about the remainder of the post but if it is okay I'll wait until I understand these points before continuing.

Thanks for your time

John
 
  • #42
Basically, the way I envisage this is that if you have journey data, you calculate the branch data by recoding it so that you get a 1 or a 0 for whether each interval was journey or parking, and these intervals become your bins.

So let's say you have a journey that goes for 1 hour and 20 minutes and then goes into parking for an hour. If you broke up this part of the trip, you would have 3 bins marked as journey and then another two bins (all half an hour in length) marked as parking.

This would result in a record of 1 1 1 0 0, and then you can take this data, build all the conditional frequencies, and estimate the probability parameter as well as its confidence interval.

So in terms of your conditional distributions you will have something like P(X4|X1=a, X2=b, X3=c), and the above record would be a realization with a, b, c all equal to 1.

This is the simplest bin structure with a fixed interval size.

So these give all the conditional distributions for each bin (i.e. at some time you not only have a specific journey/parking history, but you can, for example, sum up the journey/parking times and do probability/statistics with them).

So with this you can then use a computer and get distributions for whatever you want (for example: you can find the distributions for all parking times in some interval given a history, basically whatever you want).

If you want finer control, you can add a distribution within each cell or you can have some kind of hierarchical scheme (i.e. with multiple levels), but that's going to be up to you.

Basically, more resolution equals more complexity, so if you need to add complexity, add it; but if you don't, it's a good idea to review the model, and if its use is adequate then you know where you stand.
 
  • #43
Hey,

Oh, I understand what you mean now regarding the bins. This is what I was thinking a few days ago, but I wasn't sure if it was the same thing.

I've done this with some sample data. The bin sizes are 15 minutes.

https://dl.dropbox.com/u/54057365/All/sampletable.JPG

Would you mind me asking a couple of questions about the conditional frequencies and probabilities?

Would it be possible to show me an example of a conditional frequency and how to calculate it? Apologies if this is basic but I've never done something like this before.

For example, say you wanted to build the conditional frequency for parking time given a journey time of 15 minutes? Is this what you're suggesting for the conditional probabilities?

Do you work out the conditional frequency, then the probability and then the expected value?

So basically if you had 3 journeys you would repeat this process 3 times, calculating the expected journey time and parking time?

You suggested making the distances a function of journey times instead of generating them separately?

Currently the copula function generates a start time in the morning, the number of trips, and the total distance traveled during the day. I'm kinda keen to keep the copula function. I'm guessing that if I calculate the individual journey distances as a function of time, then they probably won't add up to the total distance generated by the copula.

I'm nearly there; I think I'll be confident putting this into practice with the large dataset once I know how to calculate the probabilities and expected values.

Thanks again for your time

John
 
  • #44
If you have journey times and parking times (with their lengths) and you convert this data to the binned branch structure, then you will have a complete distribution for any kind of binned journey and parking history that is accurate to that bin size, and you can increase the resolution by adding a finer-grained distribution within each bin.

So you will have a tonne of entries with say n bins (n columns) and a 1 or 0 in each cell corresponding to whether that entry was journey or parking. Also, before I forget, another point to note: if the bin size is too large you may have a lot of events going on in the one bin, and you have to think about that possibility.

So say you wanted a conditional distribution given that you spend at least k units (i.e. bins) parking. Then, instead of using all the records, you just select the records that meet that criterion.

Now if you have multiple matching records, you can form a sub-distribution from them, reflecting the variation in this subset as opposed to the whole data set.

So let's say you have a database and you say "I want a conditional distribution that represents the constraint that you must park in at least three bins". You use your database or statistical software to return this data (and this is why something like SQL is useful if you are doing complicated queries that your statistical software or Excel can't handle easily), and you get your constrained data.

Then, just like any distribution, you find the unique entries and their frequencies, and this new distribution is your conditional distribution P(X | there are at least three binned parking slots).

If you calculate the probabilities using the frequencies in this new subset of data you will get a distribution, but the state-space is now this subset and not the whole thing.

So basically all the process really is, is starting with a complete data set that represents the global distribution, taking the right slice of that data, and calculating the distribution of that slice.

Each conditional distribution refers to a particular slice and that distribution becomes its own random variable which has events that are a subset of the universal random variable that describes all of your data.

You can then do all the probability stuff (expectations, variances, whatever), but the only thing that has changed is the state-space: conditional distributions always look at a subset of the more general state-space, and that's all that is different.
 
  • #45
Thanks again,

I'm going to use bin sizes of 5 minutes to get a good level of detail and to ensure that events are not being missed.

Could we just do one example, just to be sure? This sample data has a bin size of 15 minutes.

https://dl.dropbox.com/u/54057365/All/sampletable.JPG

So say we take your example:

P(X | there are at least three binned parking slots, i.e. 45 minutes).

Then this data would be returned from the database.

https://dl.dropbox.com/u/54057365/All/examplesample.JPG

Would you mind demonstrating how you would calculate the probability and the expected value using the frequencies? I understand that this is very basic, but it's all new to me and it would be really helpful to see an example.

One last question: do you think that the time of day needs to be factored into the problem, or is it already factored in, in a way, because ultimately you are working out the parking times and journey times given one or the other?

Thanks for your time
 
  • #46
So in your example, you have four records.

Now get all the different possibilities that exist (i.e. all the unique records), and these become your events in your probability space.

In this case it seems that P(Bin_n = 1) = 1 for n > 3 (i.e. 4th bin and after) so this implies P(Bin_n = 0) = 0 for those same bins.

You have empty cells, and I don't know how you intend to handle them, but you do need to handle them somehow.

Then to get probabilities you simply take all the unique records and find the frequency of those in comparison to the others (just like you do with a normal set of unconditional data).

So you get those unique records and then you start to see where the variation is: I showed in the example above that for this data set there is no variation for n > 3 bins, but in general you should see lots of variation.

Now once you have your records you find out where the variation is and these become random variables in your joint distribution.

So to find the frequencies you look at what the variables are and you get the computer to count the occurrences of a particular attribute. So if you want to find P(Attribute1 = 1), you count the number of times this occurs, divide it by the total number of records in the database, and that's your probability estimate.

It's up to you to decide what an event is and what a random variable corresponds to: it can be atomic (can't be divided up any further) or conglomerate (made up of things that can be divided). But basically, the way to get probabilities is to fix an event definition, search for how many times that event occurs in your total data set, divide by the total number of items in the data set, and that is your probability for that event.
 
  • #47
Hello Chiro,

I was talking to you about a Markov chain model of vehicle velocities a few months ago. I'm building the model now, but I was wondering if you could comment on something.

You have probably forgotten so just to recap first.

I am recording the velocity of a car every second as it makes journeys. I have about 1000 journeys in total. I created a transition probability matrix of the probabilities of transitioning from one velocity to another. I intend to use the model to generate a synthetic velocity-time profile for a car.

So here is an example of an actual velocity profile for a car.

https://dl.dropbox.com/u/54057365/All/vel1.JPG

and this is an example of a velocity profile generated from the Markov chain.

https://dl.dropbox.com/u/54057365/All/vel2.JPG

Visually, you will notice that the actual profile is much smoother than the one generated by the Markov chain. In the actual profile, when the car accelerates to a high velocity the increase is smooth, but in the generated cycle the velocity fluctuates, as if the car were repeatedly slowing down and speeding up while accelerating to a high speed.

When you compute summary statistics for the profiles such as average velocity and standard deviation of the velocity, they appear similar in nature. But I'm curious about the jagged appearance of the generated profile.

Could you offer any insight as to what is happening?

Appreciate your comments.

John
 
  • #48
With regards to the jaggedness, typically this sort of thing is treated as a time-series, and there are all kinds of analyses and techniques for that, including ways to "smooth" the data.

The simplest one is known as Moving Average:

http://en.wikipedia.org/wiki/Moving_average

There are, of course, many more but the MA is a basic one.

As for the summary statistics, you might want to specify the constraints that you want to calculate (for example specific periods, conditional constraints, etc).

If you don't add constraints, then you just use integration: "bin" the values into small intervals, each with an associated probability, and then use numerical integration techniques to get both the mean and the variance of this approximated distribution.

Depending on the technique, the numerical methods can also smooth out the values, so that you get a kind of interpolation rather than a plain average or some other crude approximation.

As an example, let's say you bin the velocities into intervals of size 0.05, and the estimated probability density over the bin [1, 1.05) is 0.03. Then the mid-point scheme for the integral of x f(x) dx over this interval gives [(1 + 1.05)/2] * 0.03 * (1.05 - 1).

In general, you could use any quadrature scheme you wish and depending on the requirements and the data, some might be better than others: just depends on the problem and your needs.

There are other techniques to calculate the mean of a general signal or function not given any probability information if you want to pursue this as well, and you can use this formulation to get the variance of a signal.

By the above I mean doing it with just the v(t) data, instead of with a PDF of v(t), as you do with the usual x f(x) dx.

I can dig up the thread if you want.
 
  • #49
Thanks very much for your help Chiro. I was thinking of smoothing the data, so I will investigate the different methods. I'll try the MA, and I've used kernel smoothing before so I'll try that too.
 
  • #50
Hi Chiro,

I have been working away on the travel pattern model that you helped me with a few weeks ago. I need to bin the data at 5-minute intervals: it's important that the journey start/stop times in the output have a resolution of 5 minutes, because that matters in electricity grid modelling. A problem I have been having is that I have no observations in the dataset for a lot of the conditional probabilities.

It was suggested to me that a Bayesian probability approach could overcome this.

I don't know much about Bayesian probability, but I have been reading up on it over the last week; however, I have not been able to find an example of what I am trying to do.

Say, for example, I have the start times of journeys in the morning. I binned the data into 15-minute bins, so I have 96 bins in total (4*24 = 96). So a journey start time of 08:05 am would be in bin number 33 (counting bin 1 as 00:00-00:15).

As an example, here is the data for bins 33-50 (8am until 12:30pm).

https://dl.dropbox.com/u/54057365/All/bin.JPG

I've calculated the frequency density of the bins in the last column.

Would you know how I could do the following (this was suggested to me):

Taking a Dirichlet prior distribution over the probability of each bin in a multinomial model, you estimate the parameters. This way you get a non-zero probability for each bin: each estimated parameter is basically a prior parameter plus the observed frequency of data in that bin.

Would you know if this could be done in Excel?

Appreciate your comments.

Regards

John
 