This question is hard to explain. If what I wrote makes no sense to you, please let me know so that I can fix it.
Suppose a region where the weather changes like a Markov chain between sunny, cloudy and rainy. You can't observe the weather directly, but you have a sensor that gives some information about it. However, this sensor is faulty: if the weather is sunny, say, it has a 0.8 probability of reporting sunny and a 0.2 probability of reporting cloudy, and so on (you have a complete matrix describing the errors). You use a Bayes filter to find a probability distribution over the weather, and you keep updating this distribution as you receive new measurements from the sensor.
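To make the setup concrete, here is a minimal sketch of a single Bayes-filter step for this weather example. The transition and sensor matrices are made-up placeholders chosen only to match the description above (sunny reported correctly 0.8 of the time, rainy always reported correctly), not actual values from the problem:

```python
import numpy as np

states = ["sunny", "cloudy", "rainy"]

# T[i, j] = P(next state = j | current state = i)  (hypothetical values)
T = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# E[i, k] = P(sensor reports k | true state = i)  (hypothetical values)
E = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.0, 1.0]])   # "rainy" is only ever reported when it rains

def filter_step(belief, observation):
    """One Bayes-filter update: predict with T, then correct with the sensor model."""
    predicted = belief @ T                      # P(x_t | z_1..z_{t-1})
    corrected = predicted * E[:, observation]   # multiply by likelihood P(z_t | x_t)
    return corrected / corrected.sum()          # normalise

belief = np.array([1/3, 1/3, 1/3])              # uniform prior
belief = filter_step(belief, observation=2)     # sensor reported "rainy"
print(dict(zip(states, belief.round(3))))       # collapses to rain with probability 1
```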
But would our distributions change if we were "in the future"? That is, imagine we are looking at the measurements from last week and we see a day on which the sensor reported rainy. We look at the sensor error matrix and see that on a rainy day the sensor reports rainy without error. If we now calculate the probability distribution for the day before the rainy one, we should assign a probability of 1 to rain and 0 to the other states (because if the sensor says it rained, it means it actually rained). In the first scenario, though, you wouldn't have obtained this distribution.
So my question is: given a complete series of measurements from a hidden Markov model, how can we calculate the probability distributions for the states of the system at different times?
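To illustrate the difference between the two situations, here is a rough forward-backward (smoothing) sketch reusing the same hypothetical matrices as above and an arbitrary observation sequence. The filtered distribution for each day uses only the measurements up to that day, while the smoothed one conditions on the whole series, so the two can differ for earlier days:

```python
import numpy as np

T = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
E = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.0, 1.0]])
prior = np.array([1/3, 1/3, 1/3])

obs = [0, 1, 2]          # sensor said: sunny, cloudy, rainy (hypothetical sequence)
n, S = len(obs), 3

# Forward pass: alpha[t, i] proportional to P(x_t = i, z_1..z_t)
alpha = np.zeros((n, S))
alpha[0] = prior * E[:, obs[0]]
for t in range(1, n):
    alpha[t] = (alpha[t-1] @ T) * E[:, obs[t]]

# Backward pass: beta[t, i] = P(z_{t+1}..z_T | x_t = i)
beta = np.ones((n, S))
for t in range(n - 2, -1, -1):
    beta[t] = T @ (E[:, obs[t+1]] * beta[t+1])

filtered = alpha / alpha.sum(axis=1, keepdims=True)
smoothed = alpha * beta
smoothed /= smoothed.sum(axis=1, keepdims=True)

print("filtered:", filtered.round(3))   # uses measurements up to each day only
print("smoothed:", smoothed.round(3))   # uses the whole measurement series
```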