Difference between prediction and update on Bayesian Filters

In summary: the predict step doesn't use Bayes' theorem; it uses physics/math to advance the state, and it also grows the covariance matrix. Bayes' theorem comes into play in the update step, which touches both the state estimate and its uncertainty, and should decrease the uncertainty. The prediction needs no Bayes' theorem because we already "know" the required conditional probability; in the update we use Bayes' theorem to express the conditional probability we need in terms of the one we know.
  • #1
carllacan
Hi.

I have a couple of simple questions about Bayesian filters, just to check that I'm grasping everything correctly.

I've been thinking about the difference between the prediction and update steps. I understand the "physical" difference: in the first one we calculate the probability of the world being in a certain state given that the system has performed a certain action u, that is [itex]p(x_t) = p(x_t|u_t, x_{t-1})p(x_{t-1})[/itex], and in the second step we calculate the probability that the world is in a certain state given that the system has measured some quantity z, that is [itex]p(x_t) = p(x_t|z_t, x_{t-1})p(x_{t-1})[/itex].

Now, what troubles me is that even though both are the same kind of calculation (finding the probability of some event through conditional probabilities on another event), they are performed in different ways in the Bayes filter algorithm:
Code:
Algorithm Bayes_filter(bel(x_{t−1}), u_t, z_t):
    for all x_t do
        bel̄(x_t) = ∫ p(x_t | u_t, x_{t−1}) bel(x_{t−1}) dx
        bel(x_t) = η p(z_t | x_t) bel̄(x_t)
    endfor
    return bel(x_t)

I've come to the conclusion that this is because, while we know [itex]p(x_t|u_t, x_{t-1})[/itex], that is, the results of our actions, we don't usually have direct information about [itex]p(x_t|z_t, x_{t-1})[/itex], but rather just [itex]p(z_t | x_t)[/itex], so we use Bayes' theorem to "transform" one conditional probability into the other. Does this make any sense?

Also, minor side question: on the third line of the algorithm there is an integral. Over which variable are we integrating there, xt or xt-1? Or both?

Thank you for your time. I would be grateful to receive any kind of correction about terminology or formality, be as nitpicking as you can :-)
 
  • #2
The predict step doesn't use Bayes' theorem. It uses physics/math to advance the state. It also changes the covariance matrix, which should grow due to the prediction. It's the update step where Bayes' theorem comes into play. This should once again touch both the state estimate and its uncertainty. The update should decrease the uncertainty.
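A minimal one-dimensional Kalman-style sketch of that point (all model numbers here are invented for illustration, not from the thread): the predict step adds process noise and grows the variance, the update step fuses a measurement and shrinks it.

```python
def predict(x, P, F=1.0, Q=0.5):
    """Advance the state with the motion model; variance grows by the process noise Q."""
    return F * x, F * P * F + Q

def update(x, P, z, R=1.0):
    """Fuse a measurement z via the Kalman gain; variance shrinks."""
    K = P / (P + R)                 # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0
x_pred, P_pred = predict(x, P)
assert P_pred > P                   # prediction increased the variance
x_new, P_new = update(x_pred, P_pred, z=0.3)
assert P_new < P_pred               # update decreased it
```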
 
  • #3
D H said:
The predict step doesn't use Bayes' theorem. It uses physics/math to advance the state. It also changes the covariance matrix, which should grow due to the prediction. It's the update step where Bayes' theorem comes into play. This should once again touch both the state estimate and its uncertainty. The update should decrease the uncertainty.

Yes, that's what I mean: the prediction doesn't use Bayes' theorem because we "know" the conditional probability, but in the update step we use Bayes because we need to express the conditional probability in terms of what we know. Is that so?
 
  • #4
  • #5


Hello,

Thank you for your questions about Bayesian filters. You are correct in your understanding of the difference between the prediction and update steps. The prediction step uses the previous state estimate and the action taken to calculate the probability of the current state, while the update step uses the new measurement to refine that predicted probability.

The reason for the difference in calculation methods is the nature of the information available. As you mentioned, in the prediction step we have direct information about p(x_t|u_t, x_{t-1}), but in the update step we only have information about p(z_t | x_t). This is why we use Bayes' theorem to transform one conditional probability into the other when updating the probability of the current state.

As for your side question, the integral is taken over x_{t-1}: for each fixed value of x_t, we sum the contributions from every possible previous state x_{t-1}, each weighted by bel(x_{t-1}). Writing the differential as dx_{t-1} makes this explicit.
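For a discrete state space the integral over x_{t-1} becomes a sum, and the whole predict-and-update pair from the quoted algorithm can be sketched in a few lines. The states, motion model, and sensor model below are toy numbers invented for illustration:

```python
states = [0, 1, 2]
bel = [1/3, 1/3, 1/3]                       # bel(x_{t-1}): uniform prior

def p_motion(x_t, x_prev):                  # toy p(x_t | u_t, x_{t-1}): "move right"
    return 0.8 if x_t == (x_prev + 1) % 3 else 0.1

def p_sensor(z, x_t):                       # toy p(z_t | x_t): mostly accurate sensor
    return 0.9 if z == x_t else 0.05

# Predict: bel_bar(x_t) = sum over x_{t-1} of p(x_t | u_t, x_{t-1}) bel(x_{t-1})
bel_bar = [sum(p_motion(xt, xp) * bel[i] for i, xp in enumerate(states))
           for xt in states]

# Update: bel(x_t) = eta * p(z_t | x_t) * bel_bar(x_t)
z = 1
unnorm = [p_sensor(z, xt) * bb for xt, bb in zip(states, bel_bar)]
eta = 1 / sum(unnorm)                       # eta normalizes the posterior
bel_new = [eta * u for u in unnorm]
```

Note that the loop over x_t in the algorithm corresponds to the list comprehensions here, while the integral corresponds to the inner sum over the previous states.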

I hope this clarifies any confusion you had about the difference between prediction and update steps in Bayesian filters. Your understanding and terminology are correct, and I appreciate your attention to detail. If you have any further questions, please don't hesitate to ask.

 

1. What is the main difference between prediction and update in Bayesian Filters?

The main difference is that prediction uses the previous estimate and a model of the system dynamics to project the state to the next time step, while update uses a new measurement to refine that projected estimate.

2. How does prediction work in Bayesian Filters?

In prediction, the Bayesian Filter uses the prior probability distribution, which represents the estimated state of the system before incorporating new measurements, along with the system dynamics model to make a prediction of the state at the next time step.

3. What is the role of update in Bayesian Filters?

The update step in Bayesian Filters uses new measurements to adjust the prior probability distribution and produce a more accurate estimation of the system state. This helps to reduce uncertainty and improve the accuracy of the filter's predictions.

4. Can prediction and update be performed simultaneously in Bayesian Filters?

In practice, prediction and update are performed alternately rather than simultaneously. This is known as the prediction-update cycle: a new measurement is incorporated in the update step to refine the predicted state, and the updated estimate is then used as the prior for the next prediction step.

5. How do Bayesian Filters handle uncertainty in predictions and updates?

Bayesian Filters use probability distributions to represent uncertainty in predictions and updates. The prior distribution represents the uncertainty in predictions, while the posterior distribution represents the uncertainty in updates. By continuously incorporating new measurements, the filter is able to reduce uncertainty and produce more accurate predictions.
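A quick numeric check of that last claim, for the Gaussian case: fusing repeated independent measurements of the same quantity keeps shrinking the posterior variance, because precisions (inverse variances) add. The prior and measurement variances below are made-up toy values:

```python
P, R = 4.0, 1.0                     # prior variance, measurement variance
variances = [P]
for _ in range(5):
    P = 1.0 / (1.0 / P + 1.0 / R)   # precision adds with each measurement
    variances.append(P)

# Each fused measurement strictly reduces the uncertainty
assert all(a > b for a, b in zip(variances, variances[1:]))
```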
