carllacan
Hi.
I have a couple of simple questions about Bayesian filters, just to check that I'm grasping everything correctly.
I've been thinking about the difference between the prediction and update steps. I understand the "physical" difference: in the first one we calculate the probability of the world being in a certain state given that the system has performed a certain action u, that is [itex]p(x_t) = p(x_t|u_t, x_{t-1})p(x_{t-1})[/itex], and in the second step we calculate the probability that the world is in a certain state given that the system has measured some quantity z, that is [itex]p(x_t) = p(x_t|z_t, x_{t-1})p(x_{t-1})[/itex].
Now, what troubles me is that even though both are the same kind of calculation (finding the probability of some event through conditional probabilities on another event), they are performed in different ways in the Bayes filter algorithm:
Code:
Algorithm Bayes_filter(bel(x_{t-1}), u_t, z_t):
    for all x_t do
        bel_bar(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx
        bel(x_t) = η p(z_t | x_t) bel_bar(x_t)
    endfor
    return bel(x_t)
(here bel_bar is the predicted belief, the "bel" with a bar over it.)
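To make the two steps concrete for myself, I also tried a tiny discrete-state version in Python. This is just my own toy sketch (the ring-shaped state space, the motion noise and the hit probability are all made up by me), and I've guessed that the sum in the prediction step runs over the previous state [itex]x_{t-1}[/itex], which is part of what I'm asking about below:
Code:
import numpy as np

N = 10  # number of discrete states: a robot on a ring of N cells

def predict(bel, u, motion_noise=0.1):
    # Prediction step: bel_bar(x_t) = sum over x_{t-1} of p(x_t | u_t, x_{t-1}) bel(x_{t-1})
    bel_bar = np.zeros(N)
    for x_prev in range(N):
        # p(x_t | u_t, x_{t-1}): mostly land on the commanded cell, sometimes under/overshoot
        for step, p in [(u, 1 - 2 * motion_noise), (u - 1, motion_noise), (u + 1, motion_noise)]:
            bel_bar[(x_prev + step) % N] += p * bel[x_prev]
    return bel_bar

def update(bel_bar, z, hit_prob=0.8):
    # Update step: bel(x_t) = eta * p(z_t | x_t) * bel_bar(x_t)
    likelihood = np.full(N, (1 - hit_prob) / (N - 1))
    likelihood[z] = hit_prob            # p(z_t | x_t)
    bel = likelihood * bel_bar
    return bel / bel.sum()              # eta normalises the belief so it sums to 1

# Usage: start with a uniform belief, command one step forward, then measure cell 3
bel = np.full(N, 1.0 / N)
bel = update(predict(bel, u=1), z=3)
print(bel)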
I've come to the conclusion that this is because, while we do know [itex]p(x_t|u_t, x_{t-1})[/itex], that is, the results of our actions, we don't usually have direct information about [itex]p(x_t|z_t, x_{t-1})[/itex], but rather just [itex]p(z_t | x_t)[/itex], so we use Bayes' theorem to "transform" one conditional probability into the other. Does this make any sense?
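Written out explicitly, the transformation I have in mind is something like this (my own attempt, writing [itex]z_{1:t-1}[/itex] and [itex]u_{1:t}[/itex] for all earlier measurements and actions, [itex]\overline{bel}(x_t)[/itex] for the predicted belief, and assuming the Markov property so that [itex]z_t[/itex] depends only on [itex]x_t[/itex]):
[tex]bel(x_t) = p(x_t \mid z_t, z_{1:t-1}, u_{1:t}) = \frac{p(z_t \mid x_t)\,\overline{bel}(x_t)}{\int p(z_t \mid x_t)\,\overline{bel}(x_t)\,dx_t} = \eta\, p(z_t \mid x_t)\,\overline{bel}(x_t)[/tex]
so [itex]\eta[/itex] would just be the normalizer [itex]1 / p(z_t \mid z_{1:t-1}, u_{1:t})[/itex]. Is that the right way to see it?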
Also, a minor side question: on the third line of the algorithm there is an integral. Over which variable are we integrating there, [itex]x_t[/itex] or [itex]x_{t-1}[/itex]? Or both?
Thank you for your time. I would be grateful for any kind of correction about terminology or formality, so be as nitpicky as you can :-)