Adel Makram
In maximization algorithms like those used in artificial intelligence, the posterior probability distribution is more likely to favour one or a few outcomes than the prior probability distribution. For example, when a robot learns its localization, the posterior probability given certain sensor measurements converges toward a few outcomes representing the 2D or 3D structure of the space. Can we conclude that the total entropy of the system is reduced after establishing the posterior probability distribution?
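To make the question concrete, here is a minimal sketch (with made-up numbers, not taken from any specific robot) of how an informative sensor measurement concentrates a uniform prior over grid cells into a peaked posterior, lowering the Shannon entropy of the distribution:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Uniform prior over 5 candidate robot positions (hypothetical grid cells).
prior = [0.2] * 5

# Hypothetical sensor likelihood: the measurement strongly favours cell 2.
likelihood = [0.05, 0.1, 0.7, 0.1, 0.05]

# Bayes' rule: posterior ∝ prior × likelihood, then normalize.
unnorm = [p * l for p, l in zip(prior, likelihood)]
z = sum(unnorm)
posterior = [u / z for u in unnorm]

print(f"prior entropy:     {entropy(prior):.3f} bits")  # ≈ 2.322 (log2 of 5)
print(f"posterior entropy: {entropy(posterior):.3f} bits")  # lower than prior
```

Here the entropy drops because the likelihood is peaked; whether the posterior entropy is lower than the prior entropy in general is exactly what the question asks.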