How to incorporate evidence into the parameters of a Bayesian network?

Greetings,
Maybe I'm getting a little confused, but I'm looking for resources that explain how to update the parameters of a Bayesian network as a result of observations.

There are various inference methods, but unless I'm missing something here, these methods produce a posterior distribution based on evidence (a set of observations of some nodes).

This posterior distribution is specific to the evidence, but there must be a way of incorporating it into the network, so that the prior distribution for subsequent observations is modified.

Can I simply use the posterior distribution after observation of evidence E as the prior for the next? This should correspond to a Bayesian update of the network, but I fear that there may be a catch here. Would this be the right way of continuously updating the parameters of the network as evidence is observed?
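To make the question concrete, here is a minimal sketch of the scheme I have in mind, assuming the simplest possible case: a single Bernoulli node with a conjugate Beta prior. The function and numbers below are purely illustrative.

```python
# Minimal sketch: posterior-as-prior updating for one Bernoulli
# parameter with a conjugate Beta prior. Purely illustrative.

def update_beta(alpha, beta, observation):
    """One conjugate update; the posterior Beta becomes the next prior."""
    if observation:              # success observed
        return alpha + 1.0, beta
    return alpha, beta + 1.0     # failure observed

# Start from a uniform prior Beta(1, 1) and feed in evidence one
# observation at a time, reusing each posterior as the next prior.
alpha, beta = 1.0, 1.0
for obs in [True, True, False, True]:
    alpha, beta = update_beta(alpha, beta, obs)

# This matches a single batch update on all four observations:
# Beta(1 + 3, 1 + 1) = Beta(4, 2), so the posterior mean is 4/6.
print(alpha, beta)
```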

Regards
 
sarikan said:
Greetings,

Can I simply use the posterior distribution after observation of evidence E as the prior for the next? This should correspond to a Bayesian update of the network, but I fear that there may be a catch here. Would this be the right way of continuously updating the parameters of the network as evidence is observed?
Regards
This paper may be helpful. You generally need to work with conjugate priors as you update, so that each posterior has the same form as the prior and can serve directly as the next prior. If you are starting with an uninformative prior, or with hyperparameters of the prior distributions, the process is more complicated.

http://cran.r-project.org/web/packages/LaplacesDemon/vignettes/BayesianInference.pdf
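As a rough sketch of what conjugate updating looks like for a discrete node in a network (assuming complete data; the variable, states, and counts below are made up, not taken from the paper):

```python
# Dirichlet-multinomial conjugate update for one row of a conditional
# probability table. All names and numbers here are hypothetical.
import numpy as np

# Dirichlet pseudo-counts for P(Rain | Season = winter), three states.
prior_counts = np.array([2.0, 1.0, 1.0])

# Observed counts of Rain's states in cases where Season = winter.
data_counts = np.array([5.0, 1.0, 0.0])

# Conjugacy: posterior hyperparameters = prior pseudo-counts + data counts.
posterior_counts = prior_counts + data_counts

# Posterior mean gives the updated CPT row (predictive distribution).
print(posterior_counts / posterior_counts.sum())   # [0.7, 0.2, 0.1]
```

With incomplete data (evidence on only some nodes), the counts are no longer directly available, which is one reason the process gets more complicated.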
 
Thanks. It looks like a good overview paper; I'll read it in detail. Conjugacy is useful, though I approach it with some hesitation, since my work may end up involving arbitrary distributions for which conjugacy may not be possible.
By the way, I think I've found the right term for what I'm looking for: adaptive inference in Bayesian networks. Noting it here for anyone else who may look for something similar in the future...

Regards
 
sarikan said:
Thanks. It looks like a good overview paper; I'll read it in detail. Conjugacy is useful, though I approach it with some hesitation, since my work may end up involving arbitrary distributions for which conjugacy may not be possible.
By the way, I think I've found the right term for what I'm looking for: adaptive inference in Bayesian networks. Noting it here for anyone else who may look for something similar in the future...

Regards

Here's a paper on adaptive Bayesian inference in tree-structured networks. It describes the sum-product updating algorithm for re-evaluating marginal distributions and other quantities, and proposes a faster alternative. I can't say I have any experience in this particular area of Bayesian applications, but it looks interesting.

http://people.cs.uchicago.edu/~osumer/nips07.pdf
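To give a flavor of the sum-product computation the paper speeds up (this is just the textbook idea on a toy chain, not the paper's algorithm; the numbers are made up):

```python
# Toy sum-product on a chain A -> B -> C: marginals are computed by
# passing messages instead of enumerating the full joint table.
import numpy as np

p_a = np.array([0.6, 0.4])                  # P(A)
p_b_given_a = np.array([[0.9, 0.1],         # P(B | A), rows indexed by A
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.7, 0.3],         # P(C | B), rows indexed by B
                        [0.1, 0.9]])

# Message from A to B sums out A; message from B to C sums out B.
msg_a_to_b = p_a @ p_b_given_a              # = P(B)
msg_b_to_c = msg_a_to_b @ p_c_given_b       # = P(C)
print(msg_b_to_c)                           # [0.472, 0.528]

# Evidence just reweights a message. Observing B = 0 clamps the
# message before summing out B, then we renormalize:
evidence_b = np.array([1.0, 0.0])
posterior_c = (msg_a_to_b * evidence_b) @ p_c_given_b
posterior_c /= posterior_c.sum()
print(posterior_c)                          # P(C | B = 0) = [0.7, 0.3]
```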
 