MHB Bayesian parameter estimation via MCMC?

witziger_Fuchs
Hi folks.

I have the following question. I have a model M containing 20 adjustable parameters k = {k_j}.
I also have 40-50 measured temporal profiles e = {e_i} at my disposal.

I can use M to predict the experimental values after solving complex systems of differential equations. Consequently, I get m(k) = {m_i(k)}, which I can compare to e = {e_i}. Now I want to perform a Bayesian parameter estimation for the system. I am going to define a (first) prior distribution for the parameters k: p_0(k).
Afterwards, I want to get the posterior probability distribution of k: f_p(k) = p(k|e) = L(e|k)*p_0(k)/p(e).
(Here p(e) is, of course, a very complex multi-dimensional integral of L(e|k)*p_0(k) over k.) Naturally, I cannot compute it analytically.
It also stands to reason that a direct numerical calculation of f_p(k) (i.e. grid-based integration of L(e|k)*p_0(k) over 20 dimensions) would be computationally intractable. I read that Markov chain Monte Carlo (MCMC) methods should be used for computing quantities of interest characterising the posterior (such as the points of highest probability density and high-probability-density regions, whose bounds can serve as error bars).
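(To make the last point concrete: once posterior samples are available, however they were obtained, the high-probability-density interval is just the shortest interval containing a given posterior mass. A minimal one-dimensional sketch in Python/NumPy; the normal draws here are only a stand-in for real MCMC output:)

```python
import numpy as np

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the (1-D) samples."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    m = int(np.ceil(mass * n))          # number of samples the interval must cover
    # Width of every candidate interval of m consecutive sorted samples.
    widths = x[m - 1:] - x[:n - m + 1]
    i = np.argmin(widths)               # shortest one = HPD interval
    return x[i], x[i + m - 1]

# Stand-in posterior samples (assumed, for illustration only).
rng = np.random.default_rng(1)
draws = rng.normal(3.0, 0.5, size=20000)
lo, hi = hpd_interval(draws, 0.95)
```

For a unimodal posterior the bounds (lo, hi) are exactly the kind of error bars described above.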
To be frank, I am a novice in that field. Do you know any MCMC software freely available to academic researchers which could carry out all these operations, given a "black box" m(k) relying on solving differential equation systems?
If so, are you also aware of a beginner-friendly introduction to the concrete application of these techniques?

I'd be very grateful for your answers. Kind regards.
 
Hi witziger_Fuchs,

Welcome to MHB! :)

I just took a class on Bayesian probability, and we spent quite a bit of time on the Metropolis-Hastings algorithm for sampling from the posterior distribution of the parameters. We were not analyzing differential equations, though: in our data each indicator had two equally sized groups, and each observation was labeled $1$ to $n$ according to how it could be classified. The hardest part is probably choosing a good proposal distribution, since the target distribution only needs to be known up to a normalizing constant.

So while I don't know if I can give you a complete solution right now, I feel like it can be done through the MH algorithm or Gibbs sampling. Both are available through packages in R (free software), if you aren't familiar with it (also get RStudio for a nicer interface). I would search Google for "Metropolis hastings r" packages, and maybe read up on using this algorithm in the context of differential equations. Hope this is a start!
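For what it's worth, here is a minimal random-walk Metropolis-Hastings sketch in Python/NumPy. The exponential-decay model, the flat box prior, the noise level and the step size are all illustrative assumptions standing in for your real black-box m(k); the point is that only the unnormalized product L(e|k)*p_0(k) is ever evaluated, so p(e) never has to be computed:

```python
import numpy as np

# Hypothetical stand-in for the "black box" m(k): a decay profile at
# fixed time points. In the real problem this would call the ODE solver.
t = np.linspace(0.0, 5.0, 20)

def model(k):
    return k[0] * np.exp(-k[1] * t)

# Synthetic "measured" profile e with known parameters plus noise
# (purely for illustration).
rng = np.random.default_rng(0)
k_true = np.array([2.0, 0.7])
sigma = 0.05
e = model(k_true) + rng.normal(0.0, sigma, size=t.size)

def log_prior(k):
    # Flat prior on a box, -inf outside (an assumed, illustrative choice).
    return 0.0 if np.all((k > 0.0) & (k < 10.0)) else -np.inf

def log_likelihood(k):
    # Independent Gaussian measurement errors (assumed).
    r = e - model(k)
    return -0.5 * np.sum(r**2) / sigma**2

def log_posterior(k):
    lp = log_prior(k)
    return lp + log_likelihood(k) if np.isfinite(lp) else -np.inf

def metropolis(k0, n_steps, step=0.05):
    """Random-walk Metropolis-Hastings: accept with prob. min(1, ratio)."""
    chain = np.empty((n_steps, k0.size))
    k, logp = k0.copy(), log_posterior(k0)
    accepted = 0
    for i in range(n_steps):
        k_prop = k + rng.normal(0.0, step, size=k.size)
        logp_prop = log_posterior(k_prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            k, logp = k_prop, logp_prop
            accepted += 1
        chain[i] = k                    # current state is the i-th sample
    return chain, accepted / n_steps

chain, acc = metropolis(np.array([1.0, 1.0]), 5000)
```

After discarding a burn-in portion, histograms of the chain's columns approximate the marginal posteriors of each parameter; the step size would normally be tuned so the acceptance rate lands somewhere in the 20-50% range.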
 