Likelihood of the maximum of a parabola

dIndy
I have a quadratic regression model ##y = ax^2 + bx + c + \text{noise}##. I also have a prior distribution ##p(a,b,c) = p(a)p(b)p(c)##. What I need to calculate is the likelihood of the data given solely the extremum of the parabola (in my case a maximum) ##x_{max} = M = -\frac{b}{2a}##. What I tried so far is:

$$p(y|M) = \int p(y|M,b,c)p(b,c|M)\,dbdc$$

I would like to rewrite this in terms of ##p(y|a,b,c)## and ##p(b,c|a) = p(b,c)## by substituting ##a = -\frac{b}{2M}## for ##M##. However, I'm not sure how to perform a change of variables when the transformed variable appears behind the conditioning bar.
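For fixed ##b## (and ##M \neq 0##, ##b \neq 0##) the substitution itself seems clear enough: ##a = -\frac{b}{2M}## with ##\left|\frac{da}{dM}\right| = \frac{|b|}{2M^2}##, so conditioning on ##b## and ##c## I would write
$$p(M|b,c) = p\!\left(a = -\frac{b}{2M}\right)\frac{|b|}{2M^2},$$
which is a change of variables on the left of the bar. What I can't justify is the same manipulation on the right of the bar, as in ##p(y|M,b,c)##.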

What I've also tried is using Bayes' theorem to rewrite the likelihood:
$$p(y|M) = \frac{p(y)p(M|y)}{p(M)} = \frac{p(y)\int p(M,b,c|y)\,dbdc}{\int p(M,b,c)\,dbdc} $$
Then substituting ##a = -\frac{b}{2M}## for ##M##, with Jacobian ##|\det J| = \left|\frac{\partial a}{\partial M}\right| = \frac{|b|}{2M^2}##:
$$ \frac{p(y)\int p(a,b,c|y)\,|\det J|\,db\,dc}{\int p(a,b,c)\,|\det J|\,db\,dc}$$
Using Bayes' theorem again, and then ##p(a,b,c) = p(a)\,p(b,c)## from the prior independence (the ##p(a)## factor has to stay inside the integrals, since ##a = -\frac{b}{2M}## depends on ##b##):
$$ \frac{\int p(y|a,b,c)\,p(a,b,c)\,|\det J|\,db\,dc}{\int p(a,b,c)\,|\det J|\,db\,dc} = \frac{\int p(y|a,b,c)\,p(a)\,p(b,c)\,|\det J|\,db\,dc}{\int p(a)\,p(b,c)\,|\det J|\,db\,dc}$$

Can this be simplified further?
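In case it is useful, this is how I evaluate that last expression numerically at the moment. Everything concrete below is a placeholder of my own choosing (Gaussian priors, Gaussian noise, the design points). Note that the ##\frac{1}{2M^2}## part of the Jacobian cancels between numerator and denominator, leaving a weight ##p(a)\,|b|## with ##a = -\frac{b}{2M}## for each prior draw of ##(b,c)##:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- placeholders, not part of the actual problem ---
M = 1.5                                   # hypothesised location of the maximum
sigma = 0.5                               # noise standard deviation
x = np.linspace(-1.0, 4.0, 25)            # design points
y = -(x - M) ** 2 + 2.0 + rng.normal(0.0, sigma, x.size)  # synthetic data

prior_a = norm(-1.0, 0.5)                 # placeholder priors p(a), p(b), p(c)
prior_b = norm(3.0, 1.0)
prior_c = norm(0.0, 2.0)

# Monte Carlo over p(b, c); a is pinned down by a = -b / (2 M).
n = 200_000
b = prior_b.rvs(n, random_state=rng)
c = prior_c.rvs(n, random_state=rng)
a = -b / (2.0 * M)

# Gaussian log-likelihood of the data for each (a, b, c) triple.
resid = y[None, :] - (a[:, None] * x ** 2 + b[:, None] * x + c[:, None])
loglik = -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2   # up to a constant

# Weight = p_a(-b / (2M)) * |b|; the 1 / (2 M^2) factor cancels in the ratio.
w = prior_a.pdf(a) * np.abs(b)
p_y_given_M = np.sum(w * np.exp(loglik)) / np.sum(w)      # p(y | M) up to a constant
print(p_y_given_M)
```

The omitted normalising constant is the same for every ##M##, so the numbers remain comparable across different values of ##M##.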

I've also got a far more challenging and general problem:

Given a non-linear regression model ##y_i = f(\theta,x_i) + \text{noise}##, with ##\theta## the vector of unknown parameters and ##x_i## the vector of independent variables, I want to calculate the likelihood of the data given the location ##x_{max}## of the global maximum of ##f##. The problem here is that there is no closed-form expression for ##x_{max}##.
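So far the only route I can see is brute force: draw ##\theta## from the prior, locate the maximiser of ##f(\theta,\cdot)## numerically for each draw, and use the resulting draws of ##x_{max}## to approximate its distribution. A minimal sketch, with an invented ##f## that is purely illustrative (nothing below is specific to my real model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented model, purely for illustration: no closed-form maximiser.
def f(theta, x):
    return theta[0] * np.sin(theta[1] * x) * np.exp(-0.1 * x ** 2)

grid = np.linspace(-5.0, 5.0, 4001)   # search interval (placeholder)

def x_max(theta):
    # Crude global search on a dense grid; a local optimiser started at the
    # grid argmax could refine it, but the grid avoids missing the global peak.
    return grid[np.argmax(f(theta, grid))]

# Push prior draws of theta through the maximiser to get draws of x_max.
thetas = rng.normal([1.0, 2.0], [0.2, 0.3], size=(5000, 2))  # placeholder prior
samples = np.array([x_max(th) for th in thetas])

# A histogram (or kernel density estimate) of these draws approximates p(x_max).
dens, edges = np.histogram(samples, bins=80, density=True)
```

I imagine ##p(y|x_{max})## could then be approximated in the same spirit, e.g. by binning the ##\theta## draws on their ##x_{max}## value and averaging the likelihood within each bin, but I don't know whether that is sound.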
 
dIndy said:
What I need to calculate is the likelihood of the data given solely the extremum of the parabola (in my case a maximum) ##x_{max} = M = -\frac{b}{2a}##.
It seems to me that you need a pdf ##f_{-\frac{b}{2a}}(M)## for ##M##. Given pdfs for ##a## and ##b##, I think you could derive the pdf for ##M## as a convolution. Something like ##p(M = -\frac{b}{2a}) = p(2aM + b = 0) = \int_{-\infty}^{\infty} p\!\left(a=\frac{x}{2M}\right) p(b = t-x)\,\frac{dx}{2|M|}\Big|_{t=0}##, the ##\frac{1}{2|M|}## being the Jacobian of the scaling ##x = 2aM##.
 
dIndy said:
What I tried so far is:

$$p(y|M) = \int p(y|M,b,c)p(b,c|M)\,dbdc$$

It will be confusing if you use the notation ##p(...)## to denote both "probability of" and also a probability density function evaluated somewhere.

For example, using the notation ##p_X()## to denote the probability density function of the random variable ##X## we can write
##p_M(m) = \int p_A(a)\, p_B(-2am)\,|2a|\, da##, the ##|2a|## factor being the Jacobian ##\left|\frac{\partial b}{\partial m}\right|## of ##b = -2am##. I don't know how you would write that using your notation.

Can we assume your priors assign zero probability to the case ##a=0##?
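Assuming they do, here is a quick numerical sanity check of that formula with placeholder priors for ##a## and ##b## (the thread doesn't specify any, so I keep the prior on ##a## well away from zero):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Placeholder priors; a is centred away from zero so M = -b/(2a) is well behaved.
p_A = norm(-1.0, 0.3)
p_B = norm(2.0, 0.5)

def p_M(m):
    """p_M(m) = integral of p_A(a) p_B(-2 a m) |2 a| da, by quadrature."""
    integrand = lambda a: p_A.pdf(a) * p_B.pdf(-2.0 * a * m) * abs(2.0 * a)
    return quad(integrand, -2.5, 0.5)[0]   # limits cover the support of p_A

# Monte Carlo cross-check: histogram of M = -b/(2a) under the same priors.
rng = np.random.default_rng(0)
a = p_A.rvs(500_000, random_state=rng)
b = p_B.rvs(500_000, random_state=rng)
edges = np.linspace(0.2, 2.5, 47)
hist, _ = np.histogram(-b / (2.0 * a), bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - [p_M(m) for m in mids])))   # should be close to 0
```

The ##|2a|## factor matters here; without it the quadrature and the histogram visibly disagree.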
 