How Does Bayesian Estimation Determine Point Estimates with Prior Distributions?

rayge
Homework Statement
Let ##Y_n## be the ##n##th order statistic of a random sample of size ##n## from a distribution with pdf ##f(x\mid\theta)=1/\theta##, ##0 < x < \theta##, zero elsewhere. Take the loss function to be ##L(\theta, \delta(y))=[\theta-\delta(y_n)]^2##. Let ##\theta## be an observed value of the random variable ##\Theta##, which has the prior pdf ##h(\theta)=\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}##, ##\alpha < \theta < \infty##, zero elsewhere, with ##\alpha > 0##, ##\beta > 0##. Find the Bayes solution ##\delta(y_n)## for a point estimate of ##\theta##.
The attempt at a solution
I've found that the conditional pdf of ##Y_n## given ##\theta## is
$$g(y_n \mid \theta) = \frac{n y_n^{n-1}}{\theta^n},$$
which allows us to find the posterior ##k(\theta \mid y_n)## by finding what it's proportional to:
$$k(\theta \mid y_n) \propto \frac{n y_n^{n-1}}{\theta^n} \cdot \frac{\beta \alpha^\beta}{\theta^{\beta + 1}}.$$
Where I'm sketchy is that apparently we can just remove all factors not involving ##\theta##, come up with a fudge factor to make the distribution integrate to 1 over its support, and call it good. I end up with
$$\frac{1}{\theta^{n+\beta}}.$$
When I integrate from ##\alpha## to ##\infty## and solve for the fudge factor, I get ##(n+\beta)\alpha^{n+\beta}## as the scaling factor, so for my posterior I get
$$(n+\beta)\alpha^{n+\beta} \, \frac{1}{\theta^{n+\beta}},$$
which doesn't even have a ##y_n## in it. Weird.

When I find the expected value of ##\theta## with this distribution, I get 1, which isn't a very compelling point estimate. So I think I missed a ##y_n## somewhere, but I don't know where. Any thoughts? Thanks in advance.
 
Not an area I'm familiar with, so I can't help with your specific question, but one thing does look wrong to me: if you substitute ##n=0## in your answer, shouldn't you get ##h(\theta)##? The power of ##\theta## seems to be one off.
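The exponent bookkeeping can be checked symbolically by just multiplying the likelihood and prior kernels. A minimal sketch in sympy (the symbol names mirror the post; the script itself is my own illustration, not part of the original derivation):

```python
import sympy as sp

theta, y, alpha, beta, n = sp.symbols('theta y alpha beta n', positive=True)

# pdf of the nth order statistic Y_n given theta, and the Pareto prior h(theta)
likelihood = n * y**(n - 1) / theta**n
prior = beta * alpha**beta / theta**(beta + 1)

# The posterior kernel is proportional to likelihood * prior; collecting the
# theta powers gives theta**(-(n + beta + 1)), not theta**(-(n + beta)).
product = sp.powsimp(likelihood * prior)
print(product)
```

Setting ##n=0## kills the data factor and leaves the ##\theta##-power of the prior alone, which is exactly the sanity check suggested above.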
 
I wrote a whole reply here while totally missing what you were saying. Thanks for the response! I'll check it out.
 
I'm not sure how the loss-function business enters into the calculation, but you seem to be trying to compute the Bayesian posterior density of ##\theta##, given ##Y_n = y_n##. You have made an error in that. Below, I will use ##Y,y## instead of ##Y_n,y_n##, ##C,c## instead of ##\Theta, \theta## and ##a,b## instead of ##\alpha, \beta##---just to make typing easier.

Using the given prior density, the joint density of ##(y,c)## is
$$f_{Y,C}(y,c) = \frac{b a^b}{c^{b+1}} \cdot \frac{n y^{n-1}}{c^n}, \quad 0 < y < c, \; a < c < \infty.$$
The (prior) density of ##Y## is ##f_Y(y) = \int f_{Y,C}(y,c) \, dc##, but you need to be careful about the integration limits. For ##0 < y < a## we have
$$f_Y(y) = \int_{c=a}^{\infty} f_{Y,C}(y,c) \, dc = \frac{n b y^{n-1}}{a^n (b+n)}, \quad 0 < y < a.$$
For ##y > a## we have
$$f_Y(y) = \int_{c=y}^{\infty} f_{Y,C}(y,c) \, dc = \frac{n b a^b}{y^{b+1}(b+n)}, \quad y > a.$$
Thus, the posterior density of ##C## will depend on ##y##, since the denominator in ##f(c|y) = f_{Y,C}(y,c)/f_Y(y)## takes two different forms for ##y < a## and ##y > a##.
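Since the loss is squared error, the Bayes estimator ##\delta(y_n)## is the posterior mean, and the two cases above give two formulas. Here is a quick cross-check with sympy using hypothetical concrete values ##\alpha = 2##, ##\beta = 3##, ##n = 5## (my own choices, not from the problem statement):

```python
import sympy as sp

c = sp.symbols('c', positive=True)
a, b, n = 2, 3, 5  # hypothetical values for alpha, beta, and the sample size

def posterior_mean(y):
    """Posterior mean of c given Y_n = y: integrate the joint density over the
    correct support, normalize, then take E[c]."""
    joint = (b * a**b / c**(b + 1)) * (n * y**(n - 1) / c**n)
    lo = max(sp.Integer(a), y)  # posterior support is c in (max(a, y), oo)
    marginal = sp.integrate(joint, (c, lo, sp.oo))
    return sp.simplify(sp.integrate(c * joint, (c, lo, sp.oo)) / marginal)

# Both cases match (b + n) * max(a, y) / (b + n - 1):
print(posterior_mean(sp.Rational(3)))  # y > a: 8*3/7 = 24/7
print(posterior_mean(sp.Rational(1)))  # y < a: 8*2/7 = 16/7
```

In both cases the mean works out to ##\frac{(n+\beta)\max(\alpha, y_n)}{n+\beta-1}##, which suggests the ##y_n##-dependence of ##\delta(y_n)## only kicks in once ##y_n > \alpha##.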
 