How Does Bayesian Estimation Determine Point Estimates with Prior Distributions?

Summary:
Bayesian estimation involves calculating the posterior distribution of a parameter from a prior distribution and observed data. In this discussion, the conditional probability density function (pdf) of the nth order statistic ##Y_n## given the parameter ##\theta## is derived, leading to the posterior distribution ##k(\theta|y_n)##. The discussion uncovers errors in the integration limits and in the power of ##\theta## in the posterior; the original poster's expected value of ##\theta## comes out to 1, which casts doubt on the point estimate. The thread emphasizes the importance of correctly incorporating the observed data into the Bayesian framework to obtain a reliable estimate.
rayge
Homework Statement
Let ##Y_n## be the ##n##th order statistic of a random sample of size ##n## from a distribution with pdf ##f(x|\theta)=1/\theta## for ##0 < x < \theta##, zero elsewhere. Take the loss function to be ##L(\theta, \delta(y))=[\theta-\delta(y_n)]^2##. Let ##\theta## be an observed value of the random variable ##\Theta##, which has the prior pdf ##h(\theta)=\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}##, ##\alpha < \theta < \infty##, zero elsewhere, with ##\alpha > 0##, ##\beta > 0##. Find the Bayes solution ##\delta(y_n)## for a point estimate of ##\theta##.
The attempt at a solution
I've found that the conditional pdf of ##Y_n## given ##\theta## is:
$$\frac{n y_n^{n-1}}{\theta^n}$$
which allows us to find the posterior ##k(\theta|y_n)## by finding what it's proportional to:
$$k(\theta|y_n) \propto \frac{n y_n^{n-1}}{\theta^n}\cdot\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}$$
Where I'm sketchy is that apparently we can just drop every term not involving ##\theta##, come up with a fudge factor (a normalizing constant) to make the distribution integrate to 1 over its support, and call it good. I end up with:
$$\frac{1}{\theta^{n+\beta}}$$
When I integrate from ##\alpha## to ##\infty## and solve for the fudge factor, I get ##(n+\beta)\alpha^{n+\beta}## as the scaling factor, so for my posterior I get:
$$(n+\beta)\alpha^{n+\beta}\,\frac{1}{\theta^{n+\beta}}$$
which doesn't even have a ##y_n## term in it. Weird.

When I find the expected value of ##\theta## with this distribution, I get 1, which isn't a very compelling point estimate. So I think I missed a ##y_n## somewhere, but I don't know where. Any thoughts? Thanks in advance.
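
As an aside, the first step can be sanity-checked by simulation: the claimed conditional pdf of ##Y_n## should match a histogram of maxima of ##n## simulated uniforms on ##(0,\theta)##. A minimal sketch, using arbitrary test values ##n=5## and ##\theta=4##:

```python
# Sanity check: the pdf of Y_n, the maximum of n uniforms on (0, theta),
# should be n * y**(n-1) / theta**n. Test values here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, theta = 5, 4.0

# 200,000 simulated values of Y_n
y = rng.uniform(0, theta, size=(200_000, n)).max(axis=1)

# Compare a normalized histogram to the claimed pdf at the bin midpoints.
hist, edges = np.histogram(y, bins=40, range=(0, theta), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
pdf = n * mid**(n - 1) / theta**n
print(np.max(np.abs(hist - pdf)))  # small, on the order of histogram noise
```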
 
Not an area I'm familiar with, so I can't help with your specific question, but one thing does look wrong to me: if you substitute ##n=0## in your answer, shouldn't you get ##h(\theta)## back? The power of ##\theta## seems to be off by one.
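
That observation can be checked numerically: with the kernel ##\theta^{-(n+\beta)}## exactly as written, the stated constant ##(n+\beta)\alpha^{n+\beta}## does not normalize it, but it does normalize ##\theta^{-(n+\beta+1)}##, consistent with the power being off by one. A quick check with arbitrary values ##n=5##, ##\alpha=2##, ##\beta=3##:

```python
# The constant (n+beta)*alpha**(n+beta) does not normalize theta**-(n+beta)
# over (alpha, inf), but it does normalize theta**-(n+beta+1) -- consistent
# with the power of theta being off by one. Values here are arbitrary.
from scipy.integrate import quad

n, alpha, beta = 5, 2.0, 3.0
const = (n + beta) * alpha**(n + beta)

for power in (n + beta, n + beta + 1):
    total, _ = quad(lambda t, p=power: const * t**(-p), alpha, float("inf"))
    print(power, round(total, 6))  # 8 -> 2.285714, 9 -> 1.0
```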
 
I wrote a whole reply here while totally missing what you were saying. Thanks for the response! I'll check it out.
 

I'm not sure how the loss-function business enters into the calculation, but you seem to be trying to compute the Bayesian posterior density of ##\theta##, given ##Y_n = y_n##. You have made an error in that. Below, I will use ##Y,y## instead of ##Y_n,y_n##, ##C,c## instead of ##\Theta, \theta## and ##a,b## instead of ##\alpha, \beta##---just to make typing easier.

Using the given prior density, the joint density of ##(y,c)## is
$$f_{Y,C}(y,c) = \frac{b a^b}{c^{b+1}} \cdot \frac{n y^{n-1}}{c^n}, \quad 0 < y < c, \; a < c < \infty$$
The (prior) density of ##Y## is ##f_Y(y) = \int f_{Y,C}(y,c) \, dc##, but you need to be careful about the integration limits. For ##0 < y < a## we have
$$f_Y(y) = \int_{c=a}^{\infty} f_{Y,C}(y,c) \, dc = \frac{n b y^{n-1}}{a^n (b+n)}, \quad 0 < y < a.$$
For ##y > a## we have
$$f_Y(y) = \int_{c=y}^{\infty} f_{Y,C}(y,c) \, dc = \frac{n b a^b}{y^{b+1}(b+n)}, \quad y > a.$$
Thus, the posterior density of ##C## will depend on ##y##, since the denominator in ##f(c|y) = f_{Y,C}(y,c)/f_Y(y)## has two different forms, one for ##y < a## and one for ##y > a##.
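
To connect this back to the original question: under the squared-error loss, the Bayes solution ##\delta(y_n)## is the posterior mean ##E[\theta \mid y_n]##. In both cases the posterior kernel in ##c## is ##c^{-(n+b+1)}## on ##c > \max(a, y)##, a Pareto density with shape ##n+b## and scale ##\max(a, y)##, so the posterior mean should be ##(n+b)\max(a,y)/(n+b-1)##. A numeric sketch checking this against direct integration, with arbitrary values ##n=5##, ##a=2##, ##b=3##:

```python
# Numeric check of the posterior mean E[theta | y_n] implied by the joint
# density above, for one y below a and one above. Values are arbitrary.
from scipy.integrate import quad

n, a, b = 5, 2.0, 3.0  # sample size, alpha, beta

def joint(c, y):
    # f_{Y,C}(y, c) = (b*a**b / c**(b+1)) * (n*y**(n-1) / c**n), for c > max(a, y)
    return (b * a**b / c**(b + 1)) * (n * y**(n - 1) / c**n)

for y in (1.0, 5.0):
    lo = max(a, y)  # the posterior support is c > max(a, y)
    norm, _ = quad(lambda c: joint(c, y), lo, float("inf"))
    mean, _ = quad(lambda c: c * joint(c, y), lo, float("inf"))
    closed_form = (n + b) * max(a, y) / (n + b - 1)  # Pareto(shape n+b) mean
    print(y, round(mean / norm, 6), round(closed_form, 6))  # should agree
```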
 