Bayesian point estimate

In summary: the posterior keeps the Pareto form of the prior, with updated parameters. For ##y < a##, ##f(c|y) = \frac{(b+n)a^{b+n}}{c^{b+n+1}}## for ##c > a##; for ##y > a##, ##f(c|y) = \frac{(b+n)y^{b+n}}{c^{b+n+1}}## for ##c > y##. In both cases this is the prior's Pareto family with ##b## replaced by ##b+n## and the scale ##a## replaced by ##\max(a, y)##.
  • #1
rayge
Homework Statement
Let [itex]Y_n[/itex] be the nth order statistic of a random sample of size n from a distribution with pdf [itex]f(x|\theta)=1/\theta[/itex] from [itex]0[/itex] to [itex]\theta[/itex], zero elsewhere. Take the loss function to be [itex]L(\theta, \delta(y))=[\theta-\delta(y_n)]^2[/itex]. Let [itex]\theta[/itex] be an observed value of the random variable [itex]\Theta[/itex], which has the prior pdf [itex]h(\theta)=\frac{\beta \alpha^\beta} {\theta^{\beta + 1}}, \alpha < \theta < \infty[/itex], zero elsewhere, with [itex]\alpha > 0, \beta > 0[/itex]. Find the Bayes solution [itex]\delta(y_n)[/itex] for a point estimate of [itex]\theta[/itex].
The attempt at a solution
I've found that the conditional pdf of [itex]Y_n[/itex] given [itex]\theta[/itex] is:
[tex]\frac{n y_n^{n-1}}{\theta^n}[/tex]
which allows us to find the posterior [itex]k(\theta|y_n)[/itex] by finding what it's proportional to:
[tex]k(\theta|y_n) \propto \frac{n y_n^{n-1}}{\theta^n}\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}[/tex]
Where I'm sketchy is that apparently we can just drop every factor not involving [itex]\theta[/itex], find a normalizing constant (a "fudge factor") that makes the distribution integrate to 1 over its support, and call it good. I end up with:
[tex]\frac{1}{\theta^{n+\beta}}[/tex]
When I integrate from [itex]\alpha[/itex] to [itex]\infty[/itex], and solve for the fudge factor, I get [itex](n+\beta)\alpha^{n+\beta}[/itex] as the scaling factor, so for my posterior I get:
[tex](n+\beta)\alpha^{n+\beta}\frac{1}{\theta^{n+\beta}}[/tex]
Which doesn't even have a [itex]y_n[/itex] term in it. Weird.

When I find the expected value of [itex]\theta[/itex] with this distribution, I get 1. Which isn't a very compelling point estimate. So I think I missed a [itex]y_n[/itex] somewhere but I don't know where. Any thoughts? Thanks in advance.
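One quick way to see that something is off is to simulate the model: if the posterior really had no [itex]y_n[/itex] in it, the conditional mean of [itex]\theta[/itex] given [itex]Y_n \approx y[/itex] could not change with [itex]y[/itex]. A minimal Monte Carlo sketch (the values of [itex]\alpha, \beta, n[/itex] and the two test points are invented for illustration):

```python
# Monte Carlo sketch (illustrative values, not from the problem): simulate
# theta ~ Pareto(alpha, beta) and Y_n = max of n Uniform(0, theta) draws,
# then estimate E[theta | Y_n ~= y] for two values of y.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 1.0, 3.0, 5          # assumed values for illustration
trials = 1_000_000

# Pareto(alpha, beta) via inverse CDF; 1 - U avoids a zero divisor.
theta = alpha / (1.0 - rng.random(trials)) ** (1.0 / beta)
# Y_n = theta * (max of n independent U(0, 1) draws)
y = theta * rng.random((n, trials)).max(axis=0)

for y0 in (1.5, 3.0):                 # two test points, both > alpha
    window = np.abs(y - y0) < 0.02
    print(y0, theta[window].mean())   # roughly 1.7 and 3.4, clearly unequal
```

The two printed means differ, so the posterior must depend on [itex]y_n[/itex].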
 
  • #2
Not an area I'm familiar with, so can't help with your specific question, but one thing does look wrong to me: if you substitute n=0 in your answer, shouldn't you get h(θ)? The power of theta seems to be one off.
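This sanity check is easy to run symbolically. A minimal sympy sketch (which, like the attempt above, normalizes over [itex](\alpha, \infty)[/itex] and so sets aside the [itex]\theta > y_n[/itex] support issue discussed later in the thread):

```python
# Sympy sketch of the n = 0 sanity check: normalize likelihood * prior over
# (alpha, oo) and confirm that setting n = 0 recovers the prior h(theta).
import sympy as sp

theta, alpha, beta, y, n = sp.symbols('theta alpha beta y n', positive=True)

prior = beta * alpha**beta / theta**(beta + 1)        # h(theta)
likelihood = n * y**(n - 1) / theta**n                # f(y_n | theta)

kernel = likelihood * prior
Z = sp.integrate(kernel, (theta, alpha, sp.oo), conds='none')
posterior = sp.simplify(kernel / Z)
print(posterior)                                  # (beta+n)*alpha**(beta+n)/theta**(beta+n+1)
print(sp.simplify(posterior.subs(n, 0) - prior))  # 0: n = 0 recovers the prior

# The answer in post #1 has theta**(n+beta) in the denominator instead:
answer = (n + beta) * alpha**(n + beta) / theta**(n + beta)
print(sp.simplify(answer.subs(n, 0) / prior))     # theta: one power of theta off
```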
 
  • #3
I wrote a whole reply here while totally missing what you were saying. Thanks for the response! I'll check it out.
 
  • #4
rayge said:
So I think I missed a [itex]y_n[/itex] somewhere but I don't know where. Any thoughts? Thanks in advance.

I'm not sure how the loss-function business enters into the calculation, but you seem to be trying to compute the Bayesian posterior density of ##\theta##, given ##Y_n = y_n##. You have made an error in that. Below, I will use ##Y,y## instead of ##Y_n,y_n##, ##C,c## instead of ##\Theta, \theta## and ##a,b## instead of ##\alpha, \beta##---just to make typing easier.

Using the given prior density, the joint density of ##(y,c)## is
[tex] f_{Y,C}(y,c) = \frac{b a^b}{c^{b+1}} \frac{n y^{n-1}}{c^n}, 0 < y < c, a < c < \infty [/tex]
The (prior) density of ##Y## is ##f_Y(y) = \int f_{Y,C}(y,c) \, dc##, but you need to be careful about integration limits. For ##0 < y < a## we have
[tex] f_Y(y) = \int_{c=a}^{\infty} f_{Y,C}(y,c) \, dc
= \frac{n b y^{n-1}}{a^n (b+n)}, \; 0 < y < a [/tex] For ##y > a## we have
[tex] f_Y(y) = \int_{c=y}^{\infty} f_{Y,C}(y,c) \, dc
= \frac{n b a^b}{y^{b+1}(b+n)}, \: y > a [/tex] Thus, the posterior density of ##C## will depend on ##y##, since the denominator in ##f(c|y) = f_{Y,C}(y,c)/f_Y(y)## has two different forms for ##y < a## and ##y > a##.
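For reference, both branches can be pushed through symbolically. A minimal sympy sketch in the same ##a, b, c, y## notation (assuming ##b + n > 1## so the posterior mean exists):

```python
# Sympy sketch of the two branches above: divide the joint density by the
# marginal in each case to get the posterior, then take the posterior mean,
# which is the Bayes estimate under squared-error loss. Assumes b + n > 1.
import sympy as sp

c, a, b, y, n = sp.symbols('c a b y n', positive=True)

joint = (b * a**b / c**(b + 1)) * (n * y**(n - 1) / c**n)    # f_{Y,C}(y, c)

# Case 0 < y < a: c ranges over (a, oo).
marg = sp.integrate(joint, (c, a, sp.oo), conds='none')
post = sp.simplify(joint / marg)
print(post)                                                  # (b+n)*a**(b+n)/c**(b+n+1)
print(sp.integrate(c * post, (c, a, sp.oo), conds='none'))   # a*(b+n)/(b+n-1)

# Case y > a: c ranges over (y, oo).
marg = sp.integrate(joint, (c, y, sp.oo), conds='none')
post = sp.simplify(joint / marg)
print(post)                                                  # (b+n)*y**(b+n)/c**(b+n+1)
print(sp.integrate(c * post, (c, y, sp.oo), conds='none'))   # y*(b+n)/(b+n-1)
```

Both cases give a Pareto posterior with exponent ##b+n## and scale ##\max(a, y)##, so under the squared-error loss in the problem statement the Bayes solution comes out as ##\delta(y_n) = (b+n)\max(a, y_n)/(b+n-1)##, i.e. ##(\beta+n)\max(\alpha, y_n)/(\beta+n-1)## in the original notation.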
 

1. What is a Bayesian point estimate?

A Bayesian point estimate is a single value used to estimate a population parameter. It is obtained by combining prior knowledge or beliefs about the parameter with observed data, so the estimate reflects both sources of information.

2. How is a Bayesian point estimate different from a frequentist point estimate?

A frequentist point estimate only considers the data at hand and does not incorporate prior knowledge or beliefs about the parameter. A Bayesian point estimate, on the other hand, combines both the data and prior knowledge to obtain a more precise estimate.

3. What is the role of prior knowledge in a Bayesian point estimate?

Prior knowledge, also known as prior beliefs, plays a crucial role in a Bayesian point estimate. It is used to specify a probability distribution for the parameter before any data is observed. This prior distribution is then updated with the data to obtain a posterior distribution, which represents our updated beliefs about the parameter.

4. How is a Bayesian point estimate calculated?

Calculating a Bayesian point estimate involves combining the prior distribution with the likelihood function (which describes how probable the observed data are for each value of the parameter) to obtain the posterior distribution. The point estimate is then a summary of the posterior, usually its mean, median, or mode; under squared-error loss, as in the problem above, it is the posterior mean.
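For instance, a minimal sketch of this recipe in Python for the conjugate Beta-Bernoulli model (the prior parameters and data here are invented for illustration):

```python
# Minimal sketch of the Bayesian point-estimate recipe for the conjugate
# Beta-Bernoulli model. Prior pseudo-counts and data are illustrative.
a0, b0 = 2.0, 2.0                  # prior: Beta(a0, b0)
data = [1, 0, 1, 1, 0, 1, 1, 1]    # observed Bernoulli trials (invented)

heads = sum(data)
tails = len(data) - heads

# Conjugate update: Beta(a0, b0) prior + data -> Beta(a0+heads, b0+tails).
a_post, b_post = a0 + heads, b0 + tails

# Point estimate under squared-error loss: the posterior mean.
print(a_post / (a_post + b_post))  # (2 + 6) / (2 + 2 + 8) = 0.666...
```

Reporting the posterior mode or median instead corresponds to different loss functions (0-1 and absolute-error loss, respectively).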

5. What are the advantages of using a Bayesian point estimate?

One of the main advantages of a Bayesian point estimate is its ability to incorporate prior knowledge or beliefs about the parameter. This can lead to more accurate and precise estimates, especially when dealing with small sample sizes. Additionally, the resulting posterior distribution provides a measure of uncertainty, which can be useful in decision-making processes.
