# Bayesian point estimate

1. May 9, 2014

### rayge

**The problem statement, all variables and given/known data**
Let $Y_n$ be the $n$th order statistic of a random sample of size $n$ from a distribution with pdf $f(x|\theta)=1/\theta$ from $0$ to $\theta$, zero elsewhere. Take the loss function to be $L(\theta, \delta(y_n))=[\theta-\delta(y_n)]^2$. Let $\theta$ be an observed value of the random variable $\Theta$, which has the prior pdf $h(\theta)=\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}, \alpha < \theta < \infty$, zero elsewhere, with $\alpha > 0, \beta > 0$. Find the Bayes solution $\delta(y_n)$ for a point estimate of $\theta$.
**The attempt at a solution**
I've found that the conditional pdf of $Y_n$ given $\theta$ is:
$$\frac{n y_n^{n-1}}{\theta^n}$$
which allows us to find the posterior $k(\theta|y_n)$ by finding what it's proportional to:
$$k(\theta|y_n) \propto \frac{n y_n^{n-1}}{\theta^n}\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}$$
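As a sanity check on the conditional pdf of $Y_n$ above, here is a quick Monte Carlo sketch comparing the empirical CDF of the maximum of $n$ uniforms against the CDF $(y/\theta)^n$ implied by that density (the values $\theta = 2$, $n = 5$, $y = 1.5$ are illustrative, not from the problem):

```python
import random

# Monte Carlo check that the max of n Uniform(0, theta) draws has
# density n * y**(n-1) / theta**n, via its CDF (y / theta)**n.
# theta = 2.0, n = 5, y = 1.5 are illustrative values, not from the problem.
random.seed(0)
theta, n, trials = 2.0, 5, 200_000
maxima = [max(random.uniform(0, theta) for _ in range(n)) for _ in range(trials)]

y = 1.5
empirical_cdf = sum(m <= y for m in maxima) / trials
analytic_cdf = (y / theta) ** n  # integral of n t^(n-1) / theta^n over (0, y)
print(abs(empirical_cdf - analytic_cdf) < 0.01)  # True, well inside MC noise
```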
Where I'm sketchy is that apparently we can just remove all terms not having to do with $\theta$, come up with a fudge factor to make the distribution integrate to 1 over its support, and call it good. I end up with:
$$\frac{1}{\theta^{n+\beta}}$$
When I integrate from $\alpha$ to $\infty$, and solve for the fudge factor, I get $(n+\beta)\alpha^{n+\beta}$ as the scaling factor, so for my posterior I get:
$$(n+\beta)\alpha^{n+\beta}\frac{1}{\theta^{n+\beta}}$$
Which doesn't even have a $y_n$ term in it. Weird.

When I find the expected value of $\theta$ with this distribution, I get 1. Which isn't a very compelling point estimate. So I think I missed a $y_n$ somewhere but I don't know where. Any thoughts? Thanks in advance.

2. May 9, 2014

### haruspex

Not an area I'm familiar with, so I can't help with your specific question, but one thing does look wrong to me: if you substitute $n=0$ in your answer, shouldn't you get $h(\theta)$? The power of $\theta$ seems to be one off.
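Concretely: the product in post #1, $\frac{n y_n^{n-1}}{\theta^n}\cdot\frac{\beta \alpha^\beta}{\theta^{\beta+1}}$, is proportional to $\theta^{-(n+\beta+1)}$, not $\theta^{-(n+\beta)}$. A minimal numerical sketch of the $n=0$ check, with the support taken as $(\alpha,\infty)$ (i.e. ignoring the $y_n > \alpha$ case) and illustrative values $\alpha = 1.5$, $\beta = 2$, $\theta = 3$:

```python
# Checking the off-by-one: the likelihood-times-prior product from post #1
# is proportional to theta**-(n + beta + 1), not theta**-(n + beta).
# Normalizing it over (alpha, inf) and setting n = 0 (no data) should
# recover the Pareto prior h(theta), as suggested above.
# alpha = 1.5, beta = 2.0, theta = 3.0 are illustrative values.
alpha, beta = 1.5, 2.0

def posterior(theta, n):
    # density proportional to theta**-(n + beta + 1), normalized on (alpha, inf)
    return (n + beta) * alpha ** (n + beta) / theta ** (n + beta + 1)

def prior(theta):
    # Pareto prior h(theta) from the problem statement
    return beta * alpha ** beta / theta ** (beta + 1)

theta = 3.0
print(abs(posterior(theta, 0) - prior(theta)) < 1e-12)  # prints True
```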

3. May 10, 2014

### rayge

I wrote a whole reply here while totally missing what you were saying. Thanks for the response! I'll check it out.

4. May 10, 2014

### Ray Vickson

I'm not sure how the loss-function business enters into the calculation, but you seem to be trying to compute the Bayesian posterior density of $\theta$, given $Y_n = y_n$. You have made an error in that. Below, I will use $Y,y$ instead of $Y_n,y_n$, $C,c$ instead of $\Theta, \theta$ and $a,b$ instead of $\alpha, \beta$---just to make typing easier.

Using the given prior density, the joint density of $(y,c)$ is
$$f_{Y,C}(y,c) = \frac{b a^b}{c^{b+1}} \frac{n y^{n-1}}{c^n}, 0 < y < c, a < c < \infty$$
The (prior) density of $Y$ is $f_Y(y) = \int f_{Y,C}(y,c) \, dc$, but you need to be careful about integration limits. For $0 < y < a$ we have
$$f_Y(y) = \int_{c=a}^{\infty} f_{Y,C}(y,c) \, dc = \frac{n b y^{n-1}}{a^n (b+n)}, \; 0 < y < a$$ For $y > a$ we have
$$f_Y(y) = \int_{c=y}^{\infty} f_{Y,C}(y,c) \, dc = \frac{n b a^b}{y^{b+1}(b+n)}, \: y > a$$ Thus, the posterior density of $C$ will depend on $y$, since the denominator in $f(c|y) = f_{Y,C}(y,c)/f_Y(y)$ has two different forms for $y < a$ and $y > a$.
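As a sanity check, the two-branch marginal can be verified numerically by simulating $C$ from the Pareto prior and then $Y$ as the maximum of $n$ uniforms on $(0, C)$; a minimal Monte Carlo sketch, with illustrative values $a=1$, $b=2$, $n=3$ not taken from the thread:

```python
import random

# Monte Carlo check of the two-branch marginal f_Y(y) above, via its CDF.
# a = 1.0, b = 2.0, n = 3 are illustrative values, not from the thread.
random.seed(1)
a, b, n, trials = 1.0, 2.0, 3, 200_000

def draw_y():
    c = a / random.random() ** (1.0 / b)     # Pareto(a, b) draw for C
    return c * random.random() ** (1.0 / n)  # max of n Uniform(0, c) draws

sample = [draw_y() for _ in range(trials)]

def marginal_cdf(y):
    # integral of the piecewise f_Y: for y <= a only the first branch applies
    if y <= a:
        return b * y ** n / (a ** n * (b + n))
    return b / (b + n) + (n / (b + n)) * (1 - (a / y) ** b)

for y in (0.5, 2.0):  # one test point in each branch
    empirical = sum(s <= y for s in sample) / trials
    print(abs(empirical - marginal_cdf(y)) < 0.01)
```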

Last edited: May 10, 2014