MHB Show that the maximum likelihood estimator is unbiased

Fermat1
Consider a density family $f(x,{\mu})=c_{\mu}x^{\mu-1}\exp\left(-\frac{(\ln x)^2}{2}\right)$, where $c_{\mu}=\frac{1}{\sqrt{2\pi}}\exp(-\mu^2/2)$.
For a sample $(X_{1},\dots,X_{n})$, find the maximum likelihood estimator and show it is unbiased. You may find the substitution $y=\ln x$ helpful.

I find the MLE to be ${\mu}_{1}=\frac{1}{n}(\ln(X_{1})+...+\ln(X_{n}))$. For unbiasedness, I'm not sure what to do. If I substitute $y_{i}=\ln(x_{i})$ I get $E({\mu}_{1})=\frac{1}{n}(E(Y_{1})+...+E(Y_{n}))$. Am I meant to recognise the distribution of the $Y_{i}$?
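For reference, here is a sketch of the log-likelihood computation that leads to this estimator, using the density as written above:

$$\ell(\mu)=\sum_{i=1}^{n}\ln f(X_i,\mu)=-\frac{n}{2}\ln(2\pi)-\frac{n\mu^2}{2}+(\mu-1)\sum_{i=1}^{n}\ln X_i-\frac{1}{2}\sum_{i=1}^{n}(\ln X_i)^2,$$

$$\ell'(\mu)=-n\mu+\sum_{i=1}^{n}\ln X_i=0 \quad\Longrightarrow\quad \hat\mu=\frac{1}{n}\sum_{i=1}^{n}\ln X_i,$$

and $$\ell''(\mu)=-n<0$$ confirms this is a maximum.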
 
I'm not sure what $$f(x,\mu)$$ really is. I suppose it's

$$f(x,\mu)=c_{\mu}x^{\mu-1}\exp(-(\text{ln}x)^2/2)$$.

Given a sample $$(X_1,X_2,\dots,X_n)$$, you've got the MLE, $$\mu_1=\frac{1}{n}\sum_{i=1}^{n}\ln X_i$$. For this $$f(x,\mu)$$, that's right.

To test unbiasedness, you should calculate the expectation of $$\mu_1$$.

Thus we have $$E(\mu_1)=\frac{1}{n}\sum_{i=1}^nE(\ln X_i)$$.

Noting $$E(\ln X)=\int_0^{\infty}(\ln x)f(x,\mu)\,dx=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}t\exp{(-(t-\mu)^2/2)}\,dt$$, can you figure this out?
 
stainburg said:
I'm not sure what $$f(x,\mu)$$ really is. I suppose it's

$$f(x,\mu)=c_{\mu}x^{\mu-1}\exp(-(\ln x)^2/2)$$.

Given a sample $$(X_1,X_2,\dots,X_n)$$, you've got the MLE, $$\mu_1=\frac{1}{n}\sum_{i=1}^{n}\ln X_i$$. For this $$f(x,\mu)$$, that's right.

To test unbiasedness, you should calculate the expectation of $$\mu_1$$.

Thus we have $$E(\mu_1)=\frac{1}{n}\sum_{i=1}^nE(\ln X_i)$$.

Noting $$E(\ln X)=\int_0^{\infty}(\ln x)f(x,\mu)\,dx=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}t\exp{(-(t-\mu)^2/2)}\,dt$$, can you figure this out?

what substitution are you using?
 
Fermat said:
what substitution are you using?
Let $$t=\ln x\in (-\infty, \infty)$$, hence $$x=\exp(t)$$ and $$dx=\exp(t)\,dt$$.

We then have

$$E(\ln X)\\
=\int_0^{\infty}(\ln x)\,c_{\mu}x^{\mu-1}\exp(-(\ln x)^2/2)\,dx\\
=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\exp(-\mu^2/2)\,t\exp{((\mu-1)t)}\exp{(-t^2/2)}\exp(t)\,dt\\
=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}t\exp{(-(t^2-2\mu t+\mu^2)/2)}\,dt\\
=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}t\exp{(-(t-\mu)^2/2)}\,dt\\
=\mu,$$

since the last integral is the mean of a $$N(\mu,1)$$ density. Hence $$E(\mu_1)=\frac{1}{n}\cdot n\mu=\mu$$, so the MLE is unbiased.
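The calculation above shows that $$Y=\ln X\sim N(\mu,1)$$, so the claim can also be checked numerically. Below is a minimal Monte Carlo sketch (the parameter values $$\mu=1.7$$, $$n=50$$ are arbitrary choices for the demo, not from the problem):

```python
import numpy as np

# Monte Carlo check of unbiasedness. Since Y = ln X has density
# (1/sqrt(2*pi)) * exp(-(y - mu)^2 / 2), i.e. Y ~ N(mu, 1),
# we can sample X = exp(Y) directly.
rng = np.random.default_rng(0)

mu = 1.7          # true parameter (arbitrary for the demo)
n = 50            # sample size
reps = 200_000    # number of simulated samples

# Each row is one sample (X_1, ..., X_n); mu_hat is the MLE for that sample.
X = np.exp(rng.normal(loc=mu, scale=1.0, size=(reps, n)))
mu_hat = np.log(X).mean(axis=1)

# The average of mu_hat over many samples should be close to mu.
print(mu_hat.mean())
```

With 200,000 replications the standard error of the averaged estimate is about $$\frac{1}{\sqrt{50}}/\sqrt{200000}\approx 0.0003$$, so the printed value should agree with $$\mu$$ to roughly three decimal places.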
 