How Can We Reduce Bias in LogNormal Mean Estimators?

Zazubird
Suppose Y ~ N(mu, sigma^2) and Y = log X, so that X ~ LN(mu, sigma^2). Define the estimator a* = exp{ybar + (1/2)*theta*s^2}, where ybar is the sample mean and s^2 the sample variance of the y's, and theta is a constant. The target is a = E[X] = exp{mu + (1/2)*sigma^2}.

If theta = 1, a* is consistent but biased, and the bias can be reduced by choosing a different value of theta. Use a large-n approximation of E[a*] to find a value of theta that reduces the bias compared with theta = 1.

In my attempt I ended up with E[a*] = E[(geometric mean of the x's) * (geometric variance of the x's)], using y = log x and hence ybar = log(x1*x2*...*xn)/n. I suspect this is incorrect, and I'm stuck. Given the large-n hint in the question I thought of using the CLT or the Weak Law of Large Numbers, but I still don't know where to go from there.
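For readers following the thread: one standard way to start (a sketch using textbook facts about normal samples, not the original poster's attachment) is to use the independence of $\bar y$ and $S^2$. Since $\bar y \sim N(\mu, \sigma^2/n)$ and $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$, their MGFs give an exact expression for $E[a^*]$:

$$E[a^*] = E\!\left[e^{\bar y}\right]\,E\!\left[e^{\tfrac{\theta}{2}S^2}\right] = \exp\!\left(\mu + \frac{\sigma^2}{2n}\right)\left(1 - \frac{\theta\sigma^2}{n-1}\right)^{-(n-1)/2}, \qquad \theta\sigma^2 < n-1.$$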
 
If log(geometric mean) = mean of y and log(geometric variance) = Var(y), then log(g.m. * g.v.) = mean(y) + Var(y), and this is identical to E[log(a*)] for theta = 2.
 
I think I made a mistake in my working, so I've taken a different approach using the MGFs of normal distributions, but I still get stuck halfway. I've put some of my working in the attached file. Any ideas on how to get this different value of theta?

Edit to file: V = ybar + 0.5*theta*S^2
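For completeness, here is a hedged sketch of how the large-n approximation can be carried through from the exact expression for $E[a^*]$ given earlier (an illustrative derivation, not the working in the attachment):

$$\log E[a^*] = \mu + \frac{\sigma^2}{2n} - \frac{n-1}{2}\log\!\left(1 - \frac{\theta\sigma^2}{n-1}\right) \approx \mu + \frac{\theta\sigma^2}{2} + \frac{\sigma^2}{2n} + \frac{\theta^2\sigma^4}{4n},$$

while $\log a = \mu + \sigma^2/2$. With $\theta = 1$ the log-scale bias is roughly $\sigma^2/(2n) + \sigma^4/(4n) > 0$. Writing $\theta = 1 + c/n$ and choosing $c$ to cancel the $O(1/n)$ terms gives $c = -(1 + \sigma^2/2)$, i.e. $\theta \approx 1 - (2 + \sigma^2)/(2n)$, with $\sigma^2$ replaced by $S^2$ in practice since it is unknown.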
 

How did you end up getting E[exp (Vt)]?
 