Unbiased estimator for the exponential distribution

bennyska

Homework Statement


let X_1, X_2, \dots, X_n form a random sample of size n from the exponential distribution whose pdf is f(x|B) = Be^{-Bx} for x > 0 and B > 0. Find an unbiased estimator of B.


Homework Equations





The Attempt at a Solution


nothing yet. i don't really know where to get started. a push in the right direction would be greatly appreciated.
i'm not really clear on how to calculate unbiased estimators. i believe i want an estimator \hat{B} such that E(\hat{B}) = B, but I'm not really sure how to find it. so yeah, a general hint on how to get started would be awesome. thanks!
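to make that definition concrete, here is a quick simulation sketch (the values B = 2 and n = 10 are arbitrary demo choices, not part of the problem): averaged over many repeated samples, the sample mean lands on the true mean 1/B, which is exactly what "unbiased" asks of an estimator.

```python
import numpy as np

# demo of the definition E(estimator) = parameter:
# the sample mean is unbiased for the true mean 1/B
rng = np.random.default_rng(0)
B, n, reps = 2.0, 10, 100_000          # arbitrary demo values

# numpy parameterises the exponential by scale = 1/B
samples = rng.exponential(scale=1.0 / B, size=(reps, n))
xbar = samples.mean(axis=1)            # one sample mean per simulated sample

print(xbar.mean(), 1.0 / B)            # both ~0.5
```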
 
so first assume you know B; the density of a single observation X_1 is
f(X_1|B) = Be^{-B X_1}

then for a few random variables, the joint density of a sequence X_1, X_2, \dots, X_n given B is
f(X_1|B)f(X_2|B)\cdots f(X_n|B) = Be^{-B X_1}\,Be^{-B X_2}\cdots Be^{-B X_n} = B^n e^{-B(X_1 + X_2 + \cdots + X_n)}

consider maximising this with respect to B; this yields the MLE (maximum likelihood estimator) for B. you can then check whether it is biased, and consider how to alter it to remove the bias if required...
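that bias check can also be done numerically; a sketch in Python (the helper name bias_check and the parameter values are mine, purely for illustration): simulate many samples of size n, apply a candidate estimator to each, and compare the average estimate with the true B.

```python
import numpy as np

# Monte Carlo bias check for any candidate estimator of B
# (bias_check is a made-up helper; B, n, reps are demo values)
def bias_check(estimator, B=2.0, n=5, reps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    samples = rng.exponential(scale=1.0 / B, size=(reps, n))
    estimates = np.apply_along_axis(estimator, 1, samples)
    return estimates.mean() - B        # ~0 for an unbiased estimator
```

once a candidate \hat{B} is derived below, it can be plugged straight in.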
 
thanks. is this the normal way of finding unbiased estimators: take a known estimator, such as an MLE, check whether it's biased, and alter it as needed with a constant?
 
not too sure, I've only played with MLEs & bias a little, but it seems like a reasonable approach & would work for things like the sample variance etc.

probably worth a crack to see if you can get it to work
 
so if i did my math right, i got B. does it make sense to say B is an unbiased estimator for B?
 
not really... you don't know B. you want to get to some estimator for B in terms of the observations, \hat{B} = f(X_1, X_2, \dots, X_n)

show your working
 
so i take that function you came up with, replace B with \hat{B}, take the natural log, take the derivative, set it to 0, and get \hat{B} = 1/\bar{x} (sample average) for my MLE, whereas B = 1/\mu, where \mu is the true average.
so when i take E(1/\bar{x}), i get \frac{B}{\bar{x}}\int e^{-Bx}dx, which is \frac{B}{\bar{x}}\cdot\frac{-1}{B}e^{-Bx} = \frac{-1}{\bar{x}}e^{-Bx} evaluated from 0 to infinity, which is also throwing me off, because x > 0. but when i do that, i get E(1/\bar{x}) = 1/\bar{x}. does that seem right?
 
ok so let me check this. taking the log of the likelihood function gives
\ln\big(f(X_1|B)f(X_2|B)\cdots f(X_n|B)\big)
= \ln(f(X_1|B)) + \ln(f(X_2|B)) + \cdots
= \ln(B) + \ln(e^{-B X_1}) + \ln(B) + \ln(e^{-B X_2}) + \cdots
= n\ln(B) + \ln(e^{-B X_1}) + \ln(e^{-B X_2}) + \cdots
= n\ln(B) - B X_1 - B X_2 - \cdots - B X_n

then differentiating w.r.t. B gives
\frac{n}{B} - X_1 - X_2 - \cdots - X_n

equating to zero for our MLE estimator gives
\hat{B} = \frac{n}{\sum_{i=1}^n X_i}

which is one over the sample average, as you say
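one step worth making explicit: the second derivative is negative for every B > 0, so this stationary point really is a maximum
\frac{\partial^2}{\partial B^2}\left(n\ln(B) - B\sum_{i=1}^n X_i\right) = -\frac{n}{B^2} < 0

feeding this candidate into the bias_check sketch from earlier, e.g. bias_check(lambda x: len(x) / x.sum()), returns a clearly positive value (~B/(n-1)), a first numerical hint of the bias discussed later in the thread.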
 
  • #10
E(\frac{1}{\bar{x}})=\int\frac{1}{\bar{x}}\beta e^{-\beta x}dx=\frac{\beta}{\overline{x}}\int e^{-\beta x}dx=\frac{\beta}{\bar{x}}\frac{-1}{\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-\beta}{\bar{x}\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-1}{\bar{x}}e^{-\infty}-\frac{-1}{\bar{x}}e^{0}=0+\frac{1}{\bar{x}}=\frac{1}{\bar{x}}=\hat{\beta}

something like that?

now, in order for this to be unbiased, does it need to be identically equal to \beta?
 
  • #11
ok so first the probability density p(x)
p(x) = \beta e^{-\beta x}

then the mean is, after some integration by parts
(u = x, dv = \beta e^{-\beta x}dx)
(du = dx, v = -e^{-\beta x})
\int u\,dv = uv - \int v\,du
\mu = E[X] = \int_0^{\infty} x\,\beta e^{-\beta x}\,dx
= \left(-x e^{-\beta x}\right)\Big|_0^{\infty} + \int_0^{\infty} e^{-\beta x}\,dx
= (0 - 0) + \left(-\frac{1}{\beta} e^{-\beta x}\right)\Big|_0^{\infty} = \frac{1}{\beta}

which gives us a bit of confidence
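the same integral can be checked symbolically; a quick sketch with sympy (assuming it is available):

```python
import sympy as sp

# symbolic check of E[X] for the exponential density beta*exp(-beta*x)
beta, x = sp.symbols('beta x', positive=True)
mean = sp.integrate(x * beta * sp.exp(-beta * x), (x, 0, sp.oo))
print(mean)                            # 1/beta, matching the result above
```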
 
  • #12
bennyska said:
E(\frac{1}{\bar{x}})=\int\frac{1}{\bar{x}}\beta e^{-\beta x}dx=\frac{\beta}{\overline{x}}\int e^{-\beta x}dx=\frac{\beta}{\bar{x}}\frac{-1}{\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-\beta}{\bar{x}\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-1}{\bar{x}}e^{-\infty}-\frac{-1}{\bar{x}}e^{0}=0+\frac{1}{\bar{x}}=\frac{1}{\bar{x}}=\hat{\beta}

something like that?

now, in order for this to be unbiased, does it need to be identically equal to \beta?

I'm pretty sure you can't just take the sample mean outside the integral like that, as it is a function of the random variables over whose space you are integrating
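to spell the objection out: \bar{x} = \frac{1}{n}\sum_i x_i depends on every observation, so the expectation is an n-fold integral with 1/\bar{x} kept inside
E\left[\frac{1}{\bar{X}}\right] = \int_0^{\infty}\cdots\int_0^{\infty} \frac{n}{x_1 + \cdots + x_n}\,\beta^n e^{-\beta(x_1 + \cdots + x_n)}\,dx_1\cdots dx_n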
 
  • #13
now interestingly, the following integral does not converge
E[\hat{\beta}(X_1)] = E[1/X_1] = \int_0^{\infty} \frac{1}{x}\,\beta e^{-\beta x}\,dx = \infty

which hints that something is not quite right with our estimator... i think it has infinite bias for the n=1 case?
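the divergence comes from the lower limit: near x = 0 the factor e^{-\beta x} is approximately 1, so the integrand behaves like 1/x, and \int_0^{\epsilon} \frac{dx}{x} diverges since \ln(x) \to -\infty as x \to 0^{+}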

though it agrees with a wiki search, which doesn't mention bias
http://en.wikipedia.org/wiki/Exponential_distribution#Parameter_estimation
 
  • #14
ok so i checked back in my text: if you were to estimate 1/\beta using the sample mean, it would be unbiased

however, estimating \beta does lead to a bias, and the unbiased estimator is in fact
\hat{\beta} = \frac{n}{n-1}\frac{1}{\bar{X}}
note it is singular for n = 1

now it didn't have a derivation, but i think the start would be to use the sum of the exponential variables, which will be a convolution, to derive the distribution of
\bar{X}

then use that to find the expectation
E\left(\frac{1}{\bar{X}}\right)
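carrying that plan through, a sketch (the Gamma/Erlang density of the sum is stated here and derived in the next post): with S = \sum_{i=1}^n X_i,
p_S(s) = \frac{\beta^n s^{n-1} e^{-\beta s}}{(n-1)!}
E\left[\frac{1}{S}\right] = \int_0^{\infty} \frac{1}{s}\,\frac{\beta^n s^{n-1} e^{-\beta s}}{(n-1)!}\,ds = \frac{\beta^n}{(n-1)!}\cdot\frac{(n-2)!}{\beta^{n-1}} = \frac{\beta}{n-1} \quad (n \ge 2)

so E\left[\frac{n}{S}\right] = \frac{n\beta}{n-1}, and scaling the MLE by \frac{n-1}{n} removes the bias, matching the \hat{\beta} above. numerically, bias_check(lambda x: (len(x) - 1) / x.sum()) from the earlier sketch comes out near zero.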
 
  • #15
so if we let Y = X_1 + X_2, then we have
p(y) = \int\int dx_1\,dx_2\; p(x_1)\,p(x_2)\,\delta(y - x_1 - x_2)
= \int_0^y dx\; p(x)\,p(y - x)
= \int_0^y dx\; \beta^2 e^{-\beta(x + (y - x))}
= \int_0^y dx\; \beta^2 e^{-\beta y}
= \beta^2 e^{-\beta y}\, x\Big|_0^y
= \beta^2 y\, e^{-\beta y}

hopefully you can generalise from here for n samples...
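one way to generalise (an induction sketch, not in the original post): if the sum of k samples has density \frac{\beta^k x^{k-1} e^{-\beta x}}{(k-1)!}, convolving with one more exponential gives
p_{k+1}(y) = \int_0^y \frac{\beta^k x^{k-1} e^{-\beta x}}{(k-1)!}\,\beta e^{-\beta(y - x)}\,dx = \frac{\beta^{k+1} e^{-\beta y}}{(k-1)!}\int_0^y x^{k-1}\,dx = \frac{\beta^{k+1} y^k e^{-\beta y}}{k!}

which matches the n = 2 case above and is the Gamma/Erlang density used in the expectation calculation.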
 
