Unbiased estimator for exponential dist.

  • Thread starter: bennyska
  • Tags: Exponential

Homework Help Overview

The discussion revolves around finding an unbiased estimator for the parameter B of an exponential distribution, defined by its probability density function. Participants explore the properties of estimators, particularly focusing on the maximum likelihood estimator (MLE) and its potential bias.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants discuss starting points for finding unbiased estimators, including the use of MLE and checking for bias. Questions arise about the validity of using known estimators and the process of deriving unbiased estimators from them.

Discussion Status

Some participants have made attempts to derive estimators and check their bias, while others are exploring the implications of their findings. There is an ongoing examination of the relationship between sample means and bias, with various interpretations being considered.

Contextual Notes

Participants note that the estimator for B may be biased and discuss the implications of sample size on the bias of the estimator. There is mention of specific cases, such as n=1, where the estimator's behavior raises questions about its validity.

bennyska

Homework Statement


let X1, X2,..., Xn form a random sample of size n from the exponential distribution whose pdf is f(x|B) = Be^(-Bx) for x>0 and B>0. Find an unbiased estimator of B.


Homework Equations





The Attempt at a Solution


nothing yet. i don't really know where to get started. a push in the right direction would be greatly appreciated.
i'm not really super clear on how to calculate unbiased estimators. i believe i want an estimator, Bhat, such that E(Bhat) = B. but i'm not really sure how to find it. so yeah, a general hint on how to get started would be awesome. thanks!
 
so first assume you know B, the probability of getting X1
[tex]f(X_1|B) = Be^{-B X_1}[/tex]

then for a few random variables, the probability of a sequence X1, X2...,Xn given B is:
[tex]f(X_1|B)f(X_2|B)..f(X_n|B) = Be^{-B X_1}Be^{-B X_2}..Be^{-B X_n} = B^n e^{-B( X_1+X_2 +...+X_n)} [/tex]

consider maximising this relative to B; this yields the MLE (maximum likelihood estimator) for B. you can then check whether it is biased and consider how to alter it to remove bias if required...
 
thanks. is this the normal way of finding unbiased estimators, using a known estimator, such as an MLE, and then checking to see if it's biased, or alter it as needed with a constant?
 
not too sure, I've only played with MLEs & bias a little, but seems like a reasonable approach & would work for things like sample variance etc.

probably worth a crack, see if you can get it to work
 
so if i did my math right, i got B. does it make sense to say B is an unbiased estimator for B?
 
not really... you don't know B.. you want to get to some estimator for B in terms of the observations [itex]\hat{B} = f(X_1, X_2, .., X_n)[/itex]

show your working
 
so i take that function you came up with, i guess, replace B with [itex]\hat{B}[/itex], take the natural log, take the derivative, set it to 0, and get [itex]\hat{B} = 1/\bar{x}[/itex] for my mle (where [itex]\bar{x}[/itex] is the sample average), whereas [itex]B = 1/\mu[/itex], where [itex]\mu[/itex] = true average.
so when i take [itex]E(1/\bar{x})[/itex], i get [itex](B/\bar{x})\int e^{-Bx}dx[/itex], and i get [itex](B/\bar{x})(-1/B)e^{-Bx} = -(1/\bar{x})e^{-Bx}[/itex] evaluated from 0 to infinity, which is also throwing me off, because x>0. but when i do that, i get [itex]E(1/\bar{x}) = 1/\bar{x}[/itex]. does that seem right?
 
ok so let me check this. taking log of the likelihood function gives
[tex]ln (f(X_1|B)f(X_2|B)..f(X_n|B)) [/tex]
[tex] = ln(f(X_1|B)) + ln(f(X_2|B))+.. [/tex]
[tex] = ln(B) + ln(e^{-B X_1}) + ln(B) + ln(e^{-B X_2})+..[/tex]
[tex] = n.ln(B) + ln(e^{-B X_1}) + ln(e^{-B X_2})+..[/tex]
[tex] = n.ln(B) -B X_1 -B X_2 -...-B X_n[/tex]

then differentiating w.r.t. B gives
[tex]n/B -X_1 -X_2 -...-X_n[/tex]

equating to zero for our MLE estimator gives
[tex]\hat{B} = \frac{n}{\sum_{i=1}^n X_i}[/tex]

which is one over the sample average as you say
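
as a quick simulation sketch (my own check, not from any text; the true B = 2 and n = 5 here are assumed values just for illustration): averaging the MLE [itex]n/\sum X_i[/itex] over many simulated samples shows it sits above the true rate, so there is bias to worry about.

```python
import random

random.seed(0)

true_B = 2.0   # assumed true rate for the simulation
n = 5          # assumed sample size
trials = 100_000

def mle(sample):
    # MLE for the rate B: n over the sum of the observations (= 1 / x_bar)
    return len(sample) / sum(sample)

# estimate E[B_hat] by averaging the MLE over many simulated samples
est = sum(mle([random.expovariate(true_B) for _ in range(n)])
          for _ in range(trials)) / trials

print(est)  # noticeably above 2.0 -- the MLE overshoots for small n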
 
  • #10
[tex]E(\frac{1}{\bar{x}})=\int\frac{1}{\bar{x}}\beta e^{-\beta x}dx=\frac{\beta}{\overline{x}}\int e^{-\beta x}dx=\frac{\beta}{\bar{x}}\frac{-1}{\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-\beta}{\bar{x}\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-1}{\bar{x}}e^{-\infty}-\frac{-1}{\bar{x}}e^{0}=0+\frac{1}{\bar{x}}=\frac{1}{\bar{x}}=\hat{\beta}[/tex]

something like that?

now, in order for this to be unbiased, does it need to be identically equal to [tex]\beta[/tex]?
 
  • #11
ok so first probability distribution p(x)
[tex] p(X=x) = \beta e^{-\beta x}[/tex]

then the mean is, after some integration by parts
[tex](u=x,\ dv=\beta e^{-\beta x}dx)[/tex]
[tex](du=dx,\ v=-e^{-\beta x})[/tex]
[tex]\int u\,dv = uv - \int v\,du[/tex]
[tex] \mu = E[X] = \int_0^{\infty} dx\, x\, \beta e^{-\beta x}[/tex]
[tex]= (-x e^{-\beta x})|_0^{\infty} + \int_0^{\infty} dx\, e^{-\beta x}[/tex]
[tex]= (0-0) + (-\frac{1}{\beta} e^{-\beta x} )|_0^{\infty} = \frac{1}{\beta}[/tex]

which gives us a bit of confidence
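
a one-line sanity check of that mean (my own sketch; the rate β = 3 is an arbitrary assumed value): the sample average of exponential draws settles near 1/β.

```python
import random

random.seed(1)

beta = 3.0        # assumed rate, chosen just for the check
trials = 200_000

# the sample mean of exponential(beta) draws should approach E[X] = 1/beta
m = sum(random.expovariate(beta) for _ in range(trials)) / trials
print(m)  # close to 1/3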
 
  • #12
bennyska said:
[tex]E(\frac{1}{\bar{x}})=\int\frac{1}{\bar{x}}\beta e^{-\beta x}dx=\frac{\beta}{\overline{x}}\int e^{-\beta x}dx=\frac{\beta}{\bar{x}}\frac{-1}{\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-\beta}{\bar{x}\beta}e^{-\beta x}|_{0}^{\infty}=\frac{-1}{\bar{x}}e^{-\infty}-\frac{-1}{\bar{x}}e^{0}=0+\frac{1}{\bar{x}}=\frac{1}{\bar{x}}=\hat{\beta}[/tex]

something like that?

now, in order for this to be unbiased, does it need to be identically equal to [tex]\beta[/tex]?

I'm pretty sure you can't just take the [itex]1/\bar{x}[/itex] outside the integral like that: the sample mean is a function of the random variables over whose space you are integrating
 
  • #13
now interestingly, the following integral does not converge
[tex]E[\hat{\beta}(X_1)] = E[1/X_1] = E[1/X] = \int_0^{\infty} dx\, \frac{1}{x}\, \beta e^{-\beta x} = \infty[/tex]

which hints that something is not quite right with our estimator... i think it has infinite bias for the n=1 case?

though it agrees with a wiki search, that doesn't mention bias
http://en.wikipedia.org/wiki/Exponential_distribution#Parameter_estimation
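
a numerical sketch of that divergence (my own check, with β = 1 assumed; the divergence near x = 0 doesn't depend on β): truncating the integral at a lower limit eps and shrinking eps, the value keeps growing roughly like ln(1/eps), so there is no finite expectation.

```python
import math

beta = 1.0  # assumed rate; the blow-up at x = 0 happens for any beta > 0

def truncated_e1(eps, upper=50.0, steps=100_000):
    # integral of (1/x) * beta * exp(-beta*x) from eps to upper,
    # via the substitution u = ln(x) so the integrand stays bounded
    a, b = math.log(eps), math.log(upper)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        u = a + (i + 0.5) * h          # midpoint rule in u
        total += beta * math.exp(-beta * math.exp(u)) * h
    return total

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, truncated_e1(eps))  # each decade of eps adds about ln(10) ~ 2.3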
 
  • #14
ok so i checked back to my text and if you were to estimate 1/beta using the sample mean it would be unbiased

however estimating beta does lead to a bias, and the unbiased estimator is in fact
[tex]\hat{\beta} = \frac{n-1}{n}\frac{1}{\bar{X}} = \frac{n-1}{\sum_{i=1}^n X_i}[/tex]
note it degenerates to zero for n = 1, consistent with the divergent expectation found above

now it didn't have a derivation, but i think the start would be to use the sum of exponential variables, which will be a convolution to derive the distribution of
[tex]\bar{ X }[/tex]

then use that to find the expectation
[tex]E(\frac{1}{ \bar{X}} )[/tex]
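
a simulation sketch (my own check; true B = 2 and n = 5 are assumed values) of the bias-corrected estimator [itex]\hat{\beta} = (n-1)/\sum X_i[/itex]: its average over many samples lands on the true rate.

```python
import random

random.seed(2)

true_B = 2.0   # assumed true rate
n = 5          # assumed sample size
trials = 200_000

# average the bias-corrected estimator (n-1)/sum(X_i) over many samples
total = 0.0
for _ in range(trials):
    s = sum(random.expovariate(true_B) for _ in range(n))
    total += (n - 1) / s
est = total / trials

print(est)  # close to the true rate 2.0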
 
  • #15
so if we let Y = X_1 +X_2, then we have
[tex] p_Y(y) = \int\int dx_1\, dx_2\, p(x_1)p(x_2)\, \delta(y-x_1-x_2)[/tex]
[tex] = \int_0^y dx\, p(x)p(y-x)[/tex]
[tex] = \int_0^y dx\, \beta^2 e^{-\beta x} e^{-\beta (y-x)}[/tex]
[tex] = \int_0^y dx\, \beta^2 e^{-\beta y}[/tex]
[tex] = \beta^2 e^{-\beta y}\, x|_0^y[/tex]
[tex] = \beta^2 y e^{-\beta y} [/tex]

hopefully you can generalise from here for n samples...
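
a quick check of that density (my own sketch; β = 2 and the test point y0 = 1 are assumed values): comparing the empirical CDF of X1 + X2 against the CDF you get by integrating [itex]\beta^2 y e^{-\beta y}[/itex] from 0 to y0.

```python
import math
import random

random.seed(3)

beta = 2.0        # assumed rate
trials = 200_000
y0 = 1.0          # assumed test point for the CDF comparison

# empirical P(X1 + X2 <= y0) from simulated pairs
hits = sum(random.expovariate(beta) + random.expovariate(beta) <= y0
           for _ in range(trials))
empirical = hits / trials

# CDF implied by the density beta^2 * y * exp(-beta*y):
# integrating it from 0 to y0 gives 1 - exp(-beta*y0) * (1 + beta*y0)
analytic = 1.0 - math.exp(-beta * y0) * (1.0 + beta * y0)

print(empirical, analytic)  # should agree to about two decimal places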
 