Stat Theory: Need to Prove Consistent Estimator

In summary, the conversation discusses a homework problem involving a continuous random variable and whether the sample minimum is a consistent estimator of the parameter θ. The discussion works through setting up the expected value of the estimator, corrects the limits of integration, and notes that a sufficient route to consistency is showing that the estimator's bias and variance both converge to zero as the sample size grows.
  • #1
madameclaws
So I am struggling with this homework problem because I got burned out on another problem earlier today, and I just cannot get beyond what I have.

The problem is:

Let X be a continuous random variable with the pdf: f(x)=e^(-(x-θ)) , x > θ ,
and suppose we have a sample of size n , { X1, X2 , … , Xn }.
Is T = Min ( X1, X2 , … , Xn ) a consistent estimator for θ ?

Homework Equations



From my class, I know that the cdf of T is F_T(t) = 1 - e^(-n(t-θ)) and the pdf is f_T(t) = ne^(-n(t-θ)) for t > θ.
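(As a check, this follows from independence of the sample: the minimum exceeds t exactly when every observation does, so

[tex] P(T > t) = \left[ P(X > t) \right]^n = \left[ \int_t^{\infty} e^{-(x-\theta)} \, dx \right]^n = e^{-n(t-\theta)}, [/tex]

which gives F_T(t) = 1 - e^(-n(t-θ)) and f_T(t) = F_T'(t) = ne^(-n(t-θ)) for t > θ.)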

The Attempt at a Solution



Now, to show that an estimator is consistent, I need to show that T is unbiased (that is, E(T) = θ) and that Var(T) goes to 0 as n → ∞.

What I am currently stuck on is finding my E(T), as silly as that sounds.

I know my integral needs to be:

[tex] E(T) = \int_0^{\theta} t \, n e^{-n(t-\theta)} \, dt [/tex]

So dusting off my integration by parts (with u = t and dv = ne^(-n(t-θ)) dt), I get:

[tex] E(T) = \left[ -t \, e^{-n(t-\theta)} \right]_0^{\theta} + \int_0^{\theta} e^{-n(t-\theta)} \, dt = \left[ -t \, e^{-n(t-\theta)} - \frac{1}{n} e^{-n(t-\theta)} \right]_0^{\theta} = -\theta - \frac{1}{n} + \frac{1}{n} e^{n\theta} [/tex]

I am pretty much stuck on how to get an unbiased estimator out of that, so I can only assume I did something wrong somewhere and I need help.

Could someone please take a look at this and let me know where I am going wrong with this?

Thanks!
 
  • #2
madameclaws said:
Let X be a continuous random variable with the pdf f(x) = e^(-(x-θ)), x > θ, and suppose we have a sample of size n. Is T = Min(X1, X2, …, Xn) a consistent estimator for θ? […] What I am currently stuck on is finding my E(T), as silly as that sounds.

Easiest way:
[tex] \int_{\theta}^{\infty} t f(t-\theta) \; dt = \int_{\theta}^{\infty} (t - \theta + \theta) f(t-\theta) \; dt\\
= \theta \int_{\theta}^{\infty} f(t-\theta)\; dt + \int_{\theta}^{\infty} (t-\theta)f(t-\theta) \; dt\\
= \theta + \int_0^{\infty} s f(s) \; ds.[/tex]
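In the last step, the first integral is θ times the total probability: θ ∫(from θ to ∞) f(t-θ) dt = θ·1 = θ. And with the pdf of T from post #1, f(s) = ne^(-ns) for s > 0, the remaining integral is just the mean of an Exponential(n) variable:

[tex] \int_0^{\infty} s \, n e^{-ns} \, ds = \frac{1}{n}, \qquad \text{so} \qquad E(T) = \theta + \frac{1}{n}. [/tex]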
 
  • #3
Hi Ray,
I am not understanding how you went from ∫ t f(t-θ) dt to the expressions you have presented.
Could you provide further details on the steps you listed?
Also, should I have made my integral from theta to infinity instead of from 0 to theta?

Thanks!
 
  • #4
madameclaws said:
Could you provide further details on the steps you listed? Also, should I have made my integral from theta to infinity instead of from 0 to theta?

Sorry, I cannot do more. I gave the steps in detail, one-by-one. And no: the final integral goes from 0 to ∞ because we have changed variables from t to s = t-θ. That was the whole point: we reduce the problem to a standard form that is already familiar (or should be).
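Explicitly, the substitution is s = t - θ, so ds = dt, and the limits t = θ and t → ∞ become s = 0 and s → ∞:

[tex] \int_{\theta}^{\infty} (t - \theta) f(t - \theta) \, dt = \int_0^{\infty} s f(s) \, ds. [/tex]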
 
  • #5
Hi Ray,

What I am not understanding is how you have t - θ + θ inside the integral and then, in the next step, pull a θ out in front of the integral.
I just need to understand the thinking behind it because it doesn't make sense to me.

Thanks!
 
  • #6
"Now, to show that an estimator is consistent, I need to show that my E(T) is unbiased and my Var(T) as n->infi"

A consistent estimator is merely one that converges in probability to the parameter - it doesn't have to be unbiased. Thus you need to show that |T - θ| converges to zero in probability (T is your estimator).
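For this particular T the check is direct, since T > θ always:

[tex] P\left( |T - \theta| > \varepsilon \right) = P(T > \theta + \varepsilon) = e^{-n\varepsilon} \to 0 \quad \text{as } n \to \infty. [/tex]

A quick numerical illustration of the same convergence (a minimal Monte Carlo sketch, assuming NumPy; the parameter values and variable names are just for illustration):

[code]
# T = min(X_1, ..., X_n) with X_i = theta + Exp(1)
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0

for n in (10, 100, 1000, 10000):
    # 5000 replications of the sample minimum at each sample size
    x = theta + rng.exponential(scale=1.0, size=(5000, n))
    t = x.min(axis=1)
    # the mean approaches theta (bias ~ 1/n) and the variance shrinks like 1/n^2
    print(n, t.mean(), t.var())
[/code]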
 

FAQ: Stat Theory: Need to Prove Consistent Estimator

1. What is the goal of proving a consistent estimator in statistical theory?

The goal of proving that an estimator is consistent is to guarantee that it gives an increasingly accurate estimate of the true value of a population parameter as the sample size increases. A consistent estimator is one that converges in probability to the true value as the sample size goes to infinity, as formalized below.
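Formally, an estimator T_n based on a sample of size n is consistent for θ if, for every ε > 0,

[tex] \lim_{n \to \infty} P\left( |T_n - \theta| > \varepsilon \right) = 0. [/tex]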

2. How is consistency of an estimator measured?

Consistency itself is defined by convergence in probability, but it is commonly established through the mean squared error (MSE): if an estimator's MSE goes to zero as the sample size grows, the estimator is consistent. An estimator with a smaller MSE is less variable and, on average, closer to the true value of the population parameter.
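The MSE splits into a variance term and a squared-bias term:

[tex] \mathrm{MSE}(T_n) = E\left[ (T_n - \theta)^2 \right] = \mathrm{Var}(T_n) + \left[ \mathrm{Bias}(T_n) \right]^2. [/tex]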

3. What are the criteria for proving consistency of an estimator?

Two commonly used sufficient conditions are: 1) the estimator is unbiased, or at least its bias vanishes as the sample size increases, meaning its expected value approaches the true value of the population parameter, and 2) the variance of the estimator decreases to zero as the sample size increases. These conditions are sufficient but not necessary: consistency only requires convergence in probability, so a biased estimator can still be consistent. The bound below shows why the two conditions are enough.
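Together these conditions force the MSE to zero, which gives convergence in probability by Markov's inequality:

[tex] P\left( |T_n - \theta| \ge \varepsilon \right) \le \frac{E\left[ (T_n - \theta)^2 \right]}{\varepsilon^2} \to 0. [/tex]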

4. Can an estimator be consistent for one population parameter but not for another?

Yes, an estimator can be consistent for one population parameter but not for another. This depends on the properties of the estimator and the distribution of the population. For example, an estimator may be consistent for the mean but not for the variance of a population.

5. How is the consistency of an estimator affected by the sample size?

Consistency is an asymptotic property: it describes how the estimator behaves as the sample size grows without bound, so it is not a property of any single finite sample. As the sample size increases, the sampling distribution of a consistent estimator concentrates ever more tightly around the true value. Conversely, an unbiased estimator whose variance does not shrink as the sample size grows fails to be consistent.
