Proving Sufficient Estimators for Gamma Distribution | Homework Solution

  • Thread starter cse63146
In summary, the product of the sample observations is a sufficient statistic for theta when the random sample is taken from a gamma distribution with parameters alpha = theta and beta = 6. This can be shown with the factorization theorem: the joint density factors into a product of two nonnegative functions, one depending on the data only through the product of the observations, the other not depending on theta. Therefore, the product of the sample observations is a sufficient estimator for theta.
  • #1
cse63146

Homework Statement



Show that the product of the sample observations is a sufficient statistic for theta if the random sample is taken from a gamma distribution with parameters alpha = theta and beta = 6.

Homework Equations





The Attempt at a Solution



So I need to show that [tex]Y = \prod X_i[/tex] is a sufficient estimator for theta, which is true if

[tex]\frac{f(x_1; \theta) \cdots f(x_n; \theta)}{f_{Y}\left[\prod x_i; \theta\right]}[/tex] does not depend on theta.

But I keep getting 1 for the ratio. Am I correct?
 
  • #2
You take samples from populations, not other samples.
You neglected to state the type of distribution (normal, gamma, uniform, ?) being sampled; until you do that not much can be done.

When you do use the actual distribution, think about writing out the joint distribution - as you started in the numerator of your first post, and work on the factorization criterion for identifying sufficiency.
 
  • #3
statdad said:
You take samples from populations, not other samples.
You neglected to state the type of distribution (normal, gamma, uniform, ?) being sampled; until you do that not much can be done.

When you do use the actual distribution, think about writing out the joint distribution - as you started in the numerator of your first post, and work on the factorization criterion for identifying sufficiency.

Sorry about the typo; I meant to say the sample is taken from a gamma distribution with parameters alpha = theta and beta = 6, not from another sample.

The theorem I'm trying to use is:

[tex]\frac{f(x_1; \theta) \cdots f(x_n; \theta)}{f_{Y_1}[u_1(x_1, \dots, x_n); \theta]} = h(x_1, \dots, x_n)[/tex]

and if h(x1,...,xn) doesn't depend on theta, then Y1 is a sufficient estimator.

I get h(x1,...,xn)=1 and I'm wondering if this is correct.
 
  • #4
Have you seen this result?

For a random sample from a distribution that has density [tex] f(x;\theta), \, \theta \in \Omega [/tex], a statistic
[tex]
Y=u_1(X_1, X_2, \dots, X_n)
[/tex]

is sufficient for [tex] \theta [/tex] if, and only if, there are two nonnegative functions [tex] k_1, k_2 [/tex] such that

[tex]
\prod_{i=1}^n f(x_i ; \theta) = k_1[u_1(x_1, x_2, \dots, x_n);\theta] k_2(x_1, x_2, \dots, x_n)
[/tex]

where [tex] k_2(x_1, x_2, \dots, x_n) [/tex] does not depend on [tex] \theta [/tex].

This is the factorization theorem I referred to in my earlier post.

To use this, write out the joint distribution and factor it so that the second factor does not depend on the unknown parameter. This is essentially equivalent to the approach you cited, but may be a bit easier to work with.
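
For reference, the density being assumed here (shape alpha = theta, scale beta = 6, which is the parametrization the thread appears to use) is

[tex]f(x; \theta) = \frac{1}{\Gamma(\theta)\, 6^{\theta}}\, x^{\theta - 1} e^{-x/6}, \qquad x > 0,[/tex]

so the joint density of the sample is the product of n such factors.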
 
  • #5
That definition is in my textbook, but we haven't gotten to it yet and I'm not sure how to use it; we've only covered the definition I mentioned in my previous post, and I thought I could use it to solve the problem.

Guess I'll have to wait until Tuesday to learn it. Thanks for your help.
 
  • #6
May not need to wait. I'm away from my computer now, so typing is a pain: what do you get for the product of the densities? If you can, post it in its gory detail.
 
  • #7
I was looking over my textbook and found an example that used it, and after a while I figured it out:

[tex]\prod_{i=1}^n f(x_i; \theta) = \frac{1}{\Gamma(\theta)^n 6^{\theta n}}\left(\prod x_i\right)^{\theta - 1}e^{-\Sigma x_i /6}
= \left[\frac{1}{\Gamma(\theta)^n 6^{\theta n}}\left(\prod x_i\right)^{\theta}\right]\left[\frac{e^{-\Sigma x_i /6}}{\prod x_i}\right][/tex]

And since the second factor, [tex]e^{-\Sigma x_i /6}/\prod x_i[/tex], does not depend on theta, while the first factor depends on the data only through [tex]\prod x_i[/tex], Y = [tex]\prod X_i[/tex] is a sufficient estimator of theta.

Thanks for all your help once again.
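
As a quick numerical sanity check, assuming the same gamma(shape = theta, scale = 6) model, here is a short sketch using scipy: if [tex]\prod X_i[/tex] is sufficient, then two samples with the same product should give a log-likelihood difference that does not change with theta, because the theta-dependent part of the likelihood involves the data only through the product. The sample values below are made up for illustration.

[code]
# Numerical check of sufficiency of the product statistic for a
# gamma(shape = theta, scale = 6) sample. Two samples with the same
# product should have a log-likelihood difference that is constant in theta.
import numpy as np
from scipy.stats import gamma


def log_likelihood(x, theta, scale=6.0):
    """Gamma(shape=theta, scale=6) log-likelihood of the sample x."""
    return np.sum(gamma.logpdf(x, a=theta, scale=scale))


# Two hypothetical samples with the same product (2*3*5 = 1*6*5 = 30).
x_a = np.array([2.0, 3.0, 5.0])
x_b = np.array([1.0, 6.0, 5.0])

for theta in [0.5, 1.0, 2.5, 4.0]:
    diff = log_likelihood(x_a, theta) - log_likelihood(x_b, theta)
    print(f"theta = {theta:>4}: log-likelihood difference = {diff:.6f}")

# The printed difference is the same for every theta; it equals
# -(sum(x_a) - sum(x_b))/6 = 1/3, since the theta-dependent part of the
# likelihood enters only through prod(x_i), which is equal for both samples.
[/code]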
 
  • #8
It appears that you did all the work. Congratulations.
 
  • #9
Thanks for all your help once again
 

1. What is a sufficient estimator?

A sufficient estimator (more precisely, a sufficient statistic) is a statistic that contains all the information the sample carries about a population parameter. In other words, once its value is known, the remaining detail in the data provides no further information about that parameter.
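
In symbols, the standard definition: a statistic [tex]Y = u_1(X_1, \dots, X_n)[/tex] is sufficient for [tex]\theta[/tex] if the conditional distribution of the sample given [tex]Y = y[/tex] does not depend on [tex]\theta[/tex].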

2. How is a sufficient estimator different from an unbiased estimator?

Sufficiency and unbiasedness are different properties. Sufficiency means a statistic retains all of the information the sample carries about the parameter; unbiasedness means an estimator's expected value equals the true parameter value. A sufficient statistic need not be unbiased, and an unbiased estimator need not be sufficient, but good estimators are usually functions of a sufficient statistic.

3. What are the properties of a sufficient estimator?

Sufficiency on its own is a statement about information, not about accuracy, so it does not by itself guarantee the usual optimality properties. In practice, an estimator built from a sufficient statistic is still judged by the standard criteria:

  • Consistency: it converges to the true parameter value as the sample size increases.
  • Unbiasedness: its expected value equals the true parameter value.
  • Efficiency: it has the smallest possible variance among unbiased estimators.

4. How is the sufficiency of an estimator determined?

The sufficiency of a statistic can be checked with the Factorization Theorem: if the joint density (or probability mass function) of the data can be written as a product of two nonnegative factors, one depending on the parameter and on the data only through the candidate statistic, and the other depending on the data but not on the parameter, then that statistic is sufficient.
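
In symbols, restating the criterion given earlier in the thread: a statistic [tex]Y = u_1(X_1, \dots, X_n)[/tex] is sufficient for [tex]\theta[/tex] if and only if

[tex]\prod_{i=1}^n f(x_i; \theta) = k_1[u_1(x_1, \dots, x_n); \theta]\, k_2(x_1, \dots, x_n)[/tex]

for nonnegative functions [tex]k_1, k_2[/tex] with [tex]k_2[/tex] free of [tex]\theta[/tex].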

5. Can a non-sufficient estimator be transformed into a sufficient estimator?

Not by simply applying a function to it: a function of a statistic carries no more information than the statistic itself, so a function of a non-sufficient statistic is not sufficient. What can be done is to improve an estimator by conditioning it on a sufficient statistic. This is the Rao-Blackwell Theorem: conditioning an unbiased estimator on a sufficient statistic yields an estimator that is a function of the sufficient statistic, is still unbiased, and has variance no larger than the original.
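
In symbols, a standard statement of the theorem: if [tex]\hat{\theta}[/tex] is an unbiased estimator of [tex]\theta[/tex] and [tex]Y[/tex] is a sufficient statistic, then

[tex]\theta^{*} = E[\hat{\theta} \mid Y][/tex]

is a function of [tex]Y[/tex], is still unbiased, and satisfies [tex]\operatorname{Var}(\theta^{*}) \le \operatorname{Var}(\hat{\theta})[/tex].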
