Finding the MVUE of a two-sided interval of a normal

  • Context: Graduate
  • Thread starter: rayge
  • Tags: Interval, Normal

Discussion Overview

The discussion centers on whether there exists a minimum variance unbiased estimator (MVUE) for the probability P(-c ≤ X ≤ c) given a sample from a normal distribution N(θ, 1). The scope includes theoretical considerations of estimation and properties of unbiased estimators.

Discussion Character

  • Exploratory, Technical explanation, Debate/contested

Main Points Raised

  • One participant suggests that the MVUE for P(-c ≤ X ≤ c) may not be unique because the probability takes the same value at θ and -θ.
  • Another participant points out that since θ is unknown, a decision rule based on the sign of θ cannot be employed.
  • A different approach is proposed involving the construction of two MVUEs for P(X ≤ c) and P(X ≤ -c), but concerns are raised about the uniqueness of the MVUE in this context.
  • One participant notes that ambiguity arises only if one estimates P(-c ≤ X ≤ c) first and then tries to recover θ from that estimate, and suggests instead estimating θ from the sample mean before estimating the probability.

Areas of Agreement / Disagreement

Participants express differing views on the uniqueness of the MVUE and the methods for estimating it, indicating that multiple competing perspectives remain without a consensus on the existence of a unique MVUE.

Contextual Notes

There are limitations regarding the assumptions about the estimator's dependence on the sign of θ and the implications of estimating θ from the probability itself, which remain unresolved.

rayge
Our task is to determine if P(-c \le X \le c) has a minimum variance unbiased estimator for a sample from a distribution that is N(\theta,1). The one-sided probability P(X \le c) = \Phi(c - \theta) is one-to-one in \theta, so constructing an MVUE is just a matter of applying Rao-Blackwell and Lehmann-Scheffé.
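For the one-sided case, the Rao-Blackwell step has a standard closed form (not spelled out in the thread, so treat this as a sketch): conditioning the indicator 1{X_1 \le c} on the complete sufficient statistic \bar{X} gives \Phi((c - \bar{x}) / \sqrt{1 - 1/n}), since X_1 \mid \bar{X} = \bar{x} \sim N(\bar{x}, 1 - 1/n). A quick Monte Carlo check of unbiasedness (function name `umvue_one_sided` is illustrative):

```python
import numpy as np
from scipy.stats import norm

def umvue_one_sided(xbar, n, c):
    # E[1{X1 <= c} | Xbar = xbar] for an i.i.d. N(theta, 1) sample:
    # X1 | Xbar = xbar ~ N(xbar, 1 - 1/n), so the conditional probability is
    return norm.cdf((c - xbar) / np.sqrt(1.0 - 1.0 / n))

rng = np.random.default_rng(0)
theta, n, c = 0.7, 10, 1.0
reps = 200_000
# Xbar ~ N(theta, 1/n); simulate many replications of the estimator
xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)
est = umvue_one_sided(xbar, n, c)
target = norm.cdf(c - theta)  # the estimand P(X <= c) = Phi(c - theta)
print(est.mean(), target)     # averages should agree if unbiased
```

The agreement follows analytically as well: if \bar{X} \sim N(\theta, 1/n), then E[\Phi((c - \bar{X})/s)] = \Phi((c - \theta)/\sqrt{s^2 + 1/n}) = \Phi(c - \theta) with s^2 = 1 - 1/n.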

However for our case, P(-c \le X \le c) is the same for \theta and -\theta. So it seems like the MVUE isn't unique. I'm wondering if you can make a decision rule like choosing one unbiased estimator when \theta \ge 0 and the other when \theta < 0, but now instead of two non-unique unbiased estimators, we have three. Any thoughts? Is an MVUE just not possible?
 
rayge said:
Our task is to determine if P(-c \le X \le c) has a minimum variance unbiased estimator for a sample from a distribution that is N(\theta,1).

I assume this means that c is given and \theta is unknown. So you can't employ a rule that depends on knowing the sign of \theta.
 
What if we construct two MVUEs, one for P(X \le c) and one for P(X \le -c), and then subtract one from the other? It still seems like we have the same problem, where the target is not one-to-one in \theta...
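The subtraction idea does produce an unbiased estimator of P(-c \le X \le c), by linearity of expectation, whatever one concludes about uniqueness. A hedged sketch, reusing the one-sided closed form \Phi((c - \bar{x})/\sqrt{1 - 1/n}) assumed above (function name `umvue_two_sided` is illustrative):

```python
import numpy as np
from scipy.stats import norm

def umvue_two_sided(xbar, n, c):
    # Difference of the two one-sided Rao-Blackwellized estimators
    s = np.sqrt(1.0 - 1.0 / n)
    return norm.cdf((c - xbar) / s) - norm.cdf((-c - xbar) / s)

rng = np.random.default_rng(1)
theta, n, c = -0.5, 8, 1.5
reps = 200_000
xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)  # Xbar ~ N(theta, 1/n)
est = umvue_two_sided(xbar, n, c)
# Estimand: P(-c <= X <= c) = Phi(c - theta) - Phi(-c - theta),
# which is indeed the same for theta and -theta
target = norm.cdf(c - theta) - norm.cdf(-c - theta)
print(est.mean(), target)
```

Note the estimator itself depends on \theta only through \bar{x}, so it sidesteps the need for any rule based on the sign of \theta.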
 
rayge said:
However for our case, P(-c \le X \le c) is the same for \theta and -\theta.

There is ambiguity if you estimate P(-c \le X \le c) first and try to estimate \theta from that estimate. However the problem you stated doesn't insist we estimate \theta in that manner. Wouldn't the simplest try be to estimate \theta from the sample mean and then estimate P(-c \le X \le c) from that estimate?
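The suggestion above is the plug-in approach: estimate \theta by \bar{x}, then evaluate the probability at that estimate. This is well-defined (no sign ambiguity, since \theta is estimated from the data, not recovered from the probability), though as a sketch it is only consistent, not exactly unbiased. Illustrative numbers only:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
theta_true, n, c = 1.2, 20, 1.0
x = rng.normal(theta_true, 1.0, size=n)  # one sample from N(theta, 1)
theta_hat = x.mean()                      # estimate theta from the sample mean...
# ...then plug it into P(-c <= X <= c) = Phi(c - theta) - Phi(-c - theta)
p_hat = norm.cdf(c - theta_hat) - norm.cdf(-c - theta_hat)
print(theta_hat, p_hat)
```

Plugging \bar{x} into a function that is symmetric in \theta loses nothing here, because \bar{x} itself picks out one of the two candidate values of \theta.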
 
