Maximum likelihood estimator of mean difference


Homework Help Overview

The problem involves finding the maximum likelihood estimator (MLE) for the difference in means, θ = μ₁ - μ₂, between two normal populations. The first sample has size n₁ and the second has size n₂, with variances σ²₁ and σ²₂ (none of the parameters is assumed known). The challenge includes determining how to allocate a fixed total sample size n = n₁ + n₂ to minimize the variance of the MLE for θ.

Discussion Character

  • Exploratory; assumption checking; mathematical reasoning

Approaches and Questions Raised

  • Participants discuss starting points for deriving the likelihood function based on the samples from both distributions. There is consideration of differentiating the log-likelihood with respect to θ. Questions are raised about the independence of parameters and whether any parameters are known beforehand.

Discussion Status

The discussion is ongoing, with participants exploring different approaches to formulating the likelihood function and considering the implications of parameter independence. Some guidance has been offered regarding the MLE for θ, but no consensus has been reached on the specific form of the likelihood function or the optimal sample allocation.

Contextual Notes

It is noted that none of the parameters are known beforehand, which may influence the approach to finding the MLE.

safina

Homework Statement


A sample of size n₁ is to be drawn from a normal population with mean μ₁ and variance σ²₁. A second sample of size n₂ is to be drawn from a normal population with mean μ₂ and variance σ²₂. What is the maximum likelihood estimator of θ = μ₁ - μ₂?

If we assume that the total sample size n = n₁ + n₂ is fixed, how should the n observations be divided between the two populations in order to minimize the variance of the maximum likelihood estimator of θ?


Homework Equations

The Attempt at a Solution
So if we denote the samples from distribution 1 by xᵢ and the samples from distribution 2 by yⱼ, then call the samples
x = (x₁, ..., xᵢ, ...)
y = (y₁, ..., yⱼ, ...)

I'd start by trying to find the likelihood of the data given the parameters,

L(x, y | μ₁, σ₁, μ₂, σ₂)

then consider differentiating ln(L) with respect to θ.

(I haven't tried this, but it's how I'd get started.)
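As a sanity check on this approach, here is a minimal numerical sketch with made-up parameter values (nothing below comes from the thread itself): by independence the joint log-likelihood splits into two sums, each maximized by its own sample mean, so the MLE of θ is x̄ - ȳ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters chosen for illustration only.
mu1, sig1, n1 = 5.0, 2.0, 40
mu2, sig2, n2 = 3.0, 1.5, 60

x = rng.normal(mu1, sig1, n1)
y = rng.normal(mu2, sig2, n2)

def loglik(m1, m2, s1, s2):
    """Joint log-likelihood of both samples (constants dropped);
    independence means the two samples' terms simply add."""
    return (-n1 * np.log(s1) - np.sum((x - m1) ** 2) / (2 * s1 ** 2)
            - n2 * np.log(s2) - np.sum((y - m2) ** 2) / (2 * s2 ** 2))

# The MLEs of the means are the sample means, so theta_hat = xbar - ybar.
theta_hat = x.mean() - y.mean()

# Sanity check: as a function of each mean, the log-likelihood is a
# downward parabola, so it can only decrease under perturbation.
best = loglik(x.mean(), y.mean(), sig1, sig2)
for eps in (-0.1, 0.1):
    assert best >= loglik(x.mean() + eps, y.mean(), sig1, sig2)
    assert best >= loglik(x.mean(), y.mean() + eps, sig1, sig2)

print(theta_hat)
```

With these values the estimate lands near the true difference μ₁ - μ₂ = 2.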
 
Also, are any of the parameters known beforehand? If all are unknown, then since the parameters are not independent you could consider the set {θ, σ₁, μ₂, σ₂} instead.
 
One more thing: let t, m1, m2 be the estimators for θ, μ₁ and μ₂.

Due to the independence of the two samples, it shouldn't be too hard to convince yourself that the MLE for θ is
t = m1 - m2

Now you should be able to come up with the sampling distribution of m1 and m2,
P(m1, m2 | μ₁, μ₂, σ₁, σ₂), and use it to find the sampling distribution of t, whose variance should then be apparent. Then minimize that variance subject to the constraint.
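The last step can be sketched numerically. Because the two sample means are independent, Var(t) = σ²₁/n₁ + σ²₂/n₂, and minimizing over n₁ with n₂ = n - n₁ fixed gives the allocation n₁/n₂ = σ₁/σ₂. A small check with made-up values of σ₁, σ₂ and n (the thread leaves them symbolic):

```python
import numpy as np

# Hypothetical values for illustration only.
sig1, sig2, n = 2.0, 1.0, 90

# Var(t) = sig1^2/n1 + sig2^2/n2 with n2 = n - n1; scan integer allocations.
n1_grid = np.arange(1, n)
var_t = sig1**2 / n1_grid + sig2**2 / (n - n1_grid)
n1_best = int(n1_grid[np.argmin(var_t)])

# Calculus (substitute n2 = n - n1, set the derivative to zero)
# gives n1 = n * sig1 / (sig1 + sig2).
n1_theory = n * sig1 / (sig1 + sig2)
print(n1_best, n1_theory)  # both give 60 here
```

With σ₁ twice σ₂, twice as many observations go to the first population, matching the grid search.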
 
lanedance said:
also are any of the parameters known beforehand? if all are unknown, then since the parameters are not independent you could consider the set {θ, σ₁, μ₂, σ₂} instead
None of the parameters are known beforehand.

I have a feeling that the MLE for t is m1 - m2, but could you help me with what the form of the likelihood function will be?
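For reference, the likelihood asked about here is just the product of the two normal densities over both independent samples; the standard form (not specific to this thread) is:

```latex
L(\mu_1,\mu_2,\sigma_1,\sigma_2;\mathbf{x},\mathbf{y})
  = \prod_{i=1}^{n_1} \frac{1}{\sqrt{2\pi}\,\sigma_1}
      \exp\!\left(-\frac{(x_i-\mu_1)^2}{2\sigma_1^2}\right)
    \prod_{j=1}^{n_2} \frac{1}{\sqrt{2\pi}\,\sigma_2}
      \exp\!\left(-\frac{(y_j-\mu_2)^2}{2\sigma_2^2}\right)
```

Setting ∂ln L/∂μ₁ = 0 and ∂ln L/∂μ₂ = 0 gives μ̂₁ = x̄ and μ̂₂ = ȳ, so by the invariance property of MLEs, θ̂ = x̄ - ȳ.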
 
