MLE of Bivariate Vector Random Variable: Proof & Explanation

SUMMARY

The discussion concerns the maximum likelihood estimator (MLE) of the parameter ##\theta## for the bivariate vector random variable ##(X,Y)^T## with joint probability density function $$f_{X,Y}(x,y) = \theta xe^{-x(y+\theta)}$$ and marginal distribution $$f_X(x|\theta) = \theta e^{-\theta x}.$$ Both densities yield the same MLE, ##\hat{\theta} = \frac{1}{\bar{x}}##, because the factor involving ##y## enters the likelihood only as a multiplicative constant that does not affect the optimization over ##\theta##. Since the MLE derived from the joint distribution does not depend on ##y##, it agrees with the MLE derived from the marginal distribution.

PREREQUISITES
  • Understanding of maximum likelihood estimation (MLE)
  • Familiarity with probability density functions (PDFs)
  • Knowledge of bivariate random variables
  • Basic calculus for optimization techniques
NEXT STEPS
  • Study the derivation of maximum likelihood estimators for bivariate distributions
  • Explore the properties of joint and marginal distributions in probability theory
  • Learn about optimization techniques in statistical estimation
  • Investigate the implications of scale factors in MLE calculations
USEFUL FOR

Statisticians, data scientists, and students studying statistical inference, particularly those interested in maximum likelihood estimation and its applications in multivariate contexts.

squenshl

Homework Statement


Consider the bivariate vector random variable ##(X,Y)^T## which has the probability density function $$f_{X,Y}(x,y) = \theta xe^{-x(y+\theta)}, \quad x\geq 0, y\geq 0 \; \; \text{and} \; \; \theta > 0.$$
I have shown that the marginal distribution of ##X## is ##f_X(x|\theta) = \theta e^{-\theta x}, \quad x\geq 0 \; \; \text{and} \; \; \theta > 0.##

My question is: why do these two distributions have the same maximum likelihood estimator ##\hat{\theta} = \frac{1}{\bar{x}}##?

Homework Equations

The Attempt at a Solution

 
In one case you observe ##(X,Y)=(x,y)## and estimate ##\theta## from ##f_{XY}(x,y)##, which requires finding the maximum of a function of the form ##a \theta e^{-x \theta}##, where ##a = x e^{-xy}## is a number. In the other case you observe ##X=x## and estimate ##\theta## from ##f_X(x)##, which requires finding the maximum of a function of the form ##a \theta e^{-x \theta}##, where ##a = 1##. The "scale factor" ##a## does not affect the optimization.

Put another way: the ML estimator of ##\theta## based on ##f_{XY}(x,y)## is independent of ##y##. Think of ##f_X(x)## as an integral over ##y## of the curves ##f_{XY}(x,y)##, where the variable is ##\theta## and ##x, y## are just input parameters. Each of these constituent curves has its maximum at the same point ##\theta = 1/x##, so the integral over ##y## is maximized at that same point.
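The scale-factor argument above can be checked numerically. The sketch below (my own illustration, not part of the thread: the simulation scheme, sample size, and grid search are all assumptions) maximizes both log-likelihoods over a grid of ##\theta## values and compares the two maximizers with ##1/\bar{x}##. It exploits the factorization ##f_{X,Y}(x,y) = \big(x e^{-xy}\big)\,\theta e^{-x\theta}##, i.e. ##X \sim \text{Exp}(\theta)## and ##Y \mid X = x \sim \text{Exp}(x)##.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from the joint density f(x, y) = theta * x * exp(-x*(y + theta)):
# X ~ Exponential(rate = theta), then Y | X = x ~ Exponential(rate = x).
theta_true = 2.0
n = 500
x = rng.exponential(scale=1.0 / theta_true, size=n)
y = rng.exponential(scale=1.0 / x)

def loglik_joint(theta):
    # sum_i log( theta * x_i * exp(-x_i * (y_i + theta)) )
    return n * np.log(theta) + np.sum(np.log(x)) - np.sum(x * (y + theta))

def loglik_marginal(theta):
    # sum_i log( theta * exp(-theta * x_i) )
    return n * np.log(theta) - theta * np.sum(x)

# The two log-likelihoods differ only by a constant in theta
# (the "scale factor" sum(log x_i) - sum(x_i * y_i)), so they
# must peak at the same theta; verify by grid search.
grid = np.linspace(0.01, 10.0, 10_000)
theta_hat_joint = grid[np.argmax([loglik_joint(t) for t in grid])]
theta_hat_marg = grid[np.argmax([loglik_marginal(t) for t in grid])]

print(theta_hat_joint, theta_hat_marg, 1.0 / x.mean())
```

Both grid maximizers coincide (up to the grid spacing) and agree with the closed-form answer ##\hat{\theta} = 1/\bar{x}##, illustrating that the ##y##-dependent factor plays no role in the optimization.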
 
