# MLE problem

## Homework Statement

Consider the bivariate vector random variable ##(X,Y)^T## which has the probability density function $$f_{X,Y}(x,y) = \theta xe^{-x(y+\theta)}, \quad x\geq 0, y\geq 0 \; \; \text{and} \; \; \theta > 0.$$
I have shown that the marginal distribution of ##X## is ##f_X(x|\theta) = \theta e^{-\theta x}, \quad x\geq 0 \; \; \text{and} \; \; \theta > 0.##
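For completeness, the marginal follows by integrating the joint density over ##y##: $$f_X(x) = \int_0^\infty \theta x e^{-x(y+\theta)}\,dy = \theta x e^{-x\theta}\cdot\frac{1}{x} = \theta e^{-\theta x}.$$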

My question is: why do these two distributions have the same maximum likelihood estimator ##\hat{\theta} = \frac{1}{\bar{x}}##?

Ray Vickson

## The Attempt at a Solution

In one case you observe ##(X,Y)=(x,y)## and estimate ##\theta## from ##f_{XY}(x,y)##, which requires finding the maximum of a function of the form ##a \theta e^{-x \theta}##, where ##a = x e^{-xy}## is a number. In the other case you observe ##X=x## and estimate ##\theta## from ##f_X(x)##, which requires finding the maximum of a function of the form ##a \theta e^{-x \theta}##, where ##a = 1##. The "scale factor" ##a## does not affect the optimization.
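A quick numerical sketch of this point (the values ##x = 2##, ##y = 3## and the grid are illustrative assumptions, not from the problem): the maximizer of ##a\theta e^{-x\theta}## over ##\theta## lands at ##1/x## whether ##a## is the joint-density scale factor or ##1##.

```python
import numpy as np

# Illustrative observation (assumed values) and a fine grid over theta > 0.
x, y = 2.0, 3.0
thetas = np.linspace(0.01, 5.0, 100_000)

def argmax_theta(a):
    # a is constant in theta, so it rescales the curve without
    # moving the location of its maximum.
    return thetas[np.argmax(a * thetas * np.exp(-x * thetas))]

a_joint = x * np.exp(-x * y)  # scale factor coming from f_{XY}(x, y)
a_marg = 1.0                  # scale factor coming from f_X(x)
print(argmax_theta(a_joint), argmax_theta(a_marg), 1 / x)
```

Both calls return the same grid point, matching ##\theta = 1/x## up to the grid resolution.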

To put it another way: the ML estimator of ##\theta## based on ##f_{XY}(x,y)## is independent of ##y##. Think of ##f_X(x)## as an integral over ##y## of the curves ##f_{XY}(x,y)##, where the variable is ##\theta## and ##x, y## are just input parameters. Each of these constituent curves has its maximum at the same point ##\theta = 1/x##, so the integral over ##y## is maximized at that same point.
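The same agreement shows up with a sample of size ##n##, where the common estimator is ##\hat{\theta} = 1/\bar{x}##. A simulation sketch (the sampling scheme is an assumption consistent with the stated joint density: ##X \sim \text{Exp}(\theta)##, and given ##X = x##, ##Y \sim \text{Exp}(x)##):

```python
import numpy as np

# Simulate from the joint density f(x, y) = theta * x * exp(-x*(y + theta)):
# X ~ Exponential(rate theta); given X = x, Y ~ Exponential(rate x).
rng = np.random.default_rng(0)
theta_true, n = 2.0, 10_000
x = rng.exponential(1 / theta_true, n)
y = rng.exponential(1 / x)  # per-sample scale 1/x_i

thetas = np.linspace(0.01, 5.0, 50_000)
# Joint log-likelihood: sum_i [ln(theta) + ln(x_i) - x_i*(y_i + theta)].
ll_joint = n * np.log(thetas) + np.sum(np.log(x) - x * y) - x.sum() * thetas
# Marginal log-likelihood: sum_i [ln(theta) - theta * x_i].
ll_marg = n * np.log(thetas) - x.sum() * thetas

mle_joint = thetas[np.argmax(ll_joint)]
mle_marg = thetas[np.argmax(ll_marg)]
print(mle_joint, mle_marg, 1 / x.mean())
```

The two log-likelihoods differ only by a constant in ##\theta##, so they peak at the same grid point, which agrees with ##1/\bar{x}##.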
