MLE of Bivariate Vector Random Variable: Proof & Explanation

In summary, the joint density of the bivariate vector random variable ##(X,Y)^T## and the marginal density of ##X## lead to the same maximum likelihood estimator ##\hat{\theta} = \frac{1}{\bar{x}}##, because the joint density is the marginal density multiplied by a "scale factor" ##a## that does not depend on ##\theta## and therefore does not affect where the maximum occurs.
  • #1
squenshl

Homework Statement


Consider the bivariate vector random variable ##(X,Y)^T## which has the probability density function $$f_{X,Y}(x,y) = \theta xe^{-x(y+\theta)}, \quad x\geq 0, y\geq 0 \; \; \text{and} \; \; \theta > 0.$$
I have shown that the marginal distribution of ##X## is ##f_X(x|\theta) = \theta e^{-\theta x}, \quad x\geq 0 \; \; \text{and} \; \; \theta > 0.##

My question is: why do these two distributions have the same maximum likelihood estimator ##\hat{\theta} = \frac{1}{\bar{x}}##?

Homework Equations

The Attempt at a Solution

 
  • #2
squenshl said:

My question is why do these two distributions have the same maximum likelihood estimator ##\hat{\theta} = \frac{1}{\bar{x}}##?

In one case you observe ##(X,Y)=(x,y)## and estimate ##\theta## from ##f_{XY}(x,y)##, which requires finding the maximum of a function of the form ##a \theta e^{-x \theta}##, where ##a = x e^{-xy}## is just a number. In the other case you observe ##X=x## and estimate ##\theta## from ##f_X(x)##, which requires finding the maximum of a function of the same form ##a \theta e^{-x \theta}##, where ##a = 1##. The "scale factor" ##a## is positive and does not involve ##\theta##, so it does not affect the optimization.
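Explicitly, the factorization is
$$f_{X,Y}(x,y) = \theta x e^{-x(y+\theta)} = \underbrace{x e^{-xy}}_{a}\,\theta e^{-x\theta},$$
so, as a function of ##\theta##, the joint density is just a positive constant times the marginal density, and both are maximized at the same ##\hat{\theta}##.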

Put another way: the ML estimator of ##\theta## based on ##f_{XY}(x,y)## does not depend on ##y##. Think of ##f_X(x)## as an integral over ##y## of the curves ##f_{XY}(x,y)##, where the variable is ##\theta## and ##x, y## are just input parameters. Each of these constituent curves has its maximum at the same point ##\theta = 1/x##, so the integral over ##y## is maximized at that same point.
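As a quick sanity check (not part of the original thread), the following Python sketch simulates data from ##f_{X,Y}##, maximizes the joint and marginal log-likelihoods numerically, and confirms that both agree with ##1/\bar{x}##. The sample size, the true ##\theta##, and the use of NumPy/SciPy here are arbitrary choices for illustration.

```python
# Minimal numerical check: maximize the joint and the marginal
# log-likelihoods for simulated data and confirm both give 1/x-bar.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true, n = 2.0, 1000

# Simulate from f_{X,Y}(x,y) = theta * x * exp(-x*(y + theta)):
# X ~ Exponential(rate=theta), and Y | X=x ~ Exponential(rate=x).
x = rng.exponential(scale=1.0 / theta_true, size=n)
y = rng.exponential(scale=1.0 / x)

def neg_joint_loglik(theta):
    # -sum of log f_{X,Y}(x_i, y_i; theta)
    return -np.sum(np.log(theta) + np.log(x) - x * (y + theta))

def neg_marginal_loglik(theta):
    # -sum of log f_X(x_i; theta) for the exponential marginal
    return -np.sum(np.log(theta) - theta * x)

joint_hat = minimize_scalar(neg_joint_loglik, bounds=(1e-6, 50), method="bounded").x
marg_hat = minimize_scalar(neg_marginal_loglik, bounds=(1e-6, 50), method="bounded").x

# All three values agree up to solver tolerance.
print(joint_hat, marg_hat, 1.0 / x.mean())
```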
 

1. What is MLE and why is it important in statistics?

MLE stands for Maximum Likelihood Estimation, a statistical method for estimating the parameters of a probability distribution. It is widely used in data analysis and statistical modeling because it chooses the parameter values under which the observed data are most probable, that is, the values that maximize the likelihood function.

2. What is a bivariate vector random variable?

A bivariate vector random variable is a type of random variable that has two components, each of which is a random variable. It represents a pair of variables that are related to each other in some way, such as height and weight, or temperature and humidity.

3. How is the MLE of a bivariate vector random variable calculated?

The MLE of a bivariate vector random variable is found by maximizing the likelihood function, which measures how probable the observed data are for given parameter values. In practice one works with the log-likelihood: take its derivative with respect to each parameter, set the derivatives equal to zero, and solve for the parameters, checking that the critical point is a maximum.
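For the joint density in this thread, for example, with a sample ##(x_1,y_1),\dots,(x_n,y_n)## the log-likelihood is
$$\ell(\theta) = \sum_{i=1}^n \left[\log\theta + \log x_i - x_i(y_i+\theta)\right], \qquad \frac{d\ell}{d\theta} = \frac{n}{\theta} - \sum_{i=1}^n x_i = 0 \;\Longrightarrow\; \hat{\theta} = \frac{n}{\sum_{i=1}^n x_i} = \frac{1}{\bar{x}},$$
and the same calculation applied to the marginal ##f_X(x\mid\theta)=\theta e^{-\theta x}## gives the identical estimator.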

4. What is the proof of the MLE for a bivariate vector random variable?

Deriving the MLE for a bivariate vector random variable uses multivariate calculus to locate the maximum of the likelihood function: take the partial derivative of the (log-)likelihood with respect to each parameter, set the derivatives equal to zero, and solve the resulting system of equations. A second-order check then confirms that the critical point is a maximum rather than a minimum or saddle point.
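In the single-parameter example above the second-order check is one line:
$$\frac{d^2\ell}{d\theta^2} = -\frac{n}{\theta^2} < 0 \quad \text{for all } \theta > 0,$$
so the log-likelihood is strictly concave and the critical point ##\hat{\theta} = 1/\bar{x}## is indeed the global maximum.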

5. What is the importance of MLE for a bivariate vector random variable?

For a bivariate vector random variable, MLE is important because, under standard regularity conditions, it produces consistent and asymptotically efficient estimates of the distribution's parameters. This supports a better understanding of, and better predictions about, the relationship between the two variables, as well as more reliable statistical analyses and models.
