# Maximum likelihood estimator and UMVUE

## Homework Statement

Let $$X_{1}, ... , X_{n}$$ be a random sample from $$f\left(x; \theta\right) = \theta x^{\theta - 1} I_{(0, 1)}\left(x\right)$$, where $$\theta > 0$$.
a. Find the maximum-likelihood estimator of $$\theta/\left(1 + \theta\right)$$.

b. Is there a function of $$\theta$$ for which there exists an unbiased estimator whose variance coincides with the Cramér-Rao lower bound?

## The Attempt at a Solution

a.) I understand that to get the maximum-likelihood estimator of $$\theta$$, we should find the value of $$\theta$$ that maximizes the likelihood function.
We do this by taking the derivative of the likelihood function with respect to $$\theta$$ and setting it to zero, or equivalently by taking the derivative of the logarithm of the likelihood function with respect to $$\theta$$ and setting that to zero.
But I cannot figure out how to find the MLE of $$\theta/\left(1 + \theta\right)$$.
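For reference, the derivative step described above can be sketched explicitly for this density (a standard computation, worth writing out before tackling the function of $$\theta$$):

```latex
L(\theta) = \prod_{i=1}^{n} \theta x_i^{\theta-1}
          = \theta^{n} \Big(\prod_{i=1}^{n} x_i\Big)^{\theta-1},
\qquad
\ln L(\theta) = n \ln\theta + (\theta - 1)\sum_{i=1}^{n} \ln x_i,
```

```latex
\frac{d}{d\theta}\ln L(\theta) = \frac{n}{\theta} + \sum_{i=1}^{n} \ln x_i = 0
\quad\Longrightarrow\quad
\hat\theta = \frac{-n}{\sum_{i=1}^{n} \ln x_i}.
```

Note that $$\sum \ln x_i < 0$$ since each $$x_i \in (0,1)$$, so $$\hat\theta > 0$$ as required.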

lanedance
Homework Helper
not 100% sure on these, but since $a(\theta) = \theta/\left(1 + \theta\right)$ is monotonically increasing for $\theta > 0$, its MLE is obtained by plugging the maximum-likelihood value of $\theta$ into $a$ (the invariance property of the MLE)

alternatively you could let $a(\theta) = \theta/\left(1 + \theta\right)$, solve for $\theta(a)$, substitute into your probability distribution, and solve for the MLE of $a$ directly

lanedance
Homework Helper
$$L(\theta) = L(\theta(a))$$

when you maximise, you find the $\theta$ such that
$$\frac{d L(\theta)}{d \theta} = 0$$

considering this for a, you get
$$\frac{d}{da} L(\theta(a)) = \frac{d L(\theta)}{d \theta} \frac{d \theta}{d a}$$

which, since the second factor $d\theta/da$ is nonzero, vanishes exactly when $dL/d\theta = 0$; so maximising over $a$ gives the same result as using the MLE for $\theta$
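The invariance argument above can be checked numerically. This is a minimal sketch (not part of the thread, standard library only): it draws a sample from $f(x;\theta) = \theta x^{\theta-1}$ by inverse-CDF sampling, computes the closed-form MLE $\hat\theta = -n/\sum \ln x_i$, plugs it into $a(\theta)$, and compares against a brute-force grid maximisation of the log-likelihood reparametrised in $a$.

```python
import math
import random

random.seed(0)
theta_true = 2.0
n = 5000

# Inverse-CDF sampling: F(x) = x^theta on (0,1), so X = U^(1/theta).
xs = [random.random() ** (1.0 / theta_true) for _ in range(n)]

s = sum(math.log(x) for x in xs)  # sum of ln(x_i); negative since x_i in (0,1)

# Closed-form MLE for theta, then invariance: a_hat = theta_hat / (1 + theta_hat).
theta_hat = -n / s
a_hat = theta_hat / (1.0 + theta_hat)

# Reparametrize: a = theta/(1+theta)  =>  theta = a/(1-a),
# then maximise the log-likelihood directly over a grid of a in (0,1).
def loglik_a(a):
    t = a / (1.0 - a)
    return n * math.log(t) + (t - 1.0) * s

grid = [i / 10000.0 for i in range(1, 10000)]
a_grid = max(grid, key=loglik_a)

print(f"theta_hat = {theta_hat:.4f}, a_hat = {a_hat:.4f}, grid max = {a_grid:.4f}")
```

With `theta_true = 2.0` the invariance estimate and the grid maximiser agree to grid resolution, and both sit near $2/3$, illustrating that the two routes lanedance describes give the same answer.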