MHB Maximum Likelihood Estimators for Uniform Distribution

Julio1
Find the maximum likelihood estimator of $\theta$ based on a sample of size $n$ when $X\sim U(0,\theta].$

Hello MHB :)! Can anyone help me please :)! I don't know how to proceed...
 
Julio said:
Find the maximum likelihood estimator of $\theta$ based on a sample of size $n$ when $X\sim U(0,\theta].$

Hello MHB :)! Can anyone help me please :)! I don't know how to proceed...

Hi Julio!

We want to maximize the likelihood that some $\theta$ is the right one.

The likelihood that a certain $\theta$ is the right one given a random sample is:
$$\mathcal L(\theta; x_1, ..., x_n) = f(x_1|\theta) \times f(x_2|\theta) \times ... \times f(x_n|\theta)$$
where $f$ is the probability density function.

Since $X\sim U(0,\theta]$, $f$ is given by:
$$f(x|\theta)=\begin{cases}\frac 1 \theta&\text{if }0 < x \le \theta \\ 0 &\text{otherwise}\end{cases}$$

Can you tell for which $\theta$ the likelihood will be at its maximum?
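To get a feel for how $\mathcal L(\theta)$ behaves, here is a short numerical sketch (the sample values are made up purely for illustration): it evaluates the likelihood on a few candidate values of $\theta$.

```python
def likelihood(theta, xs):
    """Likelihood of theta for a U(0, theta] sample: product of densities."""
    if any(x > theta for x in xs):   # density is 0 if any observation exceeds theta
        return 0.0
    return theta ** (-len(xs))       # product of n factors of 1/theta

xs = [0.9, 2.4, 1.7]                 # hypothetical sample; the maximum is 2.4
for theta in [2.0, 2.4, 3.0, 4.0]:
    print(theta, likelihood(theta, xs))
```

Notice that the likelihood is $0$ for $\theta = 2.0$ (it fails to cover the observation $2.4$), and it only shrinks as $\theta$ grows past $2.4$.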
 
Thanks, I like Serena :).

Good, so the likelihood function is $L(\theta)=\dfrac{1}{\theta^n}.$ Taking logarithms, we get

$\ln (L(\theta))=\ln\left(\dfrac{1}{\theta^n}\right)=-n\ln(\theta).$ Differentiating with respect to $\theta$ gives $\dfrac{\partial}{\partial \theta}(\ln L(\theta))=\dfrac{\partial}{\partial \theta}(-n\ln(\theta))=-\dfrac{n}{\theta}.$ Setting this equal to zero, $-\dfrac{n}{\theta}=0$, which would force $n=0$.

But why? :( The parameter $\theta$ drops out... Does that mean I can't find an estimator for $\theta$?
 
You're welcome Julio!

What's missing in your approach is that it doesn't take into account that the density is piecewise, so the maximum need not occur where the derivative vanishes.
We need to inspect what happens at the boundaries.

Note that every $x_i$ in the sample has to satisfy $x_i \le \theta$, because otherwise its density is $0$, which makes the whole likelihood $0$.
So $\theta$ has to be at least the maximum value in the sample.
What happens to the likelihood if $\theta$ is bigger than that maximum value?
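In code, the boundary argument boils down to this sketch (again with a hypothetical sample): the likelihood is zero for any $\theta$ below the sample maximum and strictly decreasing above it, so it peaks exactly at $\hat\theta = \max_i x_i$.

```python
def likelihood(theta, xs):
    # 1/theta^n if theta covers every observation, else 0
    return theta ** (-len(xs)) if theta >= max(xs) else 0.0

xs = [0.9, 2.4, 1.7]         # hypothetical sample
theta_hat = max(xs)          # the MLE: the sample maximum
# Any theta above theta_hat already has a strictly smaller likelihood:
print(likelihood(theta_hat, xs) > likelihood(theta_hat + 0.1, xs))  # True
```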
 