

- MHB
- Thread starter Julio1

- #1

Julio1


Hello MHB :)! Can anyone help me please :)? I don't know how to proceed...

- #2

I like Serena

Homework Helper

MHB


Julio said:

Hello MHB :)! Can anyone help me please :)? I don't know how to proceed...

Hi Julio!

We want to maximize the likelihood that some $\theta$ is the right one.

The likelihood that a certain $\theta$ is the right one given a random sample is:

$$\mathcal L(\theta; x_1, ..., x_n) = f(x_1|\theta) \times f(x_2|\theta) \times ... \times f(x_n|\theta)$$

where $f$ is the probability density function.

Since $X\sim U(0,\theta]$, $f$ is given by:

$$f(x|\theta)=\begin{cases}\frac 1 \theta&\text{if }0 < x \le \theta \\ 0 &\text{otherwise}\end{cases}$$

Can you tell for which $\theta$ the likelihood will be at its maximum?
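To make the piecewise structure of this likelihood concrete, here is a minimal Python sketch (not from the thread; the sample values are hypothetical) that evaluates $\mathcal L(\theta; x_1,\dots,x_n)$ for a few candidate values of $\theta$:

```python
import numpy as np

def uniform_likelihood(theta, xs):
    """Likelihood of theta for a sample from U(0, theta]:
    the product of densities 1/theta, which is 0 as soon as
    any observation exceeds theta."""
    xs = np.asarray(xs, dtype=float)
    if theta <= 0 or np.any(xs > theta):
        return 0.0
    return theta ** (-len(xs))

sample = [0.9, 2.1, 1.4]  # hypothetical sample; its maximum is 2.1
print(uniform_likelihood(2.0, sample))  # 0.0: theta is below the sample maximum
print(uniform_likelihood(2.1, sample))  # 1/2.1**3, the largest attainable value
print(uniform_likelihood(3.0, sample))  # 1/3**3, smaller than at theta = 2.1
```

Note that the likelihood is zero below the sample maximum and strictly decreasing above it, which is the key to the question that follows.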

- #3

Julio1


Good, we have that the likelihood function is $L(\theta)=\dfrac{1}{\theta^n}.$ Taking logarithms,

$\ln (L(\theta))=\ln\left(\dfrac{1}{\theta^n}\right)=-n\ln(\theta).$ Differentiating with respect to $\theta$ gives $\dfrac{\partial}{\partial \theta}(\ln L(\theta))=\dfrac{\partial}{\partial \theta}(-n\ln(\theta))=-\dfrac{n}{\theta}.$ Setting this equal to zero gives $-\dfrac{n}{\theta}=0$, which would require $n=0$.

But why does the parameter $\theta$ drop out? :( Does that mean I can't find an estimator for $\theta$?

- #4

I like Serena

Homework Helper

MHB


What's missing in your approach is that it doesn't take into account that the density is piecewise.

So we need to inspect what happens at the boundaries.

Note that any $x_i$ that is in the sample has to be $\le \theta$, because otherwise its probability is $0$.

So $\theta$ has to be at least the maximum value that is in the sample.

What happens to the likelihood if $\theta$ is bigger than that maximum value?
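The answer to that question can be checked numerically. The following sketch (an illustration, not part of the thread; the true $\theta$ and sample size are made up) scans the likelihood over a grid and confirms it peaks exactly at the sample maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 5.0
sample = rng.uniform(0, theta_true, size=50)

def likelihood(theta, xs):
    # Zero below the sample maximum, 1/theta^n above it.
    return 0.0 if theta < xs.max() else theta ** (-len(xs))

# The likelihood jumps from 0 to its peak exactly at the sample
# maximum, then decays like 1/theta^n as theta grows.
thetas = np.linspace(0.1, 8.0, 2000)
vals = np.array([likelihood(t, sample) for t in thetas])
theta_hat = thetas[vals.argmax()]

print(sample.max())  # the MLE: the largest observation
print(theta_hat)     # the grid maximizer, agreeing up to grid spacing
```

So the maximum likelihood estimator is $\hat\theta = \max_i x_i$: any larger $\theta$ only shrinks $1/\theta^n$, and any smaller $\theta$ makes the likelihood zero.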

A Maximum Likelihood Estimator (MLE) is a statistical method used to estimate the parameters of a probability distribution. It is based on the principle of maximum likelihood, which states that the most likely values of the parameters are those that make the observed data most probable.

A Maximum Likelihood Estimator is calculated by finding the values of the parameters that maximize the likelihood function, which is a function of the parameters and the observed data. This can be done analytically or numerically through optimization methods.

The main assumptions of Maximum Likelihood Estimators are that the data is independent and identically distributed, and that the probability distribution being used to model the data is the true distribution.

The advantages of Maximum Likelihood Estimators include consistency, asymptotic efficiency, and asymptotic normality. They are not unbiased in general, however: in this thread's example, the MLE $\hat\theta = \max_i x_i$ systematically underestimates $\theta$, since every observation is strictly below $\theta$ with probability one.

Maximum Likelihood Estimation may not be appropriate when its assumptions are violated, such as when the model is misspecified or the observations are not independent. In these cases, alternative estimation methods may be more suitable.
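When no closed form exists, the likelihood can be maximized numerically, as mentioned above. As a minimal sketch (the exponential model, true scale, and golden-section optimizer are all my choices for illustration, not from the thread), here is a derivative-free numerical MLE that recovers the known analytic answer, the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=1000)  # hypothetical data, true scale = 2

def neg_log_likelihood(scale):
    # Exponential density f(x) = (1/scale) * exp(-x/scale), so the
    # negative log-likelihood is n*ln(scale) + sum(x)/scale.
    return len(data) * np.log(scale) + data.sum() / scale

# Golden-section search: a simple bracketing optimizer for a
# one-dimensional, unimodal objective.
lo, hi = 0.01, 10.0
phi = (np.sqrt(5) - 1) / 2
for _ in range(100):
    m1 = hi - phi * (hi - lo)
    m2 = lo + phi * (hi - lo)
    if neg_log_likelihood(m1) < neg_log_likelihood(m2):
        hi = m2
    else:
        lo = m1
scale_hat = (lo + hi) / 2

print(scale_hat)     # numerical MLE
print(data.mean())   # analytic MLE for the exponential: the sample mean
```

Minimizing the negative log-likelihood rather than maximizing the likelihood itself is the usual trick: it is numerically stabler, and setting its derivative $n/s - \sum x_i/s^2$ to zero confirms the optimum $s = \bar x$.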
