MHB Maximum Likelihood Estimator of β

  • Thread starter: jmorgan
  • Tags: Likelihood
Summary
To find the maximum likelihood estimator (MLE) of β, start with the likelihood function L(β) based on the given distribution. The correct likelihood expression incorporates the observations and is given by the product of individual probabilities. Taking the logarithm of the likelihood simplifies the calculations, leading to the log-likelihood function. By differentiating this log-likelihood with respect to β and setting the derivative equal to zero, you can solve for β to find the MLE. This process ensures that the value of β maximizes the likelihood of observing the given data.
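The procedure described in this summary can also be checked numerically. Below is a minimal sketch in Python, assuming the density discussed in this thread with a known shape parameter α; the observations and the optimizer bounds are made up for illustration.

```python
# Minimal numerical sketch of the MLE procedure, assuming the density
# f(x; a, b) = x^a * exp(-x/b) / (a! * b^(a+1)) from this thread.
# ALPHA is assumed known; the sample `x` is hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln  # log(a!) = gammaln(a + 1)

ALPHA = 2.0                                # known shape parameter
x = np.array([1.2, 3.4, 0.7, 2.9, 1.8])    # hypothetical observations

def neg_log_likelihood(beta):
    """Negative log-likelihood of beta for the density above."""
    n = len(x)
    return -(-n * gammaln(ALPHA + 1)
             - n * (ALPHA + 1) * np.log(beta)
             + ALPHA * np.sum(np.log(x))
             - np.sum(x) / beta)

# Maximize the likelihood by minimizing its negative over a bounded interval.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100), method="bounded")
print("numerical MLE:        ", res.x)
print("closed form x̄/(α+1): ", x.mean() / (ALPHA + 1))
```

The two printed values should agree: the numerical maximizer matches the closed-form answer derived later in the thread.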
jmorgan
Assuming α is known, find the maximum likelihood estimator of β

$$f(x;\alpha,\beta) = \frac{1}{\alpha!\,\beta^{\alpha+1}}\,x^{\alpha}e^{-x/\beta}$$

I know that the first step is to form the likelihood L(β), but I am unsure whether I have done it correctly. I came out with the answer below; please can someone tell me where/if I have gone wrong.

$$L(\beta)= (\alpha!\,\beta^{\alpha+1})^{-n}\cdot\sum x_i^{\alpha}\cdot e^{\sum x_i/\beta^n}$$
 
I don't understand your question. The "maximum likelihood" estimator for a parameter is the value of the parameter that makes a given outcome most likely, but you have not given an "outcome" here.
 
I think that you're going in the right direction. However, your calculation is not entirely correct. Suppose that we are given observations $x_1,\ldots,x_n$ from the given distribution. The likelihood is then given by
$$\mathcal{L}(x_1,\ldots,x_n,\alpha,\beta) = \prod_{i=1}^{n} \frac{1}{\alpha ! \beta^{\alpha+1}} x_i^{\alpha}e^{-x_i/\beta}.$$
We wish to find the value of $\beta$ that maximizes the likelihood. Since it is quite common to work with the logarithm, let us first take the log of both sides:
$$\log \mathcal{L}(x_1,\ldots,x_n,\alpha,\beta) = -n \log(\alpha!) - n (\alpha+1) \log(\beta)+ \alpha \sum_{i=1}^{n} \log(x_i) - \frac{\sum_{i=1}^{n} x_i}{\beta}.$$
Taking the derivative with respect to $\beta$, we obtain
$$\frac{\partial \log \mathcal{L}(x_1,\ldots,x_n,\alpha,\beta)}{\partial\beta} = -n(\alpha+1)\frac{1}{\beta} + \frac{1}{\beta^2} \sum_{i=1}^{n} x_i.$$
To proceed, set this derivative equal to $0$ and solve for $\beta$; the solution is the required MLE.
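Carrying out that final step (this is just the algebra left to the reader): setting the derivative to zero gives
$$-\frac{n(\alpha+1)}{\beta} + \frac{1}{\beta^2}\sum_{i=1}^{n} x_i = 0 \quad\Longrightarrow\quad \hat{\beta} = \frac{\sum_{i=1}^{n} x_i}{n(\alpha+1)} = \frac{\bar{x}}{\alpha+1}.$$
As a sanity check, this density is that of a Gamma distribution with shape $\alpha+1$ and scale $\beta$, so $E[X] = (\alpha+1)\beta$; the MLE is simply the sample mean rescaled accordingly.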
 