MHB Maximum Likelihood Estimator of β

  • Thread starter: jmorgan
  • Tags: Likelihood
AI Thread Summary
To find the maximum likelihood estimator (MLE) of β, start with the likelihood function L(β) based on the given distribution. The correct likelihood expression incorporates the observations and is given by the product of individual probabilities. Taking the logarithm of the likelihood simplifies the calculations, leading to the log-likelihood function. By differentiating this log-likelihood with respect to β and setting the derivative equal to zero, you can solve for β to find the MLE. This process ensures that the value of β maximizes the likelihood of observing the given data.
jmorgan
Assuming α is known, find the maximum likelihood estimator of β

$$f(x;\alpha,\beta) = \frac{1}{\alpha!\,\beta^{\alpha+1}}\, x^{\alpha} e^{-x/\beta}$$

I know that firstly you must write down the likelihood $L(\beta)$, but I am unsure whether I have done it correctly. I came out with the answer below; please can someone tell me where/if I have gone wrong.

$$L(\beta) = \left(\alpha!\,\beta^{\alpha+1}\right)^{-n} \cdot \Sigma x_i^{\alpha} \cdot e^{\Sigma x_i/\beta^{n}}$$
 
I don't understand your question. The "maximum likelihood" estimator for a parameter is the value of the parameter that makes a given outcome most likely, but you have not given an "outcome" here.
 
I think you're going in the right direction; however, your calculation is not entirely correct. Suppose that we are given observations $x_1,\ldots,x_n$ from the given distribution. The likelihood is then given by
$$\mathcal{L}(x_1,\ldots,x_n,\alpha,\beta) = \prod_{i=1}^{n} \frac{1}{\alpha ! \beta^{\alpha+1}} x_i^{\alpha}e^{-x_i/\beta}.$$
We wish to find the value of $\beta$ that maximizes the likelihood. Since it is quite common to work with the logarithm, let us first take the log of both sides:
$$\log \mathcal{L}(x_1,\ldots,x_n,\alpha,\beta) = -n \log(\alpha!) - n (\alpha+1) \log(\beta)+ \alpha \sum_{i=1}^{n} \log(x_i) - \frac{\sum_{i=1}^{n} x_i}{\beta}.$$
Taking the derivative w.r.t $\beta$, we obtain
$$\frac{\partial \log \mathcal{L}(x_1,\ldots,x_n,\alpha,\beta)}{\partial\beta} = -\frac{n(\alpha+1)}{\beta} + \frac{1}{\beta^2} \sum_{i=1}^{n} x_i.$$
To proceed, set the RHS equal to $0$ and solve for $\beta$. This is the required MLE.
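For concreteness (this is not from the original thread): setting the derivative to zero and multiplying through by $\beta^2$ gives $n(\alpha+1)\beta = \sum_{i=1}^{n} x_i$, so the MLE is $\hat{\beta} = \frac{1}{n(\alpha+1)}\sum_{i=1}^{n} x_i$. Below is a minimal Python sketch, assuming NumPy/SciPy are available (variable names are illustrative), that checks this closed form against a direct numerical maximization of the log-likelihood. It uses the fact that the given density is a Gamma distribution with shape $\alpha+1$ and scale $\beta$ to draw samples.

Code:
# Sketch (illustrative, not from the thread): verify the closed-form MLE
# beta_hat = sum(x_i) / (n * (alpha + 1)) against a numerical maximizer.
import numpy as np
from math import lgamma
from scipy.optimize import minimize_scalar

alpha = 3          # known parameter
beta_true = 2.0    # parameter to be estimated
rng = np.random.default_rng(0)
# f(x; alpha, beta) is a Gamma density with shape alpha + 1 and scale beta.
x = rng.gamma(alpha + 1, beta_true, size=10_000)
n = len(x)

# Closed-form MLE obtained by setting the score to zero.
beta_hat = x.sum() / (n * (alpha + 1))

# Negative log-likelihood; note log(alpha!) = lgamma(alpha + 1).
def neg_log_lik(beta):
    return -(-n * lgamma(alpha + 1)
             - n * (alpha + 1) * np.log(beta)
             + alpha * np.log(x).sum()
             - x.sum() / beta)

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print(beta_hat, res.x)  # both should be close to beta_true = 2.0

The two printed values should agree closely with each other and with $\beta_{\text{true}}$, which is a quick sanity check on the sign of the derivative above.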
 