
Maximum Likelihood

  1. Feb 5, 2016 #1
    1. The problem statement, all variables and given/known data
    Let ##X_1, X_2, \ldots, X_n## be a random sample from the pdf
    ##f(x \mid \theta) = \theta x^{-2}##, where ##0 < \theta \le x < \infty##.

    Find the MLE of θ


    My attempt:

    Likelihood function: ##L(\theta \mid x) = \prod_{i=1}^n \theta x_i^{-2} = \theta^n \prod_{i=1}^n x_i^{-2}##

    To find the MLE, I take the log of that function, differentiate ##\log L(\theta \mid x)## with respect to ##\theta##, set the derivative equal to 0, and get ##n/\theta = 0##.
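    Spelled out, that step is just the log-likelihood and its derivative:

    [tex] \log L(\theta \mid x) = n\log\theta - 2\sum_{i=1}^n \log x_i, \qquad \frac{\partial}{\partial\theta}\log L(\theta \mid x) = \frac{n}{\theta} [/tex]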

    However, I realize that ##\theta \le x_i## for every ##i## and ##\theta > 0##... what do I need to do to incorporate this into my likelihood function?
    In class we discussed Fisher information, and my guess is that it plays a role in this problem, but I'm not sure why, or how Fisher information could be used here.
     
    Last edited: Feb 5, 2016
  3. Feb 5, 2016 #2

    RUber

    Homework Helper

    It looks like you are on the right track.
    You are looking to maximize
    ##L(\theta \mid x) = \prod_{i=1}^n \theta x_i^{-2} = \theta^n \prod_{i=1}^n x_i^{-2}##
    for a given vector x.
    If you note that ##\prod_{i=1}^n x_i^{-2}## is a constant for any fixed vector ##x##, how do you maximize the likelihood?
    ##L(\theta \mid x) = C\,\theta^n## for some constant ##C > 0##.
    Since that is increasing in ##\theta##, you should take ##\theta## as large as possible. This is consistent with what you found: ##\frac{n}{\theta}## is never zero for ##\theta > 0##, so the derivative condition alone cannot pick out the maximum.
    Now you apply the constraints. For a vector ##x = [X_1, X_2, ..., X_n]##, what is the maximum allowable ##\theta##?
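    One standard way to make that constraint explicit (a common device, not something stated in the problem) is to fold it into the likelihood with an indicator function:

    [tex] L(\theta \mid x) = \theta^n \left( \prod_{i=1}^n x_i^{-2} \right) \mathbf{1}\!\left\{ 0 < \theta \le \min(x_1, \ldots, x_n) \right\} [/tex]

    so the likelihood drops to zero as soon as ##\theta## exceeds the smallest observation.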
     
  4. Feb 5, 2016 #3

    Ray Vickson

    Science Advisor
    Homework Helper

    In constrained optimization it is often wrong to set derivatives to zero, because the results violate the constraints. In your problem, you want to solve
    [tex] \max_{\theta} L(\theta|x) = \theta^n \prod_{i=1}^n x_i^{-2}, \; \text{subject to} \; 0 < \theta \leq \min\, (x_1,x_2, \ldots, x_n) [/tex]
    The derivative will definitely not equal 0 at the optimal solution.

    If ##\underline{x} = \min\, (x_1,x_2, \ldots, x_n)##, what is the solution of the problem of maximizing ##\theta^n## over the region ##0 < \theta \leq \underline{x}##?
     
    Last edited: Feb 5, 2016
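    A minimal numerical sketch of that constrained maximization (the sample values and the use of Python/NumPy are purely illustrative assumptions, not part of the problem):

[code]
import numpy as np

# Hypothetical sample -- any positive values work for this illustration.
x = np.array([2.3, 1.7, 4.1, 3.0, 2.2])
n = len(x)
x_min = x.min()

# L(theta | x) = theta^n * prod(x_i^-2), valid only for 0 < theta <= min(x_i).
const = np.prod(x ** -2.0)               # factor that does not depend on theta
thetas = np.linspace(1e-3, x_min, 1000)  # grid over the feasible range
L = const * thetas ** n

# theta^n is strictly increasing, so the maximum sits at the right endpoint.
print("theta maximizing L on the grid:", thetas[np.argmax(L)])
print("min(x_i):                      ", x_min)
[/code]

    The grid maximum lands on ##\min(x_1, \ldots, x_n)##, which is the point the question above is steering toward.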