
Linear Regression

  1. May 13, 2012 #1
    1. The problem statement, all variables and given/known data

    Data y1, y2, ..., yn are modelled as observations of random variables Y1, ..., Yn given by

    Yi = α + β(xi-xbar) + σεi

    where α, β and σ are unknown parameters, x1, x2, ..., xn are known constants, xbar is
    (1/n)Ʃxi, and the εi's are independent random variables, each Gaussian with mean 0 and variance 1.

    Now let x be some additional given value of the explanatory variable. Construct from your estimates of the parameters a suitable estimate for η = α + β(x-xbar), the mean value of the response variable when the explanatory variable is x.
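
    For concreteness, here is a minimal sketch of how data from this model might be simulated (Python; the parameter values and x's below are made-up illustrations, not part of the problem):

    [code]
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, sigma = 1.0, 2.0, 0.5            # "true" parameters (made up)
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # known constants x1..xn (made up)
    xbar = x.mean()
    eps = rng.standard_normal(x.size)             # iid N(0,1) errors
    y = alpha + beta * (x - xbar) + sigma * eps   # observed responses y1..yn
    [/code]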

    2. Relevant equations

    From a previous part of the question, for the original problem, where the xi's are the only values of the explanatory variable, I calculated the following.

    Setting ∂l/∂α = 0 gives Ʃyi - βƩ(xi-xbar) = nα, where l is the log-likelihood function.

    And then, since obviously Ʃ(xi-xbar) = 0, MLE(α) = Ʃyi/n in the original case.

    Setting ∂l/∂β = 0 similarly gives MLE(β) = Ʃyi(xi-xbar) / Ʃ(xi-xbar)²
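
    A quick numerical sketch of these two estimators (Python; the data arrays are made-up illustrations, not data from the problem):

    [code]
    import numpy as np

    # Illustrative data
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
    xbar = x.mean()

    alpha_hat = y.mean()                                        # MLE(α) = Ʃyi/n
    beta_hat = np.sum(y * (x - xbar)) / np.sum((x - xbar)**2)   # MLE(β)
    [/code]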


    3. The attempt at a solution

    I assume that to find MLE(η) I can just add MLE(α) and MLE(β)(x-xbar),
    where MLE(β) and MLE(α) are the maximum likelihood estimates for each parameter in the new regression with the additional x. But this is the problem: how do I find the MLEs of this regression using the MLEs of the old regression?

    What I have done is replace xi with x in ∂l/∂α to get Ʃyi - βƩ(x-xbar) = nα, which then gives Ʃyi - nβ(x-xbar) = nα because the x's don't depend on i.
    This gives the new MLE(α) = Ʃyi/n - β(x-xbar).

    Now for MLE(β) I did the same: replaced xi in the original MLE with x.
    This gives MLE(β)(x-xbar) = Ʃyi/n - α(x-xbar).

    I then replaced the α in the MLE(β) with the MLE(α) for this regression.
    Solving that, I get MLE(β)(x-xbar) = Ʃyi/n and MLE(α) = 0.

    Giving the suitable estimate of η as Ʃyi/n, which I think is wrong...
     
  2. May 13, 2012 #2

    Ray Vickson

    Science Advisor, Homework Helper

    Every textbook on linear regression contains detailed formulas for doing all the calculations in your question. If you do not use a textbook, there are numerous treatments on-line that explain what needs to be done.

    RGV
     
  3. May 15, 2012 #3
    Ok, but I can't find them because I don't know what I'm looking for. I don't understand what it means by "let x be some additional given value of the explanatory variable". Does it mean we have n+1 values of the explanatory variable, x1, x2, ..., xn, x,
    so that Y = α + β(x-xbar) + σε is the response variable to x, and xbar is now
    (x1+x2+...+xn+x)/(n+1),
    OR
    does it mean that we still have n explanatory values but with x added onto each, so that they are x1+x, x2+x, ..., xn+x, and xbar is now (x1+...+xn)/n + x, and
    Yi = α + β(xi+x-xbar) + σεi ?

    I know this may be a stupid question but it's really confused me.
    Thanks for your help
     
  4. May 15, 2012 #4

    Ray Vickson

    Science Advisor, Homework Helper

    If I understand your original question correctly, you have a single independent variable, x, and a single dependent variable, y, connected by an equation y = γ + βx. You have n observed values of x, call them x1, x2, ..., xn (not necessarily all different) and n observed values of y, call them y1, y2, ..., yn. Your model is that the observed values are yi = γ + βxi + σεi, where σ is a constant and the εi are iid N(0,1). You don't know γ, β or σ, but instead have to estimate them from the observed data, which you do via the method of least squares. These give estimates c of γ, b of β and s² of σ².

    Of course, your c and b are just single observations of some random variables C and B. Normally, when we have just a single observation of a random variable, we cannot say very much at all about accuracy, etc., but in the present case the underlying random variables C and B are functions obtained by doing some calculations on n items of data, so we actually have (n-2) useful, leftover items that can be used to assess accuracy. Not surprisingly, the Student t-distribution with (n-2) degrees of freedom will be involved.
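
    In code, the calculation just described might look like this (a Python sketch with illustrative data; the names c, b and s2 match the estimates in the text):

    [code]
    import numpy as np

    # Illustrative data, n pairs (x_i, y_i)
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
    n = x.size
    xbar, ybar = x.mean(), y.mean()

    Sxx = np.sum((x - xbar)**2)
    b = np.sum((x - xbar) * (y - ybar)) / Sxx   # estimate of beta
    c = ybar - b * xbar                         # estimate of gamma (the intercept)
    resid = y - (c + b * x)
    s2 = np.sum(resid**2) / (n - 2)             # estimate of sigma^2, with n-2 degrees of freedom
    [/code]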

    Now, why did you bother to fit an equation to data? Presumably, you would like to use that equation to make predictions. In other words, given a new value of x (maybe one you have not yet observed), what can you say about y? Well, if your underlying model really is correct, you have an unbiased estimate of the mean of y---namely, c + b*x. But suppose you want to construct "error bars" around your estimate---that is, you want a "prediction interval". How could you do that? I think that is what your question is asking, and a Google search on "linear regression + prediction", or something similar, will give you all you need. The formulas are a bit too lengthy to present here, and anyway, I don't personally believe in the value of just writing down formulas for somebody and saying "here they are". You need to read and absorb that material, and make it your own.
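
    For what it's worth, a sketch of that prediction-interval calculation (Python with scipy; illustrative data, 95% level, using the standard simple-regression formula yhat ± t(n-2) * s * sqrt(1 + 1/n + (x0-xbar)²/Sxx)):

    [code]
    import numpy as np
    from scipy import stats

    # Illustrative data
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
    n = x.size
    xbar, ybar = x.mean(), y.mean()
    Sxx = np.sum((x - xbar)**2)
    b = np.sum((x - xbar) * (y - ybar)) / Sxx
    c = ybar - b * xbar
    s = np.sqrt(np.sum((y - (c + b * x))**2) / (n - 2))

    x0 = 3.5                                 # a new value of the explanatory variable
    yhat = c + b * x0                        # point prediction
    tcrit = stats.t.ppf(0.975, df=n - 2)     # two-sided 95% critical value
    half = tcrit * s * np.sqrt(1 + 1/n + (x0 - xbar)**2 / Sxx)
    print(f"95% prediction interval: [{yhat - half:.3f}, {yhat + half:.3f}]")
    [/code]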

    By the way, I chose to write c + b*x instead of a + b*(x - x_bar), but given a single data set you can pass easily between the two. The reason I prefer c + b*x is that the underlying form γ + βx does not depend on any particular data set, while the form α + β(x - x_bar) is not really defined until we know x_bar. You see the difference?
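
    A tiny numerical check of that equivalence (Python; values are illustrative), using the fact that γ + βx = α + β(x - x_bar) exactly when α = γ + β·x_bar:

    [code]
    # Check: gamma + beta*x equals alpha + beta*(x - xbar) when alpha = gamma + beta*xbar.
    gamma, beta, xbar = 2.0, 0.5, 3.0          # illustrative values
    alpha = gamma + beta * xbar
    for x in (0.0, 1.5, 10.0):
        assert abs((gamma + beta * x) - (alpha + beta * (x - xbar))) < 1e-12
    [/code]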

    RGV
     