Linear regression and probability distribution


Discussion Overview

The discussion revolves around the application of linear regression, specifically focusing on estimating the uncertainty in the parameters (slope and intercept) when data points are limited. Participants explore the use of Bayesian linear regression for obtaining probability distributions of these parameters and seek guidance on error estimates and confidence intervals.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant seeks help on how to estimate uncertainty in linear regression parameters using Bayesian methods and what assumptions are necessary.
  • Another participant questions the need for Bayesian methods when confidence intervals can be calculated for the parameters.
  • Some participants mention that many statistical programs provide margins of error and distributions for regression parameters.
  • There is a discussion about various Python libraries, such as statsmodels and sklearn, and their capabilities for estimating confidence intervals and parameter distributions.
  • One participant suggests that numpy.polyfit with cov=True could be useful for obtaining the covariance matrix of the coefficients.
  • Concerns are raised about whether the covariance matrix alone is sufficient for understanding the distribution of coefficients, with suggestions to consider confidence intervals instead.
  • Participants discuss the nature of the distribution of regression coefficients, with references to t-distributions and multivariate normal distributions.
  • There are mentions of using Excel for regression analysis, but some participants prefer Python for their projects and express concerns about Excel's limitations.
  • One participant expresses confusion about how to sample from the distribution of regression parameters and seeks clarification on the theoretical foundations of these distributions.

Areas of Agreement / Disagreement

Participants express differing views on the necessity and appropriateness of Bayesian methods versus traditional confidence intervals. There is no consensus on the best approach to estimate uncertainty in regression parameters, and the discussion remains unresolved regarding the specifics of sampling from parameter distributions.

Contextual Notes

Participants mention various assumptions underlying linear regression and the implications for the distributions of coefficients, but these assumptions are not fully specified or agreed upon. There is also uncertainty regarding the capabilities of different Python libraries in relation to the specific needs of the participants.

maka89
I have some data that I want to do simple linear regression on.
However, I don't have a lot of data points and would like to know the uncertainty in the parameters, i.e. the slope and the intercept of the linear regression model.

I know it should be possible to get a probability distribution of the parameters using Bayesian linear regression, but I don't quite know how to use it and what assumptions have to be made. Can someone help me find reasonable error estimates for my model?
 
What's wrong with a confidence interval on the parameters?

If you wish to use a Bayesian approach, then it's no different from how one normally approaches these problems: assume a prior, compute a posterior, and form a credible interval.
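
To make that recipe concrete, here is a minimal sketch of the conjugate version, assuming a Gaussian prior on the coefficients and a known noise variance (the data, sigma2, and tau2 below are all invented for illustration):

Code:
import numpy as np

rng = np.random.default_rng(0)

# Made-up data for illustration
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
X = np.column_stack((np.ones_like(x), x))  # columns: intercept, slope

sigma2 = 0.5**2   # assumed (known) noise variance
tau2 = 10.0**2    # assumed prior variance: beta ~ N(0, tau2 * I)

# Conjugate posterior of beta is Gaussian:
#   Sigma_n = (X'X / sigma2 + I / tau2)^-1,  mu_n = Sigma_n X'y / sigma2
Sigma_n = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
mu_n = Sigma_n @ X.T @ y / sigma2

# Draw posterior samples and form 95% credible intervals
samples = rng.multivariate_normal(mu_n, Sigma_n, size=10_000)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
print("intercept:", mu_n[0], (lo[0], hi[0]))
print("slope:    ", mu_n[1], (lo[1], hi[1]))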
 
Like Marne said, most programs will report the margin of error and the distribution of both the intercept and the slope coefficient when you do a regression.
 
Sounds like what I am after. Any Python library that has this ability?
 
statsmodels should have it, but it's been a while since I've used it. I'll also say it's relatively trivial to figure it out on your own and write your own function to do it. I also imagine sklearn should have this feature (if it doesn't, then wtf)
 
sklearn does not have it :( statsmodels does, but the API seems kind of lame.

I think numpy.polyfit with cov=True will do the trick :cool:
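
For reference, a quick sketch of that call on made-up data; cov=True makes polyfit return the covariance matrix of the fitted coefficients alongside the coefficients themselves:

Code:
import numpy as np

# Made-up data, roughly y = 2x + 1 plus noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.9, 3.1, 4.8, 7.2, 8.9, 11.1])

coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
slope, intercept = coeffs
se_slope, se_intercept = np.sqrt(np.diag(cov))  # standard errors

print("slope:    ", slope, "+/-", se_slope)
print("intercept:", intercept, "+/-", se_intercept)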
 
I'm not sure that'll actually do the trick, since all that does is return the standard errors.

Code:
import numpy as np
import statsmodels.api as sm

# Simulate data from a known model: y = 1 + 0.1*x + 10*x^2 + noise
nsample = 100
x = np.linspace(0, 10, nsample)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)

X = sm.add_constant(X)  # prepend a column of ones for the intercept
y = np.dot(X, beta) + e

# Fit by ordinary least squares
mod = sm.OLS(y, X)
res = mod.fit()

print(res.summary())
print(res.conf_int(0.01))  # 99% confidence intervals for the coefficients

Returns the following

Code:
                           OLS Regression Results                           
==============================================================================
Dep. Variable:                      y   R-squared:                       1.000
Model:                            OLS   Adj. R-squared:                  1.000
Method:                 Least Squares   F-statistic:                 5.402e+06
Date:                Mon, 03 Jul 2017   Prob (F-statistic):          1.70e-245
Time:                        09:34:54   Log-Likelihood:                -131.77
No. Observations:                 100   AIC:                             269.5
Df Residuals:                      97   BIC:                             277.4
Df Model:                           2                                         
Covariance Type:            nonrobust                                         
==============================================================================
                coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9008      0.270      3.338      0.001       0.365       1.436
x1             0.0988      0.125      0.792      0.430      -0.149       0.346
x2             9.9997      0.012    828.551      0.000       9.976      10.024
==============================================================================
Omnibus:                        0.170   Durbin-Watson:                   1.765
Prob(Omnibus):                  0.918   Jarque-Bera (JB):                0.274
Skew:                          -0.092   Prob(JB):                        0.872
Kurtosis:                       2.821   Cond. No.                         144.
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[[  0.19176405   1.60988065]
 [ -0.22893401   0.42648449]
 [  9.96797785  10.03139905]]

Which, in my opinion, is much more helpful than just the standard errors.
 
Ah, okay. But am I right that the two coefficients in my model have a joint multivariate normal distribution? So to get a 95% confidence interval, I suppose I have to integrate out the other coefficient that I am not looking at (and not naively just add/subtract 1.96 std. devs)?
 
The estimates of the coefficients will actually follow a t-distribution with n-2 degrees of freedom. This is of course assuming that the assumptions of linear regression have been met.
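
To spell out the standard result being referenced here: under the usual OLS assumptions, for simple linear regression with ##n## points,
$$\frac{\hat\beta_j - \beta_j}{\operatorname{se}(\hat\beta_j)} \sim t_{n-2},$$
so a 95% confidence interval is ##\hat\beta_j \pm t_{n-2,\,0.975}\,\operatorname{se}(\hat\beta_j)##.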
 
  • #10
Thanks! Bit rusty on the ol' central limit theorem... So, since I have two coefficients and numpy.polyfit returns a 2x2 covariance matrix, I guess I am looking at a multivariate t-distribution with n-2 degrees of freedom and the covariance matrix returned by the function?
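
If one does take that multivariate-t view, a sketch of how such draws can be constructed (a zero-mean normal draw scaled by a chi-squared draw; the data are made up, and whether polyfit's covariance scaling matches this exactly is worth verifying):

Code:
import numpy as np

rng = np.random.default_rng(0)

# Made-up data for illustration
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.9, 3.1, 4.8, 7.2, 8.9, 11.1])
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)

# Multivariate t with nu = n - 2 dof: a zero-mean normal draw divided by
# sqrt(chi2/nu), shifted to center on the fitted coefficients
nu = len(x) - 2
z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
u = rng.chisquare(nu, size=10_000) / nu
samples = coeffs + z / np.sqrt(u)[:, None]  # shape (10000, 2)

print(samples.mean(axis=0))  # close to coeffs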
 
  • #11
You're literally just getting the covariance matrix. There's nothing special about that: the function computes the covariance matrix in the standard way, and that by itself says nothing about distributions. That's why I keep pointing you towards a confidence interval. The way you figure out a confidence interval for any estimate is basically what you do with the parameters being estimated here. I'm not entirely sure what you don't get about the process.
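
As a sketch of doing that by hand from polyfit's output (made-up data; scipy is assumed for the t critical value):

Code:
import numpy as np
from scipy import stats

# Made-up data for illustration
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.9, 3.1, 4.8, 7.2, 8.9, 11.1])

coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
se = np.sqrt(np.diag(cov))       # standard errors of (slope, intercept)
dof = len(x) - 2                 # n - 2 for simple linear regression
tcrit = stats.t.ppf(0.975, dof)  # two-sided 95% critical value

for name, est, s in zip(("slope", "intercept"), coeffs, se):
    print(f"{name}: {est:.3f} +/- {tcrit * s:.3f}")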
 
  • #12
I need to be able to sample from the distribution of the regression coefficients, eventually. So I'm trying to understand what pdf to use.
 
  • #13
maka89 said:
Sounds like what I am after. Any Python library that has this ability?

Excel itself has it, standard Excel. And besides, Excel has free add-ins for a lot of other stuff, like PCA, Factor Analysis, examples, etc. I know a lot of people look down at it, but I think it is mostly snobbery.
 
  • #14
Excel is nice, but I'm looking to use this as a means to an end in a different project, which is Python-based :)
 
  • #15
maka89 said:
Excel is nice, but I'm looking to use this as a means to an end in a different project, which is Python-based :)
Can't you call Excel from Python or maybe export the Excel output analysis?
 
  • #16
Excel is not free, and the data is user-specified, so the script should be able to do the number crunching itself.
I found that the statsmodels package works well for getting confidence intervals. But I still have some trouble understanding how it works and how I can sample from the distribution of the regression parameters ^^
 
  • #17
maka89 said:
Excel is not free, and the data is user-specified, so the script should be able to do the number crunching itself.
I found that the statsmodels package works well for getting confidence intervals. But I still have some trouble understanding how it works and how I can sample from the distribution of the regression parameters ^^

I don't think it samples from the parameters; the parameters have distributions that are known a priori (derived theoretically), and the specific values for the confidence intervals are arrived at through the data you enter. I think Excel Online is available for free, though I don't mean to push it.
 
  • #18
maka89 said:
Excel is nice, but I'm looking to use this as a means to an end in a different project, which is Python-based :)
Python and other languages with scripting capabilities (I like Perl) allow you to write top-level scripts that can generate programs for other languages (Excel, R, MATLAB, etc.), run the program, parse the output for results, and continue to the next step. A large project might have a lot of top-level Python scripts that invoke several other programs or tools that are best for specific subtasks.
 
  • #19
maka89 said:
Excel is not free, and the data is user-specified, so the script should be able to do the number crunching itself.
I found that the statsmodels package works well for getting confidence intervals. But I still have some trouble understanding how it works and how I can sample from the distribution of the regression parameters ^^
The sampling distribution of the regression coefficients converges asymptotically to a normal distribution. If I may ask, why do you want to sample the parameters?
 
  • #20
MarneMath said:
The sampling distribution of the regression coefficients converges asymptotically to a normal distribution. If I may ask, why do you want to sample the parameters?

I got the confidence intervals to work with statsmodels. Need to figure out how to get prediction intervals as well, though.

The reason I want to be able to sample is that the regression coefficients obey some (potentially complex) physics equation with several other parameters (which also have uncertainties). I want to see how these uncertainties propagate, i.e. if a and b have some uncertainty, what are the possible values of c, which is also in the equation. One easy way is to Monte Carlo sample a and b and create a histogram over possible c values.
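
A minimal sketch of that Monte Carlo propagation, where f(a, b) is a hypothetical stand-in for the physics equation and the fitted coefficients and covariance are assumed to come from polyfit or statsmodels:

Code:
import numpy as np

rng = np.random.default_rng(0)

def f(a, b):
    # hypothetical relation; replace with the actual physics equation
    return np.sqrt(np.abs(a)) * np.exp(-b)

# Fitted coefficients and covariance, e.g. from np.polyfit(..., cov=True)
coeffs = np.array([2.0, 1.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Sample (a, b) jointly and push the draws through f to get c's distribution
draws = rng.multivariate_normal(coeffs, cov, size=100_000)
c = f(draws[:, 0], draws[:, 1])

counts, edges = np.histogram(c, bins=50)    # histogram over possible c values
print(np.percentile(c, [2.5, 50.0, 97.5]))  # rough 95% interval for c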
 
