
Multivariate Linear Regression With Coefficient Constraint

  1. Feb 5, 2008 #1
    [SOLVED] Multivariate Linear Regression With Coefficient Constraint

    I'm attempting a multivariate linear regression (MVLR) by the method of least squares. Basically, I'm solving a matrix equation of the following form for the [tex]\beta_p[/tex]:
    [tex]\begin{bmatrix} \sum y \\ \sum x_1 y \\ \sum x_2 y \\ \sum x_3 y \end{bmatrix} = \begin{bmatrix} n & \sum x_1 & \sum x_2 & \sum x_3 \\ \sum x_1 & \sum x_1^2 & \sum x_1 x_2 & \sum x_1 x_3 \\ \sum x_2 & \sum x_2 x_1 & \sum x_2^2 & \sum x_2 x_3 \\ \sum x_3 & \sum x_3 x_1 & \sum x_3 x_2 & \sum x_3^2 \end{bmatrix}\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}[/tex]

    The x's are the predictor data sets, y is the data I want to fit, and the [tex]\beta[/tex]'s are the coefficients.
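
    For reference, here's roughly how I'm setting up the unconstrained version in Matlab at the moment (just a sketch; x1, x2, x3 and y are column vectors holding my data):

    [code]
    % Design matrix: a column of ones for the intercept, then the regressors
    X = [ones(size(y)), x1, x2, x3];

    % Normal equations (X'X)*beta = X'y, i.e. the matrix equation above
    beta = (X' * X) \ (X' * y);
    [/code]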

    My problem is that I want to set a constraint such that [tex]\beta[/tex] remains positive. What would be a good way to achieve this?
  3. Feb 5, 2008 #2


    Science Advisor
    Homework Helper

    In general, you need to set up a restricted minimization problem (Lagrangian) for the sum of squared errors (SSE); then minimize SSE subject to the constraint.

    E.g., the REG procedure in SAS has a RESTRICT statement, which resorts to a constrained optimization algorithm.

    Specifically, if you run the unrestricted MVLR and all the coefficients come out positive, then you don't need the constraint. If an unrestricted coefficient is < 0, then you need to use constrained minimization.
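
    In Matlab (Optimization Toolbox) the restricted problem can be written as a quadratic program with a nonnegativity bound on the coefficients. Something like the following sketch (untested; X is the design matrix and y the response):

    [code]
    % SSE = ||y - X*b||^2 expands to 0.5*b'*H*b + f'*b (plus a constant), with
    H  = 2 * (X' * X);
    f  = -2 * (X' * y);
    lb = zeros(size(X, 2), 1);     % lower bound: every coefficient >= 0

    % quadprog(H, f, A, b, Aeq, beq, lb, ub) -- only the bounds are needed here
    beta_c = quadprog(H, f, [], [], [], [], lb, []);
    [/code]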
    Last edited: Feb 5, 2008
  4. Feb 7, 2008 #3
    If you don't mind, could you or somebody else explain that procedure in more detail? I have only minimal experience working with Lagrange multipliers, and I'm unfamiliar with SAS. I've been running my numbers through Matlab.
  5. Feb 7, 2008 #4


    Science Advisor
    Homework Helper

    In Mathematica, the NMinimize and NMaximize functions are used to solve nonlinear constrained global optimization problems. Perhaps you can search for similar algorithms in Matlab?
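
    In Matlab the analogous general-purpose routine is fmincon, which accepts lower bounds on the variables. A rough sketch (untested; again X is the design matrix and y the data):

    [code]
    sse = @(b) sum((y - X * b).^2);   % objective: sum of squared errors
    b0  = max(X \ y, 0);              % start from the unconstrained fit, clipped to the bounds
    lb  = zeros(size(X, 2), 1);       % constraint: every coefficient >= 0

    beta_c = fmincon(sse, b0, [], [], [], [], lb, []);
    [/code]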
  6. Feb 10, 2008 #5


    Science Advisor
    Homework Helper

    This might work; but I'd constrain the betas (to zero) one at a time, and do it alternately. A Matlab sketch of the checks follows below.

    E.g. (using a model with four regressors x1, ..., x4 for illustration): if beta1 < 0 and beta3 < 0 in the full model, then first minimize (y - b0 - b1 x1 - b2 x2 - b4 x4)^2 and see whether beta1 is still < 0. Alternately, minimize (y - b0 - b2 x2 - b3 x3 - b4 x4)^2 and see whether beta3 is still < 0. If the answer to both questions is "yes," then minimize (y - b0 - b2 x2 - b4 x4)^2. In all other cases (yes & no, no & yes, no & no), you'll have a model selection problem and will have to use a model selection criterion (e.g. the F statistic or the adjusted R-squared).
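
    As a rough Matlab sketch of that procedure (assuming a design matrix X with a column of ones followed by x1, ..., x4, and plain least squares for each reduced model):

    [code]
    X = [ones(size(y)), x1, x2, x3, x4];   % full model: intercept + four regressors
    b_full = X \ y;                        % unrestricted fit

    b_no3  = X(:, [1 2 3 5]) \ y;   % beta3 constrained to 0: is beta1 (2nd entry) still < 0?
    b_no1  = X(:, [1 3 4 5]) \ y;   % beta1 constrained to 0: is beta3 (3rd entry) still < 0?
    b_no13 = X(:, [1 3 5]) \ y;     % both constrained to 0 (use this if both answers are "yes")
    [/code]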
    Last edited: Feb 11, 2008
  7. Feb 14, 2008 #6
    Thank you all for your help.

    I have found that the Matlab function lsqnonneg does exactly what I am looking for. I'm speculating that it follows a procedure similar to what vedenev and EnumaElish are suggesting.
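
    For anyone who finds this thread later, the call is essentially just the following (with X the design matrix and y the data to fit); note that it forces every coefficient, including the intercept, to be nonnegative:

    [code]
    X = [ones(size(y)), x1, x2, x3];   % intercept column plus the regressors
    beta = lsqnonneg(X, y);            % least squares solution with beta >= 0
    [/code]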