Linear Regression with Non-Linear Basis Functions

  • #1
So I am currently learning some regression techniques for my research and have been reading a text that describes linear regression in terms of basis functions. I have linear basis functions down and know exactly how to get there, because I saw this a lot in my undergrad. Basically, in matrix notation,
[tex]y = w^T x[/tex]
you then define your loss function as
[tex]\frac{1}{n}\sum_{i=1}^{n}\left(w^T x_i - y_i\right)^2[/tex]
then you take the partial derivatives with respect to ##w##, set them equal to zero, and solve.
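
For concreteness, here is a minimal NumPy sketch of that closed-form solve; the data are made up purely for illustration:

[code]
import numpy as np

# toy data: n data points with L features (values made up for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # n = 100, L = 3
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

# minimizing (1/n) sum_i (w^T x_i - y_i)^2 and setting the gradient to zero
# gives the normal equations X^T X w = X^T y
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)   # recovers roughly (2, -1, 0.5)
[/code]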

So now I want to use non-linear basis functions; let's say I want to use ##m## Gaussian basis functions ##\phi_i##. The procedure is the same, but I am not sure exactly about the construction of the model. Let's say I have ##L## features. Is the model equation of the form

[tex]y_n = \sum_{i=1}^{m} w_i \sum_{j=1}^{L} \phi_i(x_j)[/tex]

in other words, I have created a linear combination of ##m## new features, ##\phi(x)##, each of which is constructed from all ##L## of the previous features for each data point ##n##:
[tex]y_n = w_0 + w_1\left(\phi_1(x_1) + \phi_1(x_2) + \dots + \phi_1(x_L)\right) + \dots + w_m\left(\phi_m(x_1) + \phi_m(x_2) + \dots + \phi_m(x_L)\right)[/tex]

where the ##x_i## are the features/variables of my model, not data values? I hope this makes sense. Thanks in advance.
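
To make the construction concrete, here is a sketch of how I picture building the ##m## new features from the ##L## original ones for a single data point; the Gaussian centers and width here are arbitrary stand-ins, not anything from my text:

[code]
import numpy as np

def gaussian_basis(u, mu, s=1.0):
    # one-dimensional Gaussian bump centered at mu with width s
    return np.exp(-(u - mu) ** 2 / (2 * s ** 2))

L, m = 4, 3                            # L original features, m basis functions
x = np.array([0.2, 1.5, -0.7, 0.9])    # one data point with L feature values
mus = np.linspace(-1, 1, m)            # arbitrary centers for the m Gaussians

# the i-th new feature sums phi_i over all L original feature values
phi = np.array([gaussian_basis(x, mu).sum() for mu in mus])
print(phi)                             # m values: sum_j phi_i(x_j), i = 1..m
[/code]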
 

Answers and Replies

  • #2
The parameters you wish to estimate are the ##w_i## and the values ##(x_1,...,x_L)## are known for each data point?
 
  • #3
That is correct.
 
  • #4
Then you have a standard linear regression. Linear refers to the coefficients and not the functions used. Thus your loss function is again

[tex]L = \sum_{i=1}^n \left(y_i - w_0 - w_1\sum_k \phi_1(x_{ik}) - w_2\sum_k \phi_2(x_{ik}) - \dots - w_N \sum_k \phi_N(x_{ik})\right)^2[/tex]

and you minimize this by taking partial derivatives and setting them equal to ##0##. In matrix notation, you let ##Y## be the column matrix with entries the ##y_i## and you let ##X## be the design matrix whose ##i##th row is
[tex]\left(1~~\sum_k \phi_1(x_{ik})~~ \dots~~\sum_k \phi_N(x_{ik})\right)[/tex]
The coefficients are then ##W = (X^TX)^{-1} X^T Y##.
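
Putting this together, a minimal sketch of the whole fit; the Gaussian ##\phi_i## and the data are stand-ins for illustration, and a least-squares solver is used rather than forming ##(X^TX)^{-1}## explicitly, which is numerically safer but gives the same ##W##:

[code]
import numpy as np

def gaussian_basis(u, mu, s=1.0):
    # a generic Gaussian basis function; centers and width are arbitrary here
    return np.exp(-(u - mu) ** 2 / (2 * s ** 2))

rng = np.random.default_rng(0)
n, L, N = 200, 4, 5                  # data points, raw features, basis functions
X_raw = rng.normal(size=(n, L))      # made-up raw data
y = rng.normal(size=n)               # made-up targets
mus = np.linspace(-2, 2, N)          # arbitrary Gaussian centers

# design matrix: i-th row is (1, sum_k phi_1(x_ik), ..., sum_k phi_N(x_ik))
X = np.column_stack(
    [np.ones(n)] + [gaussian_basis(X_raw, mu).sum(axis=1) for mu in mus]
)

# W = (X^T X)^{-1} X^T Y, computed via a least-squares solve
W, *_ = np.linalg.lstsq(X, y, rcond=None)
print(W)                             # N + 1 coefficients: w_0, w_1, ..., w_N
[/code]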
 
  • #5
Great, thanks. This is what I thought it meant, but the way you wrote it makes it a lot clearer than the text I am using, which has all its formulas in matrix notation, so it is hard to tell whether they are talking about a single random variable or a vector of random variables.
 
