SchroedingersLion
- TL;DR Summary
- ScikitLearn Elastic Net regression gives a hyperparameter of 0, implying that ordinary least squares is the best method.

Hi guys,

I am using ScikitLearn's Elastic Net implementation to perform regression on a data set where the number of data points is larger than the number of features. The routine uses cross-validation to find the two hyperparameters: ElasticNetCV

The elastic net minimizes ##\frac{1}{2N} \|y-Xw\|_2^2 + \alpha c \|w\|_1 + \frac{1}{2} \alpha (1-c) \|w\|_2^2##, where ##\alpha## and ##c## are the hyperparameters.

However, the routine returns ##\alpha=0##, which means it prefers no regularization at all. I was wondering what this means. Regularization is applied in order to reduce overfitting on unseen data. What does ##\alpha=0## imply? Does it mean I cannot overfit in this case?

SL
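A small sketch (synthetic data, not the OP's actual data set) of the situation described above: with many more samples than features and low noise, cross-validation tends to pick the smallest ##\alpha## on its grid, and the elastic-net fit is then essentially indistinguishable from ordinary least squares. The data shapes, true coefficients, and noise level below are illustrative assumptions.

```python
# Sketch: when N >> p and the signal is strong, ElasticNetCV's
# cross-validation favors (nearly) zero regularization, so the
# coefficients approach the ordinary least-squares solution.
import numpy as np
from sklearn.linear_model import ElasticNetCV, LinearRegression

rng = np.random.default_rng(0)
N, p = 1000, 5                      # many more data points than features
X = rng.normal(size=(N, p))
w_true = np.array([1.5, -2.0, 0.0, 0.7, 3.0])  # assumed true coefficients
y = X @ w_true + 0.1 * rng.normal(size=N)       # low-noise targets

# c in the formula above corresponds to l1_ratio in scikit-learn
enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
ols = LinearRegression().fit(X, y)

print("chosen alpha:", enet.alpha_)   # very small: almost no regularization
print("max |enet coef - OLS coef|:",
      np.abs(enet.coef_ - ols.coef_).max())
```

Note that ElasticNetCV never literally returns ##\alpha=0##: its automatic grid runs down to a small fraction of the largest useful ##\alpha##, so "zero" here means the smallest candidate value, i.e. cross-validation found that regularization did not help.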
