
Applying Lasso and ridge regression

A common problem in machine learning is that an algorithm might work really well on the training set, but when applied to unseen data it makes a lot of mistakes. You can see how this is problematic, since often we are most interested in how a model generalizes to new data. Some algorithms (such as decision trees) are more susceptible to this phenomenon than others, but even linear regression can be affected.

This phenomenon is also known as overfitting, and we will talk about it extensively in Chapter 5, Using Decision Trees to Make a Medical Diagnosis, and Chapter 11, Selecting the Right Model with Hyperparameter Tuning.

A common technique for reducing overfitting is called regularization, which involves adding a penalty term to the cost function that depends only on the model's weights and is independent of all feature values. The two most commonly used regularizers are as follows:

  • L1 regularization: This adds a term to the cost function that is proportional to the sum of all absolute weight values. In other words, it is based on the L1 norm of the weight vector (also known as the rectilinear distance, snake distance, or Manhattan distance). Due to the grid layout of Manhattan's streets, the L1 norm is akin to measuring the distance a New York cab driver covers by driving from point A to B. The resulting algorithm is also known as Lasso regression.
  • L2 regularization: This adds a term to the cost function that is proportional to the sum of all squared weight values. In other words, it is based on the L2 norm of the weight vector (also known as the Euclidean distance). Since the L2 norm involves a squaring operation, it punishes strong outliers in the weight vector much harder than the L1 norm. The resulting algorithm is also known as ridge regression. (A short sketch after this list computes both penalty terms by hand.)
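
To make the two penalty terms concrete, here is a minimal sketch (using NumPy, with an example weight vector of my own choosing) that computes both terms by hand and checks them against np.linalg.norm:

import numpy as np

# A hypothetical weight vector, purely for illustration
w = np.array([0.5, -2.0, 3.0])

# L1 penalty term: sum of absolute weight values (the L1 norm of w)
l1_penalty = np.sum(np.abs(w))   # 0.5 + 2.0 + 3.0 = 5.5

# L2 penalty term: sum of squared weight values (the squared L2 norm of w)
l2_penalty = np.sum(w ** 2)      # 0.25 + 4.0 + 9.0 = 13.25

print(l1_penalty == np.linalg.norm(w, ord=1))          # True
print(np.isclose(l2_penalty, np.linalg.norm(w) ** 2))  # True

Note how the single large weight (3.0) contributes 9.0 to the L2 term but only 3.0 to the L1 term, which is exactly why L2 regularization punishes strong outliers in the weight vector so much harder.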

The procedure is exactly the same as the preceding one, but we replace the initialization command to load either a Lasso or a Ridge object. Specifically, we have to replace the following command:

In [6]: linreg = linear_model.LinearRegression()

For the Lasso regression algorithm, we would change the preceding line of code to the following:

In [6]: lassoreg = linear_model.Lasso()

For the ridge regression algorithm, we would change the preceding line of code to the following:

In [6]: ridgereg = linear_model.Ridge()
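
Both classes also accept an alpha parameter that controls the strength of the regularization term (it defaults to alpha=1.0 in scikit-learn); larger values shrink the weights more aggressively. For example, to apply a milder penalty in the Lasso case, we could instead write the following (the value 0.5 is an arbitrary choice for illustration):

In [6]: lassoreg = linear_model.Lasso(alpha=0.5)

Choosing a good value for alpha is itself a model selection problem of the kind we tackle in Chapter 11, Selecting the Right Model with Hyperparameter Tuning.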

I encourage you to test these two algorithms on the Boston dataset in place of conventional linear regression. How does the generalization error (In [12]) change? How does the prediction plot (In [14]) change? Do you see any improvements in performance?
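
If you want a self-contained starting point for this experiment, the following sketch fits all three regressors on the same train/test split and prints their test-set mean squared errors. Be aware that recent versions of scikit-learn (1.2 and later) have removed the Boston dataset, so the sketch substitutes the California housing data; the split ratio, random seed, and variable names are my own choices:

from sklearn import datasets, linear_model, metrics, model_selection

# The Boston dataset was removed in scikit-learn 1.2, so we substitute
# the California housing dataset (downloaded on first use)
housing = datasets.fetch_california_housing()
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    housing.data, housing.target, test_size=0.1, random_state=42)

# Fit plain linear regression, Lasso, and ridge regression on the same split
for name, reg in [('linear', linear_model.LinearRegression()),
                  ('lasso', linear_model.Lasso()),
                  ('ridge', linear_model.Ridge())]:
    reg.fit(X_train, y_train)
    y_pred = reg.predict(X_test)
    # Report the generalization error as the mean squared error on the test set
    print(name, metrics.mean_squared_error(y_test, y_pred))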
