- Machine Learning Quick Reference
- Rahul Kumar
- 2021-08-20 10:05:06
Ridge regression (L2)
To perform ridge regression, we need to modify the loss function: a shrinkage (penalty) component is added to the original least-squares loss:

L(\beta) = \sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2

Now, this modified loss function is minimized to obtain the coefficient estimates. Here, lambda is the tuning parameter that regularizes the loss function: it decides how heavily to penalize the flexibility of the model. The model's flexibility depends on its coefficients; if the coefficients grow, the flexibility grows with them, which isn't a good sign for our model. Likewise, as the coefficients shrink, the flexibility is restricted and the model starts to perform better. This shrinkage of each estimated parameter is what ridge regression provides. As lambda grows, that is, as λ → ∞, the penalty component dominates and the estimates shrink toward zero. Conversely, as λ → 0, the penalty component vanishes and the estimates approach those of ordinary least squares (OLS), the standard method for estimating the unknown parameters of a linear regression.
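To make the shrinkage behaviour concrete, here is a minimal NumPy sketch (not the book's code) of the closed-form ridge estimate β = (XᵀX + λI)⁻¹Xᵀy. The function and variable names are illustrative assumptions, and the intercept is omitted for brevity:

```python
# A minimal ridge regression sketch using the closed-form solution.
# ridge_fit, X, y, and lam are illustrative names, not from the book.
import numpy as np

def ridge_fit(X, y, lam):
    """Return ridge coefficients for design matrix X and targets y.

    lam is the tuning parameter lambda; lam = 0 recovers the OLS solution.
    """
    n_features = X.shape[1]
    # Solve (X^T X + lambda * I) beta = X^T y without forming an explicit inverse
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=50)

for lam in (0.0, 1.0, 100.0):
    # Larger lambda shrinks the estimates further toward zero
    print(lam, np.round(ridge_fit(X, y, lam), 3))
```

Running the loop shows the pattern described above: with lam = 0.0 the result matches OLS, while increasing lam pulls every coefficient toward zero.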