
  • Deep Learning with R for Beginners
  • Mark Hodnett Joshua F. Wiley Yuxi (Hayden) Liu Pablo Maldonado

Using regularization to overcome overfitting

In the previous chapter, we saw the diminishing returns from further training iterations on neural networks in terms of their predictive ability on holdout or test data (that is, data not used to train the model). This happens because complex models may memorize some of the noise in the training data rather than learning the general patterns. Such models then perform much worse when predicting new data. There are methods we can apply to make our model generalize, that is, fit the overall patterns rather than the noise. These methods are collectively called regularization, and they aim to reduce test error so that the model performs well on new data.

The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that have their basis in regression and carry over to deep learning. These are the L1 penalty, also known as lasso, and the L2 penalty, also known as ridge.
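To make the L1 and L2 penalties concrete before we apply them, here is a minimal sketch of how each one modifies the training loss. The weight values, data loss, and penalty strength `lam` below are illustrative numbers chosen for this example, not values from the book (whose code is in R; the arithmetic is the same in any language):

```python
import numpy as np

# Hypothetical weights for a single layer (illustrative values only)
weights = np.array([0.5, -1.2, 0.0, 3.0])
data_loss = 0.75   # assume this is the model's error on the training data
lam = 0.01         # regularization strength, a tunable hyperparameter

# L1 (lasso) penalty: lambda times the sum of absolute weights.
# It pushes small weights all the way to zero, giving sparse models.
l1_penalty = lam * np.sum(np.abs(weights))

# L2 (ridge) penalty: lambda times the sum of squared weights.
# It shrinks large weights toward zero without zeroing them out.
l2_penalty = lam * np.sum(weights ** 2)

# The regularized loss adds the penalty to the ordinary data loss,
# so the optimizer is discouraged from growing the weights.
l1_loss = data_loss + l1_penalty   # 0.75 + 0.047  = 0.797
l2_loss = data_loss + l2_penalty   # 0.75 + 0.1069 = 0.8569
```

Note how the L2 penalty is dominated by the single large weight (3.0 contributes 9.0 of the 10.69 squared sum), while the L1 penalty treats each unit of weight magnitude equally; this difference is why L2 shrinks large weights hardest and L1 tends to produce exact zeros.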
