
  • Deep Learning with R for Beginners
  • Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado
  • 159 words
  • 2021-06-24 14:30:42

Using regularization to overcome overfitting

In the previous chapter, we saw diminishing returns from further training iterations on neural networks in terms of their predictive ability on holdout or test data (that is, data not used to train the model). This is because complex models may memorize some of the noise in the data rather than learning the general patterns, and such models then perform much worse when predicting new data. There are methods we can apply to make a model generalize, that is, fit the overall patterns rather than the noise. These methods are collectively called regularization, and they aim to reduce the test error so that the model performs well on new data.
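To make this concrete, here is a minimal sketch (not from the book) of overfitting in R: a highly flexible model driving training error down while test error rises. The variable names, polynomial degrees, and simulated data are all hypothetical choices for illustration.

```r
# Hypothetical illustration of overfitting: a very flexible model fits
# the training noise and generalizes worse than a simpler one.
set.seed(42)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)  # noisy nonlinear signal
dat <- data.frame(x = x, y = y)
train <- 1:70
test  <- 71:100

simple  <- lm(y ~ poly(x, 3),  data = dat, subset = train)   # modest capacity
complex <- lm(y ~ poly(x, 20), data = dat, subset = train)   # memorizes noise

mse <- function(fit, idx) {
  mean((dat$y[idx] - predict(fit, newdata = dat[idx, ]))^2)
}

# The complex model has lower *training* error (it is a superset of the
# simple model), but typically a higher *test* error.
mse(simple, train); mse(complex, train)
mse(simple, test);  mse(complex, test)
```

Because the degree-20 model nests the degree-3 model, its training error can only be lower or equal; the gap between its training and test error is the signature of overfitting that regularization is designed to shrink.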

The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that originated in penalized regression and are also applied to deep learning models: the L1 penalty, also known as lasso, and the L2 penalty, also known as ridge.
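As a quick sketch of how the L1 and L2 penalties work (an illustrative example, not code from the book; `weights`, `lambda`, and `base_loss` are hypothetical names), each penalty adds a term to the training loss that grows with the size of the model's weights, discouraging overly large coefficients:

```r
# Illustrative sketch: augmenting a loss with L1 (lasso) and L2 (ridge)
# penalties. All names and values here are hypothetical.
weights   <- c(0.8, -1.5, 0.0, 2.1)  # model weights
base_loss <- 0.25                    # unregularized training loss (e.g. MSE)
lambda    <- 0.01                    # regularization strength

l1_penalty <- lambda * sum(abs(weights))  # L1: sum of absolute weights
l2_penalty <- lambda * sum(weights^2)     # L2: sum of squared weights

loss_l1 <- base_loss + l1_penalty
loss_l2 <- base_loss + l2_penalty
```

The absolute-value form of the L1 penalty tends to drive some weights exactly to zero (a sparse model), while the squared form of the L2 penalty shrinks all weights toward zero without eliminating them; both are controlled by the strength parameter `lambda`.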
