
Using regularization to overcome overfitting

In the previous chapter, we saw that further training iterations yield diminishing returns in a neural network's predictive ability on holdout or test data (that is, data not used to train the model). This is because complex models may memorize some of the noise in the training data rather than learning the general patterns. Such models then perform much worse when predicting on new data. There are methods we can apply to make our model generalize, that is, fit the overall patterns rather than the noise. These methods are collectively called regularization, and they aim to reduce the test error so that the model performs well on new data.

The most common regularization technique used in deep learning is dropout. However, we will also discuss two other regularization techniques that have their basis in regression and carry over to deep learning: the L1 penalty, also known as Lasso, and the L2 penalty, also known as Ridge.
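To make these ideas concrete, here is a minimal NumPy sketch of the three techniques. The weight values, the regularization strength `lam`, and the `keep_prob` for dropout are illustrative assumptions, not values from the book: the L1 and L2 penalties are terms added to the training loss, while dropout randomly zeroes activations during training (here in its "inverted" form, which rescales the surviving activations by the keep probability).

```python
import numpy as np

# Hypothetical weight vector, for illustration only
w = np.array([0.5, -1.5, 0.0, 2.0])

lam = 0.1  # assumed regularization strength (lambda)

# L1 (Lasso) penalty: lambda times the sum of absolute weights
l1_penalty = lam * np.sum(np.abs(w))   # 0.1 * (0.5 + 1.5 + 0.0 + 2.0) = 0.4

# L2 (Ridge) penalty: lambda times the sum of squared weights
l2_penalty = lam * np.sum(w ** 2)      # 0.1 * (0.25 + 2.25 + 0.0 + 4.0) = 0.65

# Inverted dropout on a hypothetical activation vector:
# each unit is kept with probability keep_prob, and kept units
# are scaled by 1 / keep_prob so the expected activation is unchanged
rng = np.random.default_rng(0)
activations = np.array([0.2, 0.9, 0.4, 0.7])
keep_prob = 0.5
mask = rng.random(activations.shape) < keep_prob
dropped = activations * mask / keep_prob
```

In practice, the L1 or L2 penalty is added to the loss before computing gradients, and dropout is applied only during training, not at prediction time.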
