
Bias versus variance trade-off

Every model has both bias and variance error components in addition to white noise. Bias and variance trade off against each other: as you reduce one component, the other tends to increase. The true art lies in creating a good fit by balancing the two; the ideal model has both low bias and low variance.
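In symbols, this is the standard squared-error decomposition of a model's expected prediction error into the three components mentioned above (bias, variance, and irreducible noise):

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathrm{Bias}[\hat{f}(x)]\big)^2}_{\text{bias}}
  + \underbrace{\mathrm{Var}[\hat{f}(x)]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

Only the first two terms depend on the model; the noise term sets a floor that no model can go below.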

Errors from the bias component come from erroneous assumptions in the underlying learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs, which leads to underfitting.

On the other hand, errors from the variance component come from the model's sensitivity to the training data: even a small change in the training set can change the fit substantially. High variance leads to overfitting.


An example of a high-bias model is logistic or linear regression, in which the fit is merely a straight line (or hyperplane); it carries a high error component whenever a linear model cannot approximate the underlying data well.

An example of a high-variance model is a decision tree, in which the model can fit an overly wiggly curve, so that even a small change in the training data causes a drastic change in the fitted curve.
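A minimal sketch of the two behaviors, using scikit-learn on a noisy sine wave (the data and model settings here are illustrative assumptions, not from the text): the straight line underfits the non-linear signal, while the unpruned tree memorizes the training noise and loses accuracy on held-out points.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Noisy sine wave: the true relationship is non-linear
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.3, 200)

# Simple interleaved train/test split
X_train, X_test = X[::2], X[1::2]
y_train, y_test = y[::2], y[1::2]

# High-bias model: a straight line cannot capture the sine shape
linear = LinearRegression().fit(X_train, y_train)

# High-variance model: an unpruned tree memorizes the training noise
tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

print("linear train R^2:", linear.score(X_train, y_train))  # low: underfit
print("tree   train R^2:", tree.score(X_train, y_train))    # near 1.0: memorized
print("tree   test  R^2:", tree.score(X_test, y_test))      # drops: overfit
```

The gap between the tree's training and test scores is the signature of high variance; the linear model's uniformly mediocre score is the signature of high bias.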

At the moment, state-of-the-art methods take high-variance models such as decision trees and build ensembles on top of them to reduce the errors caused by high variance, while at the same time not compromising much on the errors due to the bias component. The best example of this category is random forest, in which many decision trees are grown independently and ensembled in order to come up with the best fit; we will cover this in upcoming chapters.
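The variance-reduction effect of ensembling can be sketched as follows (again with an assumed noisy sine dataset): averaging the predictions of many independently grown trees cancels much of each tree's individual wiggle, so the forest's test error typically falls below the single tree's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Same kind of noisy sine data as before
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.3, 200)
X_train, X_test = X[::2], X[1::2]
y_train, y_test = y[::2], y[1::2]

# One unpruned tree: high variance
tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# A forest: 100 trees grown on bootstrap samples, predictions averaged
forest = RandomForestRegressor(n_estimators=100,
                               random_state=0).fit(X_train, y_train)

tree_mse = mean_squared_error(y_test, tree.predict(X_test))
forest_mse = mean_squared_error(y_test, forest.predict(X_test))
print(f"single tree test MSE:   {tree_mse:.3f}")
print(f"random forest test MSE: {forest_mse:.3f}")
```

Each individual tree in the forest is still a high-variance learner; it is the averaging over many decorrelated trees that brings the combined variance down without a large bias penalty.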
