
Compensating factors in machine learning models

Compensating factors in machine learning models, used to stand in for statistical diagnostics, can be explained with the example of a beam resting on two supports. If one of the supports is removed, the beam eventually moves out of balance and falls. A similar analogy is applied here to compare the statistical modeling and machine learning methodologies.

In the statistical modeling methodology, a two-point validation is performed on the training data: the overall model accuracy is checked, and a significance test is run on each individual parameter. Because linear and logistic regression have low variance by virtue of the shape of the model itself, there is little chance of the model performing much worse on unseen data. Hence, during deployment, these models do not produce results that deviate too much.
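As a minimal sketch of this two-point validation, the following Python snippet fits a linear regression with statsmodels and checks both the overall fit (R-squared) and the significance of each individual parameter (p-values). The synthetic data and variable names are illustrative assumptions, not taken from the text:

    import numpy as np
    import statsmodels.api as sm

    # Illustrative synthetic data (assumption, not the book's dataset)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200)

    # Fit an ordinary least squares (linear regression) model on the training data
    X_with_const = sm.add_constant(X)
    model = sm.OLS(y, X_with_const).fit()

    # Point 1: overall model accuracy (here, R-squared on the training data)
    print("R-squared:", model.rsquared)

    # Point 2: significance test on each individual parameter (p-values)
    print("Parameter p-values:")
    print(model.pvalues)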

In the machine learning space, however, models are highly flexible and can range from simple to highly complex. On top of that, statistical diagnostics on individual variables are not performed in machine learning. Hence, it is important to ensure that models are robust against overfitting, so that they remain usable during the implementation phase and behave correctly on unseen data.

As mentioned previously, in machine learning the data is split into three parts (train data - 50 percent, validation data - 25 percent, test data - 25 percent) rather than the two parts used in statistical methodology. Machine learning models should be developed on the training data, and their hyperparameters should be tuned on the validation data to achieve the equivalent of the two-point validation; this way, the robustness of the models is ensured without diagnostics being performed at the individual variable level.
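A minimal sketch of this workflow is shown below, assuming scikit-learn and a decision tree chosen purely for illustration (the data, model, and depth grid are assumptions, not from the text): the data is split 50/25/25, the tree depth is tuned on the validation set, and the untouched test set estimates performance on unseen data.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_squared_error

    # Illustrative synthetic data (assumption, not the book's dataset)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=1000)

    # 50/25/25 split: first carve out 50 percent for training,
    # then split the remainder equally into validation and test sets
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=0.5, random_state=42)
    X_valid, X_test, y_valid, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=42)

    # Tune a hyperparameter (tree depth) using the validation data only
    best_depth, best_mse = None, np.inf
    for depth in [2, 4, 6, 8, 10]:
        model = DecisionTreeRegressor(max_depth=depth, random_state=42)
        model.fit(X_train, y_train)
        mse = mean_squared_error(y_valid, model.predict(X_valid))
        if mse < best_mse:
            best_depth, best_mse = depth, mse

    # The untouched test data gives an unbiased estimate of performance on unseen data
    final_model = DecisionTreeRegressor(max_depth=best_depth, random_state=42)
    final_model.fit(X_train, y_train)
    print("Best depth:", best_depth)
    print("Test MSE:", mean_squared_error(y_test, final_model.predict(X_test)))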

Before diving deep into the comparison of both streams, we will start by understanding the fundamentals of each model individually. Let us start with linear regression! This model might sound trivial; however, knowing the working principles of linear regression will create a foundation for more advanced statistical and machine learning models. Below are the assumptions of linear regression.
