
Getting ready

In the previous section, we assigned the same weight to each class; that is, the categorical cross-entropy loss is the same whenever the magnitude of the difference between the actual and predicted values is the same, irrespective of whether the prediction is for a default or a non-default.

To understand the scenario further, let's consider the following example:

In the preceding scenario, the cross-entropy loss value is just the same, irrespective of the actual value of default.
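This symmetry can be verified directly. The following sketch (the probability values are illustrative, not from the dataset) computes the binary cross-entropy loss for an actual default and an actual non-default that are mispredicted by the same margin:

```python
import numpy as np

# Binary cross-entropy for a single prediction:
# loss = -(y * log(p) + (1 - y) * log(1 - p))
def binary_cross_entropy(y, p):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# An actual default (y=1) predicted with probability 0.2, and an
# actual non-default (y=0) predicted with probability 0.8 -- both
# predictions are off by the same margin of 0.8.
loss_default = binary_cross_entropy(1, 0.2)
loss_non_default = binary_cross_entropy(0, 0.8)

# Both losses are identical (~1.609), irrespective of the class.
print(loss_default, loss_non_default)
```

The loss function has no notion that missing a defaulter is costlier than flagging a non-defaulter; both errors are penalized equally.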

However, we know that our objective is to capture as many actual defaulters as possible in the top 10% of predictions when ranked by probability.

Hence, let's go ahead and assign a higher loss weight (a weight of 100) when the actual value of default is 1 and a lower weight (a weight of 1) when the actual value of default is 0.

The previous scenario now changes as follows:


Notice that the cross-entropy loss is now much higher for incorrect predictions when the actual value of default is 1 than for incorrect predictions when the actual value of default is 0.
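The weighted version of the same calculation can be sketched as follows (again with illustrative probability values), with a weight of 100 applied when the actual value is 1 and a weight of 1 when it is 0:

```python
import numpy as np

# Weighted binary cross-entropy: scale the loss by w1 when the actual
# value is 1 and by w0 when the actual value is 0.
def weighted_bce(y, p, w1=100.0, w0=1.0):
    return -(w1 * y * np.log(p) + w0 * (1 - y) * np.log(1 - p))

# Actual default (y=1) mispredicted with probability 0.2:
loss_missed_default = weighted_bce(1, 0.2)
# Actual non-default (y=0) mispredicted with probability 0.2:
loss_false_alarm = weighted_bce(0, 0.2)

# Missing a defaulter is now penalized far more heavily than
# wrongly flagging a non-defaulter.
print(loss_missed_default, loss_false_alarm)
```

With these weights, the optimizer is pushed toward ranking actual defaulters higher, which serves our objective of capturing them in the top 10% of predictions.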

Now that we have understood the intuition behind assigning weights to classes, let's go ahead and assign weights to the output classes in the credit default dataset.

All the steps performed to build the dataset and model remain the same as in the previous section, except for the model-fitting process.
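In Keras, this change is confined to the `class_weight` argument of `model.fit`. A minimal sketch of the fitting step, using a tiny synthetic stand-in for the dataset and a hypothetical model (the real `X_train`, `y_train`, and architecture come from the previous section):

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for the credit default data: 5 features,
# roughly 10% positive (default) labels. Hypothetical, for
# illustration only.
rng = np.random.default_rng(42)
X_train = rng.random((100, 5))
y_train = (rng.random(100) < 0.1).astype(int)

model = keras.Sequential([
    keras.layers.Input(shape=(5,)),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# The only change from the previous section: class_weight scales the
# loss by 100 for actual defaults (class 1) and by 1 for non-defaults.
history = model.fit(X_train, y_train, epochs=2, verbose=0,
                    class_weight={0: 1, 1: 100})
```

Under the hood, Keras multiplies each sample's loss by the weight of its class, which is exactly the weighted cross-entropy described above.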
