
Measuring the performance of our model

Now that our MLP has been trained, we can start to evaluate how good it is. To do so, I'll make predictions on our train, val, and test datasets. The code is as follows:

from sklearn.metrics import mean_absolute_error

print("Model Train MAE: " + str(mean_absolute_error(data["train_y"], model.predict(data["train_X"]))))
print("Model Val MAE: " + str(mean_absolute_error(data["val_y"], model.predict(data["val_X"]))))
print("Model Test MAE: " + str(mean_absolute_error(data["test_y"], model.predict(data["test_X"]))))

For our MLP, this is how well we did:

Model Train MAE: 0.190074701809
Model Val MAE: 0.213255747475
Model Test MAE: 0.199885450841

Keep in mind that our data has been scaled to zero mean and unit variance. The train MAE is 0.19, and our val MAE is 0.21. These two errors are pretty close to each other, so overfitting isn't something I'd be too concerned about. Because I expected some amount of overfitting that I don't see (overfitting is usually the bigger problem), I hypothesize that this model might have too much bias. Said another way, we might not be able to fit the data closely enough. When this occurs, we need to add more layers, more neurons, or both to our model. We need to go deeper. Let's do that next.
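Because the target has been standardized, these MAE values are in units of standard deviations of the target. As a quick sketch of how to interpret them (the `sigma` value below is a hypothetical placeholder, not a statistic from the actual dataset), converting a scaled MAE back to the target's original units is a single multiplication:

```python
# Since the target was scaled to zero mean and unit variance,
# a scaled MAE converts back to original units by multiplying by
# the standard deviation of the raw training target.
sigma = 125000.0          # hypothetical: std of the raw target variable
scaled_val_mae = 0.2133   # the validation MAE reported above
val_mae_original_units = scaled_val_mae * sigma
print(val_mae_original_units)  # average absolute error in raw units
```

This is also why comparing the train and val MAEs directly is meaningful here: both are expressed on the same standardized scale.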

We can attempt to reduce network bias by adding parameters to the network, in the form of more neurons. While you might be tempted to start tuning your optimizer, it's usually better to find a network architecture you're comfortable with first. 
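To see concretely what "adding parameters in the form of more neurons" means, here is a small sketch that counts the trainable parameters a fully connected layer contributes (weights plus biases). The layer widths are hypothetical examples, not the architecture used in this chapter:

```python
def dense_params(n_inputs, n_units):
    # A fully connected layer has one weight per input-unit pair,
    # plus one bias per unit.
    return n_inputs * n_units + n_units

# Hypothetical shapes: 10 input features, 1 regression output.
shallow = dense_params(10, 32) + dense_params(32, 1)
deeper = (dense_params(10, 64) + dense_params(64, 64)
          + dense_params(64, 32) + dense_params(32, 1))
print(shallow, deeper)  # the deeper, wider network has many more parameters
```

Widening layers and stacking more of them both increase this count, giving the network more capacity to fit the data closely and, with luck, reducing bias.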