- Deep Learning By Example
- Ahmed Menshawy
Apparent (training set) error
This is the first type of error, and it is not the one you should focus on minimizing. Getting a small value for this type of error does not mean that your model will work well on unseen data (that is, generalize). To better understand this type of error, consider a trivial classroom analogy. The purpose of solving problems in the classroom is not to be able to solve the same problems again in the exam, but to be able to solve other problems that won't necessarily be similar to the ones you practiced in the classroom. The exam problems could come from the same family as the classroom problems, but they are not necessarily identical.
Apparent error measures how well the trained model performs on the training set, for which we already know the true outcome/output. If you manage to get zero error over the training set, that is a good indicator that your model will (mostly) not work well on unseen data, that is, it won't generalize. After all, data science is about using a training set as base knowledge so that the learning algorithm works well on future unseen data.
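A quick sketch of this point (the toy data, the NumPy-based polynomial fit, and all variable names here are my own illustration, not from the book): a degree-9 polynomial has exactly enough coefficients to pass through 10 noisy training points, so its apparent error is essentially zero, yet its error on nearby unseen points is far larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a noisy linear relationship y = 2x + noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2 * x_train + rng.normal(scale=0.2, size=10)
x_test = np.linspace(0.05, 0.95, 10)          # unseen points from the same distribution
y_test = 2 * x_test + rng.normal(scale=0.2, size=10)

def design(x, degree):
    """Polynomial (Vandermonde) design matrix up to the given degree."""
    return np.vander(x, degree + 1)

# Degree 9 -> 10 coefficients for 10 points: the fit interpolates the
# training data exactly, memorizing the noise along with the signal.
coef, *_ = np.linalg.lstsq(design(x_train, 9), y_train, rcond=None)

train_mse = np.mean((design(x_train, 9) @ coef - y_train) ** 2)
test_mse = np.mean((design(x_test, 9) @ coef - y_test) ** 2)

print(train_mse, test_mse)  # apparent error ~ 0; error on unseen points much larger
```

The apparent error is zero up to floating-point round-off, while the test error stays at the scale of the noise or worse, which is exactly the classroom-versus-exam gap described above.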
In Figure 3, the red curve represents the apparent error. Whenever you increase the model's ability to memorize things (for example, by increasing the model's complexity through adding explanatory features), you will find that this apparent error approaches zero. It can be shown that if you have as many features as observations/samples, then the apparent error will be exactly zero.
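The claim that having as many features as samples drives the apparent error to zero can be sketched with NumPy (the random design matrix and labels below are my own illustration): a square, full-rank feature matrix lets a linear model reproduce any target vector exactly, even pure noise.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 50  # number of samples == number of features

# A random Gaussian matrix of shape (n, n) is (almost surely) invertible,
# so the linear system X @ w = y has an exact solution for *any* labels.
X = rng.normal(size=(n, n))
y = rng.normal(size=n)  # arbitrary labels -- here, pure noise

w = np.linalg.solve(X, y)                    # exact fit of the training set
apparent_error = np.mean((X @ w - y) ** 2)

print(apparent_error)  # ~0 up to floating-point round-off
```

Note that nothing about this fit reflects real structure in the data: the model has simply memorized the labels, which is why a zero apparent error on its own tells you nothing about generalization.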