- Deep Learning with Theano
- Christopher Bourez
Classification loss function
The loss function is an objective function to minimize during training to get the best model. Many different loss functions exist.
In a classification problem, where the target is to predict the correct class among k classes, cross-entropy is commonly used as it measures the difference between the real probability distribution, q, and the predicted one, p, for each class:

$$ \mathcal{L}(q, p) = - \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{k} q_{i,j} \log p_{i,j} $$

Here, i is the index of the sample in the dataset, n is the number of samples in the dataset, and k is the number of classes.
While the real probability $q_{i,j}$ of each class is unknown, it can simply be approximated in practice by the empirical distribution, that is, by drawing the samples out of the dataset in the dataset order. Under the empirical distribution, $q_{i,j}$ is 1 for the correct class $c_i$ of example i and 0 otherwise, so the inner sum collapses. In the same way, the cross-entropy of any predicted probability, p, can be approximated by the empirical cross-entropy:

$$ \hat{\mathcal{L}}(p) = - \frac{1}{n} \sum_{i=1}^{n} \log p_{i, c_i} $$

Here, $p_{i, c_i}$ is the probability estimated by the model for the correct class $c_i$ of example i.
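As a minimal numeric illustration of the empirical cross-entropy formula (a NumPy sketch rather than Theano code; the probability values are invented for the example), the loss averages the negative log-probability that the model assigns to each sample's correct class:

```python
import numpy as np

# Hypothetical predicted probabilities for n=3 samples over k=3 classes
# (each row sums to 1); values invented for illustration.
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])

# Correct class c_i of each example
c = np.array([0, 1, 2])

# Empirical cross-entropy: -(1/n) * sum_i log p[i, c_i]
loss = -np.mean(np.log(p[np.arange(len(c)), c]))
print(round(loss, 4))
```

The fancy-indexing expression `p[np.arange(len(c)), c]` picks out the probability of the correct class for each row, which is exactly the $p_{i, c_i}$ term of the formula.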
Accuracy and cross-entropy both evolve in the same direction but measure different things. Accuracy measures how often the predicted class is correct, while cross-entropy measures the distance between the probability distributions. A decrease in cross-entropy means that the probability of predicting the correct class gets better, but the accuracy may still remain constant or even drop.
While accuracy is discrete and not differentiable, the cross-entropy loss is a differentiable function that can be easily used for training a model.
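To see the two metrics diverge, consider a hypothetical two-class example (values invented for illustration): the probability assigned to the correct class rises from 0.55 to 0.9, so the cross-entropy drops, yet the argmax, and hence the accuracy, is unchanged:

```python
import numpy as np

# Two successive predictions for the same sample, whose correct class is 0.
# Values are invented for illustration.
before = np.array([0.55, 0.45])
after = np.array([0.90, 0.10])

for p in (before, after):
    pred_class = int(np.argmax(p))  # accuracy depends only on this
    loss = -np.log(p[0])            # cross-entropy term for correct class 0
    print(pred_class, round(loss, 3))
```

Both predictions yield class 0, so accuracy is identical, while the loss falls from about 0.598 to about 0.105; this is why the smooth, differentiable loss, not the discrete accuracy, drives gradient-based training.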