- Deep Learning with Keras
- Antonio Gulli, Sujit Pal
Increasing the size of batch computation
Gradient descent tries to minimize the cost function over all the examples in the training set and, at the same time, over all the features provided in the input. Stochastic gradient descent is a much less expensive variant that considers only BATCH_SIZE examples at a time. So, let's see how the behavior changes as we vary this parameter. As you can see, the optimal accuracy value is reached for BATCH_SIZE=128:
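The idea can be sketched outside Keras as well. The following is a minimal, hypothetical mini-batch SGD on a synthetic linear-regression problem (not the book's MNIST experiment): each update uses the gradient computed on only `batch_size` examples rather than the full training set, and `batch_size` is the knob being varied.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=128, lr=0.1, epochs=50, seed=0):
    """Mini-batch SGD for linear least squares (illustrative sketch).

    Each step uses the gradient of the MSE on a single mini-batch of
    `batch_size` examples, not on the whole training set.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)           # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # gradient of mean squared error on this mini-batch only
            grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

# synthetic data with known true weights [3, -2]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = X @ np.array([3.0, -2.0])

for bs in (32, 128, 512):
    w = minibatch_sgd(X, y, batch_size=bs)
    print(bs, np.round(w, 3))
```

Smaller batches give noisier but cheaper updates; larger batches give smoother gradients at higher per-step cost, which is the trade-off the BATCH_SIZE experiment probes.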
