- Reinforcement Learning with TensorFlow
- Sayon Dutta
The vanishing gradient problem
The vanishing gradient problem arises when training deep artificial neural networks: neurons in the early layers fail to learn because the gradients used to update their weights shrink toward zero. It is caused by the depth of the network combined with activation functions, such as the sigmoid, whose derivatives take on small values.
Try the following steps:
- Create a neural network with one hidden layer
- Add more hidden layers, one by one
We observe the gradient with respect to each node and find that the gradient values become progressively smaller as we move from the later layers toward the early layers. The condition worsens as more layers are added, which shows that neurons in the early layers learn more slowly than neurons in the later layers. This condition is called the vanishing gradient problem.
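The experiment above can be sketched numerically. The following is a minimal illustration (not the book's code) that backpropagates through a stack of randomly initialized sigmoid layers and prints the gradient norm seen at each layer. The layer width, weight scale, and depth are arbitrary assumptions chosen for demonstration; the key point is that each backward step multiplies the gradient by the sigmoid's derivative (at most 0.25), so the norms shrink as we move toward the early layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_layers = 10   # network depth (assumed for illustration)
width = 32      # neurons per layer (assumed for illustration)

# Random input and weights; the 0.5 scale is an arbitrary choice
x = rng.standard_normal(width)
weights = [rng.standard_normal((width, width)) * 0.5 for _ in range(n_layers)]

# Forward pass, caching each layer's activation derivative sigmoid'(z)
acts, derivs = x, []
for W in weights:
    z = W @ acts
    acts = sigmoid(z)
    derivs.append(acts * (1.0 - acts))  # derivative of sigmoid at z

# Backward pass: propagate a unit gradient from the output toward the input,
# recording the gradient's norm after passing through each layer
grad = np.ones(width)
grad_norms = []
for W, d in zip(reversed(weights), reversed(derivs)):
    grad = W.T @ (grad * d)  # chain rule through one sigmoid layer
    grad_norms.append(np.linalg.norm(grad))

grad_norms = grad_norms[::-1]  # index 0 = earliest (first) layer
for i, g in enumerate(grad_norms):
    print(f"layer {i}: gradient norm = {g:.6f}")
```

Running this, the gradient norm at the earliest layer comes out far smaller than at the later layers, which is exactly the slow learning of early-layer neurons described above.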