Deep Learning Quick Reference
Mike Bernico
What happens if we use too many neurons?
If we make our network architecture too complicated, two things will happen:
- We're likely to develop a high variance model
- The model will train slower than a less complicated model
If we add many layers, the gradients will get smaller and smaller as they propagate backward through the network, until the first few layers barely train at all; this is known as the vanishing gradient problem. We're nowhere near that yet, but we will talk about it later.
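Although we'll return to this later, a quick sketch can make the effect concrete. The snippet below is a minimal illustration, assuming TensorFlow 2.x/Keras; the layer count, widths, and dummy data are choices made for the example, not values from the book. It stacks an exaggerated number of sigmoid layers and prints the mean absolute gradient for each layer's weights, which typically shrink by orders of magnitude as you move back toward the input.

```python
# A minimal sketch, assuming TensorFlow 2.x / Keras. The depth (20 layers),
# widths, and dummy data are illustrative, not values from the book.
import tensorflow as tf

tf.random.set_seed(0)

# An intentionally exaggerated stack of sigmoid layers.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(32, activation="sigmoid") for _ in range(20)]
    + [tf.keras.layers.Dense(1)]
)

x = tf.random.normal((64, 32))  # dummy inputs
y = tf.random.normal((64, 1))   # dummy targets

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)

# Mean absolute gradient of each layer's weight matrix. Moving from the
# output back toward the input, the values typically shrink by orders
# of magnitude -- the vanishing gradient problem in miniature.
for var, grad in zip(model.trainable_variables, grads):
    if "kernel" in var.name:
        print(f"{var.name}: {tf.reduce_mean(tf.abs(grad)).numpy():.2e}")
```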
In (almost) the words of rap legend Christopher Wallace, aka Notorious B.I.G., the more neurons we come across, the more problems we see. With that said, the variance can be managed with dropout, regularization, and early stopping, and advances in GPU computing make deeper networks possible.
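All three of those tools can be wired up in a few lines of Keras. The sketch below is a minimal illustration; the layer sizes, dropout rate, L2 strength, and random training data are assumptions made for the example, not values from the book.

```python
# A minimal sketch, assuming TensorFlow 2.x / Keras. Layer sizes, the dropout
# rate, the L2 strength, and the random data are all placeholders.
import numpy as np
import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)  # penalize large weights

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dropout(0.5),  # randomly zero half the activations
    tf.keras.layers.Dense(128, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Stop once validation loss hasn't improved for 5 epochs, and roll the
# model back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# Dummy data so the sketch runs end to end.
X = np.random.rand(500, 20).astype("float32")
y = np.random.rand(500, 1).astype("float32")

model.fit(X, y, validation_split=0.2, epochs=200,
          callbacks=[early_stop], verbose=0)
```

Dropout and the L2 penalty rein in the extra variance that the additional neurons introduce, while early stopping keeps us from training past the point where validation performance peaks.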
If I had to pick between a network with too many neurons or too few, and I only got to try one experiment, I'd prefer to err on the side of slightly too many.