- Reinforcement Learning with TensorFlow
- Sayon Dutta
Limitations of deep learning
Deep neural networks are black boxes of weights and biases, trained over a large amount of data to find hidden patterns through inner representations. Interpreting those representations directly would be impossible for humans, and even if it were possible, scalability would be an issue: every neuron probably has a different weight, and thus a different gradient.
Training happens during backpropagation, so the direction of training is always from the later layers (the output, or right side) to the early layers (the input, or left side). As a result, the later layers learn very well compared to the early layers, and the deeper the network gets, the worse this condition becomes. This gives rise to two problems associated with deep learning, illustrated in the sketch after this list:
- The vanishing gradient problem
- The exploding gradient problem
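Below is a minimal sketch, assuming TensorFlow 2.x, of how both problems can be observed in practice by inspecting per-layer gradient norms. The depth, width, activations, and initializer scales are illustrative choices, not values from the book.

```python
import tensorflow as tf

def kernel_gradient_norms(initializer, activation, depth=10, width=64):
    """Build a deep MLP and return the gradient norm of each kernel,
    ordered from the earliest (input-side) layer to the latest."""
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(width, activation=activation,
                               kernel_initializer=initializer)
         for _ in range(depth)]
        + [tf.keras.layers.Dense(1)])
    x = tf.random.normal((32, width))   # dummy mini-batch of inputs
    y = tf.random.normal((32, 1))       # dummy regression targets
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    # Keep one norm per weight matrix; skip the bias vectors.
    return [float(tf.norm(g))
            for g, v in zip(grads, model.trainable_variables)
            if "kernel" in v.name]

# Sigmoid's derivative is at most 0.25, so with small initial weights
# the gradient shrinks layer by layer on its way back to the input.
vanishing = kernel_gradient_norms(
    tf.keras.initializers.RandomNormal(stddev=0.1), "sigmoid")

# Large initial weights have the opposite effect: each step of the
# backward pass multiplies the gradient by a factor greater than one.
exploding = kernel_gradient_norms(
    tf.keras.initializers.RandomNormal(stddev=2.0), "relu")

print("vanishing, early -> late:", [f"{n:.2e}" for n in vanishing])
print("exploding, early -> late:", [f"{n:.2e}" for n in exploding])
```

Running this typically shows gradient norms decaying by orders of magnitude toward the input in the sigmoid case and growing toward the input in the large-initialization case, which is exactly the asymmetry between early and late layers described above.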