Speeding up the training process using batch normalization
In the previous section on scaling the dataset, we learned that optimization is slow when the input data is not scaled (that is, when it does not lie between zero and one).
The hidden layer value could be high in the following scenarios:
- Input data values are high
- Weight values are high
- The product of the weights and inputs is high
Any of these scenarios can result in a large output value on the hidden layer.
Note that the hidden layer serves as the input to the output layer. Hence, the phenomenon of high input values resulting in slow optimization holds true when hidden layer values are large as well.
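To see why large hidden values stall learning, consider a sigmoid activation: for large pre-activations the sigmoid saturates and its gradient is nearly zero, so weight updates vanish. The following is a minimal NumPy sketch of this effect (our illustration, not code from this recipe); the specific input and weight values are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x_small = np.array([0.5])    # scaled input
x_large = np.array([500.0])  # unscaled input
w = np.array([0.1])          # same weight in both cases

for x in (x_small, x_large):
    z = w * x            # hidden layer pre-activation
    a = sigmoid(z)
    grad = a * (1 - a)   # derivative of sigmoid w.r.t. z
    print(f"pre-activation={z[0]:.2f}, activation={a[0]:.4f}, gradient={grad[0]:.6f}")
# The unscaled input drives the gradient to ~0, so weight updates stall.
```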
Batch normalization comes to the rescue in this scenario. We have already learned that, when input values are high, we scale them to reduce their magnitude. We have also learned that scaling can be performed in a different way: by subtracting the mean of the input and dividing by its standard deviation. Batch normalization performs this form of scaling.
Typically, all values are scaled using the following formulas, computed over each mini-batch $B = \{x_1, \ldots, x_m\}$:

$$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu_B\right)^2$$

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta$$
Notice that γ and β are learned during training, along with the original parameters of the network.
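The following NumPy sketch (ours, not the book's code) applies the preceding formulas to one mini-batch; the epsilon value and the initial γ = 1, β = 0 are illustrative assumptions:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-3):
    mu = x.mean(axis=0)                    # batch mean per feature
    var = x.var(axis=0)                    # batch variance per feature
    x_hat = (x - mu) / np.sqrt(var + eps)  # ~zero mean, unit variance
    return gamma * x_hat + beta            # learned scale (gamma) and shift (beta)

x = np.random.uniform(0, 100, size=(32, 4))  # unscaled batch: 32 samples, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 and ~1 per feature
```

In Keras, this is available as a built-in layer. A minimal sketch of inserting it after a hidden layer follows; the layer sizes and the 784-dimensional input (for example, flattened MNIST images) are our assumptions, not necessarily the architecture used in this recipe:

```python
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(BatchNormalization())  # normalizes this layer's outputs over each batch
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Because γ and β are trainable, the network can undo the normalization wherever that helps, so the layer never restricts what the network can represent.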