Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact of training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we looked at the basics of how a neural network functions. We also learned that various hyperparameters affect the accuracy of a neural network. In this chapter, we will look in detail at the role each of these hyperparameters plays within a neural network.
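Before diving into the recipes, it helps to see where these hyperparameters actually appear in a training loop. The following is a minimal NumPy sketch (not the book's Keras code) of a one-hidden-layer feedforward network trained on a toy regression task; the values of `hidden_units`, `learning_rate`, `batch_size`, and `epochs` are illustrative assumptions, and each corresponds to a hyperparameter explored in the recipes above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 3x + 1 plus a little noise
X = rng.uniform(-1, 1, size=(256, 1))
y = 3 * X + 1 + 0.05 * rng.normal(size=(256, 1))

hidden_units = 8      # network width (hyperparameter)
learning_rate = 0.1   # gradient-descent step size (hyperparameter)
batch_size = 32       # mini-batch size (hyperparameter)
epochs = 200          # passes over the training data (hyperparameter)

# Parameters of a 1 -> hidden_units -> 1 network with a ReLU hidden layer
W1 = rng.normal(scale=0.5, size=(1, hidden_units))
b1 = np.zeros(hidden_units)
W2 = rng.normal(scale=0.5, size=(hidden_units, 1))
b2 = np.zeros(1)

for epoch in range(epochs):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]

        # Forward pass
        h = np.maximum(0, xb @ W1 + b1)   # ReLU activation
        pred = h @ W2 + b2
        err = pred - yb                   # error term for mean squared error

        # Backward pass: gradients of the MSE loss
        n = len(xb)
        gW2 = h.T @ err / n
        gb2 = err.mean(axis=0)
        gh = err @ W2.T
        gh[h <= 0] = 0                    # ReLU derivative
        gW1 = xb.T @ gh / n
        gb1 = gh.mean(axis=0)

        # Vanilla SGD update: the learning rate scales every step
        W2 -= learning_rate * gW2
        b2 -= learning_rate * gb2
        W1 -= learning_rate * gW1
        b1 -= learning_rate * gb1

mse = np.mean((np.maximum(0, X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

Changing any one of the four hyperparameters (for example, a much larger learning rate or a batch size of 1) visibly changes how quickly and how well this loop converges, which is exactly the kind of experiment the recipes in this chapter carry out with Keras.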

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb.