
Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact of training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we looked at the basics of how a neural network functions. We also learned that various hyperparameters affect a network's accuracy. In this chapter, we will get into the details of how the various hyperparameters of a neural network work.
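Before diving into the recipes, it helps to see where these hyperparameters live. The following is a minimal NumPy sketch (not code from this book's repository) of a single-hidden-layer feedforward pass; the sizes and values chosen for `batch_size`, `hidden_units`, and `learning_rate` are illustrative assumptions, named to match the recipes above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters this chapter varies (values are illustrative assumptions)
batch_size = 32        # recipe: impact of batch size on model accuracy
hidden_units = 64      # recipe: a deeper network stacks more such layers
learning_rate = 0.01   # recipe: varying the learning rate

n_features, n_classes = 10, 3
X = rng.normal(size=(batch_size, n_features))  # one batch of scaled inputs

# Randomly initialized weights and biases for a 10 -> 64 -> 3 network
W1 = rng.normal(scale=0.1, size=(n_features, hidden_units))
b1 = np.zeros(hidden_units)
W2 = rng.normal(scale=0.1, size=(hidden_units, n_classes))
b2 = np.zeros(n_classes)

# Forward pass: ReLU hidden layer, then a softmax output layer
hidden = np.maximum(0, X @ W1 + b1)
logits = hidden @ W2 + b2
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape)  # one probability row per sample in the batch
```

Each recipe in this chapter changes one of these knobs (batch size, depth, learning rate, optimizer, input scaling) while keeping the rest fixed, and observes the effect on accuracy.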

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb.
