
Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact on training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we looked at the basics of how a neural network functions. We also learned that various hyperparameters impact the accuracy of a neural network. In this chapter, we will get into the details of those hyperparameters and how each one affects a network's training and accuracy.

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb.
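
To set the stage for the recipes that follow, here is a minimal sketch, not taken from the book's notebook, of the kind of vanilla feedforward network whose hyperparameters (scaling of inputs, batch size, depth, learning rate, and optimizer) the chapter tunes. The toy random data, layer sizes, and specific values below are illustrative assumptions, and the code uses the tf.keras API.

```python
# A minimal, assumed sketch of a vanilla feedforward network in Keras.
# The dataset and hyperparameter values are placeholders, not the book's.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Toy data standing in for a real dataset such as MNIST:
# 1,000 samples of 784 features scaled to [0, 1], with 10 classes.
X_train = np.random.rand(1000, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(1000,))

model = Sequential([
    Dense(128, activation="relu", input_shape=(784,)),  # hidden layer; depth/width are hyperparameters
    Dense(10, activation="softmax"),                    # 10-class output
])

# Learning rate, optimizer, batch size, and epochs are the knobs
# that the recipes in this chapter vary one at a time.
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(X_train, y_train, batch_size=32, epochs=5, validation_split=0.2)
```

Each recipe revisits one of these choices, for example swapping the optimizer, changing the batch size, or adding layers, and measures its effect on training speed and accuracy.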
