Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact of training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we looked at the basics of how a neural network functions. We also learned that various hyperparameters impact the accuracy of a neural network. In this chapter, we will get into the details of how each of these hyperparameters affects a neural network.
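
To make the hyperparameters concrete before diving into the recipes, here is a minimal sketch of a vanilla Keras model with the knobs this chapter varies (batch size, learning rate, optimizer, and batch normalization) called out in comments. This sketch is not taken from the book's notebook; the random data, layer sizes, and specific values are placeholders for illustration only:

# Minimal sketch (illustrative only): a small Keras classifier highlighting
# the hyperparameters covered in this chapter's recipes.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
from keras.optimizers import Adam

# Placeholder data: 1,000 samples, 20 features, 10 classes (one-hot encoded)
X_train = np.random.rand(1000, 20)
y_train = np.eye(10)[np.random.randint(0, 10, 1000)]

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(BatchNormalization())               # recipe: speeding up training with batch normalization
model.add(Dense(10, activation='softmax'))

model.compile(optimizer=Adam(0.001),          # recipes: varying the learning rate / loss optimizer
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train,
          batch_size=32,                      # recipe: impact of batch size on model accuracy
          epochs=10,
          validation_split=0.2)               # held-out split to watch for overfitting
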

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb