
How to do it...

In code, batch normalization is applied as follows:

Note that we will be using the same data-preprocessing steps as those used in step 1 and step 2 of the Scaling the input dataset recipe.
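For reference, here is a minimal sketch of those two preprocessing steps on the MNIST dataset, assuming (as in that recipe) that the images are flattened into 784-dimensional vectors, scaled to the [0, 1] range, and the labels are one-hot encoded:

from keras.datasets import mnist
from keras.utils import np_utils

# Load MNIST and flatten each 28 x 28 image into a 784-dimensional vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32')

# Scale pixel values from [0, 255] to [0, 1]
X_train /= 255.
X_test /= 255.

# One-hot encode the 10 digit classes
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)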

  1. Import the BatchNormalization method as follows:
from keras.layers.normalization import BatchNormalization # in newer Keras versions: from keras.layers import BatchNormalization
  2. Instantiate a model and build the same architecture as we built when using the regularization technique. The only addition is that we apply batch normalization after the hidden layer:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu', kernel_regularizer=l2(0.01)))
model.add(BatchNormalization())
model.add(Dense(10, activation='softmax', kernel_regularizer=l2(0.01)))
  3. Compile and fit the model as follows:
from keras.optimizers import Adam
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=1024, verbose=1)
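The returned history object records the loss and accuracy per epoch, so the graphs shown below can be reproduced with a short matplotlib sketch such as the following (note that the metric keys are 'acc'/'val_acc' in older Keras versions and 'accuracy'/'val_accuracy' in newer ones):

import matplotlib.pyplot as plt

# Training versus test loss per epoch
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='test loss')
plt.xlabel('epochs')
plt.legend()

# Training versus test accuracy per epoch
plt.subplot(1, 2, 2)
plt.plot(history.history['acc'], label='train accuracy')
plt.plot(history.history['val_acc'], label='test accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()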

This results in much faster training than when there is no batch normalization, as the following graphs show:

The previous graphs show the training and test loss and accuracy when there is no batch normalization and only regularization is used. The following graphs show the training and test loss and accuracy with both regularization and batch normalization:

Note that, across the preceding two scenarios, training is much faster and reaches a higher test dataset accuracy (~97%) when we perform batch normalization than when we don't (~91%).

Thus, batch normalization results in much quicker training.
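To make explicit what the layer computes, here is a minimal NumPy sketch of the transformation batch normalization applies to a batch at training time (gamma and beta stand in for the layer's learned scale and shift parameters; the epsilon value mirrors the Keras default of 0.001):

import numpy as np

def batch_norm(x, gamma, beta, eps=0.001):
    # Normalize each feature over the batch dimension, then scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Example: a batch of 4 samples with 3 features each
x = np.random.randn(4, 3) * 5. + 10.
out = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # roughly 0 per feature
print(out.std(axis=0))   # roughly 1 per feature

Because each hidden layer's inputs stay in a well-scaled range, gradients remain stable and the network converges in fewer epochs, which is the speed-up observed above.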
