
How to do it...

In code, batch normalization is applied as follows:

Note that we will be using the same data-preprocessing steps as those we used in step 1 and step 2 in the Scaling the input dataset recipe.
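The exact preprocessing code lives in that recipe; purely as a reminder, a minimal sketch of those steps (assuming the MNIST dataset used throughout these recipes) might look like this:

from keras.datasets import mnist
from keras.utils import np_utils

# Load MNIST, flatten each 28 x 28 image into a 784-value vector,
# scale pixel values to the [0, 1] range, and one-hot encode the labels
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32') / 255
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)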

  1. Import the BatchNormalization layer as follows:
from keras.layers.normalization import BatchNormalization
  2. Instantiate a model and build the same architecture as we built when using the regularization technique. The only addition is that we perform batch normalization in a hidden layer:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu', kernel_regularizer=l2(0.01)))
model.add(BatchNormalization())
model.add(Dense(10, activation='softmax', kernel_regularizer=l2(0.01)))
  3. Build, compile, and fit the model as follows:
from keras.optimizers import Adam
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=1024, verbose=1)
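The loss and accuracy graphs discussed next are drawn from this history object; a minimal plotting sketch (assuming matplotlib is available, and noting that newer Keras versions name the keys 'accuracy'/'val_accuracy' rather than 'acc'/'val_acc') might look like this:

import matplotlib.pyplot as plt

# Training and validation (test) loss per epoch
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='test loss')
plt.xlabel('epochs')
plt.legend()

# Training and validation (test) accuracy per epoch
plt.subplot(1, 2, 2)
plt.plot(history.history['acc'], label='train accuracy')
plt.plot(history.history['val_acc'], label='test accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()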

Training with batch normalization in place is much faster than training without it, as the following graphs show:

The preceding graphs show the training and test loss and accuracy when there is only regularization and no batch normalization. The following graphs show the training and test loss and accuracy with both regularization and batch normalization:

Note that, in the preceding two scenarios, training is much faster and the model is more accurate when we perform batch normalization (test dataset accuracy of ~97%) than when we don't (test dataset accuracy of ~91%).

Thus, batch normalization results in much quicker training.
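To verify the final test accuracy quoted above, one could also evaluate the trained model directly; this quick check is not part of the original recipe:

# Evaluate on the held-out test set; the second value returned is accuracy
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy: %.4f' % test_acc)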
