
How to do it...

In the previous recipe, we built a model with a batch size of 32. In this recipe, we will fit the same model with a much larger batch size so that we can contrast a low batch size against a high batch size over the same number of epochs:

  1. Preprocess the dataset and fit the model as follows:
# Imports assumed from the previous recipe (Keras 2.x with the standalone keras package)
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load MNIST and flatten each 28 x 28 image into a 784-dimensional vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# Scale pixel values to the [0, 1] range
X_train = X_train/255
X_test = X_test/255
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Same architecture as the previous recipe: one hidden layer of 1,000 units
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# The only change from the previous recipe: batch_size is now 30,000 instead of 32
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=30000, verbose=1)

Note that the only change in the code is the batch_size parameter passed to model.fit.
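To see why the larger batch size slows convergence per epoch, note that the number of weight updates per epoch is the number of training samples divided by the batch size. The following quick check is not part of the original recipe, just plain Python illustrating the arithmetic:

import math

train_samples = 60000  # size of the MNIST training set
for batch_size in (32, 30000):
    # One weight update per batch; the last partial batch still counts as one update
    updates = math.ceil(train_samples / batch_size)
    print(batch_size, '->', updates, 'weight updates per epoch')
# 32 -> 1875 weight updates per epoch
# 30000 -> 2 weight updates per epoch

With batch_size=30000, the model performs only 2 weight updates per epoch instead of 1,875, which is why it needs many more epochs to reach the same accuracy.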

  2. Plot the training and test accuracy and loss values over different epochs (the code to generate these plots remains the same as the code we used in step 8 of the Training a vanilla neural network recipe; a minimal sketch of that plotting code follows):
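The step-8 code itself is not reproduced in this section; the following is a minimal sketch of the kind of plotting code involved, assuming matplotlib and the history object returned by model.fit above (the exact code in that recipe may differ in styling):

import matplotlib.pyplot as plt

# history.history holds per-epoch metrics; in older Keras 2.x the keys are
# 'acc'/'val_acc', while newer versions use 'accuracy'/'val_accuracy'
epochs = range(1, len(history.history['loss']) + 1)
plt.subplot(1, 2, 1)
plt.plot(epochs, history.history['loss'], label='Training loss')
plt.plot(epochs, history.history['val_loss'], label='Test loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(epochs, history.history['acc'], label='Training accuracy')
plt.plot(epochs, history.history['val_acc'], label='Test accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()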

In the preceding scenario, you should notice that the model takes considerably more epochs to reach ~98% accuracy than it did when the batch size was smaller.
