
Getting ready

To understand the impact of the choice of optimizer on network accuracy, let's contrast the scenario laid out in the previous sections (which used the Adam optimizer) with a stochastic gradient descent (SGD) optimizer in this section, while reusing the same scaled MNIST training and test datasets (the same data-preprocessing steps as step 1 and step 2 of the Scaling the dataset recipe):

from keras.models import Sequential
from keras.layers import Dense

# Define the same architecture as before: one hidden layer of 1,000 units
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
# Compile with the SGD optimizer instead of Adam
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32, verbose=1)

Note that when we use the stochastic gradient descent optimizer in the preceding code, the final accuracy after 100 epochs is ~98% (the code that generates the plots in the following diagram is the same as the code we used in step 8 of the Training a vanilla neural network recipe):
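For reference, the following is a minimal sketch of such a plotting routine, not the book's exact step 8 code; it assumes the history object returned by model.fit previously, and notes that older Keras versions record the metric keys as 'acc'/'val_acc' rather than 'accuracy'/'val_accuracy':

import matplotlib.pyplot as plt

# Pick the right history key for the installed Keras version
acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'
plt.plot(history.history[acc_key], label='Training accuracy')
plt.plot(history.history['val_' + acc_key], label='Validation accuracy')
plt.title('Accuracy over epochs (SGD optimizer)')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()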

However, we should also note that the SGD model reached these high accuracy levels much more slowly than the model that used the Adam optimizer.
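To see this convergence gap directly, one approach (a sketch, not the book's original code) is to retrain the same architecture with each optimizer and overlay the validation-accuracy curves; it assumes X_train, y_train, X_test, and y_test are the scaled MNIST arrays from the earlier preprocessing steps:

from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

def build_model(optimizer):
    # Same architecture as above, parameterized by the optimizer
    model = Sequential()
    model.add(Dense(1000, input_dim=784, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model

# Train one model per optimizer and keep the training histories
histories = {}
for opt in ('sgd', 'adam'):
    histories[opt] = build_model(opt).fit(
        X_train, y_train, validation_data=(X_test, y_test),
        epochs=100, batch_size=32, verbose=0)

# Overlay the validation-accuracy curves
for opt, hist in histories.items():
    val_key = 'val_accuracy' if 'val_accuracy' in hist.history else 'val_acc'
    plt.plot(hist.history[val_key], label=opt)
plt.xlabel('Epochs')
plt.ylabel('Validation accuracy')
plt.legend()
plt.show()

With this plot, the Adam curve would be expected to climb toward its peak accuracy within far fewer epochs than the SGD curve, illustrating the difference described above.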
