
How to do it...

  1. Load and scale the input dataset:
from keras.datasets import mnist
from keras.utils import np_utils

# Load MNIST and flatten each 28 x 28 image into a 784-dimensional vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# Scale the pixel values from [0, 255] to [0, 1]
X_train = X_train/255
X_test = X_test/255
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
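
As a quick sanity check, we can print the shapes of the arrays we just prepared (the shapes shown in the comments are the standard ones for the MNIST dataset):

print(X_train.shape)   # (60000, 784)
print(X_test.shape)    # (10000, 784)
print(y_train.shape)   # (60000, 10)
print(num_classes)     # 10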
  2. Let's look at the distribution of the input values:
X_train.flatten()

The preceding code flattens all the inputs into a single array of shape (47040000,), which equals 28 x 28 x X_train.shape[0], that is, 784 x 60,000. Let's plot the distribution of all the input values:

import matplotlib.pyplot as plt
%matplotlib inline

plt.hist(X_train.flatten())
plt.grid(False)
plt.title('Histogram of input values')
plt.xlabel('Input values')
plt.ylabel('Frequency of input values')

We notice that the majority of the inputs are zero (note that all the input images have a black background; hence, the majority of the values are zero, which is the pixel value of the color black).
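
We can verify this claim directly. The following is a quick check that assumes the X_train array prepared in step 1 is still in scope:

import numpy as np
# Fraction of input values that are exactly zero (the black background)
print(np.mean(X_train == 0))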

  3. Let's explore a scenario where we invert the colors, so that the background is white and the digits are written in black, using the following code:
# Invert the pixel values (the inputs were scaled to [0, 1] in step 1)
X_train = 1-X_train
X_test = 1-X_test

Let's plot the images:

# Plot the first four inverted images in a 2 x 2 grid
for i in range(4):
    plt.subplot(2, 2, i + 1)
    plt.imshow(X_train[i].reshape(28, 28), cmap=plt.get_cmap('gray'))
    plt.grid(False)
plt.show()

The resulting images show the digits in black on a white background.

If we re-run the histogram code from step 2 on the inverted data, we should notice that the majority of the input values now have a value of one.
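
Again, we can confirm this numerically. This quick check assumes the inverted X_train array from the previous step:

import numpy as np
# Fraction of input values that are exactly one (the now-white background)
print(np.mean(X_train == 1))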

  4. Let's go ahead and build our model using the same model architecture that we built in the Scaling input dataset section:
from keras.models import Sequential
from keras.layers import Dense

# Same architecture as before: one hidden layer of 1,000 units
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=32, verbose=1)
  5. Plot the training and test accuracy and loss values over different epochs (the code to generate the following plots remains the same as the one we used in step 8 of the Training a vanilla neural network recipe); a sketch of that plotting code is given below:
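The following is a minimal sketch of that plotting code, assuming the history object returned by model.fit above. Note that, depending on your Keras version, the accuracy keys in history.history may be 'acc'/'val_acc' (older versions) or 'accuracy'/'val_accuracy' (newer versions):

history_dict = history.history
epochs = range(1, len(history_dict['loss']) + 1)

# Loss values over epochs
plt.subplot(211)
plt.plot(epochs, history_dict['loss'], label='Training loss')
plt.plot(epochs, history_dict['val_loss'], label='Test loss')
plt.title('Loss over epochs')
plt.legend()

# Accuracy values over epochs
plt.subplot(212)
plt.plot(epochs, history_dict['acc'], label='Training accuracy')
plt.plot(epochs, history_dict['val_acc'], label='Test accuracy')
plt.title('Accuracy over epochs')
plt.legend()
plt.show()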

We should note that model accuracy has now fallen to ~97%, compared to ~98% when we used the same model, for the same number of epochs and batch size, on a dataset that has a majority of zeros (and not a majority of ones). Additionally, the model reached 97% accuracy considerably more slowly than in the scenario where the majority of the input pixels are zero.

The intuition for the decrease in accuracy when the majority of the data points are non-zero is as follows: when the majority of pixels are zero, the model's task is easier (fewer weights have to be fine-tuned), as it has to make predictions based on only a few pixel values (the minority with a pixel value greater than zero). When the majority of the data points are non-zero, however, a larger number of weights need to be fine-tuned to make predictions.
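
To make this concrete, we can count the average number of non-zero input values per image in both scenarios. This is a rough illustration that reloads the raw data so that the original pixel values are available:

import numpy as np
from keras.datasets import mnist

# Reload the raw data to compare both scenarios
(X_raw, _), _ = mnist.load_data()
X_scaled = X_raw.reshape(X_raw.shape[0], 784).astype('float32')/255
X_inverted = 1 - X_scaled

# Average number of non-zero inputs per image (out of 784)
print('Original:', (X_scaled > 0).sum(axis=1).mean())
print('Inverted:', (X_inverted > 0).sum(axis=1).mean())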
