
How to do it...

  1. Load and scale the input dataset:
from keras.datasets import mnist
from keras.utils import np_utils

# Load MNIST and flatten each 28 x 28 image into a 784-dimensional vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# Scale pixel values from [0, 255] to [0, 1]
X_train = X_train/255
X_test = X_test/255
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
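
As a quick sanity check (an addition that is not in the original recipe), you can confirm the shapes that the rest of the recipe relies on:

print(X_train.shape) # (60000, 784)
print(y_train.shape) # (60000, 10)
print(num_classes)   # 10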
  2. Let's look at the distribution of the input values:
X_train.flatten()

The preceding code flattens all the inputs into a single one-dimensional array of shape (47,040,000,), which is 28 x 28 x X_train.shape[0] (that is, 784 x 60,000). Let's plot the distribution of all the input values:

import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(X_train.flatten())
plt.grid(False)
plt.title('Histogram of input values')
plt.xlabel('Input values')
plt.ylabel('Frequency of input values')

We notice that the majority of the inputs are zero: all the input images have a black background, and zero is the pixel value of the color black.
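
To put a number on this observation (a quick check, not part of the original recipe), you can compute the fraction of input values that are exactly zero:

# Roughly 80% of MNIST pixel values are zero (black background)
print((X_train == 0).mean())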

  3. Now, let's explore a scenario where we invert the colors, so that the background is white and the digits are written in black, using the following code:
# Invert the images: pixel values of 0 become 1 and vice versa
X_train = 1-X_train
X_test = 1-X_test

Let's plot the images:

import matplotlib.pyplot as plt
%matplotlib inline
# Display the first four inverted images in a 2 x 2 grid
for i in range(4):
    plt.subplot(2, 2, i + 1)
    plt.imshow(X_train[i].reshape(28, 28), cmap=plt.get_cmap('gray'))
    plt.grid(False)
plt.show()

They will look as follows:

The histogram of the resulting images now looks as follows:
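
The plot can be reproduced by rerunning the histogram code from step 2 on the now-inverted arrays:

plt.hist(X_train.flatten())
plt.grid(False)
plt.title('Histogram of input values')
plt.xlabel('Input values')
plt.ylabel('Frequency of input values')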

You should notice that the majority of the input values now have a value of one.

  4. Let's go ahead and build our model using the same model architecture that we built in the Scaling input dataset section:
from keras.models import Sequential
from keras.layers import Dense

# Same architecture as before: one hidden layer of 1,000 units
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=32, verbose=1)
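
If you want to read off the final test accuracy directly (a small addition, not in the original recipe):

# Evaluate the trained model on the (inverted) test set
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(test_loss, test_acc)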
  5. Plot the training and test accuracy and loss values over increasing epochs (the code to generate the following plots remains the same as the one we used in step 8 of the Training a vanilla neural network recipe; a minimal sketch follows):
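Since that code is not reproduced in this section, here is a minimal sketch of such a plot, assuming the history object returned by model.fit above (note that the metric keys are 'acc'/'val_acc' in older Keras versions and 'accuracy'/'val_accuracy' in newer ones):

# Plot training and validation accuracy and loss per epoch
epochs = range(1, len(history.history['loss']) + 1)
plt.subplot(211)
plt.plot(epochs, history.history['acc'], label='train accuracy')
plt.plot(epochs, history.history['val_acc'], label='test accuracy')
plt.legend()
plt.subplot(212)
plt.plot(epochs, history.history['loss'], label='train loss')
plt.plot(epochs, history.history['val_loss'], label='test loss')
plt.legend()
plt.show()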

We should note that model accuracy has now fallen to ~97%, compared to ~98% when the same model was trained for the same number of epochs and batch size on the dataset in which the majority of values are zero (rather than one). Additionally, the model reached 97% accuracy considerably more slowly than in the scenario where the majority of the input pixels are zero.

The intuition for the decrease in accuracy when the majority of the data points are non-zero is as follows: when the majority of pixels are zero, the model's task is easier (fewer weights need to be fine-tuned), as it only has to make predictions based on the few pixels that have values greater than zero. However, when the majority of the data points are non-zero, a higher number of weights need to be fine-tuned to make predictions.
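To make this intuition concrete (an illustrative check, not part of the original recipe), you can compare the two settings directly; recall that X_train currently holds the inverted data, so 1 - X_train recovers the original scaled data:

# The majority of original inputs are zero; the majority of inverted inputs are one
print(((1 - X_train) > 0).mean()) # original data: only ~20% of pixels are non-zero
print((X_train == 1).mean())      # inverted data: ~80% of pixels are exactly one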
