
How to do it...

  1. We start by importing the libraries as follows:
import numpy as np 
import pandas as pd
from sklearn.model_selection import train_test_split

from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import Adam

from sklearn.preprocessing import StandardScaler

SEED = 2017
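Note that SEED only controls the train/test split below. As an optional extra (an assumption, not part of the original recipe), seeding NumPy makes runs somewhat more reproducible, although GPU training can still be non-deterministic:

# Optional: seed NumPy as well (full determinism is not guaranteed)
np.random.seed(SEED)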
  2. Load the dataset:
data = pd.read_csv('Data/winequality-red.csv', sep=';')
y = data['quality']
X = data.drop(['quality'], axis=1)
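If the CSV file isn't available locally, the same data can be read directly from the UCI Machine Learning Repository instead of Data/winequality-red.csv. A minimal sketch (the URL is an assumption; verify it before relying on it):

# Alternative: read the red wine quality data straight from UCI (assumed URL)
url = ('https://archive.ics.uci.edu/ml/machine-learning-databases/'
       'wine-quality/winequality-red.csv')
data = pd.read_csv(url, sep=';')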
  3. Split the data into training and test sets:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
  4. Print the average quality and the first rows of the training set:
print('Average quality training set: {:.4f}'.format(y_train.mean()))
X_train.head()

In the following screenshot, we can see the first rows of the training data:

Figure 2-8: Training data
  5. An important next step is to normalize the input data:
scaler = StandardScaler().fit(X_train)
X_train = pd.DataFrame(scaler.transform(X_train))
X_test = pd.DataFrame(scaler.transform(X_test))
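As a quick sanity check, each scaled training feature should now have a mean of roughly 0 and a standard deviation of roughly 1:

# Column-wise statistics of the scaled training features
print(X_train.mean().round(4).values)
print(X_train.std().round(4).values)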
  6. Determine the baseline predictions:
# Predict the mean quality of the training data for each test input
print('MSE:', np.mean((y_test - ([y_train.mean()] * y_test.shape[0])) ** 2).round(4))
## MSE: 0.594
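The same baseline can be cross-checked with scikit-learn's DummyRegressor; a minimal sketch:

from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error

# Always predicts the mean of y_train, mirroring the manual baseline above
dummy = DummyRegressor(strategy='mean').fit(X_train, y_train)
print('Baseline MSE: {:.4f}'.format(mean_squared_error(y_test, dummy.predict(X_test))))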
  7. Now, let's build our neural network by defining the network architecture:
model = Sequential()
# First hidden layer with 200 hidden units
model.add(Dense(200, input_dim=X_train.shape[1], activation='relu'))
# Second hidden layer with 25 hidden units
model.add(Dense(25, activation='relu'))
# Output layer
model.add(Dense(1, activation='linear'))
# Set optimizer
opt = Adam()
# Compile model
model.compile(loss='mse', optimizer=opt, metrics=['accuracy'])
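Before training, it can be useful to inspect the resulting architecture:

# Prints the layer shapes and the number of trainable parameters
model.summary()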
  8. Let's define the callbacks for early stopping and saving the best model:
callbacks = [
EarlyStopping(monitor='val_acc', patience=20, verbose=2),
ModelCheckpoint('checkpoints/multi_layer_best_model.h5', monitor='val_acc', save_best_only=True, verbose=0)
]
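ModelCheckpoint will fail at save time if the target directory does not exist, so it's worth creating it up front:

import os

# Ensure the checkpoint directory exists before training starts
os.makedirs('checkpoints', exist_ok=True)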
  9. Train the model with a batch size of 64, a maximum of 5,000 epochs, and a validation split of 20% (early stopping will normally end training well before the epoch limit):
batch_size = 64
n_epochs = 5000
history = model.fit(X_train.values, y_train, batch_size=batch_size, epochs=n_epochs,
                    validation_split=0.2, verbose=2, callbacks=callbacks)
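Because we assigned the return value of model.fit to history, we can plot the training and validation loss to inspect convergence. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# history.history stores the per-epoch metrics collected during training
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE')
plt.legend()
plt.show()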
  10. We can now print the performance on the test set after loading the optimal weights:
best_model = model
best_model.load_weights('checkpoints/multi_layer_best_model.h5')
best_model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

# Evaluate on test set
score = best_model.evaluate(X_test.values, y_test, verbose=0)
print('Test accuracy: %.2f%%' % (score[1]*100))

## Test accuracy: 66.25%
## Benchmark accuracy on dataset 62.4%
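Because the model was compiled with loss='mse', the first element returned by evaluate holds the test MSE, which can be compared directly with the baseline from step 6:

print('Test MSE: {:.4f}'.format(score[0]))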
With a small dataset, it's advisable to retrain on the complete training set (without a validation split) and to increase the number of epochs proportionally to account for the additional data. Another option is to use cross-validation and average the results when making predictions, as shown in the sketch below.
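A minimal sketch of the cross-validation option, assuming a hypothetical build_model helper that recreates the architecture above (the fold count and epoch budget are illustrative choices, not values from the recipe):

from sklearn.model_selection import KFold

def build_model():
    # Rebuild the same architecture from scratch for every fold
    m = Sequential()
    m.add(Dense(200, input_dim=X_train.shape[1], activation='relu'))
    m.add(Dense(25, activation='relu'))
    m.add(Dense(1, activation='linear'))
    m.compile(loss='mse', optimizer=Adam())
    return m

kf = KFold(n_splits=5, shuffle=True, random_state=SEED)
fold_predictions = []
for train_idx, _ in kf.split(X_train):
    fold_model = build_model()
    fold_model.fit(X_train.values[train_idx], y_train.values[train_idx],
                   batch_size=batch_size, epochs=200, verbose=0)
    fold_predictions.append(fold_model.predict(X_test.values).ravel())

# Average the per-fold predictions on the test set
y_pred = np.mean(fold_predictions, axis=0)
print('CV-averaged test MSE: {:.4f}'.format(np.mean((y_test.values - y_pred) ** 2)))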