
How to do it...

  1. To assign weightages at a row level, we will modify our train and test datasets in such a way that, after ordering the dataset, the first 2100 data points form the train dataset and the rest form the test dataset:
X_train = x[:2100,:,:]
y_train = y[:2100]
X_test = x[2100:,:,:]
y_test = y[2100:]
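The chronological split above can be sketched on a toy array to confirm the shapes (the array sizes here are illustrative, not the recipe's actual data):

```python
import numpy as np

# Toy stand-in for the recipe's ordered dataset; the recipe splits
# at row 2100, here we split a 10-row series at row 7.
x = np.arange(20).reshape(10, 2, 1)
y = np.arange(10)

split = 7
X_train, X_test = x[:split], x[split:]
y_train, y_test = y[:split], y[split:]
# X_train.shape == (7, 2, 1); X_test.shape == (3, 2, 1)
```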
  2. A row in the input should have a higher weight if it occurred more recently and a lower weight otherwise:
weights = np.arange(X_train.shape[0]).reshape((X_train.shape[0], 1))/2100

The preceding code block assigns a lower weightage to the initial data points and a higher weightage to the data points that occurred more recently.
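As a quick check, a weight vector built this way starts at zero for the oldest row and approaches one for the newest (a standalone NumPy sketch of the recipe's weights line):

```python
import numpy as np

n_train = 2100  # number of training rows in the recipe

# Same arithmetic as the recipe's weights line
weights = np.arange(n_train).reshape(n_train, 1) / n_train

# Oldest row gets weight 0.0; newest row gets 2099/2100, roughly 0.9995
```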

Now that we have defined the weights for each row, we will include them in the custom loss function. Note that in this case our custom loss function must accept the predicted and actual output values as well as the weight to be assigned to each row.

  3. The partial function enables us to pass more variables than just the actual and predicted values to the custom loss function:
import keras.backend as K
from functools import partial
  4. To pass weights to the custom_loss function, we shall be using the partial function to pass both custom_loss and weights as parameters in step 7. In the code that follows, we define the custom_loss function:
def custom_loss_4(y_true, y_pred, weights):
    return K.square(K.abs(y_true - y_pred) * weights)
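To see what this loss computes row by row, here is a NumPy sketch with hypothetical toy values (the Keras version performs the same arithmetic on tensors):

```python
import numpy as np

# Hypothetical values purely for illustration
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
weights = np.array([0.1, 0.5, 1.0])  # later rows weigh more

# Same arithmetic as custom_loss_4: square(abs(error) * weight)
per_row_loss = np.square(np.abs(y_true - y_pred) * weights)
# Row 0 contributes (0.5 * 0.1)**2 = 0.0025, while the same absolute
# error on the most recent row would contribute 0.25 — recency matters.
```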
  5. Given that the model we are building has two inputs, the input variables and the weights corresponding to each row, we will first define the input shapes of the two as follows:
from keras.layers import Input, Dense
from keras.models import Model

input_layer = Input(shape=(5,1))
weights_tensor = Input(shape=(1,))
  6. Now that we have defined the inputs, let's initialize a model that accepts the two inputs as follows:
inp1 = Dense(1000, activation='relu')(input_layer)
out = Dense(1, activation='linear')(inp1)
model = Model([input_layer, weights_tensor], out)
  7. Now that we have initialized the model, we will define the optimization function as follows:
cl4 = partial(custom_loss_4, weights=weights_tensor)

In the preceding step, we specify that we need to minimize the custom_loss_4 function, and that we provide an additional variable (weights_tensor) to the custom loss function.
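The mechanics of partial can be seen with a plain-Python stand-in (weighted_gap is a made-up function for illustration, not part of the recipe):

```python
from functools import partial

# A toy stand-in for a loss that needs an extra argument
def weighted_gap(y_true, y_pred, weights):
    return abs(y_true - y_pred) * weights

# partial binds weights up front, leaving a two-argument
# (y_true, y_pred) callable — the signature Keras expects of a loss
loss_fn = partial(weighted_gap, weights=0.5)

loss_fn(3.0, 1.0)  # abs(3 - 1) * 0.5 = 1.0
```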

  8. Finally, before fitting the model, we will also provide weights for each row of the test dataset. Given that we are predicting these values, there is no use in giving a lower weightage to some rows over others; the weights are needed only because the model we defined accepts two inputs. We therefore set all test weights to one:
test_weights = np.ones((X_test.shape[0], 1)) # 156 rows in the test dataset
  9. Once we have specified the weights of the test data, we will go ahead and compile and fit the model as follows:
model.compile(optimizer='adam', loss=cl4)
model.fit(x=[X_train, weights], y=y_train, epochs=300, batch_size=32, validation_data=([X_test, test_weights], y_test))

The preceding code results in a test dataset loss that is very different from what we saw in the previous section. We will look at the reason for this in more detail in Chapter 11, Building a Recurrent Neural Network.

You need to be extremely careful when implementing the preceding model, as it has a few pitfalls. In general, it is advisable to implement models that predict stock price movements only after sufficient due diligence.