Defining the custom loss function
In the previous section, we used the predefined mean absolute error loss function to perform the optimization. In this section, we will learn about defining a custom loss function to perform optimization.
The custom loss function that we shall build is a modified mean squared error, where the error is the difference between the square root of the actual value and the square root of the predicted value, and that difference is then squared.
The custom loss function is defined as follows:
import keras.backend as K

def loss_function(y_true, y_pred):
    # squared difference between the square roots of the actual and predicted values
    return K.square(K.sqrt(y_pred) - K.sqrt(y_true))
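Note that the function returns per-element values; Keras averages these into a single scalar loss during training. If you prefer to make that reduction explicit, an equivalent formulation is the following sketch (the name loss_function_explicit is illustrative, and it reuses the keras.backend import shown above):

def loss_function_explicit(y_true, y_pred):
    # same loss, with the averaging over elements written out explicitly;
    # Keras performs this reduction automatically when the function
    # returns per-element values
    return K.mean(K.square(K.sqrt(y_pred) - K.sqrt(y_true)))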
Now that we have defined the loss function, we will reuse the same input and output datasets that we prepared in the previous section, and we will also use the same model that we defined earlier.
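For readers working through this recipe in isolation, a model of the kind used in the earlier section might look like the following sketch. The layer sizes and activation are assumptions for illustration, not the book's exact architecture; only the single-unit output layer for regression is essential:

from keras.models import Sequential
from keras.layers import Dense

# illustrative stand-in for the model defined in the previous section;
# the hidden-layer size is an assumption, the input dimensionality is
# taken from the prepared training data
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=train_data2.shape[1]))
model.add(Dense(1))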
Now, let's compile the model:
model.compile(loss=loss_function, optimizer='adam')
In the preceding code, note that we passed the custom loss_function we defined earlier as the loss argument.
history = model.fit(train_data2, train_targets, validation_data=(test_data2, test_targets), epochs=100, batch_size=32, verbose=1)
Once we fit the model, we will note that the mean absolute error on the test set is ~6.5 units, which is slightly lower than in the previous iteration, where we used the mean_absolute_error loss function.
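To see where a figure like this comes from, the test-set mean absolute error can be computed directly from the model's predictions. A minimal sketch, assuming test_data2 and test_targets are the arrays prepared earlier:

import numpy as np

# predict on the test set and compute the mean absolute error by hand
pred = model.predict(test_data2)
mae = np.mean(np.abs(test_targets - pred[:, 0]))
print(mae)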