
Getting ready

In the previous chapter on building feedforward neural networks, we learned that the learning rate is used when updating the weights, and that the change in a weight is proportional to the reduction in loss.

Specifically, the change in a weight's value is the decrease in loss per unit change in that weight, multiplied by the learning rate. Hence, the lower the learning rate, the smaller the change in the weight value, and vice versa.
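To make this concrete, consider a quick sketch with made-up numbers (they are illustrative only, not from the recipe): if nudging a weight up by 0.0001 lowers the loss by 0.002, the estimated decrease in loss per unit weight change is 20, and the size of the update scales directly with the learning rate:

loss_before = 0.500  # loss at the current weight value (hypothetical)
loss_after = 0.498   # loss after nudging the weight up by 0.0001 (hypothetical)
decrease_per_unit = (loss_before - loss_after) / 0.0001  # 20.0
for lr in (0.001, 0.01, 0.1):
    print(lr, decrease_per_unit * lr)  # the weight update grows with the learning rate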

You can essentially think of the possible weight values as a continuous spectrum on which the weights are initialized randomly. When the change in the weight values is large, many values in that spectrum are skipped over. However, when the change in the weight value is small, more candidate weight values are considered, and the weights are more likely to reach the global minimum.

To understand this further, let's consider the toy example of fitting the line y = 2x, where the initial weight value is 1.477 and the initial bias value is zero. The feedforward and backpropagation functions will remain the same as we saw in the previous chapter:

import numpy as np
from copy import deepcopy

def feed_forward(inputs, outputs, weights):
    # weights[0] is the weight and weights[1] is the bias
    hidden = np.dot(inputs, weights[0])
    out = hidden + weights[1]
    squared_error = np.square(out - outputs)
    return squared_error

def update_weights(inputs, outputs, weights, epochs, lr):
    for epoch in range(epochs):
        org_loss = feed_forward(inputs, outputs, weights)
        wts_tmp = deepcopy(weights)
        wts_tmp2 = deepcopy(weights)
        for ix, wt in enumerate(weights):
            # nudge one parameter by a small amount and remeasure the loss
            wts_tmp[-(ix+1)] += 0.0001
            loss = feed_forward(inputs, outputs, wts_tmp)
            # decrease in loss per unit change in the parameter
            del_loss = np.sum(org_loss - loss)/(0.0001*len(inputs))
            # update the parameter in proportion to the learning rate
            wts_tmp2[-(ix+1)] += del_loss*lr
            wts_tmp = deepcopy(weights)
        weights = deepcopy(wts_tmp2)
    return wts_tmp2
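The recipe does not spell out the data setup. A minimal setup consistent with the text (the specific input values here are our assumption) is:

x = np.array([1, 2, 3, 4])  # toy inputs (assumed values)
y = np.array([2, 4, 6, 8])  # targets on the line y = 2x
w = [1.477, 0]              # initial weight and bias, as stated earlier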

Note that the only change from the backpropagation function we saw in the previous chapter is that the learning rate is now passed as a parameter. The weight value over different numbers of epochs when the learning rate is 0.01 is obtained as follows:

w_val = []
b_val = []
for k in range(1000):
    # retrain from the initial weights for (k+1) epochs and record the result
    w_new, b_new = update_weights(x, y, w, (k+1), 0.01)
    w_val.append(w_new)
    b_val.append(b_new)

The plot of the change in weight over different epochs can be obtained using the following code:

import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(w_val)
plt.title('Weight value over different epochs when learning rate is 0.01')
plt.xlabel('epochs')
plt.ylabel('weight value')
plt.grid(False)

The output of the preceding code is as follows:

In a similar manner, the value of the weight over different numbers of epochs when the learning rate is 0.1 is as follows:

This screenshot shows the value of the weight over different numbers of epochs when the learning rate is 0.5:
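Both of these plots can be reproduced with the same loop as before, changing only the lr argument; a sketch reusing the setup above:

w_val_01, w_val_05 = [], []
for k in range(1000):
    w_new, b_new = update_weights(x, y, w, (k+1), 0.1)
    w_val_01.append(w_new)
    w_new, b_new = update_weights(x, y, w, (k+1), 0.5)
    w_val_05.append(w_new)

plt.plot(w_val_01, label='lr = 0.1')
plt.plot(w_val_05, label='lr = 0.5')
plt.title('Weight value over different epochs for higher learning rates')
plt.xlabel('epochs')
plt.ylabel('weight value')
plt.legend()
plt.grid(False)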

Note that, in the preceding scenario, there was a drastic change in the weight values initially. The 0.1 learning rate still converged, while the 0.5 learning rate did not converge to an optimal solution: because the loss surface of this toy problem is a simple quadratic, the failure is not a local minimum but overshooting, where each update is so large that the weight repeatedly jumps past the minimum.

In the case when the learning rate was 0.5, the weight value therefore never settled at the optimal value of two.
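You can check this behavior directly by printing the final parameter values for each learning rate after 1,000 epochs (a quick diagnostic of our own, not part of the original recipe):

for lr in (0.01, 0.1, 0.5):
    final_w, final_b = update_weights(x, y, w, 1000, lr)
    print(lr, final_w, final_b)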
