
  • Deep Learning Essentials
  • Wei Di Anurag Bhardwaj Jianing Wei

Automatic differentiation

TensorFlow provides a very convenient API that computes the gradients (the deltas from backpropagation) for us and updates the network parameters automatically:

import numpy as np
import tensorflow as tf

# Note: a_0 (the input placeholder), y (the target placeholder), error,
# and res (the network output) are defined in the preceding section.

# Define the cost as the square of the errors
cost = tf.square(error)

# The Gradient Descent Optimizer will do the heavy lifting
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Define the function we want to approximate
def linear_fun(x):
    y = x[:, 0] * 2 + x[:, 1] * 4 + 1
    return y.reshape(y.shape[0], 1)

# Other variables during learning
train_batch_size = 100
test_batch_size = 50

# Normal TensorFlow - initialize values, create a session and run the model
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(1000):
    x_value = np.random.rand(train_batch_size, 2)
    y_value = linear_fun(x_value)
    sess.run(optimizer, feed_dict={a_0: x_value, y: y_value})
    if i % 100 == 0:
        test_x = np.random.rand(test_batch_size, 2)
        res_val = sess.run(res,
                           feed_dict={a_0: test_x, y: linear_fun(test_x)})
        print(res_val)
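To make concrete what the optimizer's automatic differentiation produces, here is a minimal plain-NumPy sketch (independent of TensorFlow; all names are illustrative) for a linear model with the squared-error cost: it computes the analytic gradients that reverse-mode differentiation would yield, checks one entry against a finite-difference estimate, and runs gradient descent on the same target as linear_fun above.

```python
import numpy as np

def cost(w, b, X, y):
    # Mean squared error of the linear model X @ w + b
    error = X @ w + b - y
    return np.mean(error ** 2)

def grads(w, b, X, y):
    # Analytic (reverse-mode) gradients of the mean squared error:
    # dC/dw = (2/N) X^T (Xw + b - y),  dC/db = (2/N) sum(Xw + b - y)
    error = X @ w + b - y
    n = X.shape[0]
    return (2.0 / n) * X.T @ error, (2.0 / n) * error.sum()

rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = X[:, 0] * 2 + X[:, 1] * 4 + 1   # same target as linear_fun

# Sanity check: the analytic gradient matches a finite-difference estimate
w0, b0, eps = np.array([0.5, -0.3]), 0.1, 1e-6
gw, gb = grads(w0, b0, X, y)
num = (cost(w0 + [eps, 0], b0, X, y) - cost(w0 - [eps, 0], b0, X, y)) / (2 * eps)
assert abs(gw[0] - num) < 1e-5

# Gradient descent, mirroring what the optimizer above does per step
w, b, learning_rate = np.zeros(2), 0.0, 0.01
for _ in range(20000):
    gw, gb = grads(w, b, X, y)
    w -= learning_rate * gw
    b -= learning_rate * gb

print(np.round(w, 2), round(b, 2))  # w converges to ~[2, 4], b to ~1
```

In practice we never write these gradient formulas by hand; the point of automatic differentiation is that TensorFlow derives them from the graph definition of the cost.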

In addition to this basic setting, let’s now talk about a few important concepts you might encounter in practice.
