
Optimizers

We will study AdamOptimizer here; TensorFlow's AdamOptimizer implements Kingma and Ba's Adam algorithm to adapt the learning rate during training. Adam has several advantages over the simple GradientDescentOptimizer. The first is that it keeps exponential moving averages of the gradient and of its square (momentum), which enables Adam to use a larger effective step size, and the algorithm converges to that step size without any fine-tuning.
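To make the moving averages concrete, here is a minimal NumPy sketch of a single Adam update following Kingma and Ba's paper; the function name and default constants are the commonly quoted values, used purely for illustration and not taken from TensorFlow's code:

import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # moving average of the gradient (first moment)
    m = beta1 * m + (1 - beta1) * grad
    # moving average of the squared gradient (second moment)
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias-corrected estimates (t is the 1-based step counter)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # parameter update with a per-parameter adaptive step size
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v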

The disadvantage of Adam is that it requires more computation and more stored state for each parameter in each training step, in order to maintain these moving averages. GradientDescentOptimizer can be used as well; it is cheaper per step, but it takes an explicit learning rate and would require more hyperparameter tuning before it converges as quickly.
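As a quick contrast, this is how the two optimizers are constructed; the learning rate passed to GradientDescentOptimizer below is only an illustrative value:

import tensorflow as tf

# Adam: every hyperparameter has a workable default
adam = tf.train.AdamOptimizer()

# plain gradient descent: the learning rate is mandatory and usually needs tuning
sgd = tf.train.GradientDescentOptimizer(learning_rate=0.001)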
The following example shows how to use AdamOptimizer:

  • tf.train.Optimizer is the base class; instantiating a subclass such as tf.train.AdamOptimizer creates an optimizer
  • tf.train.Optimizer.minimize(loss, var_list) adds the optimization operation to the computation graph (a sketch of what minimize() expands to follows this list)

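Under the hood, minimize() combines two calls on the optimizer: compute_gradients(), which differentiates the loss with respect to the variables, and apply_gradients(), which adds the parameter-update operation to the graph. A minimal sketch on a hypothetical one-variable graph (the variable w and the toy loss exist only for this illustration):

import tensorflow as tf

# hypothetical toy graph, only to show the Optimizer API
w = tf.get_variable("w", shape=(), initializer=tf.constant_initializer(5.0))
loss = tf.square(w - 2.0)

opt = tf.train.AdamOptimizer()
grads_and_vars = opt.compute_gradients(loss)    # list of (gradient, variable) pairs
train_op = opt.apply_gradients(grads_and_vars)  # adds the update op to the graph
# the single call opt.minimize(loss) is equivalent to the two lines above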
Here, automatic differentiation computes gradients without user input:

import numpy as np
import seaborn
import matplotlib.pyplot as plt
import tensorflow as tf

# input dataset
xData = np.arange(100, step=.1)
yData = xData + 20 * np.sin(xData/10)

# scatter plot for input data
plt.scatter(xData, yData)
plt.show()

# defining data size and batch size
nSamples = 1000
batchSize = 100

# resize
xData = np.reshape(xData, (nSamples,1))
yData = np.reshape(yData, (nSamples,1))

# input placeholders
x = tf.placeholder(tf.float32, shape=(batchSize, 1))
y = tf.placeholder(tf.float32, shape=(batchSize, 1))

# init weight and bias
with tf.variable_scope("linearRegression"):
    W = tf.get_variable("weights", (1, 1), initializer=tf.random_normal_initializer())
    b = tf.get_variable("bias", (1,), initializer=tf.constant_initializer(0.0))

y_pred = tf.matmul(x, W) + b
loss = tf.reduce_sum((y - y_pred)**2/nSamples)

# optimizer
opt = tf.train.AdamOptimizer().minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # gradient descent loop for 500 steps
    for _ in range(500):
        # random minibatch
        indices = np.random.choice(nSamples, batchSize)

        X_batch, y_batch = xData[indices], yData[indices]

        # gradient descent step
        _, loss_val = sess.run([opt, loss], feed_dict={x: X_batch, y: y_batch})

Here is the scatter plot for the dataset:

This is the plot of the learned model on the data:
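The learned-model plot can be produced from the trained weight and bias; a minimal sketch, assuming it is placed inside the tf.Session() block right after the training loop above (the indentation matches that block, and the names learnedW and learnedB are introduced only for this sketch):

    # fetch the trained parameters and draw the fitted line over the data
    learnedW, learnedB = sess.run([W, b])
    plt.scatter(xData, yData)
    plt.plot(xData, xData * learnedW[0, 0] + learnedB[0], color="red")
    plt.show()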
