
How to do it...

  1. At the moment, Gluon is included in the latest release of MXNet (follow the steps in the Building efficient models with MXNet recipe to install MXNet).
  2. After installing, we can import Gluon directly as follows:
from mxnet import gluon
  3. Next, we create some dummy data. For this, the data needs to be in MXNet's NDArray or Symbol format:
import mxnet as mx
import numpy as np

# Create the input and target arrays directly on the GPU
x_input = mx.nd.empty((1, 5), mx.gpu())
x_input[:] = np.array([[1, 2, 3, 4, 5]], np.float32)

y_input = mx.nd.empty((1, 5), mx.gpu())
y_input[:] = np.array([[10, 15, 20, 22.5, 25]], np.float32)
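The snippets in this recipe assume a GPU is available. If you need to run on a CPU-only machine, a minimal sketch (assuming MXNet 1.x, where mx.context.num_gpus() is available) is to pick the context once and pass it everywhere instead of hard-coding mx.gpu():

# Sketch: choose a context once and reuse it in place of mx.gpu() below.
ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
x_input = mx.nd.array([[1, 2, 3, 4, 5]], ctx=ctx, dtype=np.float32)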
  4. With Gluon, it's really straightforward to build a neural network by stacking layers:
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(16, activation="relu"))
    net.add(gluon.nn.Dense(y_input.shape[1]))  # one output unit per target value
  5. Next, we initialize the parameters and store them on our GPU as follows:
net.collect_params().initialize(mx.init.Normal(), ctx=mx.gpu())
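Note that Gluon allocates the parameters lazily: the shapes are only inferred on the first forward pass. A quick sanity check (illustrative, not part of the recipe) is to push the dummy input through the network and inspect a weight:

# Illustrative check: the first forward pass triggers shape inference.
net(x_input)
print(net)                          # prints the stacked layers
print(net[0].weight.data().shape)   # (16, 5): 16 units, 5 input features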
  6. With the following code, we set the loss function (L2Loss, that is, mean squared error, since the targets are continuous values) and the optimizer:
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': .1})
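As a quick standalone check with made-up values: Gluon's L2Loss returns half the mean squared error per example.

# Hypothetical values: L2Loss computes 0.5 * mean((pred - label)^2) per example.
pred = mx.nd.array([[1.0, 2.0]])
label = mx.nd.array([[0.0, 2.0]])
print(gluon.loss.L2Loss()(pred, label))  # -> [0.25]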
  7. We're ready to start training our model:
n_epochs = 10

for e in range(n_epochs):
    for i in range(x_input.shape[0]):
        # Slice with i:i+1 to keep the batch dimension: shape (1, 5)
        data = x_input[i:i + 1]
        target = y_input[i:i + 1]
        with mx.autograd.record():
            output = net(data)
            loss = loss_fn(output, target)
        loss.backward()
        trainer.step(data.shape[0])
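After training, an illustrative way to verify the result is to print the last loss value and the network's prediction for the training input:

# Illustrative follow-up: inspect the final loss and the fitted outputs.
print('final loss:', loss.mean().asscalar())
print('prediction:', net(x_input))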
We've briefly demonstrated how to implement a neural network architecture with Gluon. Gluon is a powerful extension of MXNet that lets you implement deep learning architectures with clean code, at almost no cost in performance.
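The remaining performance gap can be narrowed further with hybridization. A minimal sketch (not part of the recipe above): build the network as a HybridSequential and call hybridize(), so that Gluon compiles the imperative code into an optimized symbolic graph.

# Sketch: a hybridizable version of the same network.
net2 = gluon.nn.HybridSequential()
with net2.name_scope():
    net2.add(gluon.nn.Dense(16, activation="relu"))
    net2.add(gluon.nn.Dense(5))
net2.collect_params().initialize(mx.init.Normal(), ctx=mx.gpu())
net2.hybridize()  # later forward passes run the compiled symbolic graph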
