
Our first neural network

We present our first neural network, which learns how to map training examples (input array) to targets (output array). Let's assume that we work for one of the largest online companies, Wondermovies, which serves videos on demand. Our training dataset contains a feature representing the average hours users spent watching movies on the platform, and we would like to predict how much time each user will spend on the platform in the coming week. It's just an imaginary use case; don't think too much about it. Some of the high-level activities for building such a solution are as follows:

  • Data preparation: The get_data function prepares the tensors (arrays) containing input and output data
  • Creating learnable parameters: The get_weights function provides us with tensors containing random values that we will optimize to solve our problem
  • Network model: The simple_network function produces the output for the input data by applying a linear rule: multiplying the weights with the input data and adding the bias term (y = wx + b)
  • Loss: The loss_fn function provides information about how good the model is
  • Optimizer: The optimize function adjusts the randomly initialized weights so that the model calculates the target values more accurately

If you are new to machine learning, do not worry, as we will understand exactly what each function does by the end of the chapter. These functions abstract away PyTorch code to make it easier for us to understand, and we will dive into each of them in detail. The aforementioned high-level activities are common to most machine learning and deep learning problems. Later chapters in the book discuss techniques that can be used to improve each function to build useful applications.
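To give a first flavour of the data-preparation step, here is a minimal sketch of what get_data and get_weights could look like. It is not the implementation we build later in the chapter; it simply generates a small synthetic dataset standing in for the Wondermovies viewing hours and creates randomly initialized parameters with gradient tracking enabled. The shapes and the synthetic values are assumptions made purely for illustration.

import torch

def get_data():
    # Synthetic stand-in data: x holds average viewing hours per user,
    # y the hours we pretend each user watched the following week.
    # Both are made up purely for illustration.
    x = torch.rand(17, 1) * 10                            # 17 users, 1 feature each
    y = 0.3 * x.squeeze() + 1.0 + 0.1 * torch.randn(17)   # noisy linear target
    return x, y

def get_weights():
    # Randomly initialized learnable parameters; requires_grad=True asks
    # PyTorch to track gradients for them during the backward pass.
    w = torch.randn(1, requires_grad=True)
    b = torch.randn(1, requires_grad=True)
    return w, b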

Let's consider the following linear regression equation for our neural network:

y = wx + b

Here, w and b are the learnable parameters and x is the input data.

Let's write our first neural network in PyTorch:

x, y = get_data()  # x - represents training data, y - represents target variables
w, b = get_weights()  # w, b - learnable parameters

for i in range(500):
    y_pred = simple_network(x)  # function which computes wx + b
    loss = loss_fn(y, y_pred)  # calculates sum of the squared differences of y and y_pred
    if i % 50 == 0:
        print(loss)
    optimize(learning_rate)  # adjust w, b to minimize the loss
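To make the loop above concrete, the following is a minimal sketch of what the remaining helpers might look like; it is not the implementation developed later in the chapter, just one plausible way to fill in the pieces. It assumes w and b are module-level tensors returned by get_weights (as in the previous sketch), that the backward pass is folded into loss_fn so that optimize only applies a plain gradient-descent step, and that learning_rate is a small assumed constant such as 1e-4.

import torch

def simple_network(x):
    # Forward pass: apply the linear rule y = wx + b
    return torch.matmul(x, w) + b

def loss_fn(y, y_pred):
    # Sum of squared differences between predictions and targets
    loss = (y_pred - y).pow(2).sum()
    # Clear gradients left over from the previous iteration,
    # then backpropagate to fill w.grad and b.grad
    for param in (w, b):
        if param.grad is not None:
            param.grad.zero_()
    loss.backward()
    return loss.item()

def optimize(learning_rate):
    # Plain gradient descent: step each parameter against its gradient
    with torch.no_grad():
        w.sub_(learning_rate * w.grad)
        b.sub_(learning_rate * b.grad)

With definitions along these lines in place, the training loop shown earlier runs end to end, and the printed loss should shrink as w and b move toward the underlying linear relationship in the data.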

By the end of this chapter, you will have an idea of what is happening inside each function.
