- Deep Learning with PyTorch
- Vishnu Subramanian
Network implementation
As we have all the parameters (x, w, b, and y) required to implement the network, we perform a matrix multiplication between x and w, then add b to the result. This gives our predicted y. The function is implemented as follows:
def simple_network(x):
    # Linear model: multiply the input by the weights and add the bias
    y_pred = torch.matmul(x, w) + b
    return y_pred
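Here w and b are assumed to have been created earlier as tensors with requires_grad=True, so that gradients can later flow through them. As a minimal sketch, with made-up shapes chosen only for illustration, the function might be exercised like this:

import torch

# Hypothetical shapes: 20 samples, each with 17 features
x = torch.randn(20, 17)
w = torch.randn(17, 1, requires_grad=True)
b = torch.randn(1, requires_grad=True)

y_pred = simple_network(x)  # resulting shape: (20, 1)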
PyTorch also provides a higher-level abstraction in torch.nn, called layers, which takes care of the underlying initialization and the operations associated with most common neural network techniques. We are using the lower-level operations here to understand what happens inside these functions. In later chapters, namely Chapter 5, Deep Learning for Computer Vision, and Chapter 6, Deep Learning with Sequence Data and Text, we will rely on the PyTorch abstractions to build complex neural networks and functions. The previous model can be represented as a torch.nn layer, as follows:
f = nn.Linear(17,1) # Much simpler.
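To see the equivalence, the layer can be called on an input batch directly. The shapes below are illustrative assumptions matching the in_features=17, out_features=1 arguments above:

import torch
import torch.nn as nn

f = nn.Linear(17, 1)      # creates a weight of shape (1, 17) and a bias of shape (1,) internally
x = torch.randn(20, 17)   # hypothetical batch of 20 samples with 17 features
y_pred = f(x)             # equivalent to x @ f.weight.t() + f.bias; output shape (20, 1)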
Now that we have calculated the y values, we need to know how good our model is, which is measured by a loss function.
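As a rough preview, and purely as an illustration with made-up tensors, a squared-error loss for a regression problem like this one could be computed as follows; the next section discusses the loss function used for the model:

import torch

# Hypothetical predictions and targets, just to illustrate the computation
y_pred = torch.randn(20, 1)
y = torch.randn(20, 1)

# Sum of squared errors: how far the predictions are from the targets
loss = (y_pred - y).pow(2).sum()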