R Deep Learning Essentials
Mark Hodnett and Joshua F. Wiley
Neural networks as a network of memory cells
Another way to consider neural networks is to compare them to how humans think. As their name suggests, neural networks draw inspiration from the neurons and neural processes of the brain. A neural network contains a series of neurons, or nodes, which are interconnected and process input. The neurons have weights that are learned from previous observations (data). The output of a neuron is a function of its inputs and its weights. The activation of the final neuron(s) is the prediction.
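To make this concrete, here is a minimal sketch in R (not taken from the book) of a single artificial neuron. The input values, weights, and bias are hypothetical numbers chosen purely for illustration; a sigmoid activation squashes the weighted sum into an output between 0 and 1.

```r
# A single artificial neuron: its output is an activation function
# applied to the weighted sum of its inputs plus a bias term.
sigmoid <- function(z) 1 / (1 + exp(-z))

neuron_output <- function(inputs, weights, bias) {
  sigmoid(sum(inputs * weights) + bias)
}

# Hypothetical inputs and weights (in a real network, weights are learned from data)
inputs  <- c(0.5, 0.9, 0.1)
weights <- c(0.8, -0.4, 0.3)
neuron_output(inputs, weights, bias = 0.1)  # an activation between 0 and 1
```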
We will consider a hypothetical case where a small part of the brain is responsible for matching basic shapes, such as squares and circles. In this scenario, some neurons at the basic level fire for horizontal lines, another set of neurons fires for vertical lines, and yet another set fires for curved segments. These neurons feed into a higher-order process that combines their input to recognize more complex objects, for example, a square when the horizontal-line and vertical-line neurons are activated simultaneously.
In the following diagram, the input data is represented as squares. These could be pixels in an image. The next layer of hidden neurons consists of neurons that recognize basic features, such as horizontal lines, vertical lines, or curved lines. Finally, the output may be a neuron that is activated by the simultaneous activation of two of the hidden neurons:

In this example, the first node in the hidden layer is good at matching horizontal lines, while the second node in the hidden layer is good at matching vertical lines. These nodes remember what these features are. When their outputs are combined, more sophisticated objects can be detected. For example, if the hidden layer recognizes both horizontal lines and vertical lines, the object is more likely to be a square than a circle. This is similar to how convolutional neural networks work, which we will cover in Chapter 5, Image Classification Using Convolutional Neural Networks.
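As a rough illustration of this idea, the following R sketch (with hand-picked, hypothetical weights rather than learned ones) builds two hidden "feature" neurons that respond to the horizontal and vertical edges of a 3 x 3 binary image, plus an output neuron that activates only when both features are present, signalling a square.

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

detect_square <- function(img) {   # img is a 3x3 matrix of 0/1 pixels
  # Hidden neuron 1: weights on the top and bottom rows (horizontal edges)
  h_weights <- matrix(0, 3, 3); h_weights[1, ] <- 1; h_weights[3, ] <- 1
  # Hidden neuron 2: weights on the left and right columns (vertical edges)
  v_weights <- matrix(0, 3, 3); v_weights[, 1] <- 1; v_weights[, 3] <- 1

  h_neuron <- sigmoid(sum(img * h_weights) - 3)  # fires when horizontal edges are present
  v_neuron <- sigmoid(sum(img * v_weights) - 3)  # fires when vertical edges are present

  # Output neuron: activates only when both hidden neurons fire together
  sigmoid(5 * h_neuron + 5 * v_neuron - 7)
}

square <- matrix(c(1, 1, 1,
                   1, 0, 1,
                   1, 1, 1), nrow = 3, byrow = TRUE)
line   <- matrix(c(1, 1, 1,
                   0, 0, 0,
                   0, 0, 0), nrow = 3, byrow = TRUE)

detect_square(square)  # close to 1: both hidden neurons activate
detect_square(line)    # close to 0: only the horizontal-line neuron activates
```

In a real network, these weights would be learned from data rather than set by hand; convolutional neural networks, covered in Chapter 5, learn exactly this kind of local feature detector.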
We have covered the theory behind neural networks very superficially here as we do not want to overwhelm you in the first chapter! In future chapters, we will cover some of these issues in more depth, but in the meantime, if you wish to get a deeper understanding of the theory behind neural networks, the following resources are recommended:
- Chapter 6 of Goodfellow, I., Bengio, Y., and Courville, A. (2016), Deep Learning
- Chapter 11 of Hastie, T., Tibshirani, R., and Friedman, J. (2009), The Elements of Statistical Learning, which is freely available at https://web.stanford.edu/~hastie/Papers/ESLII.pdf
- Chapter 16 of Murphy, K. P. (2012), Machine Learning: A Probabilistic Perspective
Next, we will turn to a brief introduction to deep neural networks.