- Python Deep Learning
- Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca
An introduction to neurons
A neuron is a mathematical function that takes one or more input values and outputs a single numerical value:

$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$
The neuron is defined as follows:

- First, we compute the weighted sum $a = \sum_{i=1}^{n} w_i x_i + b$ of the inputs $x_i$ and the weights $w_i$ (also known as the activation value). Here, $x_i$ is either a numerical value that represents the input data or the output of another neuron (that is, if the neuron is part of a neural network).
- The weights $w_i$ are numerical values that represent either the strength of the inputs or, alternatively, the strength of the connections between the neurons.
- The weight $b$ is a special value called the bias, whose corresponding input is always 1.
- Then, we use the result of the weighted sum as input to the activation function $f$, which is also known as the transfer function. There are many types of activation functions, but they all have to be non-linear, a requirement we'll explain later in the chapter; a short code sketch of the full computation follows this list.
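To make these steps concrete, here is a minimal NumPy sketch of a single neuron. The function name `neuron`, the choice of `np.tanh` as the activation, and the example values are illustrative assumptions, not code from the book:

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """A single neuron: weighted sum of the inputs plus the bias,
    passed through a non-linear activation function f."""
    a = np.dot(w, x) + b   # activation value: sum_i w_i * x_i + b
    return f(a)

# Three inputs with arbitrarily chosen weights and bias
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
b = 0.1
print(neuron(x, w, b))  # a single numerical output
```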
As we mentioned in Chapter 1, Machine Learning – an Introduction, the activation value defined previously can be interpreted as the dot product between the vector $\mathbf{w}$ and the vector $\mathbf{x}$, plus the bias: $a = \mathbf{w} \cdot \mathbf{x} + b$. The vector $\mathbf{x}$ will be perpendicular to the weight vector $\mathbf{w}$ if $\mathbf{w} \cdot \mathbf{x} = 0$. Therefore, all vectors $\mathbf{x}$ such that $\mathbf{w} \cdot \mathbf{x} = 0$ define a hyperplane in the feature space $\mathbb{R}^n$, where $n$ is the dimension of $\mathbf{x}$.
That sounds complicated! To better understand it, let's consider a special case where the activation function is f(x) = x and we only have a single input value, x. The output of the neuron then becomes y = wx + b, which is the equation of a line. This shows that in a one-dimensional input space, the neuron defines a line. If we visualize the same for two or more inputs, we'll see that the neuron defines a plane or, for an arbitrary number of input dimensions, a hyperplane.
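As a quick numeric check of this geometric picture (the weights and points below are arbitrary choices for illustration, not taken from the book), a two-input neuron with the identity activation outputs zero exactly on the line $2x_1 - x_2 + 1 = 0$ and a non-zero value elsewhere:

```python
import numpy as np

# A two-input neuron with identity activation: y = w1*x1 + w2*x2 + b
w = np.array([2.0, -1.0])
b = 1.0

# Points chosen to lie on the line 2*x1 - x2 + 1 = 0
on_line = np.array([[0.0, 1.0], [1.0, 3.0], [-0.5, 0.0]])
print(on_line @ w + b)            # [0. 0. 0.] -- the neuron is zero on its hyperplane

# A point off the line gives a non-zero output; its sign tells us
# which side of the hyperplane the point falls on
print(np.dot(w, [1.0, 0.0]) + b)  # 3.0
```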
In the following diagram, we can also see that the role of the bias, b, is to allow the hyperplane to shift away from the origin of the coordinate system. If we don't use a bias, the neuron will have limited representational power:

We already know from Chapter 1, Machine Learning – an Introduction, that the perceptron (and hence the neuron) only works with linearly separable classes, and now we know why: because it defines a hyperplane. To overcome this limitation, we'll need to organize the neurons in a neural network.
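The book develops this idea in the following sections. As a brief illustration (the weights below are hand-picked for the classic XOR example, not trained, and the helper names are assumptions), no single neuron can compute XOR because its classes are not linearly separable, but a small two-layer arrangement of neurons can:

```python
import numpy as np

def neuron(x, w, b, f):
    return f(np.dot(w, x) + b)

step = lambda a: 1.0 if a > 0 else 0.0  # a simple non-linear activation

# XOR is not linearly separable, so a single neuron (one hyperplane) cannot
# compute it. Two hidden neurons (OR and NAND) feeding an AND neuron can.
def xor_net(x1, x2):
    x = np.array([x1, x2])
    h1 = neuron(x, np.array([1.0, 1.0]), -0.5, step)    # OR
    h2 = neuron(x, np.array([-1.0, -1.0]), 1.5, step)   # NAND
    return neuron(np.array([h1, h2]), np.array([1.0, 1.0]), -1.5, step)  # AND

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, '->', xor_net(*inputs))   # 0, 1, 1, 0
```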