  • Python Deep Learning
  • Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca

An introduction to layers

A neural network can have an arbitrary number of neurons, which are organized in interconnected layers. The input layer represents the dataset and the initial conditions. For example, if the input is a grayscale image, the output of each neuron in the input layer is the intensity of one pixel of the image. Because the input layer only passes the data along without transforming it, we generally don't count it among the network's layers. When we say 1-layer net, we actually mean a simple network with just a single layer, the output layer, in addition to the input layer.

Unlike the examples we've seen so far, the output layer can have more than one neuron. This is especially useful in classification, where each output neuron represents one class. For example, in the case of the Modified National Institute of Standards and Technology (MNIST) dataset, we'll have 10 output neurons, where each neuron corresponds to a digit from 0 to 9. In this way, we can use the 1-layer net to classify the digit on each image. We'll determine the digit by taking the output neuron with the highest activation function value. If this is y7, we'll know that the network thinks that the image shows the number 7.
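The argmax-style decision described above can be sketched in a few lines of NumPy. The activation values here are made up purely for illustration; a real network would produce them from its weighted inputs:

```python
import numpy as np

# Hypothetical activations of the 10 output neurons for one image
# (illustrative values only, not from a trained network).
outputs = np.array([0.02, 0.01, 0.05, 0.04, 0.03,
                    0.02, 0.01, 0.75, 0.04, 0.03])

# The predicted digit is the index of the most active output neuron.
predicted_digit = np.argmax(outputs)
print(predicted_digit)  # 7
```

Since output neuron 7 has the highest activation (0.75), the network's prediction for this image is the digit 7.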

In the following diagram, you can see the 1-layer feedforward network. In this case, we explicitly show the weights w for each connection between the neurons, but usually, the edges connecting neurons represent the weights implicitly. Weight wij connects the i-th input neuron with the j-th output neuron. The first input, 1, is the bias unit, and the weight, b1, is the bias weight:

1-layer feedforward network

In the preceding diagram, we see the 1-layer neural network in which the neurons on the left represent the input with bias b, the middle column represents the weights for each connection, and the neurons on the right represent the output given the weights w.

The neurons of one layer can be connected to the neurons of other layers, but not to other neurons of the same layer. In this case, the input neurons are connected only to the output neurons.
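A forward pass through such a 1-layer network is just a matrix-vector product followed by an activation function. The following is a minimal NumPy sketch, assuming MNIST-sized inputs and a sigmoid activation; the sizes and random values are illustrative:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

n_inputs, n_outputs = 784, 10  # e.g. a 28x28 grayscale image, 10 digit classes

# W[i, j] is the weight w_ij connecting input neuron i to output neuron j;
# b holds one bias weight per output neuron.
W = rng.normal(0.0, 0.01, size=(n_inputs, n_outputs))
b = np.zeros(n_outputs)

x = rng.random(n_inputs)   # stand-in for pixel intensities in [0, 1)
y = sigmoid(x @ W + b)     # activation of each output neuron

print(y.shape)  # (10,)
```

Because the input neurons connect only to the output neurons, the whole layer collapses into the single expression x @ W + b, which is why layered organization maps so naturally onto vector and matrix operations.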

But why do we need to organize the neurons in layers in the first place? One argument is that a single neuron can convey only limited information (just one value). But when we combine the neurons in layers, their outputs compose a vector and, instead of a single activation, we can now consider the vector in its entirety. In this way, we can convey a lot more information, not only because the vector has multiple values, but also because the relative ratios between them carry additional information.