
The workings of ANNs

We have seen how a single neuron, or perceptron, works; now, let's expand this concept to deep learning. The following diagram shows what multiple perceptrons look like:

Fig 2.12: Multiple perceptrons

In the preceding diagram, we can see various layers of single perceptrons connected to each other through their inputs and outputs. The input layer is violet, the hidden layers are blue and green, and the output layer of the network is represented in red.

The input layer takes in real values from the data, so it receives the actual data as input. The next layers are the hidden layers, which sit between the input and output layers. If three or more hidden layers are present, the network is considered a deep neural network. The final layer is the output layer, where we produce the final estimate of whatever quantity we are trying to predict. As we progress through more layers, the level of abstraction increases. A minimal sketch of this layered structure is shown below.
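To make the flow from the input layer through the hidden layers to the output layer concrete, here is a minimal NumPy sketch of a forward pass. The layer sizes, random weights, and ReLU non-linearity are illustrative assumptions, not values from the diagram:

import numpy as np

def relu(x):
    # Placeholder non-linearity applied at each hidden layer
    return np.maximum(0, x)

# Illustrative sizes: 4 inputs, two hidden layers (5 and 3 units), 1 output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 5))   # input layer  -> hidden layer 1
W2 = rng.normal(size=(5, 3))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(3, 1))   # hidden layer 2 -> output layer
b1, b2, b3 = np.zeros(5), np.zeros(3), np.zeros(1)

def forward(x):
    # Each layer computes a weighted sum plus bias, applies the
    # non-linearity, and feeds its output to the next layer
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3        # raw output: the network's final estimate

x = rng.normal(size=(1, 4))    # one sample with 4 real-valued features
print(forward(x))

Each hidden layer's output becomes the next layer's input, which is exactly the chain of connected perceptrons shown in the preceding diagram.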

In the next section, we will cover an important topic in deep learning: activation functions and their types.
