- Hands-On Java Deep Learning for Computer Vision
- Klevis Ramo
Building a neural network
So now that we have the structure for one neuron, it's time to build a neural network. A neural network, just like a neuron, has three parts:
- The input layer
- The output layer
- The hidden layers
The following diagram should help you visualize the structure better:

Usually, we have many hidden layers with hundreds or thousands of neurons, but here, we have just two hidden layers: one with a single neuron and the second with three neurons.
The first hidden layer gives us one output, obtained by applying the activation function to the weighted input. By multiplying this output by three different weights, we produce three values and pass them to the three neurons of the next layer, each of which applies its own activation function. Lastly, we sum up these values and apply a sigmoid function to the result to obtain the final output. You could add more hidden layers to this as well.
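The forward pass just described can be sketched in Java. This is a minimal illustration with hypothetical names and made-up weight values (it is not the book's code, and biases are omitted for simplicity): one input, a single-neuron hidden layer, a three-neuron hidden layer, and a sigmoid output.

```java
// Minimal forward-pass sketch for the network described above.
public class TinyNetwork {

    // Sigmoid activation applied at each neuron
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // w1: weight into the single neuron of hidden layer 1
    // w2: weights from that neuron to the three neurons of hidden layer 2
    // w3: weights from those three neurons to the output neuron
    static double forward(double x, double w1, double[] w2, double[] w3) {
        double h1 = sigmoid(x * w1);             // hidden layer 1 (one neuron)
        double sum = 0.0;
        for (int j = 0; j < w2.length; j++) {
            double h2 = sigmoid(h1 * w2[j]);     // hidden layer 2 (three neurons)
            sum += h2 * w3[j];                   // weighted contribution of each neuron
        }
        return sigmoid(sum);                     // final output
    }

    public static void main(String[] args) {
        double[] w2 = {0.1, -0.2, 0.3};          // example weights (arbitrary)
        double[] w3 = {0.5, 0.5, 0.5};
        System.out.println(forward(1.0, 0.4, w2, w3));
    }
}
```

Because every neuron ends in a sigmoid, the output is always strictly between 0 and 1, whatever weights are chosen.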
The indexes assigned to each weight in the diagram are decided based on the starting neuron in the first hidden layer and the receiving neuron in the second hidden layer. Thus, the indexes for the weights leaving the first hidden layer are w11, w21, and w31.
The indexes for the Z values are assigned in a similar manner. The first index represents the neuron that requires the weight, and the second index of Z represents the hidden layer that the Z value belongs to.
Similarly, we may want the input layer to be connected to different neurons, and we can do that simply by multiplying the input values by weights. The following diagram depicts an additional neuron in hidden layer 1:

Notice how we have now added a number of other Z values, which are simply the contributions of this new neuron. The second index for these will be 2, because they come from the second neuron.
The last thing to do in this section is to make a clear distinction between weights and Z values that have the same indexes but belong to different hidden layers. To do this, we can apply a superscript, as shown in the following diagram:

This implies that all of these weights and Z values contribute to hidden layer 1. To distinguish further, we can add the superscript 2 to the weights and Z values that contribute to hidden layer 2, making a clear distinction between a weight in layer 1 and the corresponding weight in layer 2. Likewise, we can add the superscript 3 to the weights for the output layer, because those contribute to the third layer, the output layer. The following diagram depicts all the superscripted layers:

In general, we will mention the superscript index only if it is necessary, because it makes the network messy.
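One way to mirror this superscript-plus-subscript convention in code (an illustrative layout of my own, not the book's) is a three-index array `w[l][i][j]`, where `l` plays the role of the superscript layer index:

```java
// Illustrative weight storage: w[l][i][j] is the weight with superscript l+1,
// connecting neuron i of layer l to neuron j of layer l+1.
public class WeightIndexing {

    // Allocate one weight matrix per pair of adjacent layers
    static double[][][] initWeights(int[] layerSizes) {
        double[][][] w = new double[layerSizes.length - 1][][];
        for (int l = 0; l < layerSizes.length - 1; l++) {
            w[l] = new double[layerSizes[l]][layerSizes[l + 1]];
        }
        return w;
    }

    public static void main(String[] args) {
        // input(1) -> hidden1(1) -> hidden2(3) -> output(1), as in the diagrams
        double[][][] w = initWeights(new int[]{1, 1, 3, 1});
        System.out.println(w.length);        // number of "superscript" layers
        System.out.println(w[1][0].length);  // weights leaving hidden layer 1's neuron
    }
}
```

With this layout, the superscript never has to be written explicitly: it is simply the first array index, which is why the book can drop it from the notation whenever the layer is clear from context.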