
Predicting output with a neural network

By combining layers of neurons, we create a stacked function with non-linear transformations and trainable weights, so it can learn to recognize complex relationships. To visualize this, let's transform the neural network from the previous sections into a mathematical formula. First, let's take a look at the formula for a single layer:
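In one common notation, with the variables described below, a single layer computes its output y as:

$$
y = f(X \cdot w + b)
$$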

The X variable is a vector that represents the input for the layer in the neural network. The w parameter represents a vector of weights, one for each of the elements in the input vector, X. In many neural network implementations an additional term, b, is added; this is called the bias, and it increases or decreases the overall level of input required to activate the neuron. Finally, there's a function, f, which is the activation function for the layer.
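To make this concrete, here is a minimal sketch in Python (using NumPy; the variable values and the sigmoid activation are illustrative choices, not taken from the book) of how a single layer computes its output:

```python
import numpy as np

def layer_output(X, w, b, f):
    # f(X . w + b): weighted sum of the inputs, shifted by the bias,
    # then passed through the activation function
    return f(np.dot(X, w) + b)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([0.5, -1.2, 0.3])   # input vector
w = np.array([0.2, 0.7, -0.5])   # one weight per input element
b = 0.1                          # bias term

print(layer_output(X, w, b, sigmoid))  # a single activation between 0 and 1
```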

Now that you've seen the formula for a single layer, let's put together additional layers to create the formula for the neural network:
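Written out for two layers, the stacked formula looks roughly like this, with subscripts distinguishing each layer's weights, biases, and activation functions:

$$
y = f_2\big(f_1(X \cdot w_1 + b_1) \cdot w_2 + b_2\big)
$$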

Notice how the formula has changed. We now have the formula for the first layer wrapped in another layer function. This wrapping, or stacking, of functions continues as we add more layers to the neural network. Each layer introduces more parameters that need to be optimized when we train the neural network. Stacking layers also allows the neural network to learn more complex relationships from the data we feed into it.
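As a rough illustration of how the number of trainable parameters grows with each layer, the sketch below counts the weights and biases in a small, hypothetical two-layer network; the layer sizes are made up for the example:

```python
def layer_parameter_count(inputs, neurons):
    # one weight per input for every neuron, plus one bias per neuron
    return inputs * neurons + neurons

# Hypothetical network: 4 inputs -> 8 hidden neurons -> 3 output neurons
print(layer_parameter_count(4, 8))  # 40 parameters in the first layer
print(layer_parameter_count(8, 3))  # 27 parameters in the second layer
```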

To make a prediction with a neural network, we need to fill in values for all of the parameters in the neural network. Let's assume we already know these values because we trained the network earlier. What's left to provide is the input for the neural network.

The input is a vector of floating-point numbers that represents the data we feed into the neural network. The output is a vector that represents the prediction made by the neural network.
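Putting these pieces together, the following sketch makes a prediction with a small two-layer network whose weights and biases are assumed to come from an earlier training run; all of the numbers and names here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters assumed to come from a previous training run (illustrative values)
w1 = np.array([[0.2, -0.4],
               [0.7,  0.1],
               [-0.5, 0.9]])     # 3 inputs -> 2 hidden neurons
b1 = np.array([0.1, -0.3])
w2 = np.array([[0.6],
               [-0.8]])          # 2 hidden neurons -> 1 output neuron
b2 = np.array([0.05])

def predict(X):
    # Each layer wraps the previous one: f2(f1(X . w1 + b1) . w2 + b2)
    hidden = sigmoid(np.dot(X, w1) + b1)
    return sigmoid(np.dot(hidden, w2) + b2)

X = np.array([0.5, -1.2, 0.3])   # input vector of floating-point numbers
print(predict(X))                # output vector with the predicted value
```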
