- Hands-On Deep Learning for Games
- Micheal Lanham
Neural networks – the foundation
The inspiration for neural networks, or multilayer perceptrons, is the human brain and nervous system. At the heart of our nervous system is the neuron, pictured in the following diagram above its computer analog, the perceptron:

The neurons in our brain collect input, do something with it, and then spit out a response, much like their computer analog, the perceptron. A perceptron takes a set of inputs, sums them all up, and passes the sum through an activation function. That activation function determines whether to send output and, when activated, at what level to send it. Let's take a closer look at the perceptron, as follows:

On the left-hand side of the preceding diagram, you can see the set of inputs being pushed in, plus a constant bias. We will get more into the bias later. The inputs are then multiplied by a set of individual weights and passed through an activation function. In Python code, it is as simple as the example in Chapter_1_1.py:
inputs = [1,2]
weights = [1,1,1]

def perceptron_predict(inputs, weights):
    # weights[0] is the bias; weights[1:] pair with the inputs
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    # Step activation: output 1.0 when the summed activation is non-negative
    return 1.0 if activation >= 0.0 else 0.0

print(perceptron_predict(inputs, weights))
Note how the weights list has one more element than the inputs list; the extra element accounts for the bias (weights[0]). Other than that, you can see we simply loop through the inputs, multiplying each by its designated weight and adding the result to the bias. The activation is then compared to 0.0: if it is greater than or equal to 0.0, we output 1.0; otherwise, we output 0.0. For example, with inputs [1,2] and weights [1,1,1], the activation is 1 + 1*1 + 1*2 = 4, which is >= 0.0, so the perceptron outputs 1.0. Comparing the value to 0 like this is essentially a simple step function. We will spend some time later revisiting various activation functions over and over again; consider this simple model an essential foundation for understanding those functions.
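To give a feel for the activation functions we will revisit later, here is a quick sketch (illustrative code of my own, not from the book's samples) comparing the step function above with two common alternatives, sigmoid and ReLU:

import math

def step(x):
    # The step activation used above: all-or-nothing output
    return 1.0 if x >= 0.0 else 0.0

def sigmoid(x):
    # Squashes any input into the range (0, 1) for a smooth, graded output
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged and clamps negatives to 0
    return max(0.0, x)

for x in [-2.0, 0.0, 2.0]:
    print(x, step(x), sigmoid(x), relu(x))

Swapping one of these functions into perceptron_predict in place of the final comparison changes how the perceptron responds to its summed input, which is the kind of choice we will return to later.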
In the previous code example, we are looking at a single point of input data, [1,2], which is hardly useful when it comes to DL. DL models typically require hundreds, thousands, or even millions of data points, or sets of input data, to train and learn effectively. Fortunately, for a single perceptron, fewer than 10 data points will do.
Let's expand on the preceding example and run a training set of 10 points through the perceptron_predict function by following these steps:
- Enter the following block of code in your preferred Python editor or open Chapter_1_2.py from the downloaded source code:
train = [[1,2],[2,3],[1,1],[2,2],[3,3],[4,2],[2,5],[5,5],[4,1],[4,4]]
weights = [1,1,1]

def perceptron_predict(inputs, weights):
    # weights[0] is the bias; weights[1:] pair with the inputs
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    return 1.0 if activation >= 0.0 else 0.0

# Predict each point in the training set
for inputs in train:
    print(perceptron_predict(inputs, weights))
- This code just extends the earlier example we looked at. In this case, we are testing multiple points of data defined in the train list. Then we just iterate through each item in the list and print out the predicted value.
- Run the code and observe the output. If you are unsure of how to run Python code, make sure you learn how to do so before going any further.
You should see an output of repeating 1.0s, which essentially means every input is classified the same way. This is not very useful. The reason is that we have not yet trained or adjusted the weights to match a known output. What we need to do is train the weights to recognize the data, and we will look at how to do that in the next section.
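As a preview of that training process, here is a minimal sketch of the classic perceptron learning rule. Note that the labels list below is made up purely for illustration (1 when the two coordinates sum to 6 or more); it is not part of the book's dataset, and the learning rate and epoch cap are arbitrary choices:

train = [[1,2],[2,3],[1,1],[2,2],[3,3],[4,2],[2,5],[5,5],[4,1],[4,4]]
labels = [0,0,0,0,1,1,1,1,0,1]  # made-up targets: 1 when x1 + x2 >= 6
weights = [1,1,1]
learning_rate = 0.1

def perceptron_predict(inputs, weights):
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    return 1.0 if activation >= 0.0 else 0.0

for epoch in range(1000):
    errors = 0
    for inputs, label in zip(train, labels):
        error = label - perceptron_predict(inputs, weights)
        if error != 0:
            errors += 1
            # Nudge the bias and each weight toward the correct answer
            weights[0] += learning_rate * error
            for i in range(len(inputs)):
                weights[i+1] += learning_rate * error * inputs[i]
    if errors == 0:
        break  # every training point classified correctly

print(weights)
print([perceptron_predict(inputs, weights) for inputs in train])

After training, the predictions match the labels instead of repeating 1.0, because the adjusted weights now encode a boundary that separates the two groups of points.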