
Evolutionary path to DFNs

Warren McCulloch and Walter Pitts created the first model of artificial neural networks back in 1943. They built the model on something called threshold logic: the inputs were summed, and the output was binary, zero or one, according to whether the sum crossed a threshold. In 1958, Frank Rosenblatt created another model of a neuron, called the perceptron. The perceptron is the simplest model of an artificial neuron and can classify inputs into two classes (we discussed this neuron in Chapter 1, Getting Started with Deep Learning).

The concept of training neural networks by backpropagating errors using the chain rule was developed by Henry J. Kelley in the early 1960s. However, backpropagation had not yet been formulated as a practical training algorithm, and the perceptron model failed to solve the famous XOR problem. In 1986, Geoffrey Hinton, David Rumelhart, and Ronald Williams demonstrated that neural networks with hidden layers can learn non-linear functions through backpropagation. Furthermore, the universal approximation theorem established that such networks are capable of approximating any continuous function. Still, neural networks didn't scale to large problems, and by the 1990s, other machine learning algorithms, such as the support vector machine (SVM), dominated the space.

Around 2006, Hinton once again came up with the idea of stacking layers one over the other and training the parameters of each new layer. Deeper networks were trained using this strategy, and these networks were termed deep feedforward networks. From here, neural networks got a new name: deep learning!
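To see why a single perceptron fails on XOR, here is a minimal sketch of Rosenblatt's learning rule applied to the AND and XOR truth tables. The function names and hyperparameters here are our own illustrative choices, not from any library:

```python
def perceptron_train(samples, epochs=100, lr=0.1):
    """Train one perceptron with the classic update rule:
    w <- w + lr * (target - prediction) * x."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    """Fraction of samples the thresholded unit classifies correctly."""
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
        for (x1, x2), target in samples
    )
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = perceptron_train(AND)
print(accuracy(AND, w, b))  # AND is linearly separable: the rule converges

w, b = perceptron_train(XOR)
print(accuracy(XOR, w, b))  # XOR is not: no single line separates the classes
```

AND is linearly separable, so the perceptron convergence theorem guarantees the rule finds a perfect separator. XOR is not, so no choice of weights and bias can classify all four points, and the training loop simply oscillates; this is exactly the limitation that hidden layers and backpropagation later removed.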

Next, we will discuss the architectural design of DFNs. We will see how the layered structure is built and trained, and what significance the terms deep and feedforward carry.
