Multilayer Perceptron

As discussed earlier, a single perceptron cannot even approximate the XOR function. To overcome this limitation, multiple perceptrons are stacked together to form an MLP, whose layers are connected as a directed graph. This way, the signal propagates in one direction, from the input layer through the hidden layers to the output layer, as shown in the following diagram:

An MLP architecture having an input layer, two hidden layers, and an output layer

Fundamentally, an MLP is one of the simplest FFNNs, having at least three layers: an input layer, a hidden layer, and an output layer. MLPs were first trained with the backpropagation algorithm in the 1980s.
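The two ideas above, forward signal propagation through stacked layers and training by backpropagation, can be sketched in a few lines of NumPy. This is a minimal illustrative example, not an implementation from the text: the hidden-layer size, learning rate, iteration count, and sigmoid activation are all assumed choices. It trains a small one-hidden-layer MLP to fit XOR, the function a single perceptron cannot represent.

```python
import numpy as np

# Illustrative 2-8-1 MLP (input, one hidden layer, output) trained with
# backpropagation on XOR. All hyperparameters here are assumptions.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(10000):
    # Forward pass: the signal flows one way, input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("final MSE:", losses[-1])
print("predictions:", (out > 0.5).astype(int).ravel())
```

Because the hidden layer lets the network combine two linear decision boundaries, the loss falls well below what any single perceptron could achieve on XOR.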