Java Deep Learning Projects
Md. Rezaul Karim
Multilayer Perceptron
As discussed earlier, a single perceptron is incapable of approximating even an XOR function. To overcome this limitation, multiple perceptrons are stacked together to form an MLP, in which the layers are connected as a directed graph. This way, the signal propagates in one direction, from the input layer through the hidden layers to the output layer, as shown in the following diagram:

An MLP architecture with an input layer, two hidden layers, and an output layer
Fundamentally, an MLP is one of the simplest FFNNs, consisting of at least three layers: an input layer, a hidden layer, and an output layer. MLPs were first trained with the backpropagation algorithm in the 1980s.
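To make this concrete, the following is a minimal sketch of such an MLP learning the XOR function with Deeplearning4j, the library used for the projects in this book. The hidden-layer width, learning rate, and iteration count are illustrative assumptions rather than values from the text:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.learning.config.Sgd;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class XorMlp {
    public static void main(String[] args) {
        // The four XOR input patterns and their binary labels
        INDArray input = Nd4j.create(new double[][]{{0, 0}, {0, 1}, {1, 0}, {1, 1}});
        INDArray labels = Nd4j.create(new double[][]{{0}, {1}, {1}, {0}});
        DataSet data = new DataSet(input, labels);

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(123)
            .weightInit(WeightInit.XAVIER)
            .updater(new Sgd(0.5))  // plain SGD; learning rate is an assumption
            .list()
            // Hidden layer: 2 inputs -> 4 units with a nonlinear activation
            .layer(0, new DenseLayer.Builder().nIn(2).nOut(4)
                .activation(Activation.TANH).build())
            // Output layer: sigmoid + cross-entropy for the binary XOR label
            .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.XENT)
                .nIn(4).nOut(1).activation(Activation.SIGMOID).build())
            .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();

        // Each call to fit() runs one forward pass plus backpropagation
        for (int i = 0; i < 2000; i++) {
            model.fit(data);
        }

        System.out.println(model.output(input));  // expected: approximately [0, 1, 1, 0]
    }
}
```

Note that the nonlinear activation in the hidden layer is what allows the network to bend the decision boundary; with purely linear layers, the stack would collapse into an equivalent single perceptron, and XOR would remain unlearnable.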