- Hands-On Neural Networks
- Leonardo De Marchi, Laura Mitchell
How deep learning performs feature engineering
The theoretical advantage of neural networks is that they are universal approximators. The Universal Approximation Theorem states that a feed-forward network with a single hidden layer, a finite number of neurons, and some mild assumptions about the activation function can approximate any continuous function. However, the theorem says nothing about whether the parameters of such a network can be learned algorithmically.
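A minimal sketch of this idea (not from the book): a single hidden layer of tanh units, with randomly fixed hidden weights, can closely approximate a smooth target such as sin(x) once we solve for the output weights by least squares. The network sizes and seed below are arbitrary choices for illustration.

```python
import numpy as np

# One-hidden-layer network: x -> tanh(x W + b) -> linear output.
# Hidden weights W and biases b are fixed at random; only the output
# weights are fitted, which is enough to illustrate approximation power.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()  # target function to approximate

n_hidden = 50
W = rng.normal(size=(1, n_hidden))  # input-to-hidden weights
b = rng.normal(size=n_hidden)       # hidden biases
H = np.tanh(x @ W + b)              # hidden activations, shape (200, 50)

# Solve for output weights minimizing squared error.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((H @ w_out - y) ** 2)
print(mse)
```

With only 50 hidden units the mean squared error is already tiny; adding more units shrinks it further, in line with the theorem.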
In practice, layers are added to the network to increase the non-linearity of the approximated function, and there is substantial empirical evidence that the deeper the network and the more data we feed into it, the better the results. There are some caveats to this statement that we will see later in this book.
Nevertheless, some deep learning tasks still require feature engineering—for example, natural language processing (NLP). In this case, feature engineering can be anything from dividing the text into small subsequences, called n-grams, to a vectorized representation using, for example, word embeddings.
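As a quick illustration (a sketch, not code from the book), extracting word-level n-grams from a sentence takes only a few lines of plain Python:

```python
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) from a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sentence = "deep learning still needs feature engineering".split()
bigrams = ngrams(sentence, 2)
print(bigrams[0])  # -> ('deep', 'learning')
```

Each bigram captures a small piece of local word order, which a downstream model can use as a feature; the same function yields character n-grams if given a string instead of a token list.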