
  • Deep Learning Essentials
  • Wei Di, Anurag Bhardwaj, Jianing Wei
  • 415 words
  • 2021-06-30 19:17:43

Hierarchical feature representation

The learnt features capture both local relationships and inter-relationships for the data as a whole. Moreover, it is not only the learnt features that are distributed; the representations are also hierarchically structured. The previous figure, Comparing deep and shallow architecture, compares the typical structure of shallow versus deep architectures: the shallow architecture often has a flat topology with at most one hidden layer, whereas the deep architecture has many layers arranged hierarchically, with the outputs of lower layers serving as inputs to the higher layers. The following figure uses a more concrete example to show what information is learned at each layer of the hierarchy.
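One way to make the shallow-versus-deep contrast concrete is to count parameters. The sketch below (layer widths are illustrative assumptions, not taken from the text) counts the weights and biases of a fully connected network given its layer widths: a deep stack of narrower layers can connect the same input and output sizes with far fewer parameters than a single wide hidden layer.

```python
def num_params(layer_widths):
    """Total weights + biases for a fully connected net with the given layer widths."""
    return sum(w_in * w_out + w_out            # weight matrix plus bias vector
               for w_in, w_out in zip(layer_widths, layer_widths[1:]))

# Hypothetical widths: same 784-dimensional input and 10-class output.
shallow = [784, 512, 10]             # one wide hidden layer
deep = [784, 128, 128, 128, 10]      # a stack of narrower layers

print(num_params(shallow))  # 407050
print(num_params(deep))     # 134794
```

The deep topology here reaches the same input and output sizes with roughly a third of the parameters, which hints at why composing layers is attractive beyond the representational argument made above.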

As shown in the image, the lower layers focus on edges or colors, while higher layers often focus more on patches, curves, and shapes. Such a representation effectively captures part-and-whole relationships at various granularities and naturally addresses multi-task problems, for example, edge detection or part recognition. The lower layers often represent basic, fundamental information that can be reused across many distinct tasks in a wide variety of domains. For example, Deep Belief Networks have been successfully used to learn high-level structures in a wide variety of domains, including handwritten digits and human motion capture data. The hierarchical structure of the representation mimics the human understanding of concepts, that is, learning simple concepts first and then successfully building up more complex concepts by composing the simpler ones together. It is also easier to monitor what is being learnt and to guide the machine to better subspaces. If one treats each neuron as a feature detector, then deep architectures can be seen as consisting of feature detector units arranged in layers. Lower layers detect simple features and feed into higher layers, which in turn detect more complex features. If the feature is detected, the responsible unit or units generate large activations, which can be picked up by the later classifier stages as a good indicator that the class is present:

Illustration of hierarchical features learned from a deep learning algorithm. Image by Honglak Lee and colleagues, as published in Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations, 2009

The above figure illustrates that each feature can be thought of as a detector, which tries to detect a particular feature (a blob, an edge, a nose, or an eye) in the input image.
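The detector idea can be sketched in code. The following is a minimal, hypothetical example (pure Python, with a made-up 6x6 image and a standard Sobel-style kernel) in which a single filter acts as a lower-layer feature detector: its activation is large exactly where its feature, a vertical edge, is present, and zero over flat regions.

```python
# Hypothetical 6x6 grayscale image: dark left half (0), bright right half (1),
# so it contains one vertical edge down the middle.
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]

# Classic vertical-edge kernel (Sobel-style).
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

def convolve(img, ker):
    """Valid-mode 2D cross-correlation (no padding, stride 1)."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(ker[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

activation = convolve(image, kernel)
# Large activations flag the edge columns; flat regions respond with zero.
print(activation[0])  # [0, 4, 4, 0]
```

In a deep network, many such filters run in parallel per layer, and the next layer applies the same detection logic to their activation maps rather than to raw pixels, which is how the simple-to-complex hierarchy in the figure arises.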
