
Deep Learning Essentials
Wei Di, Anurag Bhardwaj, Jianing Wei

Leaky ReLU and maxout

A Leaky ReLU has a small slope α on the negative side, for example 0.01. The slope α can also be made a learnable parameter of each neuron, as in PReLU neurons (P stands for parametric). The drawback of these activation functions is that the benefit of such modifications is inconsistent across problems.
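
As a minimal sketch of this idea (not code from the book), the following NumPy snippet shows a leaky ReLU with a fixed slope and a PReLU-style variant where the slope is a per-neuron parameter; the function names and the value 0.01 are illustrative choices.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Small fixed slope alpha on the negative side instead of a hard zero.
    return np.where(x > 0, x, alpha * x)

def prelu(x, alpha):
    # Same shape as leaky ReLU, but alpha is a learnable per-neuron parameter
    # (here just a NumPy array standing in for a trainable tensor).
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu(x))                      # [-0.02  -0.005  0.     1.5  ]
print(prelu(x, np.full_like(x, 0.25)))    # [-0.5   -0.125  0.     1.5  ]
```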

Maxout is another attempt to solve the dead neuron problem in ReLU. It takes the form max(w₁ᵀx + b₁, w₂ᵀx + b₂). From this form, we can see that both ReLU and leaky ReLU are just special cases of it; for ReLU, it is max(0, wᵀx + b), that is, w₁ = 0 and b₁ = 0. Although maxout benefits from linearity and does not saturate, it doubles the number of parameters for every single neuron.
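
The sketch below (again, an illustrative NumPy example rather than the book's code) implements a two-piece maxout unit and checks that zeroing one set of weights recovers ReLU; all variable names and shapes are assumptions made for the demonstration.

```python
import numpy as np

def maxout(x, W1, b1, W2, b2):
    # Two-piece maxout: elementwise maximum of two affine transforms,
    # so every output unit carries two sets of weights and biases.
    return np.maximum(x @ W1 + b1, x @ W2 + b2)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))      # batch of 4, input dimension 3
W2 = rng.standard_normal((3, 5))     # output dimension 5
b2 = rng.standard_normal(5)

# With the first piece fixed at zero, maxout reduces to ReLU on x @ W2 + b2.
W1 = np.zeros((3, 5))
b1 = np.zeros(5)
relu_like = maxout(x, W1, b1, W2, b2)
assert np.allclose(relu_like, np.maximum(0.0, x @ W2 + b2))
```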
