
ReLU activation

Rectified linear unit, more commonly known as ReLU, is the most widely used activation function in deep learning models. It suppresses negative values to zero. The reason ReLU is so widely used is that it deactivates the neurons that produce negative values, and this behavior is desirable in most networks containing thousands of neurons. The following is the plot of the ReLU activation function:
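As a quick illustration, the same behavior can be sketched in code (a minimal sketch assuming NumPy; the relu name and the sample values are ours, not from the book):

import numpy as np

def relu(x):
    # Element-wise ReLU: positive values pass through, negatives become zero
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # negative inputs are suppressed to 0, positives are unchanged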

A modified form of ReLU is leaky ReLU. ReLU completely deactivates a neuron that receives a negative value; instead of deactivating such neurons completely, leaky ReLU reduces their effect by a factor of, say, c. The following equation defines the leaky ReLU activation function:
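Written out with the factor c mentioned above, the standard leaky ReLU definition is:

f(x) = \begin{cases} x & \text{if } x > 0 \\ c \cdot x & \text{if } x \le 0 \end{cases}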

The following is the plot of output values from the leaky ReLU activation function:
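A corresponding sketch of leaky ReLU (again assuming NumPy; the leaky_relu name and the value 0.01 for c are illustrative choices, not from the book):

import numpy as np

def leaky_relu(x, c=0.01):
    # Element-wise leaky ReLU: negative inputs are scaled by c instead of zeroed
    return np.where(x > 0, x, c * x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(leaky_relu(x))  # negatives shrink toward zero but remain non-zero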
