- Hands-On Neural Networks
- Leonardo De Marchi, Laura Mitchell
ReLU
ReLU is one of the most commonly used activation functions. It behaves like a linear function when the input is greater than 0; otherwise, it is always equal to 0. It is the analog of half-wave rectification in electrical engineering:

f(x) = max(0, x)
The ReLU function
The range of this function is from 0 to infinity. The issue is that negative inputs are mapped to zero, so the derivative over that region is constant at zero. This can be a problem for backpropagation, since such units stop receiving gradient updates, but in practical cases it usually has little effect.
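As a minimal sketch (not the book's code), assuming NumPy is available, ReLU and its zero-for-negative-inputs derivative can be written as follows:

```python
import numpy as np

def relu(x):
    # Identity for positive inputs, zero otherwise
    return np.maximum(0, x)

def relu_derivative(x):
    # Gradient is 1 for positive inputs and 0 elsewhere
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))             # [0.  0.  0.  1.5 3. ]
print(relu_derivative(x))  # [0. 0. 0. 1. 1.]
```

The printed derivative shows the behavior described above: every negative input contributes a gradient of exactly zero.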
There are a few variants of ReLU; one of the most common is Leaky ReLU, which aims to allow a small positive gradient when the unit is not active. Its formula is as follows:

f(x) = x if x > 0, and f(x) = αx otherwise

Here, α is typically 0.01, as shown in the following diagram:

The Leaky ReLU function
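Again as a hedged sketch rather than the book's implementation, Leaky ReLU and its gradient can be expressed with NumPy; the parameter name alpha for the small negative-side slope is an assumption for illustration:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Identity for positive inputs, a small linear slope (alpha) otherwise
    return np.where(x > 0, x, alpha * x)

def leaky_relu_derivative(x, alpha=0.01):
    # Gradient is 1 for positive inputs and alpha elsewhere
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(leaky_relu(x))             # [-0.02  -0.005  0.     1.5    3.   ]
print(leaky_relu_derivative(x))  # [0.01 0.01 0.01 1.   1.  ]
```

Unlike plain ReLU, the gradient for negative inputs is a small nonzero constant, so inactive units can still be updated during backpropagation.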