- Reinforcement Learning with TensorFlow
- Sayon Dutta
How to choose the right activation function
The choice of activation function depends on the objective of the problem and the properties you need from the network. Some useful rules of thumb are as follows:
Sigmoid functions work well in shallow networks and binary classifiers; in deeper networks, they can lead to vanishing gradients.
ReLU is the most widely used activation function; try Leaky ReLU to avoid dead neurons. In practice, start with ReLU and move to another activation function only if ReLU doesn't give good results.
Use softmax in the output layer for multi-class classification.
Avoid using ReLU in the output layer.
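The properties behind these rules can be sketched with plain NumPy implementations of each activation (a minimal sketch for illustration; the function names and the `alpha` parameter here are my own, not from the book):

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1). For large |x| the gradient approaches 0,
    # which is why deep stacks of sigmoids suffer from vanishing gradients.
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Zero for negative inputs. A neuron whose pre-activation stays negative
    # receives zero gradient and can "die".
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # A small slope (alpha) for negative inputs keeps a nonzero gradient
    # flowing, avoiding dead neurons.
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Output-layer choice for multi-class classification: normalizes raw
    # scores into a probability distribution that sums to 1.
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()
```

For example, `softmax(np.array([1.0, 2.0, 3.0]))` yields probabilities that sum to 1 and preserve the ordering of the scores, while `leaky_relu(-10.0)` returns a small negative value instead of the hard zero that plain ReLU would give.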