
  • R Deep Learning Cookbook
  • Dr. PKS Prakash, Achyutuni Sri Krishna Rao

How to do it...

This section covers the types of activation functions used in multilayer perceptrons. The activation function is one of the critical components of an ANN, as it defines the output of a node for a given input. Many different activation functions are used when building a neural network; a short sketch comparing the common ones follows this list:

  • Sigmoid: The sigmoid activation function, also known as the logistic function, is a continuous function of the form 1/(1+exp(-x)). Its gradient approaches zero for large positive or negative inputs, so the backpropagated terms tend to vanish during training, leading to saturation. In TensorFlow, the sigmoid activation function is defined using the tf.nn.sigmoid function.
  • ReLU: The rectified linear unit (ReLU) is one of the most popular activation functions used in neural networks to capture non-linearity; it is continuous but not smooth. The ReLU function is defined as max(0, x). In TensorFlow, the ReLU activation function is defined as tf.nn.relu.
  • ReLU6: It caps the ReLU function at 6 and is defined as min(max(0, x), 6), so the output stays bounded between 0 and 6. The function is defined in TensorFlow as tf.nn.relu6.
  • tanh: The hyperbolic tangent is another smooth function used as an activation function in neural networks; it is bounded in the range [-1, 1] and is implemented as tf.nn.tanh.
  • softplus: It is a smooth version of ReLU, so its derivative exists everywhere, and is defined as log(exp(x)+1). In TensorFlow, softplus is defined as tf.nn.softplus.
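
As mentioned before the list, the following is a minimal sketch in plain R of how these activations behave. The helper functions sigmoid, relu, relu6, and softplus are defined locally here for illustration only; they are not the TensorFlow operations (tf.nn.sigmoid and so on), which would normally be applied to tensors inside a model graph.

```r
# Plain-R reference implementations of the activation functions above,
# following their mathematical definitions.
sigmoid  <- function(x) 1 / (1 + exp(-x))    # logistic function
relu     <- function(x) pmax(0, x)           # rectified linear unit
relu6    <- function(x) pmin(pmax(0, x), 6)  # ReLU capped at 6
softplus <- function(x) log(exp(x) + 1)      # smooth version of ReLU
# tanh() is available in base R

# Compare the curves over a range of inputs
x <- seq(-6, 6, by = 0.1)
plot(x, sigmoid(x), type = "l", ylim = c(-1.5, 6.5), ylab = "activation")
lines(x, relu(x),     col = "red")
lines(x, relu6(x),    col = "blue")
lines(x, tanh(x),     col = "darkgreen")
lines(x, softplus(x), col = "purple")
legend("topleft",
       legend = c("sigmoid", "relu", "relu6", "tanh", "softplus"),
       col = c("black", "red", "blue", "darkgreen", "purple"), lty = 1)
```

Plotting the curves side by side makes the differences visible: sigmoid and tanh saturate at their bounds, while ReLU and softplus keep growing for positive inputs and ReLU6 flattens out at 6.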