
  • Deep Learning Essentials
  • Wei Di, Anurag Bhardwaj, Jianing Wei

Optimization algorithms

Optimization is the key to how a network learns. Learning is essentially an optimization process: it minimizes the error, or cost, by adjusting the network's weights step by step until it finds a point of least error. A very basic optimization approach is the gradient descent we used in the previous section. However, there are multiple variations that do a similar job with some improvements added. TensorFlow provides multiple optimizers for you to choose from, for example, GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer, AdamOptimizer, FtrlOptimizer, and RMSPropOptimizer. For the API and how to use them, please see this page:

https://www.tensorflow.org/versions/master/api_docs/python/tf/train#optimizers.

These optimizers should be sufficient for most deep learning techniques. If you aren't sure which one to use, GradientDescentOptimizer is a good starting point.
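As a minimal sketch of how these optimizers are used (with the TensorFlow 1.x tf.train API referenced above; the toy quadratic loss and the learning rate of 0.1 are illustrative assumptions, not values from the book), swapping one optimizer for another is typically a one-line change:

import tensorflow as tf  # TensorFlow 1.x API, matching the tf.train page above

# A toy loss: find the w that minimizes (w - 3)^2.
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)

# Pick an optimizer; any of the alternatives below is a drop-in replacement.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
# optimizer = tf.train.RMSPropOptimizer(learning_rate=0.1)
train_step = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step)
    print(sess.run(w))  # converges toward 3.0

Because every optimizer exposes the same minimize() interface, the rest of the training loop stays unchanged when you experiment with different choices.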
