- TensorFlow Reinforcement Learning Quick Start Guide
- Kaushik Balakrishnan
Summary
In this chapter, we were introduced to the basic concepts of RL. We understood the relationship between an agent and its environment, and also learned about the MDP setting. We learned the concept of reward functions and the use of discounted rewards, as well as the idea of value and advantage functions. In addition, we saw the Bellman equation and how it is used in RL. We also learned the difference between an on-policy and an off-policy RL algorithm. Furthermore, we examined the distinction between model-free and model-based RL algorithms. All of this lays the groundwork for us to delve deeper into RL algorithms and how we can use them to train agents for a given task.
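Among the ideas recapped above, the discounted reward is easy to make concrete. As a minimal sketch (not code from the book), the discounted return can be computed by folding the reward sequence backward, with the discount factor gamma and the reward values below chosen purely for illustration:

```python
def discounted_return(rewards, gamma=0.99):
    """Return G_0 = r_0 + gamma*r_1 + gamma^2*r_2 + ... for one episode."""
    g = 0.0
    # Working backward lets each step reuse the already-discounted tail.
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1.0, 0.0, 2.0]  # hypothetical per-step rewards from one episode
print(discounted_return(rewards, gamma=0.9))  # 1 + 0.9*0 + 0.81*2 = 2.62
```

The backward fold is the standard trick for this computation: it avoids recomputing powers of gamma and is the same recursion the Bellman equation expresses, with each return equal to the immediate reward plus the discounted return of the next step.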
In the next chapter, we will investigate our first two RL algorithms: Q-learning and SARSA. Note that in Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will code the agents in plain Python, since both are tabular-learning algorithms. From Chapter 3, Deep Q-Network, onward, we will use TensorFlow to code deep RL agents, as we will require neural networks.