
Summary

In this chapter, we were introduced to the basic concepts of RL. We explored the relationship between an agent and its environment and learned about the MDP setting. We covered reward functions and the use of discounted rewards, as well as the ideas of value and advantage functions. We also saw the Bellman equation and how it is used in RL, learned the difference between on-policy and off-policy RL algorithms, and examined the distinction between model-free and model-based RL algorithms. All of this lays the groundwork for us to delve deeper into RL algorithms and how we can use them to train agents for a given task.
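
As a quick refresher on discounted rewards, the following is a minimal, illustrative sketch (not code from the book) that computes a discounted return G = r_0 + gamma*r_1 + gamma^2*r_2 + ... ; the reward sequence and discount factor here are assumptions chosen purely for illustration.

    # Minimal sketch (illustrative only): computing a discounted return
    # G = r_0 + gamma*r_1 + gamma^2*r_2 + ... for a reward sequence.

    def discounted_return(rewards, gamma=0.99):
        """Return the discounted sum of a sequence of per-step rewards."""
        g = 0.0
        # Iterate backwards so each step folds in the discounted future return once.
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    if __name__ == "__main__":
        rewards = [1.0, 0.0, 0.5, 1.0]   # hypothetical per-step rewards
        print(discounted_return(rewards, gamma=0.9))   # prints 2.134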

In the next chapter, we will investigate our first two RL algorithms: Q-learning and SARSA. Note that in Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will code the agents in plain Python, since they use tabular learning. From Chapter 3, Deep Q-Network, onward, we will use TensorFlow to code deep RL agents, as they require neural networks.
