
When to choose SARSA over Q-learning

As mentioned earlier, Q-learning and SARSA are very similar algorithms, and in fact, Q-learning is sometimes called SARSA-max. When the agent's policy is simply the greedy one (that is, it chooses the highest-valued action from the next state no matter what), Q-learning and SARSA will produce the same results.
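
To see why, compare the two update rules side by side (written in standard notation, with α as the learning rate and γ as the discount factor):

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right] \quad \text{(SARSA)}

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right] \quad \text{(Q-learning)}

Under a purely greedy behavior policy, the action a_{t+1} the agent actually takes is exactly the arg max over the next state's values, so the two targets, and therefore the two updates, are identical.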

In practice, we will not use a simple greedy strategy; instead, we will choose something such as epsilon-greedy, where a fraction of the actions are chosen at random. We will return to this in more depth when we discuss epsilon decay strategies.
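
As a rough sketch of how epsilon-greedy action selection works (the function name, and the assumption that Q is a 2-D array indexed by [state, action], are illustrative rather than taken from this book's code):

    import numpy as np

    def epsilon_greedy_action(Q, state, epsilon, n_actions):
        """With probability epsilon explore; otherwise exploit the current estimates."""
        if np.random.random() < epsilon:
            # Explore: pick any action uniformly at random
            return np.random.randint(n_actions)
        # Exploit: pick the action with the highest estimated value in this state
        return int(np.argmax(Q[state]))

The epsilon parameter is typically started high and reduced over training, which is exactly what the epsilon decay strategies mentioned above control.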

We can, therefore, think of SARSA as a more general version of Q-learning. The algorithms are very similar, and in practice, modifying a Q-learning implementation to SARSA involves nothing more than changing the update method for the Q-values. As we've seen, however, the difference in performance can be profound.
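
As a minimal sketch of that single change (the Q-table layout and function names below are assumed for illustration, not this book's implementation), only the term used to build the update target differs:

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
        # Off-policy target: the best action value available in the next state
        target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
        # On-policy target: the value of the action the agent will actually take next
        target = r + gamma * Q[s_next, a_next]
        Q[s, a] += alpha * (target - Q[s, a])

Note that SARSA needs the next action before it can update, which is where its name, State-Action-Reward-State-Action, comes from.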

In many problems, SARSA will perform better than Q-learning, especially when there is a good chance that the agent will take a random, suboptimal action in the next step, as we saw in the cliff-walking example. In that case, Q-learning's assumption that the agent will follow the optimal policy may be far enough from the truth that SARSA converges faster and with fewer errors.
