
SARSA and the cliff-walking problem

In Q-learning, the agent starts in state S, performs action A, and receives a reward. It then looks at the highest estimated value of any action from its new state, T, and updates the value of the state S-action A pair using that maximum. In SARSA, the agent starts in state S, takes action A, and receives a reward; it then moves to state T, takes action B, and goes back to update the value of the S-A pair based on the estimated value of the action B it actually chose.
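The two update rules can be sketched in Python. This is an illustrative sketch, not the book's code: the tabular Q-function is assumed to be a dict keyed by (state, action) pairs, and alpha (learning rate) and gamma (discount factor) are assumed hyperparameters:

```python
def q_learning_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.99):
    # Off-policy: bootstrap from the best action available in s_next,
    # regardless of which action the agent will actually take there.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)

def sarsa_update(Q, s, a, reward, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy: bootstrap from the action a_next the agent actually took
    # in s_next, exploratory or not.
    old = Q.get((s, a), 0.0)
    target = reward + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = old + alpha * (target - old)
```

The only difference between the two functions is the bootstrap term: the max over all actions versus the value of the one action that was actually taken.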

A famous illustration of the difference in performance between Q-learning and SARSA is the cliff-walking example from Sutton and Barto's Reinforcement Learning: An Introduction (1998). In this gridworld, the agent must walk from a start state to a goal state, and the cells along the bottom edge between them form a cliff.

There is a penalty of -1 for each step the agent takes and a penalty of -100 for falling off the cliff. The optimal path is therefore to run exactly along the edge of the cliff and reach the goal as quickly as possible. This minimizes the number of steps the agent takes and maximizes its reward, as long as it does not fall off the cliff at any point.
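This reward structure can be sketched as a minimal gridworld. The 4x12 layout, the coordinates, and the function names below are illustrative assumptions, not the book's code:

```python
# Minimal cliff-walking environment sketch: start at the bottom-left,
# goal at the bottom-right, with the cliff cells between them.
ROWS, COLS = 4, 12
START, GOAL = (3, 0), (3, 11)
CLIFF = {(3, c) for c in range(1, 11)}

def step(state, action):
    """Apply one of the four moves; return (next_state, reward, done)."""
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    dr, dc = moves[action]
    r = min(max(state[0] + dr, 0), ROWS - 1)  # clip to the grid
    c = min(max(state[1] + dc, 0), COLS - 1)
    if (r, c) in CLIFF:
        return START, -100, False   # fell off the cliff: big penalty, reset
    if (r, c) == GOAL:
        return (r, c), -1, True
    return (r, c), -1, False        # -1 per step rewards short paths
```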

Q-learning takes the optimal path in this example, while SARSA takes the safe path. The result is that, under an epsilon-greedy or other exploration-based policy, there is a nonzero risk that at any point the Q-learning agent will fall off the cliff because it chose to explore.
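That risk comes from the behavior policy itself. A minimal epsilon-greedy sketch (the function name and the dict-based Q-table are assumptions):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    # With probability epsilon, explore with a random action; this is the
    # step that can push an agent walking along the cliff edge over it.
    if random.random() < epsilon:
        return random.choice(actions)
    # Otherwise act greedily with respect to the current Q estimates.
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```

Because Q-learning's values along the edge ignore these random slips, its greedy path hugs the cliff; SARSA's values absorb them, so its greedy path detours away from the edge.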

SARSA, unlike Q-learning, looks ahead to the next action to see what the agent will actually do at the next step, and updates the Q-value of its current state-action pair accordingly. For this reason, it learns that the agent might fall off the cliff and that this would lead to a large negative reward, so it lowers the Q-values of those state-action pairs accordingly.
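This lookahead is visible in the shape of a SARSA control loop: the next action is chosen before the update, used as the bootstrap target, and then actually executed. The toy one-dimensional corridor and all names below are illustrative assumptions, not the book's code:

```python
import random

def toy_step(s, a):
    # Toy corridor: states 0..4, actions -1 (left) and +1 (right),
    # reward -1 per step, episode ends at state 4.
    s2 = min(max(s + a, 0), 4)
    return s2, -1, s2 == 4

def choose(Q, s, epsilon):
    # Epsilon-greedy behavior policy over the two actions.
    if random.random() < epsilon:
        return random.choice([-1, 1])
    return max([-1, 1], key=lambda a: Q.get((s, a), 0.0))

def run_sarsa_episode(Q, epsilon=0.1, alpha=0.5, gamma=1.0, max_steps=500):
    s = 0
    a = choose(Q, s, epsilon)
    for _ in range(max_steps):
        s2, r, done = toy_step(s, a)
        a2 = choose(Q, s2, epsilon)  # commit to the next action *before* updating
        target = r + (0.0 if done else gamma * Q.get((s2, a2), 0.0))
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
        if done:
            break
        s, a = s2, a2                # the committed action is actually executed
```

Because a2 is the action the agent really takes, exploratory moves and their consequences flow back into the Q-values, which is exactly what makes SARSA wary of the cliff edge.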

The result is that Q-learning assumes the agent is following the best possible policy, without checking what that policy actually is, while SARSA takes the agent's actual policy into account (that is, what it really does when it moves to the next state, as opposed to the best possible thing it could be assumed to do). This is why Q-learning is called an off-policy method and SARSA an on-policy method.
