
Relation between the value functions and state

The value function is an agent's estimate of how good a given state is. For instance, if a robot is near the edge of a cliff and may fall, that state is bad and should have a low value. On the other hand, if the robot/agent is near its final goal, that is a good state to be in, as the rewards it will soon receive are high, and so that state will have a higher value.

The value function, V, is updated after reaching a state s_t and receiving a reward r_{t+1} from the environment. The simplest TD learning algorithm is called TD(0) and performs an update using the following equation, where α is the learning rate (0 ≤ α ≤ 1) and γ is the discount factor:

V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) − V(s_t)]

Note that in some reference papers or books, the preceding formula will have r_t instead of r_{t+1}. This is just a difference in convention and is not an error; r_{t+1} here denotes the reward received when leaving state s_t and transitioning to s_{t+1}.
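To make the update concrete, the following is a minimal tabular TD(0) sketch in Python. The Gym-style environment interface (env.reset(), env.step()) and the random behaviour policy are illustrative assumptions, not code from this book:

```python
import numpy as np

# Minimal tabular TD(0) sketch; assumes a classic Gym-style environment
# with a discrete state space and a random behaviour policy (both are
# illustrative assumptions).
def td0(env, num_episodes=500, alpha=0.1, gamma=0.99):
    V = np.zeros(env.observation_space.n)  # one value estimate per state
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            a = env.action_space.sample()        # random policy, for illustration only
            s_next, r, done, _ = env.step(a)     # r plays the role of r_{t+1}
            # TD(0) update: V(s_t) <- V(s_t) + alpha * [r_{t+1} + gamma * V(s_{t+1}) - V(s_t)]
            td_target = r + gamma * V[s_next] * (not done)
            V[s] += alpha * (td_target - V[s])
            s = s_next
    return V
```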

There is also another TD learning variant called TD(λ) that uses eligibility traces, e(s), which are a record of visits to a state. More formally, we perform the TD(λ) update as follows, for all states s:

V(s) ← V(s) + α δ_t e_t(s), where δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t) is the TD error.

The eligibility traces are given by the following equation:

e_t(s) = γλ e_{t−1}(s) + 1(s = s_t)

Here, 1(s = s_t) is 1 if s is the state visited at time step t and 0 otherwise.

Here, e(s) = 0 at t = 0. For each step the agent takes, the eligibility trace of every state decays by a factor of γλ, and the trace of the state visited in the current time step is incremented by 1. Here, 0 ≤ λ ≤ 1, and it is a parameter that decides how much of the credit from a reward is assigned to distant states. Next, we will look at the theory behind our next two RL algorithms, SARSA and Q-learning, both of which are quite popular in the RL community.
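Before moving on, here is a minimal sketch of TD(λ) with accumulating eligibility traces, under the same assumptions as the TD(0) sketch above (a tabular, Gym-style environment and a random behaviour policy used purely for illustration):

```python
import numpy as np

# Minimal tabular TD(lambda) sketch with accumulating eligibility traces;
# the environment interface and random policy are illustrative assumptions.
def td_lambda(env, num_episodes=500, alpha=0.1, gamma=0.99, lam=0.8):
    V = np.zeros(env.observation_space.n)
    for _ in range(num_episodes):
        e = np.zeros_like(V)                 # eligibility traces, e(s) = 0 at t = 0
        s = env.reset()
        done = False
        while not done:
            a = env.action_space.sample()    # random policy, for illustration only
            s_next, r, done, _ = env.step(a)
            # TD error: delta_t = r_{t+1} + gamma * V(s_{t+1}) - V(s_t)
            delta = r + gamma * V[s_next] * (not done) - V[s]
            # decay all traces by gamma * lambda, then increment the current state's trace by 1
            e *= gamma * lam
            e[s] += 1.0
            # update every state in proportion to its eligibility
            V += alpha * delta * e
            s = s_next
    return V
```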
