
Understanding TD learning

We will first learn about TD learning, a fundamental concept in RL. In TD learning, the agent learns from experience: it runs several trial episodes in the environment and uses the rewards it accrues to update its value functions. Specifically, the agent updates its state-action value function as it experiences new states and actions. The Bellman equation supplies the update target, and the goal is to minimize the TD error, that is, the difference between the Bellman target and the current value estimate. By lowering the TD error, the agent reduces its uncertainty about which action is optimal in a given state and gains confidence in that action.
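
To make this concrete, here is a minimal sketch of a tabular TD update in the Q-learning style. The environment, the transition numbers, and the `td_update` helper are hypothetical illustrations, not from the original text; the update rule itself is the standard one: move Q(s, a) toward the Bellman target r + γ·max Q(s', ·) by a step proportional to the TD error.

```python
import numpy as np

# Hypothetical tabular setting: a discrete environment with a handful of
# states and actions (placeholder sizes, not from the original text).
n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.99  # learning rate and discount factor

Q = np.zeros((n_states, n_actions))

def td_update(Q, s, a, r, s_next):
    """Move Q(s, a) toward the Bellman target, shrinking the TD error."""
    td_target = r + gamma * np.max(Q[s_next])  # Bellman target
    td_error = td_target - Q[s, a]             # TD error
    Q[s, a] += alpha * td_error                # step toward the target
    return td_error

# Example transition (made-up numbers): from state 0, action 1 yields
# reward 1.0 and lands in state 2.
err = td_update(Q, s=0, a=1, r=1.0, s_next=2)
print(f"TD error: {err:.3f}, updated Q[0, 1]: {Q[0, 1]:.3f}")
```

Repeating this update over many experienced transitions drives the TD error toward zero, which is exactly the sense in which the agent grows more confident about the optimal action in each state.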
