
Off-policy methods

Off-policy methods, on the other hand, use one policy to select actions and a different policy to evaluate and improve performance. For instance, many off-policy algorithms use a replay buffer to store experiences and sample data from this buffer to train the model. During each training step, a mini-batch of experience data is randomly sampled and used to update the policy and value functions. Coming back to the previous robot example, in an off-policy setting the robot does not evaluate its performance with the policy it is currently following; instead, it uses one policy for exploration and another for evaluation. If a replay buffer is used to sample a mini-batch of experience and train the agent, the learning is off-policy: the robot's current policy (the one selecting its immediate actions) differs from the policy that generated the sampled transitions, because the policy has changed since the time those experiences were collected. DQN, DDQN, and DDPG are off-policy algorithms that we'll look at in later chapters of this book.
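To make the replay-buffer idea concrete, here is a minimal sketch of such a buffer in Python. The class and method names are illustrative only, not taken from any particular library or from the algorithms covered later in this book:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer holding transitions collected by the behavior policy."""

    def __init__(self, capacity=100000):
        # Oldest experiences are discarded once the buffer reaches capacity
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        # Every transition is stored, regardless of which version of the
        # policy produced it
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # A random mini-batch is drawn, so most sampled transitions were
        # generated by earlier versions of the policy -- this mismatch
        # between the data-collecting policy and the current policy is
        # what makes the learning off-policy
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones
```

In a DQN-style training loop, the agent would call store() after every environment step and sample() to build the mini-batch for each gradient update.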
