
Markov Decision Process

The Markov decision process, better known as MDP, is a framework used in reinforcement learning for making decisions, for example in a gridworld environment. A gridworld environment consists of states in the form of grids, such as the FrozenLake-v0 environment from OpenAI Gym, which we examined and tried to solve in the previous chapter.

An MDP captures a world in the form of a grid by dividing it into states, actions, a model (also called a transition model), and rewards. The solution to an MDP is called a policy, and the objective is to find the optimal policy for that MDP task.
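The four ingredients just listed can be written down directly in code. The following is a minimal sketch, not the book's code: a made-up 2x2 gridworld with a deterministic transition model, a goal reward, and a policy that maps every state to an action.

```python
# A tiny, hypothetical 2x2 gridworld illustrating the MDP ingredients:
# states, actions, a transition model, rewards, and a policy.

states = [(r, c) for r in range(2) for c in range(2)]  # grid cells as (row, col)
actions = ["up", "down", "left", "right"]

def transition(state, action):
    """Deterministic transition model: move one cell if possible, else stay."""
    r, c = state
    moves = {"up": (r - 1, c), "down": (r + 1, c),
             "left": (r, c - 1), "right": (r, c + 1)}
    nxt = moves[action]
    return nxt if nxt in states else state  # walls block the move

def reward(state, action, next_state):
    """Reward model: +1 for reaching the goal corner, 0 otherwise."""
    return 1.0 if next_state == (1, 1) else 0.0

# A policy maps each state to an action; this one heads toward the goal.
policy = {s: "right" if s[1] == 0 else "down" for s in states}

s = (0, 0)
s2 = transition(s, policy[s])   # apply the policy's action in state s
r = reward(s, policy[s], s2)
```

Following `policy` from `(0, 0)` reaches the goal `(1, 1)` in two steps, collecting the +1 reward on the second transition.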

Thus, any reinforcement learning task composed of a set of states, actions, and rewards that follows the Markov property can be considered an MDP. The Markov property means that the next state depends only on the current state and action, not on the history of earlier states.

In this chapter, we will dig deep into MDPs, states, actions, rewards, policies, and how to solve MDPs using Bellman equations. Moreover, we will cover the basics of partially observable MDPs (POMDPs) and the complexity of solving them. We will also cover the exploration-exploitation dilemma and the famous E3 (Explicit Explore or Exploit) algorithm. Then we will come to the fascinating part, where we will program an agent to learn and play pong using the principles of MDP.

We will cover the following topics in this chapter:

  • Markov decision processes
  • Partially observable Markov decision processes
  • Training the FrozenLake-v0 environment using MDP