
Markov Decision Process

The Markov decision process, better known as MDP, is an approach in reinforcement learning for making decisions in a gridworld environment. A gridworld environment consists of states in the form of grids, such as the FrozenLake-v0 environment from OpenAI Gym, which we examined and solved in the last chapter.

The MDP captures a world in the form of a grid by dividing it into states, actions, models/transition models, and rewards. The solution to an MDP is called a policy, and the objective is to find the optimal policy for the given MDP task.

Thus, any reinforcement learning task composed of a set of states, actions, and rewards that satisfies the Markov property can be considered an MDP.
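To make these components concrete, here is a minimal sketch (not from the book) of a tiny two-state MDP expressed as states, actions, a transition model, and rewards, solved by value iteration, a method based on the Bellman optimality backup covered later in this chapter. All state names, probabilities, and reward values below are illustrative assumptions, not part of the original text.

```python
# A toy MDP: states, actions, transition model P, and rewards R.
states = ["s0", "s1"]
actions = ["stay", "move"]

# P[(state, action)] -> list of (probability, next_state) pairs
P = {
    ("s0", "stay"): [(1.0, "s0")],
    ("s0", "move"): [(0.8, "s1"), (0.2, "s0")],
    ("s1", "stay"): [(1.0, "s1")],
    ("s1", "move"): [(1.0, "s0")],
}

# R[(state, action)] -> immediate reward (staying in s1 is rewarding)
R = {
    ("s0", "stay"): 0.0,
    ("s0", "move"): 0.0,
    ("s1", "stay"): 1.0,
    ("s1", "move"): 0.0,
}

gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality backup
# until the state values (approximately) converge.
V = {s: 0.0 for s in states}
for _ in range(100):
    V = {
        s: max(
            R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
            for a in actions
        )
        for s in states
    }

# The solution to the MDP is a policy: for each state, the action
# with the highest expected discounted return.
policy = {
    s: max(
        actions,
        key=lambda a: R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)]),
    )
    for s in states
}
print(policy)
```

Running this recovers the intuitive optimal policy: move from `s0` toward the rewarding state `s1`, then stay there.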

In this chapter, we will dig deep into MDPs, states, actions, rewards, policies, and how to solve MDPs using Bellman equations. Moreover, we will cover the basics of partially observable MDPs and the complexity of solving them. We will also cover the exploration-exploitation dilemma and the famous E3 (explicit explore or exploit) algorithm. Then we will come to the fascinating part, where we will program an agent to learn and play Pong using the principles of MDPs.

We will cover the following topics in this chapter:

  • Markov decision processes
  • Partially observable Markov decision processes
  • Training the FrozenLake-v0 environment using MDP