
Solving MDPs with RL

RL algorithms are designed to solve exactly the type of optimization problem an MDP frames: finding an optimal policy that maximizes the reward earned by making decisions within the environment.

The rewards offered for taking each action are shown in the preceding MDP diagram as yellow arrows. When we take action a0 and end up in state S0, we get a reward of +5; and when we take action a1 and end up in state S0, we get a reward of -1.

The Taxi-v2 environment has 500 states, as we'll see shortly, so it is not practical to represent them all in a diagram like the preceding one. Instead, we will enumerate them in our Q-table in the next section, and we'll use a state vector to represent the variables we want to keep track of in each state.
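As a quick sketch of what that enumeration looks like in code (assuming the classic gym API from the Taxi-v2 era; newer releases ship Taxi-v3 under the gymnasium package), we can read the state and action counts straight from the environment and preallocate a Q-table with one row per state:

    import gym
    import numpy as np

    env = gym.make('Taxi-v2')  # classic gym API; newer releases use Taxi-v3

    n_states = env.observation_space.n   # 500 discrete states
    n_actions = env.action_space.n       # 6 discrete actions

    # One Q-value per (state, action) pair, initialized to zero.
    q_table = np.zeros((n_states, n_actions))
    print(q_table.shape)  # (500, 6)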

In general, we can keep track of any variables in a Q-learning problem that we think are relevant to our model, and incorporate them into the state vector. The state vector can be treated either as a set of state variables or as a single linearly numbered state; the two views are interchangeable as long as no information about any individual state is lost, no matter how the vector is stored (we'll sketch one such numbering scheme a little later):

The preceding diagram models, in a general way, how an agent and its environment interact: the agent is in a state, takes an action on its environment, and then receives a reward and moves to a new state. In control-process terms, the environment acts on the agent through state and reward information, and the agent acts on the environment through its actions.
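Here is a minimal sketch of that loop, using the classic gym step API (which returns a 4-tuple) and a placeholder agent that simply samples random actions, since we haven't learned a policy yet:

    import gym

    env = gym.make('Taxi-v2')
    state = env.reset()  # the agent begins in an initial state

    for _ in range(100):
        action = env.action_space.sample()  # the agent acts on the environment
        # The environment responds with a new state and a reward.
        next_state, reward, done, info = env.step(action)
        state = next_state
        if done:                 # episode finished; start a new one
            state = env.reset()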

In the case of Taxi-v2 and the other OpenAI Gym environments we'll be using, the state space is predetermined for us, so we do not have to decide which state variables to track or how to enumerate our states. In an environment we design ourselves, we have to choose how to model these attributes as efficiently as possible, as the sketch below illustrates.
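For instance, here is a hypothetical sketch of how Taxi-style state variables (taxi row, taxi column, passenger location, destination) could be packed into a single linear state number and unpacked again. The 5 × 5 × 5 × 4 layout matches Taxi-v2's 500 states, but encode_state and decode_state are our own illustration, not part of the gym API; the sketch also makes concrete the earlier claim that a state vector can be flattened into a single numbered state without losing any information:

    # Taxi-style state variables: taxi_row and taxi_col in 0..4, passenger
    # location in 0..4 (four pickup corners plus "in the taxi"), and
    # destination in 0..3, giving 5 * 5 * 5 * 4 = 500 states.

    def encode_state(taxi_row, taxi_col, passenger, destination):
        """Pack the state vector into one linear state number (0..499)."""
        index = taxi_row
        index = index * 5 + taxi_col
        index = index * 5 + passenger
        index = index * 4 + destination
        return index

    def decode_state(index):
        """Unpack a linear state number back into the state vector."""
        index, destination = divmod(index, 4)
        index, passenger = divmod(index, 5)
        taxi_row, taxi_col = divmod(index, 5)
        return taxi_row, taxi_col, passenger, destination

    # Round-tripping loses no information about the individual state.
    assert decode_state(encode_state(3, 1, 2, 0)) == (3, 1, 2, 0)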

We will also see that in the problems we are working with, we don't need knowledge of any previous states to determine which action to take in our current state; this memorylessness is the Markov property that gives MDPs their name. Every state can be represented by its state vector, and every action is drawn from an action space that the agent chooses from using only its knowledge of the current state.
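In code, that means action selection can be a pure function of the current state alone. Here is a minimal epsilon-greedy sketch (assuming the q_table from earlier; the epsilon value is a hypothetical illustration, not a tuned setting):

    import numpy as np

    def choose_action(q_table, state, epsilon=0.1):
        """Pick an action using only the current state -- no history required."""
        if np.random.random() < epsilon:
            # Explore: try a random action from the action space.
            return np.random.randint(q_table.shape[1])
        # Exploit: take the best-known action for this state.
        return int(np.argmax(q_table[state]))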
