
The random CartPole agent

Although the environment is much more complex than our first example in The anatomy of the agent section, the code of the agent is much shorter. This is the power of reusability, abstractions, and third-party libraries!

So, here is the code (you can find it in Chapter02/02_cartpole_random.py):

import gym

if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    total_reward = 0.0
    total_steps = 0
    obs = env.reset()

Here, we create the environment and initialize the step counter and the reward accumulator. On the last line, we reset the environment to obtain the first observation (which we won't use, as our agent chooses its actions randomly):

    while True:
        action = env.action_space.sample()
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        total_steps += 1
        if done:
            break

    print("Episode done in %d steps, total reward %.2f" % (total_steps, total_reward))

In this loop, we sample a random action, then ask the environment to execute it and return the next observation (obs), the reward, and the done flag. When the episode is over, we stop the loop and report how many steps we have taken and how much reward has been accumulated. If you run this example, you will see something like this (not exactly these numbers, due to the agent's randomness):

rl_book_samples/Chapter02$ python 02_cartpole_random.py
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Episode done in 12 steps, total reward 12.00

As with the interactive session, the warning is not related to our code, but to Gym's internals. On average, our random agent takes 12–15 steps before the pole falls and the episode ends. Most of the environments in Gym have a "reward boundary," which is the average reward that the agent should gain over 100 consecutive episodes to "solve" the environment. For CartPole, this boundary is 195, which means that, on average, the agent must hold the stick for 195 time steps or longer. From this perspective, our random agent's performance looks poor. However, don't be disappointed too early; we are just at the beginning, and soon we will solve CartPole and many other far more interesting and challenging environments.
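To see how far random actions are from this boundary, we can average the episode reward over the same 100 episodes used in the "solved" definition. The following is a minimal sketch of such a check, assuming the classic Gym API used above; in most Gym versions, env.spec.reward_threshold reports the boundary (195.0 for CartPole-v0):

import gym

if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    # The environment's registry spec stores the "solved" boundary
    print("Reward threshold: %s" % env.spec.reward_threshold)

    episodes = 100
    total_reward = 0.0
    for _ in range(episodes):
        env.reset()
        done = False
        while not done:
            # Sample random actions until the episode ends
            _, reward, done, _ = env.step(env.action_space.sample())
            total_reward += reward
    print("Average reward over %d episodes: %.2f" % (episodes, total_reward / episodes))

The gap between this average and the 195 boundary is exactly what the training methods in the following chapters will close.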
