OpenAI Gym (2)

The previous post introduced OpenAI Gym, its relationship to reinforcement learning, and how to install it. Next, let's run a demo to get a feel for the OpenAI Gym platform. Taking CartPole (the inverted pendulum) as an example, create a Python module in the working directory with the following code:

import gym

env = gym.make('CartPole-v0')        # create the CartPole (inverted pendulum) environment
env.reset()                          # reset the environment to an initial state
for _ in range(1000):
    env.render()                             # redraw one frame of the environment
    env.step(env.action_space.sample())      # take a random action
env.close()

Here, env.reset() resets the environment to its initial state, and env.render() redraws one frame of the environment.
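The env.step() call also returns information about each transition. As a minimal sketch (assuming the classic Gym API, in which step() returns an (observation, reward, done, info) tuple), the usual interaction loop looks like this:

import gym

env = gym.make('CartPole-v0')
observation = env.reset()            # initial observation of the environment
for _ in range(1000):
    env.render()
    action = env.action_space.sample()                   # pick a random action
    observation, reward, done, info = env.step(action)   # apply it and observe the result
    if done:                         # the pole fell over or the episode timed out
        observation = env.reset()    # start a new episode
env.close()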
[Animation: the CartPole system under random actions]
As the animation shows, the random control policy diverges and the system quickly loses stability. To try other environments, replace CartPole-v0 above with MountainCar-v0, MsPacman-v0 (requires the Atari dependencies), or Hopper-v1 (requires the MuJoCo dependencies); all of these environments derive from the Env base class.
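For example, switching environments only requires changing the id passed to gym.make(). The sketch below (using MountainCar-v0, which needs no extra dependencies) also prints the action and observation spaces that every Env exposes:

import gym

env = gym.make('MountainCar-v0')     # swap in another environment id here
print(env.action_space)              # e.g. Discrete(3) for MountainCar-v0
print(env.observation_space)         # e.g. a 2-dimensional Box (position, velocity)
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())
env.close()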
