Environments

Coach supports a large number of environments which can be solved using reinforcement learning. For detailed documentation of the environments API, see the environments section. The supported environments are:

  • DeepMind Control Suite - a set of reinforcement learning environments powered by the MuJoCo physics engine.

  • Blizzard Starcraft II - a popular strategy game, wrapped with a Python interface by DeepMind.

  • ViZDoom - a Doom-based AI research platform for reinforcement learning from raw visual information.

  • CARLA - an open-source simulator for autonomous driving research.

  • OpenAI Gym - a library consisting of a set of environments, ranging from games to robotics. Additionally, it can be extended with new environments using the API defined by its authors (see the sketch after this list).

In Coach, we support all the native environments in Gym, along with several extensions such as:

  • Roboschool - a set of environments powered by the PyBullet engine that offer a free alternative to MuJoCo.

  • Gym Extensions - a set of environments that extend Gym for auxiliary tasks (multi-task learning, transfer learning, inverse reinforcement learning, etc.).

  • PyBullet - a physics engine that includes a set of robotics environments.
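
As an illustration of how Gym can be extended with new environments, the sketch below defines a minimal toy environment against the classic `gym.Env` interface. The environment name, corridor length, and reward scheme are hypothetical choices for this example and are not part of Coach or Gym:

```python
import gym
import numpy as np
from gym import spaces


class CorridorEnv(gym.Env):
    """Hypothetical toy environment: the agent must walk to the right end of a corridor."""

    def __init__(self, length=5):
        super().__init__()
        self.length = length
        self.position = 0
        # Two discrete actions: 0 = move left, 1 = move right
        self.action_space = spaces.Discrete(2)
        # The observation is the agent's current position in the corridor
        self.observation_space = spaces.Box(low=0.0, high=float(length),
                                            shape=(1,), dtype=np.float32)

    def reset(self):
        self.position = 0
        return np.array([self.position], dtype=np.float32)

    def step(self, action):
        # Move and clip to the corridor bounds
        self.position = max(0, self.position + (1 if action == 1 else -1))
        done = self.position >= self.length
        reward = 1.0 if done else 0.0
        return np.array([self.position], dtype=np.float32), reward, done, {}
```

An environment written this way can be driven by any Gym-compatible agent; refer to the environments section of the documentation for how to point Coach at a custom Gym environment.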