A set of high-dimensional continuous control environments for use with Unity ML-Agents Toolkit.
Preview MarathonEnvs using the Web Demo
MarathonEnvs is a set of high-dimensional continuous control benchmarks using Unity’s native physics simulator, PhysX. MarathonEnvs can be trained using Unity ML-Agents or any OpenAI Gym compatible algorithm. MarathonEnvs may be useful for:
- Video game researchers interested in applying bleeding-edge robotics research to the domain of locomotion and AI for video games.
- Academic researchers looking to leverage the strengths of Unity and ML-Agents, along with the body of existing research and benchmarks provided by projects such as the DeepMind Control Suite or the OpenAI MuJoCo environments.
Note: This project is the result of a contribution from Joe Booth (@Sohojo), a member of the Unity community who currently maintains the repository. As such, the contents of this repository are not officially supported by Unity Technologies.
- See Web Demo
- Use marathon-envs as an OpenAI Gym environment - see documentation (and the sketch after this list)
- Updated to work with ml-agents 0.14.1 / new inference engine
- Updated to use Unity 2018.4 LTS. Later versions should work, but Unity occasionally makes breaking changes to physics.
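A minimal sketch of the gym usage, assuming a built MarathonEnvs binary and the gym_unity wrapper that ships with ml-agents 0.14.x (the binary path is illustrative; point it at your own build in the envs\ folder):

```python
# Minimal sketch: drive a built MarathonEnvs binary through the OpenAI Gym API.
# Assumes the gym_unity wrapper from ml-agents 0.14.x; the binary path is
# illustrative -- point it at your own build.
from gym_unity.envs import UnityEnv

env = UnityEnv("envs/MarathonEnvs", worker_id=0, use_visual=False)

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Random policy as a stand-in for a trained agent.
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
print("episode return:", total_reward)
env.close()
```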
- Train the agent to complete a backflip based on motion capture data
- Merged from the StyleTransfer experimental repo
- Optimized for Unity3D; fixes some bugs in the DeepMind.xml version
- Merged from the StyleTransfer experimental repo
- Replaces DeepMindHumanoid
- Sparse-reward version of MarathonMan: a single reward is given at the end of the episode (the sketch below shows the idea as a Gym wrapper).
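For illustration only (this is not the repository's implementation), the sparse-reward idea can be written as a Gym wrapper that accumulates the dense reward and pays it out on the final step:

```python
# Illustrative sketch, not the in-repo implementation: accumulate the dense
# reward and emit it only when the episode terminates.
import gym

class SparseRewardWrapper(gym.Wrapper):
    def __init__(self, env):
        super().__init__(env)
        self._accumulated = 0.0

    def reset(self, **kwargs):
        self._accumulated = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._accumulated += reward
        # Zero reward on every step except the last one.
        sparse = self._accumulated if done else 0.0
        return obs, sparse, done, info
```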
- Random terrain environments
- Merged from the AssaultCourse experimental repo
- Set the number of instances of an environment you want for training and inference (see the sketch after this list)
- Environments are spawned from prefabs, so there is no need to duplicate them manually
- Supports selecting from multiple agents in one build
- Unique physics scene per environment (makes environments easier to port, but runs slower)
- SelectEnvToSpawn.cs - optional menu that lets the user select from all agents in the build
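One way to read several spawned agent instances from Python is gym_unity's multi-agent mode; a hedged sketch, assuming the build spawns multiple instances (the binary path and worker_id are illustrative):

```python
# Sketch: read several spawned agent instances at once via gym_unity's
# multi-agent mode (ml-agents 0.14.x). Binary path is illustrative.
from gym_unity.envs import UnityEnv

env = UnityEnv("envs/MarathonEnvs", worker_id=1, multiagent=True)

obs_list = env.reset()  # one observation per spawned agent instance
actions = [env.action_space.sample() for _ in obs_list]
# Lists come back with one entry per agent.
obs_list, rewards, dones, info = env.step(actions)
print(len(obs_list), "agent instances")
env.close()
```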
- Scores the agent against a 'goal' (for example, maximum distance) to distinguish rewards from goals
- Gives the mean and standard deviation over 100 agents (reproduced in the sketch below)
- No need to use the normalize flag in training; helps with OpenAI.Baselines training
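The mean / standard deviation statistic is easy to reproduce offline; a sketch that collects 100 episode returns through the gym wrapper (the random policy and binary path are placeholders):

```python
# Sketch: collect 100 episode returns and report mean / standard deviation,
# mirroring the in-engine histogram statistics. Placeholders throughout.
import numpy as np
from gym_unity.envs import UnityEnv

def episode_return(env):
    """Run one episode with a random policy and return the summed reward."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
        total += reward
    return total

env = UnityEnv("envs/MarathonEnvs", worker_id=2)
scores = np.array([episode_return(env) for _ in range(100)])
print(f"mean={scores.mean():.2f} std={scores.std():.2f}")
env.close()
```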
- 1, 2, 3 - slow-motion modes
- Arrow keys or W-A-S-D - rotate around the agent
- Q-E - zoom in / out
- (1M steps for Hopper, Walker, and Ant; 10M for Humanoid)
- Unity 2018.4 (Download here).
- Clone / download this repo
- Install ml-agents version 0.14.1 via:
pip3 install mlagents==0.14.1
- Build or install the correct runtime for your version into the envs\ folder
- See Training.md for training using ML-Agents
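For example, a typical ml-agents 0.14 invocation looks something like the following (the config file name is an assumption; Training.md has the exact commands for this repo):

mlagents-learn config/marathon_envs_config.yaml --env=envs/MarathonEnvs --run-id=MarathonMan001 --train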
- AAAI 2019 Workshop on Games and Simulations for Artificial Intelligence: Marathon Environments: Multi-Agent Continuous Control Benchmarks in a Modern Video Game Engine
- An early version of this work was presented March 19th, 2018 at the AI Summit - Game Developers Conference 2018
- ActiveRagdollAssaultCourse - Mastering Dynamic Environments
- ActiveRagdollControllers - Implementing a Player Controller
- ActiveRagdollStyleTransfer - Learning from Motion Capture Data
- MarathonEnvsBaselines - Experimental implementation with OpenAI.Baselines and Stable.Baselines
- OpenAI.Gym MuJoCo implementation. A good reference for environment setup, reward functions, and termination functions.
- PyBullet pybullet_envs - somewhat harder than the MuJoCo gym environments, but built on an open-source simulator. Pre-trained environments are available in the stable-baselines zoo.
- DeepMind Control Suite - Set of continuous control tasks.
- DeepMind paper Emergence of Locomotion Behaviours in Rich Environments and video - see page 13, section B.2 for details of the reward functions
- MuJoCo homepage.
- A good primer on the differences between physics engines is 'Physics simulation engines have traditionally made tradeoffs between performance' and its accompanying video.
- MuJoCo Unity Plugin - MuJoCo's Unity plugin, which uses a socket to communicate between MuJoCo (for running the physics simulation and control) and Unity (for rendering).
If you use MarathonEnvs in your research, we ask that you please cite our paper.