This project applies deep reinforcement learning to the RoboCup Rescue Simulator (RCRS), a natural scenario for multiple agents coordinating with each other to achieve shared goals.

These (Linux) instructions cover downloading, building, and running the RoboCup Rescue Simulator (RCRS) integrated with OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms.
- Git
- Gradle 3.4+
- OpenJDK Java 11+
- Python 3.5+
- OpenAI Gym
- Stable Baselines
- Tensorboard 1.14 (note: Stable Baselines is not compatible with Tensorboard 2.0)
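A quick way to confirm the Python-side prerequisites are present is a small stdlib-only check script. The module names `gym`, `stable_baselines`, and `tensorboard` are the usual import names for these packages; adjust them if your installation differs:

```python
import importlib
import sys

# Stable Baselines requires Python 3.5+.
assert sys.version_info >= (3, 5), "Python 3.5+ required"

# Try importing each Python-side prerequisite and record the result.
status = {}
for module in ("gym", "stable_baselines", "tensorboard"):
    try:
        importlib.import_module(module)
        status[module] = "OK"
    except ImportError:
        status[module] = "MISSING"

for module, state in sorted(status.items()):
    print("%-16s %s" % (module, state))
```

Any module reported as `MISSING` should be installed (e.g. via pip) before running the simulator client.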
$ git clone https://github.com/animeshgoyal9/RoboCup-Gym.git
Open a terminal window, navigate to the rcrs-server-master root directory, and compile:
$ ./gradlew clean
$ ./gradlew completeBuild
Open another terminal window, navigate to the rcrs-adf-sample root directory, and compile:
$ sh clean.sh
$ sh compile.sh
Close the second terminal.

In the first terminal, navigate to the boot folder in the rcrs-server-master directory and run the following Python file:

$ python testing.py
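testing.py drives the Gym-wrapped RCRS server with a reinforcement learning algorithm. Since the real environment talks to the simulation server over gRPC, the stand-alone sketch below substitutes a stub environment just to illustrate the Gym reset/step protocol that such a wrapper exposes; the class name, reward, and episode length here are illustrative, not the project's actual API:

```python
import random

class StubRCRSEnv:
    """Illustrative stand-in for the Gym-wrapped RCRS environment.

    The real environment (under rcrs-server-master/boot/RCRS_gym/) would
    forward reset/step calls to the simulation server over gRPC; this
    stub only mimics the Gym reset/step/done protocol.
    """

    def __init__(self, episode_length=10):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # initial observation

    def step(self, action):
        self.t += 1
        obs = [float(self.t)]
        reward = random.random()      # placeholder reward signal
        done = self.t >= self.episode_length
        return obs, reward, done, {}  # Gym's (obs, reward, done, info)

# A minimal rollout loop, as an RL library would run internally.
env = StubRCRSEnv()
obs = env.reset()
episode_reward = 0.0
done = False
while not done:
    action = 0  # a fixed dummy action; a trained policy would choose here
    obs, reward, done, info = env.step(action)
    episode_reward += reward
```

In the actual project, an algorithm from Stable Baselines (rather than this hand-written loop) would interact with the environment through this same interface.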
To visualize the reward over time, losses, etc., you can use Tensorboard. Open a new terminal window and run the following command:
$ tensorboard --logdir ./ppo2_RoboCupGym_tensorboard/
- `./rcrs-server-master/`: folder where the simulation server resides
- `./rcrs-adf-sample/`: folder where the simulation client resides
- `./PyRCRSClient/`: gRPC Python and proto files for the client side
- `./JavaRCRSServer/`: gRPC Java and proto files for the server side
- `./rcrs-server-master/boot/testing.py`: code for applying deep reinforcement learning to RCRS
- `./rcrs-server-master/boot/RCRS_gym/`: folder containing the Gym integration for RCRS
- `./rcrs-server-master/maps/`: maps that can be run in the simulation server
- `./maps/gml/test/map`: default map
- `./maps/gml/test/config`: configuration file associated with the map
| ![]() | ![]() | ![]() |
| --- | --- | --- |
| 5 episodes | 150 episodes | 400 episodes |
| ![]() | ![]() | ![]() |
| --- | --- | --- |
| 5 episodes | 150 episodes | 400 episodes |