
Sinergym




Welcome to Sinergym!



The goal of this project is to create an environment, following the Gymnasium interface, that wraps simulation engines (currently EnergyPlus) for building control using deep reinforcement learning or any other external control strategy.

For more information about Sinergym, please visit our documentation.

To ask questions or report issues, please use our issue tracker. We appreciate your feedback and contributions; check out our CONTRIBUTING.md for details on how to contribute.

The main functionalities of Sinergym are the following:

  • Simulation Engine Compatibility: Uses the EnergyPlus Python API for Python-EnergyPlus communication. Future plans include support for more engines, such as OpenModelica.

  • Benchmark Environments: Provides environments for benchmarking and testing deep RL algorithms or other external strategies, similar to Atari or MuJoCo.

  • Customizable Environments: Allows easy modification of experimental settings. Users can create their own environments or modify the pre-configured ones included in Sinergym.

  • Customizable Components: Enables the creation of new custom components, such as reward functions, wrappers, and controllers, making Sinergym scalable (see the sketch after this list).

  • Automatic Building Model Adaptation: Sinergym automates the process of adapting the building model to the user's changes in the environment definition.

  • Automatic Actuator Control: Controls actuators through the Gymnasium interface based on the user's specification; only the actuator names are required, and Sinergym does the rest.

  • Extensive Environment Information: Provides comprehensive information about Sinergym's background components from the environment interface.

  • Stable Baselines3 Integration: Provides customized functionality, such as callbacks and real-time training logging, for easily testing environments with SB3 algorithms (a minimal training sketch follows the usage example below). Sinergym nonetheless remains agnostic to the DRL algorithm used.

  • Google Cloud Integration: Offers guidance on using Sinergym with Google Cloud infrastructure.

  • Weights & Biases Compatibility: Automates and facilitates the training, reproducibility, and comparison of agents in simulation-based building control problems. WandB helps manage and monitor the model lifecycle.

  • Notebook Examples: Provides example code in notebook format so users can become familiar with the tool.

  • Extensive Documentation, Unit Tests, and GitHub Actions Workflows: Keeps Sinergym an efficient ecosystem for understanding and development.

  • And much more!
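
For instance, customizable components can be combined when creating an environment. Below is a minimal sketch of a custom reward and wrapper configuration; it assumes the module layout and LinearReward parameters of recent Sinergym versions, and the environment ID, variable names, and comfort ranges are illustrative only, so check the documentation for the exact paths and signatures:

import gymnasium as gym
import sinergym
# Assumed module paths; verify against the Sinergym documentation.
from sinergym.utils.rewards import LinearReward
from sinergym.utils.wrappers import NormalizeObservation, LoggerWrapper

# Pass a reward class and its arguments when creating the environment.
# The variable names and comfort ranges below are illustrative only.
env = gym.make(
    'Eplus-5zone-hot-continuous-v1',
    reward=LinearReward,
    reward_kwargs={
        'temperature_variables': ['air_temperature'],
        'energy_variables': ['HVAC_electricity_demand_rate'],
        'range_comfort_winter': (20.0, 23.5),
        'range_comfort_summer': (23.0, 26.0)})

# Stack wrappers to normalize observations and log interactions.
env = NormalizeObservation(env)
env = LoggerWrapper(env)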

This is a project in active development. Stay tuned for upcoming releases.



Project Structure

This repository is organized into the following directories:

  • sinergym/: Contains the source code for Sinergym, including the environment, modeling, simulator, and tools such as wrappers and reward functions.
  • docs/: Online documentation generated with Sphinx, written in reStructuredText (RST).
  • examples/: Jupyter notebooks illustrating use cases with Sinergym.
  • tests/: Unit tests for Sinergym to ensure stability.
  • scripts/: Scripts for various tasks such as agent training and performance checks, allowing configuration using JSON format.

Available Environments

For a complete and up-to-date list of available environments, please refer to our documentation.
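
Since importing sinergym registers all of its environments with Gymnasium, you can also list the available environment IDs programmatically using the standard Gymnasium registry:

import gymnasium as gym
import sinergym  # registers all 'Eplus-*' environments on import

# Print the ID of every registered Sinergym environment.
for env_id in gym.envs.registry:
    if env_id.startswith('Eplus'):
        print(env_id)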

Installation

Please visit INSTALL.md for detailed installation instructions.

Usage example

If you used our Dockerfile during installation, the try_env.py file should already be in your workspace. If you installed everything directly on your local machine, place it inside the cloned repository. In either case, we assume you have a terminal with the appropriate Python version and a working Sinergym installation.

Sinergym uses the standard Gymnasium API, so a basic control loop looks like this:

import gymnasium as gym
import sinergym
# Create the environment
env = gym.make('Eplus-datacenter-mixed-continuous-stochastic-v1')
# Initialize the episode
obs, info = env.reset()
truncated = terminated = False
R = 0.0
while not (terminated or truncated):
    a = env.action_space.sample() # random action selection
    obs, reward, terminated, truncated, info = env.step(a) # get new observation and reward
    R += reward
print('Total reward for the episode: %.4f' % R)
env.close()

A folder will be created in the working directory after creating the environment. It will contain the Sinergym outputs produced during the simulation.
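
As mentioned in the feature list, Sinergym environments also work out of the box with Stable Baselines3, since they follow the Gymnasium API. Here is a minimal training sketch, assuming stable-baselines3 is installed; the algorithm choice and time-step budget are illustrative only:

import gymnasium as gym
import sinergym
from stable_baselines3 import PPO  # assumes stable-baselines3 is installed

env = gym.make('Eplus-datacenter-mixed-continuous-stochastic-v1')
# Any SB3 algorithm can be used; PPO and the budget below are illustrative.
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10_000)
env.close()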

For more examples and details, please visit our usage examples documentation section.

Google Cloud Platform support

For more information about this functionality, please visit our documentation here.

Projects using Sinergym

The following are some of the projects benefiting from the advantages of Sinergym:

📝 If you want to appear in this list, do not hesitate to send us a PR and include the following badge in your repository:

[Sinergym Repo Activity badge]

Citing Sinergym

If you use Sinergym in your work, please cite our paper:

@inproceedings{2021sinergym,
    title={Sinergym: A Building Simulation and Control Framework for Training Reinforcement Learning Agents}, 
    author={Jiménez-Raboso, Javier and Campoy-Nieves, Alejandro and Manjavacas-Lucas, Antonio and Gómez-Romero, Juan and Molina-Solana, Miguel},
    year={2021},
    isbn = {9781450391146},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3486611.3488729},
    doi = {10.1145/3486611.3488729},
    booktitle = {Proceedings of the 8th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation},
    pages = {319–323},
    numpages = {5},
}
