This repo contains an implementation of Foundation, a framework for flexible, modular, and composable environments that model socio-economic behaviors and dynamics in a society with both agents and governments.
Foundation provides a Gym-style API:
- `reset`: resets the environment's state and returns the observation.
- `step`: advances the environment by one timestep and returns the tuple (observation, reward, done, info).
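For illustration, a minimal interaction loop could look as follows. This is only a sketch: it assumes `env_config` is a scenario configuration dict like those used in the tutorial notebooks, and the all-zero actions are placeholders for real policy outputs.

```python
from ai_economist import foundation

# `env_config` is assumed to be a scenario configuration dict (see the
# tutorial notebooks and the experiment configs for complete examples).
env = foundation.make_env_instance(**env_config)

obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    # Placeholder actions: every agent picks action index 0. In practice these
    # come from trained policies, and multi-action agents (e.g. the planner)
    # may expect an array of sub-action indices instead of a single integer.
    actions = {agent_idx: 0 for agent_idx in obs}
    obs, rew, done, info = env.step(actions)
```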
The Foundation framework used in this repository is from:
```
@misc{2004.13332,
  Author = {Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, Richard Socher},
  Title = {The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies},
  Year = {2020},
  Eprint = {arXiv:2004.13332},
}
```
The agents were adjusted so that equality and the state of nature are part of their reward structure. For more information, we refer to our report.
The original training procedure of the AI Economist paper trains the tax agent and the active agents concurrently (after a pretraining phase for the active agents).
To find Stackelberg equilibria, however, the agents have to be trained one after the other. Following the procedure of Gerstgrasser and Parkes, we implemented meta-learning algorithms in which the followers (the active agents) are trained first to behave well given information about the policy of the leader (the tax agent), after which the leader is trained. Several new capabilities had to be added to support this; an illustrative outline is sketched below.
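The following outline shows the two-phase training loop described above in schematic form; the function names (`train_followers`, `train_leader`) are hypothetical placeholders, not this repository's actual API.

```python
def stackelberg_training(env, leader, followers, n_outer_iters,
                         follower_steps, leader_steps):
    """Sketch of leader/follower (Stackelberg) training, not the real implementation."""
    for _ in range(n_outer_iters):
        # Followers (active agents) are trained first, best-responding to the
        # current leader (tax agent) policy, which is exposed to them, e.g.
        # through the current tax brackets or a rolling-window estimate.
        train_followers(env, followers, leader_policy=leader, steps=follower_steps)

        # The leader is then trained against the (approximately) best-responding
        # followers, whose policies are held fixed during this phase.
        train_leader(env, leader, follower_policies=followers, steps=leader_steps)
    return leader, followers
```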
The config files of the experiments can be found here. Some experiments include:
- Assuming that the current tax brackets give the followers enough information to plan optimally
- A Monte Carlo estimate of the leader's policy using a rolling window (see the sketch below)
- Letting the tax agent select taxes from a predetermined set and shift them, again using the rolling-window approach
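As a rough illustration of the rolling-window idea, the followers can condition on an averaged estimate of recently observed tax schedules. The class below is a hypothetical sketch, not the code used in the experiments.

```python
from collections import deque

import numpy as np


class RollingWindowTaxEstimate:
    """Monte Carlo estimate of the leader's tax schedule over a rolling window."""

    def __init__(self, window_size=10, n_brackets=7):
        self.window = deque(maxlen=window_size)
        self.n_brackets = n_brackets

    def update(self, tax_rates):
        # `tax_rates`: marginal rate per bracket observed in the latest tax period.
        self.window.append(np.asarray(tax_rates, dtype=float))

    def estimate(self):
        # Average of the most recent schedules; zeros before any observation.
        if not self.window:
            return np.zeros(self.n_brackets)
        return np.mean(self.window, axis=0)
```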
To get started, you'll need to have Python 3.7+ installed.
Simply use the Python package manager:
```bash
pip install ai-economist
```
- Clone this repository to your local machine:

```bash
git clone https://github.com/Thahit/AI_Econ
```
- Create a new conda environment (named "ai-economist" below; replace with any other name) and activate it:

```bash
conda create --name ai-economist python=3.7 --yes
conda activate ai-economist
```
- Install as an editable Python package:

```bash
cd AI_Econ
pip install -e .
```
Optionally, add `alias aiecon="conda activate ai-economist"` to your shell profile; you can then simply run `aiecon` once to activate the conda environment.
To test your installation, try running:
```bash
conda activate ai-economist
python -c "import ai_economist"
```
To familiarize yourself with Foundation, check out the tutorials in the `tutorials` folder. You can run these notebooks interactively in your browser on Google Colab.
- economic_simulation_basic (Try this on Colab!): Shows how to interact with and visualize the simulation.
- economic_simulation_advanced (Try this on Colab!): Explains how Foundation is built up using composable and flexible building blocks.
- optimal_taxation_theory_and_simulation (Try this on Colab!): Demonstrates how economic simulations can be used to study the problem of optimal taxation.
- multi_agent_gpu_training_with_warp_drive (Try this on Colab!): Introduces our multi-agent reinforcement learning framework WarpDrive, which we then use to train the COVID-19 and economic simulation.
- multi_agent_training_with_rllib (Try this on Colab!): Shows how to perform distributed multi-agent reinforcement learning with RLlib.
- two_level_curriculum_training_with_rllib: Describes how to implement two-level curriculum training with RLlib.
To run these notebooks locally, you need Jupyter. See https://jupyter.readthedocs.io/en/latest/install.html for installation instructions and https://jupyter-notebook.readthedocs.io/en/stable/ for examples of how to work with Jupyter.
- The simulation is located in the `ai_economist/foundation` folder.
The code repository is organized into the following components:
| Component | Description |
| --- | --- |
| base | Contains base classes that can be extended to define Agents, Components, and Scenarios. |
| agents | Agents represent economic actors in the environment. Currently, we have mobile Agents (representing workers) and a social planner (representing a government). |
| entities | Endogenous and exogenous components of the environment. Endogenous entities include labor, while exogenous entities include landmarks (such as Water and Grass) and collectible Resources (such as Wood and Stone). |
| components | Components are used to add particular dynamics to an environment. They also add action spaces that define how Agents can interact with the environment via the Component. |
| scenarios | Scenarios compose Components to define the dynamics of the world. They also compute rewards and expose states for visualization. |
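To make this composition concrete, an environment is typically specified by a config that names a Scenario and a list of (Component, kwargs) pairs. The values below are illustrative and loosely follow the tutorial configuration rather than a specific experiment in this repository.

```python
env_config = {
    # Registered Scenario name: wires world dynamics, rewards, and exposed state.
    "scenario_name": "layout_from_file/simple_wood_and_stone",
    # Components add dynamics plus the action spaces agents use to interact.
    "components": [
        ("Build", {"skill_dist": "pareto", "payment_max_skill_multiplier": 3}),
        ("ContinuousDoubleAuction", {"max_num_orders": 5}),
        ("Gather", {}),
        ("PeriodicBracketTax", {"bracket_spacing": "us-federal"}),
    ],
    # Basic world and episode settings (illustrative values).
    "n_agents": 4,
    "world_size": [25, 25],
    "episode_length": 1000,
}
```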
Please see our Simulation Card for a review of the intended use and ethical review of our framework.
Foundation and the AI Economist are released under the BSD-3 License.