Welcome to my Reinforcement Learning Playground! 🎉 This repository serves as my hands-on lab 🧪 to explore, understand, and implement reinforcement learning algorithms.
The goal of this project is to create a comprehensive guide 📚 to understanding and building reinforcement learning algorithms. The algorithms are demonstrated in the context of the Farama Foundation's Gym (previously known as OpenAI's Gym), which provides a wide variety of environments for testing and comparing these algorithms' performance.
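To give a feel for what Gym provides, here is a minimal sketch (not taken from the notebook) of creating one of the environments used in this project. It assumes the newer `gymnasium` package; the notebook may instead import the older `gym` package, whose `reset`/`step` API differs slightly.

```python
import gymnasium as gym  # assumption: the notebook may import the older `gym` package instead

# Create one of the environments used in this project and inspect its spaces.
env = gym.make("MountainCar-v0")
print(env.observation_space)  # Box(2,): car position and velocity
print(env.action_space)       # Discrete(3): push left, no push, push right

# One interaction step with a random action.
observation, info = env.reset(seed=42)
observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()
```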
In this project, we focus on Q-Learning, a simple yet powerful value-based reinforcement learning algorithm. The entire learning process is captured and optimized using Weights & Biases, a tool for tracking and visualizing metrics 📈 during training. This helps in understanding the learning dynamics and tuning the hyperparameters for the best performance.
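For reference, the heart of tabular Q-Learning is the temporal-difference update sketched below. This is a minimal illustration rather than the notebook's exact code, and it assumes observations have already been discretized into integer state indices (necessary for environments such as MountainCar-v0, whose observations are continuous).

```python
import numpy as np

def epsilon_greedy(q_table, state, epsilon, n_actions, rng):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_table[state]))

def q_update(q_table, state, action, reward, next_state, alpha, gamma):
    """One TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])
```

Here `q_table` would be a NumPy array of shape `(n_states, n_actions)`, and `rng` a generator created with `numpy.random.default_rng()`.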
Key features:

- **Q-Learning implementation:** a value-based RL algorithm that learns the optimal policy for decision-making problems.
- **Hyperparameter tuning:** using Weights & Biases' sweep feature, various hyperparameters of the learning algorithm are experimented with to find the best-performing configuration (a sweep-configuration sketch follows this list).
- **Performance metrics:** tracks and visualizes key performance metrics such as reward per episode, steps per episode, and epsilon (for the ε-greedy policy) over the course of learning.
- **Code modularity:** the code is divided into multiple functions for better readability and easier debugging.
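For the hyperparameter tuning listed above, a Weights & Biases sweep could be configured roughly as follows. The sweep method, parameter names, ranges, and project name are illustrative assumptions; the actual configuration lives in the notebook.

```python
import wandb

# Illustrative sweep configuration (assumed names and ranges, not the notebook's exact values).
sweep_config = {
    "method": "bayes",
    "metric": {"name": "reward_per_episode", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 0.01, "max": 0.5},
        "discount_factor": {"min": 0.90, "max": 0.999},
        "epsilon_decay": {"values": [0.99, 0.995, 0.999]},
    },
}

def train():
    # Placeholder for the notebook's training function: read hyperparameters
    # from the run config, run Q-Learning, and log metrics each episode.
    with wandb.init() as run:
        config = run.config
        ...

sweep_id = wandb.sweep(sweep_config, project="rl-playground")  # project name is a placeholder
wandb.agent(sweep_id, function=train, count=20)
```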
This project is built and tested in Deepnote, a powerful online Jupyter notebook platform. To run this notebook:
- Clone the repository to your local machine.
- Upload the notebook to Deepnote.
- Ensure that you have the necessary libraries installed (check the import statements at the top of the notebook).
- Run the cells to train the Q-Learning algorithm.
- Monitor the training progress and results in the Weights & Biases dashboard (see the logging sketch below).
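For that last step, the metrics mentioned earlier (reward per episode, steps per episode, epsilon) could be logged to the dashboard roughly as shown below. The project name, metric keys, and decay schedule are placeholders rather than the notebook's exact values.

```python
import wandb

run = wandb.init(project="rl-playground")  # placeholder project name

n_episodes = 1000
epsilon = 1.0
for episode in range(n_episodes):
    # ... run one episode of Q-Learning here, producing these values ...
    episode_reward = 0.0   # total reward collected in the episode (placeholder)
    episode_steps = 0      # number of steps taken in the episode (placeholder)
    epsilon = max(0.01, epsilon * 0.995)  # example epsilon-decay schedule

    run.log({
        "reward_per_episode": episode_reward,
        "steps_per_episode": episode_steps,
        "epsilon": epsilon,
    })

run.finish()
```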
I invite you to explore this playground and hope it serves as a useful resource in your own reinforcement learning journey! Your suggestions and contributions to improve this guide are most welcome. Enjoy learning! 🎓
Find detailed explanations, insights, and more in the accompanying Deepnote articles:
| Project | Code and Run | Data Report |
|---|---|---|
| Q-Learning on MountainCar-v0 Environment | Code | Report |
| Q-Learning on LunarLander-v2 Environment | Code | Report |
(Additional links will be added here as the articles are created)