
Inventory Management in a Logistics Distribution Network through Reinforcement Learning

Table of Contents

  • About The Project
  • Installation
  • Usage
  • Project Organization

About The Project

This repository contains the results of my bachelor's thesis on the application of reinforcement learning to inventory management in logistics distribution networks. The project was implemented in Python. Its focus was the creation of an OpenAI Gym reinforcement learning environment containing a distribution network simulation. Features were implemented iteratively, with frequent testing of the environment against common reinforcement learning agents from Stable-Baselines.

Used Libraries

  • OpenAI Gym
  • Stable-Baselines3
  • TensorFlow
  • Matplotlib

(back to top)

Installation

As of July 2022, Python 3.7 needs to be installed because of current library incompatibilities.

  1. Install the base Gym library: `pip install gym`
  2. Install Stable-Baselines3: `pip install stable-baselines3`
  3. For compatibility reasons, install an older version of TensorFlow: `pip install tensorflow==1.15.0`
  4. To display graphs in the main file, install Matplotlib: `pip install matplotlib`
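
A quick way to verify the setup is to import the core libraries and print their versions (a minimal sketch; the printed versions should roughly match the list further below):

```python
# Sanity check: confirm the core libraries import and print their versions.
import gym
import stable_baselines3
import tensorflow as tf
import matplotlib

print("gym:", gym.__version__)
print("stable-baselines3:", stable_baselines3.__version__)
print("tensorflow:", tf.__version__)
print("matplotlib:", matplotlib.__version__)
```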

If problems occur, compare all library versions to the collection below:

All Libraries/Versions used
  • absl-py 1.1.0
  • ale-py 0.7.5
  • argon2-cffi 21.3.0
  • argon2-cffi-bindings 21.2.0
  • astor 0.8.1
  • attrs 21.4.0
  • backcall 0.2.0
  • beautifulsoup4 4.11.1
  • bleach 5.0.1
  • cffi 1.15.1
  • cloudpickle 2.1.0
  • colorama 0.4.5
  • cycler 0.11.0
  • debugpy 1.6.0
  • decorator 5.1.1
  • defusedxml 0.7.1
  • EditorConfig 0.12.3
  • entrypoints 0.4
  • fastjsonschema 2.15.3
  • fonttools 4.33.3
  • gast 0.2.2
  • google-pasta 0.2.0
  • grpcio 1.47.0
  • gym 0.21.0
  • gym-notices 0.0.7
  • h5py 3.7.0
  • importlib-metadata 4.12.0
  • importlib-resources 5.8.0
  • ipykernel 6.15.0
  • ipython 7.34.0
  • ipython-genutils 0.2.0
  • jedi 0.18.1
  • Jinja2 3.1.2
  • joblib 1.1.0
  • jsbeautifier 1.14.4
  • jsonschema 4.6.1
  • jupyter-client 7.3.4
  • jupyter-core 4.10.0
  • jupyterlab-pygments 0.2.2
  • Keras-Applications 1.0.8
  • Keras-Preprocessing 1.1.2
  • kiwisolver 1.4.3
  • lxml 4.9.1
  • Markdown 3.3.7
  • MarkupSafe 2.1.1
  • matplotlib 3.5.2
  • matplotlib-inline 0.1.3
  • mistune 0.8.4
  • nbclient 0.6.6
  • nbconvert 6.5.0
  • nbformat 5.4.0
  • nest-asyncio 1.5.5
  • notebook 6.4.12
  • numpy 1.21.6
  • opencv-python 4.6.0.66
  • opt-einsum 3.3.0
  • packaging 21.3
  • pandas 1.1.5
  • pandocfilters 1.5.0
  • parso 0.8.3
  • pickleshare 0.7.5
  • Pillow 9.1.1
  • pip 22.1.2
  • prometheus-client 0.14.1
  • prompt-toolkit 3.0.30
  • protobuf 3.20.1
  • psutil 5.9.1
  • pycparser 2.21
  • pygame 2.1.0
  • Pygments 2.12.0
  • pyparsing 3.0.9
  • pyrsistent 0.18.1
  • python-dateutil 2.8.2
  • pytz 2022.1
  • pywin32 304
  • pywinpty 2.0.5
  • pyzmq 23.2.0
  • scipy 1.7.3
  • Send2Trash 1.8.0
  • setuptools 57.0.0
  • six 1.16.0
  • soupsieve 2.3.2.post1
  • stable-baselines 2.10.2
  • stable-baselines3 1.5.0
  • tensorboard 1.15.0
  • tensorflow 1.15.0
  • tensorflow-estimator 1.15.1
  • termcolor 1.1.0
  • terminado 0.15.0
  • tinycss2 1.1.1
  • torch 1.12.0
  • tornado 6.2
  • traitlets 5.3.0
  • typing_extensions 4.2.0
  • wcwidth 0.2.5
  • webencodings 0.5.1
  • Werkzeug 2.1.2
  • wheel 0.36.2
  • wrapt 1.14.1
  • zipp 3.8.

(back to top)

Usage

If only the environment is needed, the class can be found in the rl_environment.py file. It can be used as a standard OpenAI Gym environment, as described in the official Gym documentation.
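
As a rough illustration, interaction follows the standard Gym loop. The environment class name and constructor used here are placeholders; refer to rl_environment.py for the actual interface:

```python
# Minimal sketch of the standard Gym interaction loop (gym 0.21 API).
# "DistributionNetworkEnv" is a placeholder name; see rl_environment.py for the actual class.
from rl_environment import DistributionNetworkEnv

env = DistributionNetworkEnv()

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random action as a stand-in for a trained agent
    obs, reward, done, info = env.step(action)
env.close()
```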

The main.ipynb file provides a scenario in which the environment is set up and trained with a Stable-Baselines3 model, followed by the application of the trained agent to a distribution network simulation.
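
In outline, the notebook's training setup resembles the following sketch. The algorithm, environment class name, and hyperparameters are assumptions; main.ipynb contains the actual configuration:

```python
# Sketch of training a Stable-Baselines3 agent on the custom environment and
# applying it to a simulation run. Names and hyperparameters are assumptions.
from stable_baselines3 import PPO
from rl_environment import DistributionNetworkEnv

env = DistributionNetworkEnv()

# Train an agent on the distribution network environment.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# Run the trained agent on one simulated episode.
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```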

(back to top)

Project Organization

The repository contains multiple versions of the artifact. They are organized in folders beginning with the version number and ending with the name of the newest implemented feature. Inside each version folder, the structure is as follows:

└── 0_example_feature               <- Version Folder
    │
    ├── main.ipynb                  <- Main notebook to train RL algorithms and run simulations
    ├── rl_environment.py           <- RL environment
    │ 
    ├── simulation                  <- Folder: Simulation
    │   ├── simulation.py           <- Main simulation class. Controls actor classes. Starts simulations
    │   │
    │   └── actor_classes           <- Folder: Contains all simulation actor classes
    │       ├── class_warehouse     <- Abstract Warehouse class and subclasses RegionalWarehouse and CentralWarehouse
    │       ├── class_customer      <- Customer class
    │       ├── class_manufacturer  <- Manufacturer class (Not available in all versions)
    │
    └── logfiles                    <- Folder: Contains created logfiles from main.ipynb (Not available in all versions)
        ├── logfile_date_time.json  <- Logfile containing information about a training/simulation run
        └── ...

(back to top)
