An Ethical Trajectory Planning Algorithm for Autonomous Vehicles

This repository includes comprehensive ethical trajectory planning software. Its core consists of a quantification of risks in the motion planning of autonomous vehicles and a subsequent fair distribution of these risks. The algorithm is compatible with the CommonRoad simulation environment and can be evaluated and tested empirically. Further information can be found in the paper (see References).

System Requirements

  • Operating System: Linux Ubuntu (tested on 20.04)
  • Programming Language: Python >= 3.7 (tested on 3.8)
  • Software Dependencies: see requirements.txt

Installation

The installation of this repository takes around 10 min and consists of three steps. We recommend using an isolated virtual environment for installation.

  1. Clone this repository with:

    git clone https://github.com/TUMFTM/EthicalTrajectoryPlanning.git

  2. Navigate to the root folder of the repository ([..]/EthicalTrajectoryPlanning) and install requirements:

    pip install -r requirements.txt

  3. Download CommonRoad scenarios by cloning the scenario repository next to this repository:

    git clone https://gitlab.lrz.de/tum-cps/commonroad-scenarios

    so that you have the following folder structure:

    ├── EthicalTrajectoryPlanning (This repository)
    ├── commonroad-scenarios
    

    Results were obtained with commonroad-scenarios at commit 35b9cfb5b89d33249ea0d5507b9465650ebeb289. Instead of cloning, you can download and unpack this commit directly with:

    wget -c https://gitlab.lrz.de/tum-cps/commonroad-scenarios/-/archive/35b9cfb5b89d33249ea0d5507b9465650ebeb289/commonroad-scenarios-35b9cfb5b89d33249ea0d5507b9465650ebeb289.tar.gz -O - | tar -xz
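
To check that everything is in place, the following minimal sketch verifies the folder layout shown above (it only assumes that it is run from the root of this repository):

    # Check the expected sibling folder layout before running the planner.
    from pathlib import Path

    repo_root = Path.cwd()                            # [..]/EthicalTrajectoryPlanning
    scenario_dir = repo_root.parent / "commonroad-scenarios"

    if not scenario_dir.is_dir():
        raise SystemExit("commonroad-scenarios not found next to this repository.")
    print(f"Repository root: {repo_root}")
    print(f"Scenario folder: {scenario_dir}")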

Quick Start Demo

To run the ethical planner on an exemplary default scenario, execute the following command from the root directory of this repository:

  • python planner/Frenet/frenet_planner.py

Exemplary Result

You will see a live visualization of the scenario being solved by the planner. Now you can start with your own experiments by changing the configs or selecting another scenario by adding

  • --scenario <path-to-scenario>

to the command.
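
If you prefer scripting over typing commands, a minimal sketch that wraps the same call (the scenario path below is only a placeholder):

    # Run the Frenet planner on a chosen scenario via its command-line interface.
    import subprocess
    import sys

    scenario = "path/to/scenario.xml"  # placeholder, replace with a CommonRoad scenario file

    subprocess.run(
        [sys.executable, "planner/Frenet/frenet_planner.py", "--scenario", scenario],
        check=True,
    )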

By default, logs are saved and can be analyzed afterwards by running:

  • python planner/Frenet/analyze_tools/analyze_log.py
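
To locate recent log files worth analyzing, a minimal sketch (assuming logs land under planner/Frenet/results; adjust the path if your config writes them elsewhere):

    # List the most recent log artifacts; the results path is an assumption.
    from pathlib import Path

    log_root = Path("planner/Frenet/results")  # assumed default log location
    if log_root.is_dir():
        entries = sorted(log_root.iterdir(), key=lambda p: p.stat().st_mtime, reverse=True)
        for entry in entries[:5]:
            print(entry)
    else:
        print(f"No logs found at {log_root} yet, run the planner first.")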

How to reproduce results

The idea and basic principles of this algorithm are presented in Geisslinger et al. 2023 [1]. The following describes how the results from the paper can be reproduced. To evaluate the planning algorithm on multiple scenarios, execute:

  • python planner/Frenet/plannertools/evaluatefrenet.py

By default, a single scenario is chosen at random from all available scenarios. The number of evaluation scenarios can be changed with the limit_scenarios variable in the script, or by adding --all to the command to evaluate on all available scenarios.

To evaluate with the config settings ethical, ego, or standard, add the corresponding argument, for example:

  • python planner/Frenet/plannertools/evaluatefrenet.py --weights ethical --all

To evaluate on all 2000 scenarios, make sure you have at least 200 GB of free disk space for the log files. For better runtime, we recommend using multiprocessing and a GPU for the prediction network. Evaluating all scenarios in 10 parallel threads with a GPU takes around 48 hours. Results and log files for each run are stored in planner/Frenet/results.
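
To run all three config settings back to back, a minimal sketch using only the flags documented above (it assumes ego and standard are passed to --weights the same way as ethical; drop --all to stick with a single random scenario):

    # Evaluate the ethical, ego, and standard config settings sequentially.
    import subprocess
    import sys

    for weights in ("ethical", "ego", "standard"):
        subprocess.run(
            [
                sys.executable,
                "planner/Frenet/plannertools/evaluatefrenet.py",
                "--weights", weights,
                "--all",
            ],
            check=True,
        )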

Standard evaluation metrics such as cumulated harm over all scenarios are provided within the results (e.g. results/eval/harm.json). planner/Frenet/analyze_tools/analyze_risk_dist.py helps to extract risk values from multiple log files. Boxplots of risk distributions as in Geisslinger et al. 2023 [1] can be generated with planner/Frenet/plot_tools/boxplots_risks.py.
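
A minimal sketch to peek into the harm metrics (only the file path is taken from above; the internal structure of harm.json is not documented here, so this just prints a preview):

    # Print a preview of the cumulated harm metrics from an evaluation run.
    import json
    from pathlib import Path

    harm_file = Path("results/eval/harm.json")  # path as referenced above
    harm = json.loads(harm_file.read_text())
    print(json.dumps(harm, indent=2)[:1000])    # truncated preview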

References

  1. Geisslinger, M., Poszler, F. & Lienkamp, M. An ethical trajectory planning algorithm for autonomous vehicles. Nat Mach Intell (2023). https://doi.org/10.1038/s42256-022-00607-z
  2. Geisslinger, M., Trauth, R., Kaljavesi, G. & Lienkamp, M. Maximum Acceptable Risk as Criterion for Decision-Making in Autonomous Vehicle Trajectory Planning. IEEE Open Journal of Intelligent Transportation Systems (2023). https://doi.org/10.1109/OJITS.2023.3298973

Contributions

  • Maximilian Geisslinger (Main Contributor, [email protected])
  • Rainer Trauth (Computing Performance)
  • Florian Pfab (Master Thesis: Motion Planning with Risk Assessment for Automated Vehicles)
  • Simon Sagmeister (Master Thesis: Neural Networks: Real-time Capable Trajectory Planning through Supervised Learning)
  • Tobias Geissenberger (Bachelor Thesis: Harm Prediction for Risk-Aware Motion Planning of Automated Vehicles)
  • Clemens Krispler (Bachelor Thesis: Motion Planning for Autonomous Vehicles: Developing a Principle of Responsibility for Ethical Decision-Making)
  • Zhi Zheng (Semester Thesis: Parallelization of a Planning Algorithm in the Field of Autonomous Driving supervised by Rainer Trauth)
