Novel algorithms for predicting Remaining Useful Life (RUL) on NASA's benchmark CMAPSS turbofan engine degradation simulation dataset.
Benchmark source: NASA Intelligent Systems Division, Prognostics Center of Excellence - Prognostic Health Management, Predictive Maintenance of Turbofan Engines.
Data-driven prognostics models require data sets with run-to-failure trajectories. To support the development of such methods, this benchmark provides a realistic set of run-to-failure trajectories for a small fleet of aircraft engines operated under realistic flight conditions. The damage propagation modeling used to generate this synthetic data set builds on the modeling strategy of previous work. The data was generated with the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dynamical model and is provided by the NASA Prognostics Center of Excellence (PCoE) in collaboration with ETH Zurich and PARC.
Download Mirror: https://phm-datasets.s3.amazonaws.com/NASA/17.+Turbofan+Engine+Degradation+Simulation+Data+Set+2.zip
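If you prefer to fetch the archive manually, the sketch below downloads and unpacks it into the data directory used later in this README. This is only an illustration: the local archive name and the RUL_DATASETS_DATA_ROOT fallback are assumptions, the download is large, and rul-datasets may also be able to fetch the data for you.

import os
import urllib.request
import zipfile

URL = ("https://phm-datasets.s3.amazonaws.com/NASA/"
       "17.+Turbofan+Engine+Degradation+Simulation+Data+Set+2.zip")
data_root = os.environ.get("RUL_DATASETS_DATA_ROOT", "data")
os.makedirs(data_root, exist_ok=True)
archive = os.path.join(data_root, "turbofan_dataset_2.zip")

if not os.path.exists(archive):
    # large download, this can take a while
    urllib.request.urlretrieve(URL, archive)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(data_root)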
This project uses the PyTorch Lightning powered rul-datasets library
to build the data modules. Hydra, a powerful open-source configuration utility, is used to configure
experiments and compose configuration files. The template offers several loggers to choose from, including the popular MLflow and Weights & Biases.
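Loggers are typically selected through a Hydra config group on the command line (the exact group and option names in this repository may differ). The underlying Lightning logger objects are roughly the following sketch:

from pytorch_lightning.loggers import MLFlowLogger, WandbLogger

# pick one and pass it to the Trainer via the Hydra config
wandb_logger = WandbLogger(project="cmapss")             # Weights & Biases
mlflow_logger = MLFlowLogger(experiment_name="cmapss")   # MLflow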
Install dependencies
# clone project
git clone https://github.com/ozogxyz/cmapss
cd cmapss
# [OPTIONAL] create conda environment
conda create -n myenv python=3.9
conda activate myenv
# install pytorch according to instructions
# https://pytorch.org/get-started/
# install requirements
pip install -r requirements.txt
# rul-datasets library requires the environment variable RUL_DATASETS_DATA_ROOT to be set.
export RUL_DATASETS_DATA_ROOT={path-to-project}/data
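To sanity-check the installation and the data root, a minimal sketch of loading one CMAPSS sub-dataset through rul-datasets could look like the following (CmapssReader and RulDataModule follow the names in the rul-datasets documentation; the fd and batch_size values are arbitrary):

import rul_datasets

reader = rul_datasets.CmapssReader(fd=1)                # sub-dataset FD001
dm = rul_datasets.RulDataModule(reader, batch_size=32)
dm.prepare_data()                                       # fetches/extracts the data if needed
dm.setup("fit")
features, targets = next(iter(dm.train_dataloader()))
print(features.shape, targets.shape)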
Train a model with the default configuration
# train on CPU
python src/train.py trainer=cpu
# train on GPU
python src/train.py trainer=gpu
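These trainer configs presumably select the Lightning accelerator. A rough hand-written equivalent is sketched below; the actual YAML files will set more options, and max_epochs here is just an illustration.

import pytorch_lightning as pl

# trainer=cpu roughly corresponds to
cpu_trainer = pl.Trainer(accelerator="cpu", max_epochs=10)

# trainer=gpu roughly corresponds to
gpu_trainer = pl.Trainer(accelerator="gpu", devices=1, max_epochs=10)

# trainer.fit(model, datamodule=dm)  # model and datamodule come from the Hydra config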
Train a model with a chosen experiment configuration from configs/experiment/
python src/train.py experiment=experiment_name.yaml
You can override any parameter from the command line like this
python src/train.py trainer.max_epochs=20 datamodule.batch_size=64
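Overrides work because src/train.py is a Hydra entry point. A minimal sketch of such an entry point is shown below; the actual script and config paths in this repository may differ.

import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base="1.3", config_path="../configs", config_name="train.yaml")
def main(cfg: DictConfig) -> None:
    # overrides such as trainer.max_epochs=20 or datamodule.batch_size=64
    # are already merged into cfg at this point
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()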