Code and materials for the paper "Optimizing progress variables for ammonia/hydrogen combustion using encoding-decoding networks".


# 📄 Optimizing progress variables for ammonia/hydrogen combustion using encoding-decoding networks

This repository contains code, datasets, and results from the paper:

Kamila Zdybał, James C. Sutherland, Alessandro Parente, *Optimizing progress variables for ammonia/hydrogen combustion using encoding-decoding networks*, 2024.

## Data and results files

Data and results files will be shared separately via Google Drive, as they take up over 5 GB of space.

## Python scripts

### The order of executing scripts

First, run the PV optimization with `RUN-PV-optimization.py` with the desired parameters. Once you have the results files, you can run the quantitative assessment of PVs with `RUN-VarianceData.py`. Both scripts load the appropriate data under the hood using `ammonia-Stagni-load-data.py`.

You have a lot of flexibility in setting different ANN hyper-parameters in those two scripts through the `argparse` Python library. If you're new to `argparse`, check out my short video tutorials.
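As a sketch of what such an `argparse` interface can look like, here is a minimal parser using the flags that appear in the examples in this README. The default values below are illustrative assumptions, not the repository's actual settings:

```python
import argparse

# Minimal sketch of an argparse interface using the flags shown in this README.
# Default values are illustrative assumptions, not the repository's settings.
parser = argparse.ArgumentParser(description="PV optimization (sketch)")
parser.add_argument('--parameterization', type=str, default='f-PV',
                    choices=['f-PV', 'f-PV-h'])
parser.add_argument('--data_type', type=str, default='SLF')
parser.add_argument('--data_tag', type=str, default='NH3-H2-air-25perc')
parser.add_argument('--random_seeds_tuple', type=int, nargs=2, default=[0, 20])
parser.add_argument('--target_variables_indices', type=int, nargs='+',
                    default=[0, 1, 3, 5, 6, 9])
parser.add_argument('--initializer', type=str, default='GlorotUniform')
parser.add_argument('--init_lr', type=float, default=0.001)
# BooleanOptionalAction (Python 3.9+) generates both --pure_streams
# and its negation --no-pure_streams:
parser.add_argument('--pure_streams', action=argparse.BooleanOptionalAction,
                    default=True)

# Parse an example command line instead of sys.argv, for demonstration:
args = parser.parse_args(['--parameterization', 'f-PV',
                          '--init_lr', '0.001',
                          '--no-pure_streams'])
print(args.parameterization, args.init_lr, args.pure_streams)
```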

### Optimizing PVs

Under the hood, the above script uses one of the parameterization-specific scripts, depending on which `--parameterization` you selected.

### Quantitative assessment of PVs

Under the hood, the above script uses one of the parameterization-specific scripts, depending on which `--parameterization` you selected.

### Running Python jobs

Below is a minimal example of running a Python script with all hyper-parameters set as per §2.2 in the paper:

```sh
python RUN-PV-optimization.py --parameterization 'f-PV' --data_type 'SLF' --data_tag 'NH3-H2-air-25perc' --random_seeds_tuple 0 20 --target_variables_indices 0 1 3 5 6 9
```

Alternatively, you can change various parameters (kernel initializer, learning rate, etc.) using the appropriate argument:

```sh
python RUN-PV-optimization.py --initializer 'GlorotUniform' --init_lr 0.001 --parameterization 'f-PV' --data_type 'SLF' --data_tag 'NH3-H2-air-25perc' --random_seeds_tuple 0 20 --target_variables_indices 0 1 3 5 6 9
```

If you'd like to remove pure stream components from the PV definition (the non-trainable pure streams preprocessing discussed in §3.4 in the paper), pass `--no-pure_streams` as an extra argument.
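If you want to sweep several hyper-parameter settings, one option is to assemble the command line programmatically and hand it to `subprocess`. A minimal sketch, where `build_command` is an illustrative helper and the swept learning rates are arbitrary examples, not part of the repository:

```python
# Sketch of assembling the RUN-PV-optimization.py command line programmatically
# for a small hyper-parameter sweep. `build_command` is an illustrative helper,
# not part of the repository.
def build_command(init_lr, parameterization='f-PV', pure_streams=True):
    cmd = ['python', 'RUN-PV-optimization.py',
           '--parameterization', parameterization,
           '--data_type', 'SLF',
           '--data_tag', 'NH3-H2-air-25perc',
           '--random_seeds_tuple', '0', '20',
           '--target_variables_indices', '0', '1', '3', '5', '6', '9',
           '--init_lr', str(init_lr)]
    if not pure_streams:
        cmd.append('--no-pure_streams')
    return cmd

for lr in (0.001, 0.0001):
    print(' '.join(build_command(lr)))
    # To actually launch the job, you could use, e.g.:
    # subprocess.run(build_command(lr), check=True)
```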

To run $(f, PV)$ optimization, use `--parameterization 'f-PV'`.

To run $(f, PV, \gamma)$ optimization, use `--parameterization 'f-PV-h'`.
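The two flags correspond to two- and three-parameter parameterizations, $(f, PV)$ and $(f, PV, \gamma)$ respectively. A small sketch of how a script might map the flag to the dimensionality of the parameterization (the mapping dictionary is illustrative, not the repository's code):

```python
# Illustrative mapping from the --parameterization flag to the number of
# parameters in the resulting parameterization; not the repository's code.
PARAMETERIZATION_DIMENSIONS = {
    'f-PV': 2,    # (f, PV)
    'f-PV-h': 3,  # (f, PV, gamma)
}

def n_parameters(parameterization):
    try:
        return PARAMETERIZATION_DIMENSIONS[parameterization]
    except KeyError:
        raise ValueError(f"Unknown parameterization: {parameterization!r}")

print(n_parameters('f-PV'))    # → 2
print(n_parameters('f-PV-h'))  # → 3
```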

Note: Logging with Weights & Biases is also possible in the scripts above.

## Jupyter notebooks

All results are post-processed and visualized in dedicated Jupyter notebooks. You can access the appropriate notebook below:

### Reproducing Figs. 2-3 - Quantitative assessment of the optimized PVs

→ This Jupyter notebook can be used to reproduce Figs. 2-3.

### Reproducing Fig. 4 and Fig. 10 - Physical insight into the optimized PVs

→ This Jupyter notebook can be used to reproduce Fig. 4 and Fig. 10.

### Reproducing supplementary Figs. S37-S38 - The effect of scaling encoder inputs prior to training

→ This Jupyter notebook can be used to reproduce supplementary Figs. S37-S38.
