- step 1: download the meta data from Google Drive and put them into `Nested_FNO/ECLIPSE/meta_data`
- step 2: run the following commands to convert the `.npy` files into `.pt` files in the `dataset` folder (a Python sketch of the conversion appears after the commands)
```
cd data_config
bash file_config.sh
cd ..
```
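For reference, the conversion amounts to loading each `.npy` array and re-saving it as a PyTorch tensor. The sketch below is an assumption about what `file_config.sh` drives, not the script's actual contents; the paths, glob pattern, and output naming are hypothetical.

```python
# Minimal sketch of a .npy -> .pt conversion; paths and naming are assumptions,
# not the actual logic of file_config.sh.
from pathlib import Path

import numpy as np
import torch

src = Path("ECLIPSE/meta_data")  # assumed location of the downloaded .npy files
dst = Path("dataset")            # folder that save_data_loader.py reads from
dst.mkdir(exist_ok=True)

for npy_file in sorted(src.glob("*.npy")):
    array = np.load(npy_file)                          # raw simulation array
    tensor = torch.from_numpy(array).float()           # convert to a float32 tensor
    torch.save(tensor, dst / (npy_file.stem + ".pt"))  # keep the original file stem
```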
- step 3: run `python3 save_data_loader.py` to create `DATA_LOADER_DICT.pth` (a loading sketch follows)
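Once created, `DATA_LOADER_DICT.pth` can be reloaded with `torch.load`. A minimal sketch, assuming the file stores a plain dictionary of data loaders; the key names are not documented here, so inspect the dictionary to see what it actually contains.

```python
import torch

# Reload the dictionary of data loaders produced by save_data_loader.py.
data_loader_dict = torch.load("DATA_LOADER_DICT.pth")

# Inspect the available keys before picking a loader.
print(list(data_loader_dict.keys()))
```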
- step 1: train each model separately using the following commands; each model requires an NVIDIA A100 GPU (a sequential launcher sketch follows the list)
```
python3 train_FNO4D_DP_GLOBAL.py
python3 train_FNO4D_DP_LGR.py LGR1
python3 train_FNO4D_DP_LGR.py LGR2
python3 train_FNO4D_DP_LGR.py LGR3
python3 train_FNO4D_DP_LGR.py LGR4
python3 train_FNO4D_SG_LGR.py LGR1
python3 train_FNO4D_SG_LGR.py LGR2
python3 train_FNO4D_SG_LGR.py LGR3
python3 train_FNO4D_SG_LGR.py LGR4
```
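With a single A100, the nine runs can be queued sequentially instead of launched by hand. The launcher below is not part of the repository; it is a small convenience sketch that simply shells out to the training scripts listed above.

```python
import subprocess

# Queue the nine training jobs one after another on a single GPU.
jobs = [["python3", "train_FNO4D_DP_GLOBAL.py"]]
jobs += [["python3", "train_FNO4D_DP_LGR.py", f"LGR{i}"] for i in range(1, 5)]
jobs += [["python3", "train_FNO4D_SG_LGR.py", f"LGR{i}"] for i in range(1, 5)]

for cmd in jobs:
    print("launching:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort the queue if a job fails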
- step 2: monitor the training and validation losses with TensorBoard (a logging sketch follows the command)
```
tensorboard --logdir=logs --port=6007 --host=xxxxxx
```
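For the curves to show up under `logs`, the training scripts presumably write scalar summaries with `SummaryWriter`. A minimal sketch of that pattern, with a hypothetical run folder, hypothetical tag names, and dummy loss values:

```python
from torch.utils.tensorboard import SummaryWriter

# Hypothetical run folder; each training script presumably uses its own.
writer = SummaryWriter(log_dir="logs/demo_run")

for epoch in range(3):
    # Dummy values standing in for the real training/validation losses.
    writer.add_scalar("loss/train", 1.0 / (epoch + 1), epoch)
    writer.add_scalar("loss/val", 1.2 / (epoch + 1), epoch)

writer.close()
```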
As discussed in the paper, we fine-tuned the `dP_LGR1`, `dP_LGR4`, `SG_LGR1`, and `SG_LGR4` models with a random instance of pre-generated error, as sketched below.
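A minimal sketch of that fine-tuning idea, assuming the pre-generated errors are stored as a list of tensors and a random instance is added to the model input at each step. Every file name, dictionary key, and shape here is hypothetical:

```python
import random

import torch

# Hypothetical files: a list of pre-generated error tensors and a model checkpoint.
errors = torch.load("pregenerated_errors.pth")
model = torch.load("FNO4D_DP_LGR1.pth")
train_loader = torch.load("DATA_LOADER_DICT.pth")["DP_LGR1_train"]  # hypothetical key

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

for inputs, targets in train_loader:
    # Perturb the input with a randomly drawn pre-generated error instance;
    # the error and input shapes are assumed to be compatible.
    noisy_inputs = inputs + random.choice(errors)
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_inputs), targets)
    loss.backward()
    optimizer.step()
```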