PDENNEval

This is the official code repository for PDENNEval: A Comprehensive Evaluation of Neural Network Methods for Solving PDEs. The appendix can be found here.

Introduction

PDENNEval conducts a comprehensive and systematic evaluation of 12 NN methods for solving PDEs: 6 function learning-based methods (DRM, PINN, WAN, DFLM, RFM, and DFVM) and 6 operator learning-based methods (U-Net, MPNN, FNO, DeepONet, PINO, and U-NO). This repository provides reference code for all evaluated methods. If this repository is helpful to your research, please cite our paper.

Requirements

Our implementation is based on PyTorch. Before starting, make sure you have configured your environment as described below.

Installation

Create a conda environment and install the dependencies. Our configuration:

  • Python 3.8
  • CUDA 11.6
  • PyTorch 1.13.1
  • PyTorch Geometric (for MPNN)
  • DeepXDE 1.10.0 (for PINNs)
  • scipy (for RFM)
  • scikit-learn (for RFM)
# create environment
conda create -n PDENNEval python=3.8 
conda activate PDENNEval

# install pytorch
conda install pytorch==1.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

# For PINNs
pip install deepxde # install DeepXDE
conda env config vars set DDE_BACKEND=pytorch # set the backend to pytorch (takes effect after reactivating the env)

# For MPNN
pip install torch_geometric # install torch geometric
conda install pytorch-cluster -c pyg # install torch cluster

# For RFM
pip install scipy scikit-learn

# Other dependencies
pip install h5py # to read dataset file in HDF5 format
pip install tensorboard matplotlib tqdm # visualization
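
After installing, it is worth verifying that PyTorch sees the GPU and that the DeepXDE backend variable took effect. A minimal sanity-check sketch; the expected values in the comments reflect the configuration above:

# sanity_check.py: verify the environment configured above
import os
import torch

print(torch.__version__)              # expect 1.13.1
print(torch.cuda.is_available())      # expect True with CUDA 11.6 drivers
print(os.environ.get("DDE_BACKEND"))  # expect "pytorch" (reactivate the env after setting it)

import deepxde  # noqa: F401  # should report "Using backend: pytorch" on import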

Datasets

The data used in our evaluation come from two sources: the PDEBench datasets and self-generated data.

PDEBench Data

PDEBench provides large datasets covering a wide range of PDEs. You can download them from the DaRUS data repository. The data files used in our work are as follows:

| PDE | File Name | File Size |
| --- | --- | --- |
| 1D Advection | 1D_Advection_Sols_beta0.1.hdf5 | 7.7G |
| 1D Diffusion-Reaction | ReacDiff_Nu0.5_Rho1.0.hdf5 | 3.9G |
| 1D Burgers | 1D_Burgers_Sols_Nu0.001.hdf5 | 7.7G |
| 1D Diffusion-Sorption | 1D_diff-sorp_NA_NA.h5 | 4.0G |
| 1D Compressible NS | 1D_CFD_Rand_Eta0.1_Zeta0.1_periodic_Train.hdf5 | 12G |
| 2D Compressible NS | 2D_CFD_Rand_M0.1_Eta0.1_Zeta0.1_periodic_128_Train.hdf5 | 52G |
| 2D Darcy Flow | 2D_DarcyFlow_beta1.0_Train.hdf5 | 1.3G |
| 2D Shallow Water | 2D_rdb_NA_NA.h5 | 6.2G |
| 3D Compressible NS | 3D_CFD_Rand_M1.0_Eta1e-08_Zeta1e-08_periodic_Train.hdf5 | 83G |
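
The h5py package installed above is enough to inspect these files. A minimal sketch of loading one trajectory; the key layout varies between PDEBench files (some use a flat solution array, others one group per sample), so list the keys before indexing:

# inspect_pdebench.py: minimal sketch; key names vary by dataset
import h5py

path = "1D_Advection_Sols_beta0.1.hdf5"  # adjust to your download location
with h5py.File(path, "r") as f:
    print(list(f.keys()))   # e.g. ['t-coordinate', 'tensor', 'x-coordinate']
    if "tensor" in f:
        u = f["tensor"][0]  # many files store solutions as (sample, time, space)
        print(u.shape)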

If you use PDEBench datasets in your research, please cite their papers:

PDEBench: An Extensive Benchmark for Scientific Machine Learning - NeurIPS'2022
@inproceedings{PDEBench2022,
  author = {Takamoto, Makoto and Praditia, Timothy and Leiteritz, Raphael and MacKinlay, Dan and Alesiani, Francesco and Pflüger, Dirk and Niepert, Mathias},
  title = {{PDEBench: An Extensive Benchmark for Scientific Machine Learning}},
  year = {2022},
  booktitle = {36th Conference on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and Benchmarks},
  url = {https://arxiv.org/abs/2210.07182}
}
PDEBench Datasets - NeurIPS'2022
@data{darus-2986_2022,
  author = {Takamoto, Makoto and Praditia, Timothy and Leiteritz, Raphael and MacKinlay, Dan and Alesiani, Francesco and Pflüger, Dirk and Niepert, Mathias},
  publisher = {DaRUS},
  title = {{PDEBench Datasets}},
  year = {2022},
  doi = {10.18419/darus-2986},
  url = {https://doi.org/10.18419/darus-2986}
}

Self-generated Data

| PDE | File Size | Download Link |
| --- | --- | --- |
| 1D Allen-Cahn Equation | 3.9G | AI4SC Website, Google Drive |
| 1D Cahn-Hilliard Equation | 3.9G | AI4SC Website, Google Drive |
| 2D Allen-Cahn Equation | 6.2G | AI4SC Website, Google Drive |
| 2D Black-Scholes-Barenblatt Equation | 6.2G | AI4SC Website, Google Drive |
| 2D Burgers Equation | 12.3G | AI4SC Website, Google Drive |
| 3D Euler Equation | 83G | AI4SC Website, Google Drive |
| 3D Maxwell Equation | 5.9G | AI4SC Website, Google Drive |

Getting Started

Train and Test

Our implementation lives in the src directory. The code files for each method are saved in a subdirectory named after the method. To evaluate a method, enter the corresponding subdirectory, where detailed guidance for running training and testing is provided.

Estimate Lipschitz Upper Bound

We use the SeqLip algorithm to estimate the Lipschitz upper bound of trained neural networks. Specifically, we provide estimation scripts for U-Net, DeepONet, and all methods that use only MLPs. You can find these scripts in the folder corresponding to each method.
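
For intuition, SeqLip tightens the naive bound given by the product of the layers' spectral norms. A minimal sketch of that baseline bound for an MLP with 1-Lipschitz activations (this is not SeqLip itself, and the architecture below is only illustrative):

# naive_lipschitz.py: product-of-spectral-norms upper bound for an MLP
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),   # tanh is 1-Lipschitz
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

bound = 1.0
for m in mlp.modules():
    if isinstance(m, nn.Linear):
        # ||W||_2 (largest singular value) bounds the layer's Lipschitz constant
        bound *= torch.linalg.matrix_norm(m.weight, ord=2).item()
print(f"naive Lipschitz upper bound: {bound:.3f}")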

Contributors

Changye He, Haolong Fan, Hongji Li, Jianhuan Cen, Liao Chen, Ping Wei, Ziyang Zhou

Citation

Our work builds on much previous work. If you use the corresponding code, please cite the relevant papers:

DeepONet
@article{lu2021learning,
  title={Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators},
  author={Lu, Lu and Jin, Pengzhan and Pang, Guofei and Zhang, Zhongqiang and Karniadakis, George Em},
  journal={Nature machine intelligence},
  volume={3},
  number={3},
  pages={218--229},
  year={2021},
  publisher={Nature Publishing Group UK London}
}
MPNN
@article{brandstetter2022message,
  title={Message passing neural PDE solvers},
  author={Brandstetter, Johannes and Worrall, Daniel and Welling, Max},
  journal={arXiv preprint arXiv:2202.03376},
  year={2022}
}
FNO
@article{li2020fourier,
  title={Fourier neural operator for parametric partial differential equations},
  author={Li, Zongyi and Kovachki, Nikola and Azizzadenesheli, Kamyar and Liu, Burigede and Bhattacharya, Kaushik and Stuart, Andrew and Anandkumar, Anima},
  journal={arXiv preprint arXiv:2010.08895},
  year={2020}
}
U-NO
@article{rahman2022u,
  title={U-no: U-shaped neural operators},
  author={Rahman, Md Ashiqur and Ross, Zachary E and Azizzadenesheli, Kamyar},
  journal={arXiv preprint arXiv:2204.11127},
  year={2022}
}
DeepXDE
@article{lu2021deepxde,
  title={DeepXDE: A deep learning library for solving differential equations},
  author={Lu, Lu and Meng, Xuhui and Mao, Zhiping and Karniadakis, George Em},
  journal={SIAM review},
  volume={63},
  number={1},
  pages={208--228},
  year={2021},
  publisher={SIAM}
}
SeqLip
@article{virmaux2018lipschitz,
  title={Lipschitz regularity of deep neural networks: analysis and efficient estimation},
  author={Virmaux, Aladin and Scaman, Kevin},
  journal={Advances in Neural Information Processing Systems},
  volume={31},
  year={2018}
}
