
High-Performance Material Point Method (CB-Geo mpm)

CB-Geo Computational Geomechanics Research Group


Documentation

Please refer to the CB-Geo MPM Documentation for information on compiling and running the code. The documentation also includes the MPM theory.

If you have any issues running or compiling the MPM code, please open an issue on the CB-Geo Discourse forum.

Running code on Docker
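The Docker image ships with the prerequisite packages listed below. A minimal sketch of pulling the image and starting an interactive shell follows; the image name cbgeo/mpm is an assumption, so check the documentation for the published image and tag.

docker pull cbgeo/mpm
docker run -ti cbgeo/mpm /bin/bash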

Running code locally

Prerequisite packages

The following prerequisite packages can be found in the docker image:

Optional

Fedora installation

Please run the following command:

dnf install -y boost boost-devel clang cmake cppcheck eigen3-devel findutils gcc gcc-c++ \
                   git hdf5 hdf5-devel hdf5-openmpi hdf5-openmpi-devel kernel-devel lcov \
                   make ninja-build openmpi openmpi-devel sqlite sqlite-devel tar tbb tbb-devel valgrind vim \
                   voro++ voro++-devel vtk vtk-devel wget

Ubuntu installation

Please run the following commands to install dependencies:

sudo apt-get install -y gcc git libboost-all-dev libeigen3-dev libhdf5-serial-dev libopenmpi-dev \
                        libtbb-dev

To install other dependencies:

CMake 3.15

sudo apt-get install software-properties-common
sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
sudo apt update
sudo apt upgrade
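Note: the Kitware repository is signed. If apt update reports a missing public key, the Kitware archive signing key can be added first and apt update re-run; a sketch, assuming apt-key is still available (as on Ubuntu 18.04 "bionic"):

wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -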

OpenGL and X11:Xt

sudo apt-get install freeglut3-dev libxt-dev

VTK

git clone git://vtk.org/VTK.git VTK
cd VTK && mkdir build && cd build/
cmake -DCMAKE_BUILD_TYPE:STRING=Release ..
make -j
sudo make install

KaHIP installation for domain decomposition

cd ~/workspace/ && git clone https://github.com/schulzchristian/KaHIP && \
   cd KaHIP && sh ./compile_withcmake.sh

Compile

See https://mpm-doc.cb-geo.com/ for more detailed instructions.

  1. Run mkdir build && cd build && cmake -DCMAKE_CXX_COMPILER=g++ .. (the trailing .. points cmake at the source directory).

  2. Run make clean && make -jN (where N is the number of cores); see the combined example below.

To compile without KaHIP partitioning use cmake -DNO_KAHIP=True ..
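Putting the above steps together, a minimal sketch assuming a 4-core machine and the g++ compiler:

mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=g++ ..
make clean && make -j4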

Compile mpm or mpmtest

  • To compile either mpm or mpmtest alone, run make mpm -jN or make mpmtest -jN (where N is the number of cores).

Compile without tests [Editing CMake options]

To compile without tests run: mkdir build && cd build && cmake -DMPM_BUILD_TESTING=Off -DCMAKE_CXX_COMPILER=g++ ..

Compile with MPI (Running on a cluster)

The CB-Geo mpm code can be compiled with MPI to distribute the workload across compute nodes in a cluster.

Additional steps to load OpenMPI on Fedora:

source /etc/profile.d/modules.sh
export MODULEPATH=$MODULEPATH:/usr/share/modulefiles
module load mpi/openmpi-x86_64

Compile with OpenMPI:

mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=mpicxx -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ ..
make -jN

Compile with Ninja build system [Alternative to Make]

  1. Run mkdir build && cd build && cmake -GNinja -DCMAKE_CXX_COMPILER=g++ ..

  2. Run ninja
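The two steps combined, as a sketch assuming the g++ compiler:

mkdir build && cd build
cmake -GNinja -DCMAKE_CXX_COMPILER=g++ ..
ninja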

Run tests

  1. Run ./mpmtest -s (for verbose output) or ctest -VV.

Run MPM

See https://mpm-doc.cb-geo.com/ for more detailed instructions.

The CB-Geo MPM code uses a JSON file for input configuration. To run the mpm code:

   ./mpm  [-p <tbb_parallel>] [-i <input_file>] -f <working_dir> [--]
          [--version] [-h]

For example:

./mpm -f /path/to/input-dir/ -i mpm-usf-3d.json -p 8

Where:

   -p <tbb_parallel>,  --tbb_parallel <tbb_parallel>
     Number of parallel TBB threads

   -i <input_file>,  --input_file <input_file>
     Input JSON file [mpm.json]

   -f <working_dir>,  --working_dir <working_dir>
     (required)  Current working folder

   --,  --ignore_rest
     Ignores the rest of the labeled arguments following this flag.

   --version
     Displays version information and exits.

   -h,  --help
     Displays usage information and exits.

Running the code with MPI

To run the CB-Geo mpm code on a cluster with MPI:

mpirun -N <#-MPI-tasks> ./mpm -f /path/to/input-dir/ -i mpm.json

For example, to run the code with 4 MPI tasks (compute nodes):

mpirun -N 4 ./mpm -f ~/benchmarks/3d/uniaxial-stress -i mpm.json
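MPI tasks can be combined with TBB threading inside each task using the -p flag described above; a sketch assuming 8 TBB threads per MPI task:

mpirun -N 4 ./mpm -f ~/benchmarks/3d/uniaxial-stress -i mpm.json -p 8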

Citation

If you publish results using our code, please acknowledge our work by citing the following paper:

Kumar, K., Salmond, J., Kularathna, S., Wilkes, C., Tjung, E., Biscontin, G., & Soga, K. (2019). Scalable and modular material point method for large scale simulations. 2nd International Conference on the Material Point Method. Cambridge, UK. https://arxiv.org/abs/1909.13380