Installation guide
******************
Introduction to building GROMACS
================================
These instructions pertain to building GROMACS 2022.3. You might also
want to check the up-to-date installation instructions.
Quick and dirty installation
----------------------------
1. Get the latest version of your C and C++ compilers.
2. Check that you have CMake version 3.16.3 or later.
3. Get and unpack the latest version of the GROMACS tarball.
4. Make a separate build directory and change to it.
5. Run "cmake" with the path to the source as an argument
6. Run "make", "make check", and "make install"
7. Source "GMXRC" to get access to GROMACS
Or, as a sequence of commands to execute:
tar xfz gromacs-2022.3.tar.gz
cd gromacs-2022.3
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC
This will first download and build the prerequisite FFT library and
then GROMACS. If you already have FFTW installed, you can remove that
argument to "cmake". Overall, this build of GROMACS will
be correct and reasonably fast on the machine upon which "cmake" ran.
On another machine, it may not run, or may not run fast. If you want
to get the maximum value for your hardware with GROMACS, you will have
to read further. Sadly, the interactions of hardware, libraries, and
compilers are only going to continue to get more complex.
Quick and dirty cluster installation
------------------------------------
On a cluster where users are expected to be running across multiple
nodes using MPI, make one installation similar to the above, and
another using "-DGMX_MPI=on". The latter will install binaries and
libraries named using a default suffix of "_mpi", i.e. "gmx_mpi". Hence
it is safe and common practice to install this into the same location
where the non-MPI build is installed.
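As a minimal sketch (the installation prefix and make parallelism are
placeholders; adjust them to your site), the two builds could be done
from separate build directories like this:
mkdir build-threadmpi && cd build-threadmpi
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2022.3
make -j 4 && sudo make install
cd .. && mkdir build-mpi && cd build-mpi
cmake .. -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2022.3
make -j 4 && sudo make install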
Typical installation
--------------------
As above, and with further details below, but you should consider
using the following CMake options with the appropriate value instead
of "xxx" (a combined example follows the list):
* "-DCMAKE_C_COMPILER=xxx" equal to the name of the C99 Compiler you
wish to use (or the environment variable "CC")
* "-DCMAKE_CXX_COMPILER=xxx" equal to the name of the C++17 compiler
you wish to use (or the environment variable "CXX")
* "-DGMX_MPI=on" to build using MPI support
* "-DGMX_GPU=CUDA" to build with NVIDIA CUDA support enabled.
* "-DGMX_GPU=OpenCL" to build with OpenCL support enabled.
* "-DGMX_GPU=SYCL" to build with SYCL support enabled (using Intel
oneAPI DPC++ by default).
* "-DGMX_SYCL_HIPSYCL=on" to build with SYCL support using hipSYCL
(requires "-DGMX_GPU=SYCL").
* "-DGMX_SIMD=xxx" to specify the level of SIMD support of the node on
which GROMACS will run
* "-DGMX_DOUBLE=on" to build GROMACS in double precision (slower, and
not normally useful)
* "-DCMAKE_PREFIX_PATH=xxx" to add a non-standard location for CMake
to search for libraries, headers or programs
* "-DCMAKE_INSTALL_PREFIX=xxx" to install GROMACS to a non-standard
location (default "/usr/local/gromacs")
* "-DBUILD_SHARED_LIBS=off" to turn off the building of shared
libraries to help with static linking
* "-DGMX_FFT_LIBRARY=xxx" to select whether to use "fftw3", "mkl" or
"fftpack" libraries for FFT support
* "-DCMAKE_BUILD_TYPE=Debug" to build GROMACS in debug mode
Building older versions
-----------------------
Installation instructions for old GROMACS versions can be found at the
GROMACS documentation page.
Prerequisites
=============
Platform
--------
GROMACS can be compiled for many operating systems and architectures.
These include any distribution of Linux, Mac OS X or Windows, and
architectures including x86, AMD64/x86-64, several PowerPC including
POWER8, ARM v8, and SPARC VIII.
Compiler
--------
GROMACS can be compiled on any platform with ANSI C99 and C++17
compilers, and their respective standard C/C++ libraries. Good
performance on an OS and architecture requires choosing a good
compiler. We recommend gcc, because it is free, widely available and
frequently provides the best performance.
You should strive to use the most recent version of your compiler.
Since we require full C++17 support the minimum compiler versions
supported by the GROMACS team are
* GNU (gcc/libstdc++) 7
* LLVM (clang/libc++) 7
* Microsoft (MSVC) 2019
Other compilers may work (Cray, Pathscale, older clang) but do not
offer competitive performance. We recommend against PGI because the
performance with C++ is very bad.
The Intel classic compiler (icc/icpc) is no longer supported in
GROMACS. Use Intel's newer clang-based compiler from oneAPI, or gcc.
The xlc compiler is not supported and version 16.1 does not compile on
POWER architectures for GROMACS-2022.3. We recommend using the gcc
compiler instead, as it is extensively tested.
You may also need the most recent version of other compiler toolchain
components beside the compiler itself (e.g. assembler or linker);
these are often shipped by your OS distribution's binutils package.
C++17 support requires adequate support in both the compiler and the
C++ library. The gcc and MSVC compilers include their own standard
libraries and require no further configuration. If your vendor's
compiler also manages the standard library via compiler flags,
these will be honored. For configuration of other compilers, read on.
On Linux, the clang compilers typically use libstdc++, which comes
with g++, as their C++ library. For GROMACS, we require the compiler
to support libstdc++ version 7.1 or higher. To select a particular
libstdc++ library, provide the path to g++ with
"-DGMX_GPLUSPLUS_PATH=/path/to/g++".
To build with clang and llvm's libcxx standard library, use
"-DCMAKE_CXX_FLAGS=-stdlib=libc++".
If you are running on Mac OS X, the best option is gcc. The Apple
clang compiler provided by MacPorts will work, but does not support
OpenMP, so will probably not provide best performance.
For all non-x86 platforms, your best option is typically to use gcc or
the vendor's default or recommended compiler, and check for
specialized information below.
For updated versions of gcc to add to your Linux OS, see
* Ubuntu: Ubuntu toolchain ppa page
* RHEL/CentOS: EPEL page or the RedHat Developer Toolset
Compiling with parallelization options
--------------------------------------
For maximum performance you will need to examine how you will use
GROMACS and what hardware you plan to run on. Often OpenMP parallelism
is an advantage for GROMACS, but support for this is generally built
into your compiler and detected automatically.
GPU support
~~~~~~~~~~~
GROMACS has excellent support for NVIDIA GPUs supported via CUDA. On
Linux, NVIDIA CUDA toolkit with minimum version 11.0 is required, and
the latest version is strongly encouraged. NVIDIA GPUs with at least
NVIDIA compute capability 3.5 are required. You are strongly
recommended to get the latest CUDA version and driver that supports
your hardware, but beware of possible performance regressions in newer
CUDA versions on older hardware. While some CUDA compilers (nvcc)
might not officially support recent versions of gcc as the back-end
compiler, we still recommend that you at least use a gcc version
recent enough to get the best SIMD support for your CPU, since GROMACS
always runs some code on the CPU. It is most reliable to use the same
C++ compiler version for GROMACS code as used as the host compiler for
nvcc.
To make it possible to use other accelerators, GROMACS also includes
OpenCL support. The minimum OpenCL version required is 1.2 and
only 64-bit implementations are supported. The current OpenCL
implementation is recommended for use with GCN-based AMD GPUs, and on
Linux we recommend the ROCm runtime. Intel integrated GPUs are
supported with the Neo drivers. OpenCL is also supported with NVIDIA
GPUs, but using the latest NVIDIA driver (which includes the NVIDIA
OpenCL runtime) is recommended. Also note that there are performance
limitations (inherent to the NVIDIA OpenCL runtime). It is not
possible to support both Intel and other vendors' GPUs with OpenCL. A
64-bit implementation of OpenCL is required and therefore OpenCL is
only supported on 64-bit platforms.
SYCL support has been available since GROMACS 2021. The current SYCL
implementation can be compiled either with Intel oneAPI DPC++ compiler
for Intel GPUs, or with hipSYCL compiler and ROCm runtime for AMD GFX9
and CDNA GPUs. Using other devices supported by these compilers is
possible, but not recommended.
It is not possible to configure several GPU backends in the same build
of GROMACS.
MPI support
~~~~~~~~~~~
GROMACS can run in parallel on multiple cores of a single workstation
using its built-in thread-MPI. No user action is required in order to
enable this.
If you wish to run in parallel on multiple machines across a network,
you will need to have an MPI library installed that supports the MPI
2.0 standard. That's true for any MPI library version released since
about 2009, but the GROMACS team recommends the latest version (for
best performance) of either your vendor's library, OpenMPI or MPICH.
To compile with MPI set your compiler to the normal (non-MPI) compiler
and add "-DGMX_MPI=on" to the cmake options. It is possible to set the
compiler to the MPI compiler wrapper but it is neither necessary nor
recommended.
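For example (compiler names are illustrative), a typical MPI-enabled
configuration keeps the plain compilers and just adds the flag:
CC=gcc CXX=g++ cmake .. -DGMX_MPI=on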
GPU-aware MPI support
~~~~~~~~~~~~~~~~~~~~~
In simulations using multiple GPUs, an MPI implementation with GPU
support allows communication to be performed directly between the
distinct GPU memory spaces without staging through CPU memory, often
resulting in higher bandwidth and lower latency communication. The
only current support for this in GROMACS is with a CUDA build
targeting Nvidia GPUs using "CUDA-aware" MPI libraries. For more
details, see Introduction to CUDA-aware MPI.
To use CUDA-aware MPI for direct GPU communication we recommend using
the latest OpenMPI version (>=4.1.0) with the latest UCX version
(>=1.10), since most GROMACS internal testing on CUDA-aware support
has been performed using these versions. OpenMPI with CUDA-aware
support can be built following the procedure in these OpenMPI build
instructions.
With "GMX_MPI=ON", GROMACS attempts to automatically detect CUDA
support in the underlying MPI library at compile time, and enables
direct GPU communication when this is detected. However, there are
some cases when GROMACS may fail to detect existing CUDA-aware
support, in which case it can be manually enabled by setting
environment variable "GMX_FORCE_GPU_AWARE_MPI=1" at runtime (although
such cases still lack substantial testing, so we urge the user to
carefully check correctness of results against those using default
build options, and report any issues).
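As a sketch of forcing the feature at run time (the launcher, rank
count, and input file name are placeholders for your own setup):
export GMX_FORCE_GPU_AWARE_MPI=1
mpirun -np 4 gmx_mpi mdrun -deffnm topol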
CMake
-----
GROMACS builds with the CMake build system, requiring at least version
3.16.3. You can check whether CMake is installed, and what version it
is, with "cmake --version". If you need to install CMake, then first
check whether your platform's package management system provides a
suitable version, or visit the CMake installation page for pre-
compiled binaries, source code and installation instructions. The
GROMACS team recommends you install the most recent version of CMake
you can.
Fast Fourier Transform library
------------------------------
Many simulations in GROMACS make extensive use of fast Fourier
transforms, and a software library to perform these is always
required. We recommend FFTW (version 3 or higher only) or Intel MKL.
The choice of library can be set with "cmake
-DGMX_FFT_LIBRARY=<name>", where "<name>" is one of "fftw3", "mkl", or
"fftpack". FFTPACK is bundled with GROMACS as a fallback, and is
acceptable if simulation performance is not a priority. When choosing
MKL, GROMACS will also use MKL for BLAS and LAPACK (see linear algebra
libraries). Generally, there is no advantage in using MKL with
GROMACS, and FFTW is often faster. With PME GPU offload support using
CUDA, a GPU-based FFT library is required. The CUDA-based GPU FFT
library cuFFT is part of the CUDA toolkit (required for all CUDA
builds) and therefore no additional software component is needed when
building with CUDA GPU acceleration.
Using FFTW
~~~~~~~~~~
FFTW is likely to be available for your platform via its package
management system, but there can be compatibility and significant
performance issues associated with these packages. In particular,
GROMACS simulations are normally run in "mixed" floating-point
precision, which is suited for the use of single precision in FFTW.
The default FFTW package is normally in double precision, and good
compiler options to use for FFTW when linked to GROMACS may not have
been used. Accordingly, the GROMACS team recommends either
* that you permit the GROMACS installation to download and build FFTW
from source automatically for you (use "cmake
-DGMX_BUILD_OWN_FFTW=ON"), or
* that you build FFTW from the source code.
If you build FFTW from source yourself, get the most recent version
and follow the FFTW installation guide. Choose the precision for FFTW
(i.e. single/float vs. double) to match whether you will later use
mixed or double precision for GROMACS. There is no need to compile
FFTW with threading or MPI support, but it does no harm. On x86
hardware, compile with *both* "--enable-sse2" and "--enable-avx" for
FFTW-3.3.4 and earlier. From FFTW-3.3.5, you should also add
"--enable-avx2". On Intel processors supporting 512-wide AVX,
including KNL, add "--enable-avx512" also. FFTW will create a fat
library with codelets for all different instruction sets, and pick the
fastest supported one at runtime. On ARM architectures with SIMD
support and IBM Power8 and later, you definitely want version 3.3.5 or
later, and to compile it with "--enable-neon" and "--enable-vsx",
respectively, for SIMD support. If you are using a Cray, there is a
special modified (commercial) version of FFTs using the FFTW interface
which can be slightly faster.
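If you do build FFTW yourself, a minimal sketch of a single-precision
build for a recent x86 CPU might look as follows (the FFTW version and
install prefix are illustrative; pick the --enable flags matching your
hardware as described above):
tar xf fftw-3.3.10.tar.gz
cd fftw-3.3.10
./configure --enable-float --enable-sse2 --enable-avx --enable-avx2 --prefix=$HOME/fftw3
make -j 4 && make install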
Using MKL
~~~~~~~~~
Use MKL bundled with Intel compilers by setting up the compiler
environment, e.g., through "source /path/to/compilervars.sh intel64"
or similar before running CMake including setting
"-DGMX_FFT_LIBRARY=mkl".
If you need to customize this further, use
cmake -DGMX_FFT_LIBRARY=mkl \
-DMKL_LIBRARIES="/full/path/to/libone.so;/full/path/to/libtwo.so" \
-DMKL_INCLUDE_DIR="/full/path/to/mkl/include"
The full list and order(!) of libraries you require are found in
Intel's MKL documentation for your system.
Using ARM Performance Libraries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ARM Performance Libraries provide an FFT implementation for ARM
architectures. Preliminary support is provided for ARMPL in
GROMACS through its FFTW-compatible API. Assuming that the ARM HPC
toolchain environment including the ARMPL paths are set up (e.g.
through loading the appropriate modules like "module load Module-
Prefix/arm-hpc-compiler-X.Y/armpl/X.Y") use the following cmake
options:
cmake -DGMX_FFT_LIBRARY=fftw3 \
-DFFTWF_LIBRARY="${ARMPL_DIR}/lib/libarmpl_lp64.so" \
-DFFTWF_INCLUDE_DIR=${ARMPL_DIR}/include
Other optional build components
-------------------------------
* Run-time detection of hardware capabilities can be improved by
linking with hwloc. By default this is turned off since it might not
be supported everywhere, but if you have hwloc installed it should
work by just setting "-DGMX_HWLOC=ON" (see the combined example after this list).
* Hardware-optimized BLAS and LAPACK libraries are useful for a few of
the GROMACS utilities focused on normal modes and matrix
manipulation, but they do not provide any benefits for normal
simulations. Configuring these is discussed at linear algebra
libraries.
* The built-in GROMACS trajectory viewer "gmx view" requires X11 and
Motif/Lesstif libraries and header files. You may prefer to use
third-party software for visualization, such as VMD or PyMol.
* An external TNG library for trajectory-file handling can be used by
setting "-DGMX_EXTERNAL_TNG=yes", but TNG 1.7.10 is bundled in the
GROMACS source already.
* The lmfit library for Levenberg-Marquardt curve fitting is used in
GROMACS. Only lmfit 7.0 is supported. A reduced version of that
library is bundled in the GROMACS distribution, and the default
build uses it. That default may be explicitly enabled with
"-DGMX_USE_LMFIT=internal". To use an external lmfit library, set
"-DGMX_USE_LMFIT=external", and adjust "CMAKE_PREFIX_PATH" as
needed. lmfit support can be disabled with "-DGMX_USE_LMFIT=none".
* zlib is used by TNG for compressing some kinds of trajectory data
* Building the GROMACS documentation is optional, and requires
ImageMagick, pdflatex, bibtex, doxygen, python 3.6, sphinx 1.6.1,
and pygments.
* The GROMACS utility programs often write data files in formats
suitable for the Grace plotting tool, but it is straightforward to
use these files in other plotting programs, too.
* Set "-DGMX_PYTHON_PACKAGE=ON" when configuring GROMACS with CMake to
enable additional CMake targets for the gmxapi Python package and
sample_restraint package from the main GROMACS CMake build. This
supports additional testing and documentation generation.
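Purely as an illustration of how a few of these optional components
combine (the lmfit install path is hypothetical):
cmake .. -DGMX_HWLOC=ON -DGMX_USE_LMFIT=external -DCMAKE_PREFIX_PATH=/opt/lmfit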
Doing a build of GROMACS
========================
This section will cover a general build of GROMACS with CMake, but it
is not an exhaustive discussion of how to use CMake. There are many
resources available on the web, which we suggest you search for when
you encounter problems not covered here. The material below applies
specifically to builds on Unix-like systems, including Linux, and Mac
OS X. For other platforms, see the specialist instructions below.
Configuring with CMake
----------------------
CMake will run many tests on your system and do its best to work out
how to build GROMACS for you. If your build machine is the same as
your target machine, then you can be sure that the defaults and
detection will be pretty good. However, if you want to control aspects
of the build, or you are compiling on a cluster head node for back-end
nodes with a different architecture, there are a few things you should
consider specifying.
The best way to use CMake to configure GROMACS is to do an "out-of-
source" build, by making another directory from which you will run
CMake. This can be outside the source directory, or a subdirectory of
it. It also means you can never corrupt your source code by trying to
build it! So, the only required argument on the CMake command line is
the name of the directory containing the "CMakeLists.txt" file of the
code you want to build. For example, download the source tarball and
use
tar xfz gromacs-2022.3.tgz
cd gromacs-2022.3
mkdir build-gromacs
cd build-gromacs
cmake ..
You will see "cmake" report a sequence of results of tests and
detections done by the GROMACS build system. These are written to the
"cmake" cache, kept in "CMakeCache.txt". You can edit this file by
hand, but this is not recommended because you could make a mistake.
You should not attempt to move or copy this file to do another build,
because file paths are hard-coded within it. If you mess things up,
just delete this file and start again with "cmake".
If there is a serious problem detected at this stage, then you will
see a fatal error and some suggestions for how to overcome it. If you
are not sure how to deal with that, please start by searching on the
web (most computer problems already have known solutions!) and then
consult the gmx-users mailing list. There are also informational
warnings that you might like to take on board or not. Piping the
output of "cmake" through "less" or "tee" can be useful, too.
Once "cmake" returns, you can see all the settings that were chosen
and information about them by using e.g. the curses interface
ccmake ..
You can actually use "ccmake" (available on most Unix platforms)
directly in the first step, but then most of the status messages will
merely blink in the lower part of the terminal rather than be written
to standard output. Most platforms including Linux, Windows, and Mac
OS X even have native graphical user interfaces for "cmake", and it
can create project files for almost any build environment you want
(including Visual Studio or Xcode). Check out running CMake for
general advice on what you are seeing and how to navigate and change
things. The settings you might normally want to change are already
presented. You may make changes, then re-configure (using "c"), so
that it gets a chance to make changes that depend on yours and perform
more checking. It may take several configuration passes to reach the
desired configuration, in particular if you need to resolve errors.
When you have reached the desired configuration with "ccmake", the
build system can be generated by pressing "g". This requires that the
previous configuration pass did not reveal any additional settings (if
it did, you need to configure once more with "c"). With "cmake", the
build system is generated after each pass that does not produce
errors.
You cannot attempt to change compilers after the initial run of
"cmake". If you need to change, clean up, and start again.
Where to install GROMACS
~~~~~~~~~~~~~~~~~~~~~~~~
GROMACS is installed in the directory to which "CMAKE_INSTALL_PREFIX"
points. It may not be the source directory or the build directory.
You require write permissions to this directory. Thus, without super-
user privileges, "CMAKE_INSTALL_PREFIX" will have to be within your
home directory. Even if you do have super-user privileges, you should
use them only for the installation phase, and never for configuring,
building, or running GROMACS!
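For example, a build without super-user privileges might be
configured, installed, and activated as follows (the prefix under the
home directory is just an illustration):
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2022.3
make
make install
source $HOME/gromacs-2022.3/bin/GMXRC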
Using CMake command-line options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once you become comfortable with setting and changing options, you may
know in advance how you will configure GROMACS. If so, you can speed
things up by invoking "cmake" and passing the various options at once
on the command line. This can be done by setting cache variable at the
cmake invocation using "-DOPTION=VALUE". Note that some environment
variables are also taken into account, in particular variables like
"CC" and "CXX".
For example, the following command line
cmake .. -DGMX_GPU=CUDA -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs
can be used to build with CUDA GPUs, MPI and install in a custom
location. You can even save that in a shell script to make it easier
next time. You can also do this kind of thing with "ccmake", but you
should avoid it, because the options set with "-D" cannot be changed
interactively in that run of "ccmake".
SIMD support
~~~~~~~~~~~~
GROMACS has extensive support for detecting and using the SIMD
capabilities of many modern HPC CPU architectures. If you are building
GROMACS on the same hardware you will run it on, then you don't need
to read more about this, unless you are getting configuration warnings
you do not understand. By default, the GROMACS build system will
detect the SIMD instruction set supported by the CPU architecture (on
which the configuring is done), and thus pick the best available SIMD
parallelization supported by GROMACS. The build system will also check
that the compiler and linker used support the selected SIMD
instruction set and issue a fatal error if they do not.
Valid values are listed below, and the applicable value with the
largest number in the list is generally the one you should choose. In
most cases, choosing an inappropriate higher number will lead to
compiling a binary that will not run. However, on a number of
processor architectures choosing the highest supported value can lead
to performance loss, e.g. on Intel Skylake-X/SP and AMD Zen.
1. "None" For use only on an architecture either lacking SIMD, or to
which GROMACS has not yet been ported and none of the options below
are applicable.
2. "SSE2" This SIMD instruction set was introduced in Intel processors
in 2001, and AMD in 2003. Essentially all x86 machines in existence
have this, so it might be a good choice if you need to support
dinosaur x86 computers too.
3. "SSE4.1" Present in all Intel core processors since 2007, but
notably not in AMD Magny-Cours. Still, almost all recent processors
support this, so this can also be considered a good baseline if you
are content with slow simulations and prefer portability between
reasonably modern processors.
4. "AVX_128_FMA" AMD Bulldozer, Piledriver (and later Family 15h)
processors have this but it is NOT supported on any AMD processors
since Zen1.
5. "AVX_256" Intel processors since Sandy Bridge (2011). While this
code will work on the AMD Bulldozer and Piledriver processors, it
is significantly less efficient than the "AVX_128_FMA" choice above
- do not be fooled into assuming that 256 is better than 128 in this
case.
6. "AVX2_128" AMD Zen/Zen2 and Hygon Dhyana microarchitecture
processors; it will enable AVX2 with 3-way fused multiply-add
instructions. While these microarchitectures do support 256-bit
AVX2 instructions, hence "AVX2_256" is also supported, 128-bit will
generally be faster, in particular when the non-bonded tasks run on
the CPU -- hence the default "AVX2_128". With GPU offload however
"AVX2_256" can be faster on Zen processors.
7. "AVX2_256" Present on Intel Haswell (and later) processors (2013),
and it will also enable Intel 3-way fused multiply-add
instructions.
8. "AVX_512" Skylake-X desktop and Skylake-SP Xeon processors (2017);
it will generally be fastest on the higher-end desktop and server
processors with two 512-bit fused multiply-add units (e.g. Core i9
and Xeon Gold). However, certain desktop and server models (e.g.
Xeon Bronze and Silver) come with only one AVX512 FMA unit and
therefore on these processors "AVX2_256" is faster (compile- and
runtime checks try to inform about such cases). Additionally, with
GPU accelerated runs "AVX2_256" can also be faster on high-end
Skylake CPUs with both 512-bit FMA units enabled.
9. "AVX_512_KNL" Knights Landing Xeon Phi processors
10. "IBM_VSX" Power7, Power8, Power9 and later have this.
11. "ARM_NEON_ASIMD" 64-bit ARMv8 and later.
12. "ARM_SVE" 64-bit ARMv8 and later with the Scalable Vector
Extensions (SVE). The SVE vector length is fixed at CMake
configure time. The default vector length is automatically
detected, and this can be changed via the
"GMX_SIMD_ARM_SVE_LENGTH" CMake variable. Minimum required
compiler versions are GNU >= 10, LLVM >=13, or ARM >= 21.1. For
maximum performance we strongly suggest the latest gcc compilers,
or at least LLVM 14 (when released) or ARM 22.0 (when released).
Lower performance has been observed with LLVM 13 and Arm compiler
21.1.
The CMake configure system will check that the compiler you have
chosen can target the architecture you have chosen. mdrun will check
further at runtime, so if in doubt, choose the lowest number you think
might work, and see what mdrun says. The configure system also works
around many known issues in many versions of common HPC compilers.
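For example, to target nodes whose CPUs support AVX2 (the value is
only an illustration; pick the entry from the list above that matches
your own hardware):
cmake .. -DGMX_SIMD=AVX2_256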
A further "GMX_SIMD=Reference" option exists, which is a special SIMD-
like implementation written in plain C that developers can use when
developing support in GROMACS for new SIMD architectures. It is not
designed for use in production simulations, but if you are using an
architecture with SIMD support to which GROMACS has not yet been
ported, you may wish to try this option instead of the default
"GMX_SIMD=None", as it can often out-perform this when the auto-
vectorization in your compiler does a good job. And post on the
GROMACS mailing lists, because GROMACS can probably be ported for new
SIMD architectures in a few days.
CMake advanced options
~~~~~~~~~~~~~~~~~~~~~~
The options that are displayed in the default view of "ccmake" are
ones that we think a reasonable number of users might want to consider
changing. There are a lot more options available, which you can see by
toggling the advanced mode in "ccmake" on and off with "t". Even
there, most of the variables that you might want to change have a
"CMAKE_" or "GMX_" prefix. There are also some options that will be
visible or not according to whether their preconditions are satisfied.
Helping CMake find the right libraries, headers, or programs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If libraries are installed in non-default locations their location can
be specified using the following variables:
* "CMAKE_INCLUDE_PATH" for header files
* "CMAKE_LIBRARY_PATH" for libraries
* "CMAKE_PREFIX_PATH" for header, libraries and binaries (e.g.
"/usr/local").
The respective "include", "lib", or "bin" is appended to the path. For
each of these variables, a list of paths can be specified (on Unix,
separated with ":"). These can be set as environment variables like:
CMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda cmake ..
(assuming "bash" shell). Alternatively, these variables are also
"cmake" options, so they can be set like
"-DCMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda".
The "CC" and "CXX" environment variables are also useful for
indicating to "cmake" which compilers to use. Similarly,
"CFLAGS"/"CXXFLAGS" can be used to pass compiler options, but note
that these will be appended to those set by GROMACS for your build
platform and build type. You can customize some of this with advanced
CMake options such as "CMAKE_C_FLAGS" and its relatives.
See also the page on CMake environment variables.
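For instance (the versioned compiler names are hypothetical), compilers
and search paths can be combined at the "cmake" invocation:
CC=gcc-12 CXX=g++-12 CMAKE_PREFIX_PATH=/opt/fftw cmake ..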
CUDA GPU acceleration
~~~~~~~~~~~~~~~~~~~~~
If you have the CUDA Toolkit installed, you can use "cmake" with:
cmake .. -DGMX_GPU=CUDA -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
(or whichever path has your installation). In some cases, you might
need to specify manually which of your C++ compilers should be used,
e.g. with the advanced option "CUDA_HOST_COMPILER".
By default, code will be generated for the most common CUDA
architectures. However, to reduce build time and binary size we do not
generate code for every single possible architecture, which in rare
cases (say, Tegra systems) can result in the default build not being
able to use some GPUs. If this happens, or if you want to remove some
architectures to reduce binary size and build time, you can alter the
target CUDA architectures. This can be done either with the
"GMX_CUDA_TARGET_SM" or "GMX_CUDA_TARGET_COMPUTE" CMake variables,
which take a semicolon-delimited string with the two-digit suffixes of
CUDA (virtual) architecture names, for instance "35;50;51;52;53;60".
For details, see the "Options for steering GPU code generation"
section of the nvcc man / help or Chapter 6. of the nvcc manual.
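For example, to generate code only for two specific real architectures
(the suffixes shown are illustrative; use those matching your GPUs):
cmake .. -DGMX_GPU=CUDA -DGMX_CUDA_TARGET_SM="70;80"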
The GPU acceleration has been tested on AMD64/x86-64 platforms with
Linux, Mac OS X and Windows operating systems, but Linux is the best-
tested and supported of these. Linux running on POWER 8 and ARM v8
CPUs also works well.
Experimental support is available for compiling CUDA code, both for
host and device, using clang (version 6.0 or later). A CUDA toolkit is
still required but it is used only for GPU device code generation and
to link against the CUDA runtime library. The clang CUDA support
simplifies compilation and provides benefits for development (e.g.
allows the use of code sanitizers in CUDA host code). Additionally, using
clang for both CPU and GPU compilation can be beneficial to avoid
compatibility issues between the GNU toolchain and the CUDA toolkit.
clang for CUDA can be triggered using the "GMX_CLANG_CUDA=ON" CMake
option. Target architectures can be selected with
"GMX_CUDA_TARGET_SM", virtual architecture code is always embedded for
all requested architectures (hence GMX_CUDA_TARGET_COMPUTE is
ignored). Note that this is mainly a developer-oriented feature and it
is not recommended for production use as the performance can be
significantly lower than that of code compiled with nvcc (and it has
also received less testing). However, note that since clang 5.0 the
performance gap is only moderate (at the time of writing, about 20%
slower GPU kernels), so these clang versions could be considered in
non-performance-critical use cases.
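A sketch of such a developer-oriented clang CUDA build (compiler
names, toolkit path, and target architecture are placeholders):
cmake .. -DGMX_GPU=CUDA -DGMX_CLANG_CUDA=ON -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_CUDA_TARGET_SM=80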
OpenCL GPU acceleration
~~~~~~~~~~~~~~~~~~~~~~~
The primary targets of the GROMACS OpenCL support are accelerating
simulations on AMD and Intel hardware. For AMD, we target both
discrete GPUs and APUs (integrated CPU+GPU chips), and for Intel we
target the integrated GPUs found on modern workstation and mobile
hardware. GROMACS OpenCL support on NVIDIA GPUs works, but performance
and other limitations make it less practical (for details see the user
guide).
To build GROMACS with OpenCL support enabled, two components are
required: the OpenCL headers and the wrapper library that acts as a
client driver loader (so-called ICD loader). The additional, runtime-
only dependency is the vendor-specific GPU driver for the device
targeted. This also contains the OpenCL compiler. As the GPU compute
kernels are compiled on-demand at run time, this vendor-specific
compiler and driver is not needed for building GROMACS. The former,
compile-time dependencies are standard components, hence stock
versions can be obtained from most Linux distribution repositories
(e.g. "opencl-headers" and "ocl-icd-libopencl1" on Debian/Ubuntu).
Only compatibility with the required OpenCL version (1.2) needs
to be ensured. Alternatively, the headers and library can also be
obtained from vendor SDKs, which must be installed in a path found in
"CMAKE_PREFIX_PATH" (or via the environment variables "AMDAPPSDKROOT"
or "CUDA_PATH").
To trigger an OpenCL build the following CMake flags must be set
cmake .. -DGMX_GPU=OpenCL
To build with support for Intel integrated GPUs, it is required to add
"-DGMX_GPU_NB_CLUSTER_SIZE=4" to the cmake command line, so that the
GPU kernels match the characteristics of the hardware. The Neo driver
is recommended.
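Putting the two flags together, a configuration for an Intel
integrated GPU would thus look something like:
cmake .. -DGMX_GPU=OpenCL -DGMX_GPU_NB_CLUSTER_SIZE=4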
On Mac OS, an AMD GPU can be used only with OS version 10.10.4 and
higher; earlier OS versions are known to run incorrectly.
By default, any clFFT library on the system will be used with GROMACS,
but if none is found then the code will fall back on a version bundled
with GROMACS. To require GROMACS to link with an external library, use
cmake .. -DGMX_GPU=OpenCL -DclFFT_ROOT_DIR=/path/to/your/clFFT -DGMX_EXTERNAL_CLFFT=TRUE
SYCL GPU acceleration
~~~~~~~~~~~~~~~~~~~~~
SYCL is a modern portable heterogeneous acceleration API, with
multiple implementations targeting different hardware platforms
(similar to OpenCL).
Currently, supported platforms in GROMACS are:
* Intel GPUs using Intel oneAPI DPC++ (both OpenCL and LevelZero
backends),
* AMD GPUs with hipSYCL: only discrete GPUs with GFX9 (RX Vega 64, Pro
VII, Instinct MI25, Instinct MI50) and CDNA (Instinct MI100)
architectures,
* NVIDIA GPUs (experimental) using either hipSYCL or open-source Intel
LLVM.
Feature support is broader than that of OpenCL, but not yet on par
with CUDA.
The SYCL support in GROMACS is intended to eventually replace OpenCL
as an acceleration mechanism for AMD and Intel hardware.
Note: SYCL support in GROMACS is less mature than either OpenCL or
CUDA. Please pay extra attention to simulation correctness when you
are using it.
SYCL GPU acceleration for Intel GPUs
""""""""""""""""""""""""""""""""""""
You should install a recent Intel oneAPI DPC++ compiler toolkit. For
GROMACS 2022, version 2021.4 is recommended. Using open-source Intel
LLVM is possible, but not extensively tested. We also recommend
installing the most recent Neo driver.
With the toolkit installed and added to the environment (usually by
running "source /opt/intel/oneapi/setvars.sh" or using an appropriate
**module load** on an HPC system), the following CMake flags must be
set:
cmake .. -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGMX_GPU=SYCL
SYCL GPU acceleration for AMD GPUs
""""""""""""""""""""""""""""""""""
Using the most recent hipSYCL "develop" branch and the most recent
ROCm release is recommended.
Additionally, we strongly recommend using the ROCm-bundled LLVM for
building both hipSYCL and GROMACS.
The following CMake command can be used **when configuring hipSYCL**
to ensure that the proper Clang is used (assuming "ROCM_PATH" is set
correctly, e.g. to "/opt/rocm" in the case of default installation):
cmake .. -DCMAKE_C_COMPILER=${ROCM_PATH}/llvm/bin/clang -DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++ -DLLVM_DIR=${ROCM_PATH}/llvm/lib/cmake/llvm/
After compiling and installing hipSYCL, the following settings can be
used for building GROMACS itself (set "HIPSYCL_TARGETS" to the target
hardware):
cmake .. -DCMAKE_C_COMPILER=${ROCM_PATH}/llvm/bin/clang -DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++ -DGMX_GPU=SYCL -DGMX_SYCL_HIPSYCL=ON -DHIPSYCL_TARGETS='hip:gfxXYZ'
SYCL GPU acceleration for NVIDIA GPUs
"""""""""""""""""""""""""""""""""""""
SYCL support for NVIDIA GPUs is highly experimental. For production,
please use CUDA (CUDA GPU acceleration). Note that FFT is not
currently supported on NVIDIA devices when using SYCL, so PME offload
is only possible in mixed mode ("-pme gpu -pmefft cpu").
NVIDIA GPUs can be used with either hipSYCL or the open-source Intel
LLVM.
For hipSYCL, make sure that hipSYCL itself is compiled with CUDA
support, and supply proper devices via "HIPSYCL_TARGETS" (e.g.,
"-DHIPSYCL_TARGETS=cuda:sm_75"). When compiling for CUDA, we recommend
using the mainline Clang, not the ROCm-bundled one.
For Intel LLVM, make sure it is compiled with CUDA and OpenMP support,
then use the following CMake invocation:
cmake .. -DCMAKE_C_COMPILER=/path/to/intel/clang -DCMAKE_CXX_COMPILER=/path/to/intel/clang++ -DGMX_GPU=SYCL -DGMX_GPU_NB_CLUSTER_SIZE=8 -DSYCL_CXX_FLAGS_EXTRA=-fsycl-targets=nvptx64-nvidia-cuda
SYCL GPU compilation options
""""""""""""""""""""""""""""
The following flags can be passed to CMake in order to tune GROMACS:
"-DGMX_GPU_NB_CLUSTER_SIZE"
changes the data layout of non-bonded kernels. Default values: 4
when compiling with Intel oneAPI DPC++, 8 when compiling with
hipSYCL. Those are reasonable defaults for Intel and AMD devices,
respectively.
"-DGMX_SYCL_USE_USM"
switches between SYCL buffers ("OFF") and USM ("ON") for data
management. Default: on (for performance reasons).
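As an illustration of combining these tuning flags with a DPC++ build
(whether changing the defaults is beneficial depends on your
hardware):
cmake .. -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGMX_GPU=SYCL -DGMX_SYCL_USE_USM=OFF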
Static linking
~~~~~~~~~~~~~~
Dynamic linking of the GROMACS executables will lead to a smaller disk
footprint when installed, and so is the default on platforms where we
believe it has been tested repeatedly and found to work. In general,
this includes Linux, Windows, Mac OS X and BSD systems. Static
binaries take more space, but on some hardware and/or under some
conditions they are necessary, most commonly when you are running a
parallel simulation using MPI libraries (e.g. Cray).
* To link GROMACS binaries statically against the internal GROMACS
libraries, set "-DBUILD_SHARED_LIBS=OFF".
* To link statically against external (non-system) libraries as well,
set "-DGMX_PREFER_STATIC_LIBS=ON". Note, that in general "cmake"
picks up whatever is available, so this option only instructs
"cmake" to prefer static libraries when both static and shared are
available. If no static version of an external library is available,
even when the aforementioned option is "ON", the shared library will
be used. Also note that the resulting binaries will still be
dynamically linked against system libraries on platforms where that
is the default. To use static system libraries, additional
compiler/linker flags are necessary, e.g. "-static-libgcc -static-
libstdc++".
* To attempt to link a fully static binary set
"-DGMX_BUILD_SHARED_EXE=OFF". This will prevent CMake from
explicitly setting any dynamic linking flags. This option also sets
"-DBUILD_SHARED_LIBS=OFF" and "-DGMX_PREFER_STATIC_LIBS=ON" by
default, but the above caveats apply. For compilers which don't
default to static linking, the required flags have to be specified.
On Linux, this is usually "CFLAGS=-static CXXFLAGS=-static" (see the example below).
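Putting these together, an attempt at a fully static build on Linux
might be configured as follows (whether it succeeds depends on which
static libraries your system actually provides):
CFLAGS=-static CXXFLAGS=-static cmake .. -DGMX_BUILD_SHARED_EXE=OFF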
gmxapi C++ API
~~~~~~~~~~~~~~
For dynamic linking builds and on non-Windows platforms, an extra
library and headers are installed by setting "-DGMXAPI=ON" (default).
Build targets "gmxapi-cppdocs" and "gmxapi-cppdocs-dev" produce
documentation in "docs/api-user" and "docs/api-dev", respectively. For
more project information and use cases, refer to the tracked Issue
2585, associated GitHub gmxapi projects, or DOI
10.1093/bioinformatics/bty484.
gmxapi is not yet tested on Windows or with static linking, but these
use cases are targeted for future versions.
Portability aspects
~~~~~~~~~~~~~~~~~~~
A GROMACS build will normally not be portable, not even across
hardware with the same base instruction set, like x86. Non-portable
hardware-specific optimizations are selected at configure-time, such
as the SIMD instruction set used in the compute kernels. This
selection will be done by the build system based on the capabilities
of the build host machine or otherwise specified to "cmake" during
configuration.
Often it is possible to ensure portability by choosing the least
common denominator of SIMD support, e.g. SSE2 for x86. In rare cases
of very old x86 machines, ensure that you use "cmake
-DGMX_USE_RDTSCP=off" if any of the target CPU architectures does not
support the "RDTSCP" instruction. However, we discourage attempts to
use a single GROMACS installation when the execution environment is
heterogeneous, such as a mix of AVX and earlier hardware, because this
will lead to programs (especially mdrun) that run slowly on the new
hardware. Building two full installations and locally managing how to
call the correct one (e.g. using a module system) is the recommended
approach. Alternatively, one can use different suffixes to install
several versions of GROMACS in the same location. To achieve this, one
can first build a full installation with the least-common-denominator
SIMD instruction set, e.g. "-DGMX_SIMD=SSE2", in order for simple
commands like "gmx grompp" to work on all machines, then build
specialized "gmx" binaries for each architecture present in the
heterogeneous environment. By using custom binary and library suffixes
(with CMake variables "-DGMX_BINARY_SUFFIX=xxx" and
"-DGMX_LIBS_SUFFIX=xxx"), these can be installed to the same location.
Linear algebra libraries
~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, sometimes vendor BLAS and LAPACK libraries can
provide performance enhancements for GROMACS when doing normal-mode
analysis or covariance analysis. For simplicity, the text below will
refer only to BLAS, but the same options are available for LAPACK. By
default, CMake will search for BLAS, use it if it is found, and
otherwise fall back on a version of BLAS internal to GROMACS. The
"cmake" option "-DGMX_EXTERNAL_BLAS=on" will be set accordingly. The
internal versions are fine for normal use. If you need to specify a
non-standard path to search, use
"-DCMAKE_PREFIX_PATH=/path/to/search". If you need to specify a
library with a non-standard name (e.g. ESSL on Power machines or ARMPL
on ARM machines), then set