
Software

OpenHPC Stack

The cluster runs the OpenHPC stack on top of the CentOS 7.5 operating system.

| Component | Available |
| --- | --- |
| Base OS | CentOS 7.5 x86_64 |
| Compilers | GNU 6 (gcc, g++, gfortran), Intel |
| Math/numerical libraries | BLAS, LAPACK, OpenBLAS, ATLAS, MKL, ScaLAPACK |
| MPI libraries | OpenMPI, MPICH, MPICH2, MVAPICH, Intel MPI (IMPI) |
| I/O libraries | HDF5 (pHDF5), NetCDF |
| Development tools | Autotools (autoconf, automake, libtool), Valgrind |
| Debugging and profiling tools | gprof, TAU |

Some of the underlying management components are:

| Component | Available |
| --- | --- |
| Node provisioning | Warewulf |
| Resource management | SLURM |
| Software provisioning | Modules (built using Lmod/EasyBuild/Spack) |
| Cluster monitoring | Ganglia, Nagios |

Application software

A wide range of software, both discipline-specific and general-purpose (Python, R, Stata), will be pre-compiled and provisioned as modules that users can load at run time. If there is particular software users want to use, they can submit a request to have it installed in a central location; otherwise, they can install it in their own area for personal use. Users who prefer working with containers are encouraged to use Singularity, which is better suited to HPC environments than Docker. A short example of building a Singularity image from an existing Docker image follows below.
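
A minimal sketch of pulling a Docker Hub image into Singularity. The singularity module name and the python:3.9 image are assumptions for illustration; substitute the image you actually need:

```bash
# Load the Singularity module (module name is an assumption; check `module avail`)
module load singularity

# Pull an image from Docker Hub and convert it to Singularity's format
# (the output filename varies by Singularity version)
singularity pull docker://python:3.9

# Run a command inside the pulled image (adjust the filename to what pull produced)
singularity exec python_3.9.sif python --version
```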

Provisioning software

Our software environment uses Linux environment modules to configure the user environment for each package. The available software modules also include preconfigured compiler toolchains, or programming environments, which bundle parallel compiler wrappers with their associated MPI stacks. Workflow tools that may help with your applications are available as well.
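
As a concrete illustration, a typical workflow loads a compiler and a matching MPI stack, then compiles with the corresponding parallel wrapper. A minimal sketch; the gnu6 and openmpi module names are assumptions, so check `module avail` for your site's exact names:

```bash
# Load a compiler toolchain and a matching MPI stack
# (gnu6 and openmpi are assumed module names; see `module avail`)
module load gnu6 openmpi

# Compile an MPI program with the parallel compiler wrapper,
# which supplies the MPI include and library flags automatically
mpicc -O2 -o hello_mpi hello_mpi.c
```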

Modules

The HPC software environment uses Linux environment modules to manage versions and dependencies of software packages. When you load a module, it sets the environment variables necessary for running your program.

A list of available software modules can be viewed by typing `module avail`.

A list of software modules that are currently loaded can be viewed by typing `module list`.
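
For example, a short session might look like the following (the gnu6 module name is an assumption; your site's list will differ):

```bash
# See everything that can be loaded
module avail

# Load a compiler, then confirm what is active
module load gnu6
module list

# Unload all loaded modules when switching toolchains
module purge
```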

By default, the local repository is used as the source of software installations.

Additional information on HPC modules may be found here.

Notes on Specific Software Usage

Singularity Containers over MPI-IB

By default, Singularity does not use the InfiniBand libraries when doing message passing with MPI. To ensure Singularity uses the InfiniBand libraries with MPI, perform the following step after loading the Singularity module:

```bash
source sourceme_for_mpioverib
```

Following the above step, Singularity containers should use the InfiniBand libraries when running MPI applications.
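
Putting the pieces together, an MPI run with a Singularity container might look like the sketch below. The singularity module name, the app.sif image, and the ./mpi_app binary are hypothetical placeholders:

```bash
# Load Singularity, then enable the InfiniBand libraries for MPI
module load singularity
source sourceme_for_mpioverib

# Launch the containerized MPI application on 4 ranks
# (app.sif and ./mpi_app are placeholders for your image and binary)
mpirun -np 4 singularity exec app.sif ./mpi_app
```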