- Our HPC website: All information on High Performance Computing at the ICP can be found here, from use cases at the ICP and an overview of the computing clusters in use to user guides and a bibliography.
- What is HPC? New to High Performance Computing? Here you can find an introduction to the topic, including its terminology.
Why do we need HPC at the ICP?
The ICP is largely built around performing large-scale simulations, with a focus on soft matter physics, energy materials, active matter and fluid dynamics. Multiscale modeling is often necessary to adequately capture physical properties that span different time or length scales; it couples particle-based and lattice-based algorithms to resolve those scales. Some algorithms can leverage GPU accelerators for lattice-based problems and machine learning, or large-memory compute nodes for large-scale data analysis and simulations involving billions of particles.
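As an illustration of such particle–lattice coupling, the following minimal sketch immerses a molecular dynamics particle in a GPU-accelerated lattice-Boltzmann fluid using ESPResSo, the simulation package developed at the ICP [7, 8]. It assumes the ESPResSo 4.2 Python API; all parameter values are purely illustrative.

```python
import espressomd
import espressomd.lb

# periodic simulation box; the MD time step matches the LB time step
system = espressomd.System(box_l=[16.0, 16.0, 16.0])
system.time_step = 0.01
system.cell_system.skin = 0.4

# a single particle to be advected by the fluid
particle = system.part.add(pos=[8.0, 8.0, 8.0])

# thermalized, GPU-accelerated lattice-Boltzmann fluid
# (espressomd.lb.LBFluid is the CPU variant)
lbf = espressomd.lb.LBFluidGPU(agrid=1.0, dens=1.0, visc=1.0,
                               tau=0.01, kT=1.0, seed=123)
system.actors.add(lbf)

# two-way frictional coupling between the particle and the fluid
system.thermostat.set_lb(LB_fluid=lbf, gamma=1.0, seed=42)

system.integrator.run(1000)
print(particle.pos)
```

The particle exchanges momentum with the fluid through the friction coupling, so hydrodynamic interactions emerge from the lattice solver while the particle dynamics remain fully resolved.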
Which HPC expertise do we have?
The ICP manages its own cluster, Ant, for high-performance parallel computing [1]. Ant is also used for benchmarking parallel algorithms and improving their scalability; in addition, a fleet of GPU-equipped servers is dedicated exclusively to software testing. The ICP has access to the SimTech cluster and to bwHPC resources (bwForCluster, bwUniCluster 2.0). Through the PRACE program, the ICP can apply for computing time at any European HPC facility, and currently has development access to the petascale Vega supercomputer.
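As a hypothetical example of such a benchmark, strong scaling can be measured by timing a fixed workload while varying the number of MPI ranks, e.g. `mpiexec -n 8 python benchmark.py`. The sketch below, again assuming the ESPResSo 4.2 Python API with purely illustrative parameters, times the integration loop of a Lennard-Jones fluid:

```python
import time
import numpy as np
import espressomd

# fixed workload: a Lennard-Jones fluid with 10000 particles
system = espressomd.System(box_l=[32.0, 32.0, 32.0])
system.time_step = 0.01
system.cell_system.skin = 0.4

rng = np.random.default_rng(seed=42)
system.part.add(pos=rng.random((10000, 3)) * system.box_l)
system.non_bonded_inter[0, 0].lennard_jones.set_params(
    epsilon=1.0, sigma=1.0, cutoff=2.5, shift="auto")

# relax overlaps from the random initial configuration
system.integrator.set_steepest_descent(f_max=0, gamma=0.1,
                                       max_displacement=0.01)
system.integrator.run(200)
system.integrator.set_vv()
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=42)

# time the integration loop; rerun with different `mpiexec -n <ranks>`
# to obtain a strong-scaling curve
tick = time.time()
system.integrator.run(1000)
tock = time.time()
print(f"elapsed: {tock - tick:.2f} s")
```

Plotting the elapsed time against the rank count then shows how far the workload scales before communication overhead dominates.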
The University of Stuttgart is an active player in the HPC field: it is a shareholder of the bwHPC initiative [2] and home to the HLRS supercomputing center and the IPVS. The ICP is a member of the Center of Excellence MultiXscale [3, 4]. All ICP PIs are project members of the Cluster of Excellence SimTech [5], and Christian Holm was a member of the SFB 716 [6]. The ICP, the University of Stuttgart and SimTech have agreements to cover the participation fees and travel costs of their staff members and students at EuroHPC training events, in an effort to foster continuous learning in HPC. The ICP employs an HPC-RSE (Jean-Noël Grad) and a generalist RSE (Hideki Kobayashi), the SFB 1313 employs a FAIR-RSE (Hamza Oukili), and SimTech employs an RDM-RSE (Sarbani Roy). Their role is to assist domain scientists in leveraging highly parallel computing environments, in writing quality-assured and future-proof software, libraries and scripts, and in making simulation data archivable, findable and re-usable in compliance with the requirements of funding agencies and academic institutions.
HPC-driven research poses unique challenges in terms of software engineering, energy efficiency, software quality assurance, scientific reproducibility and data management. We actively participate in these discussions and disseminate their outcomes to domain scientists of the University of Stuttgart through regular meetings and seminars. In addition, SimTech organizes the SIGDIUS Seminars, a monthly event where software engineers, data stewards and domain scientists discuss policies, infrastructure and tools. IntCDC organizes a Software Carpentry workshop every semester [14]. The IPVS offers an RSE course, Simulation Software Engineering, every winter semester. The HLRS, SimTech and the University of Stuttgart are founding members of the str-RSE chapter of the German Research Software Engineers association, and manage a rich portfolio of highly extensible research software funded by software engineering grants [7-13].
References
[1] DFG grant 492175459 for Ant.
[2] bwHPC User Steering Committee (LNA-BW).
[3] EuroHPC-JU grant number 101093169: Centre of Excellence in exascale-oriented application co-design and delivery for multiscale simulations.
[4] BMBF grant number 16HPC095: MultiXscale joint project: HPC center of excellence for multi-scale simulations on supercomputers.
[5] DFG grant 390740016: Data-Integrated Simulation Science (SimTech, EXC 2075).
[6] DFG grant 17546514: Dynamic simulation of systems with large particle numbers (SFB 716).
[7] DFG grant 391126171: Fostering an international community to sustain the development of the ESPResSo software package.
[8] DFG grant 528726435: Strengthening the quality and user base of the research software ESPResSo for particle-based simulations.
[9] DFG grant 391150578: PreDem – Democratization of the coupling library preCICE.
[10] DFG grant 391049448: Sustainable infrastructure for the improved usability and archivability of research software on the example of the porous-media-simulator DuMux.
[11] DFG grant 391302154: Research software sustainability for the open-source particle visualization framework MegaMol.
[12] DFG grant 265686075: Load-balancing for scalable simulations with large particle numbers.
[13] CoE MultiXscale WP1+WP2: ESPResSo performance, productivity and portability (subprojects of EuroHPC-JU grant number 101093169).
[14] Software Carpentries are announced on the University Events feed. A list of past and future events can be found on the IntCDC GitHub page.