University of Hertfordshire

SPH (smoothed particle hydrodynamics), and Lagrangian approaches to hydrodynamics more generally, are a powerful way to tackle fluid-flow problems. In this scheme, the fluid is represented by a large number of particles, each tracking a mass element as it moves with the flow. Because the scheme does not require a predefined grid, it is well suited to tracking flows with moving boundaries, particularly flows with a free surface, problems that involve the mixing of different fluids, and flows with physically active elements. The method was originally developed to study stellar collisions, but it has since been widely adopted in engineering and cosmology because of its flexibility, adaptivity and multi-physics capability.
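The core of the scheme is a kernel-weighted sum over neighbouring particles. As a minimal illustration (a 1D toy, with one common cubic-spline kernel convention, not tied to any particular code), the density at each particle can be estimated as:

```python
import math

def w_cubic(r, h):
    # Cubic spline smoothing kernel in 1D (normalisation 2/(3h)),
    # with compact support: W = 0 beyond r = 2h.
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density(positions, masses, h):
    # SPH density estimate: rho_i = sum_j m_j * W(|x_i - x_j|, h).
    return [sum(m_j * w_cubic(abs(x_i - x_j), h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]
```

For equally spaced particles carrying mass `m = rho * dx`, the estimate recovers the uniform density away from the ends of the line; production codes evaluate the same sum in 3D with neighbour search and adaptive smoothing lengths.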

The widespread use of SPH, and its potential for adoption across a wide range of science domains, make it a priority use case for the Excalibur project. Massively parallel simulations with billions to hundreds of billions of particles have the potential to revolutionise our understanding of the Universe and will empower engineering applications of unprecedented scale, ranging from the end-to-end simulation of transients in a jet engine to the simulation of tsunami waves overrunning a series of defensive walls.

The working group will identify a path to the exascale computing limit. The group has expertise across both engineering and astrophysics, allowing us to develop an approach that satisfies the needs of a wide community. The group will start from two recent codes that already highlight the key issues.

• SWIFT (SPH with Fine-grained Tasks) implements a cutting-edge approach to task-based parallelism. Breaking the problem into a series of inter-dependent tasks allows for great flexibility in scheduling, and allows communication tasks to be entirely overlapped with computation. The code uses a timestep hierarchy to focus computational effort where it is most needed.
• DualSPHysics draws its speed from effective use of vector processing units (GPUs). This allows the code to gain from exceptional parallel execution of the SPH operations.
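The task-based approach can be illustrated with a toy dependency-driven scheduler (a sketch only, not SWIFT's actual engine): tasks are released for execution as soon as all of their dependencies have completed, so independent work, including communication, can proceed in any order.

```python
from collections import deque

class Task:
    def __init__(self, name, work, deps=()):
        self.name = name
        self.work = work            # callable performing the task's work
        self.deps = list(deps)      # tasks that must finish before this one
        self.unsatisfied = len(self.deps)
        self.dependents = []        # filled in by run_tasks

def run_tasks(tasks):
    # Wire reverse edges, then release each task once its dependencies finish.
    for t in tasks:
        for d in t.deps:
            d.dependents.append(t)
    ready = deque(t for t in tasks if t.unsatisfied == 0)
    order = []
    while ready:
        t = ready.popleft()
        t.work()
        order.append(t.name)
        for dep in t.dependents:
            dep.unsatisfied -= 1
            if dep.unsatisfied == 0:
                ready.append(dep)
    return order
```

In a real code the ready queue is drained by many threads at once, which is where the scheduling flexibility and communication/computation overlap come from; here a single loop suffices to show the dependency logic.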

The working group will build on these codes to identify the optimal approach to massively parallel execution on exascale systems. The working group will benefit from close connections to the hardware technical working group in Durham, driving the co-design of code and hardware. Our ultimate aim is to be able to tackle simulations that are thousands of times more detailed than is currently possible, with consequent benefits to science and engineering.

The particular challenges that we will address are:
- Optimal algorithms for Exascale performance. In particular, we will address the best approaches to adaptive time-stepping and adaptive domain decomposition. The first allows different spatial regions to be integrated forward in time optimally; the second allows those regions to be optimally distributed over the hardware.
- Modularisation and separation of concerns. Future codes need to be flexible and modular, so that a separation can be achieved between integration routines, task scheduling and physics modules. This will make the code future-proof and easy to adapt to the requirements of new science domains.
- CPU/GPU performance optimisation. Next generation hardware will require specific (and possibly novel) techniques to be developed to optimally advance particles in the SPH scheme. We will build on the programming expertise gained in DualSPHysics to allow efficient GPU use across multiple nodes.
- Communication performance optimisation. Separated computational regions need to exchange information at their boundaries. This can be done asynchronously, so that the latency of communication does not slow computation. While this has been demonstrated on current systems, the scale of Excalibur will overload current communication subsystems, and a new solution is required.
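The adaptive time-stepping mentioned above is commonly organised as a power-of-two hierarchy of time bins, so that particles on different timesteps stay synchronised at shared instants. A minimal sketch of that bookkeeping (illustrative function names, not taken from any of the codes above):

```python
import math

def time_bin(dt_wanted, dt_max):
    # Snap a particle's desired timestep down to dt_max / 2**n, the largest
    # power-of-two subdivision not exceeding dt_wanted.
    return max(0, math.ceil(math.log2(dt_max / dt_wanted)))

def bin_dt(n, dt_max):
    # Timestep associated with bin n.
    return dt_max / 2**n

def is_active(step, n, max_bin):
    # With the step counter measured in units of the smallest timestep
    # (dt_max / 2**max_bin), particles in bin n are updated every
    # 2**(max_bin - n) steps, so all bins coincide at step 0, dt_max, ...
    return step % 2**(max_bin - n) == 0
```

Because bin boundaries nest exactly, only the particles whose bins are active need work on a given step, which is how computational effort is focused on the rapidly evolving regions.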

