Computing Platforms (offline)

Cisco Data Plane Computing System (DPCS)

In the DPCS project we will investigate modern operating-system concepts, addressing performance challenges arising from the evolution of hardware towards high numbers of cores per server, heterogeneous designs, high system I/O capability, and increasing functionality in network interface hardware. We will focus on data-plane operating systems for virtualized high-throughput I/O and on multicore system scalability beyond a single chassis.
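
As a rough illustration of the data-plane idea, the sketch below (illustrative names only, not DPCS code) shows the busy-polling pattern such systems rely on: a dedicated worker thread drains a lock-free single-producer/single-consumer ring, standing in for a NIC receive queue, without ever blocking in the kernel.

    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <thread>

    // Lock-free single-producer/single-consumer ring, standing in for a
    // NIC receive queue. head and tail are monotonically increasing.
    struct Ring {
        static constexpr std::size_t N = 1024;          // queue depth (power of two)
        std::array<std::uint64_t, N> slots{};
        std::atomic<std::size_t> head{0}, tail{0};

        bool push(std::uint64_t pkt) {                  // producer side ("NIC")
            std::size_t t = tail.load(std::memory_order_relaxed);
            if (t - head.load(std::memory_order_acquire) == N) return false; // full
            slots[t % N] = pkt;
            tail.store(t + 1, std::memory_order_release);
            return true;
        }
        bool pop(std::uint64_t& pkt) {                  // consumer side (worker)
            std::size_t h = head.load(std::memory_order_relaxed);
            if (h == tail.load(std::memory_order_acquire)) return false;     // empty
            pkt = slots[h % N];
            head.store(h + 1, std::memory_order_release);
            return true;
        }
    };

    int main() {
        Ring ring;
        std::atomic<bool> done{false};

        // Worker: busy-polls the queue; no syscalls, no sleeping.
        std::thread worker([&] {
            std::uint64_t pkt = 0, processed = 0;
            for (;;) {
                if (ring.pop(pkt)) { ++processed; continue; }   // "process" the packet
                if (done.load(std::memory_order_acquire)) {
                    while (ring.pop(pkt)) ++processed;          // drain what remains
                    break;
                }
            }
            std::cout << "processed " << processed << " packets\n";
        });

        for (std::uint64_t i = 0; i < 1000000; ++i)
            while (!ring.push(i)) { /* queue full: keep polling */ }
        done.store(true, std::memory_order_release);
        worker.join();
        return 0;
    }

Kernel-bypass frameworks extend this pattern with one polling thread per hardware queue, each pinned to its own core, which is what makes scaling to high core counts and high I/O rates possible.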


Intel Code Modernization

The increased computing requirements of the LHC's second run mean it is more important than ever to optimize high-energy physics codes for new computing architectures. The project covers four use cases:

  1. The Geant simulation toolkit, widely used in space, particle-physics, and medical research, with the aim of a performance boost of up to 5x thanks to experts and tools from Intel's Software and Services Group. This use case is implemented in collaboration with the Intel IPCC program.
  2. FairRoot (in collaboration with GSI), a framework used by a large number of HEP experiment collaborations. Key parts of the framework will be redesigned to make optimal use of the Xeon Phi architecture.
  3. BioDynaMo (in collaboration with Innopolis University, Kazan Federal University, and the School of Computing Science at Newcastle University), in which code used for biological simulation is rewritten from Java to C++. The rewriting is combined with a re-engineering of the data structures to make optimal use of Intel CPU vector instructions and of the Xeon Phi architecture (see the sketch below).
  4. Several Beams Department injector simulation codes. These codes are written in a mix of FORTRAN, C/C++, and Python; some already use MPI. Additional optimization possibilities will be researched.

Within this project, Intel is also delivering on-site workshops covering the latest Intel software tools, together with training on code vectorization techniques.
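
The data-structure re-engineering mentioned in use case 3 can be sketched as follows (illustrative names only, not project code): moving from an array-of-structures (AoS) layout to a structure-of-arrays (SoA) layout gives the hot loop unit-stride memory access, which compilers can auto-vectorize for Intel SIMD instruction sets.

    #include <cstddef>
    #include <vector>

    // AoS: one object per particle. The x values of consecutive particles
    // are six doubles apart in memory, which defeats unit-stride vector loads.
    struct ParticleAoS { double x, y, z, px, py, pz; };

    // SoA: each coordinate is contiguous, so a single vector load fills a
    // SIMD register with values from consecutive particles.
    struct ParticlesSoA {
        std::vector<double> x, y, z, px, py, pz;
        explicit ParticlesSoA(std::size_t n)
            : x(n), y(n), z(n), px(n), py(n), pz(n) {}
    };

    // Advance all particles by one step of length dt. The loop has
    // unit-stride accesses and no cross-iteration dependencies, a pattern
    // compilers (and tools such as Intel Advisor) readily vectorize.
    void propagate(ParticlesSoA& p, double dt) {
        const std::size_t n = p.x.size();
        for (std::size_t i = 0; i < n; ++i) {
            p.x[i] += p.px[i] * dt;
            p.y[i] += p.py[i] * dt;
            p.z[i] += p.pz[i] * dt;
        }
    }

    int main() {
        ParticlesSoA particles(1024);
        propagate(particles, 0.01);
        return 0;
    }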


Huawei ARM64 Porting, Optimisation, and Benchmarking Environment

The LHC experiments and the CERN computing and data infrastructure make use of a large number of analysis tools. It is important to continuously monitor advances and trends in technology and to evaluate software on different computing platforms as they evolve. As part of this collaboration, several widely used codes will be ported to ARM-based architectures. A study will then test and measure performance, energy consumption, and operational aspects, in order to understand the strengths and weaknesses of the architecture.
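
As a sketch of the kind of portable micro-benchmark such a study might use (illustrative only, not code from this collaboration), the C++ source below builds unmodified on both x86_64 and aarch64, so throughput figures can be compared directly across platforms; energy consumption would be measured separately, for example with external power meters or hardware counters.

    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Placeholder kernel: a dot product over two large arrays.
    double dot(const std::vector<double>& a, const std::vector<double>& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    int main() {
        const std::size_t n = 1 << 22;              // ~4M elements per array
        std::vector<double> a(n, 1.5), b(n, 2.0);
        volatile double sink = 0.0;                 // keep the result live

        const int reps = 50;
        auto t0 = std::chrono::steady_clock::now();
        for (int r = 0; r < reps; ++r) sink = sink + dot(a, b);
        auto t1 = std::chrono::steady_clock::now();

        double sec = std::chrono::duration<double>(t1 - t0).count();
        // Two flops (multiply + add) per element per repetition.
        std::cout << (2.0 * n * reps / sec) / 1e9 << " GFLOP/s\n";
        return 0;
    }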

