Big Science (HPC)
Samplify’s APAX™ technology accelerates high-performance computing (HPC) applications running in supercomputing data centers and in the cloud. HPC applications break a single problem into millions, billions, or even trillions of smaller elements, which are then distributed over hundreds, thousands, or even hundreds of thousands of computing cores. These applications are also highly iterative, requiring all of the cores to exchange intermediate results at each iteration. Distributing and exchanging these large data sets across so many compute nodes makes HPC applications I/O-bound, so sustained performance is usually less than 10% of the supercomputer’s peak rating.
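The decompose–exchange–compute cycle described above can be sketched in miniature. This is not real MPI code, just a single-process illustration (the function name and the 1-D Jacobi smoothing kernel are illustrative choices, not from the source): the domain is split into per-rank chunks, and before every compute step each "rank" must obtain its neighbors' boundary (halo) values, which in a real HPC code is the communication phase that makes the application I/O-bound.

```python
def jacobi_with_halo_exchange(values, num_ranks, iterations):
    """Split `values` across `num_ranks`, then iterate a Jacobi-style
    smoothing, exchanging halo cells between neighboring ranks each step."""
    # Decompose: partition the global array into per-rank chunks.
    n = len(values)
    chunk = n // num_ranks
    parts = [values[i * chunk:(i + 1) * chunk] for i in range(num_ranks)]

    for _ in range(iterations):
        # Exchange: each rank obtains its neighbors' edge values.
        # In a real HPC code this is the per-iteration communication phase.
        left_halo = [parts[r - 1][-1] if r > 0 else parts[r][0]
                     for r in range(num_ranks)]
        right_halo = [parts[r + 1][0] if r < num_ranks - 1 else parts[r][-1]
                      for r in range(num_ranks)]

        # Compute: each rank updates its own chunk independently.
        new_parts = []
        for r in range(num_ranks):
            padded = [left_halo[r]] + parts[r] + [right_halo[r]]
            new_parts.append([(padded[i - 1] + padded[i + 1]) / 2.0
                              for i in range(1, len(padded) - 1)])
        parts = new_parts

    # Gather the distributed chunks back into one array.
    return [x for part in parts for x in part]
```

With many ranks and large arrays, the exchange step's cost grows with the data volume moved per iteration, which is exactly the traffic that compression of the exchanged operands targets.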
Since 2005, CPUs and GPUs have effectively exploited Moore’s Law for high-performance computing (HPC) applications by adding more cores per die, as CMOS process geometries continue their inexorable shrink. Unfortunately, providing numerical operands to multi-core CPUs and GPUs has not kept pace, because pins to memory and I/O don’t scale with Moore’s Law.
Many applications on NASA’s Pleiades supercomputer achieve less than 1.4% of its peak rating. Using MPI analysis tools, NASA characterized the proportion of time each CPU spent computing versus performing I/O or storage transactions.
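The kind of per-phase attribution those MPI analysis tools perform can be sketched as wall-clock timing around the two phases of an iteration. This is a hypothetical stand-in, not NASA's tooling: `compute_fn` and `exchange_fn` are placeholder callables for the application's compute kernel and its data-exchange step.

```python
import time

def compute_fraction(compute_fn, exchange_fn):
    """Time one iteration's compute and exchange phases separately and
    return the fraction of wall-clock time spent computing -- the metric
    an MPI profiler reports per CPU. Both arguments are illustrative
    stand-ins for the real application's phases."""
    t0 = time.perf_counter()
    compute_fn()                 # compute phase
    t1 = time.perf_counter()
    exchange_fn()                # communication / I/O phase
    t2 = time.perf_counter()
    compute_time, exchange_time = t1 - t0, t2 - t1
    total = compute_time + exchange_time
    return compute_time / total if total > 0 else 0.0
```

A compute fraction far below 1.0 signals an I/O-bound application, the situation the Pleiades measurements revealed.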