NSF’s five-year goal for high performance computing (HPC) is to enable petascale science and engineering through the deployment and support of a world-class HPC environment comprising the most capable combination of HPC assets available to the academic community. By the year 2010, the petascale HPC environment will enable investigations of computationally challenging problems that require computing systems capable of delivering sustained performance approaching 10^15 floating point operations per second (petaflops) on real applications, that consume large amounts of memory, and/or that work with very large data sets. Among other things, researchers will be able to perform simulations that are intrinsically multi-scale or that involve the simultaneous interaction of multiple processes.
Video - The Road to PetaFlop Computing
Explore the Scalable Unit concept, in which multiple clusters of various sizes can be rapidly built from identical building blocks and deployed into production. This architectural approach dramatically lowers total cost of ownership.
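As a rough illustration of the Scalable Unit idea, the C sketch below shows how cluster capacity scales linearly when systems are assembled by replicating one fixed building block. The figures used (144 nodes per unit, 8 cores per node, 10 GFLOPS per core) are hypothetical placeholders, not Appro's actual unit specification.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical scalable-unit building block; Appro's actual
           unit composition is not specified in this article. */
        const int nodes_per_unit     = 144;  /* assumed nodes per unit */
        const int cores_per_node     = 8;    /* 2 sockets x quad-core  */
        const double gflops_per_core = 10.0; /* assumed peak per core  */

        /* Clusters of various sizes are built by replicating the unit. */
        for (int units = 1; units <= 4; units++) {
            int nodes = units * nodes_per_unit;
            double tflops = nodes * cores_per_node * gflops_per_core / 1000.0;
            printf("%d unit(s): %4d nodes, %6.1f TFLOPS peak\n",
                   units, nodes, tflops);
        }
        return 0;
    }

Because every unit is identical, each added unit contributes the same known capacity, which is what makes rapid build-out and predictable deployment possible.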
White Paper - Optimized HPC Performance
Multi-core processors present a unique set of challenges and opportunities for the HPC market. Discover MPI strategies for next-generation quad-core processors.
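One widely used multi-core MPI strategy is the hybrid MPI + OpenMP model: one MPI rank per processor socket, with OpenMP threads filling the cores of that socket. The C sketch below illustrates the pattern for a quad-core socket; it is a generic example of the approach, not the specific strategies presented in the white paper, and the choice of four threads per rank is an assumption matching a quad-core part.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;

        /* Request FUNNELED thread support: OpenMP threads may run,
           but only the master thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Four OpenMP threads per rank, matching a quad-core socket;
           threads share the socket's memory, so no intra-socket
           message passing is needed. */
        #pragma omp parallel num_threads(4)
        {
            printf("rank %d/%d, thread %d/%d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Launched with one rank per socket (for example, mpirun -np 2 on a two-socket quad-core node), this keeps MPI message traffic between sockets only, which is one common way to reduce communication overhead on multi-core systems.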
Appro and the Three National Laboratories
Appro delivers a new breed of highly scalable, dynamic, reliable and effective Linux clusters to create the next generation of supercomputers for the National Laboratories.