LinuxWorld.au: Size matters in supercomputers because size translates into speed. And supercomputers are all about speed. The quest for the fastest computer to discover new drugs, crack ciphertext or model global weather and nuclear reactions has set a lot of records in a short time.
Supercomputers are loosely defined by IDC as systems that cost more than US$1 million and are used in very-large-scale numerical and data-intensive applications. Today, their power is measured in trillions of floating-point operations per second, or teraflops (TFLOPS).
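To make the TFLOPS unit concrete, here is a minimal, illustrative sketch that times a batch of multiply-add operations and estimates achieved floating-point operations per second. The function name and loop size are illustrative choices, not from the article; note that interpreted Python reaches only megaflops, and supercomputer rankings are actually measured with dense linear-algebra benchmarks such as LINPACK, where one TFLOPS equals 10^12 floating-point operations per second.

```python
import time

def estimate_flops(n=1_000_000):
    """Rough estimate of floating-point operations per second.

    Each loop iteration performs one multiply and one add,
    so n iterations correspond to roughly 2*n floating-point ops.
    """
    x, acc = 1.0000001, 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += x * x
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed

flops = estimate_flops()
# Express the result relative to one teraflop (10**12 FLOP/s);
# plain Python will land many orders of magnitude below 1.0.
print(f"~{flops / 1e12:.9f} TFLOPS achieved")
```

Run as-is, this prints a tiny fraction of a teraflop, which is exactly the gap that the massively parallel machines described here are built to close.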