HPCwire: Natural science can be understood as the process of developing models that predict the behavior of the natural world, and we celebrate as great science the creation of the simplest models that give accurate predictions. Computer architecture seems, over the past decade or two, to have moved in the opposite direction, glorifying complexity at the expense of understandability and predictability, and even of performance and usability. Highly speculative out-of-order superscalar microprocessors with northbridges and southbridges, graphics adapters, and RAID controllers have evolved out of what was once the modest domain of hobbyists.
In and of itself, there's nothing wrong with the fact that the hardware and software of modern PCs are complex; they have adapted very successfully to the needs of home and office users, to the point of becoming nearly indispensable for civilization as we know it. But that complexity does make it next to impossible to create accurate models of their performance, and hence to design software that performs efficiently. And when your application is running for days or weeks at a time on hundreds or thousands of computers, you care about its efficiency.