Linux Magazine: High-performance computing (HPC) using clusters has come a long, long way from its early days. Back then, a cluster was a network of disparate workstations, often sitting on people’s desks, harnessed together into a Parallel Virtual Machine (PVM) computation, and nascent Beowulf clusters consisted of cheap tower PCs literally stacked up on shelves.
Early on, cluster computing was done by only a tiny handful of people: usually computer scientists who were working on building the future, or people doing real science (with computers) who needed the future a bit before it was ready for them. To those pioneers, a “network” generally meant 10Base-2 Ethernet, a daisy-chain of RG-58 coaxial cable terminated with resistors at both ends, into which a coaxial “T” connector was inserted at each machine; nifty (and expensive) little AUI transceiver doohickeys connected very expensive Unix workstations to the cable. Or worse, it meant thickwire Ethernet, which used a bloodsucking device known as a “vampire tap” (preferably clamped on at one of the 2.5-meter marks to minimize reflections) to make the actual connection to the wire.