Linux clusters have become so successful that they have proliferated internationally through research labs, universities, and large industries that need an inexpensive source of high-performance computing cycles. Developers and users have pushed the technology by scaling their applications to more and more processors so that larger problems can be solved more quickly. This has produced clusters where some applications actually become I/O bound: moving data to and from a large number of processors limits the application's performance.
Most Linux clusters use NFS (the Network File System) to share data among nodes and provide a consistent name space across all machines. As a result, parallel applications (executing simultaneously on multiple processors) typically read data from files stored on a single disk on a single server.
While NFS works well for small clusters (fewer than 32 nodes), its performance degrades rapidly once more than 64 nodes start making simultaneous I/O requests. These requests saturate the NFS server that stores the files of interest.
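To make that access pattern concrete, the sketch below shows the kind of job described above: a minimal MPI program in C in which every process opens the same file over NFS and reads its own slice. The file path /nfs/data/input.dat and the 64 MB slice size are illustrative assumptions, not taken from any particular application; the point is simply that many independent read streams all funnel into a single server.

```c
/* Hypothetical sketch: each MPI rank reads its own slice of one shared
 * input file that lives on a single NFS server. The path and slice size
 * below are made up for illustration. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SLICE_BYTES (64 * 1024 * 1024)  /* 64 MB per process, for example */

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Every process opens the same file over NFS... */
    FILE *fp = fopen("/nfs/data/input.dat", "rb");
    if (fp == NULL) {
        fprintf(stderr, "rank %d: cannot open input file\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ...and seeks to its own offset, so nprocs independent read
     * streams hit the one server at roughly the same time.
     * (fseeko would be safer for very large offsets.) */
    char *buf = malloc(SLICE_BYTES);
    if (buf == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);
    fseek(fp, (long)rank * SLICE_BYTES, SEEK_SET);
    size_t got = fread(buf, 1, SLICE_BYTES, fp);
    printf("rank %d of %d read %zu bytes\n", rank, nprocs, got);

    free(buf);
    fclose(fp);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched on, say, 256 processors, all 256 read streams converge on the single NFS server at once, which is exactly the load pattern that causes the saturation described above.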