Linux.com: When you're trying to share files and perform I/O-intensive operations across 100+ nodes of a Beowulf cluster, the old model of a central NFS file server handling every client request breaks down badly. Instead, many cluster admins limit NFS to serving home directories across the cluster and use some form of parallel I/O model for the "real work."
Of course, cluster administrators and users alike crave the benefits of an NFS-like system: admins (in many cases) want logins only on the head node of the cluster, and users want all of their output in one place so they don't have to collect it from the local disks of the individual compute nodes. However, it has been proven time and again that NFS is ill-suited for the kind of pounding it would take in a large, I/O-intensive cluster environment. The answer to this problem has come from researchers, open source projects, and the private sector, in the form of a parallel I/O model.