Share and Share Alike: The State of Cluster Resource Management and Scheduling
Posted by Ken Farmer, Tuesday June 19 2007 @ 11:47AM EDT
Linux Magazine: In a perfect world, everyone would have their own supercomputer. Sadly, we don't live in a perfect world, and so clusters and other high performance computing systems tend to be shared resources. Unfortunately, once the number of simultaneous users on a system reaches double digits, scheduling methods that involve yelling down the hall "Is anybody using the machine now?" become impractical. One solution to this is to impose batch processing on the user community. This requires all users to submit their work to a central point of control that handles scheduling access to the system. Batch processing allows equitable access to the computing resource (making everyone more or less equally unhappy), but it also allows the system administrators to schedule the resource based on the goals and policies of the organization.
* Linux Magazine login required to access article.
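To make the idea concrete, here is a minimal sketch of what "submitting work to a central point of control" looks like in practice with a PBS/Torque-style batch system (common on Linux clusters of this era). The job name, resource requests, and executable are hypothetical examples, not anything from the article:

```shell
#!/bin/bash
# Hypothetical PBS/Torque job script: the user describes what the job
# needs, and the scheduler decides when and where it runs.

#PBS -N example_sim          # job name (example only)
#PBS -l nodes=4:ppn=2        # request 4 nodes, 2 processors per node
#PBS -l walltime=01:00:00    # tell the scheduler how long the job may run
#PBS -q batch                # submit to the default batch queue

# PBS starts the job in the user's home directory; change to the
# directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Launch the parallel work across the allocated processors.
mpirun -np 8 ./my_simulation
```

The user submits this with `qsub jobscript.sh` and walks away; the scheduler enforces the site's policies (queue limits, fair-share, priorities) instead of anyone yelling down the hall.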