The Good, The Bad, and the Ugly of Linux Clustering
In my last Linux article, entitled "Low Fees, No Fluster, With Today's Linux Clusters," I provided a brief overview of Linux's expanding role in the HPTC arena as well as in other market segments craving high performance at popular prices. As promised, here's a follow-up piece that discusses the birth of Linux clustering as well as some of the good and bad points of this new approach to computing on the cheap.
Cheap Computers Commingle With Linux and Beowulf
VAXclusters appeared on the scene in about 1983, but it wasn't until 1994 that a group of NASA engineers developed the first Linux cluster, which they promptly awarded the "Beowulf" sobriquet in honor of the hero of the epic poem. The birth of Beowulf was an exercise in equal parts scavenging and savvy: the engineers resurrected 16 Intel 486-based PCs that had been consigned to the trash heap, lashed the systems together with 10Mbps Ethernet, and shoehorned Linux onto the aggregation as a distributed operating system. The result was a parallel compute engine composed of technically obsolete hardware, a free operating system, and a lot of hard work. The economy-class cluster achieved roughly 70 MFLOPS at a cost of around $40K, roughly ten percent of the cost of a commercial computer capable of 70 MFLOPS in 1994. Since then, "Beowulf" has been used to describe a class of Linux clusters that leverage a similar economy-class architecture to deliver high performance at bargain-basement prices.
Faster, Cheaper, But What About Security?
It didn't take early adopters long to conclude that Linux clustering boosted processing speed, increased transaction throughput, and improved reliability. But high performance and low cost came with a new issue: security. Because Linux is Open Source software, no single entity controls its growth or mandates security requirements. To the paranoid, this situation borders on OS anarchy. Ironically, the same paranoid individuals will cheerfully eat Microsoft dog food without a care in the world, despite the myriad patches, mandatory updates, service packs, and whatnot that Microsoft must distribute on an all-too-frequent basis to address fundamental flaws and security abysses in its products. Granted, far more people use Microsoft products than their Linux counterparts, so it's difficult to quantify the relative security of the two OSes. Time will tell, but SKHPC still thinks the smart money is on Linux! With most Linux clusters invisible to the public Internet and hidden behind firewalls, these systems are inherently less vulnerable to hacking than high-profile Windows-powered sites. And, as we mentioned in our last article, the U.S. National Security Agency is busy armor-plating Linux. Word has it that Microsoft's security credentials were issued not by security organizations, but by acts of the U.S. Congress.
Lots of Bang for the Buck
In rapidly emerging life science enterprises, applications such as drug discovery, protein folding, human genome research, and defensive measures against potential biowarfare weapons are generating enormous amounts of data and underscoring the need for radically new, highly cost-efficient approaches to computing. In these realms, the Linux network-of-nodes approach, wherein each PC is a node, is a great fit. Programs like SETI@Home and United Devices take somewhat similar approaches by scavenging spare cycles from millions of interconnected computers. In the Linux space, however, bang for the buck is what renders the OS attractive. In general, Linux clustering delivers a minimum fivefold improvement in price-performance over HPTC offerings from traditional IT vendors. And customers are catching on: Linux is asserting a growing presence on the Top 500 list of computing sites. And why not? For a fraction of the cost of a top-of-the-line Sun or IBM server, you can buy a slew of CPUs, lash them together with low-cost cluster interconnects, throw on Linux and Beowulf software, and go to town in the HPTC space. IBM touts the fact that its mainframes run Linux; HP's Superdome can do the same thing. But while both platforms can run Linux, so can far more economical alternatives. Suffice it to say that most customers will not be gulled by a sales pitch wherein a smiling salesman says "But this million-dollar box will run Linux!" After all, equivalent performance can be had with a cluster of aging IA-32 Linux boxes!
Wanted: More Linux Expertise
None of these wonderful things happens auto-magically with Linux clusters, so users need specialized understanding of the OS. Linux consultancies, training courses, workshops, and even vendor certification programs are springing up. In the meantime, Linux is generally a familiar environment for "propellerheads" such as laboratory scientists and bioinformatics researchers. Most of these experts became familiar with open-source software during their college computer studies, since Linux enjoys widespread use in price-sensitive academic environments.
Scalability and Performance Soar
One California-based genomics information firm, which would prefer to remain anonymous, claims that it slashed its computing costs by roughly 95 percent when it migrated to Linux clusters about three years ago. Given the decline in the cost of proprietary systems, the savings today would be less staggering, but it doesn't take a math major to figure out that a 128-node Linux cluster selling for perhaps $100K can do the same job as a $1M Sun UE10K Starfire. What's more, the Linux cluster isn't subject to the onerous licensing and maintenance costs that accompany big iron.
Big iron is far from obsolete, as it generally houses the databases and data warehouses that contain information on gene structure, sequence, and function. This data is used by pharmaceutical and biotech companies for drug development and scientific discovery. That said, about half the firm's 4,500 processors from Compaq, Sun, Intel, and SGI run Linux. The remainder handle tasks that Linux isn't ready to take on yet, such as apps that demand low latency and extremely high bandwidth. Still, it's estimated that Linux can accommodate some 80 percent of common HPTC apps.
The Management Mare's Nest
As in the past, the 80-20 Rule holds true with Linux clusters. The biggest challenge facing Linux today is developing and maintaining industrial-strength cluster management tools. Proprietary Linux management apps are all well and good, but they make it nearly impossible to move apps easily from one computing resource to another. Hence, unused processing power remains unused rather than being reallocated. In many early Linux cluster implementations, system administrators wrote their own scripts for adding users, configuring an application, or cross-mounting a new network file system partition. These added administration costs cut into the initial savings provided by Linux clustering. The firm in question opted to purchase Platform Computing Inc.'s LSF management platform to handle these tasks and others.
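To give a flavor of the hand-rolled administration work described above, here is a minimal sketch of the kind of per-node setup script an early cluster admin might have maintained. The user, server, and path names are illustrative assumptions, not from any particular site; the script defaults to a dry run that prints the commands, since the real ones (useradd, mount) require root on each node.

```shell
#!/bin/sh
# Hypothetical node-provisioning helper of the sort early Linux cluster
# admins wrote by hand. All names here are illustrative assumptions.
# DRY_RUN=1 (the default) prints each command instead of executing it.

DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "${DRY_RUN}" = "1" ]; then
        echo "$@"       # dry run: show what would be executed
    else
        "$@"            # real run: execute the command (requires root)
    fi
}

# Provision one compute node: create a user account and cross-mount
# the head node's shared home export via NFS.
provision_node() {
    user=$1; server=$2; export_path=$3; mount_point=$4

    # Account with its home directory on the shared filesystem
    run useradd -m -d "${mount_point}/${user}" "${user}"

    # Cross-mount the head node's export on this node
    run mkdir -p "${mount_point}"
    run mount -t nfs "${server}:${export_path}" "${mount_point}"
}

provision_node jsmith headnode /export/home /home
```

Multiply this by every node in a 128-node cluster, and by every user, application, and filesystem change, and the hidden administration cost becomes clear; this is exactly the drudgery that packaged tools like LSF were bought to eliminate.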
New And Improved Tools
The firm also had the in-house Linux expertise to build its management tools, but was eager to avoid the effort if possible. The company chose Linux NetworX' ICE Box, which provides serial switching, remote power control, and system monitoring capabilities. The product helps the firm focus on finding new genes rather than on server operation and maintenance. The time saved by finding an off-the-shelf Linux management tool has shortened the time to market for products, say officials at several firms, eliminating their need to build proprietary tools.
Scarcely any Linux cluster management tools were available in 2000. Today, Linux cluster suppliers are developing both open-source and proprietary cluster management products. Some commercial suppliers are building from scratch; others, such as Red Hat Inc., are picking, choosing, and using various pieces of open-source software to shorten their development cycles. Still other vendors, including HP, SteelEye Technology Inc., and Veritas Software Corp., are taking proprietary Unix cluster technology and modifying it to run on Linux. And Platform Computing recently announced its Platform Clusterware for Linux, the first hardware-independent support solution for cluster management.
Today's Linux clusters are rivaling the throughput capabilities of legacy mainframes and current enterprise server offerings from the likes of HP and IBM. As these Linux clusters play a larger role in the HPTC realm, the HPs and IBMs of the world will be forced to come out with bigger, faster, and cheaper enterprise servers that accommodate Linux as well as proprietary OSes. As usual, the customer will be the Big Winner in this race.
PART 1: Low Fees, No Fluster, with Today's Linux Cluster...
© 2003 by Terry C. Shannon, Consultant and Publisher, SKHPC
Terry C. Shannon, consultant and publisher of "Shannon Knows HPC," has more than 25 years' experience in the IT industry as a system manager/administrator, programmer, analyst, journalist, and consultant. Mr. Shannon's opinions are his own and do not necessarily reflect the opinion of this website. He can be reached at email@example.com or via his website at http://www.shannonknowshpc.com. He welcomes your feedback and suggestions.