TeamHPC Implements the First Dual-Core AMD Opteron-Based Linux Cluster Integrated with the Breakthrough Technology of PathScale’s InfiniPath™ HTX™ InfiniBand Adapter
-- Center for Computational Science and Engineering at the University of California, Davis Purchases 144-CPU AMD Opteron Processor-Based Linux Cluster --
Eudora, KS – August 11, 2005 – TeamHPC, a division of M&A Technology, Inc. and a leading global provider of cluster computing, is implementing the world’s first AMD dual-core Opteron cluster integrated with the breakthrough technology of the PathScale InfiniPath™ HTX™ InfiniBand™ Adapter, the industry's lowest-latency Linux cluster interconnect for Message Passing Interface (MPI) and TCP/IP applications.
The cluster was purchased by the Center for Computational Science and Engineering (CSE) at the University of California, Davis. TeamHPC is delivering a 144-CPU AMD Opteron processor-based Linux cluster that leverages the PathScale InfiniPath interconnect to run computational models and simulations related to physics, discrete mathematics, engineering, biomedical diagnostics, and other processor-intensive HPC applications. The deployment consists of 36 server nodes, each equipped with two dual-core AMD Opteron processors and a PathScale InfiniPath Adapter, interconnected via a Cisco TopSpin 270 InfiniBand switch.
“TeamHPC has the ability to grasp, understand, test and integrate the newest and most innovative high performance supercomputing technologies,” said Bret Stouder, Vice President of TeamHPC. “We enable every customer to remotely access and test their clusters prior to shipment because we believe this is an important step in the process of acquiring a high performance Linux cluster. UC Davis is the latest example of the world-class scientific research institutions that recognize the unique value TeamHPC brings to the HPC industry.”
TeamHPC proposed the PathScale InfiniPath HTX InfiniBand Adapters because of their ultra-low latency, highest effective bandwidth and unprecedented messaging rate. These attributes greatly improve MPI application performance and Linux cluster utilization. The highly pipelined, cut-through design of InfiniPath is optimized for applications sensitive to communication latency. PathScale InfiniPath delivers superior interconnect performance at commodity price levels by connecting directly to the AMD Opteron™ via an open-standard HyperTransport HTX slot, and by using standard InfiniBand switching to scale to hundreds or thousands of nodes.
“We support scientists and academic researchers working to analyze and visualize highly complex physical and biological processes,” said Bill Broadley, an Information Architect at UC Davis. “We require our compute resources to deliver the best possible performance for our many communications-intensive applications. The PathScale InfiniPath Adapter is performing exceptionally thus far.”
Performance results achieved on well-known HPC application benchmarks and in real HPC installations such as the new UC Davis cluster prove that the PathScale InfiniPath Adapter is the world’s highest-performance cluster interconnect. PathScale’s innovative approach to high-speed InfiniBand interconnect reduces the workload required to process messages, enabling a dramatically higher message rate and ultimately increasing the effective bandwidth. This enables users to solve their most challenging computational problems in the shortest possible time.
“TeamHPC and PathScale have collaborated to deliver a high performance research platform to UC Davis that enables scientists and academic researchers to overcome the performance bottlenecks of computing systems of the past,” said Len Rosenthal, VP of Marketing at PathScale. “The combined performance of AMD Opteron processors and the low-latency PathScale InfiniPath interconnect along with complete testing and integration solutions from TeamHPC opens a new chapter in high performance computing, where an economically priced system does not mean compromised performance.”
New performance results have recently been published for InfiniPath including the full Pallas Benchmark Suite and the full HPC Challenge Benchmarks. These latest results validate the performance advantages of PathScale InfiniPath as the highest performance commodity cluster interconnect for Linux-based HPC applications. These results can be viewed at http://www.pathscale.com/infinipath-perf.html
About UC Davis CSE
The Center for Computational Science and Engineering (CSE) at the University of California, Davis focuses on the development and implementation of computational models and simulations as an alternative means of understanding complex physical and biological processes, and on modeling and visualizing entirely abstract processes encountered in physics, mathematics, engineering and computer science. Read more at http://www.cse.ucdavis.edu
About PathScale
Based in Mountain View, California, PathScale develops innovative software and hardware technologies that substantially increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. Applications that benefit from PathScale’s technologies include seismic processing, complex physical modeling, EDA simulation, molecular modeling, biosciences, econometric modeling, computational chemistry, computational fluid dynamics, finite element analysis, weather modeling, resource optimization, decision support and data mining. PathScale’s investors include Adams Street Partners, Charles River Ventures, Enterprise Partners Venture Capital, CMEA Ventures, ChevronTexaco Technology Ventures and the Dow Employees Pension Plan. For more details, visit http://www.pathscale.com , send email to email@example.com or telephone 1-650-934-8100.
About TeamHPC
TeamHPC, a division of M&A Technology, specializes in High Performance Computing and assembles and integrates all of its products in an ISO 9000:2000-certified manufacturing plant. TeamHPC offers researchers the unique opportunity to access their clusters for benchmark and application testing before products are shipped. In carving new paths in the HPC market, TeamHPC also provides a 24-hour data center environment that allows researchers to host their computational machines at M&A Technology's headquarters in Dallas, TX. More information about TeamHPC is available at http://www.teamhpc.com