Obsidian Research Corporation, the leader in InfiniBand range extension, has teamed with Rackable Systems and Cisco Systems to deliver computing, storage, and networking equipment to the University of Florida (UF). This equipment enables high-performance storage across Wide-Area Network (WAN) links at rates approaching wire speed. UF’s InfiniBand-based cluster file system, built with Rackable Systems’ clustered file system technology over the Sockets Direct Protocol (SDP), currently sustains transfer rates of 1.4 GB/s to the underlying Rackable Systems OmniStor™ Fibre Channel (FC) RAID arrays.
Obsidian Research Corp.’s Longbow InfiniBand range extension technology exposes this high-performance, parallel file system to other remote clusters on campus. “The Obsidian Longbow products allow UF to distribute InfiniBand-connected storage over a campus-area or wide-area link at full data rates,” said Dr. David Southwell, CEO of Obsidian Research Corporation. “This capability eases management and sharing of data in UF’s emerging grid infrastructure.” UF will showcase its advanced network technology in real time at Supercomputing ’06 (Booths #2051 and #252).
UF Provides Ideal Case Study for InfiniBand and Range Extension
In October 2005, UF deployed a high-performance compute cluster from Rackable Systems consisting of 200 InfiniBand-connected, dual-processor nodes with AMD Opteron 275 processors (800 cores). UF wanted an I/O subsystem that would complement rather than negate the cluster’s computational capacity; that would utilize the InfiniBand interconnect; and that would scale in performance and capacity as storage needs increased. “We found our solution in Rackable Systems’ clustered file system technology utilizing IB/SDP for iSCSI transport,” said Dr. Craig Prescott from the UF HPC Center. “This I/O solution is capable of sustaining in excess of 1.4 GB/s of aggregate throughput for random write access patterns and will soon be expanded to support over 2.4 GB/s.”
With the local storage issue resolved, UF then needed a solution that allowed remote clusters to access the high-performance storage as simply as possible. “The Obsidian Longbow products were the only solution that allowed us to transparently extend the reach and performance of our InfiniBand storage network,” said Dr. Charles Taylor from the UF HPC Center. “The Longbow Campus product allows us to extend the reach of InfiniBand across our campus, while the Longbow XR would enable us to connect all major Florida universities via the 10 Gb/s Florida LambdaRail (FLR) WAN.”
See InfiniBand Storage and Range Extension in Action at SC06
With the help of InfiniBand infrastructure switches and host channel adapters (HCAs) from Cisco Systems, network connectivity from Florida LambdaRail, and Longbow InfiniBand range extension products from Obsidian, UF will demonstrate remote InfiniBand storage not only across campus, but across 1,100 km of the state of Florida.
In this demonstration, Rackable Systems servers located in the UF/FLR booth (#2051) use InfiniBand to access large data sets stored in arrays of Rackable Systems storage appliances (Booth #252). Obsidian Longbow Campus units transport the iSCSI protocol payload via IB/SDP through a 10 km dark-fiber spool, preserving local performance levels across a simulated campus network.
Identical application software drives I/O traffic to the UF/FLR booth (#2051) from additional storage appliances located in the High Performance Computing facility on the Gainesville campus using Obsidian Longbow XRs, a networking platform that preserves all the performance advantages of InfiniBand across 10GE, ATM, and OC-192 WANs. UF’s demo shows that islands of InfiniBand storage can be aggregated campus-wide while maintaining performance superior to other storage and storage-transport technologies and while simplifying configuration, access control, capacity balancing, and management.