Posted by: Rachel Zelal on Wednesday October 04 2006 @ 09:33PM EDT views: 1139
Deep Thunder is a web-based, business-oriented service that delivers weather forecasts precise and timely enough to address specific business problems and streamline weather-sensitive operations.
Posted by: Dana D Booze on Wednesday October 04 2006 @ 08:00AM EDT views: 1434
Northrop Grumman Corporation, in partnership with NASA Goddard Space Flight Center, Intel Corp. and Silicon Graphics, Inc., has delivered a unique high performance computing solution to NASA to improve hurricane forecasts. NASA will use the solution for its African Monsoon Multidisciplinary Analyses (NAMMA) campaign.
Posted by: Maria McLaughlin on Saturday September 30 2006 @ 11:05PM EDT views: 1286
Appro and CyrusOne Compute On Demand Solution Scores Major Win with Top Oil and Gas Company for over 2000 Nodes of High Performance Computing
Posted by: Kenneth Farmer on Wednesday September 27 2006 @ 11:45AM EDT views: 3246
Addressing the rapid migration to Linux clusters, Platform Computing is introducing Platform Open Cluster Stack (OCS), a modular and hybrid stack that transparently integrates open source and commercial software into a single consistent cluster operating environment. Platform OCS is a pre-integrated, vendor certified software stack that assures the consistent delivery of scale-out application clusters. This enhanced software stack eliminates the higher costs of development, sales and support caused by inconsistent software development and certification in Linux clusters.
Posted by: Kenneth Farmer on Tuesday September 26 2006 @ 08:27PM EDT views: 1595
NewsOK.com: ..."Your half-life of being on the high end is on the order of two or three years," said Stephen Wheat, senior director for high-performance computing for Intel and a graduate of Tulsa's Booker T. Washington High School. "Within five years, it's easily well surpassed by much less expensive and smaller systems."...
Posted by: Dana D Booze on Tuesday September 26 2006 @ 08:28AM EDT views: 1288
Mellanox 20Gb/s InfiniBand and Dual-Core Intel Xeon-Based Servers Empower Mellanox Advanced Development and Customer Test-Bed Environment
Posted by: Kenneth Farmer on Monday September 25 2006 @ 10:56AM EDT views: 1646
Ray Kurzweil, described as “the restless genius” by the Wall Street Journal, and “the ultimate thinking machine” by Forbes, will be the keynote speaker at SC06, the premier international conference on high performance computing, networking, data storage and analysis. Under the theme "Powerful Beyond Imagination," SC06 will be held November 11-17, 2006, in Tampa, Florida.
Posted by: Kenneth Farmer on Friday September 22 2006 @ 06:42AM EDT views: 1801
Linux.com: In the computing world, the term "cluster" refers to a group of independent computers combined through software and networking, often used to run highly compute-intensive jobs. With a cluster, you can build a high-speed supercomputer out of hundreds or even thousands of relatively low-speed systems. Cluster management software offers an easy-to-use interface for managing clusters, and automates the process of queuing jobs, matching the requirements of a job to the resources available in the cluster, and migrating jobs across the cluster. Here's an introduction to five open source CMS applications.
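The matching step described above can be sketched in a few lines. This is a minimal, illustrative first-fit scheduler, not the algorithm used by any of the five reviewed applications; the `Node`, `Job`, and `schedule` names are assumptions made for the example.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpus: int

@dataclass
class Job:
    name: str
    cpus_needed: int

def schedule(jobs, nodes):
    """First-fit matching: give each queued job to the first node
    with enough free CPUs; jobs that fit nowhere stay queued."""
    queue = deque(jobs)
    placements = {}      # job name -> node name
    still_queued = []    # jobs waiting for resources
    while queue:
        job = queue.popleft()
        for node in nodes:
            if node.free_cpus >= job.cpus_needed:
                node.free_cpus -= job.cpus_needed
                placements[job.name] = node.name
                break
        else:
            still_queued.append(job)
    return placements, still_queued

nodes = [Node("node01", 4), Node("node02", 8)]
jobs = [Job("sim", 6), Job("render", 4), Job("big", 16)]
placed, waiting = schedule(jobs, nodes)
print(placed)                      # {'sim': 'node02', 'render': 'node01'}
print([j.name for j in waiting])   # ['big']
```

Real cluster management systems layer priorities, backfill, and job migration on top of this basic requirement-to-resource matching loop.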
Posted by: Nick Ihli on Wednesday September 20 2006 @ 12:53PM EDT views: 1564
Provo, Utah – Cluster Resources, Inc. and LinuxHPC.org announced today the release of ClusterBuilder.org version 1.3, featuring the new Clustering Encyclopedia – a specialized reference source of high-performance computing (HPC) technologies and products.
Posted by: Kenneth Farmer on Tuesday September 19 2006 @ 11:42AM EDT views: 2076
TGDaily: The most powerful computing device in your PC may not be that dual-core processor, but your average graphics card. Interest in tapping the hidden processing power in graphics processors has been growing over the past two years, but Peakstream is the first company to actually offer a solution to create a supercomputer based on graphics cards.
Posted by: Kenneth Farmer on Tuesday September 19 2006 @ 11:40AM EDT views: 1275
ClusterMonkey.net: Fifteen years ago I wrote a short article in a now defunct parallel computing magazine (Parallelogram) entitled "How Will You Program 1000 Processors?" Back then it was a good question that had no easy answer. Today, it is still a good question that still has no easy answer. Except now it seems a bit more urgent as we step into the "multi-core" era. Indeed, when I originally wrote the article, using 1000 processors was a far off, but real possibility. Today, 1000 processors are a reality for many practitioners of HPC. As dual cores hit the server rooms, effectively doubling processor counts, many more people will be joining the 1000P club very soon.
Posted by: Kenneth Farmer on Tuesday September 19 2006 @ 11:38AM EDT views: 1261
ClusterMonkey.net: The Beowulf mailing list provides detailed discussions about issues concerning Linux HPC clusters. In this article I review some postings to the Beowulf list on clusters of bare motherboards and choosing a high-speed interconnect.
Posted by: Kenneth Farmer on Tuesday September 19 2006 @ 11:36AM EDT views: 1225
A*STAR, the Singapore government's Agency for Science, Technology and Research, and HP today announced they will collaborate on technologies to provide secure, seamless transmission of video, audio and multimedia content over future high-speed broadband and wireless networks.
Video - The Road to PetaFlop Computing
Explore the Scalable Unit concept where multiple clusters of various sizes can be rapidly built and deployed into production. This new architectural approach yields many subtle benefits to dramatically lower total cost of ownership.
White Paper - Optimized HPC Performance
Multi-core processors provide a unique set of challenges and opportunities for the HPC market. Discover MPI strategies for the Next-Generation Quad-Core Processors.
Appro and the Three National Laboratories
Appro delivers a new breed of highly scalable, dynamic, reliable and effective Linux clusters to create the next generation of supercomputers for the National Laboratories.