Posted by: Cluster Resources on Wednesday August 09 2006 @ 10:51AM EDT views: 1258
Cluster Resources, Inc., a leading provider of cluster, grid and utility computing software, announced today that the Department of Energy's National Nuclear Security Administration's Advanced Simulation and Computing Program has selected Cluster Resources' Moab workload and resource management software as a standard for use across NNSA's high-performance computing systems.
Posted by: Ken Farmer on Tuesday August 08 2006 @ 08:02PM EDT views: 1249
HP is a leading global provider of products, technologies, solutions and services to consumers and businesses. The company's offerings span IT infrastructure, personal computing and access devices, global services, and imaging and printing. Our $4 billion annual R&D investment fuels the invention of products, solutions and new technologies so we can better serve customers and enter new markets. We invent, engineer and deliver technology solutions that drive business value, create social value and improve the lives of our customers.
Posted by: Maria McLaughlin, Appro on Tuesday August 08 2006 @ 02:29PM EDT views: 1268
Next-Generation Appro HyperBlade, 1U and 2U HyperServers Boost Power Efficiency, Consolidate and Simplify Datacenter Performance
Posted by: Jeffrey Swartz on Tuesday August 08 2006 @ 10:17AM EDT views: 1089
SAN DIEGO--(BUSINESS WIRE)--Aug. 8, 2006--Storix Inc. announced today that it has signed a partnership agreement with Clark Data Systems (CDS), a computer network development, service and support company based in Quakertown, Penn. CDS will bundle its routers with Storix's System Backup Administrator (SBAdmin) backup and disaster recovery software for Linux and AIX.
Posted by: Kenneth Farmer on Tuesday August 08 2006 @ 09:02AM EDT views: 1212
ClusterMonkey.net: ...no good discussion of cluster administration can continue without coming to that thorniest of issues, file systems and I/O.
Posted by: Ken Farmer on Monday August 07 2006 @ 07:28PM EDT views: 1101
LLNL Awards the Peloton Project to Appro’s SuperComputing Scalable Units based on Quad Socket, Dual-Core AMD Opteron™ Processors
Posted by: Ken Farmer on Monday August 07 2006 @ 05:49PM EDT views: 1109
LinuxPlanet: With its new round of "Cool Blue" PC servers, rolled out last week, IBM is starting to push HPC (high-performance computing) beyond the scientific-technical niche and into the mainstream, particularly among SMBs (small to mid-sized businesses). More business-oriented software solutions are becoming available for Linux and Windows editions of the servers alike, but a set of built-in innovations for making the most of electrical power could act as an even stronger draw for the five new servers.
Posted by: Ken Farmer on Monday August 07 2006 @ 04:27PM EDT views: 1133
FCW: The Energy Department’s Oak Ridge National Laboratory and Cray announced a $200 million deal in June to complete the world’s most powerful supercomputer in 2008.
Posted by: Ken Farmer on Monday August 07 2006 @ 04:24PM EDT views: 1113
GRIDToday: If you asked many industry/economic analysts around the world to name the most important variable in future economic growth off the top of their heads, the knee-jerk answer today would be the price of oil. Of course the cost of energy will be a major factor in all of our futures, but C-level industry executives will tell you that their most important resources are the gooey blobs nestled in the skulls of their wonderful employees.
Posted by: Steve Jones on Sunday July 30 2006 @ 08:38PM EDT views: 2188
The focus this year is software development and research computing, bringing together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High Performance Computing.
Posted by: Ken Farmer on Friday July 21 2006 @ 10:40PM EDT views: 1711
ClusterMonkey.net: It is a common practice to have development and test servers for each production server, so that you can experiment with changes without the fear of breaking anything important, but this is usually not feasible with clusters. So how do you try that new version of your favorite program before committing it to the production cluster? A cheap and convenient possibility is to build a virtual cluster.
Posted by: Ken Farmer on Friday July 21 2006 @ 10:39PM EDT views: 1636
ClusterMonkey.net: Now that we know how to identify the parallel parts of our program, the question is what to do with this knowledge. In other words, how do you write a parallel program? To answer this question, we will discuss what the structure of a parallel program may look like. Programs can be organized in different ways. We have already discussed the SPMD (Single Program Multiple Data) and MPMD (Multiple Programs Multiple Data) models. SPMD and MPMD describe how a program looks from the point of view of the cluster. Note that when using an MPMD model with MPI, an "app" or "procgroup" file is needed to start the different programs on the cluster nodes. Let's see what programs look like from the implementation standpoint.
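For illustration, a minimal MPICH-style "procgroup" file for an MPMD run might look like the sketch below. The hostnames (node01, node02) and binary paths are hypothetical; the exact format depends on your MPI implementation.

```
# Hypothetical procgroup file (MPICH ch_p4 style): a "master" process
# on the local host plus two "worker" processes on remote nodes.
# Format of each remote line: <hostname> <num_procs> <path_to_executable>
local 0
node01 1 /home/user/bin/worker
node02 1 /home/user/bin/worker
```

With MPICH's ch_p4 device, such a file would typically be passed as `mpirun -p4pg procgroup ./master`; other MPI implementations provide an equivalent "appfile" mechanism for launching different executables on different nodes.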
Posted by: Ken Farmer on Friday July 21 2006 @ 10:37PM EDT views: 1777
HPCwire: During our coverage of the High Performance Computing and Communication conference in March, HPCwire conducted an interview with Douglass Post, chief scientist of the DoD High Performance Computing Modernization Program, where he talked about the major challenges currently facing high performance computing.
Video - The Road to PetaFlop Computing
Explore the Scalable Unit concept, where multiple clusters of various sizes can be rapidly built and deployed into production. This new architectural approach yields many subtle benefits that dramatically lower total cost of ownership.
White Paper - Optimized HPC Performance
Multi-core processors provide a unique set of challenges and opportunities for the HPC market. Discover MPI strategies for the Next-Generation Quad-Core Processors.
Appro and the Three National Laboratories
Appro delivers a new breed of highly scalable, dynamic, reliable and effective Linux clusters to create the next generation of supercomputers for the National Laboratories.