Supercomputing hardware and software vendors are getting impatient for the SC11 supercomputing conference in Seattle, which kicks off next week. More than a few have jumped the gun with product announcements this week, including chipmaker Intel.

No, Intel is not going to launch its "Sandy Bridge-EP" Xeon E5 processors, which are expected early next year. But the new Cluster Studio XE toolset for HPC customers will help those lucky few HPC and cloud shops that have been able to get systems this year to squeeze more performance out of their Xeon E5 clusters.

The Cluster Studio XE stack includes a slew of Intel tools for creating, tuning, and monitoring parallel applications running on x86-based parallel clusters. Intel had already been selling a set of application tools called Cluster Studio, which bundled up the chip giant's C, C++, and Fortran compilers, its rendition of the message passing interface (MPI) messaging protocol that allows server nodes to share work, and various math and multithreading libraries to goose the performance of applications.

With the XE (Extended Edition) of the HPC cluster tools, Intel is goosing the performance of the MPI library, and claims its MPI 4.0.3 stack is anywhere from 3.3 to 6.5 times as fast as the OpenMPI 1.5.4 and MVAPICH2 1.6 MPI stacks from the open source community. Benchmark tests were done on a 64-node system running 768 processes and linked by InfiniBand switches.
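
Comparisons like these typically come from point-to-point microbenchmarks run across the interconnect. As a rough illustration (this is not Intel's benchmark code, and the message size and iteration count are arbitrary choices), a minimal MPI ping-pong test looks something like this:

    /* Minimal MPI ping-pong sketch: not Intel's benchmark code, just an
     * illustration of the kind of point-to-point test these comparisons
     * rest on. Message size and iteration count are arbitrary choices. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int iters = 1000;
        const int msg_bytes = 1024;           /* 1 KB messages, arbitrary */
        char *buf = malloc(msg_bytes);
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least two ranks\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {            /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {     /* rank 1 echoes every message back */
                MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("average round trip: %.2f microseconds\n",
                   (t1 - t0) / iters * 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Because MPI is a standard interface, the same source compiles unchanged against Intel MPI, OpenMPI, or MVAPICH2 (for example with mpicc or Intel's mpiicc wrapper), which is what makes stack-versus-stack comparisons like the ones above possible.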

Intel tested the Platform Computing MPI 8.1.1 stack against the three MPI stacks listed above, only this time on an eight-node system; in this case the performance differences between Intel and Platform (which is now owned by IBM) were not huge. With the Microsoft MPI 3.2 stack on the same iron, the Intel MPI stack running on Windows servers was anywhere from 2.17 to 2.74 times faster than the Microsoft MPI.

Read full story at theregister.co.uk


Flash and other storage-class memory (SCM) technologies can revolutionize the data center and cloud computing, but they have significant performance, reliability, and serviceability problems. These problems compound in large-scale deployments of SSDs, requiring a cohesive Tier-0 framework for a systematic build-out. Virident Systems products solve these problems.

Click here to access full white paper. Registration required.

Games That Matter

One such popular defense tool is a virtual world game called Boarders Ahoy! Developed by NATO's Allied Command Transformation, the training simulator prepares sailors for boarding a ship as part of a military inspection process. Users can practice interviewing the crew of the virtual ship, checking identification and locating the game's 250 searchable objects.

The United States Secret Service is also leveraging the training power of virtual worlds to prepare for national threats. "Tiny Town" is a small-scale model in use for the last four decades, which helps officials plan for such emergencies as chemical threats and enemy attacks. The model now has a 3D computer-generated counterpart, called Virtual Tiny Town. The software's advanced modeling capabilities can realistically simulate a variety of possible real-life threats, such as chemical, biological or radiological attacks, armed assaults, or suicide bombers. Planned upgrades will enhance the program's life-saving measures by adding health impacts and crowd behaviors to the model.

Academic institutions are also contributing to the arsenal of public safety measures. The recently launched Center for Advanced Modeling in the Social, Behavioral and Health Sciences at Johns Hopkins University specializes in agent-based simulation, which predicts how individuals will react in emergency situations. The center draws from diverse disciplines, such as public safety, sociology, economics and supercomputing, to fine-tune its virtual model.
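
Agent-based simulation builds its forecasts from many simple agents following local rules, with crowd-level behavior emerging from their interactions. The toy sketch below is not the center's model; the corridor length, hesitation rate, and agent count are made up for illustration. It simply marches agents toward an exit one cell at a time and prints the resulting evacuation curve:

    /* Toy agent-based evacuation sketch: not the Johns Hopkins center's
     * model, just an illustration of the technique. Each agent follows a
     * local rule (step toward the exit, with occasional hesitation); the
     * evacuation curve emerges from those individual decisions. */
    #include <stdio.h>
    #include <stdlib.h>

    #define AGENTS   100
    #define CORRIDOR  50    /* exit sits at position 0; scale is arbitrary */

    int main(void)
    {
        int pos[AGENTS];
        int escaped = 0;

        srand(42);
        for (int i = 0; i < AGENTS; i++)
            pos[i] = 1 + rand() % CORRIDOR;   /* random starting positions */

        for (int step = 1; escaped < AGENTS; step++) {
            for (int i = 0; i < AGENTS; i++) {
                if (pos[i] == 0)
                    continue;                 /* already out of the building */
                if (rand() % 10 == 0)
                    continue;                 /* 10% chance of hesitating */
                pos[i]--;                     /* move one cell toward the exit */
                if (pos[i] == 0)
                    escaped++;
            }
            printf("step %3d: %3d of %d agents evacuated\n",
                   step, escaped, AGENTS);
        }
        return 0;
    }

Real models replace these toy rules with empirically grounded behaviors, but the overall structure of agents, local rules, and repeated time steps stays the same.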

Read full story at Game Forward

SGI, a global leader in HPC and datacenter solutions, today announced the immediate availability of Altix ICE 8400, the next generation of the award-winning Altix ICE scale-out high performance computing (HPC) blade platform. SGI also announced today it has submitted world-record-breaking SPEC benchmark results with Altix ICE 8400, demonstrating the platform's readiness for multi-petascale supercomputers.

Altix ICE 8400 contains significant enhancements over prior Altix ICE generations, including Quad Data Rate (QDR) InfiniBand interconnect throughout. SGI's elegantly designed integrated backplane delivers up to three times the link-to-node bandwidth of competitive QDR InfiniBand clusters, maximizing performance where most job traffic occurs. Altix ICE 8400 packs up to 128 processors (1,536 cores) per cabinet and accommodates 130W CPU sockets. Three compute blade configurations with on-board Mellanox ConnectX-2 InfiniBand HCAs are also supported: single-port, dual-port, or two single-port chipsets.

Altix ICE 8400, with its innovative blade design, easily and affordably scales to as many as 65,536 compute nodes with an integrated single- or dual-plane InfiniBand backplane interconnect. The open x86 architecture makes it equally simple to deploy commercial, open source or custom applications on completely unmodified Novell SUSE or Red Hat Linux operating systems.

Altix ICE 8400 easily meets the needs of the world's largest supercomputing deployments. Building on its design win at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center, SGI is helping the Pleiades supercomputer, the world's largest InfiniBand cluster, scale to nearly a petaflop with an additional 32 cabinets of Altix ICE 8400. The system fully leverages SGI's hypercube topology to enable seamless cabinet-level upgrades without any production downtime, saving millions of core hours in the process.

(Full version of this article can be obtained from HPCwire's web pages)
The last two years have seen a large number of file systems added to the kernel, many of them maturing to the point where they are useful, reliable, and, in some cases, running in production. In the run-up to the 2.6.34 kernel, Linus recently added the Ceph client. What is unique about Ceph is that it is a distributed parallel file system promising scalability and performance, something that NFS lacks.
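
For a sense of what the in-kernel client means in practice: once the monitors and storage daemons of a Ceph cluster are running, a client mounts the file system much like any other. The sketch below calls mount(2) directly rather than going through the usual mount(8) command; the monitor address, mount point, and authentication option are placeholders, not details from this article.

    /* Minimal sketch of mounting a Ceph file system through the in-kernel
     * client, calling mount(2) directly rather than the usual mount(8)
     * command. The monitor address, mount point, and "name=" option are
     * placeholders, not details from the article. Requires root. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        const char *src  = "192.168.0.1:6789:/";   /* monitor address plus root path */
        const char *dst  = "/mnt/ceph";            /* mount point, must already exist */
        const char *opts = "name=admin";           /* cephx user, if auth is enabled */

        if (mount(src, dst, "ceph", 0, opts) != 0) {
            perror("mount");
            return 1;
        }
        printf("Ceph file system mounted at %s\n", dst);
        return 0;
    }

The equivalent command-line form is mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin; either way, the work happens in the kernel client rather than in a user-space FUSE daemon.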


High-level view of Ceph

One might ask about the origin of the name Ceph, since it is somewhat unusual. Ceph is really short for cephalopod, the class of molluscs to which the octopus belongs. So it’s really short for octopus, sort of. If you want more detail, take a look at the Wikipedia article about Ceph. Now that the name has been partially explained, let’s look at the file system.

Ceph was started by Sage Weil for his PhD dissertation at the University of California, Santa Cruz, in the Storage Systems Research Center of the Jack Baskin School of Engineering. The lab is funded by the DOE/NNSA in collaboration with LLNL (Lawrence Livermore National Laboratory), LANL (Los Alamos National Laboratory), and Sandia National Laboratories. He graduated in the fall of 2007 and has continued developing Ceph. As mentioned previously, those efforts have been rewarded with the integration of the Ceph client into the upcoming 2.6.34 kernel.

The design goals of Ceph are to create a POSIX file system (or close to POSIX) that is scalable and reliable and delivers very good performance. To reach these goals Ceph has the following major features:
  • It is object-based
  • It decouples metadata and data (many parallel file systems do this as well)
  • It uses a dynamic distributed metadata approach
These three features and how they are implemented are at the core of Ceph (more on that in the next section).

However, probably the most fundamental assumption in the design of Ceph is that large-scale storage systems are dynamic and that failures are guaranteed. The first part of the assumption, that storage systems are dynamic, means that storage hardware is added and removed and that the workloads on the system are changing. The second part presumes there will be hardware failures, so the file system needs to be adaptable and resilient.

(Full version of this article can be obtained from Linux Magazine's web pages)