Evergrid, making Intel's future manycores work in practice, and the rest of the week

In this week's enterprise HPC summary: Evergrid introduces pain-free checkpointing, Intel works on making its future manycores practical, grid vendors make the move to virtualization, new blades abound, and much more.

Here’s a collection of highlights, selected totally subjectively, from this week’s enterprise HPC news stream as reported at insideHPC.com.

  • Supermicro launches new blades, denser than HP and Sun
  • IBM launches new BladeCenter S for small and medium-sized businesses
  • InfoWorld review on Sun Fire X4500 server: 48 drives in 4U
  • Gigaspaces launches version 6 of their eXtreme Application Platform; bindings for Java, .Net, others
  • Italy’s Borsa Italiana joins some of the world’s largest stock exchanges with move to AMD, HP
  • Intel announces Kittson, and a jump from 65nm to 32nm for Poulson

Evergrid launches new job management tools, partners with Platform

Evergrid, which released its Cluster Availability Management Suite (CAMS) this week, has also announced a partnership with Platform Computing to integrate its Availability Services (AvS) with Platform's LSF scheduler.

Evergrid provides transparent fault tolerance using an abstraction layer that loads between the operating system (OS) and the application. Without modifying either one, CAMS/AvS periodically captures the collective state of the application across the entire infrastructure while the application continues processing. Because it records the application's state along with the relevant OS and system state, Evergrid can checkpoint and then resume from failures or interruptions quickly and with minimal overhead. Even the failure of multiple servers or software systems does not prevent an application from resuming processing from a checkpoint.
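Evergrid's approach is transparent and cluster-wide, but the basic checkpoint/resume contract is easy to see in miniature. The sketch below is a minimal application-level analogue in Python, not Evergrid's mechanism; the file name, interval, and workload are arbitrary stand-ins. A long-running job periodically persists its state and, after a crash or interruption, resumes from the last checkpoint instead of starting over.

```python
import os
import pickle
import time

CHECKPOINT = "job.ckpt"      # hypothetical checkpoint file
CHECKPOINT_INTERVAL = 60     # seconds between checkpoints (arbitrary)

def load_state():
    """Resume from the last checkpoint if one exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "result": 0}

def save_state(state):
    """Write the checkpoint atomically so a crash cannot leave it half-written."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def run_job(total_steps=1_000_000):
    state = load_state()
    last_ckpt = time.time()
    while state["step"] < total_steps:
        state["result"] += state["step"]   # stand-in for real work
        state["step"] += 1
        if time.time() - last_ckpt > CHECKPOINT_INTERVAL:
            save_state(state)
            last_ckpt = time.time()
    save_state(state)
    return state["result"]

if __name__ == "__main__":
    print(run_job())
```

The point of CAMS/AvS, as described above, is that it delivers this resume-from-checkpoint behavior without any such code in the application, by capturing process and system state from underneath it across all the nodes a job touches.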

This is obviously good for long-running user applications, but it also gives data center operators something very powerful: preemptive scheduling. Because jobs can be suspended and returned to execution on command, centers can be far more creative with batch scheduling policies without sacrificing high utilization.
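A very rough single-host analogue of preemptive scheduling, assuming a Unix system (the job scripts named below are hypothetical), is to pause a low-priority job with SIGSTOP, run an urgent one, and then resume the paused job with SIGCONT. Checkpoint-based preemption of the kind Evergrid describes goes further: the suspended job's state is persisted, so it survives outages and can in principle resume on different hardware.

```python
import os
import signal
import subprocess

def launch(cmd):
    """Start a batch job and return its process handle."""
    return subprocess.Popen(cmd)

def preempt(proc):
    """Suspend a running job so its CPU time frees up (in place, not checkpointed)."""
    os.kill(proc.pid, signal.SIGSTOP)

def resume(proc):
    """Return a suspended job to execution."""
    os.kill(proc.pid, signal.SIGCONT)

if __name__ == "__main__":
    low = launch(["python", "long_simulation.py"])   # hypothetical low-priority job
    high = launch(["python", "urgent_job.py"])       # hypothetical high-priority job

    preempt(low)   # give the urgent job the machine
    high.wait()    # run it to completion
    resume(low)    # then let the long job pick up where it left off
    low.wait()
```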

This is great news for enterprise users who want bigger machines, especially as virtualization continues to consolidate services onto single physical boxes, but who are plagued by the instabilities that come with that scale.

Making Intel’s manycores work in practice

An insideHPC.com reader pointed me to this story over at c|net’s News.com discussing the practical steps that Intel is spearheading to make its future manycore chips work in practice.

Recall that earlier this year Intel introduced a much-talked-about prototype 80-core chip. According to Jerry Bautista, co-director of Intel's Tera-scale Computing Research Program, production variants of its manycore chips are likely to be heterogeneous.

A 64-core chip, for instance, might contain 42 x86 cores, 18 accelerators and four embedded graphics cores.

The company is also working on the software infrastructure that will be key to making the chips practical for the broader computing market. While the scientific HPC community is fairly skilled at the basics of writing parallel code, your average Windows application coder isn't, and that's a problem.

One idea, proposed in a paper released this month at the Programming Language Design and Implementation Conference in San Diego, involves cloaking all of the cores in a heterogeneous multicore chip in a metaphorical exoskeleton so that all of the cores look like a series of conventional x86 cores, or even just one big core.

…A paper at the International Symposium on Computer Architecture, also in San Diego, details a hardware scheduler that will split up computing jobs among various cores on a chip.

…Intel is also tinkering with ways to let multicore chips share caches, pools of memory embedded in processors for rapid data access.

Grid vendor United Devices moving away from HPTC, toward virtualization

Internetnews.com thinks it’s spotted a trend: enterprise grid software manufacturers moving into data center virtualization support.

Like fellow commercial grid computing pioneers DataSynapse and Platform Computing, [United Devices] has been moving away from its grid roots and into the data center virtualization market.

United Devices' Rouse sees grid environments as complementing existing virtualization offerings, giving them more stability and flexibility through technology that virtualization vendors haven't traditionally focused on.

Grid, says Rouse, can create pools of virtual machines, set policies, provide automation and help meet service-level agreements (SLAs) in virtualized environments.

And UD is putting its money where its mouth is:

UD recently laid off a significant portion of its HPC sales staff in what Rouse called a “decision to organize aggressively around” the data center virtualization market.

Last year, one-third of UD’s sales were in the data center, a number that Rouse expects to exceed 50 percent this year. “The lifetime revenue expectations in the data center application market are so much higher than HPC,” he said.

John West summarizes the HPC news headlines every day at insideHPC.com, and writes on leadership and career issues for technology professionals at InfoWorld. You can contact him at john@insidehpc.com.
