The year-long compile, Harvard's HPC in biomed summit, and EnginFrame is revved

In today's enterprise HPC news summary: Harvard announces an invitation-only biomedical HPC summit, the EnginFrame grid portal gets a major new release, and Dan Reed asks why we expect application compilation to take seconds rather than months.

Here’s a collection of highlights, selected totally subjectively, from the recent enterprise HPC news stream as reported at insideHPC.com.

Harvard biomedical HPC summit

If you’re in an enterprise supporting HPC in medicine and the life sciences, you’ll want to try to get yourself invited to Harvard’s upcoming HPC summit:

Harvard Medical School’s first annual Biomedical High Performance Computing Leadership Summit. This invite [sic] only summit will bring together a small group of the IT leaders who deploy and maintain high performance computing infrastructure for biomedical research in major academic and private sector research organizations.

The meeting is in October. You can get more info from the story at Supercomputing Online.

EnginFrame 5.0 released

NICE has announced the release of version 5.0 of its EnginFrame software. EnginFrame is a grid portal that provides user-friendly, application-oriented HPC job submission, control, and monitoring.

NICE isn’t well known in the US, but the company has been around in Europe for 10 years or so and has a long list of major customers (Airbus, Audi, BMW, Bridgestone, British Gas, Delphi, Ferrari, FIAT, MTU, Northrop Grumman, Procter & Gamble, Raytheon, Schlumberger, Statoil, Toyota, TOTAL, TRW, STMicro, and so on). (More on this enterprise HPC news item)

Taking the long view of application compilation

With 100-core commodity processors on the horizon for the end of the decade, HPC luminary Dan Reed points out that as machines get more complex, we may need to rely on the machines themselves to get our software to run well.

What I am really arguing is that we need to rethink aggressive machine optimization, virtualization and abstraction. What’s wrong with devoting a teraflop-year to large-scale code optimization? I don’t just mean peephole optimization or interprocedural analysis. Think about genetic programming, evolutionary algorithms, feedback-directed optimization, multiple objective code optimization, redundancy for fault tolerance and other techniques that assemble functionality from building blocks. Why have we come to believe that compilation times should be measurable with a stopwatch rather than a sundial?

A great question. I think it’s worth pursuing many paths to this goal; a diverse investment portfolio is the best way to ensure that we get a workable answer to the problem of commodity manycore software.
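
To make Reed's idea a little more concrete, here is a minimal sketch, in Python, of the kind of evolutionary, feedback-directed search he alludes to. It breeds sets of compiler flags, keeps whichever sets produce the fastest binary, and mutates the survivors for the next generation. The benchmark kernel, the flag pool, and the function names are all my own illustrative choices (and it assumes gcc is on the path); a real teraflop-year effort would search over program transformations, not just flags, but the feedback loop looks the same.

    import random
    import subprocess
    import tempfile
    import time
    from pathlib import Path

    # Hypothetical compute-bound benchmark; any representative kernel would do.
    BENCH_C = r"""
    #include <stdio.h>
    int main(void) {
        double s = 0.0;
        for (long i = 1; i < 50000000; i++) s += 1.0 / ((double)i * (double)i);
        printf("%f\n", s);
        return 0;
    }
    """

    # Illustrative pool of GCC flags to search over (assumes gcc is installed).
    FLAGS = ["-O2", "-O3", "-funroll-loops", "-ffast-math",
             "-march=native", "-flto", "-fomit-frame-pointer"]

    def measure(flags, workdir):
        """Compile the benchmark with the given flags; return its wall-clock run time."""
        src, exe = workdir / "bench.c", workdir / "bench"
        src.write_text(BENCH_C)
        subprocess.run(["gcc", *flags, str(src), "-o", str(exe)], check=True)
        start = time.perf_counter()
        subprocess.run([str(exe)], check=True, capture_output=True)
        return time.perf_counter() - start

    def evolve(generations=5, pop_size=8):
        """Keep the fastest half of each generation; mutate survivors to refill it."""
        with tempfile.TemporaryDirectory() as tmp:
            workdir = Path(tmp)
            population = [random.sample(FLAGS, random.randint(1, len(FLAGS)))
                          for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(population, key=lambda f: measure(f, workdir))
                survivors = ranked[: pop_size // 2]
                children = []
                for parent in survivors:
                    child = list(parent)
                    flag = random.choice(FLAGS)
                    # Mutation: toggle one flag in or out of the parent's flag set.
                    if flag in child:
                        child.remove(flag)
                    else:
                        child.append(flag)
                    children.append(child or [random.choice(FLAGS)])
                population = survivors + children
            return sorted(population, key=lambda f: measure(f, workdir))[0]

    if __name__ == "__main__":
        print("Best flag set found:", " ".join(evolve()))

The point of the sketch is the shape of the process, not the flags themselves: every candidate is judged by actually compiling and running it, which is exactly the kind of optimization budget that makes sense measured with a sundial rather than a stopwatch.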

Dan’s post also summarizes the latest DOE workshop that looked at the challenges of computing at the very highest end. This matters for those of you focused on what’s coming to the enterprise in the next 5-10 years. (More on this enterprise HPC news item)

John West summarizes the HPC news headlines every day at insideHPC.com, and writes on leadership and career issues for technology professionals at InfoWorld. You can contact him at john@insidehpc.com.
