Here’s a collection of highlights, selected totally subjectively, from the recent enterprise HPC news stream as reported at insideHPC.com.
Too busy to keep up? Make your commute productive and subscribe to the Weekly Takeout, insideHPC.com’s weekly podcast summary of the HPC news week in review.
Detailed analysis on what Intel’s CSI may look like
You’ve no doubt heard of Intel’s Common System Interconnect by now. This is Intel’s shot at moving its own architectures away from the front-side bus, in the way that AMD did with HyperTransport.
The Reg is reporting on the work of analyst David Kanter of Real World Technologies, who has dug through patents and interviewed engineers to put together a detailed report on what he thinks CSI will look like. We won’t get to hear Intel’s official plans for the technology until IDF next month.
A taste from Kanter’s report
Unlike the front-side bus, CSI is a cleanly defined, layered network fabric used to communicate between various agents. These ‘agents’ may be microprocessors, coprocessors, FPGAs, chipsets, or generally any device with a CSI port…. Initial CSI implementations in Intel’s 65nm and 45nm high performance CMOS processes target 4.8-6.4GT/s operation, thus providing 12-16GB/s of bandwidth in each direction and 24-32GB/s for each link.
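As a back-of-the-envelope check, the quoted figures are consistent with a 20-bit-wide link in each direction. The 20-bit width is my assumption for the sketch, not something stated in the excerpt:

```python
# Back-of-envelope check on the CSI bandwidth figures quoted above.
# ASSUMPTION: a 20-bit-wide unidirectional link (not stated in the excerpt).
LINK_WIDTH_BITS = 20

def bandwidth_gb_s(transfer_rate_gt_s, width_bits=LINK_WIDTH_BITS):
    """Bytes moved per second in one direction, in GB/s."""
    return transfer_rate_gt_s * width_bits / 8

for gt in (4.8, 6.4):
    one_way = bandwidth_gb_s(gt)
    print(f"{gt} GT/s -> {one_way:g} GB/s per direction, {2 * one_way:g} GB/s per link")
# 4.8 GT/s -> 12 GB/s per direction, 24 GB/s per link
# 6.4 GT/s -> 16 GB/s per direction, 32 GB/s per link
```

Under that assumed width, the 4.8-6.4 GT/s range reproduces exactly the 12-16 GB/s per-direction and 24-32 GB/s per-link numbers from Kanter’s report.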
The 411: Woven Systems
I’ve got a company profile of 10GbE startup Woven Systems over at insideHPC.com:
Why (you care): 10 GbE has not been used in large clusters because 10 GbE switches are expensive, and building a cluster of any size would entail adding Layer 3 switches into the mix, which introduces severe latency. The adaptive routing bit also offers a lot of promise for communication-intensive applications over statically-routed solutions like IB (as shown in the Sandia results: text here, informative graph here). Woven claims its solution is 1/5 the power, cost, and rack space of existing solutions.
Get the 411 on Woven Systems.
Purdue to study undergraduate parallel programming
As we look forward to the challenge of shifting commodity software to make good use of multicore chips, an important step (one we are already taking too late) is to make sure that programmers are ready to hit the ground running when they exit college.
Purdue University has announced they are part of a 3-year NSF-funded effort to study when and how best to introduce the concepts of parallel programming to undergraduates. (More on this enterprise HPC news item)
Coming soon: 9,200 core Windows CCS cluster
Actually, the cluster is a dual-boot Linux/CCS cluster, which makes it even more interesting than the headline suggests. The machine is reported to have 1,151 Dell servers with dual-socket Barcelonas, for a total of over 9,200 cores, and will be housed at the University of Nebraska, Omaha. Why do you care? Windows CCS needs more testing at scale if it's to become a rock-solid cluster OS for the enterprise.
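The core count checks out if each of the dual-socket servers carries two quad-core Barcelona parts (Barcelona being AMD's quad-core Opteron):

```python
# Sanity check on the reported cluster size: dual-socket servers,
# quad-core Barcelona (AMD Opteron) in each socket.
servers = 1151
sockets_per_server = 2
cores_per_socket = 4  # Barcelona is a quad-core part

total_cores = servers * sockets_per_server * cores_per_socket
print(total_cores)  # 9208 -- i.e., "over 9,200 cores"
```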
(More on this HPC news item)