I'm spending the weekend and Monday at SC05 in Seattle, taking in the latest in supercomputing concepts. Today, I've been focusing mostly on interconnects. The cluster interconnect talk by DK Panda, which focused on InfiniBand usage in HPC, proved quite interesting.
One interesting item came out of an off-the-cuff discussion I had with another conference attendee. We were talking about the usefulness of InfiniBand in smaller clusters given the rise of affordable gigE interconnects. Since commodity servers already ship with gigE NICs, for some smaller HPC clusters it can make more sense to build on gigE than to add InfiniBand or Myrinet.
While discussing this, he noted that his lab had tested 40 commodity gigabit switches for performance in HPC applications, and the standout was the NetGear GSM7248 -- at least I believe it was that model. Apparently, to reduce cost in the 7248, NetGear designed the switch around just two 24-port gigE ASICs, rather than the 4- or 8-port ASICs found in most other switches. Because ports within each 24-port block never have to traverse the backplane link between ASICs, the HPC latency and performance numbers were apparently outstanding, as long as traffic stayed within the first or second group of 24 ports. Crossing between the two ASICs introduced a significant latency penalty. At roughly $1,300, it became the switch they used to build a good-sized HPC cluster on gigE.
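A quick way to sanity-check that kind of split yourself is a simple round-trip latency test between two nodes: run it once for a pair cabled into the same 24-port block, then again for a pair that crosses blocks, and compare. Below is a minimal sketch in Python rather than anything the lab actually used; the hostname, port, and iteration counts are placeholders, and for serious numbers you'd reach for an MPI ping-pong benchmark instead.

```python
#!/usr/bin/env python
"""Rough TCP round-trip latency check between two cluster nodes.

Start with --server on one node, then run it without --server on another,
pointing --host at the server. Compare a node pair plugged into the same
24-port block against a pair that straddles the two blocks. Hostname,
port, and iteration counts are placeholders, not the lab's setup.
"""
import argparse
import socket
import time

MSG = b"x" * 64      # small payload so the test is latency-bound, not bandwidth-bound
WARMUP = 100
ITERATIONS = 10000


def recv_exact(sock, n):
    """Read exactly n bytes, or raise if the peer closes the connection."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf


def run_server(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            while True:
                try:
                    data = recv_exact(conn, len(MSG))
                except ConnectionError:
                    break            # client finished and closed the socket
                conn.sendall(data)   # echo straight back


def run_client(host, port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(WARMUP):
            s.sendall(MSG)
            recv_exact(s, len(MSG))
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            s.sendall(MSG)
            recv_exact(s, len(MSG))
        elapsed = time.perf_counter() - start
        print("average round trip: %.1f us" % (elapsed / ITERATIONS * 1e6))


if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--server", action="store_true", help="run the echo side")
    p.add_argument("--host", default="node01")   # placeholder hostname
    p.add_argument("--port", type=int, default=5000)
    args = p.parse_args()
    if args.server:
        run_server(args.port)
    else:
        run_client(args.host, args.port)
```

If the switch really does keep each group of 24 ports on its own ASIC, the intra-block pair should show a noticeably lower average round trip than the pair that crosses the backplane.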