Superconducting computing


How about petaflops performance to keep that enterprise really humming? Superconducting circuits -- which have zero electrical resistance and therefore dissipate virtually no heat -- would certainly free you from any thermal limits on clock frequencies. But who has the funds to cool these circuits with the liquid helium they require? That assumes, of course, that someone devises the fiendishly complex schemes needed to interface this circuitry with the room-temperature components of a working computer.

Of all the technologies proposed in the past 50 years, superconducting computing stands out as "psychoceramic." IBM’s program, started in the late 1960s, was cancelled by the early 1980s, and the attempt by Japan's Ministry of International Trade and Industry to develop a superconducting mainframe was dropped in the mid-1990s. Both achieved clock frequencies of only a few gigahertz.

Yet the dream persists in the form of the HTMT (Hybrid Technology Multi-Threaded) program, which takes advantage of superconducting rapid single-flux-quantum (RSFQ) logic and should eventually scale to about 100 GHz. Its proposed NUMA (non-uniform memory access) architecture uses superconducting processors and data buffers, cryo-SRAM (static RAM) semiconductor buffers, semiconductor DRAM main memory, and optical holographic storage in its quest for petaflops performance. Its chief obstacle? A clock cycle shorter than the time it takes a signal to cross an entire chip.
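A quick back-of-the-envelope calculation shows why that obstacle is real. The sketch below assumes two illustrative figures not given in the article -- on-chip signals traveling at roughly half the speed of light, and a chip about 10 mm across -- and compares the chip-traversal time against a 10-picosecond clock period at 100 GHz:

```python
# Rough check of the HTMT clocking obstacle: at 100 GHz, can a
# signal cross the chip within one clock cycle?
# ASSUMPTIONS (illustrative, not from the article): on-chip signals
# propagate at about half the speed of light; the chip is ~10 mm wide.

C = 3.0e8              # speed of light, m/s
signal_speed = C / 2   # assumed on-chip propagation speed, m/s
chip_size = 0.010      # assumed chip edge length, m

clock_hz = 100e9                        # 100 GHz target from the article
cycle_s = 1.0 / clock_hz                # clock period in seconds
traversal_s = chip_size / signal_speed  # time for a signal to cross the chip

cycles_to_cross = traversal_s / cycle_s
print(f"Clock period:        {cycle_s * 1e12:.0f} ps")
print(f"Chip traversal time: {traversal_s * 1e12:.1f} ps")
print(f"Cycles to cross:     {cycles_to_cross:.1f}")
```

Under these assumptions a signal needs several clock cycles just to traverse the die, so the design cannot treat the chip as a single synchronous domain.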

So, unless you're the National Security Agency, which has asked for $400 million to build an HTMT-based prototype, don't hold your breath waiting for superconducting's benefits. In fact, the expected long-term impact of superconducting computing on the enterprise hovers near absolute zero.

-- Martin Heller

How do you see the long-term prospects of superconducting computing shaping up?
