The never-ending global push to build ever faster supercomputers took another step today with Cray's announcement that it was awarded a contract from the U.S. Department of Energy's Oak Ridge National Laboratory to build a system that could potentially deliver up to 20 petaflops of peak performance, or 20 quadrillion floating-point operations per second.
Cray said the contract is worth more than $97 million.
This new system, called Titan, is due to be completed by 2013. It is a major upgrade of Oak Ridge's Jaguar supercomputer, also a Cray system.
Oak Ridge says Jaguar is the fastest supercomputer in the U.S. with a peak performance of 2.33 petaflops.
Titan will be built using a mix of GPUs and CPUs. Each compute node on the Jaguar system has two AMD Opteron processors; the Titan project involves, in part, removing one Opteron processor from each node and replacing it with an Nvidia GPU.
Sumit Gupta, manager of the Tesla GPU business at Nvidia, said the Oak Ridge system may see performance "well north of 20 petaflops" if it is built out to its full capability. The system will have as many as 18,000 GPUs.
"We see this as a step toward the next kind of large system, which is obviously going to be 100 petaflops, moving toward exascale," said Gupta.
A 20-petaflop system is also being built by IBM for the Lawrence Livermore National Laboratory. The cost of building that system, dubbed Sequoia, has not been disclosed. Sequoia is slated to be completed in 2012.
Over the past several years, systems makers have begun turning to GPUs to improve supercomputer performance. GPUs, sometimes called co-processors, can boost the performance of simulations.
GPUs have largely been in an experimental phase so far in supercomputing, said Steve Conway, a high performance computing analyst at IDC.
But, he added, the Oak Ridge system "provides an extra measure of confidence about their ability to exploit GPUs."
Titan will be used by Oak Ridge researchers for "increasing the realism of nuclear simulations," and "improving the predictive power of climate simulations." It will also be used to develop and understand "novel nanomaterials for batteries, electronics and other uses," the lab said.
The limits to building supercomputers include their cost, the amount of power they need, and the ability of applications to scale at such a large size.
Conway said there are only six applications today that can run at a petaflop or more because of the difficulty of scaling software across thousands of processors.
China and Japan have both talked about building 20-petaflop systems.
The fastest system in the world today is Japan's K Computer, which runs nearly 69,000 eight-core Sparc chips and is capable of 8 petaflops.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His email address is firstname.lastname@example.org.