The world’s most powerful computers
“The industry has more than delivered on Moore’s law,” explains Cox from his office at the university’s Computational Engineering Design Research Group. “But underlying Moore’s law is an incredible price-performance story. The explosion in high-performance computing has been driven by commoditisation, using the kinds of machines that your readers drool over for gaming, word processing – we use those same commodity machines to do supercomputing.
“My research in the mid-1990s led to the University of Southampton being one of the first to install a commodity-based machine, meaning it was built out of desktops, ordinary machines that you could buy down at your local shop. Generally speaking, the whole industry has moved now – with one or two exceptions – to the use of this sort of commodity technology.”
Power is nothing if nobody uses it
Such supercomputers, known as clusters, provide power to their users by distributing a task across many processors working in parallel. The concept isn’t new: supercomputers have used multiple CPUs for years, and the idea has more recently been taken to a new level by distributed-computing projects such as Folding@home, which harness unused CPU cycles from home computers over the internet.
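The divide-and-combine pattern that clusters rely on can be sketched in miniature on a single machine. The sketch below is purely illustrative: the worker count, the chunking scheme and the sum-of-squares workload are assumptions for the example, not details of any system mentioned in the article.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker processes one slice of the task independently,
    # just as each node in a cluster works on its share of the job.
    return sum(x * x for x in chunk)

def cluster_style_sum(data, workers=4):
    # Split the job into roughly equal chunks, one per worker,
    # then combine the partial results into the final answer.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(cluster_style_sum(data))
```

On a real cluster the "workers" are whole machines connected by a network rather than processes sharing one CPU, but the principle of splitting a task and merging the partial results is the same.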
The University of Southampton’s current supercomputer, named Iridis 3 after Iris, the Greek goddess of the rainbow, was the UK’s fastest university-owned machine and its greenest supercomputer when it launched in 2009. However, power is nothing if nobody uses it.
Modern supercomputers are broad-spectrum tools – hence Iridis 3’s rainbow-themed nomenclature. “It has an engineering application, archaeology, medicine, climate change, optics and electronics, particle physics – all of those were able to use this machine we’d bought to do their science,” explains Cox. “At least 40% of our research income, in some way or more, is related to the use of computing.”
The research carried out at the university isn’t just theoretical.
“Sometimes people are just using the computer to do a simulation, but that computing technology is also being linked to some of our large-scale experiments – big engineering experiments or designing a turbine blade with Rolls-Royce.”
Medical science is one of the biggest users of supercomputing resources, both processing the output of data-heavy instruments such as computed axial tomography (CAT) scanners and creating simulations to better understand real-world phenomena.
“I saw a paper recently where they were simulating how a cell deforms as you insert a needle into it to extract the nucleus, and improving the design of the needle and the technique for doing cellular biology,” recounts Ian Buck, general manager for GPU computing at Nvidia, and creator of the Compute Unified Device Architecture (CUDA) parallel computing platform.
It isn’t all turbine blades and medicine, however. The US Department of Energy, which handles the nation’s nuclear weapons programme as well as domestic energy production, is a major power in the supercomputing world, currently holding positions one, two and four in the twice-yearly TOP500 list of the world’s fastest supercomputers.
The technology is also regularly harnessed by oil companies in their continual search for new reserves of the dwindling resource, chewing through vast quantities of seismic data to find the most promising locations to drill.
The increasing power of supercomputers enables the simulation and data processing required for these disciplines. Moore’s law has certainly helped, but the demand for ever-increasing performance has outstripped that trend.
It’s here that a technology developed for consumer-grade computing is helping to push supercomputing forwards: graphics cards.