The quest for power pervades the computing industry. A desire to outdo Moore’s law – the trend that sees the number of transistors on integrated circuits doubling roughly every two years – means that, every year, AMD and Intel release processors that eclipse the capabilities of the previous generation and drive a constant upgrade cycle of new hardware, new software and yet more new hardware.
Nowhere is this trend more obvious than in the high-performance computing (HPC) arena. It’s exemplified by supercomputers, those massively parallel systems that echo the early days of computing by filling rooms and demanding the output of electricity substations for their sole use.
HPC has led to scientific and technological breakthroughs previously thought impossible, giving researchers access to the computing power required to process vast quantities of data and simulate real-world environments and phenomena.
In recent years, however, the balance of power has shifted. Developments were once made at the supercomputing level and trickled down to the desktop, as with ARPANET, the network that gave rise to the internet, but today’s supercomputers owe their existence to technologies originally developed for desktops, laptops and even games consoles.
Early supercomputers
The word “supercomputer” was coined in the 1960s by the Control Data Corporation as a marketing term for its CDC 6600, designed by Seymour Cray before he left to found his own supercomputing company.
The concept of supercomputing stretches back still further. British company Ferranti was one of the forerunners of affordable scientific computing, releasing its Pegasus 1 system in 1956. The technology found in the Pegasus is, naturally, antiquated: considered fast at the time, its speed of a few thousand operations per second is outstripped by even the most modest modern smartphone.
Each generation of supercomputer typically at least doubles the performance of its predecessors, either through technological innovation – the replacement of vacuum tubes with solid-state transistors led to a massive increase in both performance and reliability, for example – or through brute force, adding in more components and cabinets.
Some things in the industry don’t change; a typical supercomputer today takes up a good portion of a room, as did the Ferranti Pegasus 1, and requires specialist staff, a robust power supply and adequate cooling. Other things, however, do change – and that’s where desktop computing comes into the supercomputing story.
Consumer computing
Simon Cox, professor of computational methods at the University of Southampton, has been at the forefront of supercomputing for two decades. In his time there, he’s seen the university’s supercomputing capability upgraded repeatedly – a trend that started back in 1956, when Southampton took delivery of its own Ferranti Pegasus.