Build your own supercomputer

The term “supercomputer” is a loose one. There’s no official definition, so there’s nothing preventing you from applying the term to your desktop PC, laptop or digital watch.

Broadly, though, it refers to a computer that’s much more powerful than the typical hardware of its period.

The first supercomputer is often said to be the CDC 6600, designed in the early 1960s by Seymour Cray (whose name would become synonymous with supercomputing). It could perform calculations at a rate of around one megaflops – that is, one million floating-point arithmetical operations per second; roughly five times the performance of a contemporary mainframe such as the IBM 7090.

Today, the term might refer to a system such as the Fujitsu K computer, capable of more than ten petaflops – a staggering ten-billionfold increase over the CDC 6600. The comparison isn’t perfect, since the two systems were designed for quite different tasks, but it’s clear we’re dealing with vast amounts of power.

Supercomputing applications

It might not be immediately obvious what anybody might need with such incredible computational power, but there are a number of real-world tasks that will devour all the processing resources you can throw at them.

In scientific research, supercomputers can be used to test fluid dynamic or aerodynamic models without the need to build expensive prototypes. At CERN, supercomputers perform simulated subatomic experiments.

Seismologists use supercomputer resources to model the effects of earthquakes, and meteorologists can rapidly analyse large quantities of sensor data to predict how weather systems will develop.

Supercomputing is at the forefront of new technologies, too. Creating a computer interface that responds to natural language, for example, is an extremely challenging task, owing to the immense variety of sounds, situations and nuances that must be understood; the more horsepower that can be thrown at the problem, the better the results will be.

Looking further ahead, supercomputing could even deliver the holy grail of artificial intelligence. Back in 1997, IBM’s Deep Blue supercomputer famously defeated world champion Garry Kasparov at chess.

Its Blue Gene/P supercomputer, unveiled in 2007, has been used to simulate a neural network of 1.6 billion neurons, representing around 1% of the complexity of the human brain.

And last year, IBM’s Watson computer appeared as a contestant on US game show Jeopardy!, defeating two former champions to walk away – well, to be wheeled away – with a million-dollar prize.

A supercomputer at home

Few of us run seismology labs, or develop artificial intelligence systems. However, there are domestic roles for supercomputing, too. If you’re a budding film-maker, you’ll know that creating sophisticated cinematic effects involves much intensive computation. The more power you have on hand, the more quickly you can try things out and see results.

With enough grunt, you could recreate the photorealistic animations of Michael Bay’s Transformers movies, or the fantastically detailed world of Wall-E – but even for a dedicated studio such as Pixar, each frame of an animated movie can take around 90 minutes to render.

The precise figure varies from frame to frame, depending on its complexity and the computing resources available. Many scenes are rendered simultaneously – otherwise a film such as Toy Story 3 would take decades to render.

With a high-performance computer, you can also play a big part in distributed projects such as SETI@home and Folding@home. These projects let you use your computer to analyse raw data for worthy causes; in the case of SETI@home, you’ll be analysing radio telescope data for possible evidence of extraterrestrial life.

The Folding@home project uses volunteer computing power to conduct simulated experiments that could lead to treatments for diseases such as Alzheimer’s and Parkinson’s (the project takes its name from the way proteins “fold” into shapes that cause various behaviours within the human body).

You don’t need a supercomputer to participate in these distributed efforts, but by donating an exceptional quantity of computing power, you can make a significant contribution to research that could change the world. There’s also the cachet to be gained from working your way up the leaderboards of the most active contributors: the faster your PC, the higher you’ll be placed.

Building your own supercomputer

If you fancy getting stuck into tasks such as these, you could buy dedicated hardware from the likes of HP or Cray, but this is probably overkill, and would certainly be tremendously expensive.

The Cray XK6, for example, is capable of more than one petaflops, but system prices start at around half a million dollars. A cheaper option is to make use of hosted computing services such as Microsoft Azure or Amazon Web Services.

But if you want to own and control your own hardware, a home-brew approach can provide a usable measure of supercomputing power at a comparatively realistic price.

What does a homemade supercomputer look like? As we’ve noted, there’s no formal definition of a supercomputer. One thing that’s likely to characterise your hardware, however, is parallelisation: historically, parallel processing is the means that has allowed supercomputers to achieve their exceptional levels of performance.

Almost every modern CPU on the market has two or more physical cores built directly into the chip package (it’s physical cores, rather than the virtual ones provided by technologies such as Hyper-Threading, that matter here), so arguably you could install a mainstream CPU in a regular motherboard and call it a supercomputer. Indeed, a modern Core i7 system will deliver computing power on a similar scale to that of a real supercomputer from 20 years ago, such as the Intel Paragon, which cost a million dollars and filled half a room.

However, the term supercomputer implies something beyond the norm, and these days, an eight-core system is comparatively run-of-the-mill. A 16-core system might qualify. A 48-core system? Now we’re getting somewhere.

How do you go about assembling a system like this? One option is to invest in a motherboard that supports multiple processors. Another is to combine many computers into a cluster that functions as a single supercomputer.

Alternatively, you could look beyond the CPU to add-on cards that place huge quantities of raw number-crunching power at the main processor’s disposal. Or you could use the hundreds of stream processors on a graphics card to the same end. Let’s look at each of these approaches in turn.

Multiple CPUs

Mainstream desktop chips aren’t ordinarily used in multiprocessor configurations, and you’ll find very little hardware support for doing so. If you want to run multiple CPUs in parallel, you’re basically limited to workstation or server architectures.

On Intel hardware, this means LGA 2011 chips, most of which come under the Xeon brand. If you prefer AMD, you can use the still-supported Socket G34 platform, or the newer Socket C32 that supports the latest Opteron models.

None of this is cheap – the hardware is aimed at businesses, which are typically willing to pay for heavy-duty hardware. Dual Intel socket 2011 motherboards start at around £200, and processors at around £220 each for the Core i7-3820. Move up to the top-of-the-range eight-core Xeon E5-2690 and you’re looking at well over £1,000 per processor.

This approach has one major benefit, however: Windows is designed to “just work” in multiprocessor environments, so any program that can make sensible use of a dual-core processor should automatically scale up to run in a 16-core environment.

This makes a multiprocessor model appealing if you want to use your supercomputer to run mainstream multithreaded applications such as 3D-rendering tools or media encoders.
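
To see why this works, consider how a typical multithreaded job is structured. The sketch below (in Python, purely as an illustration – the same principle applies to native Windows applications) simply asks the operating system how many cores are present and farms work out across all of them, so the identical code that keeps a dual-core desktop busy will saturate a 16-core dual-Xeon workstation:

# scale_with_cores.py - an illustrative sketch: the same code uses two cores on
# a desktop or all 16 on a dual eight-core Xeon box, with no changes required.
import os
from multiprocessing import Pool

def render_frame(frame_number):
    # Stand-in for real work, such as rendering or encoding a single frame.
    return sum(x * x for x in range(1_000_000 + frame_number))

if __name__ == '__main__':
    cores = os.cpu_count()                    # however many cores the OS reports
    with Pool(processes=cores) as pool:       # one worker process per core
        results = pool.map(render_frame, range(64))
    print(f"Processed {len(results)} frames across {cores} cores")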

Forming a cluster

The multiprocessor approach has limitations. Once you’ve installed your two expensive processors in your expensive motherboard, there’s almost no scope to expand organically; you could install more RAM, or swap out your processors for a pair of more powerful models, but basically what you have is a closed system. A more flexible approach is clustering.

A cluster is a group of computers, typically connected via a local area network, which acts as if it were a single system.

Clusters can be used for all sorts of purposes, such as providing load balancing and fault tolerance for network services, but the model lends itself particularly well to supercomputing applications. Indeed, a clustering approach has been the basis of most of the best-known supercomputers in history, including Fujitsu’s world-beating K computer.

The philosophy behind supercomputing clustering is simple. One physical (or virtual) machine is configured as the “master” system or the “head node”, and it’s on this system that the main application code runs. The other nodes do nothing but sit and wait for the master system to delegate workloads to them; when these are received, they do the work and return the results as quickly as possible.

A computational cluster can be seen as a macrocosm of a multiprocessor system, with multiple physical computers working on their individual tasks in parallel.

The difference is that nodes can be added to your cluster, or removed from it, as easily as connecting a new PC to a network; and, what’s more, there’s no requirement at all for the node hardware to use any particular architecture.

If you wanted, you could assemble a cluster from a hotchpotch of systems including netbooks, laptops, workstations and high-performance servers. The only requirement is that each node is running suitable client software.
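
To make the idea concrete, here’s a minimal sketch of the master/node pattern in Python, using the standard library’s remote-manager facility to share a job queue over the LAN. The hostname, port and authkey are placeholders, and the “work” is a trivial stand-in – a real cluster would run far heavier code on each node:

# head_node.py - runs on the master system and hands out workloads over the LAN.
from multiprocessing.managers import BaseManager
import queue

jobs, results = queue.Queue(), queue.Queue()

class ClusterManager(BaseManager):
    pass

ClusterManager.register('get_jobs', callable=lambda: jobs)
ClusterManager.register('get_results', callable=lambda: results)

if __name__ == '__main__':
    for n in range(1000):
        jobs.put(n)                               # workloads awaiting delegation
    manager = ClusterManager(address=('', 50000), authkey=b'homebrew')
    manager.get_server().serve_forever()          # nodes connect and collect work


# node.py - runs on every other machine in the cluster, whatever its hardware.
from multiprocessing.managers import BaseManager

class ClusterManager(BaseManager):
    pass

ClusterManager.register('get_jobs')
ClusterManager.register('get_results')

if __name__ == '__main__':
    manager = ClusterManager(address=('head-node', 50000), authkey=b'homebrew')
    manager.connect()
    jobs, results = manager.get_jobs(), manager.get_results()
    while not jobs.empty():
        n = jobs.get()
        results.put((n, n * n))                   # stand-in for real number-crunching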

Arguably, the best-known examples of computing clusters are the SETI@home and Folding@home projects – but the term “cluster” more usually implies a centrally managed system (projects that combine the power of remote computers are referred to instead as “grid computing”).

The nodes of a cluster are also usually connected via a much faster link than a regular internet connection, to minimise the latency involved in sending workloads back and forth. In a home cluster, that might be Gigabit or 10-Gigabit Ethernet; the K computer uses a proprietary interconnect called “Tofu”, which provides 100GB/sec of bandwidth.

Supercomputing coding

Windows-based clusters can be assembled quite easily using the Windows HPC Server 2008 operating system, and Microsoft provides guidelines for creating “cluster-aware” applications that will make use of cluster resources when run on such a system. Alternatively, there are various free Linux distributions that are designed for clustering, such as openMosix and ClusterKnoppix. These provide a user-friendly experience that makes it almost effortless to set up a cluster of any size using the popular Beowulf system.

Whichever route you choose, however, one limitation that you’re likely to encounter is a dearth of pre-existing applications that are designed to make use of cluster resources. This isn’t necessarily a problem, as supercomputer tasks are typically carried out by bespoke code.
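
In practice, cluster-aware code – whether destined for Windows HPC Server or a Linux Beowulf cluster – is usually written against the MPI message-passing standard. The fragment below is a minimal sketch using the mpi4py Python bindings (one possible toolchain, not something the platforms above mandate): each process works on its own slice of a problem, and the head node combines the partial results.

# mpi_sketch.py - launch across the cluster with, for example:
#   mpiexec -n 16 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # this process's ID within the whole cluster
size = comm.Get_size()        # total number of processes across all nodes

# Each process handles an interleaved slice of the problem in parallel...
partial = sum(x * x for x in range(rank, 10_000_000, size))

# ...and rank 0 (the head node) gathers and combines the partial results.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("Sum of squares:", total)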

Add-on cards

The cluster approach is flexible, but quite wasteful – it basically means leaving an entire computer switched on and drawing power when you’re typically making use of only a fraction of its capabilities.

A more energy-efficient approach is to mount a large number of processor cores on one expansion card and use these cores as a virtual cluster.

This was the thinking behind Intel’s ill-fated Larrabee project, which sought to integrate 32 x86 cores – processor cores such as you might find in a regular PC – onto a single PCI Express card.

An early demonstration of the hardware showed a Larrabee card achieving performance of just over one teraflop, and the idea was that its huge parallel-processing power could be used to render complex, high-quality graphics in real time.

Larrabee couldn’t be made to work as a graphics-orientated product, and the project was officially shelved in 2010. But Intel kept working on a more general-purpose Larrabee-type architecture – called the Many Integrated Core architecture, or MIC for short – which could be used for any sort of parallel processing. A prototype 32-core PCI Express card, codenamed Knights Ferry, was trialled in 2010 at the Leibniz Supercomputing Centre and at CERN, and proved capable of providing around 750 gigaflops of computing power. Its successor, codenamed Knights Corner, is expected to go on general sale later this year, and will probably sport 48 cores or more.

Knights Corner looks set to be a neat and power-efficient way to turn your desktop PC into a supercomputer, but it’s a specialist market, so hardware costs are likely to be steep: it could actually work out cheaper to buy an entire cluster of multicore PCs. And the applications you run will need to be written specifically for parallelised execution.

GPU options

Your last option for supercomputing is to eschew conventional CPU cores entirely, and instead exploit the power of your graphics card. After all, the shaders in a GPU (or stream processors, as they’re also called) are designed to carry out large numbers of calculations in parallel at very high speeds – which is exactly what supercomputers are traditionally best at doing. As we’ve noted above, professional studios have long used supercomputer-class hardware for rendering 3D scenes.

GPUs offer far greater parallelism than CPUs. While a high-end CPU might have eight cores, even a mid-range desktop graphics card typically has more than 100 stream processors, and today’s high-end models have more than 2,000. This enables a top-of-the-range AMD Radeon HD 7970 to turn over nearly four teraflops – almost 40 times the computational power of a Core i7-980X.
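
As a rough sanity check, that headline figure follows directly from the card’s specifications: 2,048 stream processors running at 925MHz, each capable of two floating-point operations per clock cycle, works out at roughly 3.8 trillion operations per second, or 3.8 teraflops.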

Note that GPU performance is typically cited in terms of “single-precision” calculations, which store numbers with fewer significant digits and so are more prone to rounding errors. Working with double-precision values, for accuracy comparable to that of a CPU, at least halves performance.
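
If you want to see the difference for yourself, a couple of lines of Python with the NumPy library (purely illustrative) show how much precision each format keeps when storing a value as simple as 0.1:

import numpy as np

# 0.1 has no exact binary representation; single precision keeps roughly seven
# significant digits, double precision roughly sixteen.
print("%.20f" % np.float32(0.1))   # 0.10000000149011611938...
print("%.20f" % np.float64(0.1))   # 0.10000000000000000555...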

Even so, using graphics hardware is vastly more economical than using conventional processors, with AMD’s high-end card costing less than £400.

The reason GPU stream processors are so cheap compared with CPU cores is that they’re massively simpler – their capabilities are largely limited to performing straightforward mathematical operations on pre-supplied data. A GPU would be very ill-suited to running full-fat applications, but for supercomputing workloads, it’s just the ticket.

Since GPU architectures are fundamentally different to CPU designs, applications must be written specifically to use the GPU as a computing resource (an approach known as GPGPU, short for “general-purpose graphics processing unit” computing). However, this needn’t mean learning a whole new programming paradigm. Nvidia cards use what’s called the Compute Unified Device Architecture (CUDA), which means that they can be programmed in a variant of C – and, with recent hardware, C++ – with extensions to access GPU-specific functions.

Windows programmers can alternatively make use of a library of DirectX functions called DirectCompute, which sends mathematical tasks to the graphics hardware. A third option is OpenCL, which can be used to create GPU-bound functions in a C-like language. Both DirectCompute and OpenCL will work on any recent AMD or Nvidia graphics card, and even with Intel’s integrated GPUs, so your code needn’t be tied to any particular platform.
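
As a flavour of what GPGPU code looks like, here’s a minimal OpenCL sketch using the PyOpenCL bindings (one possible route; the C-based options above work along the same lines). The kernel – the C-like function in the string – runs once per element, spread across the card’s stream processors:

# gpu_add.py - adds two large arrays on whatever OpenCL device is available.
import numpy as np
import pyopencl as cl

a = np.random.rand(500000).astype(np.float32)
b = np.random.rand(500000).astype(np.float32)

ctx = cl.create_some_context()                # picks a GPU (or CPU) device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is written in OpenCL's C-like language and compiled at runtime.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);               // which element this instance handles
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)       # copy the answer back from the GPU
print(np.allclose(result, a + b))             # True: GPU and CPU results agree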

If you choose to take the GPU route, you can start supercomputing very cheaply with mainstream hardware. But both Nvidia and AMD also offer premium cards designed specifically for GPGPU applications (branded “Tesla” and “FireStream” respectively).

These include performance optimisations that are irrelevant to gaming but potentially valuable to the supercomputing market, such as improved performance in double-precision calculations, giving them even more of a lead over conventional desktop processors. These cards aren’t cheap – a Tesla model with 512 stream processors will cost more than £2,000. But it’s still cheaper than 512 CPUs.
