My return to parallel processing

Recently, I wrote about the renaissance of parallel processing and the new problems it would soon present to programmers. As I write now, two announcements from Intel really drive the point home.

First, it announced a new experimental CPU called the Single-chip Cloud Computer (SCC), which contains 48 Intel cores on the same die, all capable of talking to each other via a software-configurable message-passing scheme in on-chip memory.

Intel had already demoed an 80-core chip called Polaris for its Terascale Computing program, but those cores were only floating-point maths units, while the SCC's are full Pentium-compatible x86 cores. They’ll run any x86 program; they simply lack all the sophisticated performance enhancements of today’s chips. But that same week Intel announced it’s to dump its highly parallel graphics processor chip Larrabee as a commercial product.

It’s a safe bet that Larrabee’s disappointing performance was down to the inherent difficulty of programming general-purpose, message-passing parallel computers compared to dedicated, pipelined vector GPUs from Nvidia and AMD.

I went back over some of my Byte columns from the 1980s related to parallel processing, and stumbled across an old software project of mine called PriPar Machines (short for Primitive Parallel computers): a Turbo Pascal program that simulated a system of communicating Turing Machines.

Turing Machines

A Turing Machine is the simplest possible kind of computer, originally invented as a thought experiment by the father of computer science Alan Turing. It consists of a print head that moves back and forth reading, writing or erasing marks along an infinitely long paper tape under the control of a stored program. Turing showed that given unlimited tape and time, such a simple machine can compute anything a more complex architecture can achieve.
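
To make that concrete, here’s a minimal Turing Machine sketch in Ruby (my own toy illustration, not the PriPar code itself): the stored program is just a hash mapping [state, symbol] pairs to [new symbol, head move, new state], and a hash with a blank default value stands in for the unlimited tape.

```ruby
class TuringMachine
  def initialize(rules, tape, start_state)
    @rules = rules           # { [state, symbol] => [new_symbol, move, new_state] }
    @tape  = Hash.new(' ')   # unwritten cells read as blank, so the tape is effectively endless
    tape.chars.each_with_index { |sym, i| @tape[i] = sym }
    @head  = 0
    @state = start_state
  end

  def step
    action = @rules[[@state, @tape[@head]]]
    return false unless action               # no rule for this state and symbol: halt
    symbol, move, @state = action
    @tape[@head] = symbol
    @head += (move == :right ? 1 : -1)
    true
  end

  def run
    loop { break unless step }
    @tape.sort.map(&:last).join              # the tape contents, left to right
  end
end

# Example program: sweep right, overprinting a mark on every cell until a blank is reached.
rules = {
  ['s0', '0'] => ['/', :right, 's0'],
  ['s0', '1'] => ['/', :right, 's0']
}
puts TuringMachine.new(rules, '0110', 's0').run   # prints "////"
```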

PriPar approximates a class of Turing Machine with two extra instructions that let the machines send messages to and receive messages from one another. It’s object-orientated, so you create any number of PriPar machine objects, connect them together by message channels, give each one a stored program and some data on a tape, then set them all off running and messaging one another. It’s a sort of Meccano or Lego kit for playing with the problems of parallel processing. I didn’t give it a flashy graphical interface: you just view the output as a scrolling display of tape contents, strings of slashes like //////////////.

The two extra instructions are cribbed from the Occam language: “!” means “send a message and proceed”, and “?” means “wait until you receive a message before proceeding”.

Since each PriPar machine has just a single channel connected to one other machine, there’s no ambiguity about where a message is going. However, this is a software simulation, so you can easily add extra channels, buffer messages, or change the meaning of instructions to see the effect on efficiency, integrity and programmability.
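
Roughly speaking, “!” behaves like a non-blocking push onto the channel and “?” like a blocking pop from it. Here’s a toy Ruby illustration of that behaviour (again my own sketch, using Ruby threads and a Queue rather than PriPar’s own step-by-step simulation):

```ruby
channel = Queue.new   # a thread-safe FIFO standing in for a PriPar message channel

# "!" — send a message and proceed: push never blocks the sender.
sender = Thread.new do
  3.times do |i|
    channel.push('/' * (i + 1))        # pass on some tape marks
    puts "sender: message #{i + 1} away, carrying straight on"
  end
end

# "?" — wait until you receive a message before proceeding: pop blocks while the channel is empty.
receiver = Thread.new do
  3.times do
    marks = channel.pop                # sleeps here until something arrives
    puts "receiver: got #{marks}"
  end
end

[sender, receiver].each(&:join)
```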

That 1990 Byte article attracted little interest: few people had a clue what I was on about, since parallel processing was specialised stuff and Turing Machines were on a par with mermaids and unicorns. But a lot of work went into it, and it’s topical again, so I resolved to recycle it.

No-one uses Turbo Pascal 5 anymore, and it wasn’t ideal even back then (I had to represent tapes as fixed-length array constants, padded out with spaces). So I rewrote the simulator in Ruby, which to my delight took one afternoon. The Ruby version is more flexible because Ruby arrays are fully dynamic, like Lisp lists, so you can add stuff to a tape without worrying about its size. I’ve put the Ruby source for my PriPar system and some sample programs on my website.
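
For instance, pushing a new mark onto a Ruby tape just works, where the Pascal version needed a pre-sized, space-padded array constant (a trivial illustration rather than an excerpt from the PriPar source):

```ruby
tape = %w[/ / /]                      # the initial tape contents
tape << '/'                           # the tape simply grows when the head writes past the end
tape[7] = '/'                         # writing further out pads the gap automatically (with nils)
puts tape.map { |c| c || ' ' }.join   # prints "////   /"
```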
