Ever get the feeling your PC is being a bit random? Well, it’s not. It might simply be a case of low-order nonlinear deterministic chaos. Honest.

Last month, a French team of university researchers published a paper entitled Chaos in computer performance, downloadable from http://arxiv.org. Drawing on statistical analysis, the paper argues that modern CPUs should be classified and analysed as complex systems.
To put that in perspective, the most often cited example of a complex system is the weather. Although the global weather system works around well-understood principles of Newtonian physics, in practice the near-infinitely complex interplay of its components is unfathomable. Even if you have a clear grasp of the state of the system at 4.30pm on Friday, it’s essentially impossible to tell exactly what it will be at 4.50pm the same day, and going any further than this time next week relies as much on luck as judgement – the teeming cascades of interaction are just too complex to track. This idea is known as nonlinear deterministic chaos, usually shortened to chaos, and the news is that modern processors appear to display it too.
One of the defining characteristics of a chaotic system is its sensitivity to initial conditions, known as SCI in maths circles. High SCI means that a small variance in initial input values can lead to a very large nonlinear variance in the final output, not predictable using standard analytical techniques – the origin of the notional butterfly effect, where a tiny flutter of a butterfly’s wings starts a chain of events that eventually causes a typhoon. The initial conditions of the weather are simply what it’s like at 4.30pm on Friday; in a computer program, the initial conditions are the input data to be processed by that program.
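As a rough illustration of SCI (my own toy example, not taken from the paper), consider the logistic map, a textbook case of low-order chaos: two starting values differing by one part in a billion track each other for a while, then diverge completely within a few dozen iterations.

```python
# A minimal sketch of sensitivity to initial conditions (SCI) using the
# logistic map, a standard textbook example of low-order chaos.
# Illustration only; it is not drawn from the paper discussed above.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

a, b = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.6f}")
```

By around step 30 the two trajectories bear no resemblance to each other, even though the rule generating them is completely deterministic.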
Computers are completely deterministic in the sense that, given the same program and the same data, you’ll always get the same correct answer eventually: they’d be useless otherwise. The chaotic variance comes in the time taken to complete an operation. Running the bzip2 compression test from the SPEC series of performance benchmarks, the research team found a large variation in the time taken to complete identical operations. They used various mathematical sleuthing techniques to eliminate the possibility that the nature of the input data was causing the variation. It wasn’t: the processor itself was being unpredictable because of the complex interactions between its Level 1 and 2 caches, branch prediction units and so on. It was exhibiting low-order chaotic behaviour.
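You can get a crude feel for this at home, though with a big caveat: the quick sketch below simply times the same bzip2 compression of identical data over and over, so any spread in the results cannot come from the data itself. It is not the researchers’ methodology – in practice, operating system noise will swamp the microarchitectural effects the paper carefully isolates – but it does show that identical work rarely takes identical time.

```python
# Rough timing sketch (not the researchers' method): compress the same fixed
# input repeatedly and report the spread of run times. The input never
# changes, so any variation comes from the machine, not the data.
import bz2
import statistics
import time

data = bytes(range(256)) * 256            # fixed 64 KiB input, identical every run

timings_us = []
for _ in range(200):
    start = time.perf_counter_ns()
    bz2.compress(data)
    timings_us.append((time.perf_counter_ns() - start) / 1000)

print(f"min    {min(timings_us):10.1f} us")
print(f"median {statistics.median(timings_us):10.1f} us")
print(f"max    {max(timings_us):10.1f} us")
print(f"stdev  {statistics.stdev(timings_us):10.1f} us")
```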
Now, this isn’t an explanation for your PC taking longer to download your email than it did yesterday. The variations in question are at the microscopic level and concern the time taken to complete a few individual instructions. Since there are several billion instructions completed per second, at the macroscopic level these variations average out so they’re imperceptible in everyday use.
The more important point the researchers make is that design principles will soon be forced to change. You’ve probably read that the Montecito dual-core Itanium packs 1.72 billion transistors – far more than double the number in previous designs. Just like the weather, processor architecture is becoming so complex that human brains aren’t big enough to take it all in. Processor design is turning to machine-design techniques based on automated trial-and-error methods and genetic algorithms.
But look up almost any university thesis on genetic algorithms (which evolve new designs using nature’s method of selection by fitness for the purpose) and you’ll find paragraphs declaring that a particular genetically evolved solution works, but that the researchers don’t know why. There’s even an urban myth that a researcher evolved a working electronic oscillator circuit containing a transistor that wasn’t physically connected to the circuit; when it was removed, the circuit stopped working.
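For the curious, here’s a toy genetic algorithm – purely illustrative, and nothing like the industrial design tools mentioned above – that shows the basic loop of selection by fitness, crossover and mutation, evolving a bit-string towards an arbitrary target.

```python
# Toy genetic algorithm sketch (illustrative only): evolve a bit-string
# towards an arbitrary target via fitness-based selection, single-point
# crossover and random mutation.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Number of bits matching the target - higher is fitter."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(mum, dad):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, len(TARGET))
    return mum[:point] + dad[point:]

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                   # perfect match found
    parents = population[:10]                   # keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"generation {generation}: fitness {fitness(best)}/{len(TARGET)}, genome {best}")
```

Even in a toy like this, the winning genome emerges from blind variation and selection rather than explicit design – which is exactly why the evolved solutions to harder problems can work without anyone being able to say why.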