How much smaller can chips go?

Seven of the finest minds Intel can muster are lined up on stage, ready to take questions from a pack of visibly intimidated European journalists.

These are Intel fellows – the highest rank of technical merit afforded to the company’s engineers – whose CVs are stuffed with PhDs and patents in the places that most people put fillers such as “excellent typing skills” and “interest in badminton”.

Finally, one of the press pack plucks up the courage to ask a question. Is Moore’s Law – Gordon Moore’s legendary prediction that the number of transistors on a processor will double every two years – dead? One or two of the fellows chuckle politely, others are visibly irritated. Almost all are eager to grab the microphone and put the impertinent questioner straight.

One by one, they deliver measured and witty responses. “The number of people predicting the end of Moore’s Law doubles every two years,” quips the Scandinavian Tryggve Fossum, before American fellow Karl Kempf delivers a cutting dénouement. “The first microprocessor had 2,300 transistors, now we have processors with 2.3 billion transistors. That’s Moore’s Law. That’s what we do.”
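Kempf’s figures are easy to sanity-check. A quick back-of-the-envelope sketch in Python (it assumes the 2,300-transistor chip is the 1971 Intel 4004, which the article doesn’t name, and counts up to the 2010 publication date):

```python
# Back-of-the-envelope check of Kempf's figures: 2,300 transistors
# in the first microprocessor versus 2.3 billion today.
import math

first = 2_300          # Intel 4004, 1971 (assumed as the starting point)
today = 2_300_000_000  # the 2.3bn-transistor chip Kempf cites
years = 2010 - 1971    # counting to the article's publication date

doublings = math.log2(today / first)  # ~19.9 doublings
print(f"{doublings:.1f} doublings in {years} years "
      f"= one every {years / doublings:.1f} years")
# -> roughly one doubling every two years, just as Moore predicted
```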

Indeed, it’s what Intel’s been doing for more than 30 years. Now, the company is preparing to defy the laws of physics to “print” its next generation of chips: chips so crammed with transistors that the machinery making them must work with sub-nanometre precision.

But when you’re already working with transistors a fraction of the size of a virus, how much further can you push miniaturisation before the plucky journalist’s predicted demise of Moore’s Law comes true?

We’re going to reveal how Intel and other manufacturers overcame the enormous technical barriers that stood in the way of today’s chip technology, and explore the challenges they face in shrinking tomorrow’s chips to 22nm and beyond.

The size of the task

The complexity of a modern processor is almost beyond comprehension. A working 1GHz core on ARM’s latest Cortex-A9 processors occupies less than 1.5mm², using the 65nm production process. To put that into perspective: a nanometre is a billionth of a metre, which means a nanometre is to a tennis ball roughly what a tennis ball is to the planet Earth.
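That analogy survives a rough check. A quick sketch, assuming a tennis ball diameter of about 6.7cm and an Earth diameter of about 12,742km (neither figure comes from the article):

```python
# Rough check of the tennis-ball analogy (both diameters are assumptions).
nanometre = 1e-9          # metres
tennis_ball = 0.067       # ~6.7cm, a standard tennis ball
earth = 12_742_000        # ~12,742km, Earth's mean diameter

print(f"tennis ball / nanometre: {tennis_ball / nanometre:.1e}")  # ~6.7e7
print(f"Earth / tennis ball:     {earth / tennis_ball:.1e}")      # ~1.9e8
# The two ratios agree to within a factor of three -- close enough
# for a mental picture of just how small a nanometre is.
```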

“Microscopic” doesn’t even come close.

Yet, if that sounds impossibly fiddly, Intel’s latest Core processors are built using a 32nm process. While you might just be able to spot one of ARM’s cores with the naked eye, to see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house.

Working at such precision is an enormous challenge for chip manufacturers. As processes are refined every two years to keep Moore’s Law alive, Intel’s engineers are forced to show remarkable levels of ingenuity to keep processors ticking. “The end has been predicted many times, and we have shown this is not the case,” said Intel fellow Jose Maiz. “At least, not yet.”

That’s not to say they haven’t come close. When Intel moved from its 90nm process to 65nm in 2005, something very unusual happened. “Initially, 65nm didn’t have any [performance] advantage over 90nm,” Maiz conceded.

Despite doubling the number of transistors on the processor, there was no gain in performance because the transistors were leaking too much energy.

A modern transistor is essentially a simple switch: current flows between the source and the drain when the gate electrode in the middle reaches a certain voltage. In 2005, the gate dielectric – the insulating layer beneath the gate electrode – was made of silicon dioxide (SiO2).

As the transistors had shrunk over the years, so had the thin layer of silicon dioxide on the gate, to the point where it was only a few atomic layers thick. The material had been stretched so thin that current had begun to leak through the insulation to the gate’s electrode, in a similar fashion to a dripping tap. And when you’re dealing with hundreds of millions of leaky taps on a single processor, that becomes a huge issue.
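The physics behind the dripping tap is quantum tunnelling: once the insulator is only a few atoms thick, electrons tunnel straight through it, and the leakage grows exponentially as the layer thins. A toy model gives the flavour – the decay length here is invented for illustration, not an Intel process figure:

```python
# Toy model of gate leakage: direct tunnelling current rises
# exponentially as the oxide thins. All constants are illustrative.
import math

def relative_leakage(t_ox_nm, decay_nm=0.1):
    """Leakage relative to a 2nm oxide; decay_nm is an assumed
    tunnelling decay length, not a measured process parameter."""
    return math.exp((2.0 - t_ox_nm) / decay_nm)

for t in (2.0, 1.5, 1.2, 1.0):
    print(f"{t:.1f}nm oxide -> {relative_leakage(t):10.0f}x the leakage")
# Halving the thickness doesn't double the leakage -- it multiplies it
# by orders of magnitude, which is why shrinking SiO2 hit a wall.
```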

[Image: transistor cross section]

Intel knew it had a problem long before it launched the troubled 65nm processors in 2005. Two years previously, the company had announced its solution to the leaking gate problem: the high-k metal gate. This would see the silicon dioxide replaced with a hafnium-based “high-k” material.

High-k materials such as hafnium dioxide, zirconium dioxide and titanium dioxide can hold a much greater charge than silicon dioxide at a given thickness, in the same way a sponge can hold more water than wood.
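In capacitor terms, gate capacitance follows C = k·ε0·A/t, so a higher dielectric constant k buys the same capacitance from a physically thicker, far less leaky layer. A sketch using textbook k values (roughly 3.9 for SiO2 and 25 for hafnium dioxide; the figures for Intel’s actual film aren’t public):

```python
# Equivalent oxide thickness (EOT): how thick a high-k film can be
# while matching the gate capacitance of a thin SiO2 layer.
K_SIO2 = 3.9   # dielectric constant of silicon dioxide (textbook value)
K_HFO2 = 25.0  # hafnium dioxide, approximate textbook value

def physical_thickness(eot_nm, k_material):
    """Thickness of a high-k film with the same capacitance as an
    SiO2 layer of thickness eot_nm (from C = k * e0 * A / t)."""
    return eot_nm * k_material / K_SIO2

print(f"{physical_thickness(1.0, K_HFO2):.1f}nm of HfO2 "
      "matches 1.0nm of SiO2")  # ~6.4nm -- thick enough to choke off tunnelling
```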

Problem solved? Not quite. Hafnium doesn’t play well with the polysilicon material that was used for the gate’s electrode, forcing Intel to introduce a new type of metal electrode as well.

It took Intel’s enormously well-resourced researchers five years to find the perfect combination of high-k material and metal electrode for its transistors, and another four years of development before it was ready to be introduced with the first 45nm processors in 2007.

Still, it was worth the wait, according to Maiz. “In microprocessors today, perhaps 20 or 30% of the power is wasted,” he said. Without high-k metal gate transistors, “that would have been 40 to 50% on a 45nm chip”.

When you’re trying to shave fractions of a watt off the power consumption in a smartphone processor, for example, that 20 to 30% of saved energy makes an enormous difference.

Dark silicon

While manufacturers may have solved the problem of leaky transistors, they now face a different challenge: there are simply too many of them on next-generation processors. With Intel and others planning to move to the 22nm process by 2011, the number of transistors on each processor is growing exponentially; Intel’s test 22nm chip has 2.3 billion transistors, for example.

“Chips don’t physically get any smaller, people just cram in more transistors,” ARM’s chief technology officer, Mike Miller, told PC Pro. “Until now, smaller transistors actually switched on and off faster. And the amount of energy it took to switch a transistor went down as well. You got more [transistors], they went faster and took less power.

“What’s started to happen is that shrinking is going to keep going on, but the speed of the transistor is not going up as quickly as it used to. The most critical thing is they’ve stopped taking less energy to switch,” Miller added.
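What Miller describes is the breakdown of what chip designers call Dennard scaling: for decades, each shrink also cut the energy per switch, so total chip power stayed roughly flat even as transistor counts soared. A simplified sketch of the two regimes, using idealised scaling factors rather than measured ones:

```python
# Idealised contrast between classic Dennard scaling and the
# post-Dennard regime Miller describes. Factors are illustrative.
def chip_power(nodes, energy_scaling):
    """Relative chip power after `nodes` process shrinks, assuming
    the transistor count doubles per node at a fixed clock speed."""
    power = 1.0
    for _ in range(nodes):
        power *= 2 * energy_scaling  # 2x transistors, each at scaled energy
    return power

print(chip_power(4, 0.5))  # Dennard era: energy halves per node -> 1.0 (flat)
print(chip_power(4, 1.0))  # energy stops scaling -> 16.0 (16x the power)
```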

In servers or smartphones, where the processor is afforded a specific power envelope, continued shrinking could lead to what Miller describes as “dark silicon”, where the chip doesn’t have enough power available to take advantage of all those transistors.

Using the power budget of a 45nm chip, if the processor remains the same size, only a quarter of the silicon is exploitable at 22nm and only a tenth is usable at 11nm.
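Those fractions follow from simple geometry, as the rough sketch below shows. It pessimistically assumes the energy per switch stops improving altogether, which is why it lands slightly below Miller’s tenth at 11nm:

```python
# Rough arithmetic behind the "dark silicon" fractions. Assumes a
# fixed power budget, a fixed die size and (pessimistically) no
# further reduction in the energy per transistor switch.
for node in (22, 11):
    transistors = (45 / node) ** 2  # transistor count scales with area
    usable = 1 / transistors        # fraction the power budget can light up
    print(f"{node}nm: {transistors:.0f}x the transistors, "
          f"{usable:.0%} of the silicon usable")
# 22nm -> ~4x transistors, ~24% usable (Miller's "quarter");
# 11nm -> ~17x transistors, ~6% usable (near Miller's "tenth", which
# presumably credits some residual energy scaling).
```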

The PC philosophy of piling everything through a CPU – instead of creating dedicated processors for specific tasks, as ARM does with smartphones – makes the PC particularly susceptible to the dark silicon problem.

“The PC architecture has taken any intelligence out of peripheral devices and runs it on the processor,” he claimed. “Something like an Ethernet controller has been dumbed down. For a low-power architecture, that’s the wrong approach. That leads you to having one big, hot processor.”

So what’s the solution? Miller admits ARM doesn’t yet have the answer. “We’re going to have to find cleverer ways to use transistors that don’t involve turning them on and off all the time,” he said. “We’re looking at smarter ways of building a processor, so that you’re not using all the transistors all of the time.”

Part of ARM’s research involves designing processors that run below their optimal voltage. “As you lower the voltage, the processor starts running too slowly and making mistakes. We’re looking at the technology of error detection and error recovery, so that you can get closer to the edge,” he said.
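The idea Miller describes resembles published “Razor”-style research designs, in which timing errors are detected in hardware and the failing operation replayed. A toy control loop gives the flavour; the error model and every number in it are invented for illustration:

```python
# Toy illustration of error-tolerant voltage scaling: step the supply
# voltage down until the (simulated) error rate crosses a threshold,
# then stop. The error model and all numbers are invented.
def error_rate(mv):
    """Pretend error rate: negligible at or above 800mV, climbing below."""
    return 0.0 if mv >= 800 else (800 - mv) * 0.0005

mv, target = 1000, 0.01          # start at a nominal 1.0V; tolerate 1% errors
while error_rate(mv - 20) <= target:
    mv -= 20                     # creep 20mV closer to the edge
print(f"settled at {mv}mV with {error_rate(mv):.1%} "
      "errors to detect and replay")
```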

Yet, even if ARM doesn’t have all the answers right now, Miller is confident it won’t be a show-stopper by the time 22nm processors arrive. “For us, it’s an evolution, it isn’t a cliff edge,” he said. “Creativity shines when you give people the opportunity.”

Seeing the light

Even if the researchers do find a way to power the billions of transistors on tomorrow’s processors, there’s no guarantee they’ll be able to manufacture them.

Today’s chips are “printed” using a process called deep ultraviolet lithography (DUV), but the technology is nearing its limits. It is almost physically impossible to print lines any thinner using DUV: diffraction means the lines become blurred and fuzzy as manufacturing processes shrink, potentially causing transistors to fail.

DUV should suffice for the 22nm chips that are set to enter manufacturing in 2011, but by the time the 16nm chips of 2013 enter production, an alternative will be required.

That alternative is called extreme ultraviolet lithography (EUV), which uses light of a far shorter wavelength – too readily absorbed to pass through conventional lenses, so it must be focused by a series of precisely placed, multilayered mirrors – to print at vastly finer resolutions. Intel and others have been working on it for a long time.
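The physics behind the switch is the Rayleigh criterion: the smallest printable feature is roughly CD = k1·λ/NA, where λ is the light’s wavelength, NA is the numerical aperture of the optics and k1 is a process factor that can’t drop much below 0.25. A sketch with commonly cited values – 193nm light for today’s immersion DUV tools, 13.5nm for EUV:

```python
# Rayleigh criterion: minimum printable feature CD = k1 * wavelength / NA.
# The constants are commonly cited figures, not any one fab's numbers.
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.3):
    return k1 * wavelength_nm / numerical_aperture

print(f"DUV, 193nm immersion (NA ~1.35): {min_feature_nm(193, 1.35):.0f}nm")
print(f"EUV, 13.5nm (NA ~0.25):          {min_feature_nm(13.5, 0.25):.1f}nm")
# DUV bottoms out at roughly 40nm line widths without resorting to
# multiple-patterning tricks; EUV's far shorter wavelength restores headroom.
```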

In 1997, Intel co-founder Gordon Moore and members of the Clinton administration announced a $250 million public-private partnership to develop EUV, with partners such as Motorola and AMD.

“For the past 40 years we’ve made these patterns optically,” Moore said at the time. “We’ve used optical systems of one sort or another to project the image of the pattern we want on to the wafer. The problem is you can’t make images much smaller than the wavelength of the light that you’re using to make the image… We’re running out of tricks in what we can do with optical lithography, and we have to move to some other technique, something using much shorter wavelengths…”

Thirteen years later, Intel and others are still waiting for EUV to be ready to manufacture tomorrow’s processors. Intel alone has spent hundreds of millions of dollars developing EUV machines – “kind of an expensive camera”, as Maiz puts it – and yet there’s still no guarantee they’ll deliver the required accuracy.

“To manufacture this stuff, you have to have control at the sub-nanometre scale,” said Maiz. “The challenge isn’t just to make one [processor], but millions and millions of them, and for all of them to operate.”

ARM’s CTO agrees that EUV still has a long way to go before the machines can be introduced into fabrication plants. “Our research group taped out the first 22nm processors a while ago,” said Miller. “The fabs are being built now – more life will be wrung out [of the existing DUV technology] for the next few years.

“[EUV] isn’t a done deal. There will be surprises, no doubt about that.”

Ever decreasing circles

So how much further can they shrink the processor manufacturing process? Intel’s public roadmaps only stretch as far as 16nm in 2013, although it’s a safe bet that Intel – which delivers a new architecture every two years – has at least pencilled in 11nm by 2015.

Miller is more conservative, predicting that we’ll have reached 13nm by 2020. “Beyond that? That’s beyond Tomorrow’s World,” he said, when asked whether processors could ever reach the sub-nanometre level.

Even if such infinitesimally small manufacturing processes seem unimaginable, it’s exactly the sort of challenge these engineers thrive on. “You shouldn’t accept something is impossible just because you don’t know how to do it,” said Maiz. These guys simply don’t know when to stop.
