ATi promised the release of its new graphics chips in ‘Summer 2005’. A look out of the window confirms that ATi is stretching the definition of summer by a long way, but it may have been worth the wait. The reason for the delay is that the X1800 GPU went through three circuit revisions before a certain gremlin was spotted and removed. With three optimising revisions behind it, we should see fewer flaws and higher yields in final production, meaning availability should be good too.
The biggest initial shock is that the X1800 is ‘only’ a 16-pixel-pipeline GPU. Compared with the 24 pixel pipes of the 7800 GTX and the 20 of the 7800 GT, you might think something’s amiss. Not so. Both nVidia cards actually have 16 ROPs (Render Output Processors), so they can still only write 16 pixels into the frame buffer per clock cycle, just as the X1800 can. But this XL GPU runs at 500MHz compared with its rival 7800 GT’s 400MHz, letting it theoretically output an extra 1.6 billion pixels per second. That difference in core clock speed is largely down to the move to a 90nm manufacturing process, whose transistors are more efficient than those of the 110nm process behind the 7800 GPU, switching faster and using less power.
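For anyone who wants to check the arithmetic, here’s the back-of-the-envelope fill-rate sum behind that 1.6-billion figure, using the ROP counts and core clocks quoted above (a quick Python sketch, nothing to do with the drivers themselves):

```python
# Back-of-the-envelope pixel fill rate: ROPs x core clock.
# Figures are the ones quoted in the review (16 ROPs each, 500MHz vs 400MHz).

def fill_rate_gpixels(rops: int, core_mhz: int) -> float:
    """Theoretical pixel output in billions of pixels per second."""
    return rops * core_mhz * 1e6 / 1e9

x1800_xl = fill_rate_gpixels(16, 500)   # 8.0 Gpixels/s
gf7800_gt = fill_rate_gpixels(16, 400)  # 6.4 Gpixels/s

print(f"X1800 XL : {x1800_xl:.1f} Gpixels/s")
print(f"7800 GT  : {gf7800_gt:.1f} Gpixels/s")
print(f"Advantage: {x1800_xl - gf7800_gt:.1f} Gpixels/s")  # 1.6
```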
These are impressive advances considering this is ATi’s first Shader Model 3-compliant chip – a significant change from the previous generation. That means support across the whole X1000 range for the shiny new graphics in next year’s Windows Vista and, right now, the real possibility of High Dynamic Range rendering in games such as Far Cry and the imminent Lost Coast level for Half-Life 2. But where the 7800 GPU applies brute force, with its extra pixel pipelines soaking up the load, ATi uses techniques such as dynamic branching within its pixel pipelines, and claims that these should now be called pixel processors.
Whatever they’re called, they’re certainly more intelligent and complex than basic pipelines. Pixels are asked which textures they require before entering the pixel processor, and are told to wait in line while those textures are fetched from local memory. In the meantime, using the branching structure, the pixel processor can be handed another pixel with its attendant textures. This ensures that every pixel processor is working on a pixel whenever it can, instead of idling while textures are fetched. To help the large amounts of data move quickly around the GPU, there’s a 512-bit Ring Bus memory controller. This leads ATi to claim boldly that certain operations on its hardware can take one clock cycle, compared with seven on nVidia’s.
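As a conceptual illustration only – the cycle counts and scheduling policy below are assumptions made for the sake of the sketch, not details of ATi’s hardware – here’s a toy Python model of why keeping a pixel processor busy while texture fetches are in flight pays off:

```python
# Toy model of latency hiding in a pixel processor. FETCH_LATENCY and
# SHADE_COST are illustrative assumptions, not real hardware figures.

FETCH_LATENCY = 8   # cycles a texture fetch is assumed to take
SHADE_COST = 2      # cycles to shade a pixel once its textures arrive
PIXELS = 16

def naive_pipeline(pixels: int) -> int:
    """Stall on every fetch: latency and shading add up serially."""
    return pixels * (FETCH_LATENCY + SHADE_COST)

def latency_hiding_pipeline(pixels: int) -> int:
    """Issue the next pixel's fetch while shading the previous one.
    Once the first fetch has completed, the processor shades continuously,
    so the fetch latency is paid only once up front."""
    return FETCH_LATENCY + pixels * SHADE_COST

print("stalling pipeline :", naive_pipeline(PIXELS), "cycles")
print("latency hiding    :", latency_hiding_pipeline(PIXELS), "cycles")
```

In the toy model the fetch latency is paid once rather than once per pixel, which is the essence of the approach ATi is describing.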
But only benchmarking will show whether this is of more than academic interest. Pitched against nVidia’s 7800 GT, it’s clearly faster in Far Cry: 8fps faster at our standard settings and 12fps faster at 1,600 x 1,200 with the same AA and AF. Doom 3 is a different story, however, with nVidia still the king of OpenGL. ATi’s card remains around 15fps slower at both resolutions – there’s clearly more work to be done here, although it can’t be taken as indicative of overall performance, as OpenGL games are a dying breed. However you look at it, this is a fantastically quick card, and we’d expect to see better scores with finished drivers and game patches too. That’s especially the case when it comes to HDR, which produced the same mediocre results regardless of other settings – most likely a glitch in the early software.
There’s also a new anti-aliasing technique called Adaptive Anti-Aliasing (AAA). Not only does this add gamma correction to the anti-aliasing, but the effect is applied only where it would bring an improvement to image quality, which helps performance. In testing we saw little difference, but again we were using beta drivers. There’s also a High Quality anisotropic filtering mode that corrects shimmering errors as well as blending textures smoothly.
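To give a flavour of the ‘adaptive’ idea – spending the expensive sample pattern only where it will actually show – here’s a minimal Python sketch. The alpha-test heuristic and sample counts are our own illustrative assumptions, not a description of how ATi’s hardware makes the decision:

```python
# Conceptual sketch of adaptive anti-aliasing: take extra samples only for
# fragments where they would visibly improve quality. Purely illustrative.

from dataclasses import dataclass

@dataclass
class Fragment:
    alpha_tested: bool   # hard-edged transparent textures (fences, foliage)

def samples_for(frag: Fragment, cheap: int = 4, expensive: int = 12) -> int:
    """Return how many AA samples to spend on this fragment."""
    return expensive if frag.alpha_tested else cheap

# A made-up scene where one fragment in five would benefit from extra samples.
scene = [Fragment(alpha_tested=(i % 5 == 0)) for i in range(100)]

print("adaptive samples   :", sum(samples_for(f) for f in scene))
print("brute-force samples:", 12 * len(scene))
```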