Closer to reality: photorealism in computer graphics

CryEngine 3 is capable of impressively realistic visuals. Using tessellation, it can create lifelike surfaces and shapes, so that objects, characters and scenery look convincing whether they're viewed from a distance or up close.
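
Conceptually, tessellation means taking each coarse triangle the artist modelled and subdividing it into many smaller ones on the fly, so that displacement and lighting can be applied at a much finer scale. Here's a minimal sketch of one subdivision step (illustrative only; in CryEngine 3 this happens on the GPU, in DX11 hull and domain shaders):

```python
# One level of triangle tessellation via midpoint subdivision (conceptual sketch;
# real engines perform this on the GPU, not the CPU).

def midpoint(a, b):
    """Midpoint of two 3D vertices."""
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def tessellate(tri):
    """Split one triangle (a, b, c) into four smaller triangles."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# Each pass quadruples the triangle count: 1 becomes 4, then 16, then 64...
coarse = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
refined = [small for tri in coarse for small in tessellate(tri)]
print(len(refined))  # 4
```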

Parallax occlusion mapping gives surfaces self-shadowing and an almost tangible illusion of relief, while next-generation lighting systems model the way light sources of different hues interact with one another and with different surface materials.
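
At its core, parallax occlusion mapping is a short ray-march through a height map in texture space: the shader steps along the view direction until the ray dips beneath the stored surface height, then samples the texture at that offset point, which is what produces the illusion of depth. The following is a simplified sketch of that lookup (hypothetical height-map data, not Crytek's shader code):

```python
# Simplified parallax occlusion mapping lookup (conceptual sketch; in practice
# this loop runs per pixel inside a fragment shader).

def sample_height(heightmap, u, v):
    """Nearest-neighbour lookup into a 2D grid of heights in the range [0, 1]."""
    rows, cols = len(heightmap), len(heightmap[0])
    x = min(int(u * cols), cols - 1)
    y = min(int(v * rows), rows - 1)
    return heightmap[y][x]

def parallax_offset(heightmap, uv, view_dir, scale=0.05, steps=16):
    """March along the view ray; return the UV where it first enters the relief."""
    u, v = uv
    du = view_dir[0] / view_dir[2] * scale / steps   # texture-space step per iteration
    dv = view_dir[1] / view_dir[2] * scale / steps
    ray_depth = 0.0
    for _ in range(steps):
        surface_depth = 1.0 - sample_height(heightmap, u, v)
        if ray_depth >= surface_depth:               # the ray has gone "inside" the relief
            return (u, v)
        u -= du
        v -= dv
        ray_depth += 1.0 / steps
    return (u, v)
```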

You can see all this at work in Crysis 3, and in Crytek’s upcoming launch title for the Xbox One, Ryse: Son of Rome, which features realistic characters and cinematic effects.

CryEngine is also being used outside of games. French firm Enodo uses CryEngine 3 to build virtual environments that can be used to visualise architectural or urban-planning projects – such as Nice’s new tramway system – while they’re still in the blueprint stages.

The march of progress

What’s made this possible? First, GPUs have grown exponentially more powerful in the past five years. The fastest processor of 2008, the Nvidia GeForce GTX 280, had 240 unified shaders and a peak processing performance of 933 GFLOPS; this year’s equivalent, the GeForce GTX Titan, has 2,688 unified shader units and a peak processing power of 4.5 TFLOPS.
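
Those headline figures follow from a simple calculation: shader count multiplied by shader clock and by the number of floating-point operations each shader can issue per clock (the GTX 280 was rated at three per shader per clock, the Titan at two, via a fused multiply-add). A rough check, taking the published clock speeds as an assumption:

```python
# Peak single-precision throughput ≈ shaders × clock (GHz) × FLOPs issued per clock.
# Clock speeds and issue rates here are the published specifications; real-world
# game performance is lower and varies by workload.

def peak_gflops(shaders, clock_ghz, flops_per_clock):
    return shaders * clock_ghz * flops_per_clock

print(peak_gflops(240, 1.296, 3))    # GeForce GTX 280: ~933 GFLOPS
print(peak_gflops(2688, 0.837, 2))   # GeForce GTX Titan: ~4,500 GFLOPS, i.e. ~4.5 TFLOPS
```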

These are high-end processors, but this explosion in performance is mirrored further down the range, not to mention in the new console GPUs – the processors in the Xbox One and the PlayStation 4 are built on the same Graphics Core Next architecture as AMD’s Radeon HD 7000-series cards.

Second, GPUs now have more memory to work with. In 2008, high-end PC graphics cards shipped with a maximum of 1GB of RAM; now, they come with 3GB or even 6GB. The PlayStation 4 and Xbox One, meanwhile, have 8GB shared between the CPU and GPU. That gives developers more headroom for high-resolution textures and more complex scenes and shaders.
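
A quick back-of-the-envelope sum shows why that matters: a single uncompressed 4,096 x 4,096 texture at 32 bits per pixel occupies 64MB before mipmaps, and roughly 85MB with a full mipmap chain, so a 1GB card fills up after only a handful of such assets (texture compression helps, but the pressure is real). As a rough illustration:

```python
# Approximate memory footprint of an uncompressed RGBA8 texture.
def texture_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    base = width * height * bytes_per_pixel
    total = base * 4 / 3 if mipmaps else base   # a full mip chain adds about a third
    return total / (1024 ** 2)

print(texture_mb(4096, 4096, mipmaps=False))  # ~64 MB
print(texture_mb(4096, 4096))                 # ~85 MB
```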

Finally, the software APIs that control these GPUs are becoming ever more flexible and efficient. Microsoft’s DirectX 10 was a watershed, ditching fixed-function vertex and pixel units in favour of unified shader processors that can handle vertex, geometry or pixel work as needed.

DirectX 11 (DX11) took this even further, adding hardware tessellation and GPU Compute support, enabling all the parallel processing horsepower of the GPU to be used not only for standard 3D processing, but also for any task where the developer can see benefits. This, Crytek’s Tracy feels, is where GPU Compute technology will really come into its own.
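
The compute model itself is simple: the same small kernel is dispatched across thousands of threads, organised into thread groups, over whatever data the developer chooses, not just vertices and pixels. The sketch below runs sequentially, purely to show the shape of the idea; a real DX11 compute shader would be written in HLSL and dispatched through the API, with every invocation running in parallel:

```python
# The GPU-compute dispatch model in miniature: one kernel, run once per element,
# organised into thread groups. Sequential here for illustration only.

GROUP_SIZE = 64

def kernel(data, thread_id):
    """Per-thread work: a trivial brightness adjustment on one element."""
    data[thread_id] = min(1.0, data[thread_id] * 1.2)

def dispatch(data, group_count):
    for group in range(group_count):
        for local in range(GROUP_SIZE):
            kernel(data, group * GROUP_SIZE + local)

pixels = [0.5] * 256
dispatch(pixels, group_count=len(pixels) // GROUP_SIZE)
print(pixels[0])  # 0.6
```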

“Easily the biggest step forward, and the most exciting in my opinion, is the ability to use DX11 and the compute shader,” says Tracy. He believes this will allow pioneering developers such as Crytek to experiment with more realistic lighting- and materials-rendering systems. However, he warns that there isn’t a single “smoking gun” that will solve any major problems for this generation. “DX11 and [GPU] Compute are nice, but they’ll require significant engineering to get out every ounce of performance and fidelity,” he argues.

According to Silvia Rasheva, producer for Unity Technologies, the technology is improving, but it isn’t there yet. “New game consoles are a significant step forward if we compare with the previous generation,” she says. “But if we compare with offline graphics, which usually use clusters of machines to render a single frame in 40 minutes to two hours, then real-time hardware is pretty far away.”
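
That gulf is easy to quantify: a 60fps game has around 16.7 milliseconds to draw each frame, while an offline renderer spending 40 minutes on a frame (the low end of Rasheva's range) is using roughly 144,000 times as much time per frame, before even counting the extra machines in the cluster:

```python
# Per-frame time budget: real-time at 60fps versus the low end of offline rendering.
frame_budget_ms = 1000 / 60            # ~16.7 ms per frame at 60fps
offline_frame_ms = 40 * 60 * 1000      # 40 minutes per frame, as Rasheva describes
print(round(offline_frame_ms / frame_budget_ms))  # 144000x more time per frame
```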

For Rasheva, the problem isn’t so much the complexity of the models or the shaders used. Instead, what separates film CG rendering pipelines from real-time 3D rendering pipelines are computationally expensive techniques such as high-quality motion blur, complex volumetric effects for explosions and fluids, and – most of all – high-quality global illumination.
