
In my last post I suggested that DirectX 11’s extensive GPGPU support could mark the end of the road for CUDA. And I do expect that mass-market GPU applications will quickly move to DirectX rather than restricting themselves to a single vendor’s architecture.
But the other day I was discussing DX11 with Bit-Tech editor Tim Smalley, and I found him very reluctant to write CUDA off just yet. He pointed out that CUDA retains one big advantage over DX11, in that developers can knock up CUDA routines directly in C – or Fortran or even Matlab – without having to deal with the DirectX API.
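To see what Smalley means in practice, here’s a minimal sketch of the sort of routine in question: a hypothetical CUDA kernel (my own illustrative example, not Nvidia’s) that adds two arrays on the GPU. It’s ordinary C plus a couple of CUDA extensions, namely the __global__ qualifier and the <<<blocks, threads>>> launch syntax, and there isn’t a line of Direct3D setup anywhere in it.

// Illustrative example: element-wise addition of two float arrays on the GPU.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// The kernel is plain C with CUDA's __global__ qualifier; each thread handles one element.
__global__ void add_arrays(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;              // one million elements
    const size_t bytes = n * sizeof(float);

    // Ordinary host-side C: allocate and fill the input arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device allocations and copies use the CUDA runtime API, not DirectX.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    add_arrays<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[123] = %f\n", h_c[123]);   // expect 369.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Compile it with nvcc and it behaves like any other C program, which is exactly the appeal for researchers who already live in C or Fortran.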
Two different markets
Thinking about this, I’ve realised that there are in fact two wholly separate markets for GPU computing. As a mainstream technology, it’s a great way for application developers to wring extra performance from whatever hardware the user happens to already own. In this market, it makes sense to write code that will benefit as many users as possible, which today means DirectX.
But the truth is that few desktop tasks benefit all that much from GPU acceleration. Everyone talks about video transcoding, physics simulation and AI for games, but once you move past those very specific applications it’s slim pickings. For now, the real potential of massively parallel computing appears to lie in its ability to accelerate scientific research.
An academic question
And when scientists and engineers are choosing a research platform, they don’t really care about issues like market share. Their code only needs to run on a handful of machines, and it’s no problem to design those machines to suit the task, rather than vice versa. Here, CUDA is a no-brainer, because it lets researchers program in familiar languages, producing code that can be maintained and expanded without having to learn a new API.
Nvidia realises this, of course, and rather than continuing to talk about games, it’s been carefully positioning CUDA as a friend of academia. As the photo above shows, the hallways at GTC this morning were filled with boards – not whiteboards, like at IDF, but display boards showing summaries of research projects. Some focused on graphical techniques; others targeted problems in biology, physics or engineering. But all of them had something in common…
It’s a bold display. It cleverly makes CUDA look like serious business while ATI is still worrying about copying DVDs onto iPods.
And, what’s more, it’s persuaded me that Mr Smalley does have a point. Clearly, CUDA isn’t going to vanish from these environments overnight. Why would it, when DirectX offers no advantage?
The road ahead
But while academia may be a respectable market, you have to question how much actual revenue Nvidia sees from each of these projects. And though the use of (more or less) industry-standard C is working in CUDA’s favour right now, it will count against it when a real rival comes along: a rival, perhaps, that can offer even greater programmability, backed by the kind of muscle that doesn’t need to worry about revenue.
So here’s my crazy prediction. Some form of high-level GPGPU interface will survive alongside DX11 for the foreseeable future. And for the time being that interface will be CUDA, almost by default. But I don’t think CUDA will ever be a real money-maker for Nvidia. And within five years I predict that Nvidia’s share of the GPU computing market will be swallowed up… by Larrabee.