Facebook has worked with Intel on a deep learning AI chip
Intel has unveiled what it’s calling the Nervana neural network processor (NNP), the company’s first commercially available chip designed from the ground up for deep learning applications. The reveal also came with a surprise: Intel has been working with Facebook to develop its AI-dedicated hardware.
The Nervana NNP range, formerly codenamed “Lake Crest”, is Intel’s response to the growing use of machine learning techniques, and a rebuttal to the surge of interest in Nvidia’s silicon for AI workloads (which has seen that company’s revenue soar over the past year).
With a chipset dedicated to deep learning, Intel is hoping it can capture the AI data centre market much as it has captured more traditional data centres. The Verge, for example, points to Intel holding a 96 per cent market share in data centres. If machine learning is the future, Intel wants a slice of that self-evolving pie.
Enter the Nervana NNP, which builds on Intel’s August 2016 acquisition of deep learning startup Nervana Systems. Intel paid more than $400 million (£303 million) for the 48-person firm, and it was widely predicted that the purchase signalled a push into deep learning for data centres.
More surprising, however, was the reveal that Facebook has played a central part in creating the Nervana NNP. “We are thrilled to have Facebook in close collaboration sharing their technical insights as we bring this new generation of AI hardware to market,” Intel CEO Brian Krzanich wrote in a blog post published alongside Tuesday’s announcement.
Intel is a bit vague about what Facebook has specifically offered in the way of “technical insight”, but greater deep learning power is certainly in the social network’s interest. More machine learning capacity means it can do more to offer personalised, predictive pages for users. That, in turn, means more tightly targeted advertising.
Intel is also vague on exact performance figures for its Nervana NNP range. So far the company will only say it “aims to deliver up to 100x reduction in the time to train a deep learning model over the next three years compared to GPU solutions”. That 100-fold reduction in training time is currently targeted for 2020.