Almost two years after its acquisition by Intel, the deep learning chip architecture from startup Nervana Systems will finally be moving from its codenamed "Lake Crest" status to an actual product.
In that time, Nvidia, which dominates the deep learning training market by a wide margin, has had time to firm up its commitment to this expanding (if overhyped in terms of overall industry dollar figures) market, with new deep learning-tuned GPUs and appliances on the horizon as well as software tweaks to make training at scale more robust. In other words, even with solid technology at a reasonable price point, it will take a herculean effort for Intel to bring Nervana to the fore of the training market and push its other inference-at-scale products along with it. It is an effort Intel seems willing to invest in, given its aggressive roadmap for the Nervana-based lineup.
The difference now is that we at least have some insight into how (and by how much) this architecture differs from GPUs, and where it might carve out a performance advantage and, more certainly, a power efficiency one.