Intel held its inaugural Artificial Intelligence (AI) Developers Conference in San Francisco on May 23-24, presenting its leadership, technologies, and customers to a capacity audience of some 800 AI geeks and media. The company now has a rich portfolio of AI technologies, after acquiring Movidius and Mobileye for real-time processing, Altera for reprogrammable FPGA acceleration hardware, and Nervana for the training workloads currently served by NVIDIA GPUs. Intel's primary focus on inference processing for the production use of trained neural networks is a sound strategy, as inference is likely to become a much larger market than the training segment over the next few years. While Intel does not yet have a brawny ASIC in its portfolio to build AI networks, it can create a sizable position in inference, alongside companies such as Apple, Qualcomm, Xilinx, and NVIDIA.
That being said, Intel has not abandoned the AI training market, where NVIDIA is enjoying tremendous success with a $3B run rate. Intel stressed Xeon's strengths in training at the event, while pointing to a future in which it hopes to leverage Nervana to compete more directly with NVIDIA's big silicon. Unfortunately for Intel, Nervana now appears to be at least 18 months away, the result of an even larger redesign, which I forecast here.
Here’s what I learned
At the event keynote, Naveen Rao, Intel's SVP for AI, articulated the company's strategy for AI: essentially, to provide the full range of general-purpose and specialized devices for AI, supported by a unified suite of optimizing development software. As Mr. Rao pointed out, running AI applications is not a market where one size fits all, and Intel has products covering a broad range of performance, latency, and power envelopes for inference processing.