IBM Opportunity: There has never been a single processor architecture for server applications because no two workloads are the same. However, for the past decade, Intel has all but dominated the server market, shutting out AMD, the only other x86 server vendor, and leaving only HPC, mainframe, database, and other specialized applications for other processor architectures. With the industry facing a growing list of security vulnerabilities from flawed speculative-execution implementations, changing workload requirements, the slowing of Moore's Law, and innovative new processor architectures, server OEMs and customers alike are looking to make changes. Many of the new processor vendors are targeting specific segments; IBM appears best positioned to benefit from the tremendous interest in AI.
In addition to the changing technical and market dynamics, Intel has stumbled in both manufacturing process and architecture. It struggled to transition to the 14nm and 10nm process nodes and now appears well behind Samsung and TSMC, the leading foundry service providers, in transitioning to the 7nm node. Intel has also purposely limited memory bandwidth in its Xeon processors to promote two-socket servers over single-socket configurations. This has created a renewed opportunity for IBM with its Power architecture.
By using these leading foundries, IBM has the flexibility to adopt the densest and most cost-effective process technology available. IBM has also focused the design of the Power architecture on overall system performance, leveraging software such as PowerAI to improve data flows. Bandwidth, to memory, to accelerators, and to the network, is critical in both AI training and inference. IBM's Power architecture not only supports more memory than competing Xeon platforms, it also delivers approximately 10x the bandwidth to accelerators through NVLink. NVIDIA leads the AI accelerator market with its Tesla line of GPUs, and POWER is the only processor architecture that integrates NVIDIA's NVLink interface directly into the processor itself, which significantly improves performance. Moreover, NVLink on POWER is coherent, giving the GPUs direct access to system memory and enabling AI models up to 2TB in size, rather than models limited to GPU memory alone.
While Intel Xeons serve as the host processor in most installed data-center neural-network deployments, customers will benefit from accelerated platforms with higher performance and bandwidth.