But the real winner in this race is Nvidia.
IBM and Hewlett Packard Enterprise this week introduced new servers optimized for artificial intelligence, and the two had one thing in common: Nvidia technology.
HPE announced the Gen10 version of its Apollo 6500 platform, which runs Intel Skylake processors and up to eight Nvidia Pascal or Volta GPUs connected by NVLink, Nvidia’s high-speed interconnect.
A server fully loaded with V100 GPUs delivers 66 peak double-precision teraflops, which HPE says is three times the performance of the previous generation.
The Apollo 6500 Gen10 platform is aimed at deep-learning workloads and traditional HPC use cases. The NVLink technology is up to 10 times faster than PCI Express Gen 3 interconnects.
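HPE’s 66-teraflop figure is just the per-GPU peak multiplied by the GPU count. A minimal back-of-the-envelope sketch — note the 8.25-teraflop per-GPU value here is simply HPE’s 66-teraflop total divided by eight, not an official Nvidia datasheet number, which varies by V100 SKU and clock:

```python
# Back-of-the-envelope peak double-precision throughput for a multi-GPU server.
def aggregate_peak_tflops(per_gpu_tflops: float, num_gpus: int) -> float:
    """Aggregate peak = per-GPU peak * GPU count (ignores interconnect limits)."""
    return per_gpu_tflops * num_gpus

# Assumed per-GPU figure: HPE's quoted 66 TF across 8 GPUs implies 8.25 TF each.
print(aggregate_peak_tflops(8.25, 8))  # → 66.0
```

Peak figures like this assume every GPU runs flat out in parallel; real workloads are usually limited by memory and interconnect bandwidth, which is why NVLink matters for this class of machine.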
For its part, IBM held its OpenPOWER Summit in Las Vegas this past week and announced that more than 325 member companies are working on products and services for A.I.-themed workloads in the enterprise.
The event highlighted more than 50 vendors that have built new OpenPOWER-based products, including Google. Google announced that it is deploying Zaius, a server custom-built by Rackspace and using IBM’s POWER9 RISC processor within its data centers.
This is huge for IBM because it breaks the x86 stranglehold on Google data centers with its RISC-based POWER9 processor. Until 2013, if you wanted a POWER-based server, you had to buy an IBM Power system. IBM created the OpenPOWER consortium to get POWER processors adopted by other vendors. Getting Google is the technical equivalent of the Pope’s blessing for the POWER processor.
IBM launched its first POWER9 chip late last year. It uses Nvidia’s NVLink as a high-speed interconnect in part because the PCI Express Gen3 interface was several years old and just not fast enough. POWER9 systems also use PCI Express Gen4, which doubles performance over Gen3. Overall, IBM claims a tenfold improvement in performance over POWER8 processors.
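The "doubles performance over Gen3" claim follows directly from the published signaling rates: PCIe Gen3 runs at 8 GT/s per lane and Gen4 at 16 GT/s, both with 128b/130b encoding. A rough sketch of per-direction bandwidth for a x16 slot, using those spec figures:

```python
# Approximate per-direction PCIe bandwidth for an x16 slot.
def pcie_x16_gbps(gigatransfers_per_s: float) -> float:
    """16 lanes, 128b/130b encoding, 8 bits per byte -> GB/s per direction."""
    return gigatransfers_per_s * 16 * (128 / 130) / 8

gen3 = pcie_x16_gbps(8)    # roughly 15.75 GB/s
gen4 = pcie_x16_gbps(16)   # roughly 31.5 GB/s -- twice Gen3
```

The same arithmetic shows why Nvidia built NVLink: even a doubled Gen4 x16 link tops out around 31.5 GB/s per direction, well short of what eight GPUs exchanging training data can saturate.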