Report says servers with AI and ML coprocessors will go mainstream by 2020

Cloud service providers are driving the adoption of servers that have specialized processing units for artificial intelligence and machine learning. (IHS Markit)

Servers with specialized coprocessors for artificial intelligence (AI) and machine learning (ML) will go mainstream by 2020, accounting for 10% of global server shipments.

Spurred by investments in AI for speech recognition, search engine optimization and cybersecurity, cloud service providers such as Facebook, Amazon Web Services, Microsoft Azure, Netflix and Google are increasingly deploying servers with general-purpose programmable parallel compute coprocessors, according to IHS Markit's Data Center Server Equipment Market Tracker report.

The report also said that the adoption of Internet of Things connected devices, network function virtualization and software technologies like AI and ML will increase demand for data center computation. CPUs will struggle to keep pace with that demand, so general-purpose programmable parallel compute coprocessors will also go mainstream to help ease the load on CPUs.


IHS Markit's survey of more than 150 North American enterprises found that businesses plan to ramp up investment in servers with coprocessors, showing a preference for servers with general-purpose graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).

Vladimir Galabov, senior analyst, data center compute, IHS Markit, said in the report that composable compute with PCI Express (PCIe) switches—which allow pools of compute, storage, networking and coprocessors within a rack to be grouped together to form a virtual compute node—creates better economics for service providers and enterprises alike, enabling workload sharing and distribution of coprocessor pools.
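The composable-compute idea Galabov describes, pooling a rack's resources and carving virtual nodes out of the pool on demand, can be sketched in a few lines of Python. Every name here is hypothetical; this is an illustrative model of the resource-sharing economics, not any real PCIe switch or vendor API:

```python
# Hypothetical sketch of composable compute: shared rack-level pools of
# CPUs, coprocessors and storage are grouped into virtual compute nodes,
# then returned to the pool when a workload finishes.
from dataclasses import dataclass


@dataclass
class RackPool:
    """Resources disaggregated behind a PCIe switch (illustrative only)."""
    cpus: int
    gpus: int
    storage_tb: int


@dataclass
class VirtualNode:
    """A virtual compute node carved out of the shared pool."""
    cpus: int
    gpus: int
    storage_tb: int


def compose_node(pool: RackPool, cpus: int, gpus: int, storage_tb: int) -> VirtualNode:
    """Allocate pooled resources to form a virtual compute node."""
    if cpus > pool.cpus or gpus > pool.gpus or storage_tb > pool.storage_tb:
        raise ValueError("insufficient pooled resources")
    pool.cpus -= cpus
    pool.gpus -= gpus
    pool.storage_tb -= storage_tb
    return VirtualNode(cpus, gpus, storage_tb)


def release_node(pool: RackPool, node: VirtualNode) -> None:
    """Return a node's resources to the pool for reuse by other workloads."""
    pool.cpus += node.cpus
    pool.gpus += node.gpus
    pool.storage_tb += node.storage_tb


pool = RackPool(cpus=64, gpus=8, storage_tb=100)
node = compose_node(pool, cpus=16, gpus=2, storage_tb=10)
print(pool.gpus)   # 6 GPUs remain pooled for other virtual nodes
release_node(pool, node)
print(pool.gpus)   # all 8 GPUs back in the pool
```

The point of the model is the last two lines: because coprocessors live in a shared pool rather than being welded to one server, freed capacity is immediately available to the next workload, which is the utilization gain Galabov attributes to composable compute.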

Features that make it possible to virtualize coprocessors continue to be added to multitenant server software, increasing the utilization of servers shipped with a coprocessor and making them more attractive to service providers.

RELATED: Machine learning is the engine that drives automation, AI, according to BT's McRae

Generational improvements in coprocessor performance have also made servers with coprocessors more attractive than traditional CPU-only servers. In addition, coprocessor options are multiplying, enabling diverse server architectures that allow customers to better match their servers to their workloads.

Report highlights:

• Servers shipped with general-purpose graphics processing units are forecast to comprise 7% of global server units in 2022 (although the forecast didn't include GPUs for video).

• Servers shipped with FPGAs, including PCIe form factor with Ethernet ports for FPGA clustering, are anticipated to make up 1.5% of worldwide server units in 2022 (the forecast didn't include I/O cards used to connect servers to an Ethernet network).

• Servers shipped with tensor processing units are projected to account for 0.8% of global server units over the same time frame.

• Servers shipped with other programmable coprocessors are forecast to reach 1.5% of worldwide server units in 2022. Included in “other” are Xeon Phi coprocessors, PEZYs, neural network processors, machine learning units and deep learning units.