Intel creates IPU to optimize compute resources

Data centers already use central processing units (CPUs) and graphics processing units (GPUs). Today, Intel unveiled its infrastructure processing unit (IPU), a programmable networking device designed to maximize processor efficiency in data centers.

Intel says the IPU will continuously balance compute and storage resources, freeing up CPU cycles for application workloads.

“The IPU is a new category of technologies and is one of the strategic pillars of our cloud strategy,” said Guido Appenzeller, CTO of Intel’s Data Platforms Group, in a statement. “It expands upon our SmartNIC capabilities and is designed to address the complexity and inefficiencies in the modern data center.”

Although the trend toward cloud-native architectures based on containers and microservices brings many benefits, those technologies also consume more compute resources.

Intel cited research from Google and Facebook that found 22% to 80% of CPU cycles can be consumed by microservices communication overhead.
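
The studies cited above cover the full communications path, including RPC marshaling, compression and the network stack, but even the serialization step alone is easy to observe. The sketch below is a hypothetical Python illustration, not the methodology used by Intel, Google or Facebook; it simply times JSON encode/decode against the business logic of a toy request handler to show how quickly that kind of "data center tax" adds up.

```python
import json
import time

# Hypothetical illustration (not Intel's or the cited studies' methodology):
# time a toy request handler to compare "communication" work (JSON encode/decode,
# the kind of overhead an IPU aims to take off the host CPU) against the
# handler's actual business logic.

def business_logic(order):
    # The "useful" work: a trivial price calculation.
    return sum(item["qty"] * item["price"] for item in order["items"])

def handle_request(raw_request):
    t0 = time.perf_counter()
    order = json.loads(raw_request)          # deserialize inbound payload
    t1 = time.perf_counter()
    total = business_logic(order)            # actual application work
    t2 = time.perf_counter()
    response = json.dumps({"total": total})  # serialize outbound payload
    t3 = time.perf_counter()
    comms = (t1 - t0) + (t3 - t2)
    compute = t2 - t1
    return response, comms, compute

# Build a request with many line items so the timings are measurable.
raw = json.dumps({"items": [{"qty": i % 5 + 1, "price": 9.99} for i in range(10_000)]})

comms_total = compute_total = 0.0
for _ in range(50):
    _, comms, compute = handle_request(raw)
    comms_total += comms
    compute_total += compute

share = comms_total / (comms_total + compute_total)
print(f"Share of handler time spent on (de)serialization: {share:.0%}")
```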

RELATED: Cloud native has lots of benefits, some drawbacks: Special Report

Intel says its IPU will free up CPU cores by shifting storage and network virtualization functions onto the device, and that it will improve data center utilization by allowing more flexible workload placement.
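
To make the "free up CPU cores" claim concrete, here is a rough back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the overhead fraction cited above could be offloaded entirely to the IPU, which neither Intel nor the cited research claims; the point is only to show the arithmetic of reclaiming host cores.

```python
# Hypothetical back-of-the-envelope model (not from Intel): if a fraction of
# host CPU cycles goes to infrastructure tasks (networking, storage
# virtualization) and that work moves to an IPU, how much application
# capacity does the host get back?

def reclaimed_capacity(overhead_fraction: float, cores: int) -> tuple[float, float]:
    """Return (cores reclaimed, relative gain in application capacity)."""
    app_before = cores * (1 - overhead_fraction)   # cores doing application work today
    reclaimed = cores * overhead_fraction          # cores freed by offloading
    relative_gain = reclaimed / app_before         # extra application capacity
    return reclaimed, relative_gain

# Use the 22% and 80% endpoints cited in the article for a 64-core server.
for overhead in (0.22, 0.80):
    freed, gain = reclaimed_capacity(overhead, cores=64)
    print(f"{overhead:.0%} overhead: {freed:.0f} of 64 cores reclaimed "
          f"(+{gain:.0%} application capacity)")
```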

Intel has already deployed its first IPUs, based on field-programmable gate arrays (FPGAs), at Microsoft Azure and several other cloud providers, and its first ASIC-based IPU is under test.

Speaking at the Six Five Summit today, Navin Shenoy, Intel EVP for the Data Platforms Group, said the IPU was conceived because data centers suffer from incredibly inefficient utilization of compute resources. “For example, there may be too much compute in one place, but not enough compute in another place for a given workload, and the exact opposite may be true for a different workload.”

Shenoy said the IPU is part of Intel’s vision for data center processors that run microservices more efficiently with shared memory and storage, all enabled by open source software frameworks.