Inspur adds artificial intelligence node to OCP-compliant servers

China-based Inspur is offering several new compute nodes for OCP servers. (Pixabay)

China-based Inspur rolled out a new set of Open Compute Project (OCP)-based hyperscale rack servers that include support for artificial intelligence.

Inspur is the third-largest server vendor in the world and the biggest in China, according to Dolly Wu, vice president of data center and cloud at Inspur. Inspur's technology has cornered 57% of the artificial intelligence (AI) market in China, and the company has now incorporated some of that technology into one of its new server nodes.

"The highlight that we want to bring for the OCP community is that Inspur is introducing several new compute modules using the San Jose motherboard, which we contributed to OCP last year," Wu said in an interview with FierceTelecom. "It's the first OCP-accepted (Intel) Xeon scalable processor platform in the community. Using this motherboard, we've created three new compute modules."

While Inspur has had its server technology deployed in China for five years, today's OCP news marked its first entrance into the OCP arena. Wu said there is already large-scale adoption of Facebook's OCP-based hardware around the world, but Inspur's AI node offers the highest density available in an OCP configuration, which makes it well suited for AI training workloads. Wu said that Inspur also has a version of its AI node with its own FPGAs for AI inference work.

RELATED: Worldwide spend on cloud IT infrastructure hits double-digit growth in Q2—report

Baidu tapped Inspur for its autonomous car project; Inspur built that hardware to another open-source specification that's part of the ODCC's Scorpio project. ODCC is backed by Chinese vendors such as Tencent, Alibaba and Baidu. Inspur is the largest server supplier to both Alibaba and Baidu.

Inspur introduced four other server nodes today:

• Compute node 1—This node is designed for search engine acceleration, deep learning inference and data analysis application scenarios.

• Compute node 2—Compute node 2 is designed for data acceleration, I/O expansion, transaction processing and image search application scenarios. It can also support different kinds of half-height and half-length external cards.

• Compute node 3—Wu said compute node 3 is ideal for NFV application scenarios, with a wide range of half-height and half-length external cards that can also support 100 Gb Ethernet.

• Storage node—The storage node holds 34 hard drives and can be used as an expansion module for compute nodes or as a storage pool for the entire rack.

Wu said Inspur is also offering open source management for its hardware by combining OpenBMC firmware with the Redfish management API. Wu said that while Inspur plans to partner with telco service providers such as Verizon and AT&T, those carriers typically prefer to work through system integrators, which Inspur is currently engaging.
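For readers unfamiliar with Redfish: it is a DMTF-standardized REST API (served by BMC firmware such as OpenBMC) for out-of-band server management, rooted at the `/redfish/v1/` URI tree. The sketch below is purely illustrative — the BMC address, token and helper function are hypothetical, not anything Inspur has published — and shows how a client would build an authenticated Redfish request using only the Python standard library.

```python
# Hypothetical sketch of building a Redfish API request.
# The BMC host, session token, and helper name are illustrative assumptions;
# only the /redfish/v1/ URI root and X-Auth-Token header come from the
# Redfish specification itself.
import urllib.request


def build_redfish_request(bmc_host, resource="/redfish/v1/Systems", token=None):
    """Build a GET request for a Redfish resource on a BMC."""
    url = "https://{}{}".format(bmc_host, resource)
    headers = {"Accept": "application/json"}
    if token:
        # Redfish session authentication uses the X-Auth-Token header
        headers["X-Auth-Token"] = token
    return urllib.request.Request(url, headers=headers)


# Example: request the systems collection from a (fictional) BMC at 10.0.0.42
req = build_redfish_request("10.0.0.42", token="example-token")
print(req.full_url)  # https://10.0.0.42/redfish/v1/Systems
```

Sending the request (e.g. with `urllib.request.urlopen`) would return a JSON document describing the managed systems; standardizing that schema across vendors is the point of pairing OpenBMC with Redfish.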