Industry Voices—Raynovich: Why telco automation takes a while

There are few more abused sets of buzzwords than "network automation." It's getting up there with "artificial intelligence." Magical robots doing all the dirty work you don't want to do—sounds great, doesn't it?

The problem is that while automation has made major progress in the cloud, it is progressing more slowly at the major telecom providers, because they face more challenges, such as upgrading a complex web of legacy gear. Network automation is evolutionary, and many things need to happen in order to deliver it.

Recent end-user research by Futuriom indicates that network automation in telecom networks is following the path of cloud automation—that is, adopting a wide range of standards and open APIs to emulate the scale-out model of cloud orchestration.

Networks can be orchestrated based on standard components and software technologies. But they need to use the same data models, APIs, and standards to understand each other. The continued march toward open standards, open APIs, standardized IT equipment, and cooperation among vendors following the path of network functions virtualization (NFV) will eventually fuel the rise of network automation that can connect all networks, whether they are based in the enterprise, the cloud, or the telco.

In fact, operators in the survey indicated that NFV is a focus of this pursuit of network automation. Of operators surveyed, 29% said that NFV is a top target for network automation and 24.6% said the focus would be on the “edge cloud.”

Putting the network on autopilot

Why is it taking so long to automate networks, especially communications networks? Let's look at self-driving cars first, as an analogy. I was recently talking to an engineer about how we were programming planes to fly decades before we started to program cars to drive themselves. The engineer, being smarter than I am, pointed out that cars have far more layers of complexity, such as road maps, other drivers, pedestrians, and complicated traffic configurations. Planes fly point to point. Good point. I'll stick with the self-driving planes.

Automation sounds great, including a car that tries to drive you home when you are drunk. But executing the automation comes down to the number of variables in the system, an infrastructure that can collect and process data, and then sophisticated analytics and AI technology that can translate the data into useful decisions.


If we look at automation in this context, networks are potentially as complex as self-driving cars. There is a plethora of data sources and dependencies. What time of day is it? Which applications are likely to be used? Which network operator technology is being used? What happens when an earthquake knocks out bandwidth? How many servers are failing? How many teenagers are going to turn on Netflix?

People often point to webscale providers such as Google and how they've delivered automation faster than the communications providers. This is because their clouds or networks have fewer dependencies. They tend to be focused on a narrower set of applications, such as video and e-commerce, and they don't have to scale to support the wider range of public communications services.

Yet the cloud providers have certainly made progress in building automation into their networks, and that model is being adapted by service providers. The key to the approach is getting access to as much data as possible—because data drives the automation. The cloud model is built on a foundation of open APIs, IT and Internet standards, and a focus on data portability.

Telemetry leads the way

One of the major components of an autopilot system is telemetry. In aviation, telemetry is the process of gathering data readings from instrumentation, preferably in real time. The autopilot system can then process the data to make adjustments to follow a programmed course of action.

In networking, it's no different. The Futuriom Network Automation survey gathered data from 130 cloud service providers and telecommunications operators. The respondents indicated that network telemetry, monitoring, and analytics will be essential technologies for delivering network automation, with 90% of those surveyed ranking these components either “very important” (61%) or “important” (30%) to network automation.

The leap forward in cloud networking, in which network connections can be programmed to reconfigure themselves based on application changes, has been pushed forward by the proliferation of open equipment application programming interfaces (APIs) and standards-based commercial off-the-shelf (COTS) hardware.
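To make the telemetry-driven model concrete, here is a minimal sketch of how a monitoring loop might feed an automated reconfiguration call over an open, REST-style management API. It is illustrative only: the device address, metric names, thresholds, and endpoints are hypothetical placeholders, not any particular vendor's or standards body's actual interface.

```python
# Hypothetical sketch of telemetry-driven automation over a REST-style API.
# The management URL, metric names, threshold, and endpoints are placeholders.
import time
import requests

DEVICE = "https://192.0.2.10/api"     # placeholder management endpoint
UTILIZATION_THRESHOLD = 0.85          # act when a link passes 85% utilization


def read_telemetry(session):
    """Poll link-utilization readings exposed by the device's telemetry API."""
    resp = session.get(f"{DEVICE}/telemetry/interfaces", timeout=5)
    resp.raise_for_status()
    return resp.json()                # e.g. [{"name": "eth0", "utilization": 0.91}, ...]


def reconfigure(session, interface):
    """Ask the orchestrator to shift traffic off a congested link."""
    payload = {"interface": interface, "action": "reroute"}
    session.post(f"{DEVICE}/config/traffic-engineering", json=payload, timeout=5)


def control_loop(poll_seconds=30):
    """The 'autopilot' loop: gather data, evaluate, adjust, repeat."""
    with requests.Session() as session:
        while True:
            for link in read_telemetry(session):
                if link["utilization"] > UTILIZATION_THRESHOLD:
                    reconfigure(session, link["name"])
            time.sleep(poll_seconds)
```

The point of the sketch is the shape of the loop, not the specific calls: data collection (telemetry), a decision step (analytics), and a programmable action (an open API), which is the same pattern the cloud providers have used and that operators are now pursuing.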

Telecommunications providers interviewed by Futuriom indicate they are interested in following this model. At the recent MEF18 show in Los Angeles, there was an increased focus on multi-vendor demonstrations leveraging network monitoring and service assurance (telemetry) with software orchestration technologies that could use data APIs to drive service automation into the network. For example, Tata Communications, Sparkle, Equinix, Liquid Telecom, ECI, Amartus and Spirent demonstrated the fulfillment and activation of an intercontinental Ethernet service spanning four operators, using new MEF APIs. AT&T, Equinix and Ciena demonstrated automated, application-based network orchestration using Ciena's Blue Planet software. There were many other examples of multiple operators, hardware vendors, and software vendors combining their technologies for deep integration using APIs.

The quest for open standards and APIs explains why network automation is a slower process in telecommunications networks, which have been built over many decades and have often required long government approval processes. You can't simply rip and replace multibillion-dollar networks that were installed using proprietary models. The network is opening up, but it's going to take some time.

R. Scott Raynovich is the founder and chief analyst of Futuriom. For two decades, he has been covering a wide range of technology as an editor, analyst, and publisher. Most recently, he was VP of research at SDxCentral.com, which acquired his previous technology website, Rayno Report, in 2015. Prior to that, he was the editor in chief of Light Reading, where he worked for nine years. Raynovich has also served as investment editor at Red Herring, where he started the New York bureau and helped build the original Redherring.com website. He has won several industry awards, including an Editor & Publisher award for Best Business Blog, and his analysis has been featured by prominent media outlets including NPR, CNBC, The Wall Street Journal, and the San Jose Mercury News. He can be reached at scott@futuriom.com; follow him @rayno.

Industry Voices are opinion columns written by outside contributors—often industry experts or analysts—who are invited to the conversation by Fierce staff. They do not represent the opinions of Fierce.