Containers are a work in progress for some telcos

Last year, Kubernetes took center stage in the telecommunications industry as the primary means for managing containers, but there's still work to do.

Telcos such as AT&T and BT are already using containers, and there's no doubt more service providers will follow.

Containers are lightweight, standalone, executable packages of software that share the host operating system kernel, such as Linux, and can run large distributed applications. Because a container includes only the bare minimum software needed to run an application, containers can be more efficient than virtual machines (VMs).
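As a rough illustration of that lightweight footprint, the Python sketch below uses the Docker SDK to start a throwaway Alpine Linux container, which shares the host's kernel instead of booting a full guest operating system the way a VM does. The "alpine" image and the local Docker daemon it relies on are illustrative assumptions, not anything a particular telco deployment prescribes.

    # A minimal sketch using the Docker SDK for Python (pip install docker).
    # It assumes a local Docker daemon is running; "alpine" is simply a
    # very small, illustrative base image.
    import docker

    client = docker.from_env()

    # The container shares the host's Linux kernel, so it starts in seconds
    # and ships only the files the application needs, whereas a VM boots a
    # complete guest operating system of its own.
    output = client.containers.run("alpine", "uname -a", remove=True)
    print(output.decode().strip())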

The hyperscale cloud service providers, such as Microsoft Azure, Google and Amazon Web Services, pioneered the initial adoption of container software in their data centers, but the majority of telcos are just starting to explore using them.

A survey last year by IHS Markit, which included cloud service providers and telco service providers, asked service providers what portion of their servers were single tenant, virtualized (with a hypervisor) or containerized, and how they expected that to change in 2019. Respondents said they expected containerized servers to nearly double, from an average of 12% in 2017 to 23% by 2019.

RELATED: How service providers are using containers and Kubernetes

While Google-developed Kubernetes is now the de facto standard for orchestrating and managing containers across public and private clouds, Colt Technology's Mirko Voltolini, head of Network On Demand, said his company isn't ready to jump into containers with both feet.

Voltolini said the tool sets needed to control and orchestrate the life cycle of a container, as opposed to a virtual network function (VNF) with a VM wrapped around it, aren't yet mature.

"So containers obviously are ready, but you need to be able to orchestrate containers," he said in an interview with FierceTelecom. "You need to be able to manage the full life cycle of an application in a containerized approach. We don't even have a maturity of orchestration level for the current virtual VM-based virtualized world. There are no off-the-shelf orchestrators that can manage containers for these use-cases.

"Containers are a really nice concept and we're going to move into that at some point. But using containers means we are taking an application, breaking it down into microservices, putting them in containers and then composing services based on that with our orchestrators. I cannot buy anything that does that orchestration today. Could I do something myself? Maybe, but I don't want to do that."

It's worth noting that since Colt's services are aimed at business customers rather than residential end users, its VNFs differ from those used by AT&T or Verizon.

Moving to the cloud

The telecommunications industry is addressing the use of Kubernetes, cloud-native design and containers through various open source projects, including the Open Network Automation Platform (ONAP) and OPNFV, both of which sit under the Linux Foundation's LF Networking umbrella.

Back in September, the Linux Foundation announced deeper collaboration between the Cloud Native Computing Foundation (CNCF) and LF Networking in order to help migrate VNFs to cloud-native functions (CNFs).

While it will take some time to migrate to CNFs, they would be lighter weight and easier to instantiate than VNFs, and they would run network functions with Kubernetes across public, private or hybrid cloud environments. As CNFs, container-based processes would also be easier to chain, heal, move and back up.
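To make the "heal" part concrete, here is a minimal, hypothetical sketch using the official Kubernetes Python client: it declares a containerized network function as a Deployment with three replicas, and the cluster restarts or reschedules the containers if a process or node fails. The "packet-gateway" name and image are invented stand-ins, and the sketch assumes access to a test cluster via a local kubeconfig; it is not any vendor's actual CNF packaging.

    # A minimal sketch with the official Kubernetes Python client
    # (pip install kubernetes). "packet-gateway" and its image are
    # hypothetical stand-ins for a containerized network function.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a valid kubeconfig for a test cluster

    container = client.V1Container(
        name="packet-gateway",
        image="example.registry/packet-gateway:1.0",  # hypothetical image
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="packet-gateway"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the cluster keeps three copies running ("healing")
            selector=client.V1LabelSelector(match_labels={"app": "packet-gateway"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "packet-gateway"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # Declare the desired state; Kubernetes reconciles toward it, restarting
    # or rescheduling containers when they fail.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )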

"The key thing that we were emphasizing is we are bringing the best of the telecom world with the best of cloud world," said Arpit Joshipura, general manager of networking and orchestration for The Linux Foundation. "What we bring to it is the lower layers of the networks. So NFVi (network functions virtualization infrastructure) and ONAP. ONAP is already Kubernetes and container ready. CNCF is the host of Kubernetes, and LF Networking is the host of ONAP and OPNFV and others.

"The containerization moves up the stack into the VNFs, but what does that mean to move a VNF into a cloud-native environment? And what we are trying to say is there's a definite journey in terms of a layered architecture. The devil is obviously in the details."

Joshipura said there's currently a hybrid model in place in which the majority of applications, services and VNFs run in VMs on top of OpenStack and can be packaged for Kubernetes.

"We're looking at moving from that hybrid model to where you start writing CNFs with cloud in mind, and then putting them on Kubernetes sort of side-by-side with bare-metal (devices) and OpenStack," Joshipura said. "And then long term, have more VNFs migrate to that cloud environment and have the new VNFs written in a cloud-native manner."

Joshipura said that, to date, there has been a tendency to blur the definitions of cloud-native, container services and microservices.

"All of those words have been interchangeably used by everybody and I don't think anybody really understands this especially when they're doing marketing," he said.

Joshipura pointed to the CNCF's definitions of cloud-native, microservices and containers, which are posted on GitHub.

"Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach," according to the definition. "These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil."

Cloud-native is also a popular term across the telecom and cloud communities, but it's not a cure-all.

"In talking to many people in the industry, it seems like everyone positions cloud native as paradise, magically solving all problems," said Axel Clauberg, vice president of  technology innovation at Deutsche Telekom, in an email to FierceTelecom. "The reality is our industry has to solve some serious challenges before we can claim success. That includes automation and crchestration, network I/O optimization and security."