Editor's Corner—The ABCs of virtualization and other terms according to Comcast's Gaydos

During an SCTE Cable-Tec Expo panel, Comcast Fellow Bob Gaydos, left, and Elad Nafshi, also with Comcast, talk about the challenges and benefits of virtualization. (FierceTelecom)

I was hired by FierceTelecom in May with the broad mandate of covering software and virtualization, and it has been a blur ever since.

But as we close in on the holiday season, it's time to ask ourselves not just whether we'll duck out of the office party, or whether brining the turkey is still the way to go, but also what virtualization and some of the other buzzwords in the telecom industry actually mean.

It would be great if FierceTelecom had a glossary of industry terms to link to as we cover the evolution that is underway. Since we don't, we'll lay out some terms and definitions provided during an SCTE Cable-Tec Expo panel by Comcast Fellow Bob Gaydos, tweaked a bit for clarity and length.

Before we get to the industry terms, Gaydos provides his explanation of what virtualization is, and then at the end he talks about how it helps Comcast.

"The whole reason we do virtualization, the whole reason we do all this stuff, is because Intel was so good at their job that their servers were sitting there idle 80% of the time," he said. "Our applications only use like 20% of the resources. So we sat back one day and said 'Hey, how do we get better use of this?' By virtualizing we needed fewer computers, and we could run more stuff at the same time for less money. That's it. That's what we do virtualization for."

RELATED: Comcast executives on lessons learned from deploying vCMTS

Kernel: "This is one of the fundamental things that runs on a computer and keeps it going. The kernel is the thing that talks between the hardware and your applications; that's all you really have to know, other than that every operating system in the world has a kernel. We patch kernels all the time for security reasons and so forth and so on. But the kernel is at the fundamental core of every computer out there. On top of the kernel sits an operating system, which wraps that kernel and does the things the user cares about. It's the one that launches an application; it tells the kernel to launch it. That's where your security policy is put into the system, but the kernel enforces the policy. Things like the CLI (command-line interface) are really part of the operating system, not the kernel."

Hypervisor: "That was a term that first came about when we first started virtualization. In common terms, a hypervisor is a kernel that runs other operating systems. So you put a hypervisor on a machine, and then you can run multiple operating systems on top of it. This is how one PC, or one server, can actually run Windows and Linux at the same time: the hypervisor lets both of those operating systems get virtualized and use the hardware as if they owned it themselves. That's basically what a hypervisor is.

"Now, there are two types of hypervisors. There are really skinny ones, like the one VMware first built, and then there are really thick ones that are actually a full operating system in themselves, and that's what OpenStack is. So when you're running OpenStack you're actually running two operating systems on top of one another. It's a horribly inefficient way of running a computer, because you spend cycles running multiple operating systems instead of running your applications."

Containers: "Now after we got to virtual machines we said 'Hey, this isn't exactly the most efficient way of doing this either, because we're still running multiple operating systems on the same computer. How can we run things a little bit faster?' And that's where containers came in. Containers are a buzzword today. A container basically means you take your application and it works by talking to the operating system, but the kernel is virtualizing it. I don't want to get too much into how it's actually done, but let's put it this way: you're not running multiple operating systems, it only looks like you're running multiple operating systems. And because of that you have less stuff running on the machine, and a much more efficient use of virtualization. That's what containers bring you in a nutshell."
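To make that "one kernel, many isolated workloads" idea concrete, here's a toy Python sketch (mine, not Comcast's; the function names are made up). Each spawned process plays the role of a container: it gets its own process ID, but every one of them reports the same kernel release, because they all share the host's single kernel.

```python
import multiprocessing as mp
import os
import platform

def worker(queue):
    """Each 'container-like' process reports its own PID and the kernel it sees."""
    queue.put((os.getpid(), platform.release()))

def run_workers(n=3):
    """Spawn n isolated processes; all of them share the host's one kernel."""
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(queue,)) for _ in range(n)]
    for p in procs:
        p.start()
    results = [queue.get() for _ in procs]  # one (pid, kernel) pair per process
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    results = run_workers()
    pids = {pid for pid, _ in results}
    kernels = {release for _, release in results}
    print(f"{len(pids)} separate processes, {len(kernels)} kernel(s)")
```

Real containers add filesystem, network and resource isolation on top of this, but the economics Gaydos describes come from exactly this sharing: no second operating system per workload.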

Microservices: "Microservices basically mean you take a process, and you have it do one thing, and do it really, really well. We've been running this kind of architecture for years. You put an API in, you spin up the process, and it does one thing instead of having one big monolithic application. The idea behind that is that if that process goes bad, you just replace that process; you don't reinstall all the other software. Microservices run really well on containers, so if you decompose your software and your application into microservices, now you can scale horizontally. If you need more X processes, you spin up a couple more containers. If you need more Y processes, you spin up some of those. If you're trying to deploy new software, rather than putting out a whole new monolithic build, you kill some of the containers that run the microservices and you put the new version in. You see how those containers, those new microservice versions, run versus the old ones. If it's good, you kill the rest of them and ramp up the new microservice. That's really what it comes down to.
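A rough Python sketch of that shape (illustrative only; the class and service names are mine): each service does exactly one job, and "needing more X processes" is just asking its pool for more instances, independently of every other service.

```python
from dataclasses import dataclass, field

@dataclass
class ServicePool:
    """A pool of identical single-purpose workers, scaled horizontally."""
    name: str
    handler: object           # the one thing this service does
    instances: list = field(default_factory=list)

    def scale_to(self, n):
        # Spin up (or kill) instances until exactly n copies are running.
        while len(self.instances) < n:
            self.instances.append(f"{self.name}-{len(self.instances)}")
        del self.instances[n:]

    def handle(self, request):
        # Any instance can serve the request; each does exactly one job.
        return self.handler(request)

# Two single-purpose services instead of one monolith.
billing = ServicePool("billing", lambda r: f"billed {r}")
lookup = ServicePool("lookup", lambda r: f"found {r}")

billing.scale_to(4)   # need more billing capacity? add billing containers only
lookup.scale_to(2)    # lookup scales on its own curve
```

The monolithic alternative would force you to clone the whole application to get more of any one function, which is the difference Gaydos is pointing at.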

"So when people come to you, and this happens to us all the time, and say 'Hey, we've just taken our application and we're now cloud-ready. We've put it into containers, and it's running microservices,' you get under the hood and start asking questions like: how did you decompose that big monolithic application into microservices? If it really isn't one function per service, and is instead lots of little functions all lumped together, they didn't do it. What they did was port the application that used to be on a physical machine into a container and call it a day. That's not going to get you the benefit of hyperscale. That's not going to get you horizontal scale."

Cloud native: "Cloud native basically means that if you design your application using microservices, it'll run on cloud really, really well. If, on the other hand, you only did the port, then you are still relying on some kind of file system someplace. The application isn't designed to be highly available. That means it can go down, and you still have to do a lot of care and feeding to keep the application up. A cloud-native application is always running. When something fails, you don't really know about it. If something looks a little ill, like the latency is up on one process, or it's consuming a lot of memory, maybe because there's a memory leak, you don't try to fix it. You just kill that container, that microservice. Your orchestration system replenishes it, automatically you have as much capacity as you need, and your users never know your application went through any of that."
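That "kill it, don't fix it" loop is what an orchestrator's reconciliation cycle does. A minimal Python sketch of the idea, under my own simplified model of a replica as a dict (real orchestrators like Kubernetes do this against a declared spec):

```python
import itertools

_ids = itertools.count(1)  # fresh IDs for replacement replicas

def reconcile(replicas, desired):
    """One pass of an orchestration loop: drop unhealthy replicas,
    then replenish back up to the desired count. Replace, don't repair."""
    healthy = [r for r in replicas if r["healthy"]]
    while len(healthy) < desired:
        healthy.append({"id": next(_ids), "healthy": True})
    return healthy

# One replica has sprung a memory leak; it gets killed and replaced.
fleet = [{"id": 0, "healthy": True}, {"id": -1, "healthy": False}]
fleet = reconcile(fleet, desired=3)
```

From the user's point of view nothing happened: capacity is back at the desired level before anyone notices the sick replica.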

ELK: "ELK stands for Elasticsearch, Logstash, and Kibana. This is one of the biggest innovations in the world. ELK replaces a very good commercial application by the name of Splunk. Splunk, those guys did an awesome job, but you have to pay for it. So some people in the cloud industry came up with Elasticsearch, which is basically an open source search engine that goes through and indexes the logs of all your applications. All these little microservices send their data and their telemetry, through their logs, into the stack: Logstash collects the logs and stores them off, Elasticsearch indexes them, and Kibana visualizes them and lets you do queries to find out what's going on in your system.

"So when you get out of the big, monolithic applications, you no longer have a CLI. You no longer have SNMP. You can't poll the application, because the application is spread across, in some cases, multiple data centers and, in some cases, thousands of machines. One day you might have 400 containers running; the next day you might have 4,000, and since you don't know how many there are, there's no way to do efficient polling. The only way you can actually do things is push, and ELK is a great thing to push data into and organize so you can actually debug your applications."
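In practice, "push" usually means each microservice emits structured log lines that the pipeline can index without anyone polling it. A hedged Python sketch of what one such line might look like (the field names, including `@timestamp`, follow a common Logstash convention, but the service and fields here are invented):

```python
import json
from datetime import datetime, timezone

def log_event(service, level, message, **fields):
    """Render one structured log line, ready to push into a
    Logstash/Elasticsearch pipeline. Field names are illustrative."""
    record = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
        **fields,                  # arbitrary telemetry travels with the line
    }
    return json.dumps(record)

# A hypothetical vCMTS worker pushing a latency warning.
line = log_event("vcmts-worker", "warn", "latency high", latency_ms=87)
```

Because every line is self-describing JSON, it doesn't matter whether 400 or 4,000 containers emitted it; the indexer sorts it all out on the receiving end.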

CI/CD: "CI/CD stands for continuous integration and continuous delivery. This is a little bit harder to achieve. What CI/CD means is that when a developer checks in code, it automatically builds a new version of the code, automatically publishes that code into some kind of test environment where automated tests are run, and if they pass, it eventually moves its way into your production system without you knowing it. This fundamentally changes how we, as operators, work, because in the past we had maintenance windows, we had upgrade windows and we had to plan for the upgrade versions. We knew when they were coming, and we tried to make sure an upgrade didn't land on the same day as the Super Bowl.

"In CI/CD, you can still kind of do some of that stuff, but effectively the machines take over and take it right from the developers into production. This is not for the faint of heart, because if you don't have good automated tests, if you don't have good code-checking procedures, if you don't have a good way to distribute your code, it's not going to work. So you have to spend a lot of time getting your application to work in the CI/CD manner."
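The core of that pipeline, stripped of any particular vendor's tooling, is just a gate: build the commit, run the automated tests, and promote only on green. A toy Python sketch (stage names and strings are mine):

```python
def ci_cd_pipeline(commit, tests):
    """Minimal CI/CD gate: build, run automated tests, promote only if
    everything passes. A sketch of the flow, not a real pipeline."""
    build = f"build-{commit}"                     # continuous integration step
    if all(test(build) for test in tests):        # the automated test gate
        return f"deployed {build} to production"  # continuous delivery step
    return f"rejected {build}: tests failed"      # nothing ships on red
```

This is also why Gaydos says it's not for the faint of heart: with no human maintenance window in the loop, the test suite *is* the safety net.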

SDN: "Software-defined networking is another big buzzword that's out there. Some people, even within our company, think that software-defined networking is network automation. Well, automation is certainly part of SDN, but what SDN really does is let the application talk directly to your network. So what does that mean? In the past, we would develop an application, go to the network guys and say 'Hey, I need a tunnel or some capacity from point A to point B; make these two boxes talk to one another.' What SDN gives you is the ability, through APIs and through computer software, to actually talk to the network and say 'I want to talk over there.'

"The application doesn't know all the router commands to do that. It doesn't know anything about BGP or any of the other buzzwords that come out of the network world. It just talks about intent and says 'I want to talk from here to here and I need this quality of service.' The SDN controller then comes in and programs the network. It takes those intents, translates them into the language the network knows and makes your application work. What it means to us, as application developers and as operators who need new and better applications, is that we can go much faster, because once we build the application, it programs the network automatically and you can spin up a new application with a lot fewer people involved. That's what SDN really brings."
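A toy sketch of that intent-to-commands translation in Python. Everything here is invented for illustration: the topology format, the command syntax, and the idea that paths are precomputed; a real controller would compute paths and speak real device protocols.

```python
def compile_intent(intent, topology):
    """Toy SDN controller: turn a high-level intent into per-hop
    'router commands'. Command syntax and topology format are made up."""
    path = topology[(intent["src"], intent["dst"])]   # precomputed path lookup
    qos = intent.get("qos", "best-effort")
    commands = []
    # One forwarding rule per hop along the path.
    for hop, next_hop in zip(path, path[1:]):
        commands.append(
            f"{hop}: forward {intent['src']}->{intent['dst']} "
            f"via {next_hop} qos={qos}"
        )
    return commands

topology = {("app-a", "app-b"): ["edge1", "core1", "edge2"]}
rules = compile_intent(
    {"src": "app-a", "dst": "app-b", "qos": "low-latency"}, topology
)
```

The application only ever states the intent dict; the per-device rules fall out of the controller, which is the separation Gaydos describes.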

Cloud: "You have public cloud, private cloud and hybrid cloud. Private is easy. You own it. You run your own virtual machines on it. Public cloud, you pay somebody else to run the virtual machines. Hybrid, you have a little bit of your own and a little bit of theirs, depending on which one you're trying to do.

"Then it comes down to latency. Can you really afford to be shipping all your data to AWS' public cloud and getting it back in time to run it? In our case, when we virtualized CMTS (cable modem termination systems), we did look at moving the entire vCMTS to the public cloud, but we don't think the latencies and distances would work, so we run it in our own private cloud.

"All of our ELK stack, for example, is sitting in the public cloud. So we ship all our logs there and let them deal with the hyperscale and the data storage, while we focus on the low-latency, high-bandwidth applications."

Virtualization in action

"What we try to teach our DevOps team is 'Listen, if you could go and use something that exists in the public cloud that somebody else has already virtualized, that's where I want you to focus because as operators we know what the application is.' We want to get to the real problem, which is we need boots on the ground to solve the problems of where the noise comes into our network. We need to get boots on the ground to install all the modems and find all the trouble in the (subscriber's) house. That's where we need people the most, not maintaining public cloud, for example.

"We don't have to maintain our own virtual machines if somebody else can do that for us because our time and our efforts are better spent working on the application itself. We want to get the networks out of the way so that instead of it becoming this thing that you have to care for and feed all the time, the applications just sit on top and run like they're directly connected."

Thanks, Bob Gaydos, and happy Thanksgiving. (Don't brine the turkey; presalting is the way to go.) — Mike

Editor's Corners are opinion columns written by a member of the FierceTelecom editorial team. They are edited for balance and accuracy.