SNAS joins the Linux Foundation's open-source projects

SNAS (Streaming Network Analytics System), a project that provides network routing topologies for software-defined applications, has joined the Linux Foundation.

The project helps track and analyze network routing topology data in real time for those using border gateway protocol (BGP) as a control protocol: internet service providers, large enterprises, and enterprise data center networks using EVPN.

Contributors to the project include Cisco, Internet Initiative Japan (IIJ), Liberty Global, pmacct, RouteViews, and the University of California, San Diego. SNAS complements several Linux Foundation projects, including PNDA, and is part of the next phase of networking growth: the automation of networking infrastructure made possible through open-source collaboration.

Topology data collection draws from both layer 3 and layer 2 of the network, and includes IP information, quality of service requests, and physical and device specifics.

RELATED: Linux Foundation merges Open Source ECOMP, OPEN-O, further harmonizes virtualization group efforts

By collecting and analyzing this data in real time, DevOps and NetOps teams and the network application developers who design and run networks can work with topology data at high volume to automate infrastructure management more efficiently.

Traditionally, service providers and enterprises had two methods to collect data: using an external measurement device, or via telemetry.

Serpil Bayraktar, principal engineer at Cisco, told FierceTelecom that using external measurement devices is not favored by service providers due to concerns that the devices could inadvertently cause issues to a router or switch on the network.


“Operators don’t like external monitoring devices to connect to their network and become part of the control plane,” Bayraktar said. “It’s a major risk because these devices could send something fatal that might crash the device.”

By using telemetry, a service provider can automate the communications process by which measurements are made and other data collected at remote or inaccessible points and transmitted to receiving equipment for monitoring. While telemetry provides efficiencies, Bayraktar said it also poses risks.

“The issue with telemetry is it’s a periodic snapshot of what’s going on in the network at certain intervals,” Bayraktar said. “The control plane routing protocols run all the time and you can miss something even if you’re pushing the data out of the router using a snapshot method like telemetry.”
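Bayraktar's point can be illustrated with a toy sketch (the timeline and event model here are illustrative assumptions, not from the article): a route that flaps entirely between two polling intervals is invisible to snapshot-style telemetry, while a continuous event stream records every change.

```python
# Toy illustration of the snapshot problem: a route is withdrawn and
# re-announced entirely between two polls, so periodic telemetry never
# sees the flap, while an event stream captures all three events.
events = [
    (1, "announce 10.0.0.0/24"),
    (5, "withdraw 10.0.0.0/24"),   # flap starts after the t=0 poll...
    (7, "announce 10.0.0.0/24"),   # ...and ends before the t=10 poll
]

def snapshots(events, interval, horizon):
    """Return the state visible to periodic polling: latest event per poll."""
    seen, state, i = [], None, 0
    for t in range(0, horizon + 1, interval):
        while i < len(events) and events[i][0] <= t:
            state = events[i][1]   # only the most recent event survives
            i += 1
        seen.append(state)
    return seen

polled = snapshots(events, interval=10, horizon=10)  # polls at t=0 and t=10
streamed = [e for _, e in events]                    # every event, in order
```

The polled view never contains the withdrawal; the streamed view does, which is the gap continuous BGP monitoring is meant to close.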

Evolving BGP monitoring

When the project was originally created under the OpenBMP moniker, it focused on providing a BGP monitoring protocol collector.

Since that time, the project has expanded to include other software components to make real-time streaming of millions of routing objects a viable solution. Linux Foundation said the name change helps reflect the project’s growing scope.

One existing IETF standard defines a BGP monitoring protocol. While service providers initially made little use of it, it is now gaining broader acceptance.

What’s compelling about this protocol is that it’s not part of the control plane, but rather another process running on a routing device. Its job is to capture everything that router is receiving through the control plane protocol from every device it is talking to, and send it to a collection server over a single TCP session.

“If a device has 100 BGP peers, you turn on this protocol and start receiving data from all 100 peers,” Bayraktar said. “You turn collection on one device and have access to 100 devices' data.”
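The protocol described here is standardized as BMP in RFC 7854, which frames every message on that single TCP session with a fixed 6-byte common header. A minimal sketch of parsing that header (the message-type names come from the RFC; everything else is illustrative):

```python
import struct

# BMP (RFC 7854) message types, as defined by the standard.
BMP_MSG_TYPES = {
    0: "Route Monitoring",
    1: "Statistics Report",
    2: "Peer Down Notification",
    3: "Peer Up Notification",
    4: "Initiation",
    5: "Termination",
    6: "Route Mirroring",
}

def parse_bmp_common_header(data: bytes) -> dict:
    """Parse the fixed 6-byte BMP common header: version, length, type."""
    if len(data) < 6:
        raise ValueError("need at least 6 bytes for a BMP common header")
    version, length, msg_type = struct.unpack(">BIB", data[:6])
    if version != 3:
        raise ValueError(f"unsupported BMP version {version}")
    return {
        "version": version,
        "length": length,  # total message length, header included
        "type": BMP_MSG_TYPES.get(msg_type, f"unknown ({msg_type})"),
    }

# Example: a header-only Initiation message (length 6, type 4).
hdr = parse_bmp_common_header(b"\x03\x00\x00\x00\x06\x04")
```

A collector reads headers like this in a loop, using the length field to know how many bytes of message body follow on the stream.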

There are three levels of BGP data on the router: raw data, filtered data and selected data. Selected BGP data is what the router uses to program traffic forwarding.

An additional property of BGP monitoring protocol (BMP) is that it can convey raw, filtered and selected data to a monitoring station.
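In BMP terms, those three levels map onto the per-peer header: raw data is the pre-policy Adj-RIB-In, filtered data is the post-policy Adj-RIB-In (signaled by the L flag in RFC 7854), and selected data is the Loc-RIB (a dedicated peer type added by RFC 9069). A hedged sketch of that mapping, with the flag and peer-type values taken from those RFCs:

```python
# BMP per-peer header fields (peer flags from RFC 7854; the Loc-RIB
# instance peer type comes from RFC 9069).
PEER_TYPE_LOC_RIB = 3      # selected routes actually installed by the router
FLAG_L_POST_POLICY = 0x40  # L flag set: Adj-RIB-In is post-policy (filtered)

def classify_rib_level(peer_type: int, peer_flags: int) -> str:
    """Map BMP per-peer header fields to the article's three data levels."""
    if peer_type == PEER_TYPE_LOC_RIB:
        return "selected"          # Loc-RIB: routes chosen for forwarding
    if peer_flags & FLAG_L_POST_POLICY:
        return "filtered"          # post-policy Adj-RIB-In
    return "raw"                   # pre-policy Adj-RIB-In, as received

levels = [
    classify_rib_level(0, 0x00),  # global-instance peer, pre-policy
    classify_rib_level(0, 0x40),  # global-instance peer, post-policy
    classify_rib_level(3, 0x00),  # Loc-RIB instance peer
]
```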

Leveraging Kafka

The collector not only streams topology data, it also parses it, separating the networking protocol headers and then organizing the data based on these headers. Parsed data is then sent to the high-performance message bus, Kafka, in a well-documented and customizable topic structure.

Similar to a message queue or enterprise messaging system, Kafka lets users publish and subscribe to streams of records. It lets service providers and users store those streams in a fault-tolerant way and process streams of records as they occur.
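A consumer tapping that stream might look like the sketch below. The `openbmp.parsed.<table>` topic-naming convention follows the OpenBMP collector; the specific table names and broker address here are illustrative assumptions, not a verified configuration.

```python
# Sketch of a consumer tapping the collector's Kafka stream. Topic names
# follow the "openbmp.parsed.<table>" convention used by the OpenBMP
# collector; the exact table list below is an assumption for illustration.
PARSED_TABLES = ("router", "peer", "unicast_prefix")

def topic_for(table: str) -> str:
    """Build the Kafka topic name for one parsed-data table."""
    return f"openbmp.parsed.{table}"

def stream_parsed_messages(brokers: str = "localhost:9092"):
    """Yield (topic, payload) records from the parsed topics.

    Requires the kafka-python package; imported lazily so the rest of
    this sketch works without it installed.
    """
    from kafka import KafkaConsumer  # pip install kafka-python
    consumer = KafkaConsumer(
        *(topic_for(t) for t in PARSED_TABLES),
        bootstrap_servers=brokers,
        auto_offset_reset="latest",
    )
    for record in consumer:          # blocks, waiting for new messages
        yield record.topic, record.value

topics = [topic_for(t) for t in PARSED_TABLES]
```

Because the topic structure is documented and stable, an application only needs to subscribe to the tables it cares about; it never has to speak BMP or BGP itself.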

“The moment we added Kafka, we had a light bulb go off because we said anyone can write an application to tap into the data,” Bayraktar said. “They don’t have to be hardcore network people; they can just be network operators who understand the content of the data.”

One of the project’s unique features is that it includes an application that stores the data in a MySQL database. Other users can access the data either at the message bus layer using Kafka APIs or via the project’s RESTful database API service.

“We are trying to enable application developers so you can make an HTTP query into the database to do all kinds of analytics based on this data,” Bayraktar said. “We are also open-sourcing a user interface because when you first start working with this data, you want to visualize what you’re looking at, what kinds of information the device has and what organization it belongs to.”
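An analytics query against that REST layer could be as simple as the sketch below. The host, port, path and query parameter are hypothetical stand-ins for illustration; the article does not document the actual endpoints.

```python
from urllib.parse import urlencode

def build_query(base: str, table: str, **params) -> str:
    """Build an HTTP query URL for the database REST API.

    The "/db_rest/v1/<table>" path and the parameter names are
    assumptions made for this sketch, not documented endpoints.
    """
    url = f"{base}/db_rest/v1/{table}"
    return url + ("?" + urlencode(params) if params else "")

url = build_query("http://localhost:8001", "peer", routerName="edge1")
# With the `requests` library installed, the lookup itself would be:
#   rows = requests.get(url).json()
```

From there, any HTTP-capable tool or dashboard can run analytics over the collected topology data without touching the routers.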