In this interview, Yuval Stein, TEOCO’s Associate Vice President of Technologies, discusses the rise of ONAP and its impact on the telecom industry.

Q: The telecom industry, and really the entire world, is in the midst of a huge technological shift. New consortiums and standards groups are trying to figure out the best path forward and how to leverage these new capabilities in ways that benefit the industry. One of these groups is ONAP. Can you explain what ONAP is, and why it was started?

Yuval: ONAP stands for Open Network Automation Platform. It is a telecom industry initiative that began about three years ago and is focused on automating cloud, network, and IoT services. The project has been working to create a common automation platform that enables software, network, IT, and cloud providers and developers to rapidly automate new services and support complete lifecycle management.

ONAP was originally created by combining the AT&T ECOMP initiative and the Open-O initiative led by China Mobile. This merger of ideas then became an open source project under the Linux Foundation. Other operators soon joined, including Vodafone, Bell Mobility, Orange, and others. Today I believe they have about 100 members, including network and cloud operators and technology providers. The ‘platform of the future’ that they are designing is based on open source code and is intended to give operators a way to improve and automate their OSS capabilities. In effect, it reimagines all OSS functionality and how it is delivered, which is both a big task and a huge challenge.

Q: Let’s take a step back. Why was ONAP initially created? What industry challenges does it try to address?

Yuval: ONAP is about creating a network management platform that supports three key telecom industry initiatives:

The first is network automation implemented through the concept of closed loops. Closed-loop automation can be applied to many different processes, including provisioning, as well as network and service optimization, scaling, and healing. Historically, many of these processes have been performed manually, or through a combination of manual work and software. ONAP is working to move the industry toward greater automation. For example, with the benefit of data and analytics, closed-loop automation can monitor network occurrences, such as faults and congestion, and act proactively to correct any issues without human intervention. This is what is called a ‘closed loop’. A key outcome of this process is the ability to introduce new network services and functions much faster, and in an automated way.
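To make the concept concrete, here is a minimal closed-loop sketch in Python. The metric source, threshold, and scaling action are hypothetical placeholders rather than ONAP components; in a real deployment, dedicated analytics, policy, and orchestration modules would play these roles.

```python
import random
import time

# Hypothetical stand-ins for real monitoring and orchestration APIs (illustrative only).
def read_link_utilization(link_id: str) -> float:
    """Return current utilization of a link in the range 0.0-1.0 (simulated here)."""
    return random.random()

def scale_out(service_id: str) -> None:
    """Ask an orchestrator to add capacity for a service (simulated here)."""
    print(f"scaling out {service_id}")

CONGESTION_THRESHOLD = 0.85  # illustrative policy value

def closed_loop(link_id: str, service_id: str, cycles: int = 5, interval_s: float = 1.0) -> None:
    """Monitor -> analyze -> decide -> act, then observe again: the essence of a closed loop."""
    for _ in range(cycles):
        utilization = read_link_utilization(link_id)  # monitor
        if utilization > CONGESTION_THRESHOLD:        # analyze against a policy threshold
            scale_out(service_id)                     # act automatically, no human in the loop
        time.sleep(interval_s)                        # the next cycle observes the effect of the action

if __name__ == "__main__":
    closed_loop("link-42", "video-service")
```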

The second initiative is the move toward greater network virtualization. Until recently, telecom networks were entirely dependent on physical network elements, including servers, switches, routers, and gateways. The industry is aiming to change this: the functions of these network elements are becoming software-based. With software, installation and lifecycle management is much easier. Where in the past you needed someone to physically install a piece of hardware, now it can be done much faster, and at scale. To enjoy the full benefits of network virtualization, it's not sufficient to rely on virtualized network functions alone. You also need the right tools in place to manage them. Working with legacy systems designed only for physical network elements is like fitting a square peg into a round hole. Communication service providers need the right management software so that the entire process of introducing new network functions can be done in days instead of months.

There's an interesting side story to how ONAP's focus on virtualization has evolved. The goal of network virtualization was there from the outset, but the concept of ‘cloud native’ didn't exist when all the planning began. Tools like Docker, Kubernetes, and container storage were still in their infancy. As ONAP's plans moved forward, these new technologies were just coming onto the market, which meant ONAP needed to adjust its virtualization goals to incorporate these ‘cloud native’ technologies. I guess you could say ONAP's thinking was ahead of its time, but technology quickly caught up and provided an alternate path.


Figure 1: ONAP Closed-Loop Process – source: ONAP

The third initiative that ONAP supports is the move towards a more modern software architecture. In my opinion, this work is quite exciting as it suggests several interesting architectural directions. ONAP took a fresh look at the entire OSS architecture to determine how to make it more dynamic, agile, and scalable.

Q: Why is it that you find the architecture piece the most interesting and worthwhile?

Yuval: From an architecture perspective, ONAP's belief is that the various OSS modules, such as the End-to-End Orchestrator, Service Assurance, and Policy Management, should become separate modules that are decoupled from each other.

Instead of these functions being tightly tied together, the intent is for these new modules to communicate through a common message and event bus, which supports this separation. Additionally, the platform provides tools for service registration and discovery, along with support for internal and external APIs and key SDKs. Policies and closed loops are managed as ‘first-class citizens’, with a dedicated lifecycle that enables a higher degree of automation for the different virtual functions. This cloud-native deployment infrastructure supports the quick onboarding of network functions and services.
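To make the decoupling idea a bit more tangible, here is a toy publish/subscribe bus in Python. It is only a conceptual sketch; ONAP's actual message and event bus (DMaaP) and its service registry are far richer, and the topic and event names below are invented for the example.

```python
from collections import defaultdict
from typing import Callable

# A toy in-process event bus illustrating how decoupled modules can interact
# without calling each other directly. Topic and module names are invented.
class EventBus:
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

# A "policy" module reacts to events published by an "assurance" module;
# neither module imports or invokes the other directly.
bus.subscribe("fault-events", lambda e: print(f"policy: evaluating {e['alarm']}"))
bus.publish("fault-events", {"alarm": "link-down", "source": "router-7"})
```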

I believe this is the right path, and TEOCO has moved in this direction with the latest release of our service assurance solution, HELIX, where we have strengthened the external Kafka APIs to support this separation of duties. For example, to ensure seamless integration with ONAP's modules, we provide the collection, processing, and transmission of native VNF Event Stream (VES) messages using ONAP's Kafka-based DMaaP message bus.
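As a rough sketch, publishing a VES-style fault event onto a Kafka topic with the kafka-python client might look like the snippet below. The broker address, topic name, and event fields are illustrative assumptions rather than a complete VES payload, and the exact topic conventions depend on the DMaaP deployment.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

# Illustrative broker address and topic name; real deployments define their own.
BROKER = "dmaap-broker:9092"
TOPIC = "unauthenticated.SEC_FAULT_OUTPUT"

# A simplified, VES-like fault event; the real VES schema has many more required fields.
event = {
    "event": {
        "commonEventHeader": {
            "domain": "fault",
            "eventName": "Fault_LinkDown",
            "sourceName": "router-7",
            "lastEpochMicrosec": int(datetime.now(timezone.utc).timestamp() * 1_000_000),
        },
        "faultFields": {
            "alarmCondition": "linkDown",
            "eventSeverity": "MAJOR",
        },
    }
}

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, event)  # hand the event to the message bus
producer.flush()             # block until the event is actually delivered
```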

This has been a big leap forward in terms of network architecture – and ONAP has led the way on this front.


Figure 2: ONAP Architecture – source: ONAP

Q: This does sound like a massive undertaking. After all, telecom networks are complex, change takes time, and different service providers have different needs. Do you feel ONAP has been able to reach its objectives?

Yuval: Some portions of ONAP have been successful, but in my view its biggest hurdle has probably been its strict insistence on keeping all OSS functionality within the platform. While this is an admirable goal, it is a difficult one for most operators to adopt, and it isn't necessarily a good match for everyone.

For instance, the ONAP concept of using open source to develop the entire platform's OSS functionality creates the need for significant IT investment by each service provider, because they need to customize the platform to their needs. This may work for tier-1 service providers with large IT teams and budgets, but for smaller providers this is too big a project to take on. They can't support this level of custom development. And besides, isn't the goal to move away from proprietary, custom-built solutions? ONAP is a bit hard to swallow as a single solution designed to replace many critical network management functions that service providers have been developing and refining for decades.

Q: What would you have done differently?

Yuval: My belief is that for ONAP's initiatives to be successful, the process needs to be approached as an evolution, not a revolution. ONAP would benefit from being a bit more flexible, so that everything doesn't need to be ONAP-native from the outset. This could be achieved, for example, by creating functional APIs among ONAP's modules that allow for a separation of functions without going down to the microservices level.
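As a rough illustration of that idea, the sketch below defines two coarse-grained functional contracts between an assurance module and an orchestrator. The interface names and methods are purely hypothetical and are not ONAP-defined APIs; the point is that each side can be fulfilled by an existing OSS product or by a new module, as long as the contract is honored.

```python
from abc import ABC, abstractmethod

# Hypothetical coarse-grained "functional API" contracts; not ONAP interfaces.
class AssuranceAPI(ABC):
    @abstractmethod
    def report_degradation(self, service_id: str, severity: str) -> None:
        """Notify the platform that a service is degraded."""

class OrchestrationAPI(ABC):
    @abstractmethod
    def heal(self, service_id: str) -> None:
        """Trigger a healing workflow for the given service."""

# Modules depend only on these contracts, so an existing assurance product and a
# new orchestrator (or vice versa) can be combined without microservice-level rework.
```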

While there may be some initial cost savings because the platform is based on open source instead of licensed software, internal IT departments or external system integrators will still need to make significant changes to this basic ‘vanilla’ platform. And the fact remains – the OSS systems ONAP is designed to replace exist for a reason. Millions of man-hours have gone into creating these solutions. This includes commercial versions and those developed in-house. That’s a lot of built-in intelligence that is just being tossed aside for the benefit of open-source software. This is a huge risk for service providers.  I think that ONAP has very noble intentions, but it needs to be more flexible and take a more measured approach. These goals can be achieved, but we shouldn’t ‘throw the baby out with the bath water’, so to speak.

Q: Now that we’re three years into this initiative, what do you think will be the future of ONAP?

Yuval: I believe ONAP will have to find a way to integrate with existing OSS products and functionalities. It will need to be built in pieces over time, taking more of a best-of-breed approach, so that each OSS system can leverage the benefits of what the market has to offer. There must be a balance between architectural purity and common-sense functionality.

There must also be a balance between what's automated and what isn't. ONAP's current approach is to automate what it can, and then send everything else to the network operations team to manage via northbound APIs. The challenge with this approach is that it can separate functions that should not necessarily be separated. Sometimes you need a mix of both.

In reality, the relationship between what becomes automated and what does not is much more complex. There will be some functionality that can, and should, be automated, while some things are best left to human decision making. Yet these two sides must still ‘talk’ to each other, creating feedback loops of information.  With ONAP, the separation between the two is overly restrictive, and in my view, this is an approach that should be reconsidered.

Q: Can, and should, the industry continue to support these types of initiatives?

Yuval: I think life has its own way of providing balance. People in our industry look at ONAP and see the advantages and disadvantages. Overall, it has contributed to the industry by putting many good concepts on the table that are helping move all of us towards a better approach.

In fact, we've seen some operators adopt ONAP's concepts and architectural blueprints, but without using the currently available open-source ONAP code. I believe in taking an approach that allows service providers to select multiple vendors for different functional areas, as reflected in the different ONAP modules, to create a best-of-breed solution. But this will require easing some of ONAP's strict platform rules. In the end, this would create a better mix of practical functionality and modern architecture, using the best of what ONAP offers while benefiting from the maturity and robustness of the market's current OSS solutions. This approach also provides more options for a better balance between automated and manual functions.

While many service providers believe in ONAP's vision, it will take many years to make the transition. So yes, I believe ONAP has benefited the industry, and I hope these initiatives continue. But I also believe the path towards increased automation will be more of an evolution. History has shown it may not always be the fastest way forward, but it's typically the right way.