Choosing the Right Closed Loop Implementation Strategy
In 1926, the physiologist Walter Cannon coined the term ‘homeostasis’ to describe the body’s ability to maintain the internal consistency that is essential to life. We can spend an afternoon out in sub-zero temperatures, yet our internal body temperature barely budges. Our brain carefully regulates blood sodium levels, and our pancreas keeps insulin levels where they need to be. Hundreds of systems must Collect, Analyze, Decide and Execute their individual tasks while still communicating with each other as part of the whole. And they must do this automatically, in ‘closed loop’ fashion, to keep everything in check. Humans are clearly complex organisms, and in many ways, so are today’s communications networks.
Today we are at a real technological tipping point. For progress to continue, we must find new ways to automate: to do more, but faster, better and cheaper. Network function virtualization (NFV) provides one big piece of this network automation puzzle. Just like the human body, NFV helps networks ‘self-heal’, creating their own form of homeostasis. And as NFV continues to evolve, our understanding of closed-loop automation is improving. An entire assortment of use cases has come to light, each with its own unique factors and requirements, differing in scale, in the complexity of decision making, and in the speed required to execute actions.
What we are finding is that, as with the human body, there is no single approach to implementing these closed loops, but rather many, depending on the types of decisions that need to be made at different points in the network.
Different Problems Require Different Analyses
Standards organizations such as TM Forum, open source projects such as ONAP, and Communications Service Providers (CSPs) have designed underlying architectures to support closed-loop use cases. Several architecture options have been designed to determine when each of the ‘Collect, Analyze, Decide and Execute’ steps should take place. These options vary depending on whether you are trying to identify a problem or solve one.
Distributed vs. Centralized
There are two main options for where analysis should be conducted for problem identification: distributed or centralized. In a distributed model, the assurance and analysis functions are embedded in each layer of the OSS solution. In a centralized approach, the OSS has a single analytics module that can analyze data across layers. The main advantage of the centralized option is that advanced algorithms and Artificial Intelligence (AI) can leverage data from across the entire network.
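To make the contrast concrete, here is a minimal sketch, in Python, of what a centralized analysis step might look like. The metric names, the `cpu_util` threshold and the layer labels are illustrative assumptions, not part of any TM Forum or ONAP interface; the point is only that a centralized module can correlate data collected from every layer at once, which no single per-layer analyzer can do.

```python
def centralized_analyze(metrics_by_layer):
    """Correlate utilization metrics collected from all network layers.

    metrics_by_layer maps a layer name (e.g. "vnf", "vm", "physical")
    to a list of per-element metric dicts. All names are hypothetical.
    """
    # Flag every element whose CPU utilization crosses an illustrative threshold.
    overloaded = [
        (layer, element["id"])
        for layer, elements in metrics_by_layer.items()
        for element in elements
        if element["cpu_util"] > 0.9
    ]
    # Overload visible in several layers at once hints at a network-wide
    # cause, rather than a single misbehaving function.
    network_wide = len({layer for layer, _ in overloaded}) > 1
    return {"overloaded": overloaded, "network_wide": network_wide}
```

A distributed analyzer, by contrast, would see only one of these per-layer lists, so it could never set the `network_wide` flag.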
While both options can co-exist as a hybrid architecture, I would argue that in most cases, the centralized approach is the preferred strategy.
Let’s examine why. A common use case that might appear simple at first glance is when too little processing power is allocated to a virtual machine (VM) or a virtual network function (VNF). The distributed approach would identify the problem and immediately allocate more resources, without considering how this action could impact the broader network.
A more holistic approach considers the root cause and applies logic like the following:
• If the need for more processing power is “natural” growth in demand, additional processing power should be allocated
• If the need is coming from sudden, exceptional behavior of a specific VM/VNF, the resolution may be to reset, restart or reconfigure the VM/VNF
• If the need is coming from a global phenomenon (such as a severe storm or a large sporting event), the course of action may be totally different, and part of a larger system response to the overall need. As the whole system may be very busy, a decision can be made to preserve only vital VMs/VNFs, keeping the system running in tough conditions.
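The three cases above can be sketched as a single decision function. This is a hypothetical Python illustration; the field names (`global_event`, `anomalous`, `vital`) and the returned action strings are assumptions made for the example, not a real orchestrator API.

```python
def decide(vnf_state):
    """Pick a remediation action for one VM/VNF that needs more CPU."""
    if vnf_state["global_event"]:
        # Case 3: a network-wide phenomenon (severe storm, sporting event).
        # Preserve only vital functions to keep the system running.
        return "keep" if vnf_state["vital"] else "suspend"
    if vnf_state["anomalous"]:
        # Case 2: sudden, exceptional behavior of this specific VM/VNF.
        return "restart"
    # Case 1: "natural" growth in demand.
    return "allocate_more_cpu"
```

The value of centralizing this logic is that `global_event` can only be set by an analyzer that sees the whole network; a purely local loop would always fall through to case 1.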
As with the human body, problems don’t typically live in isolation. In the case of a network-wide phenomenon, handling each VM or VNF separately may cause even greater system problems, which is why distributed analysis can provide only a short-sighted remedy.
One size does not fit all
This use case touches on just one simplistic problem. If homeostasis is the goal, everything needs to work together for the good of the network. For this reason, closed loops will have different architecture implementation options depending on many variables. Fortunately, groups like TM Forum and its member companies, including TEOCO, are working to resolve these issues and help service providers around the globe create networks that will enable tomorrow’s services.