Living on the Edge: Why Service Providers Are Shifting their Focus to Multi-Access Edge Computing
21 MAY 2020
In this interview, Danny Itzigsohn, TEOCO’s Senior Director of Technology and Strategy, discusses the rise of Multi-Access Edge Computing (MEC).
Q. In telecom there’s suddenly a lot of focus on edge computing (MEC). What do you believe are the advantages for service providers and their customers?
Today’s telecom networks enable all types of applications – it is not just about voice services anymore; it’s data-hungry applications like video and internet streaming. Now, along with the growing demands from these existing telecom services, come 5G and IoT, which will introduce a whole host of new devices, sensors and applications, and we’ll see 10-100x more data crossing the network. Telecom networks are getting ready for this now.
Edge computing allows certain functions and applications to be handled more quickly and efficiently, closer to where the end user is. At the edge, network operators and application providers place all the functions and applications that require lightning-fast response times with very low levels of latency. You can think about it like a trip to the grocery store. For shopping trips on the weekend, it may be worth it to go to the big grocery warehouse store in the next town. It’s a longer drive, but they have low prices and lots of choices. But let’s say you just baked some rolls for dinner, and you realize you’re out of butter. You’re not going to drive 20 miles to the big grocery store while everyone waits and dinner gets cold. Instead, you’ll just go to the nearest 7-11, bodega or Quick Mart because it’s more efficient.
With edge computing it is the same, but most of those edge processing centers, those 7-11s, do not exist yet, or are still at the initial implementation stage. We must build them. This is a big undertaking. From end to end, CSPs must think about MEC as a domain. There are a lot of moving parts, and this must be a joint effort with partners, service providers and OSS vendors. It requires re-thinking the way things work and the way the network is architected.
Q. Those in the telecom industry often use the term mobile edge computing. Is this trend only relevant for cellular providers?
No, not at all. MEC is a cloud-based IT service environment physically located at the edge of a network. It allows multiple types of access at the edge, including wireline.
For instance, Verizon and AWS announced AWS Wavelength as their joint edge solution, where Verizon will provide the connectivity and AWS will provide the mini datacenters. These AWS datacenters primarily rely upon a fiber infrastructure to support 5G wireless services at the edge. Fixed line services have always played a critical role in wireless, and even more so with 5G.
Q. What are some applications that benefit from edge processing?
Edge computing is critical to four types of applications or use cases:
- Low latency
Use cases that require very low latency (URLLC – Ultra-Reliable Low-Latency Communication), such as autonomous driving, robotic arm controls and smart grid, cannot have the applications placed at the network core because the round-trip distance creates a higher latency, in most cases, than the maximum allowed by the SLA or use case. Therefore, the distance between the cell site and the application server must be minimized. In extreme cases, this even requires placing MEC capabilities at the site itself.
- Local services and content
Local services, such as augmented and virtual reality for local malls, museums or sport venues, and localized content for autonomous cars (e.g. HD maps), are oriented to serve a specific area, so the data and applications should be held in an edge data center that is within, or close to, the area it serves.
- Private networks
Private networks or private slices in 5G can serve many different use cases, such as those related to Industry 4.0, enterprises, ports, and university campuses. Analytics and applications related to these private networks or slices are frequently found at the edge, due to latency and reliability constraints, as well as privacy concerns.
- Avoidance of backhaul congestion
As new services and use cases are introduced, more demands will be placed on the network. The ability to process data at the edge helps reduce the amount of data that is sent back for processing. This reduction in backhaul helps improve overall network efficiency and scalability; two critical factors for ensuring 5G success.
Here’s an example: In the U.S., ‘silver alerts’ are issued when there is a missing person – typically someone who is elderly and cognitively impaired. The local government broadcasts information about this person through the telecom networks. If the person is suspected to be driving, the alert would be used to help find the vehicle as it passes a traffic camera or a toll booth. In this scenario, the service provider can either transport petabytes of video data to servers at a distant data center in the search for the right license plate, or do the analytics much closer, at the edge, and send only the final results – which are typically just a few bytes of really important information. In the first case, you might congest the backhaul with all the video content and create network delays, while the second approach utilizes strategically placed video analytic capabilities distributed among several MECs, optimizing the network traffic.
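A minimal sketch of this filter-at-the-edge pattern, in which the edge node scans the heavy data locally and forwards only the few bytes that matter. The `detect_plate` matcher, the frame records, and the alert plate number below are all hypothetical stand-ins; a real deployment would run a computer-vision model on raw camera feeds.

```python
# Sketch of the filter-at-the-edge pattern from the silver-alert example:
# the edge node scans video-derived data locally and forwards only tiny
# result records, instead of backhauling every frame.
# All names and data here are illustrative stand-ins.

ALERT_PLATE = "ABC1234"  # plate number from the alert (hypothetical)

def detect_plate(frame):
    """Stand-in for a real license-plate-recognition model.

    Here each 'frame' is just a dict carrying a pre-extracted plate string;
    a real edge node would run computer vision on raw camera frames."""
    return frame.get("plate")

def process_at_edge(frames, alert_plate):
    """Scan frames locally; return only the small result records worth sending."""
    hits = []
    for frame in frames:
        if detect_plate(frame) == alert_plate:
            # Only the camera ID and timestamp go over the backhaul:
            # a few bytes, not petabytes of raw video.
            hits.append({"camera": frame["camera"], "ts": frame["ts"]})
    return hits

frames = [
    {"camera": "cam-01", "ts": 1000, "plate": "XYZ9876"},
    {"camera": "cam-02", "ts": 1001, "plate": "ABC1234"},
    {"camera": "cam-03", "ts": 1002, "plate": "QRS5555"},
]
print(process_at_edge(frames, ALERT_PLATE))
# → [{'camera': 'cam-02', 'ts': 1001}]
```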
Q. Which Operational Support Systems (OSS), if any, need to be delivered in this fashion? Can everything be analyzed at the edge?
We’re engaged with several Tier-1 network operators who are driving this innovation. I know a lot of people are convinced that OSS-related analysis needs to happen at the edge, in real or near-real time, and that is what we are working towards. I think it’s fair to say that SOME things should happen at the edge; the smaller, simpler functions that need to happen fast – but not everything. Just like the grocery store analogy: it’s not practical to do all your shopping at the 7-11. It’s expensive and they can’t provide everything you need. But it makes sense for smaller, quick tasks – like running out to get the butter.
Video is a good example. It is the number one use of data traffic for 5G eMBB (Enhanced Mobile Broadband) use cases. So, we need to ask – how can it be assured and monitored? We can certainly connect to MEC network and application functions and perform analytics at the edge. Applying thresholds to performance counters and KPIs to proactively detect local deteriorations of QoS, or monitoring the user plane functions at the edge to make sure they are not congested, are both good examples of this. This could even trigger local actuation, or a local closed loop, with orchestration functionalities found at the MEC. But the heavy-duty analytics based on ML/AI algorithms are most often located at the central OSS instance.
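As a rough illustration of that thresholding idea, a sketch of an edge-side KPI check that flags local QoS deterioration. The counter names and limits below are invented for the example, not actual TEOCO or 3GPP KPIs.

```python
# Illustrative KPI thresholding at an edge node: compare local performance
# counters against limits and flag deteriorations that could trigger a
# local closed-loop action. Counter names and limits are invented examples.

THRESHOLDS = {
    "latency_ms":       {"max": 10.0},   # round-trip latency budget
    "packet_loss_pct":  {"max": 0.1},    # acceptable packet loss
    "upf_cpu_util_pct": {"max": 85.0},   # user-plane-function load
}

def check_kpis(counters, thresholds=THRESHOLDS):
    """Return the list of (kpi, value, limit) tuples that breach a threshold."""
    breaches = []
    for name, limits in thresholds.items():
        value = counters.get(name)
        if value is not None and value > limits["max"]:
            breaches.append((name, value, limits["max"]))
    return breaches

# Example: the local UPF is congested, so one KPI breaches its limit.
sample = {"latency_ms": 7.2, "packet_loss_pct": 0.05, "upf_cpu_util_pct": 92.0}
breaches = check_kpis(sample)
if breaches:
    # In a real MEC closed loop this would invoke local orchestration
    # (scale out the UPF, reroute traffic, raise an alarm, etc.).
    print("QoS deterioration detected:", breaches)
```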
TEOCO has already developed containerized agents that enable the seamless deployment and activation of collection, discovery, and actuation functions at the MEC level, and we are moving forward to containerize other capabilities to enhance our MEC offering.
Q. Why do services now need to be assured at the ‘edge’ – when they were not in the past?
It has to do with many things – including the growth in network virtualization, the growing amount of data that needs to be processed, the need to support and manage SLA-backed network slices, and the need for better response times. Wherever there are virtual functions, we must be able to monitor them, which means some services will need to be monitored and controlled through local OSS and orchestration functionalities, respectively.
In general, we can simply state that services must be assured at the edge because their main components, those network and application functions that make the service happen within their SLA boundaries, are moving to the edge for many use cases, as explained above.
Another issue is that service providers are concerned that when trying to deliver millisecond latencies, any SLA deviation or QoS deterioration will be blamed on the network, when the problem may actually lie elsewhere. We focus on measuring this so CSPs can see exactly where the root cause of these issues can be found. Is it the core network, the cloud, the connected car, the IoT sensor, the external application service – or something else? Service providers want to make sure they can measure network performance from end to end to show they are delivering what they’ve committed to deliver.
Q. What is needed to make all this happen?
CSPs need a solution that focuses on 5G enablement, assures services and slices at the edge, and can handle the massive data volumes, 5G slices, and the growing complexity and dynamic nature of the network – which is what we’ve done with TEOCO’s HELIX 11.
We have made significant investments that enable customers to run their services on local clouds at the edge, so they can support network slicing. We can now monitor performance at multiple levels, including infrastructure, network, application, slice, and service. We see this as an enabling technology: a way for CSPs to monetize new services, and to aid in the timely and reliable deployment and operation of these new services.
In designing this latest release, we really looked more closely at how to best support all the use cases that are relevant to 5G, both current and future. We’ve further decoupled Helix and made it more modular, so it can be deployed on any cloud. Systems need to talk to each other, using APIs and tools like Kafka or a centralized message bus; but it is less about the latest ‘cool’ tools and more about functionality and support of relevant use cases. We’re focused on being practical and doing what works best – because that’s what matters to our customers.
Q. What about the analytics part – isn’t that a critical piece of what’s happening at the edge?
Yes, in my opinion, the most interesting developments are happening on the analytics side. We have invested significant resources into ML and AI for many years now, and with our domain expertise in service assurance, we’re able to infuse this information into the use cases and algorithms we’ve developed. Most analytics algorithms, however, involve big data – this is their essence. For this reason, managing analytics at the edge is a bit of a challenge. We need, at the very least, an architecture where the ‘learning’ is done centrally; but the responses, how we use these learnings, will be decided at the edge.
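One simple way to picture that split, learning centrally and deciding at the edge: the central OSS fits a model over aggregated big data, then ships only the small learned parameters to each edge node. The mean-plus-three-sigma anomaly rule below is a deliberately trivial stand-in for the ML/AI algorithms mentioned above.

```python
# Sketch of "learn centrally, decide at the edge": the central OSS fits a
# model on data it has aggregated, then pushes only the learned parameters
# (a few bytes) to edge nodes, which apply them to local measurements.
# The mean + 3-sigma rule is a trivial stand-in for real ML/AI algorithms.
import statistics

def learn_centrally(history):
    """Central OSS: derive an anomaly threshold from aggregated measurements."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    # The learned model is tiny, so it is cheap to distribute to every edge.
    return {"threshold": mean + 3 * stdev}

def decide_at_edge(measurement, model):
    """Edge node: apply the learned parameters locally, with no backhaul."""
    return measurement > model["threshold"]

history = [5.0, 5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7]  # e.g. latency samples (ms)
model = learn_centrally(history)
print(decide_at_edge(5.1, model))   # a normal local reading
print(decide_at_edge(12.0, model))  # an anomalous local reading
```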
Today, everything is so centralized. As an industry, we need to push some of these activities out, and wrap our heads around the architecture. Moving things out to the edge requires you to split things up: to do foundational data processing and analytics at the center, while other things happen autonomously at the edge. But which things? It’s all about contextualization. You can’t assure the network without knowing how services are being deployed and used. And if CSPs want to support different SLAs for 5G slices, they need to remember that if you cannot monitor it, you can’t assure it – which means you can’t sell it. Furthermore, if you cannot monitor it, you also cannot initiate the network healing and self-adjustment procedures that are at the core of closed loops.
All of this must be considered. From an industry perspective, this is a real paradigm shift, but I am happy to say things are moving in the right direction.