Good Measures for 5G Service Assurance (Light Reading, 17 Jan)
After years of preparation, commercial 5G has finally gone live in different parts of the world. More than 50 networks in over two dozen countries had switched on 5G service by the end of 2019. According to data released by the South Korean operators, subscriber uptake was already faster than over the comparable period after 4G was first introduced, and the research firm Ovum predicts that more than 1.3 billion subscribers will have signed up for 5G by 2023.
5G has come with high promises. The technology was designed from the outset around three service scenarios: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC). To deliver on these promises for consumers and business customers, service assurance is of utmost importance: not only to provide the experience users expect, but also to deliver the business returns that telecom operators have bet on 5G generating.
The starting point for assuring service is full visibility of what is happening on the network. However, the complexity of 5G networks has made thorough and reliable monitoring a challenge for operators. That complexity stems primarily from the fact that 5G networks will be hybrid, both in the sense that network components of different generations will operate side by side, especially for incumbent operators, and in the sense that 5G will be composed of traditional physical networks, virtualized network functions and cloud infrastructure.
Going cloud native, using the standardized 3GPP 5G core, is crucial for the long-term success of 5G; the new architecture enables the deployment of container-based microservices. Compared with conventional monolithic applications, containerized microservices facilitate continuous delivery, scale more easily and, in the service assurance context, isolate failures. But they are not without challenges.
Traditionally, service assurance on telecom networks has been a highly manual process, and one often carried out in reactive mode, with remedial actions taken only after faults occur. Such an approach is not only time-consuming but also limited by the manpower made available for the task.
This manual mode of monitoring will cease to be viable in 5G, not least because the volume of data generated by 5G networks will be exponentially larger than before, far beyond manual monitoring and analysis capacity. The new architecture, including microservices, will also demand completely different approaches to 5G service assurance.
To start with, monitoring solutions need to collect data from multiple sources. There will be the standard packet-based and event-based monitoring, including probing of the core network, the virtualized network functions and the network edge. Virtual probe-based monitoring and assurance are important because they provide network visibility at both a high level and a granular level. Such visibility will be critical for root-cause analysis and for gaining end-to-end coverage.
However, probe-based assurance needs to be complemented by data from other sources. This is especially true when it comes to containerized microservices. In such network environments, distributed tracing is needed to monitor and analyse the cross-process transactions taking place "under the hood." The OpenTracing API is one direction the market may embrace: it provides a standard, vendor-neutral framework that works across different proprietary tracing implementations.
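The core idea behind distributed tracing, which APIs such as OpenTracing standardize, is that every operation in a cross-process transaction records a span carrying a shared trace ID and a pointer to its parent span. The sketch below illustrates that mechanism in plain Python; the network-function names (`amf.handle_registration` and so on) and the `SpanRecorder` helper are hypothetical, not part of any real tracer.

```python
import uuid

class SpanRecorder:
    """Collects finished spans so cross-process transactions can be reassembled."""
    def __init__(self):
        self.spans = []

    def record(self, trace_id, span_id, parent_id, operation):
        self.spans.append({
            "trace_id": trace_id,    # shared by every span in one transaction
            "span_id": span_id,      # unique per operation
            "parent_id": parent_id,  # links the call chain together
            "operation": operation,
        })

recorder = SpanRecorder()

def start_span(operation, parent=None):
    """Start a span, inheriting the trace ID from the parent if one exists."""
    span = {
        "trace_id": parent["trace_id"] if parent else uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex,
        "parent_id": parent["span_id"] if parent else None,
        "operation": operation,
    }
    recorder.record(**span)
    return span

# One transaction crossing three hypothetical 5G core microservices:
root = start_span("amf.handle_registration")
child = start_span("smf.create_session", parent=root)
start_span("upf.setup_tunnel", parent=child)
```

Because all three spans share one trace ID, an assurance system can stitch the full call chain back together even though each microservice only saw its own step.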
Such an integrated approach can also help counter encryption challenges, which are expected to become even more serious in 5G. One of the most representative cases is video, which currently generates the most traffic of any application and is expected to account for much higher volumes in 5G. However, more and more video is now sent over encrypted connections; YouTube, for example, aims to encrypt 100% of its traffic in the near future. This trend threatens service providers' ability to gain full visibility of the data traversing their networks, and with it their ability to assure user experience.
The encryption challenge demands that operators gather, process, analyse and correlate data from multiple sources, including network packets, OpenTracing data from multiple vendors, network interfaces and event notifications, converting data points into the insights needed to deliver an optimal customer experience for 5G services.
Another challenge is the sheer quantity of data traversing 5G networks, including network data, service data and user data. The volume flowing through an end-to-end 5G network, built on a completely new architecture with the 5G RAN connected to the 5G core, will be exponentially higher than in LTE networks built on the Evolved Packet Core (EPC). It is therefore not financially viable to monitor and store all of it.
From complete monitoring to smart sampling
The complexity of 5G networks and the volume of data they generate will push service assurance practice from complete monitoring to smart monitoring, which relies on policies defining which types of data and which parts of the network are the most critical to monitor. One of the most efficient and effective approaches is smart sampling.
Analysis of current data-monitoring practices has shown that around 95% of the data gathered is never used. This implies a high level of wasted network resources, for example CPU processing power. To achieve the efficiency of smart sampling without losing sight of the most critical functions, monitoring policies need to be aligned with the service provider's key service targets.
For example, assurance solutions should fully monitor the network core to provide complete visibility of fundamental services, such as voice calls, media delivery and service subscription and unsubscription, as well as all transactions made by important customers, such as VIP clients or service-level-agreement partners.
Meanwhile, dynamic sampling should be applied to other monitoring. For example, a higher sampling percentage should be applied to greenfield networks, while a lower percentage suffices for established networks. On-demand analytics can be provided when assurance analysts or developers need them, and targeted container-level analytics will be critical for isolating and diagnosing application failures.
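A policy of this kind, full capture for fundamental services and VIP subscribers, dynamic sampling for everything else, can be sketched in a few lines. All policy values, service names and subscriber identifiers below are illustrative assumptions, not from any particular product.

```python
import hashlib

# Hypothetical sampling policy: rates are fractions of traffic to monitor.
SAMPLING_POLICY = {
    "voice_call": 1.0,      # fundamental services: monitor everything
    "subscription": 1.0,
    "default": 0.05,        # everything else: 5% dynamic sampling
}
VIP_SUBSCRIBERS = {"imsi-001010000000001"}  # SLA partners, VIP clients

def should_monitor(subscriber_id: str, service: str) -> bool:
    """Decide whether this transaction is captured by the assurance system."""
    if subscriber_id in VIP_SUBSCRIBERS:
        return True  # VIP / SLA traffic is always fully monitored
    rate = SAMPLING_POLICY.get(service, SAMPLING_POLICY["default"])
    # Deterministic hash-based sampling: a given subscriber is consistently
    # in (or out of) the sample, keeping per-user sessions coherent.
    bucket = int(hashlib.sha256(subscriber_id.encode()).hexdigest(), 16) % 100
    return bucket < rate * 100
```

Hashing the subscriber ID rather than sampling at random means a sampled user's entire session is visible end to end, which matters for root-cause analysis; a higher or lower `default` rate can then be set per network (greenfield versus established) without touching the code.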
In practice this means that on-demand analytics and assurance should be integrated with NFV MANO (Management and Orchestration) in 5G's "cloud-native" core networks, which at this stage primarily means containerized microservices deployed on bare metal and orchestrated by Kubernetes. Only through such an integrated approach can assurance lay the foundations for automation and a closed-loop approach to network monitoring, which will be critical for guaranteeing advanced 5G services.
NWDAF on every 5G core
Network slicing is one of the key 5G service propositions, and an especially meaningful one for business customers. Because slices are virtualized, service assurance of sliced networks will require new monitoring methods.
The Network Data Analytics Function (NWDAF) was first introduced in 3GPP's 5G System Architecture work in early 2017, and its specifications have been published in Releases 15 and 16. The objective of NWDAF is to provide network-level data analytics, for example slice load-level information.
Specifically, in Release 15, NWDAF supports two major types of services. It enables network function service consumers to subscribe to and unsubscribe from network-slice-specific congestion event notifications, and it supports those consumers' requests for operator-specific analytics.
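To make the subscription service concrete, the sketch below builds the kind of request body a consumer might POST to NWDAF's events-subscription service (specified in 3GPP TS 29.520). The URL, callback address and exact field names here are approximations of the Release 15 OpenAPI and should be checked against the published specification before use.

```python
import json

# Hypothetical NWDAF endpoint for the Nnwdaf_EventsSubscription service.
NWDAF_URL = "https://nwdaf.example.com/nnwdaf-eventssubscription/v1/subscriptions"

def build_slice_load_subscription(sst: int, sd: str, threshold: int) -> dict:
    """Build a subscription body asking NWDAF to notify the consumer
    when the load level of a network slice crosses a threshold.
    Field names approximate TS 29.520 and are illustrative."""
    return {
        "eventSubscriptions": [{
            "event": "LOAD_LEVEL_INFORMATION",    # slice load analytics event
            "loadLevelThreshold": threshold,      # notify above this level
            "snssaia": [{"sst": sst, "sd": sd}],  # the slice(s) of interest
        }],
        # Hypothetical callback where NWDAF delivers notifications:
        "notificationURI": "https://consumer.example.com/nwdaf-notify",
    }

body = build_slice_load_subscription(sst=1, sd="000001", threshold=70)
payload = json.dumps(body)  # this JSON would be POSTed to NWDAF_URL
```

A consumer such as PCF or NSSF would then receive a notification callback whenever the slice's load level exceeds 70, and could unsubscribe by deleting the subscription resource.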
The output of NWDAF is then used as input to the Policy Control Function (PCF) and the Network Slice Selection Function (NSSF): PCF uses the data in its policy decisions, while NSSF can use it for slice selection.
Given the critical role network slicing is expected to play in 5G's long-term success, all 5G cores should ultimately be equipped with NWDAF, a trend in monitoring that service providers and assurance solution providers alike should seriously plan to embrace.
Going forward, the application of NWDAF will expand beyond facilitating network slicing to become a central focus of data analytics, with multiple new directions in which it can be applied.
Adopting NWDAF will therefore not only provide targeted, precise analytics, improving monitoring efficiency, but also help 5G operators differentiate their service offerings and deliver on 5G's big promises.
In summary, operators should look for 5G monitoring solutions that support multiple data sources (packets, including Transport Layer Security (TLS) deciphering; OpenTracing; 3GPP events; and proprietary events), with an eye to synergy between the service assurance platform and the new NWDAF.
— Tomer Ilan, Senior Director of Product Management, RADCOM
This content is sponsored by RADCOM.