Hello. My name is Nikolai Nikolaev. I'm a senior staff engineer with Juniper Networks. Currently my focus is on the application of cloud native technologies within the telco domain, and today I'm going to talk to you about Open RAN and how it adopts Kubernetes. Let's start.

The modernization of the way we do networking started about 15 years ago with the introduction of software-defined networking. When this trend was combined with the concept of virtualization of hardware resources, network function virtualization, or NFV, was born. It allowed for running specialized networking workloads on top of commodity hardware: CPUs, memory, and network interface cards. The rise and proliferation of cloud native technologies and principles created a new wave of networking virtualization. The cloudification of the networking domain allowed for building and running scalable applications and creating loosely coupled systems that are resilient, manageable, and observable. This is a direct quote from the CNCF's cloud native definition. But my favorite part of that definition is "make high-impact changes frequently and predictably with minimal toil." This is a huge game changer for the networking world, where typically a new software rollout can take months.

While all of this was happening, the telecom industry kept running through newer and newer generations of its network architectures: 3G, 4G. But when 5G was introduced, it promised various new use cases such as self-driving cars and eHealth. This puts huge demands on the flexibility of the underlying infrastructure, where virtualizing network functions was a natural fit. Adding the cloud native principles into the mix makes it even more appealing for the telcos to adopt.

Let's zoom in a bit. A telco network is typically split into the core network, the radio access network or RAN, and the backhaul, which is the infrastructure that connects the core and the RAN.
While the core network was a relatively natural fit for NFV and cloudification, the radio access network was typically more rigid and hard to revolutionize. So people started looking at it in an attempt to align its implementation with the NFV and cloudification trends. That's how, in 2018, the O-RAN Alliance was born. It defined the standardized logical architecture of the radio access network and the interfaces between its components. It also defines various splits, or functionality distributions, across physical and cloud instances. To facilitate this design, the O-RAN Alliance defines ten work groups. What's most important here is Work Group 6, which covers cloudification and orchestration. This work group seeks to drive the decoupling of RAN software from the underlying hardware platforms and to produce technology and reference designs that allow commodity hardware platforms to be leveraged for all parts of the radio access network deployment. Sounds familiar, right?

The platform that hosts the software and hardware to implement the Open RAN architecture is called O-Cloud. It comprises CPUs, memory, storage, hardware accelerators like FPGAs and DPUs, and the relevant networking infrastructure. The specification does not mandate how the workloads that leverage these resources are orchestrated. One can run bare-metal workloads or virtualized VNFs and, of course, containerized CNFs. However, the adoption of cloud native principles is very appealing to the DevOps teams in the telco industry, too. The flexibility that this approach brings allows for a tremendous change in the way the infrastructure is managed and operated. A single O-Cloud can host several on-premise Kubernetes clusters, which can coexist with public-cloud managed Kubernetes clusters, as well as other types of workloads.
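To make the cloud native side a bit more concrete, here is a minimal, illustrative sketch of how a containerized network function (CNF) pod on such a cluster might request a secondary, high-performance network interface. The Multus annotation key is real, but the network name `sriov-net`, the image, and the SR-IOV resource name are made-up placeholders; real values depend on the NetworkAttachmentDefinitions and device plugins configured in a given cluster. The manifest is built as a plain Python dict purely for illustration:

```python
import json

# Hypothetical CNF pod manifest. "sriov-net" and
# "intel.com/sriov_netdevice" are placeholder names.
cnf_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "example-cnf",
        "annotations": {
            # Multus reads this annotation to attach extra interfaces
            # beyond the pod's default CNI network.
            "k8s.v1.cni.cncf.io/networks": "sriov-net",
        },
    },
    "spec": {
        "containers": [
            {
                "name": "cnf",
                "image": "example.org/cnf:latest",  # placeholder image
                "resources": {
                    "limits": {
                        # SR-IOV virtual functions are exposed as extended
                        # resources by an SR-IOV device plugin.
                        "intel.com/sriov_netdevice": "1",
                    }
                },
            }
        ]
    },
}

print(json.dumps(cnf_pod, indent=2))
```

The point of the sketch is simply that the pod asks for two things at once: a secondary network attachment (via the annotation) and the matching hardware resource (via the resource limit), which is exactly the kind of coupling between Kubernetes scheduling and physical NICs that RAN workloads depend on.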
This poses interesting implementation challenges, where Kubernetes-orchestrated workloads need to be able to communicate with remote Kubernetes clusters, bare-metal workloads, and virtualized workloads. One can imagine that such scenarios go well beyond traditional CNI networking, for example, and include technologies like Multus, SR-IOV, and high-performance DPDK communications. A subgroup in Work Group 6 is looking into the management interface with Kubernetes as a platform to host the CNF workloads. Its job is to define part of the so-called O2 interface, which will allow for seamless integration of the service management and orchestration platform, the SMO, and the multitude of Kubernetes clusters hosted in the O-Cloud that this SMO manages. There are certain challenges that are still to be resolved, such as the aforementioned networking challenges and accessing the accelerators that we talked about, but also simpler things such as how we manage namespaces, how we handle RBAC, and maybe even volume mounts, for example.

In conclusion, I'd like to highlight that Open RAN is a prominent platform for the future of modern telecommunications, but it's also an interesting use case for the cloud native technologies that it adopts. The development and the technology evolution that is to be made within this seemingly niche area will have a serious impact on the rest of the cloud native community. So if you want to know more, visit the link that is on the slide here. Thank you.