Hi all, welcome to today's video. We are going to discuss infrastructure provisioning for 5G telco orchestrators. To introduce myself, I am Stefan. I work for ADL, Seattle Digital Labs. We are a service provider mainly focusing on the telco and banking domains, supporting our clients in their digital environments.

To start with, let's go through our content for the day. First of all, we'll discuss a bit of history: the transition from 3G to 5G as a whole, and how not only the infrastructure layer but the entire architecture shifted when telco operators transformed from legacy 3G to 5G systems. Then we'll discuss CUPS in 5G, which is Control and User Plane Separation, and which triggers the requirement for rapid infrastructure provisioning capabilities, especially for 5G user planes. Then we'll discuss some challenges that come up within these setups, referring in part to survey results published by the CNCF itself. Then we'll discuss Cluster API, which we believe is quite a unique solution and quite an advantage for provisioning infrastructure in 5G setups. Then we'll go through the deployment architecture, followed by a small demo to showcase how we use Cluster API within our telco orchestration systems.

Right, we are in a transition in the telco domain. It happened earlier when we moved from 3G to 4G; now it is happening again with the move to 5G. The changes are in the RAN, the radio access network, and also in the core network components, and our focus is on the latter, where telcos are experiencing the shift from monoliths to service-based offerings in their core networks, similar to all the other domains. One could say that the telcos' use of proprietary hardware coupled with special protocols and technologies has delayed them from moving to cloud native. But now it is happening, and it is happening fast.

OK, let us backtrack a bit and see what happened when we moved from 3G to 4G. In the 3G era, circuit switching was still in use, proprietary closed systems were the norm, and the whole core functionality was one large entity. In came 4G, with the EPC and IP switching becoming the common standard. Separate core network components were defined, connected through standardized interfaces, and VMs were used as the infrastructure, with some of the core functions converted into virtual network functions. But proprietary systems and large server hardware still dominated the infrastructure layer, and these can be considered legacy, hardware-bound systems.

Then came 5G, with vast improvements across the RAN and also in the core network. The VNFs or PNFs are now being converted to CNFs, cloud native functions, with service-based architecture in mind. The core network functions are now decomposed into multiple services, and there is a clear separation between the control plane and the user plane functions. This separation itself enabled 5G capabilities such as network slicing and private networks, where multiple UPFs are deployed in different localized environments and communicate with the control plane when required. And the deployment of these UPFs can be anywhere: public cloud, hybrid environments, on-prem data centers, basically anywhere you name.

Now, this introduced the challenge: when provisioning infrastructure for this user plane function, we need to be quick. And it is critical for telco operators, telco vendors, and support partners, people like us, to be able to come up with solutions that can cater for this type of dynamic infrastructure requirement.
Now, if we summarize up to this point, the main takeaway here is CUPS and the fact that it triggers the requirement for declarative infrastructure platforms that can be made available quickly across multiple providers for UPFs. The capability to do so can give any telco an edge, with lower time to market.

Okay, let's move forward and discuss a bit more about CUPS. As mentioned earlier, the concept of CUPS provides the basis for low latency operations in 5G networks, because processing is moved closer to the end user with the user plane function. As an example, consider a smart factory with IoT devices, autonomous machines, and so forth; these devices rely heavily on high-speed intercommunication. Another practical scenario is a smart city with autonomous cabs in operation. The backbone of these autonomous vehicles is the underlying ultra-low-latency, reliable communication platform, which is the 5G network. User plane functions such as quality of service or routing play a crucial role in the proper operation of this network. What these examples show is that 5G UPFs can be, and need to be, deployed anywhere. The factory might be using its own data center, public cloud, private cloud, anything. And the operation will be at large scale as well. So this results in a requirement for manageable, reusable infrastructure procedures, which of course should be declarative considering the scale factors.

Okay, let's move forward. Now that we have established the requirement for infra platforms that can be deployed anywhere for UPFs, let's discuss the options and challenges one could face. First up is a fact, based on a recent survey performed by the CNCF among telco operators, about the challenges they see in this shift from legacy systems to cloud native deployments. As per the results, third place goes to cluster management, the infrastructure management part of it, let alone the requirement of moving to cloud native services on their own core network, where they have full control. The telcos are also expected to have the ability to deploy rapidly on the user plane side, so to say. This is true not only for telcos, but for telco vendors, support partners, and network function developers who may be developing different UPF functions. As an example, if you take a particular software component that addresses a specific use case in the 5G ecosystem, this product itself should come with rapid deployment capabilities. You cannot go and ask your clients to provide specific hardware, a specific kind of cluster deployment, or a preferred cloud provider. Nor can there be dependent components; it should be part and parcel, ready to be deployed anywhere, and pluggable as well. Concepts like GitOps and orchestration layers help in this scenario, as they provide the background for successful deployments and integrations.

Okay, let us move forward. The background is set: we discussed the changes, we discussed the challenges, and we know the requirements. Now, what tools in the CNCF landscape can help us address this? In comes Cluster API. Along with some additional supporting tools, we believe it provides a wonderful platform to deploy UPFs across different infrastructure environments. Cluster API itself is managed by the Kubernetes Cluster Lifecycle Special Interest Group (SIG), and it is a production-ready product. Cluster API was developed to address this specific requirement: the need for a declarative approach to define and manage infrastructure across multiple infrastructure providers.
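To make that declarative idea concrete, here is a minimal sketch of what a Cluster API cluster definition can look like. All names, the project, and the region below are illustrative assumptions, not values from our setup:

```yaml
# A minimal, illustrative Cluster API cluster definition (v1beta1).
# All names and the GCP project/region are hypothetical.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: upf-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: upf-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPCluster            # swap for AWSCluster, DockerCluster, etc.
    name: upf-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPCluster
metadata:
  name: upf-cluster
spec:
  project: my-5g-project        # hypothetical GCP project
  region: europe-west1
  network:
    name: default
```

The Cluster object keeps the same shape across providers; only the referenced infrastructure objects change, which is what makes the definitions portable.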
That can be in the cloud, on-prem, bare metal, VMs, et cetera. Cluster API supports many providers, including almost all the public cloud providers and also bare metal solutions, making it a great fit for the 5G telco domain, especially considering the edge deployment requirements of 5G ecosystems.

Cluster API is based on an operator model: it extends Kubernetes and brings infrastructure management under it. It deploys a management cluster where it runs its core services along with multiple providers. As an example, let's say we are managing clusters for two different enterprise private 5G networks, one on top of AWS and another on top of GCP. In this type of scenario, we need to configure the Cluster API management cluster with two providers, that is, for AWS and for GCP. So Cluster API allows us to define clusters declaratively and, of course, reuse those definitions. The other major point to emphasize is that Cluster API is not only about the deployment or provisioning of clusters; it actually provides lifecycle management capabilities for the cluster. Let's say you need to scale up the cluster, increasing the nodes from two to three or whatever the number is: just do it with a change in the manifest. Or assume you need to update the cluster, a version update: same thing. You can do it with Cluster API in a standard manner, across workload clusters. In our use case of 5G UPF deployment, that provides a distinct advantage, especially the capability to deploy within any infrastructure provider.

If you refer to the diagram here, thus far we have only discussed this particular box, the infrastructure management part. If we go for a broader view of the deployment, Cluster API is a great match for a modern cloud native application orchestration platform to be part of as the infrastructure provider, and it can be an integral part of a platform engineering ecosystem. Assume you are using a platform that supports your application development, CI/CD, reporting, analytics, and so forth. Why not include infrastructure management as part of your application platform? This would allow applications to be deployed anywhere with minimum hassle. This is where Cluster API shines the most.

Also, there are some other tools that can make Cluster API more user friendly. One in particular is Kustomize. Kustomize is a configuration management tool, and it helps a lot with reusability. As an example, assume it is required to deploy different environments of a cloud native application, such as dev, QA, and production, on different infrastructure. For this you can use Kustomize to reuse Cluster API manifests and reduce duplication.

Okay, now let's briefly discuss the deployment architecture of Cluster API in a 5G UPF environment. As discussed earlier, Cluster API requires its own management cluster. This management cluster can run anywhere, and in it we set up the required providers that are to be managed. One management cluster can provision and control any number of workload clusters, in different providers of course. Cluster API creates multiple CRDs in its management cluster, including Machines and MachineTemplates. This allows us to define cluster nodes which are preconfigured with telco deployment prerequisites, for example SR-IOV or Multus CNI, so they can be pre-installed on the nodes. This speeds up cluster provisioning and adds a great deal of flexibility as well. Also, the Kubernetes Image Builder can be used to define these reusable machine images for different infrastructure providers.
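As a rough sketch of how those pieces fit together, a machine template can point every node of a pool at a custom image in which such prerequisites are already baked in. The template below is illustrative; the image path, instance type, and names are assumptions:

```yaml
# Illustrative GCP machine template for worker nodes (names are hypothetical).
# The referenced image is assumed to be pre-baked, e.g. with SR-IOV / Multus
# CNI prerequisites, using the Kubernetes Image Builder.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: upf-cluster-md-0
spec:
  template:
    spec:
      instanceType: n1-standard-2   # Compute Engine machine type per node
      image: projects/my-5g-project/global/images/capi-ubuntu-2204-5g-upf
```

Because the template is just another Kubernetes object, every node created from it comes up identical and ready for the 5G workload. So, let's move into our demo.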
It is more of a discussion on how we use Cluster API within our orchestration platform. For demo purposes, we have one management cluster running and one workload cluster provisioned on top of Docker. Let's use the Cluster API visualizer to view our current topology.

Okay, let's start with the demo. The idea is to show how easy it is to spin up a new cluster with specific requirements for a given service using Cluster API. What we have here at the moment is an Ubuntu machine with kind (Kubernetes in Docker) installed, and we are running our Cluster API control plane on top of kind. To add a note: kind is not supported for Cluster API control planes in production use, but for demo purposes we are using kind today. If the control plane is intended for production use, it should be running on a supported infrastructure provider. Among these two kind clusters, the main cluster named kind is hosting the control plane for us.

If we check what pods are running under Cluster API itself, we will see the main Cluster API services as well as the providers we have configured. If you take capi-system, that is the main Cluster API core service, the Cluster API backend itself. In our setup, we have two infrastructure providers configured: capg-system, which is the infrastructure provider for GCP, and capd-system, which is the infrastructure provider for Docker. For demonstration purposes, we have a Docker workload cluster running, configured through capd-system. It is running with one control plane node and two worker nodes. This is just for demonstration purposes, to showcase the multi-provider capabilities of Cluster API. And for visualization purposes, we are using the open source project Cluster API visualizer.

So this is our current setup. As of now, we have our control plane running on top of kind, and we have our workload cluster running on top of Docker, named docker-5g. If we go inside, we can see that it has one control plane machine and two worker nodes running at the moment. That is what we have in hand right now. We also have an interface created around Cluster API. It is a sample interface that operators can use to manage the workload clusters in a single portal. With this kind of portal, we can optimize DevOps execution by providing paved workflows that take boilerplate away from developers, not only at the integration level but at the deployment level as well. This helps implement platform engineering practices without compromising governance requirements in any given operator environment.

To come back to our context, in order to demonstrate Cluster API's capability to go across providers, let's assume we need to spin up another cluster, this time on top of GCP, to support a 5G UPF service, maybe for an edge data center for a client hosted on GCP itself. This particular service might have specific requirements, maybe specific tools related to the 5G setup, on the Kubernetes worker nodes. So first up, what we need is a machine image built with the requirements of this application pre-installed. In our case, we have this machine image created for GCP using the Kubernetes Image Builder. What the Image Builder allows us to do is define machine images in a reusable manner and add version control capabilities based on our requirements. Basically, it provides a declarative way to define the images.
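For readers wondering what "one control plane node and two worker nodes" looks like in Cluster API terms, here is an illustrative sketch of the kind of objects that sit behind a workload cluster like docker-5g. The names and the Kubernetes version are assumptions, not the actual demo manifests:

```yaml
# Illustrative shape of a Docker-provider workload cluster definition.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: docker-5g-control-plane
spec:
  replicas: 1                      # one control plane node
  version: v1.28.0                 # assumed Kubernetes version
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: docker-5g-control-plane
  kubeadmConfigSpec: {}            # kubeadm settings elided for brevity
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: docker-5g-md-0
spec:
  clusterName: docker-5g
  replicas: 2                      # two worker nodes
  template:
    spec:
      clusterName: docker-5g
      version: v1.28.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: docker-5g-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: docker-5g-md-0
```

Scaling the workers later is then just a matter of changing the replicas field in the manifest. Now, coming back to the machine image we just prepared.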
These images can be defined across providers. The Image Builder project goes along with Cluster API, it is kind of a part of the Cluster API ecosystem, but it can definitely be used as a standalone product for other purposes as well. So we have our cluster image created, to be used as the basis for the nodes of our GCP Kubernetes cluster.

Now let's go ahead and deploy a cluster using this image. In our case, we will use the interface that I mentioned earlier. At the moment, the backend cluster supports GCP and Docker only, so we will select GCP. In the backend, this selection will configure the GCP project, region, service accounts, authentication tokens, all the configuration that is required for GCP operations through Cluster API. If we take the cluster config template, this of course points to the image which we have already created, and it holds the configuration for the number of control plane nodes, the number of worker nodes, and the Compute Engine class, the node type we are going to use for the control plane and worker node configurations. For this run, we will be using one node each for the control plane and the worker pool. The application configuration relates to the services which would be deployed on top of this cluster along with cluster creation, but for demo purposes we will not deploy any applications.

So let's go ahead and create the cluster. Just to mention, if we check the GCP backend that we are calling through this Cluster API interface, at the moment it has only one VM instance running. But once we start the cluster creation, we will see new nodes being provisioned; actually two nodes will be provisioned in this particular project, one for the control plane and one as a worker node. Now this should take some time to complete, and if we switch to our VM instances page in the GCP console, we should see the cluster provisioning getting started. Also, in the Cluster API visualizer, we can now see that provisioning of the new cluster on GCP has already started. It will first deploy the control plane, then it will deploy the worker node in GCP.

Since this is going to take some time, meanwhile let's discuss some of the operational challenges that could come up with this type of workload cluster management, the topology used by Cluster API. One challenge is the concerns related to connectivity: connectivity between the management cluster and the workload clusters is a must for this topology to work. Especially in a setup with multiple providers, this connectivity requirement can span multiple public clouds, and that might result in configuration overhead as well. There might also be an argument that the initial effort required to set up this solution is a bit hectic. And there might be concerns related to security, as there is one privileged entity, the management cluster, that acts as a central point to manage all of the clusters. Those are, all in all, operational concerns, specific to the given implementation. Overall, the advantages Cluster API provides in terms of manageability, especially when deployed at scale, can counter these challenges for a given environment.

Okay, so now our cluster is provisioned: we have our control plane node up and running, and our worker node is up and running as well.
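As an aside, this is also where Kustomize, mentioned earlier, pays off: if this GCP cluster has to be repeated for several environments, a small overlay can reuse the base manifests and patch only what differs. The layout and names below are illustrative assumptions:

```yaml
# kustomization.yaml for a hypothetical "qa" overlay that reuses a base
# Cluster API manifest set and patches only the worker replica count.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                    # base Cluster, control plane, MachineDeployment
namePrefix: qa-                   # keep per-environment names distinct
patches:
  - target:
      kind: MachineDeployment
      name: gcp-5g-md-0           # hypothetical worker pool from the base
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3                  # QA gets three workers instead of one
```

The same base can then be stamped out per environment with nothing duplicated.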
Back in the Cluster API visualizer, we can confirm that the GCP cluster is provisioned as well. Now we can use this cluster for any purpose, just like any other Kubernetes cluster deployed using any of the methods in place. Again, to emphasize and to go back to where we started: this kind of declarative, manageable cluster deployment can immensely help in use cases such as 5G, where there is a requirement for rapid infrastructure deployment across providers. In our example we touched Docker and GCP only, but the Cluster API provider list is very impressive and includes almost all the public cloud providers and bare metal solutions as well. For a given use case where there is a requirement to deploy a large number of clusters, this solution gives an immense advantage, letting you configure and manage clusters with more confidence and ease.

So, the demo concludes our video for the day. Hope you enjoyed the content, and thanks a lot for joining. Until next time, bye, thanks.