Do you know what it looks like to provide a managed Kubernetes service in a telco? It means managing hundreds, even thousands, of Kubernetes clusters across hundreds of locations. What kind of locations? Well, this might look familiar, or even this, but it is also this. As an industry, we are not quite there yet, but the speed with which we are approaching this scenario is enormous. Hello, everybody. I am Vuk Gojnić, and for the past two years I have been running the Kubernetes platform team of Deutsche Telekom Technik, the network technology unit of Deutsche Telekom in Germany. My cloud-native journey started in mid-2019, when I was asked to look into providing a Kubernetes platform for telco workloads. What kind of workloads? Well, it is certainly not what you would expect in an enterprise, unless you are coming from a telco. We run service platforms and network functions there. Many of you are using one of those technologies right now to participate in KubeCon, and it is proliferating very fast in that space. Can you guess it? I guess you guessed it well: it is 5G. This is also what we are building and providing our platform for, because 5G also requires platform infrastructure to run somewhere. When we began, we asked ourselves: how could we provision, maintain, and manage Kubernetes clusters, and the whole Kubernetes stack, at that scale, with a relatively small team of not more than ten SREs? We looked into what is common in the industry. One of the common and accepted statements is that Kubernetes is mature for production as it is. If this was true in 2018, it is even more true in 2020, and it is true not only for Kubernetes but for many other pieces of cloud-native technology. There are, of course, many advanced solutions and distributions on the market, and each of those brings a lot of options and benefits. The question is: what do you need? And we found just enough of what we need for our use case in the CNCF.
We got inspired by those who did the same or similar things before us, and we decided to follow their path. Fail we may, sail we must, expecting a long journey. We gave our project the symbolic name Das Schiff, which is German for "the ship", and our small but growing team eagerly embarked. All the pieces came together in the platform through a combination of Cluster API and Flux CD. This enabled us to enjoy all the benefits of GitOps as pioneered by Weaveworks, in this case, however, for managing telco and 5G infrastructure at scale, and all of that even on bare metal. Managing is actually not the right term. It is rather self-management: a situation in which the infrastructure takes care of itself most of the time, without too much attendance. Our work actually focused on creating the glue for all of this, in the form of a Git layout that enabled us to cover our multi-site, multi-infrastructure, and multi-cluster scenario. This is the foundation on top of which we added all the components that are necessary if you claim to provide a managed service. But if you are on premises and in a use case like ours, having a Kubernetes cluster is only one piece of the puzzle. For the other pieces, we had to go cross-foundational, if I may. You also need a modern data center networking stack, and it is more natural for a team that runs a Kubernetes platform to choose a network stack based on containers. This is what we found in SONiC, out of the Open Compute Project. They have some extremely interesting integration and fusion with Kubernetes on their roadmap. Think of it as having all the switches in a data center be Kubernetes nodes forming one big Kubernetes cluster, with the software for those switches running as a cloud-native application in that same cluster.
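To make the GitOps mechanics above concrete, here is a minimal sketch of how a multi-site, multi-cluster fleet could be wired up with Flux CD. The repository layout, URL, and all names (`das-schiff-fleet`, `berlin-core-01`, and so on) are hypothetical illustrations, not the actual Das Schiff layout:

```yaml
# Hypothetical fleet repository layout (illustration only):
#
#   das-schiff-fleet/
#   ├── sites/
#   │   ├── berlin-core-01/        # large core site
#   │   │   └── clusters/prod-a/   # Cluster API + workload manifests
#   │   └── berlin-edge-01/        # small far-edge site
#   └── infrastructure/            # shared platform components
#
# One Flux GitRepository plus one Kustomization per cluster keeps every
# cluster continuously reconciled against its own directory in the repo.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: fleet
  namespace: flux-system
spec:
  interval: 1m
  url: https://git.example.com/telco/das-schiff-fleet  # hypothetical URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: berlin-core-01-prod-a
  namespace: flux-system
spec:
  interval: 10m
  path: ./sites/berlin-core-01/clusters/prod-a
  prune: true    # self-management: drift and deletions are reconciled too
  sourceRef:
    kind: GitRepository
    name: fleet
```

With `prune: true` and a short reconciliation interval, Git remains the single source of truth: changing a directory in the fleet repository is the only action needed to change the corresponding cluster.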
The centerpiece of our approach is bare metal Kubernetes clusters, and to manage bare metal in a cloud-native way, we turned to Ironic, out of the Open Infrastructure Foundation. Ironic in combination with Metal3 gives a very good way to manage bare metal, and these are all good pieces of software for that function. With this we rounded up our approach, and we ended up with a platform in which we use only Git and the Kubernetes API to provision and manage bare metal servers and, ultimately, to create bare metal Kubernetes clusters with them. On that platform we are currently onboarding quite a few workloads, the 5G core among others. We can apply the same setup to bigger core locations where you have a couple of hundred servers, to smaller edge locations, and even to very small far-edge locations. There are many benefits due to which we prefer bare metal Kubernetes for the telco and 5G use case. These are, among others: less overhead, uncomplicated multi-tenancy, easier use of hardware acceleration and direct hardware access in general, and high flexibility and the possibility to customize at the operating system level. Are we done? Not nearly. We are working with our friends from Weaveworks and with the Firecracker team on a use case for micro-VMs acting as control planes in our resource-constrained far-edge scenario. This could, however, unlock much more vibrant innovation for use cases that go way beyond what we are trying to do. It is very interesting and very exciting. So this is the story of how we built a platform for cloud-native telco and 5G. However, it is no less important to work in the community and to help network functions become cloud-native, so that they run comfortably on such a platform.
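As a minimal sketch of what "managing bare metal through the Kubernetes API" means with Metal3, the following hypothetical manifest registers one physical server with the bare metal operator; Ironic then inspects and provisions it behind the scenes. The host name, MAC address, BMC endpoint, and image URLs are all illustrative assumptions:

```yaml
# Hypothetical BareMetalHost: declares one physical server to Metal3.
# The bare metal operator drives Ironic to inspect the machine over its
# BMC and to write the referenced image onto its disk.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edge-node-01                       # illustrative name
  namespace: metal3
spec:
  online: true
  bootMACAddress: "aa:bb:cc:dd:ee:01"      # illustrative MAC
  bmc:
    address: redfish://10.0.0.10/redfish/v1/Systems/1  # illustrative BMC endpoint
    credentialsName: edge-node-01-bmc-secret
  image:
    url: http://images.example.com/host-os.qcow2            # illustrative image
    checksum: http://images.example.com/host-os.qcow2.sha256sum
```

From here, Cluster API's Metal3 infrastructure provider can claim such hosts to form the bare metal Kubernetes clusters described above, so the whole flow from racked server to running cluster is expressed as declarative manifests in Git.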
Therefore, we are actively engaging in the CNF Working Group, and we see it as a driving force that will ultimately enable testing the conformance of cloud-native network functions against universally accepted cloud-native principles and best practices. The main paradigm of software and network function delivery in telco today is the so-called systems integration approach. It is based on testing and validating all the components, version-pinned for interworking, a couple of times a year. As well as this systems integration approach worked in the past, it is not fit at all for a cloud-native scenario and a highly dynamic environment. CNF vendors need to enable CNF operators to validate and test the platform-and-CNF combination on a continuous basis, ideally daily, so that they can assure its availability and the SLAs for the cloud-native network functions. This is what we are looking to see as one of the outcomes of the CNF Working Group. Finally, we are committed to giving back to the community, to sharing our experience, our know-how, and also the components that we developed on our own. Therefore, we are working with our partners at Weaveworks to create a composable telco 5G cloud-native platform that others can use and, hopefully, get inspired to contribute to. For more details, please visit our GitHub page, and feel free to contact me as well. And for the end, this conference would not be a conference without me saying that we are hiring, and indeed we are hiring for positions based in Germany. So if you like Kubernetes, bare metal, and networking, and if you are good at that, please feel free to contact me. Thank you everybody for listening in, and I wish you a great rest of the conference. Bye.
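The "continuous, ideally daily" validation described above could be sketched as a scheduled CI pipeline. This is a hypothetical illustration in GitHub Actions syntax; the helper scripts and test targets are assumptions, not an actual CNF Working Group deliverable:

```yaml
# Hypothetical nightly pipeline validating the platform-and-CNF
# combination, instead of a big-bang systems integration a few times
# a year. All script paths are illustrative placeholders.
name: cnf-platform-validation
on:
  schedule:
    - cron: "0 2 * * *"   # every night at 02:00
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy current platform + CNF combination to a test cluster
        run: ./hack/deploy-test-env.sh      # hypothetical script
      - name: Run cloud-native conformance checks against the CNF
        run: ./hack/run-cnf-testsuite.sh    # e.g. driving the CNCF CNF Test Suite
      - name: Verify SLA-relevant behavior (upgrades, restarts, failover)
        run: ./hack/run-sla-checks.sh       # hypothetical script
```

The point of the sketch is the cadence: every platform or CNF change is exercised against the combination within a day, so conformance and SLA regressions surface continuously rather than at the next integration cycle.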