So we can start now. Hi, this talk is about how to create beautiful cloud-native landscapes. My name is Christian Berendt, and I have been working with OpenStack for over ten years. We are 23 Technologies; the company was founded in 2021 by myself. We are cloud-native experts located here in Germany. About half a year ago we partnered with Cleura, previously known as City Network, to build and run their managed Kubernetes service, which should be available soon, we hope. We are also preparing managed Kubernetes for some other CSPs here in Europe, and we are working together with Audi, Bosch, Trumpf and some other bigger companies in the Stuttgart area on software-defined manufacturing, to bring cloud-native technology to the OT world, to shop floors and so on. This talk is about where we are today with cloud-native workloads. Such a workload normally runs on a Kubernetes layer, and that Kubernetes layer runs on an infrastructure-as-a-service layer, sometimes based on OpenStack, sometimes on other solutions, but normally on top of an infrastructure-as-a-service layer. And there is a problem.
So the problem is that we want to deploy a lot of different cloud-native workloads, and there are a lot of ways to deploy those workloads on top of a Kubernetes layer. There are also many ways to deploy, maintain and operate Kubernetes itself, which runs on infrastructures that are all different. The problem is that it is really, really hard for the operator of an application to provide their software on top of a Kubernetes that is different everywhere. We ran into this issue ourselves a few times, and then we decided to found the company to make this easier. In the end we try to make happy users, so that an operator can deploy their workloads, their applications, on top of the same Kubernetes everywhere. Our goal is to provide a Kubernetes cluster that looks the same on any infrastructure out there. To be able to do this, we chose to use a Kubernetes-as-a-service layer between the infrastructure and the Kubernetes cluster, so we do not deploy the Kubernetes cluster directly on top of the infrastructure with some tooling like kops or kubeadm. Instead we have a control plane in between, Kubernetes as a service, that manages the infrastructure in a universal way. The infrastructure is still different, but that is fine now, because the now-happy user does not see the underlying infrastructure. To be able to do this, we use "universal Kubernetes at scale", developed by SAP as an open source project.
It is called Gardener. The target of Gardener is to establish a control plane that is multi-cloud aware, so that you can deploy Kubernetes clusters on a lot of different infrastructure-as-a-service platforms, on-premise and off-premise, on multiple clouds: Alibaba Cloud, AWS, Azure, Google, Hetzner, Equinix, OpenStack, bare metal, VMware. It does not matter in the end, because the Kubernetes clusters look the same everywhere, so the workload does not know the underlying infrastructure. It is based on the infrastructure resources provided by the infrastructure providers, so you only use standard resources. We do not rely on AKS or EKS or something like that; we simply deploy our own VM resources, network resources and storage resources, on Ubuntu or Garden Linux or CoreOS, and then we run a control plane on top of this. This way we are able to manage thousands of clusters with one solution, so we have fleet management with a minimal TCO; in the end it is possible to operate thousands of clusters with one FTE. This makes it possible for a small CSP to offer a high-quality managed Kubernetes service without, yeah, the hassle of implementing it itself with Terraform or Ansible or something like that. How does it look? We have a central Gardener control plane that is running somewhere; it can be on the same infrastructure or on another infrastructure. It is just a Kubernetes cluster which hosts the API server for Gardener itself, the dashboard and so on. Then we have infrastructure providers, for example AWS, and for each region you deploy a so-called seed, and the seed hosts the control planes for the so-called shoot clusters. A shoot cluster is, in the end, a Kubernetes cluster for a customer.
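As an aside, such a customer cluster is requested declaratively in Gardener, as a Shoot resource handed to the Gardener API server. The following is only a hedged sketch of what such a manifest looks like: the kind and overall field layout follow the `core.gardener.cloud/v1beta1` API, but every concrete name, version and size here is a hypothetical example, not taken from the talk.

```yaml
# Hedged sketch of a Gardener Shoot manifest (all concrete values hypothetical).
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-cluster                  # hypothetical cluster name
  namespace: garden-myproject       # the Gardener project namespace
spec:
  cloudProfileName: my-openstack    # which cloud profile (infrastructure) to deploy into
  region: RegionOne
  secretBindingName: my-openstack-credentials  # reference to the IaaS credentials
  kubernetes:
    version: "1.24.0"               # a version offered by the cloud profile
  networking:
    type: calico
  provider:
    type: openstack
    workers:
    - name: pool-1
      machine:
        type: m1.large              # a flavor the cloud profile makes available
      minimum: 2                    # autoscaler bounds for this worker pool
      maximum: 4
```

Applying a manifest like this against the Gardener API server, for example with `kubectl apply`, is what triggers the reconciliation described later in the talk; deleting the Shoot tears the cluster down again.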
They can access the shoot clusters via the kubeconfig and can do what they want to do there. The seed hosts the API server and every service required by Kubernetes, and this way it is possible to have a shared control plane for a lot of clusters; you normally have around 200 shoot clusters per seed. This way the regions are independent from each other: when the central control plane dies, or one of the regions dies, it does not affect the other running seeds, and when a shoot cluster dies, it does not affect the other clusters. So in the end you can spawn a lot of shoot clusters that can be consumed by your customers, and they can run their workloads on top of them. It is also possible to make a hybrid deployment. For example, we have a public demonstrator of Gardener where the seed is running on the Azure cloud and the shoot nodes are running on the Hetzner cloud. So you can combine different cloud environments, to have different SLAs, or to have an on-premise control plane and off-premise clusters, or to allow bursting: when you have a large OpenStack environment on your side and want to spawn additional resources in the AWS cloud, you can do this by simply adding a new cloud profile to the control plane. The insight is that all of the required Kubernetes services, like the API server, the scheduler and the controller manager, run inside a Kubernetes cluster themselves; this is the so-called seed cluster. On top of the seed cluster you then add the additional services required to spawn the shoot clusters, so that you can control the underlying infrastructure via Kubernetes itself, via the machine controller, via the autoscaler and so on. This slide is broken, yeah. So the seed cluster contains the control plane for a shoot cluster, and the shoot cluster is the workers for a customer: you simply attach the workers to the seed cluster, and then you have a so-called shoot cluster, and the shoot cluster is for the customer. The shoot cluster cannot access the seed cluster, so it is safe in the sense that customers can only control the things that are running in their shoot cluster and on their workers; they cannot reach other customers or anything like that via the seed cluster. And you can control everything through the Gardener cluster API on top of this, via an API, via a UI, or you can simply write a manifest, define your Gardener cluster there, and it converges on your Kubernetes clusters. This is the central entry point for the users, for the CI processes and things like that. Okay, so this is the end of the presentation. I think we have a few minutes left, so we can have a short demonstration, if my internet connection is working; hopefully not like the last time, so I have to check. Okay. Yeah, so this is the login. You can simply log in there via GitHub; there is a Dex proxy, so you can integrate whatever identity provider you want, and then you can access the UI of Gardener. That is all you can see there in the end: you have your clusters running somewhere, and here you can see where they are running, this is the so-called cloud profile, and here you see the versions and the readiness and the health of the cluster. There is a dialogue to create new clusters, and here you can see the different cloud profiles, for example for AWS or Azure or Google or some OpenStack-based clouds. You can simply say, here is the Keystone endpoint, I have this and this image and those flavors available, and then you can just click together your cluster. You can also do this via a manifest file: you just have this definition, then you give it to the Kubernetes API server of Gardener, and it spawns the cluster, and that is all in the end. So we can
also... I have to move a little bit to see it; that is not so easy. Yeah, it does not work this way, so I cannot see anything here, but you just create the cluster, and in the background a reconciliation process runs: it creates the required control plane services in a namespace in the seed, it spawns the Kubernetes API server and so on, and then it starts to create the required infrastructure resources for the specified cloud profile, attaches them to the seed, and then the Kubernetes is ready. It takes around five minutes at the moment to spawn a new cluster, so it is not optimized, and it heavily depends on the underlying infrastructure: when your infrastructure is slow, it will take a long time to spawn the cluster; when you have a fast infrastructure, it will take only a few seconds or minutes. We try to optimize it, but yeah, you have a little overhead, to spawn first the network, then the router, then maybe the load balancer you add to it, and so on. But I think this way it is possible to, yeah, spawn a lot of Kubernetes clusters on an infrastructure without caring about the underlying infrastructure in the end, because everything is consolidated in extensions. You can add new infrastructures when they are not yet supported; for example, we added the Hetzner cloud extension, so that it is now possible to run a Kubernetes cluster on Hetzner cloud as well, and it is not a big deal to add further extensions there. Yeah. So, thank you for being here and for your time.