Hello and welcome to this year's KubeCon 2021, a year with rather unusual circumstances. But today, we want to talk about the future and even more unusual circumstances. My name is Sascha Haase, I'm the VP of Edge at Kubermatic, so let's get right into it.

Kubermatic is a company that has been contributing to cloud-native environments for a couple of years now. We are the leading contributor in Europe and one of the top ten contributors overall to the Kubernetes project. We also provide our own product portfolio around Kubernetes, the Kubermatic Kubernetes Platform and KubeOne, and additionally offer training and consulting for the CKA, CKS, and CKAD certifications. But I think it is illustrated a lot better by our latest puzzle; it's going to be released soon, and you'll get a notification. The cloud is quite big and quite massive, and it requires a lot of knowledge to get around. In Germany especially, but also globally, a lot of well-known companies trust us to be their guide within the cloud-native space. But we think that to further strengthen this footprint, we need to see whether we can be a jack of all trades and a master of none. And this is going to get quite interesting.

Distributed computing: everyone in the cloud-native space knows what this means. It means we have a data center with a lot of individual hardware nodes that are connected via a network. We know this is a very feasible way to orchestrate massive workloads, like CERN does, and to crunch data end to end. Cloud computing adds just a little bit to distributed computing: it adds a lot of usability. Easy-to-consume infrastructure and services let you succeed much more easily than setting up your own data center, getting the network to work, and getting all the applications to work. Just go to a cloud provider and you will get all of that very easily.
You have a nice user interface, and you have APIs that let you consume all of the topics mentioned above very easily. And there's one specialty: I think Kubernetes is somewhat of a threat to those services, because it also takes away a lot of the things you would otherwise have to do manually. But it's very compatible with cloud.

Edge computing is something new relative to these topics, and it is quite straightforward, from my point of view at least. I think edge would qualify as scattered compute, because we involuntarily need to distribute application workloads; we cannot choose to run them in a centralized data center, as we've just seen. I also think edge is simply the development required to support what we already do in our daily lives with our mobile phones and mobile devices, and the industry just needs to adapt to this. It's nothing too spectacular, I think, but like most other things, it's quite hard in the details. I also think there is no "cloud or edge": edge includes cloud. To stay relevant and fit for the future, it's an absolute necessity to know what you can do in the cloud and what you can do at the edge. The environments we refer to as edge are just something we add onto the existing technology stack.

Let's have a look at how Kubernetes looks at the edge. Kubernetes itself is quite well known, but it will face some new topics. If we consider edge end to end, we won't have a dozen or even multiple dozens of locations; we'll have hundreds or even thousands, depending on how far into the edge you want to go. The specialty is harsh environments in very remote locations, but I think that is something we will see later in the adoption of edge. The dynamic provisioning of applications in data centers, at service provider edges, and perhaps on constrained devices is something new.
Everything that comes with it, like where the data is stored, how we transport it, and security-relevant topics, will be a challenge as well. Application lifecycle management is something we will need automation for, as is the integration with legacy environments.

Why is Kubernetes a good idea in combination with edge? Well, you have standardized APIs, you have immutable infrastructure, and you have a built-in form of resilience and automation. As a side effect, it also reduces TCO, makes you more resilient, and lets you bring products to market faster. When we look at the forecasts around edge, the service provider edge is what most people consider very relevant in both the near and the far future, because it will provide capabilities that are sorely needed.

This is how we think centralized management could look at the edge: you have your multi-cluster management in the cloud, and from that central environment you orchestrate regional edges, service provider edges, and even more remote locations. Here we have to recognize that the capabilities at each location are very purpose-driven. A robot within a production facility has a fair amount of compute and network available, but it is not comparable to a data center, which you can place in a plant or a power plant quite easily and where you have essentially the same capabilities as within a regular data center. The Kubermatic Kubernetes Platform is very well suited to fight the final boss, because with our multi-cluster management we are already able to set up highly available Kubernetes with Kubermatic on top, which is containerized software as well, and then you can orchestrate a lot of clusters quite easily.
This means that the service provider edge we have been talking about is already within our reach. But let's be honest: no one cares about infrastructure. I have never seen someone get excited about tap water. We take it for granted, and I think it is the same in IT. What we really care about are the applications everyone needs to do what they want to do and to achieve the goals they have.

Edge will add something to this, and it is a cost conundrum. Applications will have to become truly distributed to be considered valuable in the future. Applications will no longer be deployed by a human operator, and applications will have to adapt to the scarcity of resources we will see, where we will essentially fight for compute at very relevant locations. Think about festivals, for example. We have applications within our Kubernetes platform as well. This is a nice illustration done by my colleague, who put the emphasis on our monitoring, logging, and alerting stack; we use the quite popular Grafana, Prometheus, and Loki stack. We emphasized resource scarcity with our latest development efforts, but you can see that you would have centralized and decentralized components as well.

How do we do that in a cloud-native way? Operators are a principle that has been known within cloud native for some time now. Essentially, an operator does what a human operator would do; it's just programmed to do so, and it is tailored to what the application needs. The benefits of using an operator are quite easy to sum up: the human operator no longer needs to take care of deployment, upgrades, or backups; Kubernetes works extremely well with an operator-driven approach; and application operations become very testable.
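To make the operator pattern concrete, here is a minimal sketch of the reconcile loop at its core: compare the declared desired state with the observed state and derive the actions a human operator would otherwise perform by hand. The `AppSpec`/`AppStatus` types and the action names are invented for illustration and are not part of any real Kubermatic API.

```python
from dataclasses import dataclass


@dataclass
class AppSpec:
    """Desired state, as a user would declare it in a custom resource."""
    version: str
    replicas: int


@dataclass
class AppStatus:
    """Observed state, as reported from the cluster."""
    version: str
    replicas: int


def reconcile(desired: AppSpec, observed: AppStatus) -> list:
    """Return the actions needed to drive observed state toward desired state.

    A real operator would perform these actions via the Kubernetes API and
    be re-triggered on every change; here we only name the actions.
    """
    actions = []
    if observed.version != desired.version:
        # A human operator would run an upgrade playbook; the operator
        # encodes that operational knowledge instead.
        actions.append(f"upgrade to {desired.version}")
    if observed.replicas != desired.replicas:
        actions.append(f"scale to {desired.replicas} replicas")
    if not actions:
        actions.append("no-op")
    return actions
```

Because the loop is a pure function of desired and observed state, it can be unit-tested in isolation, which is exactly the "application operations become testable" benefit mentioned above.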
It's a lot better than asking a colleague whether they have run the load tests and written up the protocol, and it greatly eases management in hybrid cloud or edge environments.

We have another tool as well, one of our younger releases: KubeCarrier. KubeCarrier takes care exclusively of application management, especially when you already have a lot of clusters. KubeCarrier relies on operators. It provides a service catalog and a multi-tenancy-driven approach; the catalog contains operators, which are then chosen by a user. You opt your cluster in to a specific catalog, and then you can consume the catalog items. This results in end-to-end automation for delivering applications. Furthermore, we can set up KubeCarrier as an application on top of the Kubermatic Kubernetes Platform: we use our centralized environment in the cloud and then manage all the different locations and the software in those locations.

If you really want to be successful in the upcoming years, I think you also have to be aware of some changes ahead. We talked about the individual capabilities that are purely purpose-driven and that will drive automation in the future. The different locations have special requirements as well. For example, there is no better place to operate a data center than in or near a power plant: you have everything you need there. On a shop floor in a production facility you can also deliver quite a lot of computational workload, but there are constraints: space is very expensive, the production areas are already spoken for, and a lot of concerns about machine management and production management will factor into every decision you make. Then you have the machines themselves, which offer only a subset of the functionality available to the production management compute in the same facility.
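The catalog flow described above can be sketched as a small model. This is purely illustrative: the class and method names here are hypothetical and do not reflect KubeCarrier's actual API, which is built on Kubernetes custom resources.

```python
class Catalog:
    """A catalog of operator-backed services offered to tenant clusters."""
    def __init__(self, name, operators):
        self.name = name
        self.operators = set(operators)


class Cluster:
    """A tenant cluster that can opt in to catalogs and consume their items."""
    def __init__(self, name):
        self.name = name
        self.catalogs = []
        self.installed = set()

    def opt_in(self, catalog):
        """Opt this cluster in to a catalog, making its items consumable."""
        self.catalogs.append(catalog)

    def consume(self, item):
        # Only items offered by an opted-in catalog may be consumed;
        # consuming installs the corresponding operator, which then
        # handles the application's full lifecycle on this cluster.
        for catalog in self.catalogs:
            if item in catalog.operators:
                self.installed.add(item)
                return f"{item} installed on {self.name}"
        raise LookupError(f"{item} is not offered to {self.name}")
```

The point of the model is the end-to-end chain: opting in is a one-time declaration, and every later consumption is handled by the chosen operator without a human in the loop.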
They also share some common denominators, all of them, no matter how small or big: in the future they will have compute available, they will have network available, they will have storage available, and hopefully everyone will have some sort of Kubernetes adoption available as well. With these denominators, you are able to do things. You are able to do container automation. Balancing containers is very easy; we've seen that in recent history. It's totally not a thing, and it does not influence our daily life at all.

But getting back to what I said earlier: when we have an abstraction, and we use the declarative approach we grew fond of with our APIs, we can make all of this work. We will have the metadata to abstract the infrastructure, we will add the metadata we need for the applications, and with operators we already have quite a head start. If we then create a logic that deploys applications to a suitable compute, network, and storage target for a given application, we are able to do quite a lot when we finally fight the final boss.

Well, we have something at Kubermatic called KubeCarrier. This is how we think the future could look: if you set up a multi-cluster management solution, if you bring KubeCarrier on top, and if you then start to use the principles we just pointed out, you will be able to deliver different applications to different locations at any given point in time. You will not need to take care of all the complicated things manually; you will abstract and automate them away from the human operator in the same procedure. The software will decide whether the parameters of compute, network, and storage are actually okay for the application, because the application, via its operator, communicated them the very same way, and then it will deploy the software automatically once the conditions have been met.
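A minimal sketch of that placement logic, assuming both applications and locations publish their requirements and capabilities as comparable metadata; the field names and the tightest-fit policy are invented for illustration, not taken from any KubeCarrier internals.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Capabilities:
    """Metadata published by a location, or demanded by an application."""
    cpu_cores: int
    network_mbps: int
    storage_gb: int

    def satisfies(self, required):
        """True if this location meets all of the application's demands."""
        return (self.cpu_cores >= required.cpu_cores
                and self.network_mbps >= required.network_mbps
                and self.storage_gb >= required.storage_gb)


def place(app_needs, locations) -> Optional[str]:
    """Pick the smallest location (by CPU) that still satisfies the app.

    Preferring the tightest fit keeps larger locations free for more
    demanding applications - one simple answer to resource scarcity.
    """
    candidates = [name for name, caps in locations.items()
                  if caps.satisfies(app_needs)]
    if not candidates:
        return None  # no suitable target; the deployment is held back
    return min(candidates, key=lambda name: locations[name].cpu_cores)
```

With location metadata for a power-plant data center, a shop-floor rack, and a single machine node, the same function routes a lightweight sensor app to the machine and a heavier workload to the shop floor, without a human choosing the target.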
This is how we think the future could look and how you could win the fight against the final boss. Thanks! Please reach out to us at kubermatic.com, dig into our latest podcast at edgenative.io, or write me an email at sascha@kubermatic.com. Thanks a lot, and have a nice day.