So hi, my name is Stefan Haas, I'm a software engineer at NavOps, and today I want to talk about mixed workloads on OpenShift or Kubernetes with NavOps.

On the first slide, I want to show you the main features of our NavOps suite. On the one hand, we have virtual multi-tenancy, which means you can share your Kubernetes or OpenShift cluster across teams and applications. We have mixed workloads, which I will dive into more deeply in a couple of seconds. We also have the feature of managed resource scarcity, which, as you can read here, enables you to drive high utilization of your resources. With NavOps you can define application workflows to address job dependencies you might have in your cluster. We have features for managing cloud resources, meaning you can prioritize whether your workloads run on-premise or on cloud resources. And one of our newest features is that you can run additional frameworks within your Kubernetes or OpenShift cluster.

But as I said, today I want to talk about mixed workloads. What is the challenge? Many Kubernetes and OpenShift users run, besides their containerized applications, non-traditional workloads: legacy applications, array jobs, HPC workloads, and so on. Re-engineering all of these applications is time-consuming and expensive, so users tend to separate the workloads into different clusters: on the one hand you have a running Kubernetes or OpenShift cluster, and on the other hand you have a cluster for your legacy workload. As I said, this increases cost, and it is suboptimal for your resource utilization. Just imagine your Kubernetes cluster is fully packed and you have services waiting for free resources, while your other cluster, the one for the legacy workload, sits empty or has resources to spare. Another challenge is sharing data and network connectivity between the clusters, and multiple clusters also increase your management effort.

With NavOps, we have a solution for that. We are using our product NavOps Command together with our product Grid Engine, which is a batch queuing system based on the Sun Grid Engine; a couple of you might still remember that. So what are we doing with Command? We are deploying a Grid Engine cluster within your Kubernetes cluster. As you can see in this picture, we have so-called execution daemons, which are comparable to the kubelet in the Kubernetes world, deployed as pods in a shared Kubernetes cluster; there is a small sketch of such a deployment below. In the end, you will be able to run containerized applications as well as traditional batch and application workloads in one and the same cluster. This means your non-containerized applications run as guests in the Grid Engine pods, and you submit them the way you always have; see the job-submission sketch below. It's your choice which Kubernetes distribution you use; it can even be a vanilla Kubernetes installation, OpenShift, or whatever.

Additionally, as I said, we have advanced resource-sharing policies and built-in low-latency scheduling, which means we are replacing the stock scheduler with ours. You can do fair-share allocation, backfill scheduling, and so on; there is a toy illustration of the backfill idea below.

So what are the benefits? You can avoid siloed, separate clusters. You will reduce your infrastructure costs. You improve utilization and service levels. And you can move your non-containerized workloads to containerized workloads at your own pace.
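To make the architecture a bit more concrete, here is a minimal sketch of how execution daemons could be deployed as pods on every node of a shared cluster, using the standard Kubernetes Python client. This is one plausible shape, not the actual NavOps Command deployment: the namespace, labels, image name, and qmaster address are all assumptions for illustration.

```python
# Hypothetical sketch: run Grid Engine execution daemons ("execd") as a
# DaemonSet, so every node in the shared Kubernetes cluster can accept
# traditional batch jobs alongside regular pods.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

execd = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="gridengine-execd", namespace="navops"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "gridengine-execd"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "gridengine-execd"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="execd",
                        image="example.com/gridengine-execd:latest",  # placeholder image
                        # The execd must reach the qmaster; this service
                        # address is an assumption for the sketch.
                        env=[
                            client.V1EnvVar(
                                name="SGE_QMASTER_HOST",
                                value="gridengine-qmaster.navops.svc",
                            )
                        ],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_daemon_set(namespace="navops", body=execd)
```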
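Once the Grid Engine cluster is running inside Kubernetes, legacy jobs are submitted the way Grid Engine users always have: with qsub. As a small example, here is how a classic array job could be submitted from Python. The job script name is hypothetical; qsub itself and its -t (array job) and -cwd (run in current directory) flags are standard Grid Engine options.

```python
# Minimal sketch: submit a classic Grid Engine array job via qsub.
# "run_simulation.sh" is a hypothetical job script; -t 1-100 creates an
# array job with 100 tasks, -cwd runs each task in the current directory.
import subprocess

result = subprocess.run(
    ["qsub", "-t", "1-100", "-cwd", "run_simulation.sh"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # Grid Engine echoes the assigned job ID
```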
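And to give some intuition for one of the scheduling policies mentioned above, here is a toy illustration of the backfill idea; this is a generic sketch of the concept, not NavOps' actual implementation. While a large job waits for enough cores, smaller jobs may start ahead of it, as long as they are guaranteed to finish before the large job's reserved start time.

```python
# Conceptual backfill illustration: small jobs jump ahead of a blocked
# large job only if they fit the free cores AND finish before the large
# job's reserved start time. All numbers are made up for the example.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int
    runtime: int  # requested wall-clock time

free_cores = 4
queue = [Job("big", 8, 10), Job("small-a", 2, 3), Job("small-b", 2, 4)]

# "big" cannot start now; it gets a reservation for when 8 cores free up.
reserved_start = 5  # assumed time at which 8 cores become available

started = []
for job in queue[1:]:
    if job.cores <= free_cores and job.runtime <= reserved_start:
        started.append(job.name)
        free_cores -= job.cores

print(started)  # ['small-a', 'small-b'] — both finish before "big" starts
```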
One more point on migrating at your own pace: sometimes you don't even have the chance to move a workload at all. Just imagine a third-party application where it is not that easy to move it into a container without the help of the vendor. And last but not least, the shared cluster will simplify your management. Thanks for your attention.