I want to talk to you a little bit about context. So yesterday, Abby made the announcement that we renamed our Elastic Runtime to Application Runtime, to help be more descriptive. We've also really embraced, as a community, the container runtime as a way to run things with that Kubernetes container abstraction, while ensuring that there's a high degree of operational value to it. But one of the things that I don't think we've spent enough time on yet, I want to right now. And that's the question of: when do I use one or the other?

It's not a choice of "I'm using one platform and everything should go on it." We live in the real world, and there are different abstractions that are valuable for different reasons. There are use cases that fit more appropriately in a Kubernetes cluster, and there are scenarios where things fit more appropriately in the Cloud Foundry Application Runtime. So I can't tell you what those scenarios are, right? It really is a question of what's the right tool for the right job, and whether you can have a great experience operationally keeping these platforms alive. But what I can tell you is that there's clearly some emergent commonality among the different enterprises I've spoken with that are pairing Kubernetes with the Cloud Foundry App Runtime.

Now, we all know the cf push experience for developers: the custom applications go on the application runtime side. On the container runtime side, some of the things that I've seen are, number one, a lot of independent software vendors, or ISVs, are shipping their software to companies as Docker container images or OCI container images. So that's a great use case; you can land it on Kubernetes. There are other examples where there are data services that were actually born of the container era, right? They're designed to work in Kubernetes, so they should run in Kubernetes.
There are other examples where you have an older application, a traditional application. Somebody might even say it's legacy, even if it's from last week. But a legacy application is designed a bit as a monolith. You could package that in a Linux container, and if you choose, you could run that on a Kubernetes cluster with Cloud Foundry Container Runtime. So as a community, and in your organizations, we're going to continue to pair these two abstractions together and make them work even more seamlessly. We're going to evolve as an industry; we're going to learn what's right at one abstraction versus the other.

So with that in mind, I want to invite someone up on stage who's going to give us a walkthrough of an example of pairing the Cloud Foundry Application Runtime with some machine learning that's running on top of Kubernetes. So if I could have Emma from SAP join me. All right, yeah, let's come up here. All right, well, thank you for joining us. Thank you. Yeah, so why don't we start with this one: can you tell everybody what you do at SAP?

OK, I am a developer at SAP. I work with the SAP Leonardo Machine Learning Foundation, and I work with open source technologies like Cloud Foundry and Kubernetes. So today, I'm going to be telling you how we do machine learning at SAP. Great, great. So you brought a diagram that you wanted to walk through. Yeah, but before that. Before that, here, why don't I give you this?

So let me give an overview of what we do at SAP. We at SAP are actually interested in two business use cases. These include software as a service and, I'll say, container as a service. So we have employed leading technologies to help us deliver these business use cases. With respect to software as a service, we would like to deploy intelligent business applications to the cloud environment.
And because of this, we have decided to use SAP's cloud platform, which is Cloud Foundry, to help us deliver, or run, these 12-factor-like apps. On the other hand, we have customers who would like to train their models, or who would like to run inference against their models. So we have employed Docker technology to containerize their models, and we use Kubernetes to run the resulting containers. The idea is that the applications which are running in Cloud Foundry are powered by the models which are running in Kubernetes.

So a look at the architecture diagram will really drive this message home. You see we have different tiers. We have three tiers, and these include the application tier, the compute tier, and the persistence tier. Cloud Foundry hosts the application tier, and the components of the applications here include tenant onboarding, APIs, and our use-case logic. For tenant onboarding, we leverage the security features of Cloud Foundry to help us bring our customers onto our platform. The APIs are what enterprise applications or developers can use to consume what we offer on our platform. The business logic is different for different kinds of models; for instance, you can have the business logic for image classification, or you can have business logic for some linear regression. And this is the gateway to the compute tier.

Now, the compute tier is where all the heavy lifting is done. It is actually composed of two different kinds of infrastructure: the inference infrastructure and the training infrastructure. On the inference infrastructure, we can host custom models. This means customers can bring their own models to us and have us serve them; we can host that. We also host open source models; for instance, we have the Inception model, which is currently running on our TensorFlow server.
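As a concrete illustration of serving a model like Inception behind a TensorFlow server: TensorFlow Serving's standard REST API exposes a predict endpoint of the form `/v1/models/<name>:predict` that takes a JSON body of input instances. The sketch below only builds such a request with the standard library; the model name and the fake input values are illustrative assumptions, not SAP's actual values.

```python
import json

def build_predict_request(model_name, instances):
    """Return the URL path and JSON body for a TF Serving-style predict call.

    Illustrative sketch only: no network call is made here; real code would
    POST `body` to the serving host at `path`.
    """
    path = "/v1/models/{}:predict".format(model_name)
    body = json.dumps({"instances": instances})
    return path, body

# Hypothetical input: one instance of three fake pixel values.
path, body = build_predict_request("inception", [[0.0, 0.5, 1.0]])
print(path)  # /v1/models/inception:predict
print(json.loads(body)["instances"])
```

The application tier's business logic would construct a request like this and forward it to the inference infrastructure in the compute tier.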
Can you maybe tell everybody what TensorFlow is, for those who might not know what it is? OK, TensorFlow is TensorFlow. Anyway, TensorFlow. That's the best answer I've ever heard. TensorFlow is TensorFlow. Clearly. What do I know? TensorFlow is an open source machine learning library that is from Google. So.

And we also have the training infrastructure, and this training infrastructure is characterized by long-running jobs. Usually, we have jobs that could run for one week or even more. And these kinds of jobs require massive volumes of data; I'm talking about data in the gigabyte or terabyte range. Because of this, we have taken measures to bring the data closer to the processing while training is going on, by way of data caching. We have also employed NVIDIA GPUs in order to make training efficient. The idea is that after training is completed, the model is uploaded into the model repository, which is in the persistence tier. And thereafter, containers which wish to serve this model can download it and use it for inference.

To drive this message home, I will demonstrate a specific use case of how we use Cloud Foundry as the gateway to our machine learning services. In this particular use case, I am a developer and I have trained my own model, but I would like to use Cloud Foundry, and I would also like to use Kubernetes, to deploy my model and to drive inference against it. In particular, when we watch this, everybody pay attention to the developer experience, because you're going to see that the CF command line tool has actually been expanded with a plug-in which interacts with the machine learning system. Yeah.

So as a developer, my first step will be to subscribe to the machine learning services that we currently have in Cloud Foundry. So maybe we can roll the, there it comes. And so after subscribing, I create an instance of the machine learning service, filling in all the relevant data.
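The train-then-serve cycle described above (training uploads the finished model to the repository in the persistence tier; inference containers later download it) can be sketched as a toy in-memory model repository. This is purely illustrative; all names below are invented and are not SAP's actual API.

```python
# Toy sketch of the model repository flow: the training side uploads the
# finished model artifact, and the inference side downloads it to serve.
# A real repository would be durable object storage, not a dict.

class ModelRepository:
    def __init__(self):
        self._store = {}

    def upload(self, name, model_bytes):
        """Training side: store the trained model artifact under a name."""
        self._store[name] = model_bytes

    def download(self, name):
        """Inference side: fetch the artifact so a container can serve it."""
        return self._store[name]

repo = ModelRepository()
repo.upload("my-model", b"trained-weights")  # after training completes
weights = repo.download("my-model")          # inside an inference container
print(weights == b"trained-weights")         # True
```

The point of the indirection is that training and serving are decoupled: many inference containers can pull the same published artifact without touching the training infrastructure.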
And because I would like to use this model from my laptop, I create service keys. OK, at this moment, the instance is being created. And I click on my new instance, and I generate service keys. These service keys give me credentials that I can use on my local computer to access the application, or access Cloud Foundry. So I have what I can use to identify myself, and the APIs that I can communicate with, if I want to use it to deploy my models.

So the next thing: we have developed a Cloud Foundry plug-in that has different kinds of commands. I have deploy, describe, undeploy, and upload. So in my file system, I have my model, and I upload it to the model repository. After it tells me that the upload was successful, the next thing I would like to do is deploy it. This is an asynchronous process, so there has to be some wait time. And it gives me a response, or a notification, when I run describe, to tell me that at this moment the model is still being deployed. So you see it's pending; the state of the model is pending. And how long does it normally take? A few seconds, depending on how large the model is. And it looks like a drag. Now it's succeeded.

So at this point, I can drive inference against my model. And this is the host and the pods. So here, this is an example of an application already deployed in the cloud, and we are driving an image classification request. Oh, that's a cute dog. Yeah. And the results come back. And so this is how we do machine learning at SAP. Thank you very much for listening. Thank you so much for coming. Appreciate it. So that was kind of cool.
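The asynchronous deploy step in the demo (deploy, then repeatedly describe until the state goes from pending to succeeded) can be sketched as a small polling loop. The `describe_model` stub below stands in for the real CF plug-in's describe command, which this sketch does not invoke; the state sequence is simulated.

```python
import itertools
import time

# Simulated state sequence: describe reports "pending" twice, then
# "succeeded" forever after. A real implementation would call the
# CF plug-in's describe command instead of this stub.
_states = itertools.chain(["pending", "pending"], itertools.repeat("succeeded"))

def describe_model(name):
    # Stub standing in for the plug-in's describe call.
    return next(_states)

def wait_until_deployed(name, poll_seconds=0.0, max_polls=10):
    """Poll describe until the model reaches the succeeded state."""
    for _ in range(max_polls):
        state = describe_model(name)
        if state == "succeeded":
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("model %s still not deployed" % name)

print(wait_until_deployed("my-model"))  # succeeded
```

Once the state reaches succeeded, the client can start sending inference requests, such as the image classification call shown in the demo.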