Okay, I will start slowly because I probably have more content than I have time for. My name is Florian Müller. I'm a technical lead at the SAP Cloud Platform. You have probably already seen a big portion of what I'm showing here in the keynote; I will go into some more details. And you're probably all here to see the demo actually working, so let's see if we get to that agenda point. Talking about the agenda: introductions, what I'm doing here. Then the many-services, many-platforms problem that I already mentioned in the keynote. Then I talk a little bit about the service manager, in a bit more detail than in the keynote. I will explain how we do instance sharing across platforms, something I just mentioned in the keynote but haven't really talked about. I go a little bit more into the details of the Peripli project, and then I will talk about some of the Peripli enhancements that we are doing for SAP. So I talked about the plug-in stuff, and I will explain a few of the plug-ins that we are building for SAP to give an idea of what this is all about. Very quick wrap-up of how a service broker works. We have an admin or a developer talking to the Cloud Controller or the service catalog, depending on which platform we are talking about, making a request to the service broker to, for example, create a new instance or bind an instance or something like that. The service broker will then somehow do stuff with the service to create, for example, the instance, return the credentials back to the platform, and the platform will present these credentials to the application. For the rest of the talk, I'm not really interested in the application and the service part. I'm only focusing on this Cloud Controller slash service catalog and the service broker. So one platform, one service broker: not a big deal.
One platform, multiple service brokers: well, if you're talking about four service brokers, it's not a problem. If you're talking about hundreds of service brokers, like we will eventually end up with at the SAP Cloud Platform, then this gets more of a problem. If you add more platforms to that, then at some point it gets ugly, and our solution to that is the service manager. Basically, we are not registering the service brokers directly at the platforms anymore; instead we are registering them at the service manager directly. We're also registering the platforms at the service manager. This gives us a few advantages. First of all, if a new platform joins the environment and is registered at the service manager, the service manager can figure out which of the brokers should actually be registered at that platform. We can apply policies to it. We can filter the catalog, for example: for a Kubernetes cluster, we can decide that some services shouldn't be visible, some plans shouldn't be visible. We can actually add stuff to it, we can add new plans and services, but I'm going into this a little bit later. We can apply other policies. For example, we can do simple quota checks. Whenever a platform wants to create a new instance, the service manager knows how many instances already exist for a certain customer and can do a quota check and say: well, you have a quota of five instances, you're trying to create your sixth one, I'm not doing this. So the broker will not even be asked to create a new instance. We can do a lot of stuff here, and this policy thing is really important for us, because it lets us put SAP-specific logic into the whole flow. I'm going to talk about the architecture a bit later, and then we can go into the details. The other thing that this gives us is instance sharing.
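The quota check described above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual service manager code; the type and field names are made up for this example.

```go
package main

import "fmt"

// quotaChecker is a hypothetical policy object: it knows the per-customer
// instance quota and how many instances each customer already has.
type quotaChecker struct {
	quota     int            // allowed instances per customer
	instances map[string]int // customerID -> existing instance count
}

// allow reports whether the customer may create one more instance. If it
// returns false, the service manager rejects the provision request before
// the broker is ever called.
func (q *quotaChecker) allow(customerID string) bool {
	return q.instances[customerID] < q.quota
}

func main() {
	q := &quotaChecker{quota: 5, instances: map[string]int{"customer-42": 5}}
	fmt.Println(q.allow("customer-42")) // false: the sixth instance is rejected
	fmt.Println(q.allow("customer-7"))  // true: this customer has no instances yet
}
```

The important point is that this check runs centrally, in front of all brokers, because every open service broker call flows through the service manager.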
A few typical use cases. One is you have a database created for a Cloud Foundry application, and you now want to reuse it for an application in your Kubernetes cluster. Now, I already hear people screaming, you're doing it wrong: this looks like you have two microservices, and each microservice should have its own database. Yes, that's true. But there are scenarios where you really want to share the same database, because the data in the database is huge and you don't want to transfer data back and forth, or the database or backend system might be expensive and you don't want to have two of them, and so on. So there are use cases for that. The other typical use case is you have a message queue and both ends now need an instance of this message queue, and instance sharing across platforms gives us the ability to actually establish this connection. Instance sharing is already possible without a service manager, right? It's pretty easy: you create a service key, you somehow transfer the key to the other platform, and then the other application can use it. But you basically lose control of all of this. For example, if you're deleting the instance, do you know who else is using it? No, because it's a manual process. So this is not something that scales. Instance sharing with the service manager works a little bit differently. All requests go through the service manager, and I should stress that it's the open service broker requests. We are not proxying any traffic between the application and the service. That's not our business, right? It's only the open service broker calls that go through the service manager. This makes it possible for the service manager to track all the instances. So the service manager always has a complete list of all instances of all platforms for all services.
So what we are now doing to enable instance sharing is adding a reference plan to each service in the catalog that should be shareable. Which means, when we register a broker and there are services that should be shareable, the service manager will add another plan to the catalog for each of those services, and we call it the reference plan. And this will then show up at the platform. So for, let's say, a Kubernetes cluster, it looks like a regular plan, but it's a plan that doesn't really exist in the broker. It's a virtual plan, basically. When we create a new instance with this reference plan, we need the ID of the original instance that we want to share. The request goes to the service manager, the service manager identifies this as a reference plan, we store the mapping between the two instances, and we return success. So the broker is actually not involved in this whole thing; it's a mapping that happens within the service manager. And the nice thing about this is that the brokers don't need any special support for instance sharing across platforms. This just works with any broker. In fact, although we don't have to talk to the broker, we at SAP will do that; it's just not implemented yet. We will tell the broker that there is a new customer for this instance, and how this works I will explain in a minute. Now, in the target platform, there's a new instance created with this reference plan. And for the platform, for the Cloud Controller or the service catalog, this just looks like a normal instance. So we can create a binding, get credentials for this instance, and everything works out. But there are some challenges with instance sharing. Who can share which instance where? Here, again, we need some policies, and that's where we need the service manager as well, to check those policies. And these are probably policies that are very, very specific to the cloud provider.
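A minimal sketch of the catalog augmentation described above: for every shareable service, the service manager appends a virtual reference plan before handing the catalog to the platform. The types mirror a simplified subset of the OSB catalog; the ID scheme is illustrative, not the actual implementation.

```go
package main

import "fmt"

// Plan and Service model a simplified subset of an OSB catalog.
type Plan struct {
	ID, Name string
}

type Service struct {
	Name      string
	Shareable bool
	Plans     []Plan
}

// addReferencePlans appends a virtual "reference" plan to every shareable
// service. The broker never sees this plan; when an instance is created
// with it, the service manager only stores a mapping to the original
// instance and returns success.
func addReferencePlans(catalog []Service) []Service {
	for i, s := range catalog {
		if s.Shareable {
			catalog[i].Plans = append(s.Plans, Plan{
				ID:   s.Name + "-reference", // illustrative ID scheme
				Name: "reference",
			})
		}
	}
	return catalog
}

func main() {
	catalog := []Service{
		{Name: "postgres", Shareable: true, Plans: []Plan{{ID: "p1", Name: "small"}}},
	}
	for _, p := range addReferencePlans(catalog)[0].Plans {
		fmt.Println(p.Name) // prints "small", then "reference"
	}
}
```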
So it probably looks completely different for IBM than it looks for SAP, because the domain model and how customers are modeled and how permissions work are different. So there's no one-size-fits-all solution. The second challenge is: who actually owns the instance? If I share an instance, both the reference instance and the original instance now have owners. Who can make updates to the instance? Again, this is a thing for policies and very specific. If there is an owner of an instance, can I transfer the ownership? That's really interesting. And what should happen if I delete an instance? If I delete the original one and the delete goes through, I actually break all the shared instances. That's something I probably don't want, but maybe in some cases I actually want that behavior. If I delete a reference instance, well, the reference instance goes away, but does that mean I also want to delete the whole instance? What if I actually want to get rid of the platform that originally created the instance? So I want to get rid of the Kubernetes cluster that created the instance, while the others that have shared this instance still need access to it. So this whole instance sharing across platforms is actually pretty difficult, and the plug-in mechanism that holds the policies in the service manager can solve that, or, well, we can at least implement the policies that we need. Coming to the Peripli project that I also mentioned in the keynote: it's an open service manager implementation by SAP, basically implementing what I've just talked about. We started in February this year. It's written in Go. It runs on Cloud Foundry and Kubernetes. At SAP we are currently running it on Cloud Foundry, but in the long run we will probably move it to Kubernetes.
It consists of four main components: the service manager itself, a service broker proxy for Cloud Foundry and one for Kubernetes, which I will explain in a second, and a command line tool. That's not really surprising. We have bi-weekly calls with interested parties, so anybody who is interested in this project can just dial in and hear what has happened, what the current status is, what ideas and scenarios there are, whatever we talk about. And of course, the source code is on GitHub, and you can take it and use it. As I mentioned in the keynote, it's working as it is. So you can take it, install it, and it works. It has no plug-ins, so it will not do any checks; it will let everything through, which is good for testing, but probably not something you want in production. The high-level architecture of the Peripli project looks like this. We have the service manager in the middle. We have service brokers that we can register here. We have a command line tool that talks to the service manager API. The service manager API has basically two big blocks. One is more administrative, like registering brokers and platforms. The other one, and I mentioned this here, is that you can actually use it to create instances and bindings, well, service keys, I should say. So the service manager is actually itself a platform. You don't need a platform down here to talk to brokers. We will use this in some use cases where we have a customer account, but this customer account has no platform attached to it. Still, we can create instances like a database or something like that. And in a later step, the customer can add Cloud Foundry, Kubernetes, whatever, and then reuse them. I will talk about this in a moment, but let me start with this stuff. For Cloud Foundry and for Kubernetes, I said we have what we call a broker proxy. Those are these two green boxes that live in Kubernetes and Cloud Foundry.
So whenever a broker is registered at the service manager, this registration will hit the broker proxies. The broker proxy will then use the native APIs of the platform, the Cloud Controller or the service catalog, to register itself in the name of the broker. So whenever the Cloud Controller or the service catalog wants to talk to the broker, it actually talks to the broker proxy, which then forwards the request to the service manager, which does its policy thing and then forwards the request to the actual broker. This works for Cloud Foundry and Kubernetes. We have implemented that, but there can also be other platforms. This is just the open service broker API plus a few additional calls, so it's relatively easy to implement this for other platforms as well. In fact, at SAP we have two proprietary platforms, and for one of those we will probably do this. So this is completely open, and you can add whatever platforms you like. Another way of getting instances is this. We have some applications that do not run in a platform, for example on a plain VM or somewhere else, and they also should be able to consume services. That's what the service manager API is for. The application will just ask the service manager: give me your catalog of services, then create me an instance, create me a service key, and so on. The difference between this API and the open service broker API down here is that when you create an instance through the platform path, the Cloud Controller or the service catalog is in charge of the instance. The whole lifecycle of the instance is in the hands of the platform, which means things like orphan mitigation are the platform's responsibility. If you choose this other way, then the service manager is the platform, and the service manager is responsible for the lifecycle of the instance. So if something goes wrong and orphan mitigation has to take place, the service manager will do that.
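The proxy's forwarding step can be sketched as a simple path mapping: an OSB call arriving from the platform is rewritten onto the service manager endpoint for that broker. The `/v1/osb/{brokerID}` path scheme here is an assumption for this sketch, not a documented contract.

```go
package main

import "fmt"

// forwardURL maps an OSB request that arrived at the broker proxy onto the
// corresponding service manager endpoint for that broker. The service
// manager applies its policies and then calls the real broker. The path
// scheme is illustrative.
func forwardURL(smBase, brokerID, osbPath string) string {
	return smBase + "/v1/osb/" + brokerID + osbPath
}

func main() {
	// A catalog fetch from the Cloud Controller for broker "b-1" becomes:
	fmt.Println(forwardURL("https://sm.example.com", "b-1", "/v2/catalog"))
	// https://sm.example.com/v1/osb/b-1/v2/catalog
}
```

Because the proxy only rewrites and forwards OSB calls, it stays stateless; all policy decisions live in the service manager.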
So the applications don't need to know about this. This is the easier path to getting an instance and getting a binding. It also helps us if this platform goes away for some reason, because we probably don't control it; we still have the instances here. So if you have a service that costs you 20,000 euro a month and your application dies, we in the service manager can still make sure we get rid of this instance so that it doesn't cost you any money anymore. OK, so I talked about the service broker proxies, and there are, of course, alternative ways of implementing this. But for SAP, this was the right choice because of the following assumptions. We don't want to change any platform. So we don't want to change any Cloud Foundry code, we don't want to change any service catalog code. It should just work with what's available today; we don't want to touch anything. It should also work if somebody brings their own Kubernetes cluster from somewhere else. We want to make sure that it works with that as well. We don't want an SAP Kubernetes cluster that is modified to support this. The second assumption is that it should be transparent to applications and brokers. Nothing should change. When we introduce the service manager into the mix, the application shouldn't notice; whether it's an old application or a new application, it's the same thing. The native platform tools keep working, nothing has to change. The same is true for the brokers. We have tons of brokers, so we don't want to touch them all. This should work without any change to the brokers. This is not 100% true: we have brokers that are tailored to Cloud Foundry, and those, of course, have to be changed. But those have to be changed anyway to work in the Kubernetes world, so whatever change is necessary has to be done anyway. The third assumption is that the service manager is platform agnostic. What does that mean? Let me jump to this slide again.
So the service manager speaks this open service broker API "plus plus" thing. The service manager itself has no idea about Cloud Foundry or Kubernetes. If you add another platform, it still works, because the service manager just serves an API that is well defined; it doesn't know what Cloud Foundry and Kubernetes are. And connected to that is that we cannot call back into the platforms. Since we don't know what platform we are talking to, we cannot make any calls back into it, because we don't know its API. Now, there are, of course, alternative ways of implementing the broker proxies. It's possible to embed them into the service manager. Just assume that you move these boxes right into the service manager; then the Cloud Controller or the service catalog would directly talk to the service manager. But to register a broker, you still need to call this API of the platform, which in turn means that you need the credentials to actually do that, which also means that the service manager would have to have the credentials of all platforms that are connected, which opens a whole other problem space that we don't want to go into. So this solution looked better for us. The second option: well, I explained already that the service manager itself is a platform, so we can create instances and service keys right in here. And theoretically, the service manager could then push the credentials into the platforms, however that looks. The problem with that is it's not the same as creating an instance from the platform itself. So it needs adaptation of however the credentials are transferred to the application. The application would not find, for example, the credentials in an environment variable anymore. It would be different. It's not a bad thing that it's different, but it would be different, and that would break probably most applications.
And we cannot really afford breaking all applications running on the SAP Cloud Platform. So although this would be an alternative, it's not an alternative for us. OK, plugins. I already mentioned them quite a lot. Every cloud setup is different, so all the policies I talked about are really different per cloud provider. So we cannot really put them into the open source project, like, here's a template, you just fill out this template and it works. It's not going to work that way. So what we have are plugins. Plugins are compile-time plugins: basically, you have to provide Go code and implement a certain interface. A plugin is really mighty. It can read, manipulate, and react to service broker requests and responses. Every request and response goes through a plugin, and a plugin can look at it. It can change the request and response before it hands them over to the next plugin in the chain. And it can also stop the processing entirely. Quota management is one of those cases: if one of the plugins finds out that you are over quota, it can stop the request to create a new instance and send back an error message. So plugins are very, very powerful. Plugins can also call microservices. We have a case where a compiled plugin is just a thin layer that calls a microservice somewhere else. This gives us different life cycles for the service manager and the logic that implements the policies, it can be handled by different teams, and so on. So we have both options: you can plug in tightly into the service manager itself, or you can move this code out into a microservice. Both have advantages and disadvantages, but this makes plugins very flexible. To give you an example of the plugins that we are building at SAP: some of them already exist, some we are working on. Authorization is one of those. If you take the open source implementation, you get authentication through OAuth, that's implemented, but it stops there.
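The plugin chain described above, where each plugin can inspect, change, or stop a request before the next one runs, can be sketched as middleware. This is a hypothetical interface for illustration; the real Peripli plugin interface differs in detail.

```go
package main

import "fmt"

// Request and Response are simplified stand-ins for OSB traffic.
type Request struct {
	Operation  string
	CustomerID string
}

type Response struct {
	Status int
	Body   string
}

// Handler processes a request; the innermost handler calls the real broker.
type Handler func(Request) Response

// Middleware wraps a handler: a plugin can read or rewrite the request and
// response, or return early and stop the chain entirely.
type Middleware func(Handler) Handler

// chain composes plugins so that the first listed plugin runs first.
func chain(h Handler, mws ...Middleware) Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

func main() {
	broker := func(r Request) Response { return Response{200, "provisioned"} }

	// A quota plugin that stops over-quota requests before the broker is hit.
	quota := func(next Handler) Handler {
		return func(r Request) Response {
			if r.CustomerID == "over-quota-customer" {
				return Response{403, "quota exceeded"}
			}
			return next(r)
		}
	}

	h := chain(broker, quota)
	fmt.Println(h(Request{"provision", "over-quota-customer"}).Body) // quota exceeded
	fmt.Println(h(Request{"provision", "customer-42"}).Body)         // provisioned
}
```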
If you want authorization, you have to build your own plugin to find out if the person making the request is actually allowed to do this. And this is, again, very specific to the cloud provider. What we do with the SAP domain model and the permissions there is probably not transferable to any other cloud vendor. That's why we're doing this as a plugin. Context enrichment, that's actually an interesting one. When you create a new instance, the platform sends a context object as part of the request. The context object for Cloud Foundry looks like this: it says the platform is Cloud Foundry, here's your org GUID, and here's your space GUID. For Kubernetes, it looks like this: the platform is Kubernetes, and you get a cluster ID and a namespace. In the end, brokers have to translate that to a customer. So if we do it like that, the broker has to check: oh, this is a call coming from Cloud Foundry, I somehow have to turn the organization ID into a customer ID; or, this is a Kubernetes call, I have to turn a cluster ID into a customer ID. If you add a third or fourth platform, you have to touch all the brokers again to make this transformation. So what we do instead is we have a plug-in that enriches this context. The enriched context looks like this. The platform has changed: it changed from Cloud Foundry or Kubernetes to SAP Cloud Platform. We still have the origin, so if a broker really wants to know where this request is coming from, this is not lost. But we are translating the org ID and the cluster ID to customer IDs at SAP. At SAP, we have basically two IDs: one is a global account ID, and the other one is a subaccount ID. Brokers at SAP just have to look at the subaccount ID to find out who the customer actually is. So if at some point you add another platform, and yet another platform, brokers don't have to change, because they can rely on the fact that there is a subaccount ID that they can use. Also very SAP-specific, of course.
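The context-enrichment step above can be sketched as a small transformation: take the platform-native context (Cloud Foundry org or Kubernetes cluster) and replace it with the SAP-wide account IDs, keeping the origin. The field names and the lookup stubs are illustrative assumptions, not the real account service.

```go
package main

import "fmt"

// Context is a simplified stand-in for the OSB context object.
type Context map[string]string

// enrich translates the platform-native context into SAP account IDs.
// The origin platform is preserved so brokers can still see where the
// request came from.
func enrich(ctx Context) Context {
	out := Context{"platform": "sapcp", "origin": ctx["platform"]}
	switch ctx["platform"] {
	case "cloudfoundry":
		out["global_account_id"], out["subaccount_id"] = lookupCF(ctx["organization_guid"])
	case "kubernetes":
		out["global_account_id"], out["subaccount_id"] = lookupK8s(ctx["clusterid"])
	}
	return out
}

// Stub lookups standing in for calls to a hypothetical account service.
func lookupCF(orgGUID string) (string, string)  { return "ga-1", "sa-" + orgGUID }
func lookupK8s(cluster string) (string, string) { return "ga-1", "sa-" + cluster }

func main() {
	cf := Context{"platform": "cloudfoundry", "organization_guid": "org-123"}
	fmt.Println(enrich(cf)["subaccount_id"]) // sa-org-123
	fmt.Println(enrich(cf)["origin"])        // cloudfoundry
}
```

With this in place, a broker only ever reads `subaccount_id`, regardless of how many platform types are connected.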
Broker and service visibility and entitlement management I already talked about. Not all brokers should be registered at all places; not all services and not all plans should be visible to all clusters or organizations. If customers buy a service, then this, of course, has to be propagated down to the platforms. We want this to happen almost in real time: if you buy something, you want to use it a second later. Things like that we implemented as plugins as well. Platform-specific brokers: all SAP brokers now have to provide the information for which platforms they are built, so we can actually filter where we have to register a broker. If a broker tells us, I only really work with Kubernetes, then it's not registered at a Cloud Foundry foundation. Yeah, let's leave it like that. The simple quota check, I already talked about that: if a customer has a quota of four instances and wants to create a fifth one, this plugin will stop it. Network management, this is a topic that probably needs another talk, and I don't want to go into details, but we have services that are not reachable from the internet. They're not publicly available. So at the time of binding, we have to create a tunnel between the application and the service, and the service manager is actually the guy in the middle who knows both ends. With a plug-in, we can actually create this tunnel, however that looks, and it might be different based on what the source and what the target is. But as I said, this is another talk. And then we have the instance sharing part. I already talked about the whole problem with instance sharing and the policies that we have to implement. Those have to be somehow mapped into code, and that's what the instance sharing plug-in is for. So I think I'm out of time. The next step would be the demo. Before I do that, well, SAP hires, you probably know that. Before I do a demo, are there any quick questions?
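The platform-specific broker filter described above boils down to a small predicate: the broker declares which platform types it supports, and the service manager registers it only at matching platforms. This is a sketch under the assumption that an empty declaration means "register everywhere"; the real registration logic is more involved.

```go
package main

import "fmt"

// shouldRegister reports whether a broker that declares the given supported
// platform types should be registered at a platform of the given type.
// Brokers with no declaration are registered everywhere.
func shouldRegister(supported []string, platformType string) bool {
	if len(supported) == 0 {
		return true
	}
	for _, p := range supported {
		if p == platformType {
			return true
		}
	}
	return false
}

func main() {
	// A Kubernetes-only broker is skipped for a Cloud Foundry foundation.
	fmt.Println(shouldRegister([]string{"kubernetes"}, "cloudfoundry")) // false
	// A broker without a declaration is registered everywhere.
	fmt.Println(shouldRegister(nil, "cloudfoundry")) // true
}
```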
Or probably we have to leave the room anyway, so I can't do the demo. Any questions? There's one. Yeah. Yeah. Yes, of course. So actually, if you look at the context enrichment thing, this is actually pretty thin. It just adds stuff to an object that flows through the service manager, so that's easy. Other things need some more work. But at SAP, we are not forking the project, right? We are just building plug-ins for it. So we're actually using the code that's on GitHub and just enriching it with the plug-ins that we need. But the plug-in framework itself is really powerful. You can do a lot of stuff with it; you can basically manipulate everything you like. You implement an interface, a Go interface, register your implementation internally, and then you get all the traffic. Yes, we are supporting both, so whatever you like. We are currently hosting it on Cloud Foundry, but we'll move it to Kubernetes at some point. In the open-source project, you can find manuals on how to do this in Kubernetes or Cloud Foundry, and we will make sure that it also runs on Kubernetes. At this point, yes, we are only doing this. But with generic actions, which have been in the works for some time in the open service broker API, we will, of course, provide whatever the broker provides. So yeah, we just tunnel it through, and we can do more. Well, if I go back to this: this API here can do a lot of things that the broker cannot do, for example, listing all instances or something like that. These are management capabilities that we can provide through the service manager. If you're talking about things more like triggering a backup, for example, that's not in the scope of the service manager. But if the broker would at some point provide this as a generic extension, then it's, of course, available through the service manager as well. No, the basic service manager doesn't, but you can write a plug-in.
You see everything that goes through the service manager, so you can write your plug-in to gather metrics, whatever you need. That's what the plug-ins are for. Metrics is another thing where every company needs something different, so there's not one implementation that works for all. There might at some point be an example of how to do this that you can tweak. But yeah, we see this as a plug-in, not as a core functionality of the service manager. Not at this point. We are in the process of rolling it out internally right now. It will be live Q1 next year; we are migrating the first data centers to use this in January. Can you speak up a bit? I can barely hear you. So you're talking about where the service broker and where the service live. All the services that are available in Cloud Foundry today will then also be available in Kubernetes through this mechanism. Different, you mean this box over here? Yeah, so I think we have announced that we will have an ABAP platform for the cloud as well, or it's already there. This will at some point also be able to consume all the services that we're providing to Cloud Foundry. OK, I think we are out of time. Unfortunately, no demo. I can demo in the hallway if you're interested. But yeah, sorry.