Welcome, everybody. My name is Doug Davis; I work at IBM. I'm going to talk about the Open Service Broker API, but I'm not going to talk about the actual API itself, because I'm assuming you guys probably already know it. This talk is more about what we're doing in the working group itself in terms of changes to the spec. So if you don't know the Open Service Broker API, whoops, yeah, this isn't the talk for you. I do have one slide that briefly summarizes it, but I'm not going to talk to it. Instead, I'm going to talk about the goals of the working group, the changes that have gone into the spec since we started, where we're going with it in terms of futuristic things, and then some things beyond the spec itself that may be of interest to you. And as I said, I'm not going to talk about the API itself, other than to say that the point of all this is to expand the Open Service Broker API. It originally started out as the Cloud Foundry service broker API; now we're making it the Open Service Broker API because we want it to be used by platforms other than just Cloud Foundry, in this case mainly Kubernetes for the most part.

So as I mentioned, the goals here are to evolve it into a community spec. That means, obviously, more than just Cloud Foundry, which means removing the CF-isms. Anything you've seen in the spec that's specific to Cloud Foundry, we tried to extract. Now, they're not completely gone; they went someplace else, and I'll talk about where. But from the core spec itself, obviously, if you're going to use it on Kubernetes, you can't have CF-specific stuff in there. So that's one of the key things we had to do with the spec in terms of goals. The other key goal is that we did not want to break existing service brokers. One of the reasons we're doing this is because we want the entire community, the whole ecosystem of brokers out there, to be usable with other platforms like Kubernetes. If you end up breaking them in the process, that defeats the purpose. So that is a very key goal for us. Finally, once you get all that stuff in place, then we can start looking at new features we want to add to the spec, and that's where some of the cool stuff comes into play.

In terms of releases, we actually started back in November 2016. The first release was pretty much a no-op for the most part; we just transferred things over to the new PMC, or whatever the Cloud Foundry Foundation terminology is for our working group, and put out a release as a baseline. But you can see since then we've had three releases. That's two years, right? Three releases in two years, that's not a whole lot. And I actually think that's a good thing, because it means we're not changing a whole lot in the spec, which is good because, as I said, we don't want to break existing things. What that means is we go very slowly when we do add new things to the spec, because we want to be very careful about it; even if we don't think we're breaking something, you never know. So we've got to go really slowly with that. And as the picture implies, we are now up to 2.14.

All right, so let's talk about the actual changes in the spec itself. This first set is what I call spec hygiene type things. I already mentioned the high-level concept here: remove the CF-isms.
So these are things like the API calls that had org and space in them; those have now been removed from the core spec. The concept of them has been moved into what we call a profile. There's a file called profile.md, which contains all the platform or environment specific things. So you'll see Cloud Foundry specific stuff in there that talks about how you reference orgs and spaces, and on the Kubernetes side of things it'll say, oh, for Kubernetes users, it's the thing called namespaces. Same high-level concept, and they may go into the same spot when you actually see the API, but because they're different platforms, you have to have different properties. That's what the profile doc is for: to say, here's the Cloud Foundry specific way of using the OSB API, and here's the Kubernetes way of using it. We also decided to put what I call popular extensions in there, meaning things that aren't necessarily part of the core spec but seem to be used by a lot of people, and we wanted some place to put them to encourage their use. Maybe one day we'll move them into the core spec, so it can kind of be viewed as a testing ground for these things. But those are the other kinds of things you'll see in the profile.md doc.

Now, the other thing we wanted to do was turn the document, the API, into a real spec. By that I mean use RFC 2119 keywords: musts, shoulds, what's required, what's optional, stuff like that. The old document, I would say, kind of implied what was actually required. It would use phrases like "the platform will do this." Well, that's not really formal enough, right? You don't know for sure what that means: it can do it, it may do it, it must do it. It was a little bit fuzzy, because it wasn't using the well-defined terminology that most specs actually use. So that was another key thing we did: adopt the RFC 2119 keywords. Hopefully in that process we didn't actually change any of the semantics. Everything that was optional before should still be optional now, and required stuff is still required. Hopefully nothing broke there.

The other thing we did is we made sure that everything Cloud Foundry was doing actually got put into the spec. In the past, when Cloud Foundry was the only implementation of the spec, they didn't have to put everything in there, because no one else really cared much about some of the back end or hidden details; all anyone really cared about was what the brokers saw. But there were some specific things that the platform, I mean Cloud Foundry, needed or did that weren't in the spec already, and we needed to put those in there because Kubernetes needed to match them. As an example, the names for services and service plans. The spec originally said things like, oh, it has to be a CLI-friendly name: only certain characters, no spaces, that kind of thing. And it wasn't until recently that we realized that Cloud Foundry actually didn't enforce that. It actually allows any string you want, including spaces. So we had to make sure that's actually in the spec, because otherwise Kubernetes was going to be more strict than Cloud Foundry. We needed the platforms to implement the exact same set of requirements, and to do that, we've got to put them into the spec.
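Just to make that naming point concrete, something like this is now a perfectly legal catalog fragment. This is a made-up example, not from any real broker:

```json
{
  "services": [
    {
      "id": "6c6a8f2e-0001-4c6f-9a55-2d3e4f5a6b7c",
      "name": "My Cool Database",
      "description": "A database whose name is meant for humans, not CLIs.",
      "bindable": true,
      "plans": [
        {
          "id": "6c6a8f2e-0002-4c6f-9a55-2d3e4f5a6b7c",
          "name": "Small Plan (Shared)",
          "description": "Spaces and punctuation in plan names are fine too."
        }
      ]
    }
  ]
}
```

The IDs still have to be unique; it's the names that are free-form now.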
HTTP status codes. This is another big one. The original API document was not necessarily, shall we say, consistent in how it used HTTP codes. Now we're getting closer to the point where we're pretty consistent: if a broker returns anything in the 400 range, you can assume it's a user error and the resources on the broker side are still unchanged. Anything in the 500 range means something went wrong in the back end of the broker, and the platform needs to assume that that resource is basically dead. It then needs to do what we call orphan mitigation on it; in other words, go in and delete it. We weren't necessarily consistent about this in the API document before. This is an attempt to make it very consistent, so that everybody understands what's going on, whether you're coming from a Kubernetes platform or a Cloud Foundry platform.

And the last big one from a hygiene perspective is extensions. It wasn't actually clear in the spec where you could put other bits of data into the JSON. It may seem obvious from a programmer's point of view: anywhere, JSON allows it anywhere, sure. But not everybody necessarily expected it, and since the spec didn't actually come right out and say it, we made it very clear: extensions can basically go anywhere. It might seem like a little thing, but it's very important for people who are actually trying to implement this stuff and trying to be very, very precise relative to the spec itself.

Okay. So from the service broker point of view, when it returns the catalog metadata, one of the key things we recently added, which one of my colleagues actually just asked about yesterday, was parameter schemas. The platform may want to know what parameters a service instance expects or allows, so we now allow the schema to be in there. This is actually kind of cool, because now the platform can take that information and produce a really nice user interface for its users. Instead of saying, just give me a chunk of JSON or some random parameters I may not know about when you go to create an instance, it can give you a nice UI with drop-downs, do some validation checking on the input, stuff like that. So you can produce a better UI experience around it. You specify the schema in JSON Schema draft version 4.

The other thing we added, or rather codified into the spec, is what I'm calling common service metadata: things like display name and image URL. These are the bits of metadata that seem to be very popular but aren't part of the core spec itself. Those are now actually defined someplace, as I mentioned: in that profile doc.

Another key thing people wanted was, I don't know why, but some people don't seem to think basic auth is actually good enough in the real world. Imagine that. So the spec used to be very precise: it said you must use basic auth. We loosened that a little and said you don't have to. However, if there is no outside or out-of-band communication between the platform and the broker, then basic auth is the default requirement, because that gives you the baseline interoperability. But if the broker and the platform do talk to each other out of band and they agree on OAuth or something else, that's fine. So the spec was loosened a little but still keeps that base level of interoperability.
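Going back to the parameter schemas for a second, here's roughly what they look like hanging off a plan in the catalog; the billing-account parameter is just an invented example:

```json
{
  "id": "6c6a8f2e-0002-4c6f-9a55-2d3e4f5a6b7c",
  "name": "small",
  "description": "A small plan.",
  "schemas": {
    "service_instance": {
      "create": {
        "parameters": {
          "$schema": "http://json-schema.org/draft-04/schema#",
          "type": "object",
          "properties": {
            "billing-account": {
              "type": "string",
              "description": "Billing account number to charge this instance to."
            }
          }
        }
      }
    }
  }
}
```

With that, a platform can render a form with exactly one field, validate it as a string, and reject bad input before it ever reaches the broker.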
Concurrency updates. So the spec never actually said much in the past about whether you could have, say, two operations coming in at the same time, maybe creating two instances of the same service, or maybe two bindings at the same time against the same instance. It was basically silent on that. We made it so that you can support that if you want to, from a broker perspective. The platform doesn't have to send concurrent updates; in fact, I believe Cloud Foundry will not. Kubernetes, I think, may actually send concurrent requests, I'm not 100% sure. But either way, the broker is free to allow it or not. And we defined a well-known error to come back so the platform knows, oh, it failed not because something happened in the back end; it was just a concurrency conflict. So it's a 400-type error, and the platform can retry later once the first request is actually finished.

From the platform point of view, excuse me, we added another header, an HTTP header, to request messages from the platform to the brokers. This one's called the originating identity header (X-Broker-API-Originating-Identity). It contains the user ID of the person who originated the request. So in the CF world, if you do a cf create-service, this header is now going to include your user ID. Now, why is this interesting? It allows the service brokers to do additional checks or operations with that information. For example, let's say you have a complicated policy in place on the service broker that says maybe Morgan is allowed to create the service, but Swetha cannot delete it, right? Those kinds of things. Without this information, the broker has no idea who's actually initiating the request and can't do those additional checks. Or maybe you just want it for auditing purposes, who knows. You can use it for whatever you want in the broker, but without that information, you just can't do it. So we added that. It is required for the platform to send it, but it's optional for the broker to do anything with it. It's just FYI kind of stuff, if they want to use it.

Context object. Earlier I talked about how we're removing the CF-isms from the core spec and moving them to the profile. Those properties are still there. However, what we did is move them into a new object in the JSON payload called context. All it is is a wrapper called context around org and space in Cloud Foundry, and around namespace in Kubernetes. That's all it is: the same information, just in a well-defined spot. That way, when another platform comes along, it's not sprinkling its information about things like org and space, or its equivalent, randomly throughout the payload; it's all in this one little spot. And the profile document explicitly says that for Cloud Foundry, the context has to have org and space for certain operations, and for Kubernetes it has to have namespace, those kinds of things. So it's the same information, just moved to a well-known location.

Async bindings. Now, for a while, obviously, the API has had async creates for instances. This really just provides symmetry, because in some brokers, creating bindings can actually take quite a while, a very long while. We realized we had a missing piece in the spec, so we added support for async bindings. Nothing too exciting, but it provides that optional flexibility for people.
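A rough sketch of that async binding flow, with the IDs and the operation token invented:

```
PUT /v2/service_instances/:instance_id/service_bindings/:binding_id?accepts_incomplete=true
 => 202 Accepted
    { "operation": "task-42" }

GET /v2/service_instances/:instance_id/service_bindings/:binding_id/last_operation?operation=task-42
 => 200 OK
    { "state": "in progress" }        (keep polling until "succeeded" or "failed")

GET /v2/service_instances/:instance_id/service_bindings/:binding_id
 => 200 OK
    { "credentials": { "username": "user", "password": "secret" } }
```

That last fetch is exactly where the get operations I'm about to talk about come in.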
GET. So you can now, from the platform, do an HTTP GET on a binding or an instance. Which is actually kind of interesting when you think about it, because if you read the spec, it says the platform is the source of truth for this information. So why on earth would you need to do a get against these things? A couple of reasons. The biggest one in my mind: think about the information that's returned when you create an instance. One of those bits of information is the dashboard URL. That implies that someone could hit that URL in a browser and not just look at what's going on with the instances and bindings; they may actually be able to change things through that dashboard, and the platform is completely unaware of what's going on. The get now allows the platform to sync up its view of the world with the broker, if the broker supports these get operations. So it just allows for synchronization to happen in case things do get out of whack. That's really why it's there. It's not required for the broker to support it; you can still be completely stateless if you want to. But it's there, the broker can advertise that it supports it, and that way the platform can take advantage of it if it wants to.

Now, what's interesting is the get on a binding. Because a binding contains sensitive information, the credentials, you may not necessarily want to return that information; you may want to return only the other bits that aren't really sensitive. In that particular case, you don't have to return the credentials; you can just return the non-sensitive information. But we do make it very clear in the spec that if you're going to do things like async binding creates, which rely on a get of the binding to return the credentials, you have to return those sensitive bits of data at least once. After that, you're free to mask them out or drop them entirely from the response.
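The fetch of an instance looks about like this; the values are invented, and remember the broker has to advertise support (the instances_retrievable and bindings_retrievable flags in the catalog) before a platform will try it:

```
GET /v2/service_instances/:instance_id
 => 200 OK
    {
      "service_id": "6c6a8f2e-0001-4c6f-9a55-2d3e4f5a6b7c",
      "plan_id": "6c6a8f2e-0002-4c6f-9a55-2d3e4f5a6b7c",
      "dashboard_url": "https://broker.example.com/dashboard/abc123",
      "parameters": { "billing-account": "12345" }
    }
```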
So that's it for basically the core set of changes that went into the spec. There are lots of other little things and cleanup items, but to me, these are the big-ticket items.

Now, beyond the core specification itself, we've also produced a Swagger, or OpenAPI version 2.0, definition of the API. If you have tooling that you want to use to generate your brokers, or another platform, you can now use that to generate them if you really want to. We have a getting started page, which has links to sample brokers and such, just to help people get up to speed if they want to write their own broker. And we have the OSB API Checker. This is a side project that was designed primarily by Microsoft; they donated it to the group. It lets you test your service broker to see if it's compliant with the spec. It's relatively new. If you guys have brokers, we really encourage you to use it, because we find that not a whole lot of people are using it yet, and we need more people on it to make sure it's actually doing its job properly beyond the one or two people who are using it today. So we would be grateful if you could play with it.

The other thing, and this may seem like a weird thing to mention because it's just a web page with a list of key topics or key changes per release, is one I actually really like, because a lot of people don't pay attention to what's going on in the spec as often as we do. Having one page that highlights the key changes per release, especially if you're a service broker author, and knowing which platform actually supports which feature, because not all platforms support everything and most of this stuff is optional, is really, really useful. It just gives you a sense of what's out there. Not huge, but I think it's actually kind of cool. The file is compatibility.md.

All right. So finally, actually not finally, next, let's talk about the futuristic things we're thinking about adding to the spec, or within the group itself. So, v3. We have been on v2 for a while now, and a lot of people keep asking whether we're going to do a v3. For the most part, when we talk about v3, it's mainly because people aren't really happy with the spec. It's not very RESTful. It feels like it was put together in kind of a hodgepodge way, and we would really, really like to clean it up, make it look like it was designed in a proper RESTful way, make it pretty, put it that way. No other real reason. The problem is, if we do that and make the REST API look the way we want it to, that's going to violate one of our key tenets, which is don't break existing brokers, right? So we've been really resistant to doing that. We've talked about it many, many times, but we're probably not going to do it unless we have a really, really compelling reason. So if you're a broker author, or you want to add another platform to the mix besides just Cloud Foundry and Kubernetes, and you think there's something missing that's very fundamental to the spec, but it's a major change and you don't think it's going to get in because it would require a v3: bring it. Because to be honest, there are a lot of us who are actually looking for a reason to go to v3. But until we have a really good reason to break existing brokers, we're not going to go there, because that's one of our key tenets.

Generic broker actions. So a lot of people ask: okay, great, you've got the CRUD operations on instances and bindings, and that's all well and good, but what about other types of operations that a platform may want to invoke? For example, let's say you have a database. What about backing up databases? Why can't the platform do that for me, on some kind of cron basis, or other operations like that? Well, we want to add that ability, optionally, and that's what this generic broker actions thing is. It's going to allow you to include a Swagger or OpenAPI doc as part of the catalog data that comes back, so that the broker can advertise to the platform: hey, I support these other operations, and you can actually call them if you want. Here's the definition of them. You can now expose them to your users; do whatever you want with them. That's going to allow people to extend the API in new and cool ways, and they're not going to be beholden to us as spec authors to put things into the spec to make them real. And it's also possible that if some of these extensions become really, really popular, maybe we'll add them to the spec. Maybe we'll find that we actually do need a common backup API, because it's used by most service brokers. Don't know. It's another playpen type of area. So that's another thing we're thinking about adding. It's not there yet. If you have ideas on any of these things, please speak up.
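None of this is settled yet, but just to make the idea concrete, you could imagine a catalog one day advertising something like the fragment below. Every field name here is hypothetical; nothing like this is in the spec today:

```json
{
  "services": [
    {
      "id": "6c6a8f2e-0004-4c6f-9a55-2d3e4f5a6b7c",
      "name": "my-database",
      "extensions": [
        {
          "id": "backups",
          "path": "/v2/service_instances/{instance_id}/backups",
          "openapi_url": "https://broker.example.com/extensions/backups.yaml"
        }
      ]
    }
  ]
}
```

The platform would read the OpenAPI doc, expose the operation to its users however it likes, and the broker never has to wait on us to standardize a backup API.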
We're looking for feedback on them. Networking metadata. So as part of setting up the connection between your application and the service, in some cases you may need more than just credentials, you know, user ID and password. You may actually need to set up networking things. For people running Istio, maybe you need to do some Istio configuration to set things up, or punch holes in firewalls, that kind of stuff. We're talking about adding metadata to the information that's returned so the platform can negotiate and set those things up. It's still in a sort of investigatory stage right now. But please join that conversation if this has been a blocker for you in some of your environments. It's going to be a really important one, kind of a big one, and we want to make sure we get it right, because it could be really good.

OSB Checker. Now, I mentioned this one already, but I want to reiterate: we really need feedback on it because, as I said, it's not getting a whole lot of love. So if you have brokers, please download the Checker, use it, and see if you think it's actually useful to you. That would be wonderful feedback for us. Because, to be honest, if we find out that maybe only one or two people are using it, we may not keep it around. And I think that'd be a real shame, because a compliance checker for brokers is, I think, really, really useful as a concept. But in practice, if no one cares about it, we're not going to maintain it. So we need some feedback on that.

OK. So that's the spec itself and what the main working group is doing. What I'd like to very quickly talk about now are things that are sort of outside the Open Service Broker API. Now, Florian, sitting over here, from SAP: you probably saw him in the keynotes the other day talking about his Service Manager stuff, and he had a talk yesterday on it. This is another thing that several members of the working group are talking about. It's not a project within the working group per se, but it is another key thing. I'm going to really dumb it down; go look at Florian's video to get more information about it. In my dumbed-down version, it's basically putting a proxy between your platforms and your brokers to allow for a single point of management on both sides. Brokers only have to register with one spot now, and that one thing manages all the registrations to all the platforms. It can control what each platform actually sees; you don't have to go to each platform and manage it. Lots of real benefits here. Not to mention the fact that, as you'll know if you listened to his talk, you can do instance sharing: you can share instances across platforms, because the Service Manager manages all of that for you. I'm not going to go into the details, but if this kind of functionality is important to you, join the project.

Now, this then leads into the project that has no name. This one is even more interesting, because the Service Manager project is cool unto itself, but as Florian mentioned in his talk, one of the key tenets of the Service Manager project is that he didn't want to have to change Cloud Foundry or Kubernetes or the brokers. That's why it sits in the middle: no spec changes on either side. That's great for existing platforms and brokers.
However, when you start talking about things like instance sharing, it becomes a real headache, because both Cloud Foundry and Kubernetes, or whatever platforms you have, think they own the instance that's created; the spec even says they are the source of truth. So when you share an instance across two platforms and one platform tries to change something, like change the plan, what do you do with the other guy over here? He thinks it's still on the old plan. Do you try to synchronize the two? Do you give dummy names for the plan over here, which is what the Service Manager does; it calls it a reference plan, I think, or something like that? But then you get this weird situation where a user over here doesn't see the real plan name unless an additional parameter is added. And then you get this weird state where an application over here that wants to see the plan name looks at one property, and over there it has to go to a different property to see the real plan name. It gets, not necessarily ugly, but it feels a little bit hacky and unnatural.

What this project tries to do is say: okay, if you think about what's going on, all of the management of the service broker stuff from the platform's perspective is just managing metadata, right? When you create an instance, all you're doing is keeping metadata around in the platform. When you do a binding, it's just extra metadata. The only thing the application actually needs is those credentials, and that's it. So why not move everything else outside the platform? And that's what this service provisioner thing is. In the Cloud Foundry world, it's taking that service broker logic out of the Cloud Controller and moving it outside. In the Kubernetes world, it's taking the Service Catalog stuff and moving it outside, into a separate unit that can then do pretty much nothing but push credentials into the various platforms. And it now owns everything. So you're down to a single thing managing the world, meaning managing multiple platforms under the covers and just pushing credentials back and forth. We think this may actually be a nicer way to manage things in this sort of multi-platform world where you're doing things like instance sharing. But of course, in order to do this, it's going to require changes to both Cloud Foundry and Kubernetes, which we're not sure yet can happen; we're exploring it. So if this project sounds interesting to you, and personally I think this is going to be really, really cool if we do it, again, come join the project. We talk about it a little in the Open Service Broker API calls, but technically all the work is done on a separate phone call; if you join our main call, you'll find out about it and get more information.

Anyway, that's it in terms of the spec, the working group, and things in the future, and I'm pretty much on time, believe it or not. If you're interested in joining us, there's a URL to the GitHub repo. We have weekly calls on Tuesdays at 11:30 Eastern time. Anybody's free to join; you don't technically have to be part of the working group per se, because the minute you join the call, you're part of the group. You can't escape. And that's about it. Any questions? Actually, I'm sorry, I should have put it here: there's a mailing list too. You can join that; go to the main README on the repo for that. Questions and answers? Yes. Yes.
So, if you think about what, say, Kubernetes or Cloud Foundry does, ignoring the injection of the environment variable, VCAP_SERVICES, if you ignore that last bit: the registration of service brokers will be done with the provisioner, not with CF or Kubernetes anymore. The service create, the service delete, those will be done by the provisioner talking to the brokers. Binding, same thing; the binding operation to the brokers will originate from the service provisioner. It will be the source of truth for everything on the platform side. Now, that's not to say that, do I have it? Is it working? Oh, it does. So that's not to say that a CF user now has to talk to the provisioner directly. Maybe it can, and maybe it wants to, for resources or services that aren't necessarily tied to Cloud Foundry or Kubernetes; maybe they're outside of any real platform. But a normal CF user will still be able to do a cf create-service down here, and CF will act as a proxy, basically, and send the request up to the service provisioner, which will then make the real request to the broker. So it's just moving all that logic of managing the metadata about instances and bindings up a level. Does that help?

Would it be stateful? Maybe. Okay, if we kept the spec the way it is right now, yes. Because right now, service brokers can be stateless; therefore, the platform is the source of truth. If we kept that model, then yes, the service provisioner would be stateful and be the source of truth. When we talk about v3, one of the things we really wonder about is whether we really want to keep the notion of stateless brokers. Because it is really, really odd that this thing over here owns the instances, but that thing over there is the source of truth about them. That's a really bizarre model. So we may drop that at some point, but it's still up in the air; we don't know.

Right. Yeah, that's one of the main reasons, yes. It also allows for, in my mind, a little more consistency, because you have the Cloud Controller in CF and you have the Service Catalog in Kubernetes. Both do the exact same thing for the most part, except at the very end, where one injects an environment variable and the other creates a secret. It's a real shame we have duplicate code. So this allows us to share code at that point. It's just that this one piece may be deployed into Kubernetes, may be deployed into Cloud Foundry, or may be deployed outside of both. You have your choice at that point.

Lots of questions. Yeah. Who talks to the broker? Well, no; in this model, we still expect everybody to talk to the brokers through the provisioner. No one should talk to the brokers directly. That's the whole point, right? Because there's authentication involved, and you don't want that shared with everybody. Now, the reason I show an app over there is because you may have an application that wants to provision a service that isn't going to be used inside of Cloud Foundry or Kubernetes. Okay, sorry if I misspoke, I apologize.

All right, we have two minutes left. Any other questions? Yes. This one? That's an excellent question. I guess I should be repeating these for the video. The question is: with the context object, can you now basically, dynamically, from the service broker side, change what you do based upon the platform? Yes, exactly. And in fact, while the profile specifically calls it out, actually, I should be a little more explicit here. Inside the context object, it's not just org and space or namespace. There's also a field called, I think it's called platform, which is either Cloud Foundry or Kubernetes. Makes sense. But you can technically put any string you want in there. So for example, in the IBM Cloud world, we actually put bluemix in there, because then our broker knows, oh, if it's talking to Bluemix, it can do special magic stuff. So yeah, you can do exactly that. So yes. I agree, it is cool.
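To make that concrete, the context object from each platform looks something like this; the GUIDs and namespace are invented, and the profile doc is what pins down the exact field names:

```json
{ "context": { "platform": "cloudfoundry",
               "organization_guid": "c84b7a2f-9276-4d1e-9a2e-0a7b1b2c3d4e",
               "space_guid": "55a0f2a1-da91-4f12-8ffa-b51d0336aaaa" } }

{ "context": { "platform": "kubernetes",
               "namespace": "dev-team-1" } }
```

And as I said, a platform is free to put its own value, like bluemix, in that platform field.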
All right, yes, sir. Yes. Let's double-check the table. Yes, apparently it does support it. So, okay, I may need to lean on the Cloud Foundry folks here a little, because, okay, the answer I'm hearing is no. But the way I want to answer it is this: the presence of the get operation does not change whether bindings are immutable or not. So if a broker before had something behind the dashboard URL that twiddled a binding, that could still happen, and it could still break things. The get is not, right now, explicitly for credential rotation. It's more for syncing up if needed. Now, we may leverage it at some point and say, oh, let's add a new operation to rotate the credentials, and then yes, you'd use the get to get them back. But it was not added explicitly for rotating things. There would be a whole bunch of problems involved in that, yes.

Yes. And to be very clear, the tick in the table does not mean it's exposed to the user. The tick just means Cloud Foundry can do a get. I'd have to ask the Cloud Foundry guys: when does Cloud Foundry ever do a get? Is it ever initiated by a user? I apologize, Matt or Sam. Yeah, we'll get the next one if there is another one. Technically we're over time, but I don't think there's anything after this, so we can keep going if you want. Any other questions? There you go. You're a little late, Morgan.

Yeah. So basically: is the service provisioner thing just a thought experiment, or is there a POC out there, some code, some experiments? Right now it is a thought experiment, and I wanted to mention it just because I think it's really, really cool. And if you guys also think it's really, really cool, come join the fun. I don't know that this will be a game changer per se, but it'll make life so much easier, because as cool as the Service Manager is, given the constraints they had to live under, not changing the brokers, not changing the platforms, it's not as pretty as it could be. And I really think this could be a way to fix that. At least for us, and I know SAP's in the same boat, instance sharing is a very big deal, because in IBM Cloud, for example, we have Cloud Foundry, Bluemix, Docker, and you can create instances and bind them to applications running in all three. You can share the exact same credentials, and they all talk to the same instance, database, whatever. It makes things really hard when you don't have something as elegant as this.

All right, any other questions? I was going to say, wait till he puts it down and then raise your hand. So, from Azure, we have another use case where different services have different requirements. For us, SQL is really, really complicated. Customers always have additional requests, like: can you do backup and restore? Can you do failover? Or complicated networking requirements. So we have to write a lot of parameters and implement them across all services. But on the other hand, some services are very straightforward.
So I was wondering whether, just from the design point of view, you can also distinguish some of the advanced features, like only exposing this one for this service, instead of adding one extension and having to apply it to every service, which complicates things for everybody else. Yeah, right now in the catalog metadata, there's nothing that allows the service broker to say this service is only available for Kubernetes, or Azure, or whatever other platform. And right now the catalog is basically generic; it assumes everything applies everywhere. Now, there may be some other things you can do; for example, the Service Manager may be smart enough to do that kind of stuff. But from a pure spec perspective, there's nothing there to help with that. All right, I think with that we're done. Thank you guys very much.