All right, we're going to get started. Can you all hear me OK? Sweet. OK, welcome to the inaugural core project update for the Cloud Foundry Services API team. We've also put an OSBAPI section in here, because we do a lot of work with the Open Service Broker API community, so I thought I'd give a quick overview of how that project's going as well. So, the most exciting bit of the talk: hopefully we've all seen this slide a number of times now, so I'm not going to read it out. OK. Cool, to start with, let's do some introductions. I'm Matt McNeaney. I'm the PM of the new Cloud Foundry Services API team, I work at Pivotal in London, and I'm also a co-chair of the Open Service Broker API specification. And my name is Jen Spinney. I work for SUSE, I'm an engineer on the CF Services API team, and I work out of Germany. All right, so a quick agenda for today. Firstly, Jen's going to take us through what the Cloud Foundry Services API team is, what we're trying to do, and an update on that project and what we're working on. I'll then take you through the Open Service Broker API project update, and then we've got a couple of live demos to finish off with. So over to you, Jen. Great. So we're going to start off by talking about the responsibilities of the Services API team. We have two main hats that we wear. The first one is to improve and maintain the user experience of services inside Cloud Foundry. That includes making changes to the Cloud Controller, the CLI, wherever we need to make changes to make services work really well inside Cloud Foundry. Our other hat is to represent the Cloud Foundry community inside OSBAPI, the Open Service Broker API. So when the Open Service Broker API comes out with a new feature in the spec, it's our job to implement that in the Cloud Controller, the CLI, wherever it needs to be implemented. We also work a lot with the OSBAPI community to give them feedback on what features we think need to make it into the spec.
This is what we look like. Oh, our flags didn't show up; we had cute little flags, but they're showing up as letters instead. On the top left you can see Matt, he's our PM; the rest of us are engineers. We work out of a variety of countries, and we work for several different companies, as you can see. So remote-friendliness is a top priority for us, because we're so distributed. We all work out of Europe, but across three different time zones. So we're always looking for new ways to improve our remote pair programming and to make sure everyone feels like an equal member of the team. I think we do a pretty good job, but there's always room for improvement. We also have to coordinate a lot with teams on the West Coast of North America, which involves a lot of asynchronous communication using Tracker and Slack, so we really have to be good at those skills as well; we're learning how to be an awesome remote team while still doing pair programming. So we're a relatively new team, but we've already shipped some cool features. First of all, we improved the developer experience for configuring services. Previously, you could specify custom configuration parameters when you create a service instance or bind to a service instance; you could say, for example, "I want this much memory" or "I want this to be this kind of special service". But those parameters are all defined by the service broker author, and you had to communicate with that broker author, or read the documentation for that service, to figure out which parameters are tweakable and what the valid values are. So we added JSON Schema support, so that it's machine-discoverable which parameters you can configure and what their valid values are.
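As a rough sketch of what that JSON Schema support looks like: a broker's catalog can attach a schema to each plan, and the platform can then discover valid parameters without anyone reading the broker's docs. The service, plan, and parameter names below are made up for illustration, and other required catalog fields (IDs, descriptions) are omitted:

```json
{
  "services": [{
    "name": "overview-service",
    "plans": [{
      "name": "simple",
      "schemas": {
        "service_instance": {
          "create": {
            "parameters": {
              "$schema": "http://json-schema.org/draft-04/schema#",
              "type": "object",
              "properties": {
                "memory_gb": { "type": "integer", "minimum": 1, "maximum": 10 }
              }
            }
          }
        }
      }
    }]
  }]
}
```

A tool like the CLI can use this to validate `-c '{"memory_gb": 4}'` style parameters before ever calling the broker.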
We also added the ability to share service instances across spaces and orgs inside Cloud Foundry, and Matt's gonna do a little demo of that at the end of this talk. Also on the configuring-services point, we added the ability to retrieve the current values of those configuration parameters. Previously, you could set a knob all the way up to 10, but if you came back the next day wondering "what did I set that to?", you wouldn't necessarily know; so we've added the ability to retrieve what you set. We're also working with service authors to grow the ecosystem of available services. Specifically, we're working on a new proxy broker for talking to the new Google-hosted service broker, to get access to all of Google's awesome services from Cloud Foundry. And we have a lot upcoming as well. One of the things we wanna do is start talking OAuth between the Cloud Controller and service brokers; right now that's all basic auth, and we'd really like to improve that. We also want to add the ability for people to trigger service-specific actions. A service broker could say, for this particular service instance, you have the ability to pause it, or to do a backup and restore on it, and that's defined by the service broker. We wanna expose that to users via the CF CLI, so from the CLI you could say, "hey, my service exposes the pause action, so I'm gonna invoke pause". Those actions are valid either on service instances or across the entire service broker, and we're working closely with the OSBAPI group to implement that functionality. We also wanna increase the flexibility of service and plan visibility. Right now, a service broker can be visible from a specific space (that's a space-scoped broker), and then the services and plans are visible just from that space.
Or you can have it globally, and we wanna tweak that a little bit so you have finer-grained control over exactly where a plan and service is available. Matt. Sweet, thanks, Jen. Okay, so if you're already familiar with the Open Service Broker API project, then bear with me for two minutes while I give a high-level overview. The Open Service Broker API project was designed to give software vendors and SaaS providers a single way to deliver their services (things like databases, config servers, messaging queues) to applications and containers running on any of what we're calling cloud-native platforms. This did start out as a Cloud Foundry API called the Service Broker API, but late in 2016, still under the guidance of the Cloud Foundry Foundation, it was moved into an open source project where we have collaboration from a whole lot of companies, as we can see on this slide. These companies are not only building platforms like CF and Kubernetes, both locally installable and hosted versions of those platforms, but they're also building service brokers. As Jen mentioned, we have the Google-hosted service broker, and Azure has a whole lot of services available. By providing this single spec, we can make those services available to developers on any platform. A lot of this is focused around the self-service model. Your admins can basically say, "okay, I've signed up for this account, I can make this service visible to my app developers", and then developers can be pretty automated, right? They can run free and provision what they need, be it a Spanner or a MySQL, whatever they want to use. So it's multi-platform and multi-cloud. What that means is that the Service Broker API is completely infrastructure-agnostic. We don't care what platform you're using.
It could be CF, Kubernetes, and hopefully many more platforms in the future will support the platform side of the spec. Likewise, the service broker could be running anywhere: we have hosted ones, or you could install something locally, spin it up from a Dockerfile, whatever it needs to be. The service broker is responsible for a very basic set of lifecycle commands, i.e. API calls, so it's a very simple spec. Those things are: provision an instance; create a binding, which typically looks like "give me a set of credentials to access a service instance"; and then there's also unbind, delete, and the catalog endpoint, which is how you ask, "hey broker, what do you expose?" That could be a number of services, or just one, and each service is made up of a number of plans. Typically we see things like t-shirt sizes (small, medium, large) for things like databases, but they could be anything, and in a lot of cases they're even configurable. So as I said, service brokers implement this basic API. There are a lot of libraries out there for when you have an existing service or application that you want to expose; I think the winning demo app from the last CF Summit was an app that basically let you put a service broker API in front of any application, which is pretty cool. And then platforms like CF and Kubernetes today can ask the broker this set of really simple questions. As I said before, a binding usually consists of a JSON object which contains an IP address or a URL, a username, and a password. But in CF at least, you can also have a service instance which intercepts traffic going to an application, so it can do things like authorization, authentication, or modifying headers. You can also have volume mounts in CF, so you can mount a volume inside your application.
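To make that concrete, the spec's core surface area is just a handful of HTTP endpoints that every broker implements; this is a paraphrased summary, so check the spec itself for the exact semantics and payloads:

```
GET    /v2/catalog                                         "what do you expose?"
PUT    /v2/service_instances/:instance_id                  provision an instance
DELETE /v2/service_instances/:instance_id                  deprovision
PUT    /v2/service_instances/:iid/service_bindings/:bid    create a binding
DELETE /v2/service_instances/:iid/service_bindings/:bid    unbind
GET    /v2/service_instances/:instance_id/last_operation   poll async operations
```

Anything that can answer those calls, wherever it runs, can be a broker for any platform that speaks the spec.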
So the OSBAPI group met up last week, with representatives from pretty much all of those companies, and there were about 20 of us. Version 2.14 of the spec is coming very soon. That has the GET endpoints for fetching a service instance and a service binding, which enables the feature Jen mentioned earlier where you can go back later and see how you configured your service instance. Did you set that knob to 10? And also async bindings. Because the service broker spec provides this very high-level abstraction, a service binding doesn't necessarily mean you're generating credentials, just like creating a service instance doesn't mean you're spinning up a new VM. These things could mean anything. We saw cases in the wild where some bindings were taking a reasonably long time, say, where the broker has to call out to a third-party API and do something on their backend. So we added support for async bindings, so the platform can start that process and then just poll the broker to find out when it's finished. There are other new features coming soon. Jen mentioned one of them, which is instance actions. There are discussions about what a v3 would look like sometime in the future. And the other one I think is useful to call out is schematized responses. I said earlier that bindings are abstract: sometimes they're an IP, sometimes a URL, sometimes a username and password. If we can really well define those, and have platforms programmatically understand them, then platforms can do some really awesome things, like network automation. You see an IP address come back, and if you're allowed to talk to that IP, we could have a very fine-grained security system where you open up access from one application to that specific IP address. So if you're working on a platform or a service broker, or you have a service and you wanna make some more money from it, then come and get involved with the group.
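A hedged sketch of how the async-binding flow in 2.14 is expected to work, modeled on the spec's existing async provisioning pattern: the platform opts in with `accepts_incomplete`, the broker answers 202, and the platform polls `last_operation` until the binding is ready.

```
PUT /v2/service_instances/:iid/service_bindings/:bid?accepts_incomplete=true
  -> 202 Accepted                       broker has started the binding
GET /v2/service_instances/:iid/service_bindings/:bid/last_operation
  -> { "state": "in progress" }         keep polling
  -> { "state": "succeeded" }           binding is done
GET /v2/service_instances/:iid/service_bindings/:bid
  -> { "credentials": { ... } }         fetch the result
```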
We'd love to hear especially from service broker authors. There's a weekly call, which, if you're based in this time zone, is at 12:30 p.m. on Tuesdays. Likewise, we have a very active Slack channel and Google Group, so yeah, come drop in. All right, okay. Now the fun starts. There are two parts to this live demo. The first is basically an introduction to using services in Cloud Foundry and Kubernetes. As we mentioned earlier, our team is responsible for the services UX in Cloud Foundry, but through the Open Service Broker API we try to provide a reasonably common set of commands across both. So this will demo how we can use a service broker and what that really looks like. The only "here's one I made earlier" bit in this demo is that we've already pushed a service broker. This service broker just has a dashboard; it doesn't really do anything. A provision, in this case, is just saving a bit of memory in this web app, but that gives us a little overview of what's been provisioned already. On the left-hand side, in black, we have a Cloud Foundry environment, which I'll be running, and Jen, wearing her Kubernetes hat, is running the right-hand side of the screen. This is running live, but because I'm terrible at typing, I can just hit enter and it's gonna type the command for me; if something goes wrong, at least you'll believe me that it's live. Okay, so to start with: in Cloud Foundry, we have this concept of a marketplace. This shows the developer all of the services and plans that they have access to in their environment. If I go and take a look in CF today, we can see I'm looking in my org and space, and I have "No service offerings found". And similarly in Kubernetes, we can ask for all the cluster service brokers, and we have "No resources found" here as well. Okay, so how do I get access to a resource? Well, in Cloud Foundry, I need to register a service broker.
That broker could be deployed anywhere. In this case, it's in the same CF that I'm registering the service broker in, but likewise it could be on a VM in GCP, or on my Raspberry Pi back home. All I need is a URL and a basic auth username and password to access it. So if I create that service broker, it's gonna go fetch it, check that the catalog endpoint works, and then it's gonna say okay. Because in CF we have this fine-grained access control mechanism, I now have to enable service access to the services that broker offers. In this case, I'm gonna enable access across the entire platform, but I could restrict it to a specific org or even a specific space. That's it. In Kubernetes, when we create resources, we usually deal with YAML. So first, we're gonna take a look at the YAML file that represents the broker resource we wanna create. Here in this YAML file you can see, similarly to what we have on the left, the username and password, but here they're hidden behind a secret in Kubernetes. So we have a reference to a secret that we've already created, which has the same password Matt is using for Cloud Foundry, and we're pointing to the same URL that we're using on the Cloud Foundry side. So we're using the exact same app for the broker on both platforms. Then we can just use kubectl create to create what was in that YAML file, and now we have our broker. All right, so we can check everything worked by taking a look at the marketplace in CF. You can now see we have our overview-broker demo service that offers two plans, simple and complex. These could be small, medium, large, whatever the service broker wants to offer, plus a basic human-readable description of the service. And similarly in Kubernetes, we can take a look at what service brokers we have now. So we have our new broker, and we can take a look at what cluster service classes we have.
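The commands behind this step look roughly like the following. The broker name, URL, and credentials are placeholders, and the Kubernetes side assumes the service-catalog project's v1beta1 API:

```shell
# CF: register the broker, then make its service visible in the marketplace
cf create-service-broker overview-broker admin some-password https://overview-broker.example.com
cf enable-service-access overview-service    # add -o ORG to restrict to one org

# Kubernetes (service-catalog): the same broker as a ClusterServiceBroker resource,
# with the basic-auth credentials referenced from a pre-created Secret
kubectl create -f - <<EOF
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: overview-broker
spec:
  url: https://overview-broker.example.com
  authInfo:
    basic:
      secretRef:
        name: overview-broker-auth
        namespace: default
EOF

kubectl get clusterservicebrokers    # should now list overview-broker
kubectl get clusterserviceclasses    # the services from the broker's catalog
```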
A service class in Kubernetes is the same thing as a service in Cloud Foundry. So here we have one service class, and we can also take a look at what plans we have available; here we have complex and simple, just like in Cloud Foundry. It's worth noting that there's still nothing shown in this dashboard. We still haven't really interacted with the broker: all we've done is ask it for its catalog, i.e. what services and plans it offers. Everything else we've shown you is the platform's representation of how you interact with it going forward. So in both CF and Kube now, we have a representation of what that broker offers, and we can start to use it. In CF, the first thing we're going to do is provision a new service. I've got the overview-broker demo service, I'm going to use the simple plan, and I'm going to call this instance of the service my-service. This should happen straight away; it could happen asynchronously, but we've set this broker up to do it synchronously. If I now take a look at the list of services I've got registered, as we can see here, the last operation (it's wrapping a bit) was "create succeeded", and you can again see the service and plan used to create it. And in Kubernetes, again, most resources are represented in YAML, so we have a description of the service instance that we want to create. Here at the top you can see the kind is ServiceInstance, the name of our service instance is going to be my-service, and then we specify which service class and which service plan we want to use. So we're going to go ahead and create this resource. It's created, and we're going to take a look at what service instances we have. We're going to YAMLify it and take a look. Here you can see, if you look in the middle of the page where it says status, it says type Ready. So it was provisioned successfully; everything worked okay.
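In command form, the provisioning step on each side looks something like this, with the same caveats as before: hypothetical names, and service-catalog v1beta1 assumed on the Kubernetes side.

```shell
# CF: provision an instance of the 'simple' plan
cf create-service overview-service simple my-service
cf services                           # last operation should read 'create succeeded'

# Kubernetes: the equivalent ServiceInstance resource
kubectl create -f - <<EOF
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-service
  namespace: default
spec:
  clusterServiceClassExternalName: overview-service
  clusterServicePlanExternalName: simple
EOF

kubectl get serviceinstance my-service -o yaml   # status should show a Ready condition
```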
Okay, so we've now jumped back to overview-broker and refreshed. What we should see is that we have two instances created. This broker dumps a little bit of the contextual information it received, so we can see this is using OSBAPI, the Open Service Broker API, version 2.13. You can see the service and plan names that we used, any config parameters that were passed in, and then a bit more information. As you can see here, there are no bindings right now. We can also see, which is quite cool, the org and space that it was created in, in CF. So let's say this broker wanted to call back into the platform (say it was an app autoscaler in CF and it wanted to go and find an application); it could use this information to know where that app lived. In general, this context is optional, because we obviously want service brokers to be usable across any platform, but it allows some brokers to do some extra cool things, little tricks. Okay, so we have a service instance now in both our CF and Kube environments, and now we want to get access to it. In CF, that often corresponds to cf bind-service. What that does is go and create a binding on the broker, get back a set of credentials (usually an IP address or a URL, a username, and a password), and inject that into the app's environment. To better demonstrate this, and so we don't have to push an app and do that binding, I'm going to create a service key. To the service broker, this looks exactly the same, but in CF we get this service key construct. And if I go and look at that service key for the my-service instance, we can see what the broker returned to me: a username and a password. Most brokers return a unique password for every binding, so we have that fine-grained security control. The idea of a service key in Cloud Foundry is represented as a service binding in Kubernetes.
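Sketched as commands (hypothetical names again), the CF service key and its Kubernetes counterpart look like:

```shell
# CF: a service key is a binding that isn't attached to an app
cf create-service-key my-service my-key
cf service-key my-service my-key      # prints the credentials JSON the broker returned

# Kubernetes: a ServiceBinding materializes the credentials as a Secret
kubectl create -f - <<EOF
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-binding
  namespace: default
spec:
  instanceRef:
    name: my-service
  secretName: my-secret
EOF

kubectl get servicebindings           # my-binding should show Ready
kubectl get secrets                   # my-secret holds the credentials
```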
So a service binding in Kubernetes is actually not a binding between an app and a service instance; it's just a set of credentials, just like a service key in Cloud Foundry. So again, we're going to take a look at the YAML that we've prepared for this. This is a ServiceBinding; we're going to call the binding my-binding, we're going to reference the service instance called my-service, and we're going to create a secret from this binding called my-secret. So when we create this binding, it's going to go and create a Kubernetes secret called my-secret, and we can then use that secret in our app or wherever we want to use it. We do the kubectl create, like usual. The service binding is created, then we take a look at what we have for service bindings. Here you can see the service binding was created; it says status true, type Ready, so everything looks fine here in the middle of the page. And similarly, a secret was automatically generated when we created the service binding; at the bottom of this list here, you can see my-secret. You can then reference the secret in your deployment YAML file to use it directly. And again, we can take a look at the details of this secret. At the top here, where you see data, password, and username, those are just base64-encoded versions of what Matt has on the left side of the screen; if you decode them, you'll get the exact same values. Awesome. So if I jump back to the broker quickly, you can see there's now a binding represented in each of these. In that binding, it has a unique ID, and you can see the actual data that's represented by that binding. And now to clean up. In CF, all I need to do is delete that service key first, otherwise I won't be able to delete the instance, because it's still being used. That'll delete it, and then I can go and delete my-service. That should delete.
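That decoding step is plain base64. A minimal sketch with made-up credential values (not the ones from the demo):

```shell
# values as they would appear under 'data:' in `kubectl get secret my-secret -o yaml`
echo 'YWRtaW4=' | base64 --decode && echo        # prints: admin
echo 'cGFzc3dvcmQ=' | base64 --decode && echo    # prints: password
```

The trailing `echo` just adds a newline, since the decoded values don't end with one.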
And jumping back here, we should now see that this Cloud Foundry service instance is gone. And similarly in Kubernetes, we also just do some delete commands. First we delete the service binding, and that gets deleted; then we delete the service instance, and that's deleted as well. And it should be gone. If you look in here, we can actually see the last request was a delete request to the broker, and the last response was nothing, which means it was successful. Okay, so the second part of the demo, which is a bit shorter: I quickly want to show you this new bit in Cloud Foundry, demonstrating sharing a service instance across orgs and spaces. This was released very recently, I think in cf-deployment a month or so ago. This is a really cool workflow now, because before, if you had a messaging queue or a config server or a database, development teams basically had to make sure they were working out of the same space if they both wanted to share that same service instance. That wasn't great in terms of isolation; it was imposing our opinion on the way you should structure your Cloud Foundry orgs and spaces. So this enables many more of our Cloud Foundry users to have their own organizational setup, and then use service instances and allow developers to share them autonomously. Just like the OSBAPI model allows self-service for provisioning, now we're allowing self-service for sharing these things as well. This is behind a feature flag in Cloud Foundry, which is disabled by default. As you can see here, we have this new service instance sharing flag, which is disabled. I have all the power as an admin, so I can go and enable this thing, and that's it; now we should be ready to go. It's also worth noting that it doesn't only have to be enabled at the platform level, but also at the service broker level.
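Enabling the flag looks like the following sketch. If memory serves, the flag is named service_instance_sharing, but the exact name is worth double-checking against your CF version, and the enable command is admin-only:

```shell
cf feature-flags                                  # service_instance_sharing: disabled
cf enable-feature-flag service_instance_sharing
cf feature-flags                                  # now shows: enabled
```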
Some service brokers, ones that do things like call back into the platform to talk to particular applications, won't want to enable this feature. But most services, like Spring Cloud Services, messaging queues, and config servers, have already seen the benefits this feature provides to their users, so there are a lot of brokers that already have it enabled. So I'm going to quickly create a couple of spaces so I can better demonstrate this feature: I'm going to create dev1 and dev2, and jump into dev1. I still have my broker registered from earlier, the overview broker. So again, I'm going to create a service instance called myService. This time, if I run cf services, we can see that I have my instance created, and it's currently only accessible in dev1; if I target dev2 now, I won't be able to see anything. Since I'm a space developer in dev2 as well, I can now go and run cf share-service. As you can see here, this is an experimental command. In a few cf-deployment releases' time we hope to make this a GA feature, but for now it's experimental, so if you have feedback, please let us know sooner rather than later, so we can take it into account and make changes. So now that's shared. If I run cf service myService, you can see "shared with spaces", and I can see that it's shared into dev2 with zero bindings. Because obviously I created this instance, and I might be paying for it, we wanna make sure that at any time I can go and delete it. What that means is we have to make it very obvious if there are any bindings in other spaces, because if you go and delete the instance, those bindings are gonna break, and applications using those bindings will break too. So there are warnings when you do this in the CLI, which at least gives you the opportunity to go and find out if your colleagues are using the service instance, so you can handle that a little bit better.
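The whole share/unshare flow from the demo, as a hedged command sketch; the org and space names come from the demo, and the flag details are as documented for the experimental command:

```shell
cf create-space dev1 && cf create-space dev2
cf target -s dev1
cf create-service overview-service simple myService
cf share-service myService -s dev2        # experimental: share into the dev2 space
cf service myService                      # shows 'shared with spaces: dev2', 0 bindings

cf target -s dev2
cf services                               # myService is now visible here too

cf target -s dev1
cf unshare-service myService -s dev2 -f   # -f skips the 'bindings may break' warning
cf delete-service myService -f
```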
All right, for now, let's jump into dev2. Likewise, if I run cf services, we can now see that myService has appeared magically in there, and if I wanna find out a little bit more about it, I can run cf service myService. This time, I can see that it's shared from the OSBAPI demo org, in the dev1 space. So if there's a problem with the service instance, or if I wanna find out when it's being upgraded, I can go and reach out to the development team who operate in that space. All right, if I go back into dev1, I can now do an unshare-service to stop it being used from dev2. I've used the force flag; if I hadn't, I'd get the warning saying, "hey, this could cause some pretty catastrophic consequences, so you'd better think about that". And finally, like before, I can delete my service, and that's it, we're all wrapped up. So if we jump back in here: how do you find the SAPI team? You can find us on Slack in the SAPI channel. You can also email us at our email address. You can come talk to us today at 2:05 in the Coaches Corner down in the Foundry for our office hours, and we're also doing a hands-on lab. We did this lab yesterday, and we're repeating it again today; it's pretty much an interactive version of what you just saw with Kubernetes and Cloud Foundry. So if you wanna try that yourself, come join us at 2:45 in the hands-on lab corner, or just come up to us. We're gonna answer questions with the time we have left, but feel free to grab us after this as well if you have more questions. Awesome, so on that note, are there any questions from the audience? Hey. Okay, cool, so the question was: are there any plans to add parameters for sharing? It's a very, very interesting question.
So right now, no. We've actually left that to broker authors to decide: in that context object, a service broker is told whether a binding is coming from the space the instance was provisioned in, or from a different space. So what we've seen is, for example, messaging queues which allow apps in the provisioning space, so to speak, to push messages onto the queue, while apps in a receiving space can only go and receive messages. But so far that's up to the brokers. Okay, so there are a couple of things which might help with that. One is the actions proposal, so we might be able to use actions to configure what sharing looks like. You could provision a service instance, then say, "hey, service instance, what do you expose as actions?", and it could return a whole load of configuration, telling it what kind of sharing construct you want. But yeah, I'd love to hear more about that use case, so maybe we can chat afterwards, because that's definitely something we could look to bring into the core CLI experience. Hey. Sure, yeah, so the question was whether, last time this gentleman looked, sharing was still an experimental feature. So that is still the case. We're basically just waiting to get a little bit more feedback on use cases. We wanna make sure, before we mark it as GA and it gets out into the world, where we can't make breaking changes, that it's actually solving the problem we set out to solve. We're looking at a very short time frame to do that, so in the next couple of months. So far the feedback's been positive, and we haven't heard anything other than additional ideas, like this one, which would change things. So two to three months, and this should be a GA feature. Any other questions? Okay, if you have more questions, feel free to come up to us afterwards, and thank you for your time. Thanks a lot.