Alright, let's get started. So welcome, everyone, to Dueling Platforms. We're going to talk to you today about how to build services for Cloud Foundry and Kubernetes using the Open Service Broker API. So, to start with, a couple of introductions. Hey, my name is Alex, I'm a product manager at Pivotal working on Cloud Foundry, and I'm also involved in the Open Service Broker API PMC. Awesome. I'm the product manager for the Cloud Foundry Services API project, working on implementing a whole lot of services functionality in the Cloud Foundry Cloud Controller, and also on the Open Service Broker API, which we're now going to refer to as OSBAPI because it's much, much easier, as part of the OSBAPI PMC. So, the agenda for today: we'll talk about the problems that led to the service broker API spec coming about. Then we'll talk about the spec itself, followed by a live side-by-side demo using both Cloud Foundry and Kubernetes, and finally how you can all get involved in this exciting project. So, Alex, take us away. Yeah, so we're just going to run through some of the problems that enterprise IT faces, and that development teams working in enterprises face when trying to ship their software. So remember, the reason you're trying to build this awesome software is because you've got a group of users who are really keen to get all the latest awesome features you're offering through your product. They're dying to get their hands on this really cool new stuff you're writing. We then have our developer, who sat there writing the features, trying to get them shipped. And then you've got your operations team. Your operations team is often the people responsible for running this in production, the gatekeepers for deploying the services and deploying your applications. And really often you see everything go through them, usually with tickets. You send a ticket in, and a few weeks later you get a response back that says, oh, we need to go procure some more servers.
And it just takes a long time, and developers are just frustrated. So eventually your app is written. You give it to your operations team and say, hey, can you deploy this awesome app for me? So you send them a ticket. If you're using Cloud Foundry, you might be able to just cf push. In this case, we've managed to get your app code onto the server. But it's not much use on its own, because it needs a data service. You need some place to store your state, because you're writing these awesome 12-factor apps. So next up, you manage to procure yourself a database. This probably took a few more weeks. So great, you've got everything you think you need. But how do you get access, and how do you get credentials for this database? And maybe you're lucky enough to get them handed over in person, and they give you this nice little post-it note. Don't lose it, Alex. Yeah, don't lose this one. It says admin, admin. And you go in and configure your server. So I'll leave that with you, Matt, not to lose. The pressure. And this is a massive security fail, right? Because no one wants to be living in a world where you're having to pass around credentials through Slack or email. And what happens when you expose these, which you will do, because you leave them on a bus or something, right, Matt? No, that hasn't happened before. Maybe not. But yeah, it's a really bad security anti-pattern. So that's the real problem: how do you get credentials back to the development team in a secure way, so they can access this new service they've got? So now you've got the credentials for the application to use to connect to the database. You've got your server. You've got your database. Then you run into these corporate firewalls, because this is probably a separate team that's managing your networking concerns. And so you've got to get over this other hurdle and submit tickets to yet another team.
And all this time, your developers are just getting more and more frustrated: why can't I ship this awesome feature to my users? Yeah. So finally, you've got access to the database, all the firewalls are done, and maybe three months later you've managed to deliver some user value. Meanwhile, another development team decided, hey, we're not going to wait for this internal IT team to do anything. They're taking months. So why don't we take our credit card, go to Amazon Web Services, and build our own platform using all of the stuff that's available on Amazon? But now your development team has to run this, look after it, make sure it's updated and patched. And that's just distracting you from delivering user value. So there must be a better way to do this. And in Cloud Foundry, as many of you know, we've solved this problem. Matt's going to give you a bit more detail on how that was achieved. Awesome, thanks. So as we start to talk about the Open Service Broker API, let's cover a little bit of history first. This started out in the Cloud Foundry Cloud Controller as the service broker API. We had a lot of service brokers written against this spec, and it went through a couple of iterations. But at the end of last year, a new project management committee was formed under the Cloud Foundry Foundation to make this the Open Service Broker API. All existing service brokers transitioned over seamlessly; there were no changes. What this allowed us to do is have two new releases since then, with contributors from a whole lot of other companies and from other platforms as well. So it stopped being a Cloud-Foundry-only component and a Cloud-Foundry-only spec, and it has become an open spec that any platform that wants to integrate these kinds of third-party services can use. As it says on the landing page, it connects developers to a global ecosystem of services. These services can be databases. They can be configuration servers.
Messaging queues, autoscaling for apps, a whole load of awesome things you can connect to your applications that they may need to run in production. And it's worked on by all these companies. So we have IBM's Bluemix, Red Hat's OpenShift, Cloud Foundry itself, and a whole load of other platforms which are integrating this specification today. And services are being written for all of those platforms, and you only have to write your service once. So what is it actually? It's an open source API specification, just five endpoints, that's all it is. It's implemented by platforms in order to build a marketplace of services that operators can provision in their platform and then give application developers access to. And it's implemented by service providers, to allow containers and applications in those platforms to go ahead and use those services. And it's awesome because, and this is a bit of a mouthful, it allows platforms to offer self-service access to services for developers. So we don't have those problems Alex described earlier, with credentials and networking. The OSBAPI spec allows platforms, in an automated way, to offer these services to developers, for them to provision, connect to, and use whenever they want. It also automates the life cycle of services, including credential management. So you can give out different, unique credentials to different development teams. If you want one of those development teams to stop using your service, delete that specific set of credentials, and you don't have to worry about any of those security implications. So, getting a bit more technical, there are only three endpoints that you have to offer to be compliant with the spec. The first is the catalog endpoint. This returns a big JSON object which describes all of the services that your broker offers.
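As a rough illustration, a catalog response for a hypothetical MySQL service might look something like this. Every name and ID here is invented for the example; the authoritative list of required fields lives in the OSBAPI spec itself:

```json
{
  "services": [
    {
      "name": "example-mysql",
      "id": "11111111-1111-1111-1111-111111111111",
      "description": "A hypothetical MySQL service",
      "bindable": true,
      "plans": [
        {
          "name": "small",
          "id": "22222222-2222-2222-2222-222222222222",
          "description": "A small, shared instance",
          "schemas": {
            "service_instance": {
              "create": {
                "parameters": {
                  "$schema": "http://json-schema.org/draft-04/schema#",
                  "type": "object",
                  "properties": {
                    "nodes": { "type": "integer" }
                  }
                }
              }
            }
          }
        }
      ]
    }
  ]
}
```

The schemas block is how a plan advertises provision-time parameters, like a node count, that an app developer can set.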
And each service is composed of one or more plans. Typically, let's say it was a MySQL service: in a plan you could have a small plan, a medium plan, a large plan, with different configuration settings like memory and CPUs. This could correspond to whatever infrastructure you're provisioning your service on, but the spec doesn't require that; it's totally infrastructure agnostic. Plans can also have JSON schemas used to describe configuration parameters. So let's say you want some things to be controlled by your app developer, say, the number of nodes in a cluster. You can actually offer that to your application developers using a JSON schema, and they configure it at provision time. Which leads on to the next endpoint: creating a new service instance. With one PUT request you say, hey, service broker, please go and create me one instance of your underlying service and tell me when it's done. This can happen asynchronously, and at some point the broker will tell the platform it's ready. The platform can then give that to the application developer to start using. And finally, you can generate unique credentials so you can access the new service. Whilst most service bindings, which is this concept of accessing the service, are unique credentials, they can also be other things. You can have a volume mount, so you can mount a remote volume into your containerized application to access files. You can also integrate something called a route service, which intercepts all the traffic going into your application, so you can do things like authentication, checking headers, and some other magic in there as well. And that's it. So, three endpoints and you can write a service and start making some money. This is what it looks like today. This is the GitHub page. We're on version 2.13. You can go here anytime, and you can use the Projects tab to see what the group's currently working on. It's pretty active at the moment.
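To make those endpoints concrete, here's a hedged sketch of what a provision call looks like on the wire. The URL, GUIDs, and credentials are all made up for illustration, and the exact required fields are defined by the spec:

```shell
# One PUT per new service instance; accepts_incomplete=true tells the
# broker it is allowed to provision asynchronously and report back later.
curl -u admin:password -X PUT \
  "https://overview-broker.example.com/v2/service_instances/33333333-3333-3333-3333-333333333333?accepts_incomplete=true" \
  -H "X-Broker-API-Version: 2.13" \
  -H "Content-Type: application/json" \
  -d '{
        "service_id": "11111111-1111-1111-1111-111111111111",
        "plan_id": "22222222-2222-2222-2222-222222222222",
        "organization_guid": "example-org-guid",
        "space_guid": "example-space-guid",
        "parameters": { "nodes": 3 }
      }'
```

The `parameters` object is where the JSON-schema-described, developer-supplied configuration travels.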
And yeah, the two key platforms integrating this right now are Kubernetes and Cloud Foundry, and we're hoping to see many, many more in the future. We really think that the power of this specification is the fact that it is so open, and that service providers only have to create their service once and then any platform can make use of it. Yeah. So as we've probably all heard in the keynotes this morning, lots of people think it's Cloud Foundry versus Kubernetes, battling it out to the end. And as you saw from our talk title, many people thought, oh, this is going to be a cowboy Western kind of shoot-'em-up, with the platforms going up against each other, platform on platform. But actually what we're saying is, no, we don't think we need to be fighting like this. Using the Open Service Broker API, you can connect the right workloads together, whether they're running on Kubernetes or Cloud Foundry, and the broker API is the compatibility layer in between. So we think it's going to look a lot more like this in the future. And yeah, these are good times. OK, so on to the fun bit: the live demo. I just want to show this slide first to give you a bit more context about what we're going to show you. We're going to have one Cloud Foundry environment with a CF application running. We're going to have one Kubernetes environment with the Service Catalog project, and that's going to have a Kubernetes pod up and running. And we're going to have one service broker, which could be deployed anywhere; in this case, it's going to be deployed as a Cloud Foundry application. We're going to show you how both the Cloud Foundry and Kubernetes environments can talk to that same service broker to provision services. Yeah. Cool. Right, let's go. Yeah, wish us luck. Live demos always work. All right, so the first thing we're going to do. On the right-hand side of the screen, we've got a Cloud Foundry environment.
And on the left-hand side of the screen is a Kubernetes environment. So firstly, I want to show you this broker that we've deployed for today. We can see we have this broker called overview-broker-cfsummit. This is a dummy service broker; it doesn't have any kind of backing service like a database behind it, but it's going to give a visual representation of what's going on with the service broker spec. So we can see that that's deployed. If I quickly show you it here, this is it running. Basically, all the service instances, service bindings, things like that, are going to appear on this web page. Every service broker can offer a dashboard like this to developers and operators. So for example, for a database, you could actually have additional settings here which don't have to go through the specification. All right, so let's go. In Cloud Foundry, we have this concept of a marketplace. This is what app developers can see: a list of services and plans that they can provision whenever they want. This is the self-service access model we were talking about. So you can see right now I've got no service offerings found. Great. In Kubernetes, we have a project called Service Catalog, and the Service Catalog project aims to produce a similar set of behaviors to the Cloud Foundry marketplace experience. But here, we describe the resources that we're interested in retrieving from the Service Catalog. So you can say, we're interested in seeing service brokers, and we're interested in seeing which services they offer. Because remember, one service broker can offer many service offerings. OK, so it looks like we haven't got anything in there either. So far so good. So the first thing we have to do in Cloud Foundry, to give app developers access to the services that broker provides, is create a service broker.
So to do this, we just give a name for our service broker, some basic auth credentials, a super secure admin password, and then the URL of where that service broker lives. While in this case it's actually running on a Cloud Foundry environment, it could be deployed as a standalone app on AWS, Microsoft Azure, wherever you want. All right, so that's going to create it. There's also a fairly advanced permissions model implemented in Cloud Foundry. So in order to actually make sure our app development teams can access this, we're going to run a command called cf enable-service-access. You can restrict this to give only particular development teams access, using orgs and spaces, but for now we'll enable access for our entire environment. All right, and that's ready to go. Great. So we want to do the same thing in Kubernetes. But in Kubernetes, we describe the resource that we want the service catalog to create. So all those YAML fans out there, get ready. We write a simple bit of YAML that gives the name we're going to call this broker; we're going to call it overview-broker, very similar to Cloud Foundry. And we provide the URL that the service catalog needs to go fetch the catalog from. OK, so let's do this. We're going to submit this resource to be created. OK, and it looks like we have a broker. So, back to the CF marketplace. I'm an app developer; what can I see now? Awesome. I can see my overview broker. I can see the simple plan that it offers; it only offers one plan. And I can see a basic description of that service. Great. Let's see if this has worked in Kubernetes. OK, so we're going to get the service classes, and we can see we have one called overview-broker-cfsummit. Let's take a look at that in a bit more detail. So we bust out some more YAML, and at the bottom you can see the description of the simple plan. So it looks like we're ready to go.
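The two registrations just described might look roughly like this. The names, password, and URL are the demo's placeholders, and the Kubernetes API group and kind follow the v1alpha1-era service catalog, which has changed across releases:

```shell
# Cloud Foundry: register the broker, then expose its plans to developers.
cf create-service-broker overview-broker admin password \
  https://overview-broker.example.com
cf enable-service-access overview-broker-cfsummit  # takes the service offering name

# Kubernetes service catalog: describe the broker as a resource and submit it.
cat <<EOF | kubectl create -f -
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: overview-broker
spec:
  url: https://overview-broker.example.com
EOF
```

Either way, the platform turns around and fetches the broker's `/v2/catalog` to populate its marketplace.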
OK, so again, we've got the service broker up and running, and we've got it added to the marketplace. That was pretty easy. Now, I'm an app developer, and I actually want to create a service. So I just run one command, choose the simple plan, and give it a name. You'll see that this can work synchronously or asynchronously; in this case, it's going to return straight away, saying, OK, that service is being created. If I pop back over to the overview broker dashboard and refresh, we can see here it's been created. We can see it's come from a Cloud Foundry environment, plus a whole lot of other information that's automatically set when you use the spec: things like which service and plan ID were used, and what configuration parameters the application developer passed in. This is where you could do things like number of nodes or memory limits. And there's also a bit of contextual information, because it's an open spec: we have the Cloud-Foundry- or Kubernetes-specific information in this object here. In Cloud Foundry, that's an org and space GUID. I can also see this through the CLI: if I just check out cf services, we can see my service sitting there, waiting to be used. OK, so I also want a service instance on Kubernetes. Let's see how I can do that. First of all, we're going to create a new namespace. Namespaces in Kubernetes are similar to the org and space model in Cloud Foundry: a way to dedicate resources to a particular place and isolate them. So let's create this namespace. And then we're going to request, again, that we create a resource. The resource here is an overview instance. We're going to associate that with the development namespace, and we're going to create this also on the simple plan. OK, so we submit the request into the service catalog, and the instance has been created. Let's just check that that happened.
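The instance request on the Kubernetes side is, again, just a resource description. A sketch, with the caveat that the service catalog's kinds and field names were still in flux around v1alpha1:

```yaml
# A service instance request, scoped to the development namespace.
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  name: overview-instance
  namespace: development
spec:
  serviceClassName: overview-broker-cfsummit
  planName: simple
  # Optional parameters, validated against the plan's JSON schema if one exists.
  parameters:
    nodes: 3
```

The binding resource later in the demo follows the same pattern, adding a secret name for where the credentials should land.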
Great, so we've now got one instance in this broker that was created from Cloud Foundry, and one instance coming from Kubernetes. Cool, so finally, we can check that this instance is recognized in Kubernetes as well. And you can, again, request the full description of the service instance. This gives you some interesting information about when it was created and its status. You can see at the bottom that the message says the service instance was provisioned successfully. This means you can check on asynchronous behaviors: if you've got a service broker that does asynchronous provisioning, you can tell when the resource is ready to be used by your pod, or however you're going to utilize it in Kubernetes. Awesome. I think one thing that's probably worth pointing out here is that when we're talking about provisioning service instances, that doesn't necessarily mean we're calling into some kind of infrastructure saying, hey, spin up a new VM and give me this brand new thing. You can have multi-tenant services, right? So you could have an existing MySQL server, and provisioning a new instance actually means creating a new database inside it. And this can work for a whole load of other services as well; same thing with configuration servers. All right, so we've got our service instances up and running, and now we want to talk about service bindings. If I check the apps that we've got deployed, we can see I've got this very suitably named extremely-basic-node-app running. This is just a totally dummy app. It does nothing, but we're going to use it to show what happens in Cloud Foundry when we actually bind the service instance to an application. So with one command, I give it the name of my app and the name of the service that we've created, and we can see that's going to bind. And the way this works in Cloud Foundry is we inject the credentials into environment variables in that application.
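That binding step is a single command. The instance name below is a placeholder, since the demo doesn't say what the instance was called:

```shell
# Bind the app to the service instance; Cloud Foundry then injects the
# broker-returned credentials into the app's VCAP_SERVICES environment variable.
cf bind-service extremely-basic-node-app my-overview-instance

# A running app only picks up the new environment on a restage or restart.
cf restage extremely-basic-node-app
```

`cf env`, used next in the demo, shows the computed environment, including `VCAP_SERVICES`, without needing the restage.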
So if I run cf env, which will dump out the environment for that app, you can see we now have a super secure credential set injected into the container running this application. You can see here it's called overview-broker-cfsummit, with the username and password, admin and password. We can also see these represented in the overview broker dashboard. Down here is a new binding. Again, it gives you a bit of contextual information: what app GUID, for example, it's connected to. So the broker, if it wants to, can know the Cloud-Foundry-specific information about where that binding is being created from. Cool. So I want to do the same thing in Kubernetes, but I don't want to bind it just yet to my application; I want to create a secret. Kubernetes has a notion of secrets that can be put into a namespace. So let's have a look at how we would do that. Okay, so again, we've described our resource. We're describing the binding name we want to create, the namespace we're creating this binding into, and then what we want the secret to be called. This is how, later on, you could bind this secret into your Kubernetes pods. So let's, again, send that resource request into the Service Catalog. And the binding's been created. Matt, do you want to prove we're not faking this one? I don't think we are. There it is. There we go, magic. Cool. Okay, great. So let's check what we can see in the Kubernetes environment. I'm going to ask to get the binding, and we can see that it has been created successfully and is ready to use. So that should mean our secret is in the development namespace. So let's get the secrets. And you can see here we have an overview-credentials secret, which is a type Opaque object. So let's see what's inside those credentials. We should see something very similar to what Matt just showed you in the Cloud Foundry environment.
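One thing to know before looking inside the secret: Kubernetes stores secret values Base64 encoded, so they won't visually match the plaintext credentials. Decoding is a one-liner; the encoded strings below are illustrative examples rather than the demo's actual data:

```shell
# Kubernetes secret values are Base64 encoded; decode them with base64 -d
# (GNU coreutils flag; on macOS the equivalent is base64 -D).
printf 'YWRtaW4=' | base64 -d      # prints: admin
printf 'cGFzc3dvcmQ=' | base64 -d  # prints: password
```

You'd typically pull the raw values out with something like `kubectl get secret overview-credentials -o yaml` first, then decode the fields you need.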
So you can see here that we've got data, and we've got a username and password. Which don't look like the same username and password I just saw. But this is Base64 encoded, so it's easy to get back to the original if you need to. All right, so that's about it. We've created our environment. We've given our app development teams truly self-service access. They've created whatever service they wanted to create. They've had credentials injected, or created secrets, to get access to that service. And now, in the spirit of full self-service, they can clean up after themselves. So they can delete their bindings, and then they can delete the service instances that they created. And if you take another look back at the overview broker, it should be empty. We're back to square one. All right. The live demo worked; that's the first time it hasn't gone wrong all the way through. So, thanks. Right, so yeah, this is how we're feeling right now. We're pumped. Right. Okay, so just a recap: this is the architecture you saw. You saw the user connecting, via Kubernetes and Cloud Foundry, to the same service broker, provisioning access to some service. And yeah, it's self-service access for everyone, no matter what platform you're using. What's really interesting is what this looks like in a multi-cloud world, because the broker API doesn't specify how you deploy these things. It's all about compatibility, and making sure the contract's clear between these various systems. So you could deploy your Kubernetes cluster to Google. You could have Cloud Foundry running on Azure. And you could have all of your on-prem, you know, big, heavy data services running on vSphere. All right, so the last bit of the talk: how you can get involved, now that you've seen an exciting demo. There are a few ways. Firstly, we have a weekly call at 9.30 a.m. PST; I think that's about 6.30 p.m. here.
And basically we dedicate 10 minutes at the start of each call to hear from people in the community. This is really nice, because we get people who are working on platforms that integrate services conforming to the spec, and also service authors who are writing services that want to conform to the spec. And the more of those use cases we hear about, and the things you're trying to do, the better we can make sure the spec evolves to handle the diverse range of services that are out there. You can also head over to our GitHub page. There's now a getting started guide there, with a few example brokers and some really cool libraries, like the Spring service broker, that you can use to get up and running in basically no time. And finally, there is a Slack and a Google group, which are pretty active, so please come along, ask us questions, and start contributing to the spec. Matt, have you still got the post-it note, or have you lost it yet? I don't know. Use the service broker API. I think that's the number one lesson here. I think we've got five minutes left, so I guess we can take some questions if anyone has any. Okay, in the back. So if you head over to the GitHub repository, there's a Projects tab, and in there we're planning for at least the next release, which will be version 2.14. The way this actually works in practice is that the various members of the PMC, responding to community interest after each release, raise the priorities and the things we want to focus on for the next release. So right now, for example, asynchronous bindings is a thing that will land in the next version of the spec. Yeah, so backup and restore has been an interesting one. Several people in the community have figured you could extend the service broker API by adding a /backup and a /restore endpoint.
And other groups looking at this say maybe that's not flexible enough for how we want to back up our particular service; it's not really clear what a backup means for every service, and you might want to start doing other things like listing backups, and it becomes a whole new subset of API commands. What we find is great about the service broker API is that it's so simple, and we're not sure we want to start complicating it with these service-specific things, because not every service will be backup-able or restorable. So there's a proposal that one of the community members made, called service broker actions, where you can define sets of actions associated with your broker, described in some kind of JSON schema or Swagger spec, as an extension mechanism to the API. These are just two of the options we've started to explore, but I expect this is going to come onto the roadmap, maybe for the next release cycle or the one after, as we try to solve this problem.
But just for context on how things get into the API: we're trying not to do too much up-front architecture. We want real use cases, where someone comes with a user problem as either a service broker author or a platform. What we then ask as a group is: is this a problem we want to tackle? If it is, we pick a platform, or several platforms, to implement it in some way and give us feedback. Can we provide the user value we expected with this feature? That's the first question we ask. And also: is the API design we've proposed efficient for delivering this value? Like, was it really hard to technically implement using this spec? We collect the feedback, and once the group is happy that we've addressed any concerns, we move it into the spec in the next release. So it's normally two release cycles before something gets promoted into the spec if it's quite a complex thing; sometimes we can fit it in in one release cycle, which at the moment we've been hitting about every three months. Yeah, if you see any issue or PR in the repo tagged with this validation-through-implementation phase, it basically means that it's being actively worked on and is likely to land pretty soon. An example of this right now: the spec works over basic auth, and we want to improve that, so we've changed the spec to allow out-of-band authentication mechanisms like OAuth to take place, and at some point we're going to bring that back into the spec once it's been validated. What we don't want to do is blindly make changes to the spec without both Cloud Foundry and Kubernetes implementing it, and also without service brokers implementing it for their services, so we know it really works. Yeah, cool. Do we have any other questions? Okay, great. Thanks for coming, everyone. Thanks.