Okay, hi everybody. Are we ready? First we have to make the fire code announcement: if there's a fire, the doors are in the back. Please exit calmly and look for an emergency stairwell, and if anybody who looks official tells you to do something, follow their instructions.

So, who are we? I'm Morgan, and this is Jonathan Berkhahn. We're both core contributors to Kubernetes Service Catalog, a Kubernetes extension project that implements the Open Service Broker API to let Kubernetes users consume services from service brokers. What he said. And as we all know, the CF Container Runtime is based on Kubernetes, and thus the service catalog works in it as normal.

I guess we've got to go through the agenda first. We're going to talk about service broker architecture and the Open Service Broker API, because that's the common open behavior everything is compatible with. Then I'm going to talk about the service catalog in Kubernetes and show at least the commands to install it. Then Jonathan is going to do a demo and show you how to run an app. We're only going to run it on the Kube side, but we'll have the commands so you can do a comparative look at what the same thing looks like in CF versus Kube. We have a couple of bullets on making your brokers platform independent when you implement them, and then obviously questions and answers. At the end of the slides there are links for all the stuff we're talking about; the slides have been uploaded, so you can go download them.

Forward? Yes, forward. So: Open Service Broker API, OSB API, as we like to say.
The reason this exists is that your cloud-native apps run in containers, so they don't necessarily live very long and they don't have persistent storage. They still need access to all the standard persistent services: databases, caching, the big list you can see. So how do we solve this problem when apps live in a system where they can die and come back, and run on one machine and then a different machine? We do it by creating a service abstraction.

And why would we do this? We want your app developer to be concerned with app development, not with all the infrastructure they attach to. The developer doesn't have to care where their MySQL is coming from; they just have it, along with the parameters and credentials to access it, in a way that's easy to use and at least partially standardized. And the cloud platforms, CF and Kube, SAP has one, and other people have compatible Open Service Broker platforms, don't have to care where the service is or what it's doing. If they're providing a managed platform, it lets them expose third-party services directly into the ecosystem without really caring what those services do, as long as they behave appropriately and conform to the OSB API.

So what is it? It's a specification with five operations: list the catalog, provision a service, bind to the service, unbind from the service, and deprovision the service, so the whole lifecycle of service management. Again, examples: SQL databases, message buses, Watson services. All sorts of services you might want access to but don't necessarily want to run yourself. You don't have to manage it, you don't have to look at disk, network, whatever. You just get a network endpoint to connect to.
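As a rough sketch, the five operations map onto HTTP endpoints like this. The broker URL, instance and binding IDs, and service/plan GUIDs below are all placeholders, and real requests also need the `X-Broker-API-Version` header and whatever auth the broker requires:

```shell
# Illustrative only -- BROKER and all IDs/GUIDs are made up.
BROKER=https://broker.example.com

# 1. List the catalog of services and plans
curl -H "X-Broker-API-Version: 2.13" "$BROKER/v2/catalog"

# 2. Provision a service instance (the platform chooses the instance ID)
curl -X PUT -H "X-Broker-API-Version: 2.13" -H "Content-Type: application/json" \
  -d '{"service_id": "svc-guid", "plan_id": "plan-guid"}' \
  "$BROKER/v2/service_instances/my-instance"

# 3. Bind: bindings are a sub-resource of the instance
curl -X PUT -H "X-Broker-API-Version: 2.13" -H "Content-Type: application/json" \
  -d '{"service_id": "svc-guid", "plan_id": "plan-guid"}' \
  "$BROKER/v2/service_instances/my-instance/service_bindings/my-binding"

# 4. Unbind
curl -X DELETE -H "X-Broker-API-Version: 2.13" \
  "$BROKER/v2/service_instances/my-instance/service_bindings/my-binding?service_id=svc-guid&plan_id=plan-guid"

# 5. Deprovision
curl -X DELETE -H "X-Broker-API-Version: 2.13" \
  "$BROKER/v2/service_instances/my-instance?service_id=svc-guid&plan_id=plan-guid"
```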
Boom, done. So the client side here is the platform; it talks to the broker. The platforms we have are Cloud Foundry and Kubernetes, those are the public ones; I know of many others that aren't public, but they do exist. The service side is implemented by the service brokers. The brokers actually implement these resources: list the catalog is `/v2/catalog`, provision and deprovision are `/v2/service_instances`, and as a sub-resource of instances we have bind and unbind. Everybody good? Am I going too fast? We've heard this before? Okay, good.

I went backwards. Okay, so now we're going to go through the flow with some nice boxes and arrows. Your cloud platform talks to the broker and does the list-catalog call, and it gets back a nice JSON document: services, which have plans. Then, when you actually want a service, you provision one: you call `/v2/service_instances/...`, which sends a provision request for, say, Foo, with parameters, to your service broker. Your service broker does whatever it does, you don't care, that's the whole point, and then it returns that Foo is created. There's a synchronous flow and an asynchronous flow; that's important, but eventually you get a service reference back. You get an ID, and that's the thing you use when the platform talks to the broker again. The platform will then expose into the client applications whatever it needs to, as appropriate. That's the part the spec doesn't concern itself with: Kube does secrets, CF does VCAP_SERVICES. That's probably what you're concerned about, but it's not part of the specification; that behavior is platform specific. So again, now create a binding: we have that ID from earlier, Foo, we give it to the broker, and it does whatever it needs to do to create a binding.
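The "services have plans" JSON from the catalog call looks something like the following. The service name, description, and GUIDs here are invented for illustration:

```shell
# Hypothetical response body from GET /v2/catalog -- all names and GUIDs are made up.
cat <<'EOF'
{
  "services": [
    {
      "name": "mariadb",
      "id": "svc-guid",
      "description": "A MariaDB database service",
      "bindable": true,
      "plans": [
        {
          "name": "default",
          "id": "plan-guid",
          "description": "A shared MariaDB server"
        }
      ]
    }
  ]
}
EOF
```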
In a case like MySQL, that would be: you said "provision me a MySQL," so now you have a database server. Creating a binding might mean "give me a new user and database," and what you get sent back is the username and password to access that specific piece of the database, along with URLs, dashboards, and all sorts of stuff.

So what are we talking about specifically? The platform is Kubernetes, CFCR, and inside of that we have the service catalog, which is the thing that implements the client side, the platform side of OSB, and that talks to all our usual suspects as service brokers.

So what is service catalog? Service catalog is an implementation of the Open Service Broker API. It's an incubator project in Kubernetes, started about a year, year and a half ago. It's been at the bleeding edge of a lot of Kubernetes features: things like the aggregated API extensions, RBAC, and very nice Helm charts that work. The cool thing about this is that it is native Kubernetes: when you actually install it, it looks as if it was there the whole time. It's installable at runtime, so you pull up the list of APIs and see only the normal APIs; then we install it with Helm, you list the APIs again, and boom, you see the service catalog APIs. So it's very cool, and you can use kubectl just as you would with anything else. You can use the dynamic client-go to access it, but we also provide a client-go specific to our resources.

So, install with Helm. Helm is the preferred way to install things in Kubernetes, as far as I can tell, and thus we have a Helm chart. Make sure you have Helm initialized (`helm init`). Helm itself needs access to things because it creates a bunch of stuff. I don't know if this is a good idea or not, but we give it cluster-admin in our documentation.
There's probably a less privileged role to bind, but it's probably up to Helm to tell us at some point what we should be binding to. Then we have our own beautiful service catalog chart repository at svc-catalog-charts.storage.googleapis.com, which contains the links to the tarballs with our Helm charts; it has both service catalog and the user-provided service broker. Then you install it wherever you want, with whatever name you want. We just do `catalog` in the `catalog` namespace because it's convenient, but it can be installed anywhere. A lot of people, I've heard, run it in either `default` or `kube-system`, because that's where the system resources live, and this is kind of a system resource, even though it's added on afterwards. Helm will create a whole bunch of objects: RBAC rules, services, pods, deployments, etc. They'll be serving that API, the aggregation layer links it in, the core API server picks up the new API server and puts its resources into the standard list, and you can access it like it was never not there. And I think that's about it for me; ready for the demo.

Okay, can everybody see fine? Okay, so what are we going to be doing for our demo? We're going to provision a MySQL database. We're going to connect an app to it that lets people vote on whether they like cats or dogs more. We're going to see that data flow through to the database instance. Then we're going to connect a second app to the same database instance that shows those votes in a pie chart. The two apps don't actually know about each other; they're just bound to the same database instance. And we'll see how we do that. This is a short rundown of the commands I'm about to run: the Kubernetes versions are on the bottom, the CF ones are on the top. This assumes I've already used Helm to install service catalog on my CFCR cluster, and I've already added the broker.
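The install steps just described come down to a few commands. This is a sketch based on the service catalog docs of that era (Helm v2 with Tiller); the cluster-admin binding is the shortcut mentioned above, and release and namespace names are the conventional `catalog`:

```shell
# Give Helm's server side (Tiller) cluster-admin, as the docs suggest
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
helm init

# Add the service catalog chart repository and install the chart
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm install svc-cat/catalog --name catalog --namespace catalog

# Afterwards the new API group shows up alongside the core ones
kubectl api-versions | grep servicecatalog
```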
And that broker can be wherever; for me, it's just running in a pod locally on my Kubernetes cluster. So the first thing I'm going to do is use svcat, which is a kubectl CLI plugin that we, the service catalog team, have written to act as a friendly front end for the service catalog. Otherwise you'd just see me run `kubectl create` five times in a row, and it wouldn't make much sense.

So first I'm going to provision the database. This is the Kubernetes equivalent of Cloud Foundry's `create-service`. It's going to go talk to the broker: "hey broker, provision me a database." I'm going to supply a class name, MariaDB, and a plan name, default. Those correspond in CF to the name of the service and the name of the service plan; they're the same thing. Then I'm going to tell it to create a binding for my service instance, and I'm going to pass it a secret name.

This is where the abstractions used in Kubernetes and Cloud Foundry differ a little bit. In Cloud Foundry, when you create a binding for a service instance, it's explicitly attached to a running application: it says create a binding for this service instance and inject those creds into my app via VCAP_SERVICES. Kubernetes has the notion of a secret. This is a core resource type in Kubernetes; it's not part of the service catalog. It's basically used for holding secrets. If you have sensitive credentials, usernames, passwords, encryption keys, that sort of thing, you can put them in a secret and then attach that secret to the things that should use it, without giving everything full access to it. So say you have an encryption key you use to log into something, and you have an application that requires it, but you don't want everyone who has access to your app to be able to see it: you put it in a secret, attach the secret to your app, and away you go.
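A sketch of that svcat flow, with the CF equivalents alongside. The instance, binding, and secret names here are my own placeholders, not necessarily the ones used in the demo:

```shell
# See the classes (services) the broker advertises -- CF: cf marketplace
svcat get classes

# Provision an instance; class and plan match CF's service name and plan name
# CF equivalent: cf create-service mariadb default mydb
svcat provision mydb --class mariadb --plan default

# Bind the instance; the credentials land in a Secret with the given name
# CF equivalent: cf bind-service my-app mydb
svcat bind mydb --secret-name binding
```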
So when we create a binding in Kubernetes, rather than injecting the credentials directly into a running pod or deployment, we stick them in a secret, and then we manually attach that secret to our pod or deployment. So I'm going to go ahead and start us off. Again, I'm using the svcat plugin; it provides some friendlier commands, so we actually get readable output instead of just pushing and reading YAML files over and over. We can see here I have two services installed in my service catalog: MariaDB and Sample Service 2. I'm going to go ahead and use svcat to provision myself an instance, and I can see that it was created and it's ready. Then I'm going to bind it, passing, I'd forgotten, it's `--secret-name`.

Now, this is also where our flow differs a little bit between Cloud Foundry and Kubernetes. In Cloud Foundry you kind of need to have an app already running, because like I said, bindings are attached to specific apps. That's not so in Kubernetes. So if I go ahead and get my secrets, I can see the one I just created up here, and if I wanted to, I could get that secret and see all the creds that were just created.

Now I'm going to go ahead and create my actual application. But before I do, this is the YAML for my application; that is not as much of it as I was hoping to show. I'm deploying it as a pod with just one single container, the simplest way I can do it. And in this pod declaration I have my secret ref. This says: when I create this pod, I'm expecting there to already be a secret named `binding`; take that information and stick it into this pod. So I'm going to go ahead and create that app, and we can see my app running. Then I set up the database, that's just a simple script that injects the schema for this game I'm about to use, and then I'm going to go ahead and put some votes in.
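The pod declaration described above might look roughly like this. The image name is invented; the `envFrom`/`secretRef` stanza is the part that pulls the binding's secret into the container as environment variables, and `binding` matches the secret name passed to svcat:

```shell
# Hypothetical pod spec -- the image is made up for illustration.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: vote-app
spec:
  containers:
  - name: vote-app
    image: example/vote-app:latest
    envFrom:
    - secretRef:
        name: binding   # secret created by the service binding
EOF
```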
So I have this running: I used the binding to create the secret and injected the secret into the pod. Now I'd like to see what I voted for, so I'm going to go ahead and create a second application that does the same thing. Now, if you were doing this in Cloud Foundry, I would push an entirely new app. I already have my service instance, that still exists, so I would simply bind it to my second app and then restage it so the creds get injected. In Kubernetes, because bindings produce secrets, I don't have to create a new binding: I can just reuse the same binding I already created. So over here, in my pod specification for the second app, we can see at the bottom that I'm injecting the same secret. This saves me a little bit of busywork; I don't have to create umpteen bindings for umpteen apps. I'm going to go ahead and create that, see it was created, wait for it to come up, it's up and running, and we can see apparently more people like dogs. I'm going to put some more votes for cats in there, and we can see it updates. So that's the basic workflow: how do I push a pod, how do I connect to a service, how do I create bindings. It's pretty close to the Cloud Foundry model, not exactly the same.

So the only part we have left is for me to talk briefly about some tips for creating platform-independent brokers. This is actually pretty easy to do in this day and age. The OSB API was originally spun out of Cloud Foundry, and it originally contained a lot of CF-specific assumptions: things were named in terms of organizations and spaces, which are a Cloud Foundry idea but don't necessarily make sense on other platforms. So there's been a lot of work in the past year moving the OSB API away from this. We've replaced orgs and spaces in the spec with a generic context object that carries the name of the platform and then arbitrary platform-specific fields.
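A sketch of what that context object looks like in a provision request body, following the spec's platform profiles. The GUIDs and namespace here are placeholders:

```shell
# The same PUT /v2/service_instances body, as sent by two different platforms.
# Cloud Foundry identifies itself and still passes its org and space:
cat <<'EOF'
{"context": {"platform": "cloudfoundry",
             "organization_guid": "org-guid",
             "space_guid": "space-guid"},
 "service_id": "svc-guid", "plan_id": "plan-guid"}
EOF

# Kubernetes passes its namespace instead:
cat <<'EOF'
{"context": {"platform": "kubernetes", "namespace": "default"},
 "service_id": "svc-guid", "plan_id": "plan-guid"}
EOF
```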
So for Cloud Foundry, it can still use orgs and spaces. Over here in service catalog, we're currently in the process of implementing namespaced brokers, so if you're familiar with Kubernetes namespaces, you can use them, and that would go on the context object.

Currently, brokers are intended to be stateless. That might be changing in the near future. At least among the service broker authors we've talked to here on service catalog, we've found that although brokers are intended to be stateless, most broker authors are storing some state somewhere, and we might be moving to taking advantage of that in the very near future. We literally just had discussions last week about starting the design of version 3 of the Open Service Broker API. We didn't really make any hard and fast decisions, but the general way the wind seemed to be blowing is that we're going to be moving towards more stateful brokers in the future. Mainly because that would allow us to implement GET endpoints, so you could ask a service broker: what service instances do you have provisioned? What bindings do you have? And then the platform could make decisions based on that. So that's just something that's going to be happening in the very near future that we'd like you to be aware of.

We encourage you to write your own service brokers. We ourselves literally just wrote one for the hackathon: an Ethereum blockchain service broker. It was a lot of fun. So we encourage you to try the same for whatever services you'd want. And that is it. Does anyone have any questions? I saw you, but this guy is closer, so let's take it from the microphone; we'll get to you in a second.

Just a question about the bindings: are copies of them placed in the pods? Or what happened with the secret you created? So, I created a secret. Or rather, when I created a binding, service catalog itself created that secret and populated it for me. Let me see if I can find my terminal again, and I can show you what that secret looks like.
What am I supposed to be running? Oh, right. Okay. So up here in this data we have the information I actually got back from the broker. In Cloud Foundry, this is what would be injected via VCAP_SERVICES; for us, it's just stuck in the secret. The secret is basically a hash map, so it has keys and values, and these keys become the environment variables that get injected into my pod. So in my application I would read the environment variable `database` and expect to find my database name there; I would read the environment variable `username` and expect to find my username there. No, it lives outside the pod. That's why, when I created my second app, I actually used the same binding, which used the same secret, so it got injected without having to create another binding. Are you good? Okay.

Yeah, so: the project is still an incubator and the things it installs are beta. What does that mean in terms of stability and forward-looking API compatibility? Okay, so we're assuming you're talking about service catalog specifically? Yes. Okay. So we are indeed an incubator project, which means yes, we are technically beta. However, we are by far the largest incubator project in Kubernetes, and we are, as we speak, gearing up for our GA release, at which point we will no longer be beta. So yeah, pretty much, we're here to stay. Anybody else? Okay.

I was just curious, in that demo, where did that service broker actually live and provision the MySQL instance? So really, I've got you, sorry. So really what's happening is that it's running in a pod on my Kubernetes deployment itself. This here is the broker, and then every time I create a service instance, I believe it spins it up in another pod; that's what this big horrendous-looking GUID is. So in this instance it's all running locally on my laptop. But because the OSB API just specifies an API, the thing that implements the API can live anywhere.
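Inspecting such a binding secret would look something like this. The secret and key names assume the demo's `binding` secret; note that values in a Kubernetes secret are base64 encoded:

```shell
# Show the keys and (encoded) values the binding produced
kubectl get secret binding -o yaml

# Decode a single credential, e.g. the username
kubectl get secret binding -o jsonpath='{.data.username}' | base64 --decode
```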
This could be running on the internet. It could be running somewhere else in my data center. It could be anything I can reach over the network. And while in this case it deploys Kubernetes pods, it could deploy VMs; it could have a monkey that builds computers and attaches them to the network for you. It could be anything.

Okay, and is there authentication in front of that? Would you have to get credentials for it? So in this specific instance, no, this is an unauthenticated broker, because I was lazy and didn't want to do that for my demo. There is an authentication mechanism for brokers, yes. When you add the broker to service catalog, you can specify credentials, and then you need those credentials to perform actions. Are you good? Okay.

This is all within Kube's space, and I saw you had a little reference to PCF. I've seen some demos where people actually have the broker, I don't know if it's living in PCF or in Kube; I guess it can live anywhere, theoretically. So I could theoretically spin it up as a Docker container in PCF. Do you have an example, on GitHub maybe, of how I could create a service within PCF that calls the service broker here in Kube? Is there any kind of example that combines the two, so I could have a marketplace within PCF, sort of like your marketplace here in Kubernetes, where I can make calls to Kubernetes? So I could spin up the database pods in Kubernetes but actually have my app live in PCF? So, for instance, if I had a Cloud Foundry, I don't have one running on my laptop right now: deploy Cloud Foundry, deploy an app on Cloud Foundry, add this broker that's running in my Kubernetes to Cloud Foundry. It would be able to see its catalog; it would see that you offer MariaDBs.
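The broker-registration-with-credentials step mentioned in that answer might look like this in service catalog. The broker name, URL, and secret contents are invented; the `authInfo` block referencing a basic-auth secret is the mechanism being alluded to:

```shell
# Store the broker's basic-auth credentials in a secret
kubectl create secret generic broker-creds \
  --from-literal=username=admin --from-literal=password=s3cret

# Register the broker, pointing service catalog at those credentials
kubectl apply -f - <<'EOF'
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: example-broker
spec:
  url: http://example-broker.brokers.svc.cluster.local
  authInfo:
    basic:
      secretRef:
        name: broker-creds
        namespace: default
EOF
```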
I could then, in Cloud Foundry, say `cf create-service`; it would cause another pod to be spun up here, and then I could attach that service to my Cloud Foundry application, yes. The OSB API supports that completely natively. I could add as many platforms to this broker as I want, as long as they can actually reach it over whatever network they have. You do have to worry about the ingress into Kubernetes, though, to actually attach things. Yeah, I don't think these particular pods have services created for them that are routable outside of the Kubernetes cluster, but if I wrote my broker in a way that allowed that, it could work.

Sure, so: the reference implementation of the service broker API that Cloud Foundry provides, cf-mysql-release, is a deployable BOSH release. By default it deploys a service broker and a five-node replicated MySQL cluster. So that would be a BOSH release running as VMs on your network somewhere. You could add it to your Cloud Foundry, you could add it to your Kubernetes, and you'd be able to see it from both of them and create services. That would be the first link on our resources page. Although, again, I stress: all service brokers, if they're compliant with the spec, can do that. Anything else?

Resource page. Oh, right. Please come participate in open source. As we said earlier, we're thinking about this stuff every week, so if you have any needs and you want things, if you're a broker author and you really need, I don't know, GET endpoints or some other weird thing, please come contribute and converse; we'd be ecstatic to hear the actual opinions of people who actually use this junk. And then a couple of other links: the second one is the cloudfoundry.org page on the CF Container Runtime, so you can see what it is. It's really just a Kubernetes that's deployed by BOSH.
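The Cloud Foundry side of that cross-platform scenario would be the standard CF CLI flow. The broker name, credentials, URL, and app name below are placeholders, and the broker URL has to be routable from CF:

```shell
# Register the Kubernetes-hosted broker with Cloud Foundry (admin operation)
cf create-service-broker mybroker admin s3cret http://broker.example.com
cf enable-service-access mariadb

# Provision and bind, mirroring the svcat flow from the demo
cf create-service mariadb default mydb
cf push my-app
cf bind-service my-app mydb
cf restage my-app   # so VCAP_SERVICES picks up the credentials
```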
And then the last link is the thing we actually work on, Kubernetes service catalog; that's the main hub page for the project. We have instructions on how to install it, and it should work seamlessly with your CFCR deployment if you want to try service catalog. Okay, there are no more questions. Thank you for coming.