Okay, I figure we can go ahead and get started. Hello, and thank you for coming to the SIG Service Catalog session. My name is Jonathan Burkan, and I'm a co-chair of SIG Service Catalog as well as a maintainer. This is the only session we have; we only got one slot, so I'm going to be covering things fairly quickly. I'll start off by introducing what Service Catalog is, what it does, and why you would want one. Then I'll do a brief demo, just the simple happy path of using it. After that I'll dive a bit more into how it actually works, and how the Open Service Broker API (OSB API), the API that lies behind it, works, and then quickly go over what the SIG has done recently and our future plans.

So what is Service Catalog? In the world of cloud-native applications today, the applications that run on Kubernetes are not exactly islands. While they have their own functionality they're responsible for, they're often dependent on external services. The simple, archetypal example we use is a database: your app probably has some data that needs to be persisted beyond the lifetime of each ephemeral container, and you need some place to put that data, usually a relational database, MySQL, Postgres, what have you. These services are often fundamentally required by your application. It's not like you can finagle some way around requiring them; they're something you absolutely have to have, or else your application will not work. As an application developer, you need these things, but it would be nice if you didn't have to provision them, manage them, and allocate all the resources for them yourself. It would be nice if someone more experienced, with dedicated responsibility for that sort of thing, would handle all of this for you. That is the problem Service Catalog is attempting to solve.
So I have my app, and that's really the only thing I care about; it would be nice if the use and management of these services could be abstracted away from me. Before I go any further, I'm going to define what exactly I mean when I say services, because especially within the realm of Kubernetes, that word is overloaded. Kubernetes itself has a notion of Services, which is similar to the generic meaning of the word in that it's something reachable someplace on the network, at a specific address within the cluster, that's discoverable via DNS: I get a host name, I go look for it, and eventually I arrive at whatever is running at that address. At the same time, we also have a notion of platform-managed hosted services, like object storage and various flavors of databases, and then external services, which is probably where it starts to get a little more strange. That can still be the archetypal thing like a relational database, but it could also be a subscription to some API like Twilio, a subscription to billing services, and so on. It's really those second two categories I mean when I say services: not necessarily a thing running at a DNS-discoverable address within the cluster itself, but something my application needs, not necessarily within the cluster.

And creating and maintaining these things can be non-trivial. A relational database is something all of us are probably familiar with, but there can certainly be stranger and more esoteric services that, again, our application developers don't really want to be in the business of provisioning and maintaining themselves. We've all probably had some experience with databases, but something like Twilio, which is an API for making phone calls: again, I really don't want to have to handle that myself. Or something mission-critical like billing services.
Again, I don't really want to have to handle that myself. And for any of these, even if I do have the actual code that runs it, managing the instances can itself be a non-trivial task. Once I have a database up and running, I have to control access to it. So I have a set of credentials; how do I maintain them? Do I keep them on a sticky note posted on my monitor? That's probably not a very good idea. I could put them in a Secret. That's a little better, but that Secret lives within your Kubernetes cluster, and you still have to control access to it. If you're in a multi-tenant cluster, you have to make sure the wrong people don't get access to it, and in the event that the credentials need to be changed or rolled, I have to go in and change all of that. This is all further compounded by the fact that this service, this thing I'm consuming, might not be deployed within the Kubernetes cluster itself. It might be something else running in the same data center, or something running on the internet somewhere completely different. And it's compounded again by the fact that each of these services is going to be managed and deployed its own way: if I have a relational database, there's some way to provision that, and whatever other APIs my application might depend on probably have their own different ways they're provisioned. As an application developer, I don't really care about any of this. It would be nice if we could shift the burden of all this provisioning and managing to the platform, so the application developer doesn't have to care about it. Kubernetes already has Secrets, which are a good way to encapsulate these credentials, so it would be nice if we could take advantage of that, because our application developers are probably already familiar with them. And that is what Service Catalog does.
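As a minimal sketch of that intermediate step, hand-managing database credentials in a Secret might look like the following. All of the names and values here are made up for illustration:

```yaml
# Hypothetical hand-managed credentials for an external database.
apiVersion: v1
kind: Secret
metadata:
  name: my-db-creds
  namespace: my-app
type: Opaque
stringData:               # stringData takes plain text; Kubernetes base64-encodes it
  host: db.example.com    # the service may live outside the cluster entirely
  port: "5432"
  username: app-user
  password: s3cr3t
```

Even with this in place, the pain points above remain: rotation, access control on the Secret, and multi-tenancy are all left to whoever maintains it by hand.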
So what if, as a user of Kubernetes, I had a command called marketplace that said: hey, show me all the things I could buy. And it came back with a list: okay, we have two flavors of database available in our cluster, MySQL and MongoDB, and they come in a couple of different plans. Once I know exactly what I want, I can say: provision me an instance of one of those, and bind that instance, which is essentially creating a set of credentials to access it. Then I can just consume those credentials once they're put in a Secret. I don't have to know or care where that database exists, who's running it, who's controlling access to it, or who's going to have to respond when the machine it's running on blows up. That's the promise of what Service Catalog gets you.

What are the benefits of this? The most obvious is probably ease of use, and that's really what we're aiming for: to relieve application developers of the burden of focusing time and effort on things that, while needed for their application to run, aren't really within their main purview, freeing them up to focus on the thing they should be focusing on, their application. It lets services be managed by the experts, who are hopefully experts in managing whatever their service broker provides. Additionally, because this is all contained within the normal Kubernetes workflow, it can be utilized via Helm charts or operators or what have you to automate the use of these resource types. Coming at it from the other angle, if I'm a service provider, if I sell databases, this is also very helpful to me, because it gives me an easy mechanism to provide my service to application developers who are interested in consuming it.
Additionally, while Service Catalog itself is the Kubernetes implementation of the OSB API, the thing that makes this magic work, other platforms-as-a-service also implement the OSB API so that they can consume these service brokers too. So if I'm interested in selling my databases, this opens up multiple markets to me, the service provider, which hopefully means more people can use them.

Okay, I'm going to exit out of this for a second and show a brief demo. I have running on a Kubernetes cluster here just a very simple guestbook application. It's a website: I can go in, type in some data, and it stores it for me. Currently, this is running in a pod with an in-memory data store, so it's just storing its state locally. Now I'm going to go ahead and kick the pod it's running in, and obviously that data is going to go away. My deployment re-creates the pod, and all the data I had stored in my in-memory data store has vanished, scattered to the far winds.

So I'm going to take a look at the services available from the brokers I have installed on my cluster. These are the services available from a single broker called Minibroker. That's a helpful tool I use a lot for demos: it's a service broker, so it offers services, but the actual backing implementation is really just Helm charts. When I request one of these, it deploys a Helm chart locally in the same cluster; that's the way this particular broker is implemented, but it's not a requirement. I'm going to go ahead and deploy a Redis instance and then attach it to that pod I have running. The class is what service this offering generally represents; this one is fairly straightforward, it's a Redis. The plan is which particular flavor that service comes in.
Because this broker is really deploying Helm charts on the back end, the plan here is the version of the Helm chart it's pushing. I can go back and look at the pods to see it getting created. Depending on what broker you're using, this could be something like this, where it's deployed in the same cluster, or it could be a broker on the internet that's provisioning something inside the service provider's data center. I don't have to know or care. So we can see a couple of pods that came up, and now I'm going to bind that service instance, which causes a secret to be created containing the credentials I can use to access that service. We can see the secret that popped out, so I'm going to attach that secret to my running deployment. When I go back and reload the app, it should be connected to my Redis, and this is all the information it's using to access that Redis instance I just created. Now I can type stuff in and it will get stored in that Redis instance. So that's a brief walkthrough of how easy this stuff is to use; that's the happy path.

Now, that example used Redis, a key-value database, and that's the archetypal example, like I said, but this can be used to represent a lot more. Pretty much anything that can be represented in credentials, which are JSON containing arbitrary fields, can be offered via a service broker: databases, subscriptions to any sort of API. There are even some slightly stranger brokers that are platform-aware and can reach back into the platform to provide internal platform services, such as load balancing or auto-scaling of the things attached to them. So while databases are the example we use, it's by no means limited to that.
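For reference, here's a sketch of what that demo flow looks like as raw YAML, since each CLI command is just manipulating Service Catalog resources. The class and plan external names, secret name, and image are assumptions for illustration; the real values come from what the broker advertises in its catalog:

```yaml
# The demo flow as raw Service Catalog resources (names are hypothetical).
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: guestbook-redis
  namespace: default
spec:
  clusterServiceClassExternalName: redis   # the class: what kind of service
  clusterServicePlanExternalName: 4-0-10   # the plan: which flavor (here, a chart version)
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: guestbook-redis-binding
  namespace: default
spec:
  instanceRef:
    name: guestbook-redis                  # bind to the instance above
  secretName: guestbook-redis-creds        # the Secret the credentials land in
---
# Attaching the resulting secret to the app, e.g. as environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: guestbook
        image: example/guestbook:latest    # placeholder image
        envFrom:
        - secretRef:
            name: guestbook-redis-creds    # each credential field becomes an env var
```

The broker decides which fields (host, port, password, and so on) appear in the credentials secret, which is why the last step just mounts the whole secret rather than naming individual keys.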
And while the command-line utility I used in that demo is svcat, the little tool we've developed for Service Catalog, everything I showed you is accomplished using normal Kubernetes patterns. If you want, you can do all of that by writing YAML files and CRUD-ing YAML up and down with kubectl.

Okay, so that's the basics; now I'm going to get into the nitty-gritty. I've said it a couple of times: the Open Service Broker API. What exactly is that? It's an open specification of an API for the automated deployment, management, and use of services. I've already explained why you would want this; this is the how. It specifies two sides of an API: one for the server side, which is called a broker, and one for the client side, which is the platform, in our case Kubernetes. It has three main actions, get catalog, provision instance, and bind to service, which we have represented in corresponding types in Service Catalog itself: ClusterServiceBroker, ClusterServiceClass, ClusterServicePlan, ServiceInstance, and ServiceBinding, as well as namespaced representations.

Going back to the archetypal example, a ClusterServiceBroker would be a MySQL broker, a broker that offers MySQL services. The class would be MySQL databases. That service class would then have plans belonging to it, which are different flavors of the database, say, 100-megabyte MySQL databases. When a user provisions an instance of that class and plan, that's what's called an instance: Jonathan's 100-megabyte MySQL database. And when we create bindings to that instance, each is a set of unique credentials to access it: a single username and password for accessing Jonathan's 100-megabyte MySQL database. Now I'm going to briefly step through the workflow of what actually happens.
The first thing that happens when you want to use Service Catalog is that you have to register a service broker. This is something that's done once, usually by an operator, and from then on, services offered by that broker are available in your cluster. When that happens, you create a service broker object in Service Catalog, and Service Catalog goes out to the broker and hits its get-catalog endpoint. That returns a list of all the classes and plans of services offered by that broker, which we store for later use.

At some later point in time, a user comes along and says: okay, these are the services available in Service Catalog, and I want to provision an instance of one of them. They do this by creating a new ServiceInstance object. Service Catalog goes to the broker and says: hey, create me a new database of class MySQL, plan 100 megabytes. And the broker does whatever it needs to do to make that happen; again, Service Catalog doesn't need to know or care. That user then deploys some application that needs to consume the service, so they create a binding by creating a ServiceBinding object in Service Catalog. Again, we forward that request to the broker, saying: create a binding to that instance you already have provisioned, and it returns a set of credentials. As I mentioned before, these credentials are arbitrary JSON; they contain arbitrary fields, which can vary a lot depending on the service and how the broker offers it. Usually it's some set of a username, a password, and a URL and port: where you can access the service and how you can log in.
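That first registration step is itself just creating an object. A minimal sketch, with a made-up broker name and URL:

```yaml
# Registering a broker: the name and URL here are hypothetical.
# A namespaced ServiceBroker kind also exists with the same spec.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: minibroker
spec:
  url: http://minibroker.minibroker.svc.cluster.local
  # Brokers that require authentication reference a secret rather than
  # embedding credentials here:
  # authInfo:
  #   basic:
  #     secretRef:
  #       name: broker-basic-auth
  #       namespace: brokers
```

Once this object exists, Service Catalog calls the broker's catalog endpoint and materializes the returned offerings as class and plan objects in the cluster.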
We take those credentials and create a secret with them in it, which the user can then attach to their app and use to access the service, wherever it's running. So that's what's actually going on behind those steps, and that's how Service Catalog works: the what, the why, and the how.

Now I'm going to go briefly into what the SIG has been doing over the past couple of months. We just recently released 0.2.0 of Service Catalog, a major feature release that was the official GA of namespaced resources. Previously, brokers, plans, and classes were only available as cluster-wide resources, which meant that when a broker was added to the Kubernetes cluster, it was available cluster-wide to all users. In the interest of allowing operators to restrict access to certain services per namespace, or even further, we added a couple of features, the main one being namespaced versions of those resources. Adding a cluster service broker is a very widely impactful action, which is why it was usually recommended to be restricted to cluster operators; but the namespaced versions of these resources can be installed in a single namespace and are much less disruptive to other users of the cluster, so there's a lot less reason to restrict creating and using those types. This also allows individual users of a cluster to manage their own brokers and add them for their own use. We also updated svcat, our CLI, to be intelligent about manipulating both of these resource types. Catalog restrictions were another feature we added; they're not really related to the namespaced stuff, but they're in the same vein, as a way to restrict access.
That was basically adding the capability to attach whitelists or blacklists to brokers, restricting which of the services a broker offers actually end up in the catalog. Okay, so that's what we just did; now I'll talk about what we're going to do.

Service Catalog is kind of unusual in that it's a pretty old project. We've been around for about three years now, which means that when we started, all the fancy things you can use today, like CRDs, didn't exist. Today, Service Catalog exists as an aggregated API server and an accompanying controller, which means we don't use CRDs. That said, we're a pretty perfect fit for what CRDs offer, so we're currently in the process of transitioning to being based on CRDs and getting rid of our API server. This is a lot of work, so it's slow going; it's the major feature we're working on for the next feature release, 0.3.0.

Other than that: because we represent the functionality of a server outside of Kubernetes, we occasionally have synchronization issues. A user can request some state by creating or deleting an object within Kube, and we have to attempt to reconcile that state with an exterior source of truth, the broker. This occasionally causes hiccups when the broker forbids an action after that action has already cleared within Kube. That's a source of constant little bugs, and we're still working on improving it. We're also adding a new feature called user-provided services, where we skip the first three steps of creating a broker and picking a class and plan.
This is a feature for when I have some service instance, like a database, that already exists out in the world, perhaps for legacy reasons, and I want to manage access to that existing instance through Service Catalog without having to have a broker.

Next, pod presets. Currently, the end result of using Service Catalog is that you create a binding and it creates a secret in a namespace with your credentials in it. Beyond that, it's really left up to the user: you have to attach it to a pod or a deployment and make use of it manually. We're working on improving that with a feature called pod presets, which would allow you to automatically inject a binding into a deployment or pod.

We're also working on improving our docs; that's a constant task, but with the namespaced service broker work and the transition to CRDs, we're hoping to clean them up a lot. And we're still a beta release, we haven't actually gone GA yet, but we're coming up on it pretty quickly; we're tentatively hoping for maybe the end of the year, because once we get CRDs done, that greatly simplifies how much we have to maintain and how much is left to do.

Before I take questions, I'm briefly going to list a couple of websites. svc-cat.io is our main docs website, where we tell you what Service Catalog is and how to use it. kubernetes-sigs/service-catalog is our main repository, which contains the code for the API server, the controller, the CLI, pretty much everything. openservicebrokerapi.org is the main website listing information about the OSB API spec, which we're an implementation of. And finally, if you're interested in contributing, we host weekly SIG meetings on Mondays at 9am PST, and we have people available on our Slack channel pretty much around the clock, around the world.
So if you're interested in contributing, or if you just want to know more, feel free to pop into one of those two channels and say hi. Okay, that's pretty much it from me. Does anyone have any questions? Let's just make sure we get everything on the recording.

Q: You mentioned pod presets. Could an alternative to Service Catalog be having a pod preset and an admission controller, and just getting rid of Service Catalog?

So an admission controller would be a replacement for user-provided services, but I don't know how you'd represent the full workflow with that. You're still going to have to interface with the broker, which is some exterior server that controls the provisioning of and access to these services, whatever they may be. With just an admission controller, if you got rid of Service Catalog, you wouldn't have the types to manipulate, so you would have to interface with the broker directly to get the credentials and then put them into a pod preset. And I wouldn't have a reasonable expectation that an application developer would sit there and write curl requests to the broker manually, so you'd have to implement something a bit fancier. At that point, you're basically just re-implementing Service Catalog in an admission controller.

Q: But pod presets are still there in core Kubernetes, and if this is going to be not in the API server but a CRD, it would be deprecated from the actual core, right?

Okay, sorry, it seems I might have forgotten to explain a bit. We're an extension project: while we have an API server, we are not in the main API server. We exist as an extra API server that's aggregated into the main Kubernetes API server, so we don't exist in the same code as the regular API server, and we aren't really held to the same restrictions. Okay. Thank you.
Q: Do you have a list of the brokers that are ready to be integrated with Service Catalog?

That's an explicit anti-goal of our SIG: we don't maintain a list of the available brokers, because there are way too many to count. I personally work for IBM, and I know on our cloud we have somewhere in the realm of 300-plus different services available just from our brokers, and we know of hundreds more available on the internet from various companies, cloud providers, et cetera, who offer their services via the OSB API. So the short answer is no, we don't maintain a list, because it's too big a task for our SIG. I do know most of the other major cloud providers, Google, SAP, et cetera, have some level of integration with the OSB API going on.

Q: I think the challenge is the authorization between the catalog and the service broker. Usually, if you consume a public service, you have to create a user ID or account, right? But when we're provisioning a service with svcat or Service Catalog in Kubernetes, there's no such account.

That is true. The API itself does have a mechanism to include more elaborate information, like billing information and so forth, and a lot of the major clouds do end up using that in some way. I don't really work on the closed-source side of things, but I know IBM Cloud specifically does it: we have integration that allows the billing information to get passed through the whole workflow, to the broker and back. I don't personally know how that happens, because I don't work on that. So hopefully that answers your question. If you want to come grab me after the talk, we can have a more in-depth conversation about that topic specifically.
Q: My question is, who is responsible for creating the service binding, the application developer or the cluster operator?

An application developer. Really, the only cluster-operator-restricted action is creating the broker, and that's usually because when you add a broker to the system, all the services that broker offers become available to, usually, everyone who accesses the cluster. Both the provisioning of the instance and the provisioning of the binding are usually up to the individual app developer. But neither of those happens frequently: you're going to provision a database, and unless you need to create a new one or perform a database migration, you're probably just going to let that instance run.

Q: So the developer can ask for as many resources as they want?

Technically, yes. Normally these brokers, like I said, have more involved billing systems, and information like the identity of the user flows through the system. So while in a naive implementation of a broker, yes, a user can provision as many resources as they want, in the real world with actual brokers, that's not really the case. Okay, thank you. Well, if we don't have any more questions, thank you all for coming.