Well, hello and welcome again to another OpenShift Commons briefing. This time we're going to be talking with Paul Murray again. He's one of the leads for the Kubernetes SIG on the service catalog work, and we've got a release candidate going out that he's going to give us an update on today. So ask your questions in the chat. We're going to try and do a Q&A at the end, and hopefully in a secondary session later this week we'll do some demos. But today there's so much detail we're going to run quite a bit long. So please indulge us, and Paul, why don't you introduce yourself and we'll kick it right off. All right. Well, hello everybody out there in internet land. I'm Paul Murray. I lead the service catalog effort at Red Hat, I'm one of the SIG leads in the Kubernetes community for service catalog, and I'm on the Open Service Broker API PMC. Today we're going to be talking about the Kubernetes service catalog: both an overview and refresher for people that haven't already seen this concept and aren't familiar with these things, and then also, much to my delight, the initial beta release of the upstream service catalog, which we have just done in the last couple of weeks, and what that means for OpenShift 3.7. We're going to start off with an overview of the Open Service Broker API, which I see I misspelled on the slide here, hooray. Then we're going to talk through the service catalog API in Kubernetes: what the key concepts are, what the main resources are, and how they work together. And then we're going to see a little bit of the service catalog in action in OpenShift 3.7. So the Open Service Broker API exists to fill this gap where users and applications need access to services and resources. 
And there are a number of different ways, both in terms of process in your organization and in terms of technology, used to provision and manage those things. If you have any experience working in a large organization, like a large company, you're probably familiar with this situation where you need a database created, for example. You have to create at least one help desk ticket, and sometimes you might have to create a number of different help desk tickets for different aspects of that process, and talk to a bunch of different parties to get these resources allocated for you. Then at the end, once everything is done, you get some information about how to use those things, like a connection string or credentials if we're talking about a database as an example. Those might come to you via email, they might come via someone writing it on a Post-it note, or any number of other ways that information might be given to you in your organization. So the value proposition of the Open Service Broker API and the Kubernetes service catalog is that the service catalog provides a central place for you and other users in your organization to request new instances of particular capabilities or services, and it also insulates you from the process around doing those things and the technologies in use to actually do them. That's important to me as a developer, because if I just want to get a database created for myself or my team, I personally would rather not know that there had to be a bunch of different tickets created for it. I don't want to track communication with different parties, and I definitely don't want a Post-it note handed to me at the end that tells me how to use it. So the service catalog is meant to centralize these things, and the Open Service Broker API is meant to define the interactions between the catalog, or platform, and the entities that actually provide those services, which we call service brokers. 
So to double down on that, because this is an important concept that I found there to be some confusion on in the Kubernetes community and the Open Service Broker API community, let me just repeat that again. The Open Service Broker API defines an HTTP interface between a platform and the entities that provide services, which we call service brokers. A service broker is the component of a service that implements the Open Service Broker API. You'll probably hear me say that again. If it gets repetitive, that is excellent; my goal is for people to be so familiar with this concept and to internalize it so deeply that it gets boring to them. So let's talk a little bit about this. You're the user. You're using the service catalog, and the service catalog is managing communication with one or more brokers. The brokers are the things that actually provide the services; the service catalog is how you interact with them, and to a great degree it shields you from even having to understand the Open Service Broker API. To reiterate: generally you do not have to program to the Open Service Broker API directly. You program to whatever interface the platform gives you, in this case the Kubernetes service catalog. So you're programming against, or interacting with, the service catalog, and the service catalog is the thing that actually communicates with the brokers. Now, a little history lesson on the Open Service Broker API. There are a number of different vendors present at the Open Service Broker API table in the working group. Obviously Red Hat. There's also, probably fairly obviously, a Pivotal presence because of the origin of the Open Service Broker API, which we'll talk a little bit about. But also Google, IBM, Fujitsu, and SAP, and actually, since I originally created this slide several months ago, a number of other vendors have started having folks attend, like Microsoft and occasionally Atlassian. 
So there's a number of different vendors, and the pool of vendors interested in this is growing, which I think is a good bellwether for us as folks implementing the API in Kubernetes and the community around it; it seems like adoption is going up. Like I said, the Open Service Broker API originated as the Cloud Foundry Service Broker API in 2011. The recent history of this API is that in mid 2016, the Cloud Foundry folks had users of Cloud Foundry coming to them and saying, we really like the service broker concept, and we want to use this in other platforms that aren't Cloud Foundry. Probably the biggest gorilla in the room in that scenario is Kubernetes, which has become extremely popular over the last three years. So in 2016 we formed a new working group for the Open Service Broker API, and later, in December 2016, announced that the Cloud Foundry Service Broker API had become the Open Service Broker API. Then this year we've had two really important releases, especially for the Kubernetes community, because they introduced new concepts into the Open Service Broker API that both generalize the API beyond its Cloud Foundry origins and add new capabilities that are important to everybody, both Kubernetes users and users of Cloud Foundry. So in June 2017, we made our first significant change to the Open Service Broker API and introduced this context profile concept, which is the first thing that began the decoupling of the API from Cloud Foundry specific concepts. The context profile is a way for a platform to send coordinates within that platform, in our case Kubernetes, about the context for an API request. We'll talk a little bit more about this, but it's very important for Kubernetes because we were able to begin sending the Kubernetes namespace instead of using the buckets for Cloud Foundry coordinates, which are called org and space. 
Later, in September 2017, we had another really significant release, where the three top things that are interesting and important to us are these. First, service brokers can now communicate parameter schemas for the services that they offer. There has always been parameters support for services, so you can specify particular parameters when you request that a service is created, but you really had to know about those parameters ahead of time. Parameter schemas are important because they allow the service broker itself to communicate to a platform that integrates with it what parameters it expects for certain operations, so as a platform author you can create a user interface that tells users what parameters they can set. The next one is originating identity. This is a way for a platform to send information to a broker, for a particular operation, that describes the actual user of the platform that's requesting that operation. This is really important to us, especially for use cases where brokers provision services into Kubernetes, and you're going to see we have a few different brokers at Red Hat that actually do this. It allows the broker to do things like check whether a user is allowed to do something, and to make those requests on behalf of the user. That matters because it allows us to use the user's quota and RBAC rules in Kubernetes to check, as a user, do I have permission to do this, and then also to allocate those resources against my quota. And then finally, we added, in a kind of roundabout fashion, support for additional auth flavors in the API. Before this 2.13 release, the only official auth method in the API was basic auth, and there are a number of significantly more secure flavors of authentication available to us. 
There are flavors we use in OpenShift already that you might be familiar with, for example token-based authentication, which you get as part of getting a Kubernetes or OpenShift service account. So this is another really important one for us at Red Hat because it allows us to use Kubernetes service accounts and their tokens to secure brokers, which is really, really important. I have a timeline here, and I've already kind of talked through this; it's mostly here for folks that might want to dig into some of these details. I'll go through it again for completeness, and also because it's kind of interesting to me. You can see that the Cloud Foundry Service Broker API began in 2011 at VMware, and initially there was just support for five fixed types of services: MySQL, Postgres, RabbitMQ, MongoDB, and Redis. This is really interesting to me, having worked on three different iterations of OpenShift, because it parallels the evolution of OpenShift. In OpenShift v1, we had a very fixed set of cartridges that you could use, and over the progression through v2 and v3 we added support for users to make their own. That parallels what happened in 2013 in the Cloud Foundry Service Broker API when they released v2, and there's now a very clean abstraction between the platform and the service implementation, similar to the concurrent evolution in OpenShift, where we moved from fixed types of things that you could do to more generalized things, and especially the ability to add new things as a user. Then in 2015, they added asynchronous provisioning to the Cloud Foundry Service Broker API. What this means is that the API was then able to describe to platforms that a broker has accepted a request to make a new service and the platform should call it back periodically. 
Instead of the service broker having to do the work within the lifespan of a single HTTP call, this is really powerful because it lets service brokers do a lot more work and opens up the possibility of creating services that require a significant amount of time to stand up. Then, as I've already talked about, in 2016 it was renamed to the Open Service Broker API. And in 2017, we've had a couple of releases that add a lot of things we're interested in in the Kubernetes community and at Red Hat, and that begin decoupling the API from Cloud Foundry specific concepts. So that brings us to the centerpiece of this briefing, which is the Kubernetes service catalog. You probably already have an idea of what this is, even if you haven't seen the Kubernetes service catalog before, because of what I've already talked about. But just to put a fine point on it, the Kubernetes service catalog is an integration between Kubernetes and brokers that implement the Open Service Broker API. Another little history lesson here: we formed a SIG in the Kubernetes community around this area in September 2016. Later, in October 2016, we created the incubator repository. There's a link there; it's in the Kubernetes incubator organization and it's called service-catalog. That was a really interesting and eye-opening experience for me for a number of different reasons that I'll probably sprinkle into the rest of this presentation. But one of the foremost things that I'll mention, and I'll definitely mention this again, is that the service catalog is really one of the first provings-out of the concept of making an extension to Kubernetes which has an architecture similar to Kubernetes itself. And something that was a challenge for us in this group is that the SIG originally was populated with folks that did not have a deep background in Kubernetes. 
So it was a really eye-opening experience trying to explain and evangelize the concepts that we wanted to converge on at Red Hat, with this Kubernetes-like architecture and the API aggregator that I'll be talking about in a moment. In March 2017, we released our first alpha release of the service catalog, the first of, if I remember correctly, 23 alpha releases. This was a really big hump for us to get over, because we had converged on this architecture that we were proving out and had the basics of the integration between Kubernetes and the Open Service Broker API implemented. It wasn't complete, but it was enough for folks to use and play around with. We actually shipped a later alpha version in OpenShift 3.6 as a tech preview feature, which I'm sure some of you have used in detail and others have maybe played around with. Fast forward to October 2017: we released our first beta release of the service catalog, and this is a really important milestone in the Kubernetes community, because when you call something beta, it means that you have to maintain API backward compatibility with it. That's important for users in the sense that a lot of users won't touch things that are alpha, because alpha could be quicksand; it could change underneath you. That's the purpose of the alpha designation: to mark things that are still really in the beginning of their development. But it also means that people with a high tolerance for change are going to be most interested in those things. Beta is very significant to us at Red Hat also because it means that we can take off the tech preview designation in OpenShift 3.7 for the service catalog, which is important for us and for users, because users know that there's going to be backward compatibility with future API changes. So the primary contributors are players that I've already mentioned a little bit. 
Red Hat obviously is one of the main contributors of code to this repository, and Google, Atlassian, Microsoft, and IBM have also made contributions. So if you're watching this and you're somebody that's interested in this, I will just take advantage of the fact that I have the bully pulpit at the moment and say I would love to have new contributions and contributors from other organizations. We try to be a friendly bunch in the SIG, and it's also a good way to learn about Kubernetes without some of the overload effect that you might have if you go and try to look at the main Kubernetes repository; it's a lot smaller than kubernetes/kubernetes. So, alpha to beta: what changed? A lot of things, it turns out. From a feature standpoint, we now have support for token-based authentication and enhanced TLS support for communication with service brokers, which is important from a security standpoint. We now have support for the originating identity feature that's in the Open Service Broker API, which is important, especially to us at Red Hat, since we have a lot of brokers that provision resources back into Kubernetes and OpenShift. And finally, we've completed the implementation of the Open Service Broker API in Kubernetes by adding support for plan and parameter updates for instances of services. So if I make a service instance, and that service supports plan updates, like going from a bronze plan to a gold plan, I can now do that in the service catalog. I can also change the parameters of an instance that I've already provisioned, which is really important to users for a number of different reasons. From an API standpoint, the high notes are that the catalog API works with the Kubernetes API aggregator, just to revisit that concept, and we'll revisit it again. 
But to put a fine point on it for now, the API aggregator is a capability built into the main Kubernetes API server that allows you, as a cluster operator, to register new API servers that provide new API groups and resources with Kubernetes, and allows your users to use those things just as if they were part of the main Kubernetes API server. That's a really fundamental capability in the story that we expect to be dominant for how you extend Kubernetes in the future, and we'll talk a little bit more about it once we get into the API details. We've also had significant refinement of the API resources in terms of what the Kubernetes way of doing things is. I think it will be pretty clear what I'm referring to once we start looking at examples of these resources, but we spent a very significant amount of time talking to other SIGs in the Kubernetes community, like SIG API Machinery, SIG Auth, and SIG Architecture, about how to make the service catalog API in Kubernetes look and feel and work just like any other API that's part of the main Kubernetes repository. This next thing is kind of a sub-bullet of that refinement of the API: the resource status for the main API resources that users interact with now has very detailed information about completed and in-progress operations that the service catalog is performing against the broker. There were some unique challenges that we had to get over due to the different styles of the Kubernetes APIs and the Open Service Broker API. But the end result is that as a user, you can now take a look at one of your service instances and get a much better idea of what is actually happening currently, and what the last thing to complete that the broker knows about is, which gives you a much better idea on the ground of exactly what state the thing is in. 
And then last but certainly not least, we've vastly improved the error handling in the service catalog, so we now correctly handle operations that time out or fail. We also correctly handle corner cases, like if a user creates a new service instance and then deletes it while it's still being provisioned; we'll handle that correctly now, where that was something that wasn't especially mature during the early alpha releases. So, service catalog API concepts. Here's where we get into the nitty-gritty details of exactly what this API looks like in Kubernetes. We're going to first talk, very briefly, about the operations that are part of the Open Service Broker API, because the API resources we're going to talk about mirror these things to a very great degree. The Open Service Broker API provides five fundamental operations. A broker can tell you which services it offers. There's a provision operation, where a new instance of a service is created and new resources are allocated. There's a bind operation that creates resources to allow applications to communicate with an instance of a service. And those last two have symmetrical operations where they get deleted: when you delete a binding, it's called unbind, where the binding is removed, and when you delete a service instance, it's called deprovision, where the resources associated with that service instance are deallocated and the instance is deleted. When we talk about these concepts in Kubernetes, let's put a very specific fine point on these things. A service broker is something that manages a set of capabilities, which we call services. A service class is a particular capability managed by a service broker, and the canonical example that most people can relate to is a service class might be a database as a service. Service classes have plans, which are a specific tier or offering of that service. 
So for example, our database as a service might offer a free tier, where maybe you get a table space in a multi-tenant database where your data is stored alongside other folks' data, and you might have acceptable performance characteristics for development. And there might also be a medium or a gold tier that you have to pay for, where maybe you get a dedicated database with nobody else's data living in it and better IOPS performance. So plans are basically levels or tiers of a service. A service instance is an instantiation of a particular service's capability. The example for that is, when I provision the database as a service, I get my database that I can use as a result. And a service binding is a relationship between a service instance and an application. Following our database as a service example, a binding would be credentials created in the database that I can use, and the output of that, to me, is a secret that stores those credentials so I can use them in my Kubernetes pods. An application, for the purposes of this discussion, is just code that will access or consume a service. There's no limit on which types of constructs in Kubernetes can consume bindings; basically, anything that runs as a pod, whether that's a replica set, a deployment, a job, et cetera, can use the secrets that hold credentials for bindings. We just call those things applications for simplicity when we talk about this concept in the service catalog. So, as I've hinted at before, the service catalog has an architecture that's very similar to Kubernetes itself. There's a service catalog API server, and the API server is the entity that exposes the REST operations for the different API resources that we're going to talk about, and persists those things into a data store, which is etcd. 
The API server is backed by a controller that is watching the API server and responding to the events that the API server sends out about the resources it stores. The controller is the entity that actually implements the behaviors of the service catalog API. In this case, the controller is doing the work of communicating with the brokers and saying to a broker, we want to provision a new instance, or we want to make a new binding to this instance. We're going to see a little bit more about this later in the deck. Here's a pictorial representation. You can see on the left-hand side of the screen this box for the CLI; you can think of this as being any client. It might be a user using the CLI, it might be a program written against the service catalog API, it might be the OpenShift console, but clients speak to the API. For simplicity's sake in this picture we've omitted the API aggregator, but they speak to the API by going through the API aggregator. The service catalog controller has established watches on the API server and implements the behavior of talking to the brokers that are registered in the catalog. It then persists changes back into the service catalog API server, for example to update the status of the different resources, and also persists information about bindings into the Kubernetes API using a secret resource. When we talk about the API resources of the service catalog API, there are five of them, and if you saw our last Commons briefing on the service catalog, you'll notice that this number climbed up by one, which I will talk about. There are basically three resources that are mostly for cluster operators, and two resources that end users use. The ones for operators are the cluster service broker resource, cluster service class, and cluster service plan. These are cluster scoped, meaning they live outside of a namespace. 
These are for cluster operators to use to register a broker into the catalog. The controller then creates the cluster service class and cluster service plan resources to model the services in the catalog payload that it gets back from the broker. The service instance and service binding resources are how the user creates new instances and bindings to service instances. Let's drill into cluster service broker. This resource represents a particular broker that the service catalog should show services and plans from. You can see that this resource's spec, and I think, yeah, you should be able to see my mouse, has a URL where the catalog should contact that broker, and it also has auth information, where you can reference a secret that has an authentication token to communicate and authenticate with that broker. Now, I didn't have space to show it on this slide, but you can also give a particular CA bundle that you want to use to verify the broker's TLS, and you can also specify basic authentication using a secret that holds a username and password, if you need to use an existing broker that only handles basic auth. But this is what the cluster operator uses to register a new broker into the catalog. So let's look at a picture. This is a little out of date, but I think it will still be very clear; just the resource names have changed, and there's a new resource that isn't captured here, which I'll talk about. In step one, the cluster operator makes a cluster service broker resource and creates it in the service catalog API server. The catalog controller gets an event saying that the new cluster service broker resource was added, and it goes and invokes the catalog endpoint at the actual broker, which is running either in the same Kubernetes cluster or potentially somewhere out on the internet. 
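To make the registration step concrete, here is a minimal sketch of what a cluster service broker resource along those lines might look like. The broker name, URL, and secret names are hypothetical, made up for illustration; the field layout reflects the v1beta1 service catalog API as described here, so check the upstream examples for the exact shape.

```yaml
# Hypothetical ClusterServiceBroker: registers a broker with the catalog.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: example-broker            # cluster-scoped: no namespace
spec:
  # URL where the catalog controller should contact the broker.
  url: https://example-broker.example-ns.svc
  # Reference a secret holding a bearer token used to authenticate
  # to the broker (basic auth via a username/password secret is also
  # possible, as mentioned above).
  authInfo:
    bearer:
      secretRef:
        name: example-broker-auth
        namespace: example-ns
```

Creating this resource is what kicks off the flow described here: the controller sees the new broker and fetches its catalog.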
And it gets back a payload from the broker that contains information about the services that broker offers and their plans. The service catalog controller transforms that information into the cluster service class and cluster service plan resources and persists those back into the service catalog API server. So at the end of this flow, there's a broker resource, and there are cluster service classes and cluster service plans that describe the services that users can use from this broker. Let's look at cluster service class and cluster service plan really quick. The cluster service class, as I've said, represents a particular offering in the catalog. One thing that you will notice, if you used the alpha version of the API, is that the name here is actually an ID. You'll also notice down here that we have these external ID and external name fields in the spec. So the first difference you'll notice coming from the alpha version of the API is that the name of this resource is not currently human readable, and also that the plan information is now in a separate resource. To talk through that really quickly: we had to adopt this naming strategy because the Open Service Broker names of services are actually mutable, and Kubernetes names of resources are not mutable. We spent a lot of design time talking about the best way to handle this, and the simplest thing that we could do in the SIG and the incubator repo, to hit our goals for getting a beta release out, was to use the immutable IDs from the Open Service Broker API for now. But we let users specify what service and plan they want using the human-readable names. In the future, we'll probably add a naming strategy to the service catalog that uses the readable names and handles name changes, but unfortunately we just didn't have time to do this and still hit the targets that were important to the folks working in the incubator repo. 
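A sketch of the two resources the controller creates from the broker's catalog response might look like the following. All names, IDs, and descriptions are invented for illustration; the point is the naming strategy just described, where the Kubernetes name is the broker's immutable ID and the human-readable name lives in external name.

```yaml
# Hypothetical ClusterServiceClass created by the catalog controller.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceClass
metadata:
  # The Kubernetes name is the immutable broker-assigned ID,
  # because Open Service Broker service names can change.
  name: 4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
spec:
  clusterServiceBrokerName: example-broker
  externalID: 4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
  # The mutable, human-readable name that users actually refer to.
  externalName: user-provided-service
  description: An example database-as-a-service offering
  # Whether instances of this service can change plans later.
  planUpdatable: true
---
# Hypothetical ClusterServicePlan for the class above.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServicePlan
metadata:
  name: 86064792-7ea2-467b-af93-ac9694d96d52
spec:
  clusterServiceBrokerName: example-broker
  clusterServiceClassRef:
    name: 4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
  externalID: 86064792-7ea2-467b-af93-ac9694d96d52
  externalName: default
  description: The default tier of the example service
  # Whether there is a monetary cost associated with this plan.
  free: true
```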
Let's take a look also at, I'm sorry, I skipped over something. The last thing on the slide: the spec's plan updatable field describes whether instances created of the service can change plans. Not every service supports this, but some services do support changing from, say, a bronze plan to a gold plan. Excuse me. So, looking at cluster service plan: this represents a particular plan, and this is a plan for the example cluster service class that we just showed. It has a description, and it's also got these external ID and external name fields, and it's a similar situation with these: we had to use the Open Service Broker ID as the name of these resources for now, because the names can change. You'll also see here in the spec a free field that indicates whether there's a monetary cost for this plan. So that rounds out the resources that are created by the cluster operator, and then by the catalog in response to the cluster service broker resource being created. Let's talk about the service instance resource, which is the central resource that users interact with to create new instances of these services. The first thing that I want to point out here is that the service instance, you'll notice, has a namespace. Service instance and service binding are namespace scoped; the other resources are cluster scoped. In the future, we will probably have namespaced versions of the service broker, service class, and service plan resources, so that users can set up a broker just in their namespace without exposing that broker to the rest of the cluster, or having to have permission to create cluster-level brokers. So let's dig into this thing. As I said, users use the human-readable names to specify what service and plan they want to make an instance of. So you'll notice here there's a field in the spec called cluster service class external name. 
Its value is user-provided-service, and there's another one down here called cluster service plan external name, and its value is default. When the user created this resource, they just filled out those fields: the user-provided-service service and the default plan. The service catalog controller resolves those to the actual Kubernetes names and sets these reference fields, so that it can translate between what the user wants and the coordinates of those things. Below that, you'll see an external ID field. This is generated by the service catalog API server, and it's the ID of the service instance that's actually used to communicate with the Open Service Broker API. Below that, you'll see that there's a parameters field, and you can specify parameters either inline, like we've done here, and I realize now that the credentials bucket that's part of the parameters is very poorly named. You should not put credentials inline in a service instance. You should instead reference them from a secret, which I didn't have the space to show here, but that is another thing that you can do. We always put parameters into a secret when you use the OpenShift console, so that sensitive information doesn't go into a resource that doesn't have the right protections on it. But before we get too lost in the details, let's talk about the workflow when I create this resource. As a user, I create a new service instance resource in the API server, and it gets that external ID generated for it. The controller gets an event saying that the service instance was created. It resolves the references to the actual service class and plan, and then it talks to the broker and says, we want to provision a new instance of this service, with this plan, with this ID. 
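The service instance resource driving this workflow might look something like the following sketch. The instance and secret names and the parameter values are hypothetical; it shows both inline parameters for non-sensitive values and the secret-reference style recommended above for sensitive ones.

```yaml
# Hypothetical ServiceInstance: a user's request for a new instance.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-database
  namespace: my-project         # namespace-scoped, unlike the cluster resources
spec:
  # Users fill in the human-readable external names; the controller
  # resolves them to the actual class and plan resources.
  clusterServiceClassExternalName: user-provided-service
  clusterServicePlanExternalName: default
  # Inline parameters are fine for non-sensitive values...
  parameters:
    size: small
  # ...but sensitive values should be referenced from a secret instead,
  # as mentioned above.
  parametersFrom:
    - secretKeyRef:
        name: my-database-params
        key: parameters
```

The external ID field isn't shown because, as described, the API server generates it when the resource is created.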
The broker then goes and does the work to allocate that resource. It might use the old synchronous style, where the service broker can just do the work in one HTTP call, or it might tell the catalog controller: hey, I've accepted your request, call me back and see if I'm done yet, and keep calling me back until I'm done. The service catalog controller handles that coordination and ultimately updates the status of the service instance to let the user know what happened: what parameters the broker knows about, what plan the broker knows about, and so on, so that the user understands what state the broker has for that service instance. There are a couple of slides here about asynchronous provisioning. I'll pause on these so that you have a chance, watching the video, to drill into the details. I'm running a little short on time, but this slide is fairly self-explanatory, so I'm going to skip over it and go to service binding. A binding is a relationship between an application and a service instance. You'll notice this resource is also namespaced. It's got a generated externalID field, just like the service instance does, and that's the ID used in Open Service Broker API communication to refer to this Kubernetes resource. The other parts of its spec are a reference to the instance that the binding is to. Currently you can only refer to a service instance in your own namespace, but in the future it's very likely that you'll be able to make a binding to a service instance in another namespace that you've been granted access to. And finally, the last part of the spec is the name of a secret into which the binding credentials for this service binding will be injected. That secret is created by the service catalog controller and will ultimately hold the credentials for that service instance. So let's take a look at our picture again.
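The binding spec just described can be sketched like this, again with illustrative names:

```yaml
# Illustrative ServiceBinding; instanceRef must point at an instance
# in the same namespace (cross-namespace bindings are a future item).
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: ups-binding
  namespace: test-ns
spec:
  instanceRef:
    name: ups-instance
  # The controller will create this secret and fill it with whatever
  # credentials the broker returns at bind time.
  secretName: ups-binding-secret
```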
The consumer creates a new service binding in the API server. The controller detects that new binding; it talks to the service broker that offers that service and says: let's make a new binding. The broker does the work and hands credentials back to the service catalog controller, and then the catalog controller creates a Kubernetes secret with the binding credentials and updates the ServiceBinding resource status to show what happened. When you want to remove these things, it's very easy: you just delete the Kubernetes resources that correspond to the thing you want to remove. If you want to unbind, you delete the service binding you want to get rid of, and the service catalog controller does the same kind of thing, where it tells the broker to unbind that particular binding. And when you want to deprovision an instance of a service, you do the same thing: you delete the service instance, and the catalog controller handles talking to the broker about that. Now, one thing I'll mention: if you delete a service instance that still has bindings, you have to delete those bindings before the controller will deprovision the instance. If you try, you'll see a piece of status information that says: I can't deprovision this yet because there are still active bindings; you have to go in and clean up the bindings before we'll send the deprovision. I'm going to skip over these next two slides; since we're running low on time and they're pretty simple concepts, we'll just pause on them so you can see them in the video. Now let's talk about the service catalog and OpenShift 3.7. In OpenShift 3.6, as I said, we shipped the service catalog with a tech preview qualification. That is not present anymore: the service catalog will be enabled by default in new OpenShift 3.7 clusters.
And we have a number of different brokers that we've developed at Red Hat, like the template service broker that uses OpenShift templates, the Ansible Service Broker, which leverages Ansible Playbook Bundles to automate the creation of complex services, and the EnMasse service broker, which provides messaging-as-a-service capabilities. What's really exciting to me is that there's been a lot of work in the console to enhance the service catalog experience, so you get really nice icons for services now if the broker communicates back which icon to use. This is a pretty central part of the landing page, and there's been a lot of work to incorporate all of the API constructs around status that I referred to into the console, to reflect back to users what is going on with a service that they've provisioned. So let's take a look at some screenshots of what it's like to provision a service in the OpenShift console. Say we're provisioning a Postgres instance from the Ansible Service Broker: you can see what image dependencies it's got, and you can click through and choose a plan. Once you choose that plan, you can provide some parameters, and these are driven off of the parameter schema that I referred to earlier in the Open Service Broker API. Just to drive home exactly how important that is: it allows us to build a user interface that can tell users what knobs they have to set. So we're going to pretend we click next here, and then we can create a new binding in our project to that service instance. It will be created by the controller like we just discussed, and it will contain the coordinates and credentials that we need to use that service in a pod somewhere. I don't know about the folks watching this call, but it's really awesome to me to think about all the different possibilities this enables and makes easy to use via the OpenShift console.
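Once the binding's secret exists, a pod can pick up those coordinates and credentials without ever knowing how they were provisioned. A minimal sketch, assuming the binding wrote its credentials into a secret named ups-binding-secret (the image name is hypothetical):

```yaml
# Illustrative pod that consumes the binding credentials as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: test-ns
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:latest   # hypothetical image
      envFrom:
        # Every key in the binding secret becomes an environment variable.
        - secretRef:
            name: ups-binding-secret
```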
I'm really impressed with the work that Red Hat UXD and the OpenShift UI team have done for this stuff; really cool stuff. And that is basically the end of it. There are some resources here in the slides for the Open Service Broker API, the service catalog repository, and information about the SIG. I'm going to switch back to BlueJeans now, and I see things in the chat. There are a couple of questions. Going back to the EnMasse question: someone was asking whether the EnMasse service broker was going to be available in the launch of OpenShift 3.7, and I believe so, but I was wondering, you're sitting in the middle of the engineering team, so maybe you know better. I'm going to have to double-check on exactly what's going to be available in 3.7 for EnMasse. We are going to record a demo, probably tomorrow, so that you can see it, and either in that demo or at some point in the blog post we can provide some information about that. Yeah, so we'll post the slides for this talk in a blog post, and tomorrow or Friday we're going to record the demos of all the OpenShift 3.7 capabilities with the service broker and include that. Someone is asking whether APB catalog items will potentially be part of 3.7. I believe the answer to that question is yes. I think there are a lot of yeses here; there's a lot of work that's gone into it. So, because of the nature of having to set up this whole conversation, this session has run pretty long, and I really do want to get the demos out there. I've asked Paul to record them separately from this, and we will post those on blog.openshift.com, hopefully by the end of this week, and we'll tweet that out and send it to the mailing list.
As well, Paul is going to be with me and a whole bunch of other OpenShifters on December 5th in Austin, Texas, the day before KubeCon, talking about the service broker, and he'll be there and available to answer questions as well. So if you haven't registered yet for that event, please do so. It's going to be a lot of really interesting conversations with upstream project leads and people who are deploying OpenShift, and we hope to have you all there with us. That really brings us to the end of the hour. Paul did a great job giving an overview, and we will post this as soon as possible, and I will make an announcement on the mailing list and on social media as well. We'll probably record the demos live too, with a little Q&A, so if you want to join us for that, I'll make that BlueJeans session available on the OpenShift Commons events calendar, which is at commons.openshift.org/events.html. John's asking one last question: is it going to include AWS as well? That's a really good question, John. The AWS services are on a slightly different timeline that I don't have the specifics on in front of me, but you'll definitely be hearing more about them in the future. Diane, can I say one extra thing here before we sign off? I just want to say thank you to everybody watching who has contributed to the repository in some way, whether that's code or user feedback. That's really important to us and it's very sincerely appreciated. So thanks a lot for watching and thanks a lot for contributing. And to the AWS question, John, David Duncan from AWS will also be at the OpenShift Commons Gathering, and I'm pretty sure that's what he's going to be talking about. So please do join us there. There'll be a lot of good content that will get you up to speed. So that's what we have, and we'll sign off for now.