Hello everybody and welcome again to yet another OpenShift Commons briefing. Today we have two Red Hatters with us, Paul Morie and Andrew Block. Paul has been working diligently on Kubernetes and on the Service Catalog, and Andrew's been deploying it and working on getting ready to deploy it in different places. And we thought, since Kubernetes 1.7 is out the door, this would be a good time to do an update and an overview on the Service Catalog. So without any further ado, I'm going to let Paul do his presentation and Andrew add commentary, and there'll be Q&A at the end and you can ask questions in the chat. So take it away, Paul. Okay, well Andrew, you usually like to start this deck off. Why don't you take it to start with? Yeah, sure. Thanks Paul. So as Diane mentioned, my name is Andrew Block. I'm a principal consultant with Red Hat Consulting, and I specialize in Red Hat's cloud and integration solutions and work primarily with our customers to adopt and implement these technologies within their organizations. And I'm joined, as we mentioned, by Paul Morie, who can give an introduction to what he is currently doing in the Kubernetes ecosystem. Paul? Yeah, so I work on Kubernetes at Red Hat, and most recently I've been leading the Service Catalog Special Interest Group, or SIG, in the Kubernetes community. In addition to that, I am on the working group committee for the Open Service Broker API, and in general I track all engineering things related to the catalog at Red Hat. Awesome, thanks Paul. So during today's session we're going to talk about a number of different topics. First we're going to start by talking about some of the common patterns that are utilized for service creation and consumption and the desire to have a more streamlined process. We'll then introduce the Open Service Broker API and talk about its history and how it relates today.
Then we'll discuss the efforts that are currently ongoing inside the Kubernetes community to consume the Open Service Broker API. Then we'll start diving into the technical details of the Kubernetes Service Catalog, including the core concepts and how to work with the catalog. And once we address some of the theoretical concepts behind the Service Catalog ecosystem, we'll discuss some of the implementations that have been derived from the Service Catalog within the OpenShift ecosystem. And then finally we'll highlight some of the features that will be available with the release of OpenShift version 3.6. Next slide. So when creating applications, developers typically need access to some form of service to which they can connect their applications. This typically takes the form of a database that provides backend storage or a message queue that can be used to connect multiple dispersed systems. It may also include resources within an existing software-as-a-service solution. Now, I work with many organizations, both big and small, and one of the common challenges that I hear from application teams is the effort they need to work through to make services available to them. These systems are typically not maintained by the application team, so in most cases they need to make some form of request for the service to be made available to them, such as, as I mentioned earlier, the creation of a database or a queue within a messaging system. But then once those are provisioned, how can they make use of the new service? So, next slide. The typical workflow that I see being used by most of my customers is that the application teams need to submit some form of help desk ticket to provision the resources. This may be in a ticketing system such as ServiceNow, for example; I see that at many of my customers.
The team responsible for the service will then eventually acknowledge the request based on some form of SLA that has been defined within the organization. That team will then allocate the resources and relay the details back to the original requestor. Now, in many cases this whole process may involve the interaction of multiple teams and require multiple approvals along the way. And heaven forbid a portion of that request was submitted incorrectly, as that will only exacerbate the time it takes to make the service available to the application teams. I've been with many customers where getting what you might think of as a simple resource made available for them to start using can take anywhere from a week to, depending on the complexity and the number of approvals required, up to a month. And in today's world, that's a long time. Next slide. So wouldn't it be nice if there was some sort of central location where providers of a service could make their assets available to consumers, and where consumers had a process by which they could manage the lifecycle of services, possibly through some sort of simple interface or dashboard where users could be self-sufficient? Now, Red Hat has a product called CloudForms that can be used to provide some form of catalog of services. However, the difference here is that we want a standardized process, and that's really what the Open Service Broker API set out to do. Next slide. There are two primary parties involved in the Open Service Broker API. First is the service catalog, an endpoint that acts as an intermediary between a platform and the entities that provide sets of services, which are better known as service brokers. The catalog listens for requests and executes actions on behalf of the related brokers. The service brokers themselves are the components that implement the Open Service Broker API. Next slide.
So if we look at this interaction in practice, users interact with the service catalog to view all of the available services whose brokers have registered themselves with the platform. As users interact with the catalog, the requests for provisioning services are carried out by the brokers, and ultimately the services are made available to them. Next slide. So the Open Service Broker API has come a long way since the concept of the service broker was open sourced by VMware in the Cloud Foundry ecosystem in 2011. At that time it mostly exposed a fixed set of data services. This includes MySQL, Redis, MongoDB, RabbitMQ, and Postgres, just to name a few. Now, since then two major milestones have occurred. In 2013 the entire platform was rewritten and released as version two, which extended the flexibility of the platform. And in 2015, asynchronous processing was incorporated to decouple the catalog from the broker during long-running operations, such as provisioning. It sometimes takes a few minutes to provision a new database or new service, so that asynchronous processing really helped. And then last year, in 2016, the Open Service Broker API specification was officially released, paving the way for additional platforms outside the Cloud Foundry ecosystem to make use of the specification. Next slide. So just to recap from the prior slide, the Open Service Broker API is the successor of the Cloud Foundry service broker, and it brings along the entire existing broker ecosystem along with the API itself. And to expand upon some of the events that occurred last year in 2016: the working group for the Open Service Broker API was formed in September, and the public announcement occurred in December. Next slide. The Open Service Broker API currently has contributions and representation from a number of companies. These companies include Google, Pivotal, IBM, Fujitsu, and of course Red Hat, who are all making a lot of strides on this effort.
And if anyone is interested in browsing the specification, it's available on GitHub. I highly encourage those interested to take a look at the content. Next slide. So now that we've touched upon the Open Service Broker API, let's talk about its application in Kubernetes. This is what the Kubernetes Service Catalog Special Interest Group, or SIG, is working on. The Open Service Broker API is the interface, and the Kubernetes Service Catalog is the implementation: it is the integration between Kubernetes and brokers that implement the Open Service Broker API. Now, the history of the SIG parallels many of the efforts of the Open Service Broker API. The SIG was formed in September of 2016, one month after the Open Service Broker API working group was formed, and the project was added to the Kubernetes incubator in October. The service catalog celebrated its alpha release in March of this year, and it looks forward to a beta release in the near future. But honestly, this short little overview pales in comparison to the amount of work that has been done on this effort. So I'm going to go ahead and turn the floor over to Paul, the lead of the Kubernetes Service Catalog SIG from Red Hat, for additional commentary and information from his perspective. Paul? Thanks a lot, Andrew. So we're going to talk through some foundational concepts in the Open Service Broker API and then how they relate to the Kubernetes API that we have for the service catalog. And then we're going to take a look at how we actually use those resources to express the things we want. One thing that I want to note before we get started is that the service catalog is currently in an alpha state, and as part of going to beta we'll probably be making changes to the names of some of these resources and terms as they relate specifically to the service catalog. So do keep in mind that the names for some of these things might change slightly.
So the foundational concept of the Open Service Broker API is really the service broker itself. The service broker is an entity that manages a set of capabilities that are called services. So, for example, a service in this lexicon might be something like a database as a service. Services have plans, which are specific offerings or tiers of that service. So, for example, our database as a service might have different tiers starting with free and going up to medium and large, which you might have to pay for. When we instantiate a particular service's capability, we create what's called a service instance. So, continuing our database-as-a-service metaphor, when we instantiate the database as a service, we get an actual database that we can use. A relationship between a service instance and an application is called a binding. So, for example, continuing our database-as-a-service metaphor, a binding might equate to credentials being created in the database that our instance represents and being returned to the user so they can use it in a consuming application. And just for the record, application here is meant as a very general term that just describes code that will access or consume a service. So, for example, a web application that requires database credentials we can think of as an application. There are really only five fundamental operations in the Open Service Broker API. The first is catalog management, and this is an instance where we have a slightly inconvenient overload with some of the other names that are in the mix. When we say catalog management here, what we refer to is that the Open Service Broker API has an endpoint that returns the list of services that a particular broker offers. The provision endpoint is how you access the facility for creating new instances of a service.
The bind endpoint allows you to create new bindings. There's also an unbind operation, which uses delete with the same endpoint and removes a binding, and a deprovision operation, which uses delete with the provision endpoint and removes an instance. So the architecture of the service catalog is very similar conceptually to the architecture of Kubernetes itself. There's a Kubernetes service catalog API server that exposes the Kubernetes API to access these concepts, and it is backed by a controller that handles the communication with the different service brokers that are registered in the catalog and implements the state reconciliation pattern to move the current state of the resources that we requested towards the desired state. So, in addition to using the same fundamental concepts of API server and controller, the service catalog is meant to be used behind an API aggregator that presents a single unified endpoint, allowing users to use the core Kubernetes APIs, the service catalog API, and an arbitrary set of other API servers that might exist as if they were a single endpoint. One notable thing about the service catalog is that it has a pluggable data store. It can use etcd; it can use third party resources, although we do not recommend that. In the future, the service catalog API server will be able to use Kubernetes custom resource definitions as a data store. So let's take a look at the high level architecture so we can visualize some of these things. You can see on the left the CLI; you can think of that as being either the CLI or any client of this API. It talks to the Kubernetes service catalog API server. The service catalog controller is watching the API server and getting events that indicate that things have happened to resources in that API server that the controller should process. And then you can see also that the catalog controller speaks to all of the brokers in the system.
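The five operations Paul describes map onto endpoints in the Open Service Broker API specification roughly as follows (paths as in version 2.x of the spec; the `:instance_id` and `:binding_id` placeholders are the GUIDs assigned by the platform):

```
GET    /v2/catalog                                                        # catalog management
PUT    /v2/service_instances/:instance_id                                 # provision
DELETE /v2/service_instances/:instance_id                                 # deprovision
PUT    /v2/service_instances/:instance_id/service_bindings/:binding_id    # bind
DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id    # unbind
```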
One thing I want to emphasize here is that the broker is the component that manages a set of capabilities; the service catalog is the integration between Kubernetes and those brokers that uses the Open Service Broker API. So it's important to make a distinction between the service catalog and a particular broker. As you can see, there are multiple brokers that you can register into the service catalog, and its job is to handle the communication with the correct broker for the service that the user wants to work with for a particular instance or binding. The catalog controller is also responsible for talking to the primary Kubernetes API, which it needs to do for things like ensuring that namespaces and namespaced resources exist, and for manifesting the results of bindings back into Kubernetes. So the model that we have for the Kubernetes service catalog API is very simple. There are only about four resources, although we've recently decided in the SIG that we'll probably split one of these, which I'll talk through as I get to it. So, the current API resources that we have are: the broker resource, which is fairly self-explanatory and represents a broker that an operator wants to consume in the catalog; the service class resource, which represents a service and its plans that a particular broker offers; the instance resource, which represents the intent to provision a new instance of a service; and the binding resource, which represents the intent to bind an application to an instance of a service. So let's take a look at an example of what a broker resource looks like. This is a simplified example and there are some additional fields that are omitted, but the main point is that there's a URL field, and that is the endpoint that the catalog should use to talk to the broker. One thing I want to note about this is that the broker resource is non-namespaced, so all brokers that are registered into the catalog currently are global.
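Since the slides aren't reproduced here, a broker resource along the lines Paul describes would look roughly like this. This is a sketch of the alpha API (Paul notes the names may change before beta), and the broker name and URL are invented for illustration:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: example-broker          # non-namespaced: brokers are global to the cluster
spec:
  url: http://example-broker.brokers.svc.cluster.local  # endpoint the catalog talks to
```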
Here's an example of a service class resource. If you take note of the parenthesized numbers, you can see some important things about this. The service class has a reference to the broker that provides it. It also has a field that says whether the service is bindable. Not all services are bindable, or make sense to use with a binding; most of the ones that immediately come to mind are things that you'd want to bind to, but not everything is bindable. The service class has an array of the service's plans, with some information about each plan. For example, it's tough to show on a slide like this, but one of the things that a broker can provide about a plan is a schema for the parameters that plan accepts. I also want to note that service class is a global resource. Here's an example of an instance. Remember, an instance represents an intent to provision a new logical instance of a particular capability, and I do want to call out that the instance resource is namespace scoped. So just to make this clear: broker and service class are global; instance and, as we'll see later, binding are namespace scoped. The instance's spec has a reference to the name of the service class that it is an instance of and which plan of that service class the instance is currently on. There are also parameters, and I want to call out that the YAML for the parameters will very likely change, since we recently made a decision on that which I'll talk through a little later on. The list of parameters is optional, but they're frequently used to change attributes of the service instance that's provisioned. Here's an example of a binding. Like instance, this one is also namespaced, and the binding spec contains the name of the instance within that namespace that the binding is to. It contains the name of the secret that the binding result should be manifested into, and it also contains an optional list of parameters.
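For readers following along without the slides, an instance and a binding in the alpha API would look roughly like the following. These are sketches: the kinds and field names reflect the alpha release, which Paul notes was expected to change before beta, and the resource names and parameters are invented for illustration:

```yaml
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  name: my-database
  namespace: my-project          # instances are namespace scoped
spec:
  serviceClassName: example-db   # which service class this is an instance of
  planName: free                 # which plan of that service class
  parameters:                    # optional; the YAML shape here was in flux pre-beta
    databaseName: orders
---
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: my-database-binding
  namespace: my-project          # bindings are namespace scoped
spec:
  instanceRef:
    name: my-database            # instance in the same namespace to bind to
  secretName: my-database-credentials  # secret the binding result is written into
```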
So now that we've talked through what these different resources are, let's talk about how we use them to express what we want. As a prerequisite, we need a Kubernetes cluster at least at the 1.6 level. The important things about 1.6 are that in 1.6 we added initial support for the API aggregation feature that I spoke about earlier, and the kubectl client, which the oc client is based on, added really good generic resource support. Prior to this, kubectl and oc were only able to show rich descriptions of resources that there was code compiled into the client for, and in Kubernetes 1.6 the generic resource support made it possible to have a much more native experience when working with new API servers that aren't known to the client already. So in addition to the Kubernetes 1.6 cluster or OpenShift 3.6 cluster, you need to have the core infrastructure deployed, which is the service catalog API server and the service catalog controller, and the API server has to be configured with a data store. In OpenShift 3.6, the oc cluster up tool can do this for you, and there will be installer support that allows you to set up the catalog when you create a new OpenShift cluster. So the first thing that needs to be done once the catalog is installed is that we need to add brokers to the platform for the catalog to consume. Since the broker resource is non-namespaced, it's really created by cluster operators for users of the cluster. The pattern that we're going to see with the broker resource is very similar to the pattern that we'll see with the rest of the resources we talk about, which is that once the resource is created, the controller gets a watch event that says that something happened to the resource, or a new resource was created, and it goes and does some work to do the right thing.
In the case of the broker resource, what happens is that the controller will contact the broker at the broker spec's URL, fetch that broker's catalog of services, transform them into service class resources, and then persist those back into the Kubernetes service catalog API server. Here's a picture of all this happening. So first off, the platform operator creates a new broker resource. The service catalog controller gets a new watch event saying that the broker resource was added, then queries the broker at the catalog endpoint, transforms the payload into one or more service classes, and persists those back into the API server. So after this happens, the service classes will be present for users to take a look at and begin provisioning instances of. So, like all other Kubernetes APIs, the service catalog API is intention based, and what this means is that when we create a resource, it represents our intention that something should happen. So to provision a new instance of a service class, we create a new instance resource and supply parameters if the service class and plan that we're using accept them. The controller gets a watch event saying that a new instance has been requested and goes and talks to the broker to provision a new instance. Here's a picture of that happening. You can see, first off, the user creates an instance resource. The catalog API server gives that resource a GUID that it can use as the coordinate for that particular instance in the Open Service Broker API. After that's persisted, the catalog controller gets a watch event and calls provision at the broker. The broker does some work to allocate a resource and replies to the catalog controller saying it's done, or possibly that the operation is ongoing asynchronously, which we'll talk about in a second.
And then the service catalog updates the instance with a status that reflects what happened: whether the provision was successful, whether there was an error talking to the broker, or maybe the broker said we couldn't provision that instance right now. That's reflected in the instance resource's status. So, like I said, provisioning a new instance can sometimes take a long time. If you've ever provisioned a new database from a database-as-a-service product, you're probably familiar with the fact that it can sometimes take several minutes. And this is where the asynchronous support in the Open Service Broker API becomes very useful, because it gives the catalog controller a way to check back with the broker and see how an asynchronous provision is going. So here's the flow for that. The catalog API server detects a new instance resource, and in the catalog controller we always say that we can support asynchronous operations. It really depends on whether the broker's service supports asynchronous provisioning. And if it does support asynchronous provisioning, it's possible for the broker to return an HTTP 202, which tells the catalog controller that the broker has chosen to perform this provision asynchronously. The catalog controller will then update the status to show that an asynchronous operation is happening for that instance and will call the broker back at the right endpoint to poll the status of the operation periodically. And once that succeeds or fails, the catalog controller will update the status on the instance resource and send it back to the API server. So we've talked about bindings a little bit. Just to refresh everybody, a binding is a relationship between an application and a service. The Open Service Broker API supports multiple types of bindings: credentials, log drain, and routing.
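In the asynchronous flow Paul describes, the controller polls the broker's last_operation endpoint (GET /v2/service_instances/:instance_id/last_operation in the spec), and the broker answers with a small JSON document roughly like this; the description text here is illustrative:

```json
{
  "state": "in progress",
  "description": "provisioning database, 40% complete"
}
```

The state field eventually becomes "succeeded" or "failed", at which point the controller stops polling and writes the outcome into the instance resource's status.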
The log drain and routing binding types are very Cloud Foundry specific in their implementation, so currently the Kubernetes service catalog only supports credentials bindings. Credentials bindings are sort of what they sound like: you do a binding between an application and an instance of a service, and you get a secret containing information about how you should use that service. So, for example, the secret that you get might contain coordinates that you dial the service at. It might contain credentials like a username and password. It might contain an API key. And it might contain some configuration parameters like quality settings. Remember, these APIs are intention based, so the creation of a binding resource represents the user's intent that a new binding should be created. Let's take a look at the picture here; it's similar to what we've seen before. So first, the user creates the new binding resource. The catalog API server does a similar GUID generation to give the binding a coordinate to use in the Open Service Broker API. The catalog controller then gets a watch event saying that a new binding resource has been created and calls bind on the broker. Next, it'll create a secret with the result of the binding and then update the status of the binding resource. So we talked about how to make these things; let's talk about how to delete them. In the intention-based API world, the way to signal that something should no longer exist is to delete it. And the way that we handle this in Kubernetes is that when a binding or instance resource is deleted, we actually don't delete it right away. We set a deletion timestamp on it, and that creates a watch event that the controller receives, which allows it to do the work to clean things up.
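The secret produced by a credentials binding might look something like this; the key names and values are purely illustrative, since each broker decides what credentials it returns:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-database-credentials   # the secretName requested in the binding spec
  namespace: my-project
type: Opaque
stringData:
  uri: postgresql://db.my-project.svc:5432/orders  # coordinates to dial the service
  username: app_user
  password: changeme
```

An application then consumes this as environment variables or a mounted volume, the same way it would any other Kubernetes secret.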
So in the case of a binding, when we delete a binding, that binding is just updated with a set deletion timestamp, and that gives the controller the opportunity to delete the secret that it created and to call the broker back and tell it to remove the binding. Same thing with instances. When you delete an instance resource, it results in that instance having a deletion timestamp. The controller sees it, goes and talks to the broker to deprovision the instance, then finishes cleaning up, and then the instance is totally deleted from the API server. I guess I've actually talked through these slides already; I'm going to give each of them a couple of moments on screen for folks to see. So, just to recap: deleting a binding results in that binding being unbound. The secret will be deleted, the controller will talk to the broker and tell the broker to delete the binding, and then the resource disappears. Same thing with deprovision: once you delete an instance, the controller will invoke deprovision at the broker and finish deleting the instance resource. The broker resource implements a similar behavior for removing a broker and its associated service classes. So when we delete a broker resource, the controller will delete all of the service classes associated with that broker, and then the broker resource goes away. So those are the basics of the API mechanics for the Kubernetes service catalog. I'm going to turn it back to Andrew now, and we're going to take a look at what it looks like for OpenShift when we use the service catalog. Thanks, Paul. So as SIG Service Catalog continues to evolve, implementations of the Open Service Broker API in the form of service brokers have started to emerge. Next slide. Three in particular of note with regard to OpenShift are the template service broker, the Ansible service broker, and the EnMasse service broker.
The template service broker will wrap the existing OpenShift templating scheme within the lifecycle of the Open Service Broker API. You will be able to leverage the same templates that you have been using since the early days of OpenShift version 3 within this broker. The Ansible service broker incorporates the concept of Ansible Playbook Bundles, or APBs, to leverage the Ansible ecosystem to manage the lifecycle of services. Now, there is a prior Commons briefing that was dedicated to the concepts of Ansible Playbook Bundles and the Ansible service broker, so I do highly encourage you to locate the recording on YouTube and the OpenShift Commons briefing blog to revisit those concepts. And finally there is the EnMasse service broker, which will be a component of the messaging-as-a-service platform that is currently being developed by Red Hat. It provides the capabilities to allocate messaging services, such as queues or topics, for use by applications. Next slide. So when enabled in OpenShift 3.6, the catalog provides a new user interface for users to explore the available applications, along with a new workflow for deploying applications. You will be able to walk through each of the operations of the Open Service Broker API as the application is deployed. Now, these enhancements are just part of the continued evolution of the OpenShift user experience. The user interface has come a long way since OpenShift was first released, and this is just yet another step. It's really amazing to see the evolution come about. Next slide. So for those of you who are looking to create your own service broker, a service broker software development kit is available to help jumpstart those efforts. It provides a baseline set of boilerplate code written in Go and stubs out all of the necessary Open Service Broker API functions.
Now, service brokers can be written in any language, and this type of SDK can be ported as necessary to the language that you are currently developing in, such as Java or C#, you name it. It's very easy to implement these interfaces. Next slide. So while we've covered a wealth of information during this session, it really only scratches the surface of what the Open Service Broker API can offer. Here are a set of links that you can browse at your leisure to learn more about the Open Service Broker API and the efforts to implement it within the Kubernetes community. Next slide. And we've saved the best for last. We've talked a lot about the theory of what's occurring in the upstream community; now, how will these be consumed as features available in the next version of OpenShift, version 3.6? First, the service catalog will be available in technology preview status. This includes the base catalog infrastructure, including the API server and the controller manager. Also included are two of the brokers that we described earlier, the template service broker and the Ansible service broker. And for those of you who are looking to run the service catalog within your own local development environment, a new parameter has been added to the oc cluster up tool to simplify the provisioning process: use the --service-catalog parameter when running oc cluster up. Now, one thing I do want to note: since the service catalog is available as a technology preview feature, when it is enabled, it is embedded into a number of core services within OpenShift, and by doing so it does change the level of support provided by Red Hat. So while I encourage you, and my customers, to start looking at implementing this ecosystem within your organizations, do keep in mind that this should only be done in a development environment and not a production environment as of this upcoming release.
Now, we want to thank you for attending this session today. We really hope that it provided some valuable information on the Open Service Broker API specification, the work that's being done in the upstream Kubernetes community, and the features that will be available in the next version of OpenShift. All right, with that, there is one question that's come up from Jonathan in the chat that we should try and get answered. With the Ansible language support, do you imagine replacing some of the functionality of Kubernetes templates? He's asking because he would love to have more logic in how to instantiate services from the template feature. I'll take that one. So that's a really good question. The progression that I usually describe to people that are curious about this is: if you can accomplish what you want just with an OpenShift template, use an OpenShift template. If you need something more, your best next step is probably to go to Ansible and write an Ansible playbook bundle. And if for some reason that doesn't work for you and you're very interested in writing your own code, at that point you should check out the SDK to write your own broker. Does that answer your question? Jonathan, I've just unmuted you if you wanted to follow up on that. Did that answer your question? He'd have to unmute himself. Sorry, can you hear me now? Yes, I can. Thank you. Okay, thank you. We've been struggling with it, and going to Ansible, we were thinking that's a pretty drastic step, and I was wondering, since you guys are kind of knee-deep in that, if that's an issue you've seen a lot and if you've thought about coming up with templatized Ansible playbooks to try and make it easier.
The very next step is to go down to Ansible, but that's a big step to make because you have to write a lot of Ansible logic to create the resources in Kubernetes, and I'm wondering if you guys have template resources available for somebody wanting to experiment in Ansible but not knowing where to start. I can take that if you want. Go ahead. One of the benefits of the Ansible playbook bundle ecosystem is that there are a number of modules that have come out of it that represent a lot of the Kubernetes objects. So you can easily create services, deployment configurations, and routes using only a few lines of code, and it's all idempotent, so it will be able to manage the lifecycle of OpenShift resources very easily. So you don't have a large learning curve to learn Ansible, because a lot of that is already boilerplated out for you. Let's see, is there another question that popped in? Someone is giving some tips in the chat about Ansible modules for OpenShift. Maybe I'll unmute him and see. Hey. Now, just on the question of Ansible and OpenShift: there is an Ansible module for OpenShift which basically makes life a little bit easier when you need to interact with OpenShift using Ansible. So if you're looking for something like that, it would make life a little bit easier. Yeah. Oh, I see what you're saying in the note, to actually use Google search to look for those OpenShift modules. At first I was like, what does Google have to do with this? All right. So I'll put that note in the blog post too so people can find those things. Are there any other questions from people who are out there?
Yeah, and Anders just posted the modules that are leveraged by a lot of the existing Ansible playbook examples. And we're going to have another Ansible playbook bundle and Ansible Service Broker Commons briefing in a couple of weeks; I'll post the date for that soon. They've got some new stuff coming up and wanted another chance to update everybody, so there'll be some more on that, and there is an earlier briefing as well on Ansible playbook bundle features and what that all entails. So, I don't see any other questions, so I want to thank Paul and Andrew for their time today. This video will be up on OpenShift's blog, blog.openshift.com, probably in a day, depending on how fast the internet gods work on uploads. And if there are other topics that you want to hear about, especially things relating to the Kubernetes 1.7 release that are coming out, please let me know and I'll try and corral someone into speaking on that. But other than that, thank you all for your time today, and if you have questions, just send us an email on the mailing list or jump on the Slack channel and we'll try and get them answered. Thanks, everybody. Thanks a lot, Diane. Thanks, Diane; thanks, everyone.