Thank you all for coming. My name is Shannon Cohen. I'm a product manager at Pivotal working on Cloud Foundry. And I'm joined today by Doug Davis from IBM, who's been working on Kubernetes. We also have some of the team with us here in the audience who have been contributing to the working group I'll describe now. Before I get to the meat of the presentation, I'd like to start with a couple of foundational concepts. Some of this may be review, and I appreciate your patience. I'd like to start with motivations. We know that application development teams require services, and these range from application dependencies to services which enable the entire application development lifecycle. We've found that managed services enable application development teams to concentrate on their application code rather than operating these services, which can be very costly. We also know that self-service, on-demand marketplace services increase developer velocity and minimize time to deliver value to market. As many of you are aware, Cloud Foundry has a marketplace of this kind. Each deployment of Cloud Foundry, in fact, has its own marketplace, with admins adding services to the marketplace based on application developer demand. Admins have control of access to services and plans by organization, and can optionally allow developers to bring their own services to the marketplace. The marketplace provides a highly integrated, self-service, on-demand application developer experience. And over the years, a rich ecosystem of compatible services has developed, enabled by a simple, well-documented API which facilitates integration between the marketplace and service providers. We call this API the service broker API, and as it's familiar to most of you, I'll briefly summarize it here. Service providers implement a few API endpoints for which Cloud Foundry is the client, and components that implement these endpoints we refer to as service brokers.
Service brokers can be hosted anywhere the platform can reach them, and they provide a catalog of services, plans, and user-facing metadata exposed in the Cloud Foundry marketplace. The real value of the broker API, though, is in abstracting service-specific lifecycle operations from the platform. Service brokers translate generic requests from the platform into service-specific ones for operations such as create, update, delete, and generate credentials. Service brokers can also offer many services and plans, and many service brokers can be registered with the marketplace, so the catalog of services available to users of Cloud Foundry is the aggregate of all services offered by all brokers. The platform then provides a homogeneous user experience for application developers managing these services. So this model has worked fairly well for quite a while, but we still have some goals. Those include increasing the choice of services offered to application developers in Cloud Foundry, enhancing the API to offer new service use cases to application developers, and increasing adoption of Cloud Foundry. We have found, however, that reaching out to service providers one by one is labor-intensive. And not surprisingly, we've seen that the availability of compatible services has grown as the popularity of Cloud Foundry has grown. So we've been thinking: how can we make the investment in integrating with this API even more compelling? Last year, we heard that several open source communities were interested in adopting the service broker API to enable marketplaces that they were designing for their platforms. We had discussions with representatives from Kubernetes, OpenShift, Bluemix, and Google about how we could enable them to adopt the service broker API for these marketplaces, and how we could work together to enhance the API on an ongoing basis.
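To make the catalog concept above concrete, here is a minimal sketch of the kind of JSON document a broker returns from its catalog endpoint. The field names follow the service broker API's catalog format; the service, GUIDs, and plan shown are hypothetical placeholders, and this is an illustration rather than a complete broker.

```python
import json

# Sketch of a broker catalog document (the response body to the
# platform's catalog request). The service and plan here are invented
# examples; only the overall shape follows the service broker API.
catalog = {
    "services": [
        {
            "id": "example-service-guid",       # stable identifier for the service
            "name": "example-db",               # name surfaced in the marketplace
            "description": "An example database service",
            "bindable": True,                   # apps may request credentials
            "plans": [
                {
                    "id": "example-plan-guid",
                    "name": "small",
                    "description": "A small shared plan",
                },
            ],
        }
    ]
}

print(json.dumps(catalog, indent=2))
```

The platform aggregates documents like this from every registered broker to build the marketplace the user sees.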
Before I proceed, I'll address an obvious question. Many have viewed the Cloud Foundry services marketplace as a differentiator in an increasingly competitive market, and some have asked why we would assist these platforms in enabling this feature. We believe that with more platforms supporting this broker API, it becomes more compelling for service providers to invest in implementing the integration. Ultimately, we expect the ecosystem of compatible services to grow and enable what is really our goal: increasing developer velocity through increasing the choice of compatible services. So with that in mind, last year the Cloud Foundry Foundation created the Open Service Broker API project, managed by a Cloud Foundry Foundation PMC, a project management committee. Members of the PMC are represented by Fujitsu, Google, IBM, Pivotal, Red Hat, and SAP, representing Cloud Foundry, Kubernetes, and other platforms. The API spec itself has been moved from a Cloud Foundry repo to a new GitHub organization for the project. Goals for this working group include evolving the API into a cross-platform community specification and increasing adoption by platforms and service providers to, again, ultimately increase the choice available to application developers. Since then, the working group has been learning to work together. Some of us are new to creating standards, but fortunately we have some members in our working group with a great amount of experience in this. We shared our respective priorities and fortunately found that the top priorities for our respective platforms overlapped, so there hasn't been a great deal of disagreement over priorities. Most of our work has been in gathering requirements, discussing use cases and design documents, and arriving at solutions that we agree on to meet these use cases. And we're developing a release process that continues to require implementation of new features in a platform.
Currently, that's been Cloud Foundry, but increasingly that may be Kubernetes also, to validate the usability of API interactions. I'd like to give you a quick look into the roadmap for features that we have coming in upcoming versions of the spec. The first thing we identified was that there were a few Cloud Foundry-specific aspects to the API. Fortunately, there weren't that many. In particular, Cloud Foundry sends an org ID and space ID in the provision request, and Kubernetes and other platforms don't have a notion of those fields. So we're introducing a new field that will allow platforms to send a profile of information to brokers. And while we'll deprecate the current fields, we won't remove them, as our charter is to make additive changes only. In the next major version of the spec, whenever that may be, those will be removed. The biggest feature that we've been discussing recently meets the use case that app developers want a richer experience around management of service configuration options. Brokers currently may support an arbitrary number of configuration options, but discovery of those is currently left out of band, through documentation or otherwise. We've been designing a schema, based on the JSON Schema standard, which would enable broker authors to declare supported configuration options. Platforms would pass those through to user-facing clients, so that those clients could offer much richer interactions around these configuration options. We've also identified that there are now valid use cases for adding some GET endpoints for instances and bindings, to allow retrieval of these configuration options and credentials. We imagine that these GET endpoints could eventually be used for other aspects of instance and binding state. We've also heard that the specified mechanism for authorization and authentication in broker interactions, basic auth, is both too prescriptive and undesirable in many environments.
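To illustrate the configuration-options feature being discussed, here is a sketch of a JSON-Schema-style declaration for a hypothetical service plan's create-instance parameters, plus a deliberately tiny checker showing how a platform client could validate user input against it. The schema placement, parameter names, and the hand-rolled validator are all assumptions for illustration; real clients would use a full JSON Schema library.

```python
# Hypothetical declaration of supported configuration options for a
# plan, in the JSON-Schema style the working group has been designing
# around. Field names (storage_gb, region) are invented examples.
create_instance_schema = {
    "type": "object",
    "properties": {
        "storage_gb": {"type": "integer"},
        "region": {"type": "string"},
    },
    "required": ["storage_gb"],
}

def check_params(schema, params):
    """Check a tiny subset of JSON Schema: required keys and basic types."""
    type_map = {"integer": int, "string": str, "object": dict}
    for key in schema.get("required", []):
        if key not in params:
            return False
    for key, rule in schema.get("properties", {}).items():
        if key in params and not isinstance(params[key], type_map[rule["type"]]):
            return False
    return True

ok = check_params(create_instance_schema, {"storage_gb": 10, "region": "us"})
bad = check_params(create_instance_schema, {"region": "us"})  # missing storage_gb
print(ok, bad)  # True False
```

With a declared schema like this, a marketplace UI or CLI can render forms and validate input before the provision request ever reaches the broker.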
We intend to relax that constraint, as well as identify a few popular mechanisms that we might facilitate. In particular, we found that while we don't necessarily need to add anything to the spec about where your OAuth2 token server may be, we could standardize on the scopes that brokers, token servers, and platforms use to authorize requests between platforms and brokers. We've heard that broker authors would like to know the identity of the originating end user, and we're thinking through how to facilitate that, both for the purposes of authorization or billing, and also to enable broker authors to make calls into the platform on a user's behalf. Many brokers are offering persistent data services, though not all, and we're thinking through how we can enable broker authors to execute backup and restore operations. We have not yet decided whether those should be facilitated with first-class endpoints on the API, or whether we might consider a more generic mechanism to allow brokers to declare arbitrary supported actions. And finally, we mean to extend asynchronous support to bind and unbind, as some broker authors are making calls for those operations to asynchronous backend systems. With that, I'd like to invite Doug Davis from IBM to the stage to give us an update on the Kubernetes Service Catalog project, which supports the service broker API. Thanks, Shannon. Is the volume OK? All right, thanks. As Shannon said, I'm Doug Davis from IBM. I've been lucky enough to actually work on Cloud Foundry in the past, then Docker, and now Kubernetes. So I'm really excited to be able to bring this aspect of Cloud Foundry to the Kubernetes community, along with other people in the community. Now, I know that this is a Cloud Foundry audience, so if you're familiar with Kubernetes, forgive me for really, really oversimplifying it. But I wanted to at least give you a quick intro, for those who don't know Kubernetes, on what it's all about.
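The asynchronous pattern mentioned above, which the spec already uses for provisioning and which the group intends to extend to bind and unbind, boils down to the platform polling the broker's last-operation endpoint until it reports a terminal state. Here is a sketch of that polling loop; the stubbed response sequence stands in for real HTTP calls, and the max-polls cutoff is an illustrative detail, not something the spec prescribes.

```python
# Sketch of the platform side of an asynchronous broker operation: keep
# asking the broker for the operation's state until it is terminal.
# RESPONSES stands in for successive HTTP responses from the broker's
# last-operation endpoint.
RESPONSES = iter(["in progress", "in progress", "succeeded"])

def poll_last_operation(fetch_state, max_polls=10):
    """Poll until the broker reports a terminal state or we give up."""
    for _ in range(max_polls):
        state = fetch_state()
        if state in ("succeeded", "failed"):
            return state
    return "timed out"

result = poll_last_operation(lambda: next(RESPONSES))
print(result)  # succeeded
```

Extending bind and unbind to work this way lets brokers front backend systems that cannot create credentials synchronously within a single request.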
At a very basic, core level, you can think of Kubernetes as basically a database with a REST API in front of it. And that's basically it at its core. So what you're going to have is the Kubernetes client talking to an API server, which talks to a database. And that's basically it in terms of the core functionality of how the user interacts with the system. It's an asynchronous model, meaning you put things into the database, and then the client basically gets an immediate return, in most cases. There are exceptions. Beyond that, though, what they have are a set of watchers, or controllers, which basically monitor the database, look for changes in there, and then act upon those changes. So for example, if you ask for three instances of a particular application, but it only detects one, it'll bring up two more. So that's very much matching the desired-state model that you see in Cloud Foundry as well. But the important thing here is that when you think about Kubernetes versus Cloud Foundry, Cloud Foundry tries to abstract these things away from the end user. Kubernetes takes the other approach, and for the most part exposes the end user, if they choose, to all the underlying system resources. And that's a very different model. This is important because I'm going to start explaining some of the additional resources to you. So if you start playing with this in Kubernetes, I didn't want you to be surprised by seeing things that are there in Cloud Foundry but are just hidden from you; in Kubernetes, they will be exposed. The other key point to keep in mind when thinking about Cloud Foundry versus Kubernetes is that Kubernetes has a notion of pods.
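The watcher/controller pattern described above can be sketched very simply: compare desired state against observed state and compute the actions needed to converge. This toy version matches the example of asking for three instances and finding only one running; real controllers do this continuously against the API server, but the core logic is the same diff.

```python
# Toy reconciliation loop: given a desired count and the pods actually
# observed, return the actions a controller would take to converge.
def reconcile(desired_count, running_pods):
    diff = desired_count - len(running_pods)
    if diff > 0:
        # Not enough replicas: create the missing ones.
        return [("create", i) for i in range(diff)]
    if diff < 0:
        # Too many replicas: delete the surplus.
        return [("delete", pod) for pod in running_pods[:(-diff)]]
    return []  # already converged

actions = reconcile(3, ["pod-a"])
print(actions)  # two "create" actions
```

This is the same desired-state idea the speaker notes exists in Cloud Foundry; Kubernetes simply exposes the reconciled resources directly to the user.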
While they do use containers, they allow you to group containers together into what they call pods, because they realize that in many cases you want to have multiple containers sitting side by side on the same host, working together. And that is the smallest unit of deployment that they have. So keep in mind, they have pods instead of containers. And lastly, in order to identify your application: they actually don't have the notion of an application. What they have are labels that you can put onto pods. So when you want to take an action on your "application," what you're really doing is saying, go find all the pods out there with this particular label and take this action for me. So it's important to know that the notion of an application really doesn't exist inside Kubernetes, for the most part. All right, so with that in mind, let's keep going. Now, in order for Kubernetes to support service brokers, we had to add some new resources to the data model. For a variety of reasons, we decided not to extend the core Kubernetes model itself. It's actually an add-on feature, partially because we weren't 100% sure we wanted to pull it into the core yet, but also because they wanted to use us as a guinea pig for some of the newer features they're adding, in terms of how they allow people to extend the model. And so we got to be a guinea pig for some of those things. But let's start talking about some of these core features. For the most part, these should be very similar to the ones you see inside Cloud Foundry; it's just that some of them may not be exposed as first-class citizens in Cloud Foundry, but they are there. So for example, in Kubernetes you can create a broker resource, which is nothing more than the metadata about where the broker lives: its URL, user ID, and password.
And then once this resource is created, typically by an admin in the Kubernetes model, a controller will detect it and then, under the covers, go off and talk to the broker, get the catalog, and then populate the Kubernetes resource model with that catalog. So as that catalog gets created, what you're going to see are service classes appear. Now, a service class is the same thing as a "service"; the reason we had to use the word class on the end is that service itself was already taken by the Kubernetes model, and we didn't want confusion there. But these are created implicitly by the controller; end users don't create these. And within the service class itself, you'll have all the various plans that you should know about if you deal with services in Cloud Foundry. Then we also have an instance object. Now, this actually is created by an end user. So when they want to provision an instance of, say, MongoDB, they'll go off and create an instance object inside the Kubernetes model. And that has basically the metadata necessary for the controller to notice it, go off and talk to the particular broker, come back with whatever information the broker's going to return, and stick it back in that object. So this object is just a metadata object more than anything else. It registers the existence of the instance itself, from the user's perspective that they want it, and then the information from the broker itself about what actually got created. And then finally, we have the binding. Much in the same way Cloud Foundry links applications to service instances, that's what we have here inside of Kubernetes. The only major difference is that inside Kubernetes, the credentials are actually stored in a first-class object called a secret. Think of it as nothing more than a key-value store that is encrypted.
Now, inside Kubernetes, you can take these secrets and bind them to your application, or your pods, in a variety of ways. So for example, you can take all the keys and have them appear as environment variables, or you can have them appear as a volume mount. That way you don't have to deal with the potential security problems around environment variables; you can access them as a volume mount instead. So there are some options there, and we make use of those; I'll talk a little bit about that in a sec. Now, everything I've talked about so far has been about extending the model outside the core. There is one feature that we did actually add to the core itself, and that's called the pod preset. A pod preset allows you to define a resource, obviously called PodPreset, which says: look for any new pod that's getting created in the system, and based upon the labels you see on that pod, do something to it. In particular, it allows you to add environment variables to the pod, or it allows you to mount volumes in there. And hopefully you can see this is going to be very important to us, because as we need to inject credentials from the binding into the pods of an application, we're going to make use of this. So while this resource was created because of the service catalog work, we made it generic enough that it could be used by other applications, or other uses of Kubernetes; it's not service catalog-specific. OK, so let's talk a little bit more about bindings, because I think this is the piece of Kubernetes that might be slightly different from Cloud Foundry. On the top right-hand corner of the screen, you can see a sample YAML file that describes a binding. But very much like the Cloud Foundry model, a binding is just a linkage between an application and the credentials. So in this particular case, what you see in red is a pointer to the instance itself.
So we're going to reference it by name. Previously we created that instance object and gave it a name, in this case MongoDB1. And then inside this binding object, we're also going to create the pod preset template, and this is going to get used to create a real pod preset in the Kubernetes core system. In this particular case, I said: find all pods that get created that have a label of app with a value of my-app. And so as new pods get created in the system, they will get the secrets from the instance we're bound to injected into them as we go along. Now, a lot of the stuff that I'm talking about here is at kind of an alpha stage. So as of right now, all of the secret information, in other words all the keys, will get added as separate environment variables automatically. We are going to work on adding support for doing bind mounts and things like that; it's just not quite there yet if you decide to play with it. But I did want to mention that at least the basics are there: they all appear as environment variables. For the most part, you should see that this is actually very similar to what you see in Cloud Foundry. You create an instance. You create a binding. Things magically appear inside your application as environment variables: a single environment variable in the Cloud Foundry case, a set of environment variables in the Kubernetes case. Very similar concepts. And that's one of the key things that I think is important in all this work: while the underlying implementation is slightly different between Kubernetes and Cloud Foundry, the way the user sees it is very, very similar. Different syntax in some cases, but the concepts are the same. And I think that's a very important concept going forward.
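The pod-preset behavior just described can be sketched in a few lines: a selector of labels decides which new pods are affected, and the keys of the binding's secret are injected into matching pods as individual environment variables. The label, secret keys, and data structures below are simplified stand-ins for the alpha-era service catalog objects, not their real schemas.

```python
# Sketch of label-selector matching plus credential injection, the two
# things a pod preset does for a binding. Pod and secret shapes here
# are simplified illustrations.
def matches(selector, pod_labels):
    """True if every selector label is present on the pod with the same value."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

def inject_env(pod, selector, secret):
    """Add each secret key to the pod's environment if the selector matches."""
    if matches(selector, pod["labels"]):
        pod.setdefault("env", {}).update(secret)
    return pod

secret = {"MONGO_URI": "mongodb://db.example/test", "MONGO_PASSWORD": "s3cret"}
pod = {"labels": {"app": "my-app"}, "env": {}}
inject_env(pod, {"app": "my-app"}, secret)
print(sorted(pod["env"]))  # ['MONGO_PASSWORD', 'MONGO_URI']
```

Note that each secret key becomes its own environment variable, matching the current alpha behavior described above, rather than one aggregate variable as in Cloud Foundry's VCAP_SERVICES.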
And it's that level of consistency and pseudo-interoperability that I think is going to be great for users in terms of giving them choice of what platform they want to use, because they get similar experiences. Now, one of the other things we ran into is that because the model we added wasn't part of the core, we had to basically create our own little API server: that little HTTP front end in front of a database. But that's not a great user experience from the client's perspective, having to talk to two different API servers just to get their job done. So Kubernetes has been working on this thing called the API aggregator, which is basically kind of like a proxy in front of multiple API servers. In our case, what we're going to do is tell people to install the normal API server, on the left-hand side of the picture there; that's the core API server. But then put in front of it an API aggregator, which says: OK, if you see a request that talks about service catalog types of resources, route it over to the service catalog API server, which is running outside the core, and get your job done. Now, the API server for the service catalog may still talk back to the core, to the database over there, to get its job done, for example to deal with secrets and things like that. But from the end user's perspective, they see a single endpoint and a single user experience. And this was something that was relatively new, so we took a lot of arrows and pulled out a lot of hair trying to get it all to work. And the same thing for standing up a brand-new API server. For those of you who are familiar with Kubernetes, some of the things in there are not necessarily the easiest things in the world to deal with. And we took a lot of hits trying to get all this stuff working, and ironed out a lot of the bugs, because we were really the first ones to use some of these things.
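The aggregator's routing decision described above is conceptually just a path-prefix dispatch: service-catalog resource requests go to the add-on API server, everything else to the core. This sketch shows that decision only; `servicecatalog.k8s.io` is the incubator project's API group, while the routing logic and server names are simplified assumptions.

```python
# Sketch of the API aggregator's routing decision: one front door that
# dispatches by path prefix. Real aggregation also handles discovery,
# auth, and TLS, which are omitted here.
def route(path):
    if path.startswith("/apis/servicecatalog.k8s.io/"):
        return "service-catalog-apiserver"  # the add-on server outside the core
    return "core-apiserver"                 # everything else stays in the core

print(route("/apis/servicecatalog.k8s.io/v1alpha1/instances"))  # service-catalog-apiserver
print(route("/api/v1/pods"))                                    # core-apiserver
```

The point is that the client only ever sees one endpoint, even though two API servers are doing the work behind it.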
So that was actually good for the community at large, because we helped enhance the documentation and the experience of using these things. All right, for those of you who are familiar with Kubernetes, there is a command line, kubectl. And basically you act on resources, as I said. So you can do a kubectl create, update, or delete, and you pass in a YAML file describing the resource. There are some examples of it right there. Now, we are planning on doing some plugins to the kubectl command line to make the user experience a little more user-friendly, and that's what you see down at the bottom. Those should look very familiar to you if you know the Cloud Foundry command line. It's a very similar thing: you want to create an instance, you give the service class name, the plan name, what Kubernetes namespace you want it to be visible in, and then the instance name. Very similar to Cloud Foundry. And under the covers, these will create the YAML files appropriately and then run basically the equivalent of the kubectl create commands for you. So we're trying to copy Cloud Foundry's wonderful user experience here on the Kubernetes side. Kudos to them, because they did a great job. In terms of status: we have a special interest group and an incubator project, both called Service Catalog. We are planning on being fully service broker API-compliant going forward. IBM, Red Hat, and Google are very involved in both organizations, so there's great synergy between the two. And as I said, a lot of the stuff we have right now is in alpha form. We are very, very close to beta, and that's very important from a Kubernetes perspective. Because from a Kubernetes perspective, beta means that, as an end user, you can assume the APIs will be stable going forward. So you can start playing with it and be assured that we're not going to break you going forward. We may add new things, but it should be backwards-compatible.
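As a sketch of what such a kubectl plugin could do under the covers, here is the translation from CLI-style arguments (service class, plan, namespace, instance name) into the resource document that kubectl create would accept. The apiVersion, kind, and spec field names mirror the alpha service catalog API but should be treated as assumptions here, since the exact schema was still evolving.

```python
# Hypothetical sketch: turn the plugin's CLI arguments into an Instance
# resource, the way the proposed kubectl plugin would before invoking
# the equivalent of "kubectl create". Field names are assumed from the
# alpha-era service catalog API.
def make_instance_manifest(service_class, plan, namespace, name):
    return {
        "apiVersion": "servicecatalog.k8s.io/v1alpha1",
        "kind": "Instance",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"serviceClassName": service_class, "planName": plan},
    }

m = make_instance_manifest("mongodb", "small", "default", "mongodb1")
print(m["kind"], m["spec"]["planName"])  # Instance small
```

The plugin then only has to serialize this dict to YAML and hand it to the API server, which is why the user-facing command can look so much like the Cloud Foundry CLI.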
And that's going to be a huge milestone for us. I think we're less than about a week or two away from that. So at that point, I would feel comfortable telling people to really start playing with it, hammering on it, and letting us know what you think. And finally, just some links for you to get involved. The top set are for the Open Service Broker API itself: we have a web page, a GitHub repo, obviously the Cloud Foundry link there, and then a link to the Service Catalog GitHub repo for Kubernetes. And I believe that's it. Oh, we have office hours tomorrow at the collaboration station from 12:50 to 1:20. Shannon and I will be there answering questions. And we do have some funny stickers. Funny because they're incredibly small; I've never seen stickers quite this small before. But that's it. So, Max, how much time do we have? Oh, we have a whole four minutes for questions. We made it; I was afraid we were out of time. So Shannon, you want to come up in case we have questions? Oh, you need to steal mine? Yeah, there you go. OK, cool. That's loud. Any questions from the audience? Ah, perfect. So remember, mention your name and affiliation so that we know who you are. Yes, Luis Amadeo from Ultimate Software. So is the intention for service brokers to be shareable between Cloud Foundry and Kubernetes once they support the same API? Yeah, by and large, that's the goal. The goal would be that a service provider who wants to offer a service in the marketplace of either platform would only need to write one implementation of the broker API endpoints, and they should be able to register that broker with both platforms. Great. Thank you.
Yeah, the only thing I would add is that one of the biggest things, aside from just the coolness factor, is to imagine that a user can then choose: maybe today I want my application deployed on Cloud Foundry, maybe tomorrow I want it on Kubernetes, or vice versa, and they should be able to get a similar user experience using these services. And that's a wonderful interoperability statement to me. I think that's really going to be cool going forward. Any other questions? I guess we'll follow up to this: does it mean that there will be one service instance that is shared between, say, my deployments on both, or will I create two service instances? That's a harder problem, but I've heard that Bluemix enables this to some extent. Yeah, I was going to say, that's basically an implementation detail. Being IBMers, we know Bluemix actually does support this today. You can create a service instance for, say, your Cloud Foundry stuff and get access to it from a Docker container or Kubernetes pods. So it is going to be possible to do that kind of thing, but it's going to be an implementation detail of the platform that supports multiple runtimes. Now, Pivotal supporting Cloud Foundry and IBM possibly supporting Kubernetes, where they can share instances between two completely disjoint platforms, that's going to be much harder and probably not coming any time soon. But within one platform, I would expect so, yes. OK, cool. Any other questions? Wow, yes, we're keeping you from lunch. Thank you, Shannon and Doug. Enjoy lunch.