Can everybody hear me okay? Yes? Yes. All right. Good afternoon, everybody. My name is David Sabeti. I'm an engineer on the services API team at Pivotal, and this over here is Dr. Max, as he's more colloquially known, from IBM. We're going to talk to you a little bit today about what's new with Cloud Foundry services, and sort of give you guys a little tour through the existing architecture and the features that are coming through the pipeline. So what are we going to cover today? We'll start with an overview of the broker architecture, and what I really want to focus on is the goals that we managed, I think, to accomplish with the given architecture. I want to make sure that everybody understands why those goals are valuable for people who are writing services, for people who are operating Cloud Foundry, and also for developers who are using Cloud Foundry. And then after I do that, Max is going to go through the three big features that we've added in the last couple of months, and explain a little bit about how we implemented them and how they extend the same basic principles that we established in the first part. And then, you know, if the demo gods are happy with us, we're going to go for a demo at the very end, and then we'll finish up with some Q&A, and maybe some feedback, or just sort of hearing from you guys. So let's get started. Probably the best place to start is to describe the problem that CF services tries to solve. Basically, what we want to do is make sure that Cloud Foundry services enable developers to discover and integrate third-party software with their Cloud Foundry applications through the platform. Let's just start with that. This definition is actually a little bit malleable.
I could tell you, for example, that services actually don't necessarily have to be connected to applications, but this is sort of the basic principle. And we find that with our definition of services, we can expand it as much as we want to include all sorts of services that don't necessarily fit into the original plan, which I think is a good thing. So maybe let's stick with this. So when I talk about third-party software and services, when we talk about Cloud Foundry services, what do we really mean? What we're talking about is third-party software, of any kind, that a user might want to use in the context of Cloud Foundry. So the easiest example to imagine is something like a data persistence layer: SQL databases, or maybe a Redis, a RabbitMQ, or Hadoop, anything like that, just data services. But you can take it a little bit further and include things like analytics tools, monitoring, or metrics like New Relic. You could think of utility services for your application, things like sending emails. But you could even take it one step further, and this is sort of where we break away from this definition, and include services that are just team enablement services, things like Pivotal Tracker or JIRA, that don't necessarily have anything to do with an individual application. But if your developers are already working in the context of Cloud Foundry, it really makes sense for them to have those services in the same context. So services have a pretty generic definition, and that's largely by design. So the first thing we want to do is make sure that software developers, the users, can actually find these services pretty easily. Really easy UX through the platform, something like this. You have a marketplace, and it would tell you, yeah, these are a couple of services that your Cloud Foundry installation is currently offering. p-mysql and p-riakcs are two services that Pivotal builds.
And if you look at this output, it's pretty straightforward. It lists services. The second column might not be as obvious, but those are different plan configurations. And then finally, a little description of what the service does. So the next thing we want to do is integrate these services with our apps, if that's possible. You can do this in three commands. This is the UX you might imagine for the developer. It's something as simple as starting by creating the service, then moving on to what we call binding a service. A binding is simply a way to tell the system that we want this application to be able to use this service instance. When we talk a little bit about the architecture, you'll see that there are a lot of different ways that you can tell the system that, but this is the way that we do it through the UX. And then finally, there's a technical detail that you have to restage your application to make these changes propagate. But there you go, three commands, and your app can now use your service. Pretty straightforward. I should point out that we did all of this through the platform. At no point did the developer have to go out of band to provision these service instances, go log into a web interface, or anything like that. The same way that I said these instances are valuable in the context of Cloud Foundry, we want to make sure that we don't make users leave that context as they're provisioning and using their services. We have a couple of other goals that we want to make sure to accomplish as we build out the architecture. One is that deployment and maintenance are opaque to the developer. This actually means two things. The first is that a developer really shouldn't be concerned with the management or maintenance of a service instance. They don't care. It's not their job to maintain uptime, or to worry about where these servers are located, or networking.
That's typically not a developer's concern, especially not at first. The other thing is that we want to be agnostic to specific technology choices. So we want to be agnostic to infrastructure: we don't care if a service is deployed to AWS, OpenStack, VMware, whatever. We also don't care about any other technical choices, like what language it was written in or what other dependencies it has. We have a pretty basic requirement that services are accessible via a URL from the instance of Cloud Foundry, and that's really it. We also don't want to be prescriptive about what a service is. I talked a little bit about this earlier, but we want to have, on purpose, a very loose definition of what a service is. We can come up with a lot of really easy examples of what a service is, but what's really interesting is to see people take our loose definition and imagine really interesting use cases for it. I think earlier today I heard somebody talk about networking as a service; I was eavesdropping on a conversation. You can really think of a lot of different things, and we want to make sure that we enable people to continue to have slightly different ideas about what a service is. As a result, external vendors can bring their services to CF. If we don't care where a service is deployed, and we don't really have a strong definition of what a service is, that means that even if you're the operator of a Cloud Foundry instance, you can invite partners to come and bring their services to your Cloud Foundry. And the reason why this is really great is that as the operator of Cloud Foundry, it's not your job to also be the operator of the service. We can leave that to the experts. And finally, we want to make sure that all this happens in a dynamic way. I'm not just using a buzzword here. What I mean is that we want to make sure that you never have to redeploy your Cloud Foundry or restart any part of the system.
We want to bake broker integration into the API, so that we can make a few API calls and the service provider is completely integrated and ready to have services provisioned and used. So these are four different things we wanted to make sure we accomplished. And the solution that we converged on is what we call the service broker architecture. When I talk about a service broker, what we're really talking about is a translation layer between Cloud Foundry and the service provider. A service broker adheres to a very small HTTP interface. Basically, it's a contract: Cloud Foundry is going to make requests and expects the broker to conform to this relatively small interface. And the broker can take these Cloud Foundry domain requests and translate them into service domain requests. So for example, let's say your service is a MySQL database. Cloud Foundry is going to ask, give me a service instance. Well, MySQL probably doesn't know anything about what a Cloud Foundry service instance is. But the broker does. The broker knows that what Cloud Foundry means is that it wants a database. So the broker can delegate to the appropriate components, or make the request itself, to create a database. And then it can come back to Cloud Foundry and say, yeah, we provisioned your service just like you asked. So when we talk about service brokers, and we'll see this in a second with some pictures, what we're talking about is a translation layer between Cloud Foundry and the service provider. All right, so we're going to do some boxes and lines. Nobody get too scared. I just sort of want to show you the basic back and forth that users will do, and what happens to a user's request as it makes its way through the system. So the first thing we need to do is register the service broker. This is actually an admin function, at least currently.
An admin says, all right, I have a new service broker that's ready to integrate with my Cloud Foundry. The Cloud Foundry admin can now bust out his CF CLI and make a create-service-broker request. All he has to do is provide a URL and credentials to access the URL. Basically, the URL points to where the broker lives. And the Cloud Controller is going to make an HTTP request. This is the first HTTP endpoint that we expect brokers to implement: we ask for a catalog. The catalog is pretty straightforward. It's just a list of the services and plans that the broker offers, basically the services that we want users to be able to see. By the way, I started talking about the Cloud Controller, this guy right here. The Cloud Controller is just the component of Cloud Foundry that accepts API requests and makes these requests to the broker. If you don't know what the Cloud Controller is, if you're not already friends with him, just pretend that box says Cloud Foundry; that's really all that matters. But if you are familiar with a little bit of the internals of the architecture, I wanted to make sure it was known which component within Cloud Foundry we were talking about. So once the request succeeds, a user can now make a request that says, hey, show me your services, just like I showed you before. And our new service here is displayed to the user. And the user knows that he can now provision and bind to these services. So what does the user want to do next? Well, now we can be a developer. So a developer says, I want to create a service. So here he is. He makes his CLI command: cf create-service. The Cloud Controller is now going to make the next important request to the broker, to a new endpoint. It's going to use a PUT request to the broker and include a GUID, which is just a globally unique identifier, at which point the broker is going to go and do whatever it needs to do to make the service.
As I said before, in the example of MySQL, that work is to go create a database. But we can imagine lots and lots of different use cases for what it means to create a service. Like I said before, the broker is translating a Cloud Foundry-specific request into a service-specific request. And then, once everything goes well, the broker returns a 201 Created, and the Cloud Controller keeps a record of that service in its database. Now, when a user asks for all of his services, he can see his new database, his new service instance, in the output, and he knows that he has successfully created a service instance. So the next thing he's going to do is probably bind to the service. That means he's going to connect it to one of his applications. Yet again, the Cloud Controller is just going to relay this request over to the broker. This time, it's a slightly longer URL. But it's a request to create a service binding, sort of treated as its own resource, but nested under service instances. And again, the broker is going to return 201 Created in the case of a success. The important thing to point out here is the actual response body that the broker returns to the Cloud Controller. What we're getting back from the broker is a set of credentials; we call it credentials. What we mean here is that this is the broker telling Cloud Foundry, hey, you want to give this set of data to your application, so your application can use the service instance properly. So most of the time, what you'll see is something like auth credentials. But you'll also often find things like IP addresses or URLs where the service instance is actually located. But depending on the nature of the service, it could be something totally different. It might be an API key; it might be other things that basically tell the application, this is how you can use your service instance. So these credentials get injected into the application's runtime environment.
They get stored as environment variables, in a variable called VCAP_SERVICES, that your application can then use to actually make use of the service instance. And likewise, a user over here on the front end can see that the bound apps column has now been updated to include the name of his application. Wow, that was fast. Any questions? Maybe we should hold off questions for the end. And I'll let Max go over what some of the new features are. Thank you, David. OK, so before I start, let me mention something. As you probably know, when we talk about Cloud Foundry — I'm with IBM — everybody repackages Cloud Foundry and resells it. If you think, well, where do we make money in general? Well, right there, in services. Services is one of the areas where different platforms can differentiate, but of course we want the base to be open source. So when we were looking at this, especially Brian Martin over there, from IBM, wanted different aspects of the services API to be changed and improved, and it created some sort of tension. So the great thing is, we worked with Shannon and David Stevenson and created a brand new team starting at the beginning of the year. And of course, David was the leader, and I worked with him. I'd never worked on services before, and it was just a joy. For two months, we pretty much implemented a lot of what you're going to see here. And then, plus, I went to China. I got two colleagues from China, Edward and Tom. And guess what? They worked on it with us too. So what you're going to see is basically the outcome of this new team, and they're continuing to add more stuff. So the first big thing that we added: in the old service architecture, you had 60 seconds to provision your service. Crazy, right? I mean, what gets done in 60 seconds? Like spinning up a VM, I guess — if you're Docker, maybe 30 milliseconds, yeah.
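Picking up on the VCAP_SERVICES variable mentioned in the binding flow above, here is a small sketch in Go of how an application might dig a connection URI out of it. The `p-mysql` label and the `uri` credential key are assumptions for this example; the actual keys depend entirely on what the broker returned at bind time.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Binding is one entry in VCAP_SERVICES: the instance name plus the
// credentials object the broker returned when the app was bound.
type Binding struct {
	Name        string                 `json:"name"`
	Credentials map[string]interface{} `json:"credentials"`
}

// serviceURI looks up the "uri" credential of the first binding under the
// given service label in a VCAP_SERVICES payload.
func serviceURI(vcap, label string) (string, bool) {
	// VCAP_SERVICES is a JSON object keyed by service label, each holding
	// an array of bindings.
	var services map[string][]Binding
	if err := json.Unmarshal([]byte(vcap), &services); err != nil {
		return "", false
	}
	for _, b := range services[label] {
		if uri, ok := b.Credentials["uri"].(string); ok {
			return uri, true
		}
	}
	return "", false
}

func main() {
	// In a deployed app, Cloud Foundry populates this environment variable.
	uri, ok := serviceURI(os.Getenv("VCAP_SERVICES"), "p-mysql")
	fmt.Println(uri, ok)
}
```

This is the application-side half of the contract: whatever the broker put in the bind response is what shows up under `credentials` here.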
But you still have to put data in. You still have to do stuff. So operations need to be able to take a little bit longer than just 60 seconds. The obvious solution for this is: can you do stuff asynchronously? Well, it's a lot easier said than done. We did it. It took a little bit of time, partly because we also paid down some technical debt. But the basic view of it is that for something like IBM Watson, where a lot of it is analytics stuff, right, lots of data needs to be pulled in. You've got to set up things before the service is actually available for you to use. It takes more than 60 seconds. So you want to be able to do it in an asynchronous fashion. Same thing for Hadoop, for instance, right? If you're going to set up a Hadoop cluster, it's going to take you more than 60 seconds, because for any job that's more than just a toy job, you're going to have to spin up maybe some VMs, maybe some Docker containers, whatever it is. It's probably going to take more than 60 seconds. So the key is to be able to enable these kinds of use cases. So how did we do this? We basically used a classic design pattern, which is kind of a future object. So instead of returning immediately, you get an object that tells you what the future is, and you can interrogate it. In other words, you're going to poll. So you do that, and it gives you pretty much the state of the service. We're going to have a nice demo that shows you exactly what I'm talking about here. The other consequence is that there is now a tighter integration between Cloud Foundry, or the Cloud Controller, and the broker. So if you're all broker providers, make sure you pay attention to the specification if you did not in the past, because now there's a little bit more communication between your broker and the CC.
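The polling contract just described can be sketched concretely. Below is a minimal illustration in Go of the state object a broker reports back while the Cloud Controller polls an asynchronous provision or delete; the percent-done thresholds and description strings are invented for this sketch, but the three states ("in progress", "succeeded", "failed") are the ones the talk refers to.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LastOperation is the body a broker returns while being polled about an
// asynchronous operation: a state plus an optional human-readable description.
type LastOperation struct {
	State       string `json:"state"`       // "in progress", "succeeded", or "failed"
	Description string `json:"description"` // free text surfaced to the user
}

// pollResult maps a broker's internal progress measure to a poll response.
// The thresholds here are arbitrary, purely for demonstration.
func pollResult(percentDone int) LastOperation {
	switch {
	case percentDone < 0:
		return LastOperation{State: "failed", Description: "provisioning failed"}
	case percentDone >= 100:
		return LastOperation{State: "succeeded", Description: "instance is ready"}
	default:
		return LastOperation{State: "in progress",
			Description: fmt.Sprintf("%d%% complete", percentDone)}
	}
}

func main() {
	// Simulate three successive polls as provisioning advances.
	for _, p := range []int{10, 60, 100} {
		body, _ := json.Marshal(pollResult(p))
		fmt.Println(string(body))
	}
}
```

This is the "future object" pattern Max mentions: the provision call returns immediately, and the state lives in responses like these until the operation finishes.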
So as an example, if we look at the previous slide that David was talking about, when he showed how a service gets bound, what's going to happen now is that instead of immediately having a service ready so that you can start using it, you instead have a series of polls, where you can see there is a state now. The first state, of course, is going to be that it's in progress, and this state could actually persist for any amount of time, depending on your broker, until it's ready, or created. Same thing if you're deleting. Maybe as part of your delete, you want to take a backup, so your delete could actually take time. So that's going to be one of the bigger changes that you're going to experience, and this is to support asynchronous operations. So this is kind of the difference between what you saw before and now. Another feature that came up — again, a lot of it came from the Bluemix team, but Shannon also has a list of customers, a lot of you probably in this room, that pretty much tell him what you're looking for, and we're trying to address that — is service keys. We used to call them credentials, but credentials is kind of overloaded, because there are other things using credentials, so we call them keys. The idea there is that when you have a service running in your Cloud Foundry installation, you might want to use it without necessarily having an app involved. And I guess I'll drop the Docker name here: if you have a Docker container that wants to connect to a service, how do you do it? That may not be an app. And there are a lot more examples of this, too. So what you can now do is actually call the service API in Cloud Foundry and say, create a key for me. And you can ask for as many keys as you want. And the keys are going to be your credentials to be able to connect to the service. So the use cases for this are things like, for instance, having keys for read-only access.
Let's say, for instance — this is one that Shannon and David and I discussed — you could have a database where you may want to expose read-only access. So the key there would be a read key versus a read-write key, for instance. And there are many more. It's going to enable easier accounting, certainly for us in Bluemix, because now you no longer need to have a fake app or something like that. That's certainly a very good thing. And you can maybe have multiple credentials. So this is all goodness in some ways. And the best part of it is, we didn't have to change much. We added a new endpoint, but in terms of the internals, we reused the binding. So if you remember how it worked before, how David showed it to you, where after the service is created you would have your app and you would bind them together — we're reusing that same thing, except there's no app. So as a broker, you don't have a lot of change. If you didn't expect to use that app ID, your broker should work. So that's one of the benefits of this. The last feature is almost a no-brainer in some ways, but always harder to implement than it appears. If you have a service that you want to create, instead of forcing the user to go to a dashboard or do some additional configuration, you may want to just pass all that information as you're creating the service. In other words, you want to be able to pass some JSON payload when you're creating the service, or when you're deleting it, for instance. And this is what arbitrary parameters do. They allow you to pass a JSON payload to your service creation. What we're going to do next is show you a demo of how this looks. You just pass a flag with your JSON — for instance, the configuration for your database — and then you can specify things like the size. So this avoids having to go to a dashboard.
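On the broker side, arbitrary parameters arrive as a "parameters" object inside the provision request body. Here is a hedged sketch in Go of how a broker like the AWS demo broker shown later might read a user-supplied AMI out of those parameters; the `ami` parameter name and the field layout of the request are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ProvisionRequest sketches the body of a provision request. With arbitrary
// parameters, the user's JSON payload shows up in the "parameters" field.
type ProvisionRequest struct {
	ServiceID  string                 `json:"service_id"`
	PlanID     string                 `json:"plan_id"`
	Parameters map[string]interface{} `json:"parameters,omitempty"`
}

// amiFrom pulls a user-supplied "ami" parameter out of a provision request
// body, falling back to a default image when none was passed.
func amiFrom(body []byte, defaultAMI string) string {
	var req ProvisionRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return defaultAMI
	}
	// Type-assert carefully: parameters are free-form JSON from the user.
	if ami, ok := req.Parameters["ami"].(string); ok && ami != "" {
		return ami
	}
	return defaultAMI
}

func main() {
	body := []byte(`{"service_id":"aws-vm","plan_id":"micro","parameters":{"ami":"ami-123456"}}`)
	fmt.Println(amiFrom(body, "ami-default"))
}
```

This is exactly the kind of per-instance configuration that previously forced the user out of band to a dashboard.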
The next step, and this is where I'm very excited to bring up my colleague Tom, is that we're going to show you a demo. So let me set this up as Tom gets it ready. It's not live; it's recorded, just so that the gods of demos don't come after us — that way we're guaranteed that it works. But here's the important thing: everything you see here is actually code that you will be able to go and start using today, because it's a simple broker that they've implemented. And to set the stage, the reason we're showing you AWS is because we were trying to think of a demo that would cover all three of the new features that we have in services. And Shannon suggested this one, where you use a broker to spin up a VM. And even Amazon takes more than 60 seconds for a lot of their VMs. So in some ways, we're going to show you the asynchronous operation first. What you're seeing here is Tom in the console. The next step is to start the broker and create a service instance. So you can move it to show that. So the broker is starting here. And then on the left-hand side, you can see we're logging in, and we can show the different services that are available. So the broker gets started. I mean, this is all kind of like setup. It's running inside our intranet right now; that's why you see the address starting with 9. So the first thing we show, for instance, is that this service broker exposes different plans on AWS, such as a micro VM, a large VM, or a small VM. The next step is to create one instance of this. So this is going to show next — you can move it faster. And what you saw very quickly on the left-hand side is that the request went through. And what you can see on the Amazon dashboard is the VM being started. Now, of course, again, as we discussed, the call to create the service will return immediately.
The next thing to do, since it's asynchronous, is to poll that particular service to see its state. So what you saw in the demo is that we did a watch on the cf service command, so that it's basically calling the service multiple times until the service is ready. What you're seeing here is exactly the output of the new cf command. So at this point, it's basically created, because on Amazon it's spun up, so the VM is ready to be used. So that's the first part of the demo, and you can see here that it's running. This shows asynchronous operations. The second part of the demo is to show service keys. So obviously, if you have a VM, what do you want to do with it? You want to log into it. What's the secure way to log in? SSH. So we can ask Amazon to create an SSH key for us. The broker supports creating keys, and that's what it's going to do. So you log in again. All of these videos are online on YouTube, so you can take a look at them. They're also going to be on the repo. And you can see here we're creating a key. It takes a little bit of time. What you see on the right-hand side is essentially the output of that create command, so eventually you're going to see it complete. Once the key is created, you can get the details of that key. And you can see this is the private key, the SSH key, that Amazon created for us. So we can take that, cut and paste it into a file, and then use that to log in to the VM. So this is going to be the next step. Thank you, Tom. So here's the SSH going on, using the key file that we just created, and it should log in to the VM. The third and final part of the demo is to show you service parameters. So again, we're here on the dashboard, and we can see the AMI that's being used.
So one obvious thing that you could do with service parameters for such a broker is, instead of using a default AMI or hard-coding the AMI as part of your broker, maybe you can allow the user, as they use your broker to create VMs, to pass the AMI that they want. So that's pretty much what we're showing you here. So thank you. This is the file that specifies the parameter JSON. So it's essentially a JSON payload, and then we're going to pass it to the create-service command, and that way it will use that particular AMI to spin up the VM. So here we pass the JSON, and at the end, you basically get a VM with the new AMI. So as I mentioned, this code, the entire demo, was written by Tom and Edward. It's a Go broker. It's going to be available for you under the Apache license, so you can go and start using it. So with that, let's go back to David, who's going to conclude, and we'll take some questions. All right. Well done, you guys. That was a really great demo. I'm actually really impressed, because a lot of that stuff — those are features we've literally added in the last week or two. So I don't know how you guys managed to pull that off. That was really great. All right. We're basically done here. Of course, we're always looking for feedback as a team. Our product manager is Shannon Coen. This is the email address that you can reach him at. This is Shannon. He's also doing one of the open houses tomorrow, I think at 2:50. So if you have any questions for him, feel free to stop by. We'll both be there, I think, answering questions about service enablement. So we're always looking for where we can improve, and how our APIs look. We'd love to hear more from you guys. I thought I'd also put up some pictures of the team, the people who've worked on these different features. Yeah, these are the people who've worked on the services API team for the last couple of months.
And on the top left is a picture of Raina Missin, who isn't here today, unfortunately. She's back at the office. But she's the incoming anchor, so she'll be taking the reins of the team from this point. And thank you very much. Do you guys have any questions? I think it's really great that you have this extension of a JSON structure you can bring in at service subscription, to remove the necessity of external configuration in a dashboard or whatever. Would you think the same would be useful at the runtime of the service? For example, you have this 100 megabyte MySQL, and you figure out you've actually run out of space, and you'd rather extend it to a gig. Wouldn't that be an interesting runtime configuration, or would you anticipate still having to go out of band? Sorry, say the last part again? Right now you would have to do that out of band, of course, if you were to do that. Could you imagine that at some point during the runtime of the service, while it's bound, you could pass information to it? So you mean to upgrade your service instance or something? So we actually have a feature to upgrade a service instance. We've had it for a while, since last summer or so. The broker has to support it, but through the platform, you can upgrade your service instance from one plan to another. So that MySQL service, for example, you can already upgrade from 100 megabytes to a gig, or whatever other plans are offered. So definitely, I think the more things we can put into the platform, the better. Like I said at the very beginning, we want to make sure users don't have to leave the context of Cloud Foundry to do the things that they want to do with their services, or with their apps, for that matter. Let's see, your question back there. Does the broker support custom status messages? Yeah, totally. So we didn't go over all of the details of the broker, because I didn't want to bombard everybody with too much information.
But part of the broker response during the polling section is that the broker can return a message — just any generic string. So for example, if your long-running operation has updates that the user would find valuable, you can definitely put them in the response body during polling. I think the field is called message or description; I'm going to screw up which one, but it's one of the two. And the Cloud Controller will display that to the user. Thanks. Got some questions over here. Are we OK on time? All right. Where's the question? Right here, the gentleman in the blue shirt. A question about the catalog function in the — sorry, can you speak up a little bit? A question on the catalog function in the service broker API. So right now, to change anything in a service broker, you — for example, you can't rename a service. There's a problem with removing existing plans, because if you replace a service broker implementation that no longer has an existing plan and try to update that service broker, it will fail. So overall, once you deploy a broker with a specific catalog, it's very hard to change. The life cycle of a service broker is very hard to maintain because of this limitation. Is there any plan to improve this area? As far as catalog management goes, I don't know that we have anything in the pipeline at the moment. I don't know if you can say anything else about that. And then I think we're out of time. I know that there are one or two other hands. Feel free to — I'll be hanging out up here, so feel free to come find me if you have any other questions. I'm happy to answer them. Otherwise, thank you very much. Thank you. All right.