Hi, everyone. I think we're going to get started. We're going to be on time, that's our one goal for today. So welcome to One Marketplace to Rule Them All. I hope we don't make too many Lord of the Rings jokes for you, but I hope you're into Lord of the Rings jokes, because there are a couple. My name's Laurel Gray. I'm the product manager for the Cloud Foundry Services API team, and we're focused on making the developer experience of provisioning and managing services in Cloud Foundry awesome. And my name's Matthew Mignini. I also work for Pivotal with Laurel, based out of London. I do a lot of work on marketplaces, and I'm one of the co-chairs for the Open Service Broker API project. Amazing. So we're going to run through a couple of things today. One is a very brief but hopefully interesting and illuminating history of Cloud Foundry and services in Cloud Foundry. We're then going to talk about the many, many marketplaces problem, and then we're going to do a demo. We're doing it live. And we'll leave some room for questions at the end. Matt and I actually timed this, and we're pretty sure there will be plenty of room for questions at the end. Cool. All right. First up, a brief history of Cloud Foundry and services. I wasn't in the Cloud Foundry world in 2011, so hopefully not too many of you can correct me on this. But back in 2011, a shiny new platform appeared in the world of technology called Cloud Foundry. And it was a good place to run 12-factor apps, like web applications. And that was great. And customers could run lots of those web applications on their Cloud Foundries. Then in 2012: Cloud Foundry being the best place to run 12-factor apps, those apps, as you probably all know, tend to need backing services. That's one of the twelve factors. And those backing services aren't just persistence things like databases, your MySQLs; they're also things like autoscaling, monitoring services, and configuration services.
And we wanted a really easy way for the service providers building things like Mongo and New Relic to make their services available to all those Cloud Foundry apps. And so some clever people, not me, put their heads together and thought, OK, let's make an API. And the Cloud Foundry Services API was born. It's a very simple specification, which basically has five lifecycle API calls: provisioning, updating, and deprovisioning an instance, and then getting access to and revoking access to these backing services. So that was all good, and the world was happy, and all of these applications could consume the backing services they needed. Then fast forward two years to 2014, and the world's still good, and this blue wheel thing starts appearing on everyone's radar. Over the next few years, that blue wheel gets quite big, and lots of people think it's quite cool. And applications running on it wanted to play the same game. By now there's a really pretty diverse set of services you can make available to your Cloud Foundry applications, and Kubernetes workloads wanted to get involved. And so what was the Cloud Foundry Services API transformed and got renamed to the Open Service Broker API. And now both of these platforms, and all the applications or containers running on them, can consume this big marketplace of services made available by all the service providers out there. So that's 2016. Over the next year or two, all these companies got involved. These companies are not only building cloud-based platforms that provide the developer experience for consuming these backing services; many of them are also building services to be plugged into the various marketplaces in the platforms you're running today. So this was good, and things were still good. And then you started scaling up how many Kubernetes clusters you had and how many Cloud Foundries you were running.
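For reference, those five lifecycle calls map onto HTTP endpoints in today's Open Service Broker API. A sketch of what they look like on the wire, with a placeholder broker URL, credentials, and IDs (every concrete value below is illustrative):

```shell
BROKER=https://broker.example.com   # placeholder broker URL
AUTH='admin:s3cr3t'                 # brokers use basic auth
VER='X-Broker-API-Version: 2.14'

# 1. Provision a service instance
curl -u "$AUTH" -H "$VER" -X PUT "$BROKER/v2/service_instances/inst-1" \
  -d '{"service_id": "svc-id", "plan_id": "plan-id"}'

# 2. Update the instance (e.g. change plan)
curl -u "$AUTH" -H "$VER" -X PATCH "$BROKER/v2/service_instances/inst-1" \
  -d '{"service_id": "svc-id", "plan_id": "bigger-plan-id"}'

# 3. Get access: create a binding
curl -u "$AUTH" -H "$VER" -X PUT \
  "$BROKER/v2/service_instances/inst-1/service_bindings/bind-1" \
  -d '{"service_id": "svc-id", "plan_id": "plan-id"}'

# 4. Revoke access: delete the binding
curl -u "$AUTH" -H "$VER" -X DELETE \
  "$BROKER/v2/service_instances/inst-1/service_bindings/bind-1?service_id=svc-id&plan_id=plan-id"

# 5. Deprovision the instance
curl -u "$AUTH" -H "$VER" -X DELETE \
  "$BROKER/v2/service_instances/inst-1?service_id=svc-id&plan_id=plan-id"
```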
So that picture there, where you have one cluster and one CF, we all know is not really the case. You probably have at least a few of those. And we see many customers now, as they're getting more and more successful with these platforms, end up with many more of them. And that leads to around the end of 2018, when we at Pivotal, SAP, and many other companies realized that this was really a problem we had to start solving. And so, like Frodo leaping into Gandalf's arms, that's what we're aiming to do now in 2019. Handing over to Laurel. Great. So I'm going to dig a little into the many, many marketplaces problem. I'm sure if you are using services on Cloud Foundry or in Kubernetes, you're intimately familiar with this, so bear with me. If you're going to create a service instance in Cloud Foundry or in Kubernetes, the user experience is actually pretty simple. In Cloud Foundry, you use cf create-service. You give it the service name, the plan name, and then you give your service instance a name, and you tell the platform to do this. It talks to the broker, and then your service instance is spun up. There's a pretty similar experience in Kubernetes if you have the Service Catalog project installed, where you just kubectl apply and give it the manifest file. So magic, lots of magic. What also happens, though, is your platforms become the source of truth for the state of that service. And this is fine. You might have many service instances attached to each of your platforms. This isn't that big of a deal. But like Matt said, the world isn't so simple. If you're using Cloud Foundry, you probably have more than one Cloud Foundry. If you're using Kubernetes, you almost definitely have more than one cluster.
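To make the comparison concrete, here's roughly what those two flows look like side by side. The service name, plan name, and manifest are invented for illustration:

```shell
# Cloud Foundry: imperative, one command
# (service name, plan name, then your chosen instance name)
cf create-service mysql db-small my-db

# Kubernetes with Service Catalog: declarative, apply a manifest like this
# (my-db-instance.yml):
#
#   apiVersion: servicecatalog.k8s.io/v1beta1
#   kind: ServiceInstance
#   metadata:
#     name: my-db
#   spec:
#     clusterServiceClassExternalName: mysql
#     clusterServicePlanExternalName: db-small
#
kubectl apply -f my-db-instance.yml
```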
And so this problem can become a little bit more difficult, where you have the state of your service instances spread out across many different platforms, to the point where a really simple question like, how many MySQL instances have been provisioned across my platforms, becomes really difficult to answer. It's not so simple or straightforward. So what we see a lot of people do is one of two things. Either you write a script that aggregates information across each of the platform APIs, or you use some kind of bespoke pipeline that you create using Jenkins or Concourse, whatever flavor you prefer, maybe something else. And this kind of works. The problem is everyone is having to create their own bespoke scripts, their own bespoke pipelines, and then you're having to manage this across many different platforms. And we imagine, as the world becomes more and more interesting, things move faster, and you all become more successful, you're going to have more and more platforms that your scripts or your pipelines have to work with. And there's another side to the problem, too, which is the marketplace itself. So if you're using services in Cloud Foundry, which I hope everyone is, you have a marketplace already embedded into Cloud Foundry, called the CF Marketplace. And if you have the Service Catalog project installed in Kubernetes, you use kubectl, and you can use the ClusterServiceClass custom resource, which is a tongue twister if you want to impress your friends later, to access the marketplace of services you have available on the platform.
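A rough version of that bespoke aggregation script might look like the following. Everything here is illustrative: the API endpoints and context names are placeholders, it assumes you're already authenticated against each foundation and cluster, and the grep is only a crude filter rather than a proper query:

```shell
total=0

# Count MySQL instances in each Cloud Foundry foundation
for api in https://api.cf-one.example.com https://api.cf-two.example.com; do
  cf api "$api" >/dev/null
  total=$((total + $(cf curl '/v2/service_instances' | grep -c mysql)))
done

# Count MySQL ServiceInstances in each Kubernetes cluster (Service Catalog)
for ctx in k8s-one k8s-two; do
  total=$((total + $(kubectl --context "$ctx" get serviceinstances \
    --all-namespaces --no-headers | grep -c mysql)))
done

echo "MySQL instances across platforms: $total"
```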
And just like the state of the service instances, you have many different marketplaces that you're having to curate across each platform, which again becomes a bit untenable, to the point where if you have tenants who are asking you for the brand-new shiny service, Spanner, and they're really excited about it, making it available in each marketplace isn't very simple. You don't just do one thing. You actually have to, again, write a script or build a pipeline. And again, as you have more and more platforms, you have more and more marketplaces, and you're having to manage more and more different places for things to be available for your tenants. So what we'd like to talk about today is pulling the marketplace, and all that it entails, out of the platform and into one thing called the Independent Services Marketplace, or ISM. Cool. So yeah, ISM is basically our attempt to eventually provide a very Cloud Foundry-native experience for having many Cloud Foundries, essentially backed by the same database. So you can imagine a world where, in one platform, you run cf marketplace, and that looks the same as in your other Cloud Foundry. And then, depending on my permissions, I could maybe do cf create-service postgres in one of my Cloud Foundries and bind it to my app. That's great. But we all run probably pretty big distributed apps, probably across different foundations. So what we really want is that I then target a different Cloud Foundry, I look at the cf services output there, and I can see the instance I provisioned somewhere else. And I can just run cf bind-service, which is amazingly simple, and bind to that same backing service. So that's one of the aims of ISM. ISM is open source. Right now, it's hosted under the Pivotal CF org. And it's built on top of Kubernetes, so we make use of things like CRDs for extending the platform. We'll show you in a minute the ISM CLI that we've started to build.
But you can also drive this, if you're a big Kubernetes fan, through just manifests and that declarative model that many of you probably like. So there are three goals of ISM. The first is to solve that many marketplaces problem Laurel has talked through: enabling operators to intuitively curate a marketplace that spans multiple platforms. The second goal is enabling workloads on different platforms to share service instances. About a year or so ago now, the Cloud Foundry Services API team that Laurel manages introduced the ability to share service instances across orgs and spaces. That was our attempt at recognizing that not all apps that need to share a backing service live in the same Cloud Foundry space. This is going one step further. We now understand that those apps don't always live in one platform or one foundation, and they may even span different platforms, like CF and Kubernetes. So enabling different workloads, no matter where they're running, to share that same backing service, be it a persistence database or a configuration server, is pretty important. And then finally, we want to allow developers to provision services for off-platform applications in one place. So for those familiar with cf create-service-key, that's kind of great if I have a Redis. I can create a service key that essentially gives me the access credentials to it. I can then either use that as a human, if I want to log into my Redis, or maybe, if I have an application not running on CF or Kubernetes, I need a way to just copy and paste those credentials into it. The problem with that is, just like the state of instances and the state of the marketplace get distributed across all of those platforms, all the service keys that have been created are also distributed. So again, you know, the security team comes over and they're like, OK, how many people have access to this MySQL? It's a pretty hard question to answer, and security teams are pretty unhappy about the answer they get today.
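The service-key flow just described looks like this in today's CF CLI; the service, plan, and key names below are examples:

```shell
# Provision a Redis for something that runs neither on CF nor on Kubernetes
cf create-service p.redis cache-small my-redis

# Mint a set of credentials for that off-platform consumer
cf create-service-key my-redis raspberry-pi-key

# Print the credentials, to be copied into the off-platform app's config
cf service-key my-redis raspberry-pi-key
```

Each key minted this way lives only in the foundation where it was created, which is exactly the audit problem described above.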
So now for the best bit of this talk, hopefully. I will switch out of slides and get into this demo. There has been a little bit of setup here. We basically just deployed a couple of apps, and we have a Cloud Foundry running in AWS and a Kubernetes cluster running up on GKE. But we'll talk through the bits that are relevant as we go. I'm just going to quickly run the help command, just so you can see what the ISM CLI looks like right now. So when I run ism, I get a number of available commands that I can interact with. There are bindings, brokers, instances, runtimes, or platforms, as we've called them most of the way through this talk, and then services. Is that readable for everyone at the back? Yeah, all right. Cool. So first things first, I'm going to list the brokers available in ISM. This is similar to the cf service-access command that you can run today in Cloud Foundry. And I can see I have my Azure broker running. We're going to use this later on in the talk. It's running through a broker proxy on Cloud Foundry, but it gives us access to the Microsoft Azure cloud broker, which offers a number of Azure services, including Postgres, which we'll need later on. But for now, we can ignore that one. So just like those of you who are familiar with cf create-service-broker, I can use ISM to register a broker, giving it a name of my choosing, the URL where that broker is running, and a username and password. ISM then asks the broker for its catalog, and that catalog contains all of the services and plans offered by that broker. In this case, assuming all this works, our broker should give us a catalog containing a Redis service with a couple of plans that we can use. So if I now list the brokers that have been registered, we should see the demo broker there at the bottom. Let me clear the screen so you can see.
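The ism CLI is alpha, so the exact command and flag spellings may differ from what eventually ships; this is a sketch of the registration flow just described, with an invented broker name, URL, and credentials:

```shell
# List brokers already known to ISM (the Azure broker shows up here)
ism broker list

# Register a new broker by name, URL, and credentials; in response, ISM
# fetches the broker's catalog of services and plans
ism broker register --name demo-broker \
  --url https://demo-broker.example.com \
  --username admin --password s3cr3t

# The demo broker should now appear at the bottom of the list
ism broker list
```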
If I look at the services, this is similar to the cf marketplace command, although we're using a slightly different syntax here. We should now see my new shiny Redis service at the bottom there. So I have Redis, two plans, simple and complex, and it's from the demo broker that I just registered. Nothing mind-boggling here. This is exactly the same as the CF experience today, which many of you have used. So just like in CF, I can now go and create an instance, giving it a name, a service, and a plan. This will go off to the broker and ask it to provision that new instance. I'm going to wait a couple of seconds to make sure it finishes. And if I list those instances, again just like cf services, I should now see my instance, the Redis service I provisioned. And finally, just like cf create-service-key, and this is really focused on the off-platform use case, I can go and create a binding for that service instance. Assuming that's been created, if I go and get that binding using ism binding get, you can now see I have some credentials. So I can use my username, admin, and my unique password to go and access that Redis service. And let's say I had my app running on my Raspberry Pi wanting to consume that; I can go and copy and paste those credentials in. And all is good. So that's the more boring bit of the demo. Many of you have probably seen that before, just with ISM swapped in for CF. So this is the more interesting bit. ISM actually knows about some runtimes that are connected up to it. In this case, ISM knows about a Cloud Foundry that we've called CF1. That's the one running in AWS. And we have a Kubernetes cluster called K8s1, and that's running in GKE. The reason ISM knows about those is because it wants to start orchestrating what is happening with those workloads and those platforms and which services they can access.
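Put together, the first half of the demo is roughly the following sequence. Again, the ism CLI is alpha, so these spellings are a sketch, and the instance and binding names are invented:

```shell
# Browse the marketplace (like cf marketplace)
ism service list

# Provision an instance of the Redis service on the simple plan
# (like cf create-service)
ism instance create --name my-redis --service redis --plan simple
ism instance list

# Mint credentials for an off-platform consumer
# (like cf create-service-key / cf service-key)
ism binding create --name my-redis-key --instance my-redis
ism binding get --name my-redis-key
```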
So again, if I go and list my instances in ISM, we can see I've already created this PostgreSQL instance here. It's a PostgreSQL 9.6 from Amazon, sorry, Microsoft, on the basic plan. And we created it a few days ago, but it's all ready to go for me to use now. And that's a real PostgreSQL running in Azure. In the Cloud Foundry that I've targeted, in my demo org and development space, I have the Spring Music app running. This is a basic app which gives me a music album curation tool. So what I want to do now is use ISM to inject a binding into that app. As Laurel mentioned before, when we have the state of instances and bindings embedded within each platform, it's quite hard for those multiple platforms to then start sharing those instances and bindings. Because, if you're familiar with the broker spec, each platform is the source of truth. And so as soon as you start trying to share those resources, things like ownership, and who can upgrade things, become a challenge. So here I'm going to run ism binding create. I'm going to call it my CF binding. It's for the PostgreSQL instance I've already created. And then I'm going to use this runtime flag. What this is saying is: for the CF1 runtime that's registered, the demo org, the development space, and then the Spring Music app, I want to go and inject a binding into that. What this is actually doing in reality is creating a user-provided service in that space and binding Spring Music to it. And the reason that's kind of cool is because the concept of user-provided services and bindings has existed in Cloud Foundry for probably three years now. So if you're running a CF and you haven't updated it in a while, you could use ISM out of the box today and it would work with that very old version of Cloud Foundry.
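Under the hood, that injection reduces to standard Cloud Foundry primitives. A hand-rolled equivalent of what ISM does in the space would look something like this; the binding name is from the demo, but the connection URI is a placeholder, not the real demo credentials:

```shell
# Create a user-provided service carrying the Postgres credentials that ISM
# obtained from the broker (placeholder URI shown)
cf create-user-provided-service my-cf-binding \
  -p '{"uri": "postgres://admin:s3cr3t@example.invalid:5432/mydb"}'

# Bind it to the app and restage so the app picks up the new credentials
cf bind-service spring-music my-cf-binding
cf restage spring-music
```

Because user-provided services are such an old, stable feature, this approach works even against foundations that haven't been upgraded in years.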
So I go and create that binding, wait a few seconds, and run cf services. What we will hopefully now see is a managed service injected into that space, my development space in my demo org, and it's automatically been bound to my Spring Music app. What that means is that Spring Music, which I'm going to restart in the background, now has an environment variable called VCAP_SERVICES, and that environment variable contains all the information it needs to access that backing service. In this case, that's the URI of the Postgres instance that's running and the unique credentials to go and access it. So once this has restarted, we're going to do the same thing again, but this time for the other runtime we registered, which was the K8s1 cluster. We want to inject the binding into that cluster. For those familiar with Kubernetes, you'll know there's already a concept of secrets, which pods and deployments can make use of. So the way ISM is going to interact with Kubernetes is by creating secrets in a namespace. Once again, I can create a binding for my Postgres instance, and this time, on the runtime flag, I give the name of my platform, or my runtime, K8s1. And all I need this time is the name of the namespace. In this case, it's demo. So again, just like with cf bind-service, that goes and asks the broker for a set of new unique credentials to access the service, and creates that secret. And if I now go and look using kubectl, I can see the secret with the same name as my binding. We've got my K8s binding secret down there. And if I open up that secret as YAML, we can see here I have my base64-encoded credentials, which include all the information I outlined earlier, so that K8s deployment can actually access the Azure Postgres. I'm going to quickly run this rollout status command just to make sure that my deployment has actually picked up that secret and restarted, which it has.
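On the Kubernetes side, the values in a secret's .data field are base64-encoded, which is why the YAML looks scrambled at first glance. A quick round trip with a sample URI (illustrative, not the real demo credentials) shows the encoding involved:

```shell
# A sample connection URI like the one the binding would carry
uri='postgres://admin:s3cr3t@example.invalid:5432/mydb'

# Kubernetes stores secret values base64-encoded, as in the .data field of
# the YAML shown in the demo (e.g. via: kubectl get secret <name> -o yaml)
encoded=$(printf '%s' "$uri" | base64)
echo "$encoded"

# Decoding recovers the plain credentials the deployment actually consumes
printf '%s' "$encoded" | base64 -d
echo
```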
And finally, I'm going to port-forward the Spring Music deployment onto my local port 8081 so that we can all see it. All right. So switching back to the browser, I have two browser windows here, or two tabs. I have my Cloud Foundry one, and if I refresh this, it should be up and running. And if I click on Info, I'll make this a bit bigger, actually, you can see that the ISM-managed service is shown here. And we can see that the profile it's automatically detected is this Postgres cloud profile. So that's cool. Now, this is my localhost 8081, so this is just looking at my Kubernetes deployment. And again, it's the same application, running on Kubernetes. And again, it's automatically picked up that there's a service it can make use of, which is a Postgres service. So in my K8s cluster, I'm going to add a new album. And I'm going to call it Journey to Mordor, by Gandalf. And it was released in 1300. Cool. Hit OK. And by magic, we have Journey to Mordor. Cool. That's not very interesting; we've just done that in a Kubernetes cluster. But this is running on GKE, and now I have my distributed app. And this is my app running on CF. So if I go and refresh that and scroll down, we can now see that in my Cloud Foundry app, on a completely different platform, Journey to Mordor is there. But then there was a big internal fight, and Frodo said he was the artist, so I'll change the artist to Frodo. So now, in my Cloud Foundry environment, we know it's Frodo. And automatically, because they're using the same persistence layer, down in my K8s cluster I can see that Frodo's change has come through. And they can delete it, because they've settled their differences. And I refresh over here, and it's gone. That is it. And awesome, jumping back into the deck. This is the most important part of the presentation for us.
If you are interested in providing feedback, either unfiltered, telling us exactly what you think, or by being a part of our beta user testing group, where we're doing a lot of prototype testing just to get feedback on what the user experience is like, just send us an email. It's ism@pivotal.io right now, because it's a Pivotal open source project. And if you'd like a copy of the deck, you can get it at the bit.ly link. So I don't know if there's a mic or anything, but there's 10, 11 minutes for questions if anyone has any. Do you want to repeat it? Yeah, so the question was, when do we see this basically being released for people to use, right? Yeah, yeah, for PCF. So soon is our hope. We're calling this alpha, even though we don't really have a good roadmap of when it will go beta and GA. For the alpha right now, this is all real code, except the binding injection bit. That's been hacked together for the purpose of this demo. Getting what you've seen today into a place where it can be run in production, we're hoping, is a couple of months away. There's a lot of work that has to be done to make that happen, mainly because, as we talked about, ISM widens the gap between workloads and where their backing services are. And obviously, if they're not running in the same BOSH director or in the same kind of AWS environment, networking becomes a challenge. So there's lots of work going on to try and automate that networking in the Open Service Broker group. There's also some work being done around having some service brokers make use of credential stores to store those binding credentials. And again, widening the gap between these two things makes it harder; we have to somehow negotiate a credential store for both a platform like Cloud Foundry and a service to use. But assuming we keep making progress working through those problems, hopefully by around summer this year we're looking at having this available to be used. As I said, it's currently open source.
The nuance to this is that, as I mentioned earlier, we want this experience eventually where I can do cf create-service here in one platform and cf bind-service, or the K8s equivalent, in another platform, without having to get development teams to download the ISM CLI, target ISM as well as targeting their CFs and K8s clusters, and orchestrate it from there. Cloud Foundry is a big, complex system. Moving to that world is probably 12-plus months to get right, so that will be further out. But assuming you're happy to give developers access to ISM and for them to orchestrate bindings from there, we're looking at this summer. Yeah, so the question was around disaster recovery and what happens if a foundation goes down. Is that the foundation that's running ISM? Yeah. So if you were running ISM in a foundation and it went down, you would lose the developer experience for provisioning and generating new bindings. But ISM is out of the workflow when an app is actually communicating with a service instance, right? The app has the information to directly connect to that instance. So you would lose the control plane if you were running a single ISM and it wasn't HA and it went down, but the applications that have been bound to services through ISM would still be running, because ISM, or the service broker, isn't in the path in that world. Yeah, so the question is about customers today who, when they create a service, don't create it in just one Cloud Foundry; they create it in multiple. Yeah, that's fair. Actually, we've seen a lot of customers doing that, maybe partly for disaster recovery, and also because of the limitations that exist today. If I want to share a GemFire across two Cloud Foundries, I have to do two create-services, and then the broker has to magically handle, OK, we want those to be in the same superset of GemFire, and handle the communication.
Yeah, I think you've raised an interesting question around how ISM would help in that world and how we make sure we don't become a single point of failure. So yes, can I come grab you after this talk? Sounds good. Any other questions? All right, thank you very much, everybody.