Welcome back from lunch. I hope you rested a little bit. Sorry for the delay; we had some technical issues with the size of the slides, but now we're started. If you're looking for the Cloud Foundry Day, you're right here at the right spot. My name is Christian Brinker, and I'm presenting a service broker combined with Heat orchestration templates. So, who hasn't heard about Cloud Foundry until now? One person. And who has a month or more of experience with it? OK, then you know what I'm talking about. Let's look at a software developer who wants to deploy an application, like our cool friend Bob here. He has this cool app he wants to deploy, and he has the question: where should I put my application so it's accessible to my customers? He thinks about it and decides to use Cloud Foundry. What's Cloud Foundry? Cloud Foundry is a container-based application platform that organizes application lifecycles and speeds up your development process by standardizing your runtime environment, so you can easily get your application started: rapid prototyping, staging, and so on. And if you are able to produce cloud-native applications, you can easily scale them when you have more demand. So what does Cloud Foundry provide us? It scales your application easily: if there is more demand for your application, you deploy more containers, more application instances. You can easily achieve staging through nearly or completely mirrored staging and production environments. You also get things like routing: you get your domain registered, and your application becomes easily accessible through it. And you don't have to bother about where to find your access points; you can simply ask your environment for them. But where is the problem? The problem is here. You have this cool cloud-native application, completely stateless, easy to scale. But then you have this one friend.
It's called your database. The problem with this database is that it's not cloud-native, so you have problems providing it as an application on Cloud Foundry, and that's because it's holding your data. Cloud Foundry has a mechanism called the service broker, which is able to bridge this gap. This database needs an environment that is nearer to the infrastructure layer: its persistent storage has to be taken care of, it has to be orchestrated a little bit, there are batch jobs running, and such things. So in Cloud Foundry there is an API introduced into the Cloud Controller, which organizes all the stuff here on the left side and is also able to introduce things like the database. But you can do much more than that. Nearly everything you need for your application can be provided by a service broker: log aggregation, introducing firewall rules, message queues, load balancer management, and much, much more. And how does it work? Look back at our idea. We have an application and we have a service, and the service lives in some service source, whatever that is. What Cloud Foundry introduces is this mechanism: you ask the Cloud Foundry platform for access to some service, the platform asks your service broker, and the broker gets it for your application — for instance, your database. But what different kinds of services are there? If you look at this group, you have managed services, which are organized through the platform, through the marketplace, or you have apps you want to provide to other apps. You can bind them to an application, like a database, or you have something more abstract that you only want to manage but not bind to an application. You have services that involve interaction with your routing, with your transport layer.
You have things like draining of the syslogs and organizing of volumes. And all of that is organized through some kind of interaction. You register a service broker by introducing its URL to your Cloud Controller. The Cloud Controller fetches the catalog from the service broker, so it knows which services that broker provides. If you go to the marketplace of your Cloud Foundry platform, you can see what's provided by the service brokers there. And you can create a service, which means the Cloud Controller asks the service broker to provide an instance — for example, the service broker decides to install a new MySQL server. Then, if you want to bind your application against the service instance, your Cloud Controller asks for a binding, which means it gets a new user on the database, gets the password, and provides it to your application as an environment variable. So as a developer, you don't have to care about how to connect your application to your database; it's done by the platform. You don't have to actively ask, "oh, what was the username for my application on that database?" Don't care. You say: connect to this database. That's it. The rest is done by the platform. You only name the database instance you want to connect to. And you can also delete these credentials and remove the instance. So the whole lifecycle of your services is organized through this API. It's a catalog of REST API calls, which is now called the Open Service Broker API, because the Cloud Foundry Foundation and the Kubernetes community got together and said: why the hell should we organize this for each platform on its own? Services are something we have to provide for all — applications running in Kubernetes, applications running on Cloud Foundry.
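The lifecycle just described maps onto a small set of REST calls. As a rough sketch of the Open Service Broker API — the IDs and credentials here are placeholders:

```
# Cloud Controller fetches the catalog from the broker
GET  /v2/catalog
  →  { "services": [ { "id": "…", "name": "mysql", "bindable": true,
                       "plans": [ { "id": "…", "name": "small" } ] } ] }

# "create a service" = provision an instance
PUT  /v2/service_instances/:instance_id
     { "service_id": "…", "plan_id": "…",
       "organization_guid": "…", "space_guid": "…" }

# "bind the app" = create credentials, handed to the app as env vars
PUT  /v2/service_instances/:instance_id/service_bindings/:binding_id
     { "service_id": "…", "plan_id": "…" }
  →  { "credentials": { "uri": "mysql://user:pass@host:3306/db" } }

# teardown, in reverse order
DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id
DELETE /v2/service_instances/:instance_id
```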
So in a multi-cloud system, we should have some central, common knowledge about how to get a service, whatever it is — even if it's an application running inside Cloud Foundry that is provided back to other applications. What does that mean? Going back to our idea of the database: we may have an existing cluster, a DBMS, installed on our IaaS system. Now we want to provide it back to our applications. So we provide a service broker organizing the access to the DBMS, maybe organizing the creation of databases inside the existing DBMS. Or maybe we want to deploy a virtual machine with a fresh installation of a DBMS as a test instance for our developers, so they don't crash the existing DBMS we use for production. Or maybe we have this heavy-load application which uses a really, really big database that has to be highly available, so we provide a dedicated cluster for this application, which has to be there when it's needed. Or we want to speed up the deployment of our production environment; this has to be automated, so we need to get a new cluster fast, for example. Now, REST API, service broker, management of clusters — that seems like harsh stuff if all you want is something like a MySQL or PostgreSQL database. In the Cloud Foundry community there are several projects organizing the development of open-source service broker implementations, and there are two main lines: there is a Go framework, and here we have a Java framework. The idea behind these two projects is that not everyone who wants a service broker for organizing his stuff needs to reimplement all of that. Here I'm talking a little bit about our Java framework. We started two years ago, sitting around and thinking about exactly that: at that time, everyone implemented their own service broker from scratch, reimplementing the whole API, the whole lifecycle management, every time.
There was a MySQL service broker written in Java, and then someone said, "oh, let's do a MongoDB service broker" — so he reimplemented the whole stuff. And at that time we said: no, there has to be a framework for that. What we've implemented runs on the Java runtime environment 1.8 as a microservice: a standalone server, a cloud-native application using Spring Boot. I hope you can see it in the colors on the projector: if you look at our framework, everything in blue you don't have to reimplement. The only thing you have to care about is: what does it mean for MySQL if I have a new instance? What does it mean for MySQL if I create a new binding? For example, if we are looking at an existing service, a new instance means I create a database in the DBMS — four, five lines of JDBC or SQL code you have to run to create the database, plus some configuration like the character set and collation you want, and so on. And a service binding means creating a user with the correct role, roughly spoken. The whole rest you can reuse from the framework. So if you want to get a MongoDB service broker, you only exchange these parts here; the rest stays the same. What we're talking about here is: now we want to run our service broker on a Cloud Foundry platform on top of OpenStack. So why not reuse our service broker to orchestrate our OpenStack platform, providing us with the VMs and the installations on top of them? For example, one use case: one of our customers — we're a cloud consulting company — has this stack of OpenStack plus Cloud Foundry installations. They are called Meshcloud, a German-based, European service provider for public cloud. And they have this idea that several smaller cloud providers and companies with private clouds want to connect, all running OpenStack and Cloud Foundry, with a layer on top.
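To give an impression, here is a minimal sketch of what those MySQL-specific parts might look like — the class and method names are illustrative, not the framework's actual API:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical MySQL-specific part of a service broker: the framework
// handles the OSB API lifecycle; you only supply what "instance" and
// "binding" mean for MySQL.
public class MySQLExampleService {

    // Provisioning an instance: create a database, including the
    // character set and collation configuration mentioned in the talk.
    static String createInstanceSql(String dbName) {
        return "CREATE DATABASE `" + dbName
             + "` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci";
    }

    // Creating a binding: create a user and grant it the right role
    // on that database; the credentials go back to the platform.
    static String[] createBindingSql(String dbName, String user, String password) {
        return new String[] {
            "CREATE USER '" + user + "'@'%' IDENTIFIED BY '" + password + "'",
            "GRANT ALL PRIVILEGES ON `" + dbName + "`.* TO '" + user + "'@'%'"
        };
    }

    // The framework would invoke something like this with a JDBC
    // connection authenticated as the DBMS admin account.
    static void provision(Connection admin, String dbName) throws SQLException {
        try (Statement st = admin.createStatement()) {
            st.executeUpdate(createInstanceSql(dbName));
        }
    }
}
```

Exchanging these few methods for MongoDB equivalents is the whole point: the catalog, lifecycle, and REST handling stay in the framework.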
So you want something like a seamless service catalog spanning from one platform to another. But underneath, the deployment is different, because the platforms underneath differ too. So you have the problem that you have to look here and there. What we introduced is a configurable service catalog. You deploy your service broker as an application inside the Cloud Foundry installation at some provider, with a description of which services it should introduce to the platform. Then you can add some metadata for the configuration of how to deploy, and maybe introduce an existing service, an existing cluster, for management. But when it comes to OpenStack, you also want to orchestrate things. You want a cluster of its own. You want to say: now I want this big cluster, 10 nodes, MySQL, some kind of configuration — I have this idea in mind of what it should be. So you come to the point where you say: I need a blueprint for it. Here, maybe it's a MySQL cluster — if we recall the right slide, the dedicated cluster. The existing cluster is easy: you can connect your app with a service broker, and you can find nearly every kind of service broker open source. But the dedicated cluster doesn't go that easily. So what do we do? We use Heat. For the OpenStack people here, Heat is probably nothing new. Heat is an orchestration suite inside OpenStack which uses templating for organizing resources inside OpenStack. So you can easily have a blueprint saying: I want this VM, I want it with this network, I want a floating IP, and so on — and then I want to run this install script on it. And with that, you can easily define the kinds of service blueprints you want to provide to your customers. It would look like this: at the top you have a description of parameters.
You hand them in from the service broker to the Heat template. Then you introduce your resource block, where you define servers and maybe installation scripts. And alongside, you define things like persistent volumes for your databases. Then you deploy it to your OpenStack installation. If we go back, it's this part of your deployment process. But that's a kind of standard domain problem. Maybe something more fancy, if we go back here: let's say I have a custom domain for my application, which is myapplication.org, and I want to provide it on some public cloud, and I want HTTPS with TLS termination on the load balancer side. If I go to a public cloud, I have the problem that I have to give my public cloud provider access to my certificate. Because if you want to terminate — one slide back, I'm going too fast — you have your user going through Cloud Foundry to your application, which is hosted in the private network. If you look closer, the network traffic looks like this: you go to the load balancer in front of your Cloud Foundry installation, then you have the Go router doing the routing, using the domain management I talked about earlier. And there are two possibilities. You can have the TLS termination at your load balancer, where the certificate is installed, and afterwards a new HTTPS connection is made from the load balancer to the application. Why is that needed? Because here we are scaling. You don't have this one virtual machine where your application is installed and you know the IP address. Because you scale, you may have hundreds or thousands of application instances on different nodes with different IP addresses, but they are all reachable under the same domain. So you catch the connection here, terminate the TLS, and make a new HTTPS connection right to the application instance. One variation of this is termination at the Go router, which is widely more common.
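The Heat template structure described earlier — parameters at the top, a resource block with servers and install scripts, and persistent volumes alongside — might look roughly like this in a HOT template; flavor, image, and the install script are illustrative:

```yaml
heat_template_version: 2016-04-08

parameters:              # handed in from the service broker
  flavor:
    type: string
    default: m1.small
  image:
    type: string
  key_name:
    type: string

resources:
  db_server:             # the VM running the database
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: { get_param: image }
      key_name: { get_param: key_name }
      user_data: |       # installation script
        #!/bin/bash
        apt-get update && apt-get install -y mysql-server

  db_volume:             # persistent storage for the data
    type: OS::Cinder::Volume
    properties:
      size: 10

  db_volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: db_volume }
      instance_uuid: { get_resource: db_server }

outputs:
  server_ip:
    value: { get_attr: [db_server, first_address] }
```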
Because the traffic stays encrypted with the original certificate over a longer distance. But we said we want to use our own certificate here, and we don't want to hand it over to the cloud provider for a year to be installed at the router. If you're more into how certificates are deployed there: it's not that simple, because you can only install one certificate at the router, and you then have to use a combination so that it also works there. But roughly spoken, the access is at the cloud provider: it has access to your certificate and can manage it. And you don't want that — why the hell should the cloud provider have access to your SSL certificate? So what we were working on was a service broker giving you control over this termination process. There is a blueprint description on the internet about Barbican with LBaaS for OpenStack. Barbican is a secret store in OpenStack, which can manage secrets, credentials, and certificates and make them accessible to applications, but does not hand over their ownership: an application comes and asks, with some secret, whether it can use it; the Barbican store allows it and provides it back. And this is combinable with LBaaS, the load balancer as a service. What we've done is produce this combination with a Heat template. The user puts his certificate in a Barbican secret container, and from that container the LBaaS is deployed, using this certificate. The cloud provider has no direct access to the certificate: it cannot manage it, cannot copy it, cannot use it for something else — it is only usable through the LBaaS. So it's ensured that if you delete that container, the cloud provider isn't able to do something nasty with the certificate. It stays in your control. And going back here: what we have to do is exchange these parts.
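The TLS part of such a Heat template might look like this sketch: the listener references the certificate only through the Barbican container, so the provider never holds the certificate itself. Resource types are from the Neutron LBaaS v2 Heat plugin; exact property names may differ per OpenStack release:

```yaml
resources:
  lb:
    type: OS::Neutron::LBaaS::LoadBalancer
    properties:
      vip_subnet: { get_param: subnet }

  listener:
    type: OS::Neutron::LBaaS::Listener
    properties:
      loadbalancer: { get_resource: lb }
      protocol: TERMINATED_HTTPS        # TLS termination at the LBaaS
      protocol_port: 443
      # reference to the customer's Barbican container, handed in as
      # a parameter -- the certificate itself is never in the template
      default_tls_container_ref: { get_param: barbican_container_ref }
```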
So we say, OK, this part is different, and we reuse the rest of the code: we don't have MySQL to install, but we have to organize an LBaaS deployment and the attachment of the Barbican store, which we can provide with properties from here. So what's the effort? Your OpenStack installation needs support for LBaaS and Barbican, and you need Heat. You have to implement a handover mechanism for the Barbican container reference, which more or less means defining that this property is handed over to the Heat template — and that is nearly already there. You have to implement the public IP exposure for the LBaaS, which is part of the Heat template. You define the service and service plan for the marketplace, configure the service broker so it has access to your Heat API, and deploy it. The effort for the customer is much less: he goes to Horizon, or uses the OpenStack CLI, uploads the certificate to a Barbican container, creates a service instance — thereby providing the ID of the Barbican container — and makes a DNS entry somewhere, pointing to the IP of the load balancer. And what's the biggest effort in this whole chain, since we use a framework for it? It's producing the Heat template — which, as people who use Heat more often know, is not that big a deal. Easily done. You don't even need a software developer for it, because most ops people know how to use Heat. And so you can speed up providing additional services of different kinds on your platform. You don't need a group of software developers in some office somewhere producing every new service broker. With a framework like this, the heavy load of software development is already done — only some tweaks — and then you can think about the problem itself, about the service and how to manage it.
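The customer-side steps might look like this on the command line — the service name, plan, and the `barbican_container_ref` parameter name are assumptions for illustration, and the `<…>` references are placeholders returned by the previous commands:

```shell
# 1. Upload certificate and key to Barbican, wrap them in a container:
openstack secret store --name my-cert --payload "$(cat cert.pem)"
openstack secret store --name my-key  --payload "$(cat key.pem)"
openstack secret container create --name my-tls --type certificate \
    --secret "certificate=<cert-secret-href>" \
    --secret "private_key=<key-secret-href>"

# 2. Create the service instance, handing over the container reference:
cf create-service tls-termination default my-lb \
    -c '{"barbican_container_ref": "<container-href>"}'

# 3. Point a DNS entry for myapplication.org at the load balancer IP.
```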
And because we're open source, like most things in the Cloud Foundry Foundation, we're happy about contributions. We're happy if you want to open-source your own service broker, if you want to use our framework yourself, if you have bug fixes, if you want to help us improve our documentation, if you have tips for us. You can find us on GitHub: we are at our company's GitHub page, and then cfservicebroker, Cloud Foundry Service Broker. We're currently in contact with Steve Greenberg, and we hope to get into contact with the original Java service broker team, which made the original blueprint service broker, so we can get it back onto the incubator, into the Cloud Foundry Foundation projects, and get back to one code base. Don't divide — it's a positive-sum game. If you have questions, don't hesitate to contact us: my email address, our general email address, or our company's GitHub account. If you have any questions now, please use the microphones on the left and right side, because we're recording, so the people at home get them too.

Hi, thanks for the presentation. Just wondering — if we have minutes left, I don't know. Yeah, we have, I think, yes. Could you show us some code, like YAML code or whatever you use, for this example deployment you were sharing? I'm not asking for a demonstration, but to show us a little bit of actual code, to see how it would actually be implemented, more or less, in a general manner. Thank you.

So, if you go to the web page, you find only the documentation of the project, because we've spread it over several repositories, and in the topics list there is this repositories page, where you can find the different repositories with a declaration of which part of the framework they cover. And if we jump right to — where are we? We have the link... that part... nah, that link, that's bad. I'll have to talk with the guys about that. Let's jump — there. So, the main trick, yeah, for sure.
Because we have Spring Boot, we have this bean mechanism, so we can provide several connections to OpenStack in parallel — we had to trick a little bit there. That way, the customers of your service broker can create service instances in parallel without interfering. Often, with database drivers, you have the problem that the standard JDBC clients hold a connection to one database only, so we had to trick a little bit here so we can connect in different directions. That's how we organize it.

Sorry, I think I expressed myself very badly. This is the actual code of the service broker, right? I was wondering about an example of how to use it. Yeah — the YAML files that you probably have, that would be it.

I wanted to show something here, and then jump there. So we have this factory organizing the deployment of stacks. If you want to introduce a new service broker, we have these different interfaces you've seen on the blue slides, for providing implementations like the OpenStack platform service or the MySQL deployment service. There we have this trick with stack handlers — this is the raw interface, there — which are able to organize things like image management, and which more or less organize the reconfiguration of a map of properties we hand over to the Heat template. So you can easily organize which image should be there, which key, and so on. It starts with the parameters the user provides to the Cloud Controller of Cloud Foundry; from there, Cloud Foundry passes them to the service broker through the API calls. Then they are enriched with general parameters from the configuration, enriched with things we introduced in the service plans, and enriched with platform-specific configuration, like for OpenStack. What does it mean for OpenStack?
A standard default image ID, for example, that we want to use, which becomes part of our property list. And then you get a big map of properties for a stack. And then we go there and say: good. If you want to provide a service, we have to organize stable access to ports, because the list of our IP addresses has to be stable if we do not use DNS. So if we kill some service, they must remain. So we may have a stack only for organizing ports. Then we have things like persistent volumes, which we want to keep if the services are killed and newly instantiated — so we have a stack organizing volumes. And then we have a central stack organizing the deployment of our VMs, maybe with a nested stack organizing the different kinds of servers, which is then provided by this central stack to the other stacks: getting the properties from the port list and from the volumes, using some software deployment, installing, organizing some volume attachments, and so on. And then you get a complete stack of a cluster. I have to tell you about that because you'll see several templates later on. But if you want to have it easy, for a test deployment where you don't care about extra ports, extra volume attachments and so on, I can show you an easy example here in the repository — which is not that good code, because we are actually moving some parts of the code base, so you don't have everything online here. If you have a deeper interest in it, contact us; we can provide you with it. We're reorganizing our infrastructure — that's also the reason for the dead links in the documentation. And here you see, for example, the providing of the parameters which we got out of our Java code, and the code organizing our servers — using here not the software deployment groups but some kind of script we download onto the server, which is, let's say, a change of the code base here — and some volume attachment and volume.
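The parameter-enrichment chain just described — user parameters from the Cloud Controller, layered with service plan, platform-specific, and general configuration before being handed to the Heat template — can be sketched like this; the class name and layer ordering are illustrative:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: merge the configuration layers into one big map
// of properties for the Heat stack. Later layers win, so user-supplied
// parameters override plan, platform, and general defaults.
public class StackParameterEnricher {

    @SafeVarargs
    static Map<String, String> enrich(Map<String, String>... layers) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> layer : layers) {
            merged.putAll(layer); // later layers override earlier ones
        }
        return merged;
    }

    // Small helper to build a map from key/value pairs.
    static Map<String, String> of(String... kv) {
        Map<String, String> m = new HashMap<>();
        for (int i = 0; i < kv.length; i += 2) {
            m.put(kv[i], kv[i + 1]);
        }
        return m;
    }
}
```

The merged map is then what the stack handlers hand over as the `parameters` section of the Heat template.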
Like I said, this is for a test server, not a cluster. It's only for test purposes, if you want to have a test server, so we didn't put that much effort into that deployment. It's a Postgres one, and it's only here as an example, because we have other code bases — for example, a RabbitMQ cluster service broker, which organizes RabbitMQ and has its own code base, its own templates, and so on. What's interesting — we talked about the red parts — is organizing the binding management for RabbitMQ. It's more or less organizing the access via the management interface with the super-user account of the RabbitMQ cluster. It's not that many lines of code, and most of it you need because the default frameworks for organizing the connection don't think about one application connecting against several clusters or several installations — they assume it's always the same cluster. And the biggest problem when writing a service broker is exactly this: we don't have just one single cluster we are organizing; we have to think about multi-tenancy, about parallel connections to several clusters at the same time with different credentials — and there it gets tricky. Yeah, the time is up. If you have more interest in it, we can talk on the floor afterwards. Yes, thank you.
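The RabbitMQ binding management mentioned above boils down to two calls against the RabbitMQ management HTTP API with the super-user account: create a user, then grant it permissions on a vhost. A sketch, with URLs and credentials illustrative (and note that the real broker additionally has to keep one such client per cluster, as discussed):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Illustrative sketch of "create a binding" for RabbitMQ via the
// management HTTP API (the broker authenticates as the super user).
public class RabbitBindingSketch {

    // PUT /api/users/{name} creates the per-binding user.
    static HttpRequest createUserRequest(String apiBase, String user, String password) {
        return HttpRequest.newBuilder(URI.create(apiBase + "/api/users/" + user))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(
                "{\"password\":\"" + password + "\",\"tags\":\"\"}"))
            .build();
    }

    // PUT /api/permissions/{vhost}/{user} grants configure/write/read.
    static HttpRequest grantPermissionsRequest(String apiBase, String vhost, String user) {
        return HttpRequest.newBuilder(
                URI.create(apiBase + "/api/permissions/" + vhost + "/" + user))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(
                "{\"configure\":\".*\",\"write\":\".*\",\"read\":\".*\"}"))
            .build();
    }
}
```

The resulting username and password are what the broker returns as binding credentials to the platform.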