I'm back after the break, and you heard correctly: this session is about Cloud Foundry. I see the audience has changed a little; some of you are the same, some are new. I'm talking for the second time today about our service broker framework. There was another talk at two o'clock with a slightly different focus; some of the slides at the beginning are the same, and then we switch over to another topic, because both talks start from the same idea. My name is Christian Brinker, I'm from evoila, a cloud consulting and cloud software company from Germany and a Cloud Foundry Foundation silver member. My colleague Sebastian is ill at home; I hope he gets well soon. What I'm talking about now is how to provide access to services that are brought to you by a service broker, using a trick with HAProxy and floating IPs.

What's the problem? Those of you who were in the other session already know this slide with Bob. Bob has this cool application and wants to put it on Cloud Foundry. Who in the room has not used Cloud Foundry yet or doesn't have a clue about it? Nobody, so we can skip the next slides. If you're interested in the Cloud Foundry basics, you can go back to the video of the first talk or one of the others from today. So Bob wants a database, and he gets access to it through a service broker. We have heard about the Brooklyn service broker, and we heard about our service broker framework today. Bob's application works against this database, and then he runs into some failure he doesn't have a clue about. So he wants some way to access the service directly and debug the errors. And that doesn't only apply to databases; it applies to all the things on the right that he's using.
So to recall: we have our cloud application on Cloud Foundry, we have our service, and there is this lifecycle going on between our user, Cloud Foundry, the service broker, and its back end, such as Brooklyn or OpenStack Heat or whatever. We have the different kinds of services: managed services, user-provided services, bindable and non-bindable services, route services, syslog drains, volume services. And they all have the same interaction. You register a service broker, the catalog gets fetched and becomes visible in the marketplace, then you create your service instance, and then, the interesting point, you create service bindings for access. The service broker returns credentials to the Cloud Foundry cloud controller and the application. But those credentials live in the application environment: if you delete the application, that information is gone. So you have no way to access the database, because the access is bound to the application, and it is only reachable from the network of the application, as we will see later on.

One little advertisement: we have a framework for this cool stuff, implementing the whole workflow. It is developed in Java; I talked about it a bit in the other slot. The blue parts are implemented by the framework, the rest you can exchange, plus some configuration here and there.

Now we want to access our service directly; in Cloud Foundry that is called service key management. Bob has his service and wants direct access. The old workflow doesn't work for that. The change is that Bob himself is the one who gets the access, and for Bob the difference is two other commands in the CF CLI.
So instead of creating a service binding, he creates a service key, which he can give a name. Internally it is the same as a service binding: the same object, the same credentials. But the use case is a little different, because the application sits in the private network where the service host and the Cloud Foundry environment are, and Bob is not in that private network. So if Bob wants direct access to the service instance, it would have to be reachable from the internet, which is not what you want. The workflow itself stays the same, only slightly different: Bob talks to Cloud Foundry, the service instance gets created, the service broker creates a user and so on and returns the binding information with the IP address back to Cloud Foundry, and from there it is handed to Bob. Then Bob has direct access, but the service instance is exposed to the internet.

What you really want is this: the service instance stays encapsulated in your private network, the service broker is also in the private network, nobody can access either of them from the internet, and Bob only has some small interaction point — but somehow he has to be able to reach the database. And that is where this little change comes in. It looks like a lot at first glance; I hope you can read it, I see the writing is a little small from back there. We have several components here. The first thing is that we put an HAProxy facing the internet, because HAProxy is something well known, many people know how to configure it, and you can use it for scaling, for high availability, for health checks, and so on. But the problem with the HAProxy here is: how does the service broker communicate with it? How can it reconfigure it? Because we do not want one HAProxy per service instance.
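The two CF CLI commands mentioned above are the service key commands. A short session might look like this (the instance and key names are made up for illustration):

```shell
# Create a service key on an existing service instance
cf create-service-key my-database bobs-debug-key

# Show the credentials (host, port, username, password) for direct access
cf service-key my-database bobs-debug-key

# Clean up when the debugging session is over
cf delete-service-key my-database bobs-debug-key
```

Unlike a binding, the key is not tied to any application, so the credentials survive even if every app is deleted.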
So we can use one HAProxy for all of our services, because Bob only wants to debug. He does not want to make a large impact on the database by running batch jobs or putting big load on it; he only wants to look at the data. So we want to do it with one floating IP on the internet and, for example, one HAProxy. We install an agent on the HAProxy server, because HAProxy itself does not provide a rich API for reconfiguration over the network. This agent is a little program written in Python which connects to RabbitMQ and listens for the information that it has to fetch a new configuration for the HAProxy. That configuration is produced by the HAProxy back end we wrote, which is called by the service broker when someone requests a service key. When a service key is requested, the service broker does the usual work against the service instance — it creates a user, looks up the IP address of the service instance — and then hands that IP address to the HAProxy back end, saying: I want public accessibility for this address. The HAProxy back end knows the configuration of the HAProxy, because it manages it from the start, adds a port binding for the new target, and lets the HAProxy agent know, hey, there is a new configuration, and the agent fetches it from there.

Why this complicated setup? If we look at a production environment, we have many services, maybe many service brokers, and maybe many users. And several users may request service keys, and thereby new accessibility, at the same time. So we have the problem that the HAProxy has to be reconfigured from different places at the same time.
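A minimal sketch of such an agent, assuming a RabbitMQ queue named `haproxy-reconfigure` and a back end that sends the rendered configuration in the message body — the queue name, paths, and message protocol here are invented for illustration, not the framework's actual ones:

```python
import subprocess

CONFIG_PATH = "/etc/haproxy/haproxy.cfg"
QUEUE = "haproxy-reconfigure"  # assumed queue name


def write_config(rendered_config: str, path: str = CONFIG_PATH) -> None:
    """Persist the configuration the back end rendered for us."""
    with open(path, "w") as f:
        f.write(rendered_config)


def reload_haproxy() -> None:
    """Gracefully reload HAProxy so established connections survive."""
    subprocess.run(["systemctl", "reload", "haproxy"], check=True)


def run_agent(amqp_url: str = "amqp://localhost") -> None:
    """Dial out to RabbitMQ and wait for new configurations.

    The agent opens the connection itself and listens on no port,
    so it exposes no endpoint and no attack surface to the network.
    """
    import pika  # third-party AMQP client, needed only at runtime

    connection = pika.BlockingConnection(pika.URLParameters(amqp_url))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)

    def on_message(ch, method, properties, body):
        write_config(body.decode("utf-8"))
        reload_haproxy()
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()
```

The real agent fetches the current configuration from the back end rather than receiving it inline; carrying the config in the message just keeps the sketch short.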
When the agent gets the information — hey, agent, fetch your new configuration for the HAProxy — it fetches the current one, not some intermediate state: it gets the state at that point in time. And it does not have to expose any external API, because the HAProxy agent has no inbound endpoint. So there is no attack point, because every connection is opened actively by the HAProxy agent towards RabbitMQ.

What does that mean for the HAProxy config? Say we want a database cluster with three nodes. We have the possibility to address each node individually by binding a public port to its private IP address and private port, so the three nodes become separately accessible. You will notice this is not the typical HAProxy database configuration you see all over the internet. What was the idea behind it? Bob wants to debug the cluster, which means failover is not interesting from the service broker's perspective; he wants to know what a single node is doing. So we have to provide access directly to each node, and for that we need this knowledge about reconfiguration and about the options we provide.

But there are also other setups of interest. Maybe the connection to our database is not for debug purposes; maybe there is an application in some other private network that is not directly reachable from our Cloud Foundry-internal network, but that application also has to be able to connect to the database. Then we have no other way but to proxy through from one network to the other, because if you simply route between the networks, you have a wide-open channel from one network to the other, open for attacks from that network against the Cloud Foundry environment. That is probably not the kind of coolness you want to introduce into your production environment.
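The per-node port bindings described above could look like this in HAProxy terms (all addresses and ports are invented for illustration; the generated configuration will differ):

```
# One public port per cluster node, deliberately no load balancing:
# Bob wants to see what each individual node is doing.
listen db-node-1
    bind *:10001
    mode tcp
    server node1 10.0.1.11:3306

listen db-node-2
    bind *:10002
    mode tcp
    server node2 10.0.1.12:3306

listen db-node-3
    bind *:10003
    mode tcp
    server node3 10.0.1.13:3306
```

Each `listen` section maps one public port on the floating IP to exactly one private node, which is the opposite of the usual single-frontend, load-balanced database setup.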
So instead you may want only a plain failover configuration in your HAProxy, make the database accessible to the other network through it, and introduce access control lists there: source-address constraints, so that only the application node — say, your application in some OpenStack tenant — is able to connect to your database and nothing else. And if the service you are providing is some kind of HTTP or HTTPS service, you can even control other things, like paths, and segregate on that level. For example, with an Elasticsearch cluster, each collection of data — your document list in one collection, and another one next to it — is a separate REST endpoint behind one HTTP endpoint. Maybe you want exactly one of those endpoints to be accessible and nothing else. With this HAProxy configuration you can expose only that distinct endpoint to the other network, so you have a really small, well-defined access point that is tunneled through to the other network.

If we jump back, you also see one service broker and the components that have to be introduced into your stack, but the service broker is the only part that is specific to the service. You can reuse your HAProxy back end and your HAProxy for different service brokers. So you can have a whole bunch of service brokers using the same HAProxy back end and the same HAProxy, and you can manage different services going through from one network to another. Why RabbitMQ, we will see later on.

Jumping back to our benefits from this setup: we have access control lists for restricting your connections, for example by IP subnet, and we can do health checks, like we have done here.
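The two ACL variants just described — a source-subnet restriction on a TCP service and a path restriction on an HTTP service — might be sketched like this (subnets, ports, and the `/customers` path are assumptions for illustration):

```
# TCP: only the application subnet in the other network may connect
listen db-failover
    bind *:10010
    mode tcp
    acl app_net src 10.0.2.0/24
    tcp-request connection reject unless app_net
    server primary 10.0.1.11:3306
    server standby 10.0.1.12:3306 backup

# HTTP: expose a single REST endpoint of an Elasticsearch cluster
frontend es-frontend
    bind *:9200
    mode http
    acl allowed_index path_beg /customers
    http-request deny unless allowed_index
    default_backend es-cluster

backend es-cluster
    mode http
    server es1 10.0.1.21:9200
```

Everything outside the allowed subnet or path is rejected at the proxy, so only that one distinct endpoint crosses the network boundary.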
This line here in the configuration ensures that a connection made to the database through the HAProxy is always good, because the connection to the node is tested before the client connection is established through the HAProxy. The HAProxy has a dedicated user that is only able to open a connection to the database and nothing else; that role has to be introduced during your service instance provisioning. When you then connect to the HAProxy, it health-checks the node it wants to hand you, and only then tunnels you through to a database node that is really alive. No failover mechanism is needed on the client, because the HAProxy is a failover proxy for load-balancing purposes anyway.

You also have no direct access to the database. If someone runs, for example, a DDoS attack against the HAProxy IP address, the database doesn't really care: if the HAProxy goes down, the database stays healthy, your application connecting through Cloud Foundry stays healthy, your Cloud Foundry instance stays healthy, because it has another IP address, other load balancers, and so on. The complete stack is not taken down by a DDoS attack against this HAProxy, because the routing is separated: the route through the load balancer and the gorouter to the application and its service is a completely different route than the one through the HAProxy to the service instances.

On failover strategies, we talked about load balancing: you can have more than one node behind one port or IP address. And you can also limit connections. If you provide a service with your service broker, you may want to limit the number of connections: you define a plan in your service catalog that only allows, say, five concurrent connections to your database, because a large-scale application with 100 application instances should be treated differently than a service instance that is lying low.
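For a MySQL-style database, that pre-connection health check might look like the following (the check user `haproxy_check` is an assumed name; the broker would create that no-privilege role during provisioning):

```
listen db-checked
    bind *:10020
    mode tcp
    # Log in as a user that can connect but do nothing else,
    # so only nodes that really accept connections get traffic
    option mysql-check user haproxy_check
    server node1 10.0.1.11:3306 check
    server node2 10.0.1.12:3306 check backup
```

A client connecting through port 10020 is therefore only ever handed a node that just passed a real database login, not merely a TCP handshake.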
And then they should buy another service instance, one that is more capable of handling this high traffic and more expensive, and you want to push your users there not by restriction but by encouragement. So you restrict the number of connections to the database; the HAProxy has rules for restricting the number of connections. And instead of using the HAProxy only for service key management, you can also use it for normal service binding management: the connection from your app deployed on Cloud Foundry, through the HAProxy, to the database cluster can be managed the same way. So you can use the failover mechanisms, and you can use the connection limit to offer a smaller service plan than would otherwise be possible. And like I told you, connections between different private networks in your company network are possible as well: the HAProxy establishes this one special connection, because it sits in both networks, but nothing and nobody else does.

What alternatives are there for providing such access from elsewhere? There is the trick with cf ssh and port forwarding. If you have a Cloud Foundry user, you can log in with the CF CLI and use cf ssh to jump into an application container, and from there you can use port forwarding without having to know any SSH key, because the CF CLI handles that by making an HTTPS connection to the Cloud Foundry cloud controller and establishing the SSH connection from there. But the connection starts at the machine where the CF CLI is installed. If you want to connect an application to your database that is not at home in your Cloud Foundry as a Cloud Foundry application, you have to install the CF CLI there, or you have to introduce an SSH key there that is also installed on the service side, and that is not possible on the public clouds.
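The cf ssh port-forwarding trick mentioned above looks roughly like this (the app name, the node address, and the ports are made up; the forward works from whatever machine runs the CF CLI):

```shell
# Forward local port 63306 through the app container to the bound database
cf ssh my-app -L 63306:10.0.1.11:3306 -N

# In another terminal, connect as if the database were local
mysql -h 127.0.0.1 -P 63306 -u bob -p
```

The CLI negotiates the SSH session over HTTPS with the cloud controller, which is why no SSH key has to be distributed beforehand.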
So there are limitations, and there are also Cloud Foundry installations that regulate the SSH mechanism by disabling it because of security concerns, like I've seen in the Volkswagen Group IT cloud. Also, the connection you are making goes to localhost from your point of view, but that is the smallest problem. The other possibility is using a virtual machine or a container as a jump host, which is more or less the same trick as above: you get SSH port forwarding from there, and you don't have those limitations. But what you still cannot do there is apply constraints like limiting the number of connections.

Now, back to the rabbit, as I promised. We are currently working on scaling this up a little. I told you about going from one network to another. When you do something like this, a service broker that wants a connection goes to the HAProxy back end, and the back end knows: for that connection I have to go to this HAProxy, make a connection from there to the next, and from there to the next again. So you can go through a whole proxy chain, which is really common in company networks, or which you want because you have a failover pair of HAProxies: two HAProxies, and if one goes down the other comes up, so you have to incorporate both. The first part is already there; the second part we are working on, because you need management of the proxy chain inside the HAProxy back end. It has to know which proxy chain to go through, so the service broker can ask it to open that chain, not just a single HAProxy.

Like I told you, it is an open source framework for developing service brokers. You can find us on GitHub, you can contact us, and we are happy about contributions. If you need more support for it, your company can also buy support, but we are also
happy about the open source project itself. The HAProxy agent and the HAProxy back end are on GitHub as well, so you can get in contact with us, raise issues, fix bugs, make pull requests, and help us with the docs. If you have tips for us — hey, look at that, that's a good idea — we are happy about them. So I hope you got something out of my talk, and I'm open for questions. There are two microphones over there; because we are recording, please use them if you have questions now, or you can contact us via GitHub.