Right, so hello everyone. Welcome to the last talk of the day, I think, before the keynotes start. I'm Justin Carter, and this is Xu Xiaogao. I let him introduce me because he can say my Chinese name perfectly; I just make some kind of noise and it seems to be the right thing. Anyway, we work with Stark & Wayne, a consulting company that helps people be successful with Cloud Foundry. A very important part of the Cloud Foundry experience, of course, is service brokers, and as some of you may know, service brokers can be quite a strong pain point when it comes to deployment and lifecycle management. Over the years of helping our clients we've run into a few issues, and today we're going to introduce a few small tools that offer solutions to some of those problems. One of these tools is a piece of software called cf-containers-broker, created originally by Dr Nic and Ferdy, who are both members of the Stark & Wayne team and quite prominent in our community. Basically, cf-containers-broker allows you to deploy anything as a service on Cloud Foundry by running Docker containers as services. When a service request comes in through the API to create an instance, what you get in the end is a Docker container. And this is nice because we get to use the work of other communities: services that are already packaged as Docker containers today can be used in the Cloud Foundry ecosystem. And we don't need to know all the details of how to package the service up as a BOSH release, even though we all love doing that, right? No, we don't. So cf-containers-broker is a good way to go. Here's an example manifest: when we deploy cf-containers-broker, you describe the service and plan you want to provide, and which Docker container should come up when cf-containers-broker is asked to create a service instance.
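For illustration, a service definition of that shape might look roughly like this. The layout approximates the cf-containers-broker settings format, and every ID, name, and image below is a made-up placeholder, not taken from the talk:

```yaml
# Rough sketch of a cf-containers-broker service definition.
# All IDs, names, and the image are illustrative placeholders.
services:
  - id: 'e8a1b2c3-0000-0000-0000-000000000001'
    name: 'postgresql'
    description: 'PostgreSQL as a Docker container'
    bindable: true
    plans:
      - id: 'e8a1b2c3-0000-0000-0000-000000000002'
        name: 'free'
        description: 'A PostgreSQL container on a shared VM'
        container:
          backend: 'docker'
          image: 'example/postgresql'   # any Docker image that runs the service
          tag: 'latest'
```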
So in summary: we deploy the broker via BOSH, we register the broker with Cloud Foundry, and we get Docker images as a service. Great. Super, right? Taking PostgreSQL in Docker as an example, you can create service instances and they come up on the VM. But there is a problem. What is the problem? Well, you can create another instance, and another instance, and eventually the VM starts filling up. You have a lot of containers running on one VM, and boom, you have no elasticity; the box is simply full. So this is one issue we were dealing with. And Dr Nic, again, our founder, hacked together, no, created a brilliant solution to this problem, which is Subway. The idea behind Subway is that it is a multiplexing service broker: it acts as a proxy and allows you to scale out your backend brokers. So you deploy a number of backend service brokers, which could be cf-containers-broker, but there are other brokers with the same issue of being constrained to one node and lacking the elasticity necessary for production workloads. You put Subway in front of them and register Subway with the Cloud Controller. That way, when the Cloud Controller talks to Subway, Subway can place the instance onto one of the backends it has available. Let's take a breath and look at how this actually works. On cf create-service, the Cloud Controller talks to Subway. Subway picks one of the backends and tries to create the service on that backend. This may or may not work: if that backend happens to be one that is already full, it might decline to create the instance on that particular VM. So Subway will just fail over and ask the next backend: can you create it for me?
And if that works, the successful response gets passed back to Subway, and from there back to the Cloud Controller. So the end user never sees a VM blowing up; the failover is transparent and the service is created successfully. Something I find particularly interesting about this proxying approach is that the component in the middle is called with the exact same API that it calls out with, so the software is very thin: no translation of requests is needed. All right, let's look at the bind-service example. It works basically the same way. The Cloud Controller asks Subway for a binding for a specific service instance. Subway is completely stateless, so at this point it doesn't know which VM the instance lives on. So it goes out and asks a backend, passing in the instance ID: can you bind this service? And maybe the answer is no, I can't, because I don't know about that instance. No problem: Subway asks the next one, can you bind this service? Once a successful answer comes back, the end user knows nothing about the failover that happened in the back end; they just get their binding. And after binding, the application gets credentials that talk straight to the backend, not through Subway, so there's no extra latency once the real connections are happening. And delete-service is basically the same thing: are you there? Can you do it? Yes, this one can do it. No problem. All right. So now we've talked a bit about Subway, how it works, and what it's useful for, and I'll hand over to Xu Zhang for a little demonstration of how to deploy it. Okay. Oh, this mic is still too high; I should wear high heels. Before I continue, I have a simple question: could you please raise your hand if you speak German? Oh, okay. Thank you.
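The failover behavior just described can be condensed into a small sketch. This is not Subway's actual code (the real cf-subway is written in Go); it's a toy model, with invented class and error names, of a stateless proxy that tries each backend in turn for provision and bind requests:

```python
class BackendFullError(Exception):
    """The backend's VM has no capacity left for another container."""

class UnknownInstanceError(Exception):
    """The backend has never heard of this service instance."""

class FakeBackend:
    """Stand-in for one single-node broker, e.g. cf-containers-broker."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.instances = {}

    def provision(self, instance_id):
        if len(self.instances) >= self.capacity:
            raise BackendFullError(self.name)
        self.instances[instance_id] = {}

    def bind(self, instance_id):
        if instance_id not in self.instances:
            raise UnknownInstanceError(instance_id)
        # Credentials point straight at this backend, not at the proxy.
        return {"host": self.name}

class ToySubway:
    """Stateless proxy: try each backend in turn until one says yes."""

    def __init__(self, backends):
        self.backends = backends

    def provision(self, instance_id):
        for backend in self.backends:
            try:
                backend.provision(instance_id)
                return backend          # this one had room
            except BackendFullError:
                continue                # fail over to the next backend
        raise BackendFullError("all backends are full")

    def bind(self, instance_id):
        for backend in self.backends:
            try:
                return backend.bind(instance_id)
            except UnknownInstanceError:
                continue                # wrong backend, keep asking
        raise UnknownInstanceError(instance_id)

# Two tiny backends with room for one container each.
subway = ToySubway([FakeBackend("b1", 1), FakeBackend("b2", 1)])
first = subway.provision("instance-1")   # lands on b1
second = subway.provision("instance-2")  # b1 declines, fails over to b2
print(first.name, second.name, subway.bind("instance-2"))
# prints: b1 b2 {'host': 'b2'}
```

Note that bind returns credentials naming the backend itself, which matches the point that after binding the app talks directly to the backend, never through Subway.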
Okay, you will find out later why I asked that question. So first I will share with you how to deploy the Subway broker, then I will give you a demo. We provide two options for deploying Subway. First, as you can probably already guess, we can deploy it with BOSH, because in the CF world we can deploy everything with BOSH. I hope I pronounce "BOSH" correctly, but you know what I mean. Deploying Subway with BOSH is very simple, and I will share the steps with you in a moment. The second option is that you can push Subway as an app. The reason behind it is simple: we all know that CF is very good at running apps, so we provide this too. It's not unique to Subway, but it's nice to have the option. First, the BOSH route. You clone the repo from our GitHub, then you configure your broker, then you make the manifest and deploy. After that, you register it the same way you register any other service broker. This is the repo for the cf-subway BOSH release. It's open source on GitHub, under the cloudfoundry-community account, which Dr Nic created. We put a lot of our good project ideas into that account, so if you go explore it, I promise you will find something useful. After you clone it, configuring your broker is very simple. There are only two things you need to configure. One is the backend brokers: as Justin pointed out earlier, if you recall the picture, behind Subway you have multiple backend brokers, so here you just specify each backend broker's username, password, and IP or URL. The second part is about the Subway broker itself: the port it needs to listen on, and the username and password you will use when you register it with CF. So this is very simple. After that, the repo contains a script called make_manifest.
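So the whole configuration boils down to something like the following sketch. The key names here are invented for illustration; check the cf-subway BOSH release itself for the exact property names:

```yaml
# Illustrative only: key names are not necessarily those of the
# real cf-subway BOSH release.
properties:
  subway:
    # What Subway itself listens on, and the credentials you will
    # later hand to `cf create-service-broker`.
    port: 8080
    username: subway-user
    password: subway-secret
    # The backend brokers Subway multiplexes across:
    # username, password, and IP or URL for each.
    backends:
      - http://broker-user:broker-secret@10.0.0.10:8000
      - http://broker-user:broker-secret@10.0.0.11:8000
```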
So when you run this script, it automatically generates a manifest for you. I pass "warden" here because I deployed to BOSH-Lite on my local laptop; if you deploy to, say, AWS or vSphere, you specify a different parameter and generate the manifest that way. The my-broker file passed to it is just the config file we wrote earlier. After that your manifest is ready, the BOSH deployment points at the manifest you just generated, and you simply run bosh -n deploy. There you go. Once the VM is running, you register it with cf create-service-broker: you give it a name, say subway-broker, plus the username, password, and URL you configured earlier. The other way is to push it as an app. The idea is simple: you push the app without starting it. Then all the things we configured in the broker config for BOSH, the backend brokers and the username and password for your own Subway broker, you instead set as environment variables on the app. Then you restage the app, and once it's running, you register your Subway broker from there. The cf-subway repo is located in the same cloudfoundry-community account. So here I just set the app name and push it without starting, on purpose, because as you'll see in the next step we need to set a couple of environment variables on the app: for the broker itself and for the backend broker URLs. After you set those, we start the app, and once the app is running, you register it like you register any other service broker. So those are the two ways to deploy Subway. After that, you may want to see how it works, so I'm going to give you a demo. Let me switch over here.
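The push-as-an-app flavor of that configuration can be sketched like this: read the same settings from environment variables instead of a BOSH manifest. The variable names below are hypothetical placeholders; the cf-subway README documents the real ones. (PORT itself, though, is genuinely set by Cloud Foundry for every pushed app.)

```python
import os

def load_config(env=os.environ):
    """Read Subway-style settings from environment variables.

    The variable names here are hypothetical placeholders; the real
    cf-subway app documents its own expected variables.
    """
    backends = [u for u in env.get("BACKEND_BROKER_URLS", "").split(",") if u]
    return {
        "username": env.get("BROKER_USERNAME", "subway"),
        "password": env.get("BROKER_PASSWORD", ""),
        # PORT really is set by Cloud Foundry for every pushed app.
        "port": int(env.get("PORT", "8080")),
        "backends": backends,
    }

cfg = load_config({
    "BACKEND_BROKER_URLS": "http://u:p@10.0.0.10:8000,http://u:p@10.0.0.11:8000",
    "PORT": "9000",
})
print(cfg["port"], len(cfg["backends"]))  # 9000 2
```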
See here, let me go to my terminal. Sorry, I need to mirror the displays first. This. Oh, I know, it's tiny. Yeah, I know that. Thank you. Can you see the screen, especially back there? Yeah. Okay. To save some time, I already have everything deployed on my machine. If I run bosh, oh, I cannot type while people are watching me. This is hard, actually. Is this better? Yes. So I deployed cf-subway here, as you can see. I also deployed CF itself. Then this one is the Postgres Docker backend I deployed. Let me show you: for cf-subway I deployed only one instance; you can deploy multiple if you want. The backend has four nodes. So the basic idea is that this Subway broker sits in front of all four backend service brokers, and all four provide the same service catalog. I actually already registered my service broker with CF. If I show you, somehow I didn't call it subway, I called it postgresql-docker, but you can see if I do this that the IP actually belongs to the Subway broker. See here? So the broker we registered is really the Subway broker. What do we have in the marketplace? It provides three versions of the PostgreSQL service. I already created one service instance, my-pg, an instance of PostgreSQL. If we recall what Justin shared earlier, what happens in the back end when we create a service instance? You can see it from here. I deployed four backend brokers, right? In these four panes I have bosh ssh'd into the four Postgres broker VMs. So this is one of them, another one, a third one, a fourth one. What I did on each is run a watch command.
What I'm watching here is a docker command: I specify the host, then run docker ps, so it shows me the containers on that node. As you can see, on the first node I already have a container running. That is the service instance I mentioned I created earlier. Now I'm going to show you something cool. If you read this command, I'm going to create 20 service instances, and then we'll see what happens in the back end. I'm not typing, so everything works. Here you can see the containers being distributed across the four backends, right? Even though we only have one Subway broker. So let's go back here. Okay, we finished creating all 20 instances. Actually, this view may be better: you can see each backend got several instances. Let me go back. I'm not going to demo binding an app because it takes longer. Actually, let me check whether I tried it myself. Oh, nice: see, I already bound my-pg to a simple app here, and it works in a similar way. But for the new instances I just created, I'm going to directly demo what happens when we delete them. So now I'm going to do another bit of magic. Okay. You can see we are deleting the service instances we just created, and if you look back here, you see the magic happening: all the containers are stopped on the backends. The only one left is the one created six days ago. Okay, now if we run cf services again, only one is left. So there really isn't much delay, and all the service instance creation requests were distributed across all the backend brokers. It's nice. You can probably see my screen better than I can; oh, okay, now it's working, I'm good. So next, I want to suggest you try it out. Again, it's on GitHub under cloudfoundry-community, and you can try both ways.
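The spread of those 20 containers across the four nodes can be mimicked in a few lines. Round-robin selection here is purely illustrative; Subway's real backend-selection order may differ, with failover on full nodes as shown earlier:

```python
from collections import Counter
from itertools import cycle

# Four hypothetical backend nodes, as in the demo.
backends = ["node-0", "node-1", "node-2", "node-3"]
placement = Counter()

picker = cycle(backends)      # illustrative round-robin selection
for _ in range(20):           # `cf create-service`, twenty times
    placement[next(picker)] += 1

print(dict(placement))        # every node ends up with 5 containers
```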
Another thing I really want to share with you: let's say you need to write your own service broker. Guess what? If you use Subway, you don't need to worry about how to scale out your brokers. You simply implement how your service broker works with your service, right? Then in maybe 20 minutes, even 10 minutes, you deploy Subway configured in front of your backend brokers. Done. It's so nice. Also, Subway is in production. Stark & Wayne has lots of clients, like Intel and GE. One of our clients, GE, has the Predix platform; I don't know if you've heard of it. Basically, they deploy CF and everything else across multiple environments, and they had a problem there. They had the PostgreSQL Docker broker deployed, and they also had a Logstash Docker broker deployed, but unfortunately they could only deploy one instance of each, so the number of service instances was limited by one host. To solve this, we used Subway to scale out to multiple backend brokers, which solved their PostgreSQL Docker problem and their Logstash scaling problem. Very nice. I think it's still running in their production environment today. Actually, that's all I wanted to share. Any questions? Sorry, the question was: does the back end need to support any additional endpoints beyond the native Cloud Foundry service broker API? And the answer is no: it's the same API on the back end and on Subway. Yeah. To repeat: the only thing you need to do is specify your backend brokers. We don't care what your backend broker is, because you only need to specify a username, password, and URL for each. That's what makes it nice: you don't need to go any further than that. Subway doesn't really know anything about what the backends deal with.
It does expect, though, that they all have the same catalog. But if you have a back end like cf-containers-broker, that's no problem for you. Let's go back to the earlier example configuration: the services key is an array, so you can specify multiple services. You could have a Redis container and a Postgres container and any other container that you like. As long as all back ends have the same catalog, it'll work. But I'm not sure that's a good idea; it's better to have one Subway broker per set of backend brokers sharing the same service catalog. Subway has no state; it's stateless. Yeah, the backend brokers are just vanilla service brokers, as you would implement them for any other use case without using Subway. The backend service brokers need to keep the state of the bindings they create, just as they normally would; Subway is just a way to scale horizontally. Let's go back to the slides. So yeah, the binding that gets passed back from Subway is the binding to the backend broker directly. The application doesn't get routed through Subway; it talks directly to the back end. Sorry, if you saw any secrets on my screen, close your eyes and reset your memory. Thank you. Okay, where is my PowerPoint? Here it is. So let's find the slide. Here you go: this is the situation after binding. Subway asks the back end for a binding. When the back end says yes, you can bind it, the back end has to create that binding, and Subway just passes it right back through to the end user. And at that point, the app is bound directly to the back end. Yeah, sure. It depends again on the back end; Subway is completely without opinion there.
It supports the entire Cloud Foundry service broker API. If the back end supports it, then you can do it through Subway. It just passes requests through: the endpoints it implements are exactly the endpoints it calls out to. There's absolutely no difference there. Yeah, I think a good way to understand Subway is to think of it as a simple, even a little silly, stateless proxy in front of the real brokers. It doesn't really do that much. There is validation in the Cloud Controller when you create a service: if you give a name that already exists, the Cloud Controller won't accept that create-service request. Yes, the request goes to the backend broker first, then, yeah, that's correct. Another question, regarding the service catalog: all the backend nodes provide the same service, so is it possible that they provide different plans of the service? For example, one node runs on an eight-core machine and the other nodes run on four-core machines. No, Subway doesn't support that kind of use case; the backends really need to be identical. Or, I mean, you could do that, right? But it would basically be luck which one you get, whether you land on the eight-core node or the other one. Subway doesn't merge the service catalogs; it just looks at one backend and assumes that's the same catalog for everything, because the main purpose of Subway is really just scaling out a single-node broker. So yeah. That's it, thank you.