Hello everybody and welcome once again to yet another OpenShift Commons briefing. We are really pleased to have some members of the Ansible team here today to talk about deploying multi-container applications on OpenShift using the Ansible Service Broker. It's a topic that people have asked a lot of questions about, and asked for a demo and some background on. So I'm really happy to have Todd Sanders and John Matthews and a number of other folks from Red Hat on this call, and we're going to let them talk for about 20 or 30 minutes. You can ask questions in the chat and then we'll open it up for Q&A afterwards. All of these presentations are recorded and put up on the OpenShift YouTube channel, so you'll be able to find this in a day or so. I'll get them up there as well as on the OpenShift blog. So without much further ado, Todd and John, go ahead and take it away and we'll see where we go. Okay, thank you. Thanks so much. I think first we'll just introduce ourselves. My name is Todd Sanders. I'm an engineering manager on the systems design and engineering team, and amongst other things I've got an engineering team that's focused on all things Service Broker. Thanks, Todd. My name is John Matthews. I work for Todd and I'm the tech lead for the Ansible Service Broker. All right, so with that, we'll jump right in. I'm not going to give an in-depth overview of the Service Catalog or even the Open Service Broker API, but I will touch on them briefly here. If this is a new concept for you, I'd encourage you to look at some really good YouTube videos that are out there; there's one by Paul Morie called Service Catalog Basics that will give you some real in-depth information on this. But at a high level, this is an API working group that was formed back in 2016 to come up with a common API for interacting with services. The idea here is that there are two entities in this architecture.
There's the Service Catalog and the broker itself, and the broker is what we're really going to talk about today. The broker is about a couple of different things. Number one, it advertises services to the Service Catalog, so that the Service Catalog can see a set of services that a service consumer could interact with. Beyond that, it stands up those services, or creates instances of those services, and then interconnects services together, or applications with services. So with that, we'll jump into the Ansible Service Broker, which is an implementation of that Open Service Broker API. The idea here is to leverage the power of Ansible in orchestration and use that within Kubernetes and OpenShift to stand up either services or applications, depending on how you're using them. When we started down this path, we wanted to be able to take advantage of the investment that's already out there for a lot of folks in the area of Ansible, whether that's in playbooks or in roles that are in Galaxy. We definitely wanted to use the power of that to stand up, again, applications and services within an OpenShift cluster. So as we dig in here a little deeper, there are really two things we're going to talk about in terms of our architecture. The first is the broker itself, the Ansible Service Broker. Again, this is an implementation of the Open Service Broker API, which we mentioned on the previous slide, and at a high level it has the ability to handle traditional source-to-image deployments. So if you want to start from GitHub and deploy from there, you can. It also allows you to take advantage of pre-existing or pre-bundled images that may exist in your registry. The second piece is what we call the Playbook Bundle, and you can think of the Playbook Bundle as the fuel for the broker.
These are the application definitions that say, hey, this is what my multi-container application looks like. The idea here is a simple, lightweight directory structure that contains a set of playbooks. In its simplest form, that's really what it is. The Open Service Broker API defines a set of four lifecycle actions: provision, deprovision, bind, and unbind. And these actions tie directly to playbooks that exist within the Playbook Bundle. We then take this Playbook Bundle and package it inside what we call a meta-container. Basically it's just a basic image, a minimalistic CentOS or RHEL image with an Ansible execution environment that sits on top of it. Really, all we need is enough to run Ansible. The idea is that when the broker invokes the Playbook Bundle, it's going to spin up this meta-container and execute the playbook that corresponds to the lifecycle action you want to run, which causes those actions to be run within your cluster, and then that meta-container dies and goes away. Maybe a picture helps make that a little more evident. So if we take a look at this picture of the architecture and start in the middle, again we see the Ansible Service Broker. We see the broker talking to this thing called the Red Hat Container Catalog to get the Ansible Playbook Bundles, these meta-containers or application definition containers. It doesn't have to be the Red Hat Container Catalog; it can actually be any image registry. But the Ansible Service Broker is going to reach out, discover these Playbook Bundles, and advertise them as services to the Service Catalog, which are then going to show up in the UI so that the service consumer can interact with them.
So once the service consumer says, hey, I want to instantiate or provision a particular service (let's use database-as-a-service as an example: I want to provision my database), they would select that service in the UI and enter a set of parameters or configuration options that we've exposed as part of the Playbook Bundle. Those are the knobs or tuning that we want to allow that customer to interact with. Then the catalog sends that information to the broker. The broker determines, okay, it's this APB, and it spins up that meta-container and executes the appropriate playbook corresponding to the lifecycle action, and then that meta-container dies. And similarly, that will happen for any of the four lifecycle actions that we've mentioned: provision is all about standing up the service, bind is all about interconnecting things together. In the next couple of slides, I'm going to take a closer deep dive into the anatomy of a Playbook Bundle. What I really want to get across here is how simple and lightweight this is. Again, it's a directory of files. The files that you see there, provision, deprovision, bind, and unbind, all correspond to playbooks for the lifecycle actions that can occur based on an operation coming in from the Service Catalog. These could just be entry points, or they could be full playbooks. If you have a set of roles that exist already (you can see the deployment role to the right) and you really want to use those as part of a lifecycle action, say your provision, you can definitely take advantage of that in this architecture. The idea is just to give you a simple way of invoking a set of playbooks at each one of these lifecycle action points. And there's one additional file at the bottom: the apb.yaml.
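As a sketch of the layout just described (the lifecycle playbook file names come from the slide; the role name and extra paths are illustrative assumptions), a Playbook Bundle directory might look like:

```
my-apb/
├── apb.yaml             # metadata file read by the broker
├── provision.yaml       # playbook run on provision
├── deprovision.yaml     # playbook run on deprovision
├── bind.yaml            # playbook run on bind
├── unbind.yaml          # playbook run on unbind
├── Dockerfile           # builds the meta-container: minimal base image plus Ansible
└── roles/
    └── my-app-deployment/   # an ordinary Ansible role the playbooks can reuse
        └── tasks/
            └── main.yaml
```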
The apb.yaml is really just a metadata file that's used by the broker. The information in here is what the broker needs to know, such as what image it needs to invoke to represent this Playbook Bundle. Taking that further, it also defines the parameters that get exposed through the Service Catalog and its UI, which the user or service consumer has the ability to configure at provision time. It can also carry a set of dependencies that we use to understand, when we connect services together, what other containers we need to spin up as part of this orchestrated process. So again, taking a little bit deeper look inside, if I wanted to create an APB in its most simplistic form, these are the steps I would go through. And this is where the work is from a development standpoint: if we're talking about packaging a service or an additional application, creating these Playbook Bundles is what lets us drive our applications and services through the Ansible Service Broker and realize them through this architecture. The first thing I'm going to do is create an apb.yaml; again, that's the metadata file. Then I'm going to create the set of playbooks that I want to invoke. If we're just starting out, maybe I'm only going to create provision.yaml so I can actually instantiate my service. Then I'm going to use the apb tool (we'll point to its GitHub repo later); we have some tooling that makes this easier. You're going to run basically a prepare, and what the prepare does is take the metadata file, the apb.yaml, base64 encode it, and store it as a label on the outside of the Playbook Bundle image itself. The nice thing about doing that is we can interrogate a number of APBs without actually having to download all of them across the wire.
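The prepare step can be pictured as adding one line to the bundle's Dockerfile. This is a sketch from memory of the tooling's conventions; the base image name, label name, and paths are assumptions and may differ between versions:

```Dockerfile
# Minimal meta-container: just enough of a base image to run Ansible
FROM ansibleplaybookbundle/apb-base

# `apb prepare` base64-encodes apb.yaml and stores it as a label, so the
# broker can read the spec from the registry without pulling the image
LABEL "com.redhat.apb.spec"="bmFtZTogbXktYXBiCi4uLg=="

COPY provision.yaml deprovision.yaml bind.yaml unbind.yaml /opt/apb/actions/
COPY roles /opt/ansible/roles
```

The label value here is just the base64 of a stub spec ("name: my-apb" plus an ellipsis), not a real one.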
Storing the spec that way was really a performance savings for us. Then essentially I'm going to build that container, and now I have this APB meta-container. If I take a closer look inside, again with apb.yaml being that metadata file, on the right-hand side you can see an example. As I mentioned, it's got a name and the image that represents that APB; the broker is going to need to spin this up to execute the appropriate playbooks for the service you're talking about. Additionally, you can see some parameters. In this case, there's a namespace parameter and a message parameter. These would be exposed through the Service Catalog to the UI so that the customer or service consumer could set them as part of the provision operation. Then if we take a look at the provision.yaml, again, this can be a full playbook or an entry point into a role that already exists. The idea, though, is that if you look at the tasks section, you can see that in its most simplistic form this can basically just be a front for a number of oc commands that probably everybody's already familiar with. In this case, it's going to take that namespace parameter that we saw in the apb.yaml and use it to create a new project space. And similarly, it's going to do a create on a resource file that already exists for standing up, say, a sample application. If we take a look at the deprovision, it's what you would expect: simple playbook tasks that basically delete that namespace or remove that project. And then lastly there's the Dockerfile, which is all about how to build up this meta-container, pulling in the Ansible execution environment amongst other things. The last thing I want to point out: one of the things I just showed was fronting a number of oc commands. Well, we can actually do better than that by leveraging Ansible modules.
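To make those two files concrete, here's a minimal sketch. The namespace and message parameters match the example on the slide; the bundle name, image, and resource file path are illustrative assumptions, and the final task shows the same project creation expressed with a Kubernetes module (shown here in its present-day kubernetes.core form, since the 2.4-era module names differed) instead of shelling out to oc:

```yaml
# apb.yaml: metadata the broker uses to advertise and run this bundle
name: hello-world-apb
image: example/hello-world-apb    # image the broker spins up for lifecycle actions
description: Stands up a sample application
parameters:
  - name: namespace
    description: Project to deploy the application into
    type: string
  - name: message
    description: Greeting displayed by the application
    type: string
---
# provision.yaml: executed inside the meta-container on a provision request
- name: Provision the sample application
  hosts: localhost
  gather_facts: false
  tasks:
    # Simplest form: front the oc commands everybody already knows
    - name: Create the target project
      command: oc new-project {{ namespace }}

    - name: Create the sample application from an existing resource file
      command: oc create -f /opt/apb/files/hello-world.yaml -n {{ namespace }}

    # Equivalent of the first task using the Kubernetes modules instead
    - name: Create the target project via the k8s module
      kubernetes.core.k8s:
        api_version: project.openshift.io/v1
        kind: ProjectRequest
        name: "{{ namespace }}"
```

The deprovision.yaml would be the mirror image, essentially a single task deleting that project.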
So right now there's some work going on to develop a set of Kubernetes and OpenShift modules for Ansible 2.4, and those are going to allow us to take a little more advantage of the power of Ansible. Here you can see a side-by-side comparison: on the left-hand side I'm directly invoking oc commands, and on the right-hand side I'm doing the same thing but leveraging this set of Kubernetes and OpenShift modules. If that's interesting to you, this is all fairly new and being developed now; I've put the GitHub link there at the bottom, and if you want to go check that project out, I encourage you to do that. All right, so I've talked a lot about a lot of things. I'm going to turn it over to John now, and he's going to walk through a set of slides to reinforce what we've talked about and then give us a live demo of some of this. John? Thanks, Todd. So for the demonstration, what I'd like to show is provisioning two things. We're going to create a Python web app, and that Python web app will need a database, but we're not going to create the database with it. So the first thing we'll do is stand up the web app just by itself, not connected to anything. Then, in a separate step, we're going to create the database. After we have that up, we're going to show how you can do a bind to actually join those two together. That's really the magic I want to show here: how does bind work, and how do the credentials get into the right spot? So with that, if we go back to that original diagram that Todd showed, I'll try to show you what we'll do here for the demonstration. Picture that on the left-hand side is the service consumer. All the interaction the service consumer does will be through the catalog web UI, and the catalog web UI for this demonstration is going to be talking to the Ansible Service Broker to get the list of APBs that it knows about.
So when we log into OpenShift, we're going to see that there are several services we can provision, and about half of those services are backed by APBs that we have available in the Ansible Service Broker. The Ansible Service Broker knows about the APBs that are out there by talking to a registry. In this case, that could be the Red Hat Container Catalog, Docker Hub, Quay, whatever you really want to run up there. In the registry, there are going to be several APBs. What we do is look at all the images that exist in an organization and then filter out which ones are actual APBs, and that's how the Ansible Service Broker builds up the catalog that it sends back. So in this case, we're going to get the Postgres demo APB and we will download that. After we have it available, the next thing we'll want to do is a provision of it. The Ansible Service Broker will run that image, so we have the container running, and then we'll pass in a provision to it, and that provision corresponds to the provision method of the Open Service Broker API. When we pass that provision in, what we want to happen is for the provision.yaml to execute, and the provision.yaml is responsible for doing whatever is needed to actually create this Postgres database. So at the end of this, we've done one provision on the APB and we have a Postgres up, but the Postgres is not being used by anybody. The next thing we'll do is create our Python web app. We have a GitHub repo and we'll do a source-to-image build on that so we can get the web app running. But again, this web app is not connected to anything. So this is a case where we have two services but they're not actually bound. The magic we want is to bind the database to the Python web app, and we're going to do that through the Service Catalog.
That binding is where we're going to create a secret, and then we'll inject the secret into the web app, and that will allow us to see the database. The last thing I wanted to highlight, since Todd was talking a little bit about binds: when you think of bind, think of credentials. That's really what's happening here. The workflow is that the broker provisions the APB, and the APB is responsible for generating those credentials. It returns those credentials back to the broker, that makes its way to the catalog, and the catalog saves those credentials. When you do the bind, the bind creates a secret and then injects the secret into our application. As we go through the demo, I'll show where the secret is. So with that, I'll get out of slides and go into the demonstration. For this, I'm running a local oc cluster up on my laptop. What I want to do first is create a new project, and then I will provision this Postgres APB into that new project. So pick Postgres and then provision it into the demo project. This is probably going to take maybe three or four minutes to complete, on the order of that, and we'll take a look in a minute at what's going on. One thing I forgot to mention here is that this is the newer web UI for the catalog that we're looking at, and you can see that there are several other services that do exist. Right now what I want to do is take a look and show you what happens when you launch an APB. In this case, the APB is running in this project, so let's just bring it up and see what it's actually doing. Okay, so we're looking at the logs right now of the Postgres APB that's running, and what we're seeing is that the logs are just showing the output of a playbook that's running. You can see some steps here where we're creating the namespace, creating the volumes, the service, setting up an image stream, and things like that.
So really, all this is doing is invoking normal Ansible code, and this Ansible code is responsible for creating our Postgres database. I'll let that continue to run for a bit; it will probably be done in a minute or two. Let's go back. The next thing I want to do is create a Python application inside the same namespace. We have a Python app that's on GitHub, so we'll get a URL to that, and then we'll go back to the catalog and select a Python application. Again, I want this to go into the demo namespace, so I give it a name and then the path, and then we'll build that guy. Now at this point, it looks like my database has finished provisioning, so I have that here, and we're just waiting right now for the Python app to come up. Let's give this another thirty seconds to two minutes. While that's going on, I'll just go back and look at the logs for the APB. What I want to show you in the logs, down at the end when this completed, is something that looks a little bit strange: this thing called bind credentials. These bind credentials are just a base64 encoding of some data that is needed for the Python app to actually talk to our database. These would be the typical things you would see: a URL, username, password, things like that. So the APB's part in this, after it provisions, is that it creates these credentials and then sends them back to the broker. We're going to come back and see this in the form of a secret after we do a bind. So let me go back to the demo project and see if everything is up now. Okay, it looks like I have my Python app and I have the database. Again, these are not connected. I'm going to look right now at the Python app, and what I should see is just an application that's up with no data. In this case, that means I'll be looking at an application that just shows us a map with no data points on it.
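As an aside before the demo continues: when the bind eventually happens, those returned credentials get materialized as an ordinary Kubernetes Secret in the project. Here is a sketch with illustrative names and values only, not data from the demo (stringData is used so the values are readable; the cluster stores them base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-demo-binding   # illustrative name
  namespace: demo
type: Opaque
stringData:
  DB_HOST: postgresql.demo.svc
  DB_PORT: "5432"
  DB_NAME: demodb
  DB_USER: admin
  DB_PASSWORD: s3cr3t
```

On redeploy, the web app picks these up, typically as environment variables wired from the secret on its deployment configuration.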
Once we load the data in, we'll see red dots that correspond to zip codes. So that's the next thing I want to do, and this is where the magic comes in: I want to actually create a binding from our Python web app to the database. So I click on the kebab menu and then I click on create binding, and there's one Postgres APB already there, so I can bind to that guy. At this point we've created the binding, and what just happened is that you'll see this got bumped to deployment number two and we're redeploying. Let's take a look at the secrets, and I'll show you what happened there. We have a new secret that got created, and it has just the information you would expect to connect to the Postgres database. So when we did the binding, the Service Catalog took those credentials from the broker, created a secret, and injected it into this namespace, and then it told the pod to restart so it could pick them up. So now we expect that the Python app should have finished redeploying. Let me just go ahead and refresh it. Yep, and now we see that it's connected to the database, and we see these red dots, which means it's able to read data. So that concludes the demonstration portion. Just to take a look at what we were doing in that one: Todd mentioned earlier that there are some newer modules that have been created in Ansible to control Kubernetes and OpenShift resources. This is a good example of one of those APBs that's using the newer modules. So if you take a look at this APB, and I have a link up here in GitHub where you can take a look at it more, you'll see an example where we are creating a service, and now we're using more of the modules. This is a good example to look at to see how those modules are working out. We also have several other examples, including some examples of using an external service like Amazon RDS.
And we have examples of WordPress and a few others. So if you look at the fusor/apb-examples repo, you'll find several examples there that you can run. And with that, that concludes what we wanted to share with you. We have here on this slide just various contact info, other places to look, links to YouTube, things like that. This has been really good, and there have been a bunch of questions in the chat. So I think probably the very first and best one is: when can people try this out themselves, and how? Yep, so a few things there. This will be in tech preview for 3.6, so we will have it in there. If you're willing to experiment, we have some really early builds right now that are available just for playing around, and we have an environment that you can provision and actually run this through. I have a link to it on here. So if you go to the Ansible Service Broker link, that will bring you to GitHub, where we have several other things. The one I wanted to show you is our environment for running this. You'll see right here this demo environment with oc cluster up; that will bring you to a project we have called CatASB, and CatASB is an environment for running the catalog and the Ansible Service Broker. It's set up right now to run an older build of Origin and an older build of the service catalog; we're in the process, hopefully this week, of updating it to some of the newer builds. But this is where I would look if you want to get a feel for this and start getting your hands dirty. We have this set up so you can run it on Linux, you can run it on a Mac, or you can also deploy it on EC2. We're still working through some other options, but this will give you an environment so you can go ahead and start playing with it. All righty, that's a good start and a good question. Thanks, Thomas. Craig is asking, how is authentication and authorization handled for Ansible Container?
I didn't actually mention Ansible Container in this case. There is a path where Ansible Container would fit in through a guided workflow, but what we're showing right here doesn't really use anything from Ansible Container. All right. Let's see. John is asking whether the connection from OpenShift to the ASB is using the Open Service Broker API. I think you answered that. Yep. All right, so I think you've got that one down already. And then let's see what else is there. Hold on. Okay. Adam's asking, what if the application won't run unless it has a database? What if there is a separate application needed to run database migrations? So, yeah, in that case, I think there might be something left to do, where the deployment for the application would not actually come up until we have the bind already created. So it might be a case where it's only partially defined at that point. And then there's a question about customization of the bundles: how can people customize the bundles if they want to use Postgres from the broker but want to use their own Postgres config file? Yeah, and that's really kind of the power of this. One of the things we really wanted was to make these things easily extendable. So certainly you could pull down an existing Postgres APB, crack the cover, mutate it, re-bundle it, rebuild the meta-container, and take it from there. For example, one of the things that you might have noticed, or maybe you didn't, is that the Postgres image that John showed in this demo actually seeded the data in the database as well. So here's an example where we took an APB that was all about standing up Postgres, mutated it to additionally seed the data, re-bundled it, and ran it through this process. So that kind of workflow is definitely something that's very important to us. Adam's also asking, what are the requirements for running the broker? What does it take to host the broker in a Kubernetes or OpenShift cluster?
Can this all be done via the command line? Can I use this to create Dockerfiles in the cluster? It's a long one. You can also jump in and pick up some of these questions. Yeah, I'm just starting to read through right now; it's a bit easier for us that way. So as far as the requirements for running this: I'm running it right now just with a local oc cluster up, but I did have a modified version of Origin so I could pick up the newer web UI for the service catalog, and then of course I deployed the service catalog as well. I'm not really aware of too much on the requirements side. It's really just a case of needing Kubernetes 1.6, so there's enough there for the service catalog to be happy, and then the broker plugs into it. As far as doing this from the command line: yep, we can do a lot of that. The service catalog itself has its own API server, so we have another kubeconfig that talks to the API server for the service catalog, and we can interact with that just through normal kubectl commands. Through doing that, we can do everything we have to do on the broker side, because all interaction is through the service catalog. So yes, you could do this through the CLI, using kubectl against the service catalog's own API server. I think the plan later on is to put it behind an aggregated API server; I don't quite know the details on that right now. What else? Now, as far as creating Dockerfiles in the cluster, we're not really doing that right now. We are doing some work on a source-to-image build for APBs, where we'd be building these containers, but at the moment we're not creating Dockerfiles in the cluster. The next question is about workflows: what do you expect the workflow to be once an APB playbook is created? Is there some support for versioning? And how do they modify an existing playbook? That's still from Adam, too. Yep.
So we do have some labels, and the labels can support versions. We haven't done a whole lot of work with that just yet, so that's something we'll probably have to design a bit more. As far as modifying an existing playbook: you could just fork the repo. I'm assuming the APB comes from a repo; you could fork it and make any changes. You could run the playbooks to test them using ansible-playbook directly; you don't even have to build this as an APB for a lot of the testing. Then once you have the playbook and you're pretty happy with it, you can go through and build it as an APB so it gets picked back up into the catalog. There's a little back and forth around the earlier question about Ansible Container; we misread what they were asking. The question was: how does it get the kubeconfig information? So we're using a service account. Well, we'll be using a service account; that's something we're actually working on this week. We're going to have two service accounts. One will be set up just for the broker itself, and then the broker will be dynamically creating service accounts just to run the APBs, revoking each service account afterwards. Ashok is asking, how do we achieve CI deployment with the ASB? I don't have a good answer for that right now. Sure. I mean, certainly there are pieces that we know need to get put in place to make it easier, not only for building, but also for getting newly built APBs into an internal registry so that they're available in the service catalog to be invoked. Some of that begins with the work we're doing right now, in terms of supporting on-demand dynamic building through our work around source-to-image. But it's definitely an area that I think we need to expand; we haven't thought that far down that path yet. Yeah. So if you can throw that last slide of yours back up with all of the information on how to connect with you guys.
I think that might... there we go. So what's the best way for people to reach you? The email or IRC is what I'm guessing? Yeah, I think IRC is probably the best; obviously all our engineers sit in there, so if you want real-time questions answered, that's probably the best. Next to that would be the email list. Our sprints are all out there, accessible in Trello, so if you want to take a look at what's going on and where we're focused, that's a good area. And then in terms of just following along in the development, or contributing, or seeing where you can help out, obviously all our GitHub repos are out there as well. We do try to make a habit, any time we do sprint reviews or things like that, of actually recording videos. So there are actually a number of good videos on the YouTube channel that cover a number of different things, and I'd encourage folks to take a look there for more information as well. Joshua is asking in chat for a little more clarification on the credentials issue with the oc commands; I've just got to take a look at that so I can read it. I think we're okay, too. But there is sometimes confusion between what is Ansible Container and what is the Service Broker and those things, so I think this is good. So I think this brings us close to the end. There's one more question that just popped in: this seems to provide some of the same capabilities as templates; how does this compare and contrast with templates? Yeah, I think there's definitely some overlap. I think when you start to get into more complex applications that require a little bit more orchestration, you're going to find some shortcomings with templates, and you're definitely going to need to lean more towards something that's better suited for that.
I think Ansible is definitely more suited to handle more complex applications. The other thing is, I think there might be instances where you want to do things off-cluster or external, and this technology gives you that ability as well. Yeah, the RDS example is a really good one of that. For Jim Whitehurst's keynote at Red Hat Summit, the last demo that was shown, we did that with one of these APBs. For that case, we had a Python app that was on a local machine, and then we provisioned an RDS instance that was on EC2, and we just joined the two. So for any situation like that, where you're doing something that's partly cloud-native alongside something that's traditional and you want to bridge them, we think the APB could be a good way to do that: you can package whatever you need for the traditional stuff and still link it into your cluster. Awesome. Well, I think that probably brings us to the end of today's presentation, and I'm sure we'll have an updated version of this as the new releases come out. I want to thank you both for joining us today, and everybody else who came on. The OpenShift blog, blog.openshift.com, is where you can find the deck, which I'll ask these guys to make a PDF version of, linked with the video; it should be up in a day or so. And I'll broadcast that out to the mailing list and out on Twitter. Next week we have a presentation from Brian Brazil on Prometheus, so if you'd like to join us for using Prometheus with OpenShift, that'll be next week's talk. And we're looking forward to a lot of things. Oh, somebody slipped in one more question if we've got time. It's around the ELK and metrics stack: are there any plans to deploy ELK? We'll take that right there. For deploying ELK? We actually have... is it in our sample repo? I don't know if it's there now. Do you know if it's in the example repo?
I think we took some of it out, but we have done some example APBs that will stand up an ELK stack. We actually have a number of services that we're looking to expose as APBs, and certainly one of them is ELK, or from a Red Hat standpoint, our common logging stack. So absolutely, there's lots of good stuff coming. Lots more room for other demos and presentations. So if anybody has other topics that they want to hear about, or has something that they want to present, please do reach out to me via Twitter at @openshiftcommons, or at @pythondj, which is my Twitter handle and an easy way to get ahold of me, or just jump on our Slack channel and ask a question there. So thank you again. That was really good; I'm so pleased to see all the service catalog work coming into reality on OpenShift, and this has been a huge help to get the service broker stuff done. So thanks again for everything you guys have been doing. Take care.