Good morning, everyone. Good morning, Robin. So who came to the session because they thought they were going to get some containers for their feet? Because they read Twitter. Okay, you get socks. I'm going to join a baseball team. There you go. Anyone else? Oh, now I feel like you guys are full of it, but there's one more. All right, so I signed up for this talk about Ansible Container, because I didn't know if these fine gentlemen who actually develop it all the time were going to be coming with me, but they did, and they can talk about what they've been doing on it very recently and what they'll be doing on it in the future, rather than what I knew from three weeks ago. That seemed like a better idea. So I'm going to hand it off to Mr. Joshua Ginsberg, followed by Chris Houseknecht. And this guy, that is my boss, Greg DeKoenigsberg. Howdy. Yes, have some socks. You can throw socks at people if they ask good questions. Who wants... You're the distributor of the socks. All right. Hi, I'm Jag. I'm the chief architect with Ansible. And with me is House, and you might remember us from your Sunday night primetime TV lineup. But today we're going to talk about Ansible Container.
So one of the cool things that we're discovering as we're going down this whole DevOps rabbit hole is that the tooling that we use informs the culture of our teams, and the culture of our teams informs the tooling that we're using. And right now what we've come to is containers, where containers are this incredible way to have very clear boundary definition about what services are responsible for, what the boundaries are within a particular process or VM, for greater security. It has all these tremendous benefits as tooling. And the thing is that we've been talking about containers for years, and it always seems like the deployment of containers is next year. Next year we're going to move all of our production stuff to containers. Next year is going to be the year of containers. It's like the new Linux on the desktop. And so we struggle, I think, in large part because we don't have really good tooling around containers yet, which is what we're all scrambling to build right now. Containers as a paradigm has basically meant that all of the monitoring and management and building tools that we've spent the last 30 years as an industry building, throw them out the window. They don't really apply anymore. And so we're also scrambling to catch up. And in this problem space Docker has definitely been the dominant solution. And it's been fantastic, because they've created this really great on-ramp onto the container ecosystem. You know, you download this one thing, you install this one thing, and off and running, you're in containers. It's great. But in the last year or so we found that there's really only one off-ramp from the Docker ecosystem, and that's Docker Swarm. And Docker Swarm is a perfectly fine product, and that's a perfectly fine business strategy for Docker Inc. But we at Ansible aren't really about that one thing. We want to talk to all the things; we want to support all the things, because really there isn't one solution that works best for everybody.
There are all kinds of solutions, and solutions can change. And so we don't want to invest all of this time and this energy into building these artifacts that lock us into this one off-ramp when there might be new and exciting things right around the corner in this highly rapid and changing ecosystem. And so this kind of fits very well with Ansible, because Ansible talks to all the things. Ansible is an automation language that doesn't care. We will talk to whatever it is that you want to run your things on. And so when we're talking about building containers, it kind of seems natural that Ansible could be a way to have this sort of universal language that doesn't pick sides, doesn't pick favorites, doesn't pick your destination in order to build and run and deploy your containers. So we built Ansible Container. Ansible Container is a toolkit for building, for orchestrating, and for deploying containers and container applications using nothing but YAML and the Ansible language. So at the moment you have one playbook and you have one orchestration document, and from that you can build all of your container images. You can run them in a development or production orchestration, and you can deploy them through registries to any container engine of your choice. We just went blank. Now, we started out with Ansible Container taking the tools that Docker made very popular and trying to evolve them. So the container.yml, the orchestration document, looks a lot like a Docker Compose file. So the same way that even if you don't know what an Ansible playbook does, you can read it and kind of figure out what it does, if you know Docker Compose a little bit you can read the Ansible Container orchestration document and figure out exactly what's going on. And the steps are pretty easy in the workflow. So you start with ansible-container build, and what ansible-container build does is start with your base images.
So CentOS, Debian, whatever your base images are, and it applies an Ansible playbook to those base images in order to commit built container images. From there you can run those containers, orchestrated using this container.yml orchestration document. We support two different runtime configurations, one for development and one for production, because nobody really runs containers on their dev system the same way that you might run them in production. You can then push those container images to any container registry of your choice, and you can ship to any cloud container platform of choice. Right now we have engines to support Kubernetes and OpenShift, but much like Ansible, it's been designed with a pluggable architecture so that we can plug in new engines as the community demands. So the workflow that we sort of envision is that as a developer on your desktop you would run your build in order to build your images. You would run them with the development orchestration settings and then do a git push, which would kick off your CI/CD system, which would do another build, run it in your production configuration, apply whatever testing you want to your built and orchestrated images, push them to a container registry of your choice, and then, using the artifact created by shipit, orchestrate those container images in production Kubernetes or OpenShift. So the way that it sort of works under the hood is that we're actually leveraging the Docker engine in order to talk between the Ansible Container builder and the target containers that are the artifact. What this means is that there's no SSH, so you don't have to install SSH on your target containers to be able to use Ansible Container with them. We communicate through the Docker engine. Let's just jiggle it, maybe that'll work. And it also means that, because we're using all Ansible, you don't actually have to have a Dockerfile for your target containers.
You can use the Ansible language to describe what it is that you want in that target container. On run, like I said, we support two different configurations. So if you are a handy developer using your dev configuration: in dev you probably want to take your code from your project and mount it into your container. If you're running Node, you probably want to run a watcher in order to constantly rebuild your assets as you change things; if you're using Django, you want to use the runserver, as opposed to using a full-fledged WSGI setup. But in the same document you can specify your production settings, so that your CI/CD system can test using the production orchestration, and when you push it out to OpenShift or Kubernetes you're using the production setup. We support pushing. Push will only push the images that you've built, so if your orchestration uses some pre-baked images off of Docker Hub or off of another registry that your Ansible playbook didn't touch, we don't need to push those images anywhere else for you. And then the shipit command generates an Ansible artifact that takes your orchestration information from your container.yml and pushes that to Kubernetes or OpenShift. Okay, so that sounds really cool. I'm going to skip over that because House has gone out. So it is on GitHub, it's LGPLv3, you can find it right there. I think we're going to turn it over to House now, who's going to do a video demonstration of Ansible Container doing a build and run, so we can keep the screen on. And then we're going to talk about... it's been a fun couple of weeks. We've had some great conversations with some folks here about what it is that we wish Ansible Container could do. So we're sort of in the process right now of gutting it like a fish and working on Ansible Container mark two. We'll talk about what some of those changes entail.
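As a rough sketch of that dual dev/production idea (the service name, image, port, and paths here are illustrative, not taken from the demo project), a container.yml service might look like this:

```yaml
version: "2"
services:
  django:
    image: centos:7
    # Production command: a full-fledged WSGI server
    command: ['gunicorn', '-w', '2', '-b', '0.0.0.0:8080', 'myapp.wsgi']
    ports:
      - "8080:8080"
    dev_overrides:
      # In development, mount the project source into the container
      volumes:
        - "${PWD}:/myapp"
      # ...and use Django's auto-reloading dev server instead
      command: ['python', 'manage.py', 'runserver', '0.0.0.0:8080']
```

An ansible-container run on your laptop applies the dev_overrides section; CI/CD testing and deployment use the top-level production settings.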
Alright, I'm going to try to juggle all this: microphone, demo, video. Alright, so how many folks here know anything about Ansible Galaxy? Alright, a few people, good. Can everybody hear me without the microphone? Oh, sorry, alright, we'll keep the microphone. Alright, is this going to work? Maybe. Alright, pretend I'm showing you Ansible Galaxy. There we go. Alright, so I'm not going to drill into this, but out on Galaxy you'll find some new role types. There's a container app role and a container-enabled role. A container app role is basically a full app; it's multi-container. You can build it, run it, it's all ready to go. A container-enabled role is basically a building block. So a container app is made up of container-enabled roles. If that makes no sense to you, hopefully the demo will help a little bit. So out on GitHub there is, if you can see the top of the, yeah, ansible-container-demo; it's under the Ansible namespace. There are some videos on here. If you're bringing this up on your laptops, please don't start playing the videos, because we'll all flood the Wi-Fi and then I won't be able to show you the videos. So in this first video I'm going to play here, we're going to initialize an Ansible Container project from scratch. You can see the commands there: basically we're going to make a directory, change into it, and then we're going to run ansible-container init, and we're going to pull a container app role off of Galaxy. The name of that role is ansible.django-gulp-nginx, and we'll run this video and then we'll go take a look at what we got from that role. So hopefully everybody can see this okay. So we're just making the directory, we change into it, we run ansible-container init and then the name of our role. It takes a moment; it's going to download it. Okay, and I think it's going to show the directory. All right, so let's go look at this in a little bit more detail. Fantastic.
All right, so where do I want to go here? I just want to go out to the command line. Okay, so if we go take a look at what's in that role... All right, that's probably not very helpful. All right, so we're just going to look at the directory structure of that django-gulp-nginx role, and what I wanted to point out: you'll notice in here there's an ansible directory. And what I want to do is drill into the container.yml file a little bit. Okay, so this is kind of the heart of an Ansible Container project. Hopefully this looks a little bit familiar. I think it's kind of going off the screen here; I don't know if everybody can see that. It's essentially Docker Compose with some nuances. So you'll notice up at the top you see version two; that looks pretty much like Docker Compose. And then we see defaults. Those are actually some variable definitions: we've defined some information about Postgres, a user, a password, the name of our database, stuff for Django, etc. And if we go down a little bit further we see services, right, like Docker Compose. And then underneath that we've defined each of the services, and there are four services in here. We have a django service, we have a gulp service, and if we go down we'll see that there's an nginx and a postgresql. So let me point out a couple of things about these. Let's check out our gulp service. You'll notice there's an image directive. That's telling us the base image that we want to start this service with. And then you'll see some typical directives that you're used to if you've written any Docker Compose: there's a user, there's a working_dir, there's a command. You'll notice the command is /bin/false. So if we actually ran with that command, the container would just immediately stop, right? Then we see some volumes, environment, and then there's this weird thing here called dev_overrides.
So this is something that we added with Ansible Container. If we do ansible-container run, we'll run with dev_overrides. So you'll notice under dev_overrides there's a command. That dev_overrides command directive overlays, or overrides, the command directive we see at the top. And then further down you'll see options. Options are about when we deploy a container out to the cluster and we want to define some specific options. So you'll see that we have some options for kube, or Kubernetes, and then a little bit further down, openshift. So right now for deployment we support Kubernetes and OpenShift. So hopefully that makes a little bit of sense. But basically what we've tried to do in container.yml is build on top of Docker Compose and build in some pieces that let us manage the full life cycle of a container. So we have production settings, we have some development settings, and then we have our deployment settings, all in one file. Does that make any sense at all? Anyone have questions? Sure. Say that one more time. Yes. And I'm actually going to show a demo of that. But we're using this one file to orchestrate the containers on the laptop in development as well as orchestrate them onto Kubernetes or OpenShift. So we'll take a look at what that looks like. Anyone else have questions about the container.yml file? Yeah, so we actually did a demo of that. There aren't specific Ansible modules for Ansible Container; Ansible Container is a CLI tool. So you can write a playbook, use the command and shell modules, and it will work. So in that sense, yes, but there's not a direct integration. So the next thing I have up here, just real quick, is our main.yml file. Jag mentioned that when we build our images, what we're doing is standing up a container based on our base image, and then we run a playbook against it. So main.yml is our playbook.
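Pulling those pieces together, here is a trimmed-down sketch of the container.yml structure being described (the values are illustrative; the real file in ansible.django-gulp-nginx differs in detail):

```yaml
version: "2"
defaults:
  # Variable definitions usable throughout the file
  POSTGRES_USER: django
  POSTGRES_PASSWORD: sesame
  POSTGRES_DB: django
services:
  gulp:
    image: centos:7          # base image for this service
    user: node
    working_dir: /node
    command: ['/bin/false']  # stops immediately outside development
    dev_overrides:
      # Applied by ansible-container run: rebuild assets on change
      command: ['gulp', 'watch']   # illustrative watcher command
    options:
      # Deployment-time settings per cluster engine
      kube:
        state: absent        # dev-only service: not deployed to Kubernetes
      openshift:
        state: absent        # ...nor to OpenShift
```

Production, development, and deployment settings all live in this one file, which is the "full life cycle" point being made above.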
And you'll notice in here the host names actually match the service names in our container.yml. So that's sort of how they tie together, if you will. The container.yml actually provides the inventory to the Ansible playbook. So without further ado, I want to move ahead in the demo and just show you guys what a build looks like real quick. We'll venture back out to YouTube. Lightning-fast Wi-Fi speed, come on. Where did I get to my channel? I can't get anything to come up on the Wi-Fi. Yeah, I don't have an ethernet port. So imagine that I'm building images right now, running playbooks. That might work, might help. Alright, the internet is back. Maybe. Should I turn my Wi-Fi off? I don't know. I'm going to leave it alone. Alright, next video. Alright, so what I'm going to show you next is a build. It only takes a minute and a half, thanks to the magic of... Alright, so what we've done is run ansible-container build in our project directory. And we're downloading the builder container image. The builder container actually contains Ansible; the Ansible playbook actually runs inside of a builder container. So you do not have to have Ansible installed locally. We don't leave any, for lack of a better term, Ansible droppings behind. So everything happens inside the build container. And you can build a local build container if you want to; in this demo I just happened to be pulling from Docker Hub. So now it's going to start the build. So remember, there were four services in our container.yml. So what we're going to see is four services actually start up. And you see them here. Oops. So gulp, nginx, postgresql, django. All those containers are now running. And our playbook's running, and it's starting to run against the django service. Let's let this go. You'll see it go through all the tasks. So it looks like a typical playbook run, if you've ever run an Ansible playbook. And now it's moved on to the gulp container.
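The main.yml playbook mentioned above is an ordinary Ansible playbook whose hosts patterns name the services from container.yml; Ansible Container supplies the inventory. A sketch, with role names made up purely for illustration:

```yaml
# main.yml: one play per service; hosts match container.yml service names
- hosts: django
  roles:
    - role: django-app        # hypothetical role name
- hosts: gulp
  roles:
    - role: gulp-static-site  # hypothetical role name
- hosts: nginx
  roles:
    - role: nginx-proxy       # hypothetical role name
# A service like postgresql may need no play at all
# if its base image is used as-is.
```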
And in a moment we'll see it move on to nginx. And once this gets towards the end, we're going to see it actually commit the images. So what it does is a commit, which basically builds an image out of the container file system. So now it's just plowing through nginx. The playbook finished, and now you see it shutting down the services and committing the images. More YouTube magic. Let's pop back to our project real quick. All right, so if you follow through this demo... Yeah, okay, so here's kind of the section I wanted to highlight. So the role that we executed, right? We said ansible-container init at the very beginning and we gave it a role. And that role was ansible.django-gulp-nginx. That role basically gave us a Django framework, right? And in this project there is some additional source code, and there are instructions here. So if you go through this readme it'll take you step by step, and you'll actually... I'm not going to do it, but you'll see where you're downloading a source archive and you expand it on top of that project. So it gives you some custom Django code, and if we do an ansible-container run, this is what you would actually see. So what did we do? We initialized the project with ansible-container init. We ran ansible-container build to build our images, and now we're going to do ansible-container run to actually run some containers, start up our services with those new images. And that's what this is doing. So you're going to see the containers start up any moment, and then in the demo we'll just walk through the application and kind of show that it's actually working. So inside of container.yml there's a registry directive, so you can put multiple registries there, or you can just specify the path on the command line. So Docker's installed locally. So ansible-container run by default is just going to run against the local Docker. It uses the Docker engine.
I'm sorry? Yes, yes, sorry, for building it. We'll talk about the future in a couple of moments. So this is just running through the application that the demo builds. It's just a little sort of fake social media site. It's adding some posts. Nothing horribly exciting. I think you get the idea. Let's skip ahead real quick. So I'm just back at the repo on the readme. What might be more interesting is if we actually started up an OpenShift cluster and did the deployment. So there are sort of two steps to this in the current version of Ansible Container. We talked a little bit about pushing images; the next video on here is pushing images. I'm not going to show that. And then after that you run a command called shipit. So you run ansible-container shipit. What shipit does right now is generate artifacts, and those artifacts are an Ansible playbook and an Ansible role. So all we're doing is changing into that demo directory, we're going to run ansible-container shipit, openshift is the engine that we want to use, and we're giving it the registry that we want to pull from. So I have a local OpenShift instance running. You can't see it. And that's what local.openshift points to. So that's the registry that we're going to... I skipped the video, but in the previous video it would have pushed the images out there. So now it's just running shipit, and it's going to pull the images from that registry. So what it's done now is it's generated a playbook and a role, which it's going to show you. So there's our playbook. And now we're dropping down into the ansible roles directory, and we see a demo-openshift. So that's the role that it generated. So when I say pull from, what that means is that inside that role, it's referenced... Good grief. It's referenced that registry. We've got like 10 minutes left. Do you want to just jump to yours? All right. The last demo video on here...
So if you go through the demo yourself, look at that readme. It'll actually run that playbook and deploy out onto an OpenShift instance. All right. Where's the picture? I don't need fancy resolution. All right. So let's talk about the short and exciting life of Ansible Container. We are currently at release 0.2 of Ansible Container. We were going to have a 0.3 release next week, until we got to Brno and talked to lots and lots of people and changed a bunch of things. So there will not be a 0.3 release next week, but there will probably be a 0.3 release in the next month, because I think we made some really good decisions that are going to simplify things and make it a better project. So rather than push something out and then change a bunch of stuff, we're going to take a little bit of time, and hopefully what we come up with will be much closer to what we ultimately hope will be Ansible Container 1.0, which will be the first release of this that we expect to be something that everyone can use super easily. So here's a basic list of things we think we want to do. We reserve the right to change this again at any time with no warning. We're developers. That is to say, they're developers and I'm their flak. So number one, a simple converter tool from Dockerfile to an Ansible Container role. There are a lot of Dockerfiles out in the world. We can't get away from that stuff, but ultimately we want to have a complete off-ramp off of Docker entirely. If you've heard Dan Walsh talk about this, you've heard him say that ultimately a Dockerfile is going to be like PDF, right? Many kinds of inputs to create these containers, many kinds of outputs to consume these containers, and we want to PDFify this space. So we want to be able to get rid of the Docker tool chain completely, and this will be one step along that path. Number two, container app. We've got separate container.yml and main.yml, as we went through.
Just doing away with main.yml and putting all that stuff into container.yml is something we're also looking to do. One thing that a lot of people have said is: I love Ansible, but I don't necessarily want Python on my target container. And right now, because it's Ansible, we require Python on the target system. If there's no Python, we can't do anything. Well, we think we've got a clever way around that dependency, so we'll talk a little about that. We want to move more and more of the Ansible Container logic that's in the Python application into the builder container itself, and we're also going to change the name of the builder container to the conductor container, because it's doing a lot more stuff than just building. And finally, we're going to make some fairly fundamental changes to our ideas of shipit and run. So let me drill down into some of this a little bit. So Dockerfiles are shell, right? It's shell script. It's not really even particularly good shell script in a lot of cases. People take shell scripts and dump them into Ansible all the time. It's one of the ways that people first get started with Ansible: they've got a shell script, and line by line they move it in and start doing stuff. So that's going to be step one: we'll just have a little converter that does that for you. Here's a Dockerfile; we're going to slap it into an Ansible file. Step two is going to be to actually look at those commands and say, hey, this looks like a yum install. Maybe you should use the yum module for Ansible; that would make this actually work a lot better. And then ultimately, we're hoping to be able to do some of this stuff automatically and come up with a converter. So hopefully that will help smooth the way for some people. Like we said, do away with main.yml. The real idea here is that we want to be moving more towards Ansible roles.
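To make the two-step converter idea concrete, here is a hand-written illustration (not output from any existing tool) of what converting a single Dockerfile RUN line might produce:

```yaml
# Original Dockerfile line:
#   RUN yum -y install nginx && yum clean all

# Step one: literal, line-by-line translation using the shell module
- name: RUN yum -y install nginx && yum clean all
  shell: yum -y install nginx && yum clean all

# Step two: recognize the package install and switch to the yum module,
# which is idempotent and reports changed/unchanged status correctly
- name: Install nginx
  yum:
    name: nginx
    state: present
```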
So if you go out and look on Galaxy, you will see that there are lots and lots of roles, and most of those roles look suspiciously like a microservice looks. So our big goal with all of this, sort of the strategic goal, is to be able to say: an Ansible role defines a microservice, and with Ansible Container you can take that role and turn it into a container, and with other, more standard Ansible tools you can take that role and turn it into a VM or anything like that, right? That's really the heart of the Ansible Container strategy: to instantiate Ansible roles. So that will make the caching problem much simpler, hopefully. You know, there are just some roles, and if a role has changed, then you rebuild, and if not, then you don't. And I'll make this available because I'm running out of time, so I'll blow through this. Eliminate Python as a dependency for target containers. So we think that we can use that orchestrator... So the way this works is you've got an orchestrator container and then all of the target containers, and the orchestrator container is the thing that runs Ansible, connects to these target containers, runs the Ansible commands in them, then snapshots them and does all the magic, right? If you can put Python in a virtualenv in that orchestrator container, we think you can just mount that virtualenv into the target containers. Have you tried it yet? Not yet, but he's going to get there, because we had this idea like five days ago. Which will mean that you just mount it at build time, run all the Python you need to, and then unmount it when you snapshot, and boom, you were able to do Python stuff without having Python end up on the target container. So it's very clever. If it works, we'll see. It's a much-requested feature, so I think that'll be very popular. Move more of the Ansible Container logic into the conductor container; we talked about that. It also helps to make Ansible Container itself a little more portable.
If the Python that is invoking it is basically just a very small thing, then we can focus on putting all the updates into the conductor container. And then, changes to shipit and run. So we sort of had this idea that run would run in your local Docker environment, and shipit would wrap all that up, create orchestration documents, and go deploy somewhere else. Run and shipit really have a lot in common, especially if you sort of decide that Docker is just another target. So now we're changing shipit to deploy, because instead of just creating the artifacts, we're actually going to deploy. Deploy will basically do what run does, and it will take the push command and incorporate it directly into deploy. So, much simplified. So this is the big thing here. We're hoping that using Ansible playbooks, basically using deploy for everything, will completely eliminate our current dependency on Docker Compose. So that's another piece of the Docker tool chain that we can take out. So that's it. And this will be released... give us a month. So two months. Two months. I think we can get it done in a month. What's the branch called? Refactor slash mark two. So there you go. So that's it. This is where you can find us on IRC and the mailing list if you have any questions. So that's House, that's Jag, and I'm Greg, GDK. So that's it. I guess it's time for questions and such. Yes. So there's a question about the size of the images that get built. So up to 0.2, we take your base image and we commit a single layer for the build on top of that image. So if you're using CentOS, which has five layers, your artifact container has six layers. So that's up to 0.2. For the unreleased would-have-been 0.3, we actually wrote an Ansible execution strategy which would commit a layer per role, per play, and per include. But as we're moving toward 1.0, we're getting rid of that playbook.
In the newer model, you have one new layer per role specified in the service definition in container.yml. So the question is about upgrading the packages inside your container after it's been built. Given that the file system of the target container image ought to be immutable, once you have your container image you probably shouldn't be updating the packages; you should be rebuilding the image. Applying the same role again would take your base image and reapply it with the upgraded package. Does that sort of answer your question? Okay. Thank you. The question is: do we view Ansible Container as being the replacement tool chain for all of the various commands that one might execute in any of the engines? I think that we can cover a lot of the ones that are common between all the various back ends. I think some of them we won't really be able to; they're too specific. But the ultimate goal of the project is: how do we get people who are already comfortable and familiar with the Ansible language and ecosystem from that knowledge into something like OpenShift more easily? So creating the pathway, I think, is right now more important to us than being the all-encompassing tool for all of your needs. I have something to add. So you can use Ansible for a lot of things, and one of the things you can use Ansible for is just to set up whatever clustering environment you want to use as well. We've got a bunch of roles that make it super easy to set up OpenShift clusters on your laptop. So just oc cluster up. There's an Ansible role that House wrote that will just slap that down on your laptop. And you can conceive of a future where there's a whole playbook that is: hey, I want to set this up on my laptop. So step one is run the play that deploys oc to my laptop. Step two is pull down this role from Galaxy that is an Ansible Container role. Step three is instantiate all that and start building.
And then step four is configuring for whatever the production OpenShift is. But the thing about Ansible is that it's a flexible tool that will allow us to take out different chunks and insert different chunks as we need to. And more importantly, it will allow our community to do that as they need. So a lot of the original rationale for Ansible Container was just following the community's lead in what they were doing. They were taking the Docker connection plugin and they were starting to build out things that were like this. We basically just said, hey, let's do that and make it a little more standardized. So to some degree, we'll wait for the community's lead for a lot of that stuff, and we'll sort of watch and see how people use it. And then we'll go from there. For what machines? Virtual machines, right. So I'll be clear here. If you go to Galaxy, you can find an nginx role, and people will just take that and run it in a new virtual machine, and they will have nginx set up. The goal is to be able to take a role like that and make it serve multiple purposes. So you can take that role, write a simple Ansible Container setup and have it deploy to a container, or use that same role with standard Ansible tools and deploy to a VM. So that's the ultimate goal. And the unit of work there is the role. The container is one deployment mechanism; the VM is another deployment mechanism. If you're producing it with Ansible roles, then essentially you're going to write an Ansible Container setup. But if it's just a naked VM, you still have to figure out how to reproduce that VM, right? We're not necessarily just going to be able to snapshot that and shove it in. But that's the kind of work that everyone needs to be doing anyway. Any integration with Tower? This is a project. It is at 0.2. We're not going to integrate anything with Tower until we feel like it's right and it's supportable. So we are tacking towards rightness. Supportability is a question for later.
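The VM side of that reuse story is just ordinary Ansible: the same role that a container.yml service would apply can be applied to a virtual machine with a plain playbook. A sketch, where the host group, role name, and variable are all hypothetical:

```yaml
# site.yml: apply the same role to VMs instead of containers
- hosts: webservers
  become: true
  roles:
    - role: my_namespace.nginx   # hypothetical Galaxy role
      nginx_listen_port: 8080    # hypothetical role variable
```

The role stays the unit of work; only the deployment mechanism (container versus VM) changes.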
That being said, the Ansible artifacts that come out of shipit, and that will come out of deploy in the next one, are simple Ansible markup that would be absolutely doable to put into a Git repo and create a Tower project for. But there's not going to be any explicit integration anywhere on the roadmap right now. Please. So the question is: OpenShift currently has an image building system called S2I. Are there any plans to have OpenShift use Ansible Container for building as a first-class citizen? I couldn't say. It would be up to the OpenShift folks to determine their own project trajectory. But right now, you know, S2I is a great on-ramp into OpenShift; we're happy to provide another on-ramp to it. Last question. No last questions. Please. Fantastic question. So what are the plans to support OCI? Ansible Container has always been built with the idea that the engines behind it would be pluggable, and certainly with the really cool developments in the OCID space we're interested in looking at how we can support that. You know, right now there are very few ways to even build that conductor image without a Docker daemon. So we're looking at supporting runc and OCID as sort of first-class citizens, so that if you don't want to have Docker anywhere on your system, you can still use Ansible Container end to end. Great question. Thanks very much. Thank you.