All right, so up on the screen here we've got a bunch of buzzwords that are pretty meaningless but are really good for getting a talk accepted into a summit. What we really wanted to talk about is the "I have a cloud, now what?" kind of scenario. You've invested in getting OpenStack, and now you're trying to work out what you want to do with it. We thought a good way to demonstrate it would be to build out a PaaS plus a development pipeline that we can show doing a full CI/CD, continuous integration and continuous deployment, workflow, showing a kind of cloud-native way to do things. We take OpenStack, obviously, lay a bunch of other services on top of it to build this out, and we'll talk about those as we go through. And we will have some live demo involved in this, so hopefully the demo gods are kind to us. One of the things we want to do: if someone has a laptop out and wants to participate, that URL up there is a short URL to our GitHub repo, and under presentation/index.html is this presentation. If someone wants to go and add their name to it, or do something horrible to us so it shows up on the big screen and we look like fools, that would be awesome. And as we get to the end of this, we will actually demo taking that change through to production. So quickly about us: my name is Paul Czarkowski. I am a cloud engineer, systems engineer, something like that at Blue Box. I do infrastructure automation, and I also spend a lot of time herding cats. All right, my name is Raghavan Srinivas, and I go by Rags. Essentially Paul is more on the ops side of the world and I'm more on the dev side of the world, and we are both empathetic to each other. That's kind of the whole idea behind doing this talk: to give a little bit of both the operator and the developer perspective.
Can everybody hear me okay in the back? Okay, that's good. So like Paul said, we threw in a whole lot of technologies, because it's always fun to play with these, right? But as we walk through the slides, you'll get an idea of where we're using each one and why. Obviously we're at an OpenStack summit, so the infrastructure-as-a-service layer is built on top of OpenStack. It's on Blue Box Cloud, and I'll talk about Blue Box in a bit for those of you who may not have heard of it. We use Terraform for orchestration. We wanted to go with immutable infrastructure, because that's really the way to do CI/CD, right? We tear down and bring up infrastructure on the fly, and Terraform is really good for a lot of that, including the OpenStack clusters that we spin up; those are done using Terraform as well. There are some restrictions in Terraform, especially with respect to Load Balancing as a Service, and I'll get to that in a moment. We use Deis, which is a platform as a service, and why we use Deis will be apparent in a little bit. If you're mainly building twelve-factor apps, stateless apps, then a platform as a service is great. Heroku introduced this concept of a twelve-factor app, which is essentially a stateless app, and it's very easy to scale it, very easy to get at the logs, very easy to roll back versions and all that. The platform as a service provides that, and especially with Deis, if you're used to the git push or Heroku style, Deis is very, very familiar, and even if you're not, it's pretty easy to use. But we realize that not all applications are going to be stateless. There is some state that you probably need to store.
For instance, we're actually doing something with Jenkins, adding some plugins and doing some configuration on it, and we store that back in an object store. The object store is front-ended by the Docker Swarm cluster: the Docker Swarm cluster sits in front, and on the backend we have a Swift-based object store, and we'll see that in a second. Obviously no CI/CD presentation is complete without at least mentioning Jenkins, so Jenkins is in there. We store all the images in a Docker registry. Actually, there are two Docker registries: one associated with the Docker Swarm, and then Deis itself uses a Docker registry internally. We won't worry about the Deis one, but we will talk about the Docker registry where we push some of our stuff. And then any CI/CD setup has to have a Git repository, right? So we start with GitHub, and hopefully some of you have already started a PR. Anybody already submitted a PR, or anybody interested in doing one? We wanted to keep this as lively as possible, given that it's almost the last day of the summit, so we can put the URL back up, and if somebody can do it, that'll be great. From an architecture perspective, this is how it looks. We have a three-node Deis cluster and a three-node Docker Swarm cluster, and essentially you push changes through GitHub. It's exposed to the internet through a DNS wildcard, which is ATX 2016 here. I don't know if you can see it there, probably not, but the idea behind doing that is, like I said before, with Load Balancing as a Service there are some restrictions in Terraform. So we decided to go with an IP that's always available, pointing to the DNS wildcard, and thereby you can get to the dev and staging clusters as required.
You can see here that it's backed by the Swift object store; that's really the main OpenStack storage piece we're using. And if you think about the architecture here, it's pretty cloud-neutral. You can do it on another cloud if you want, on other infrastructure if you want, and if you don't want Swift and use some other object store, that's perfectly acceptable as well. Again, like I said before, Deis is for the stateless apps, and the Docker Swarm cluster with a Swift backing store is for the stateful apps. You can use either, you can use both, whatever works. So Blue Box Cloud is Private Cloud as a Service. As you can probably infer, it supports most of the OpenStack components: Keystone, Nova, Neutron, Cinder, Swift, Heat, LBaaS v2, and I'm sure there are more. But we're not using all of these. Obviously we're using Nova to spin up the cluster nodes, Neutron for the networking, Swift, and Load Balancing as a Service. You saw all of that in the previous architecture diagram. So that's the OpenStack part of it. Moving on to how we bring these clusters up and tear them down, we use Terraform scripts. Terraform is really cool because there's no vendor lock-in, and that's one of the things you're looking for in this kind of situation, because we believe enterprises doing CI/CD are probably going to have more infrastructure than just OpenStack, and you can use Terraform scripts across all of it. It's multi-cloud infrastructure, and one of the cool things about Terraform is that you can have your resources across different clouds. So it's possible to have my Swift cluster somewhere else, my Docker Swarm cluster somewhere else, and my Deis cluster on AWS or something like that.
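As a sketch of that multi-provider idea: a single Terraform configuration can declare several providers side by side. Both providers shown here are real Terraform providers, but the region is a placeholder, and the credential sources are just the usual Terraform conventions, not the exact setup from our repo.

```hcl
# Two providers in one configuration: OpenStack for the Swarm/Deis
# nodes, AWS for anything you might want to run elsewhere. Credentials
# come from the environment (OS_* and AWS_* variables), which is the
# standard Terraform convention.
provider "openstack" {
  # auth_url, username, password, tenant name are read from
  # OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME.
}

provider "aws" {
  region = "us-east-1" # placeholder region
}
```

Resources then reference whichever provider they belong to, so one `terraform apply` can span both clouds.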
All of those combinations are possible, and Terraform makes it very easy to do; otherwise you'd have to do a lot of that by yourself, which is kind of cumbersome. How does the script look? This is the script for standing up the Docker Swarm cluster. You'll see here it has a name, an image name, a flavor name; things are pretty straightforward and correspond to what you'd think they should be. The thing to note is that we're using names that are either derived or available through environment variables. So the whole mantra about your infrastructure being in your code is embodied here. You'll see the security groups, the floating IP and so on are derived from previous Terraform resources. Pretty straightforward. Also, Terraform comes from HashiCorp; if any of you have used Vagrant or Packer, you know they make some pretty cool products. So let's do a very quick demo of how this whole thing is stood up. The whole idea of not doing this in real time is that it takes some time, so instead I'll walk you through a video. Okay, so that's the demo. It starts off with the script, obviously. The first thing you're going to do is a terraform plan, and what the plan does is walk through the script, kind of a sanity check if you will. It walks through the script and gives you an idea of how this cluster is going to look. Can everybody see this? Oh, sure, let me do a full screen. Yeah, that might be better. Thank you. I knew I'd do that. Is that better? Okay, thank you.
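A minimal sketch of what such a resource can look like in Terraform's OpenStack provider. The names, variables, and count are illustrative, not the exact script from the repo, but the shape matches what's on the slide: image and flavor come in through variables (settable via `TF_VAR_*` environment variables), and the key pair and security group are derived from earlier Terraform resources.

```hcl
# One node of the Swarm cluster, three copies via count:
# swarm-0, swarm-1, swarm-2.
resource "openstack_compute_instance_v2" "swarm" {
  count       = 3
  name        = "swarm-${count.index}"
  image_name  = "${var.image_name}"   # e.g. an Ubuntu cloud image
  flavor_name = "${var.flavor_name}"  # set via TF_VAR_flavor_name
  key_pair    = "${openstack_compute_keypair_v2.swarm.name}"

  security_groups = ["${openstack_compute_secgroup_v2.swarm.name}"]

  network {
    name = "${var.network_name}"
  }
}
```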
So essentially it walks you through that and tells you whether it looks reasonable. And once you're satisfied with that, then what you're going to do is... okay, some technical difficulties here. Yeah, I don't know if it's crashed. All right, so let me go back to QuickTime. These are the fun parts of the demo. I'm going to bring that up again, and hopefully this is going to go through now. Okay, so what I did there was the plan, and now I'm applying that plan. Here you can see it going through the creation of the three instances for the Docker Swarm, zero, one, and two. It's creating them: what is the IP, what are the image names, what are the key pairs, and so on. So let's just run through this, and you'll see zero, one, and two, which are the different Docker Swarm nodes. Once it's done with those, basically what happens is we get a Docker host, and you'll see that all the Docker environment variables are available to us. We use those Docker environment variables to connect to that particular host. So let's finish this up, and you'll see there are three different addresses, somewhere here, zero, one, and two; it just kind of flew by. Oh yeah, right there, zero, one, and two. Essentially it's created everything and we're done. And the final part is the destroy, which just happened there; that's just for completeness. So let me go back to my presentation. While he's switching screens there: in real time, that creation and destroy of the Docker Swarm takes about a minute and a half, so it's still quite quick.
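The lifecycle shown in the video boils down to a short sequence of commands. This is a sketch of the flow rather than the repo's exact invocation, and the `docker.env` file holding the generated `DOCKER_HOST`/TLS variables is an assumed name:

```shell
terraform plan             # sanity check: show what would be created
terraform apply            # actually create the three Swarm nodes
. ./docker.env             # hypothetical: load the DOCKER_* vars the run emits
docker info                # talk to the freshly built Swarm
terraform destroy -force   # tear the whole environment down again
```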
So it's definitely doable that you could have a CI pipeline that would spin up a full Docker Swarm cluster, do some tests, make sure your infrastructure is working, and then kill it again within a few minutes, which is really cool in that sort of short amount of time. The other thing we do in Terraform, which wasn't totally clear, is we actually pre-create the TLS certificates and authorization material, so that only you can connect to the Docker Swarm. That way, someone who port-scans your OpenStack doesn't just find your Docker host and start running containers on you. So you saw this, and we did the Terraform part. Let's look at the next one, which is Deis. Can everybody see this? It's weird working with multiple screens. Let me go back and bring that up again. All right, so essentially Deis, like we said, is great for stateless apps. If you really don't have a whole lot of data, or no data at all, it's really cool to be able to create the application, scale the application, roll back a version, all very straightforward. What it does is snapshot each release, and you can use that for your subsequent deployment options. Pretty straightforward, and that's exactly what we leverage. There are really three big components in Deis: the control plane, the data plane, and the router, and each of them is pretty obvious as to what it does. The data plane is really where the containers are created; the control plane essentially orchestrates how they're created; and the router routes traffic to those containers. The nice thing about this is you can scale the data plane independently of the control plane or the router mesh.
So in other words, if you have an application that needs 1,000 containers, you can just scale the data plane 1,000 times rather than anything else. This is roughly the workflow, which, like I said, might seem very familiar if you've used Heroku or any of the other platforms as a service. The idea is that you push some code into Git, and that spins off a build, which gets stored in the Docker registry. Each release is snapshotted, it's eventually deployed, and traffic is routed to that particular container. And again, if you want to roll back or go to a specific version, you can do that very easily, because all of those are in the Docker registry, and you'll see that once we go live. So, a very quick demo of this; let me go through it here because it's pretty straightforward. Can everybody see in the back? I can walk through it anyway. Essentially you start off with a git clone. Here's a very simple example of a Dockerfile, and the first thing you do is a deis create. Then what we did here is make some changes and push them to master, and that kicks off a build, as you can see here. Eventually it's on master, and that's snapshotted in the Docker registry. And you'll see here that if I do a curl on that particular URL, it comes back with "powered by Deis".
And then what I did was change a config variable to something else, and eventually you'll see that the curl came back with different output: instead of powered by Deis, it was powered by Paul. So pretty straightforward, and you'll see how I can look at the logs. So this is the example where I changed the configuration, these are the logs, and here's how I scale: all I do is deis scale and how many instances of it I need. You'll see that it spawned four instances, and in the logs at the very end you can see there are four instances of that particular container running. The idea is that if you have a simple stateless app, you can make do with a platform as a service like Deis, and you really don't need any of the complex infrastructure you might typically need otherwise. So with that, let's move on to Docker Swarm. Paul, you're ready to go? Yeah, so we knew we would have some stateful applications, one of which being Jenkins, and also a Docker registry in which to store that Jenkins image, so we can access it from other environments and keep it persisted as we destroy and recreate our development environments. And the reason we chose Docker Swarm is for its simplicity, both to spin up and operate as well as to use. If you've used Docker, you can use Docker Swarm. The commands are all exactly the same, with just a few extra things to be able to do scheduling and such. So you can say run this container next to that container, or make sure this container runs on a separate host from that container.
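Those placement rules are expressed to classic (standalone) Docker Swarm as affinity filters on ordinary `docker run` commands, passed via environment variables. A sketch, with the container names and images made up for illustration:

```shell
# Against the Swarm manager: DOCKER_HOST points at the Swarm port,
# with the TLS variables Terraform generated for us already exported.
docker run -d --name web nginx

# Schedule next to wherever the "web" container landed:
docker run -d --name cache -e affinity:container==web redis

# Force onto a different host than "web":
docker run -d --name batch -e affinity:container!=web alpine sleep 600
```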
And it brings multi-host container scheduling to you. This is what it looks like from an architectural point of view. Usually you would have more than two hosts, but this is just what fits on the screen. You have a load balancer sitting in front that load-balances the Docker Swarm port to the Swarm managers. The Swarm managers are watching service discovery, in this case etcd. And then we have the Swarm agent watching the Docker server itself, doing some health checks and registering it back into etcd. So we form a sort of circle of information that the Swarm manager can use to determine what hosts are up, what's healthy, and what containers are running on them, so it can make intelligent decisions about where to route a call or run a particular Docker container that you asked for. We're also running the Docker registry. We actually run this listening only on localhost on each of the Docker Swarm nodes, and we have the storage for it backed by Swift. The reason we do that is so we don't have to secure the front end of it. We don't need to add SSL, we don't need to add usernames or passwords, because it's only listening on localhost. The only way to get to it is by being on that localhost, which means you're already a trusted user. And we store our Jenkins image and a couple of other things in there. The reason we do that is that configuring Jenkins can be really hard to automate. So what we ended up doing is we took the Jenkins image from the Docker registry, started it, went to the Jenkins UI, added, say, the GitHub Pull Request Builder plugin and a few other plugins, and also set up the Deis client. Then we committed that using docker commit and pushed it up to our registry, which, being backed by Swift, now means we can destroy this whole environment, bring it up again later, and our state persists inside Swift.
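That localhost-only, Swift-backed registry can be expressed with the Docker Registry v2's built-in Swift storage driver. A sketch of the config file; the driver and its keys are real, but every credential, URL, and name below is a placeholder:

```yaml
# config.yml for the registry:2 image.
version: 0.1
storage:
  swift:
    authurl: https://keystone.example.com:5000/v2.0   # placeholder Keystone endpoint
    username: registry-user                           # placeholder credentials
    password: secret
    tenant: registry-tenant
    container: docker-registry                        # Swift container for layers
http:
  addr: 127.0.0.1:5000   # listen on localhost only, as described above
```

Because layers live in Swift rather than on the node's disk, any Swarm node running this config serves the same images, and the nodes themselves stay disposable.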
So we have four jobs in Jenkins to do the actual work for our workflow. This is an example script for the one that deploys our dev environment. You can see it's really simple: we're grabbing the pull request ID and turning that into our Deis app name, and then we're doing a deploy. We do the deis create, and if that fails, we know it's because the app already exists, so we just add the remote and do the push. That way, on a pull request it will spin up a dev environment, and if someone updates that pull request, it will simply update that environment rather than destroying it and recreating it. And then this is our application itself. It is this presentation. It's a pretty simple HTML-based presentation tool called reveal.js, so all we really needed is something that could host an HTML page. We use Caddy, which is a fairly small Golang-based web server; we just add our local directory to it and tell Caddy where to find our HTML file. And then this is probably the important bit: our actual development workflow. You can see there are three actors here besides the humans actually doing the dev work: GitHub, Jenkins, and Deis. So, a quick walkthrough. The developer will be working on some new code, they'll push a feature branch up to GitHub, and then they'll create a pull request in GitHub. That kicks off a webhook to Jenkins saying there's a new pull request, and Jenkins goes, great, I'm going to run tests and then I'm going to deploy it. Now, our tests here are a no-op, because it's a pretty basic HTML page, but realistically you'd want to run some linting, unit tests, that sort of stuff. If they're successful, it does the deploy, which creates our application, named for the pull request it is. Then it passes the URL to that application and the test results back to GitHub, so anyone reviewing it can see the output.
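A sketch of what that "deploy dev" job's shell step might look like. The app-name prefix and the Deis remote URL are illustrative, not the exact script from the repo; `ghprbPullId` is the variable the GitHub Pull Request Builder plugin injects; and the `DRY_RUN` guard just echoes the commands so the flow can be traced without a live cluster (you'd drop it in a real Jenkins job).

```shell
# Hypothetical Jenkins build step, not the exact script from the repo.
DRY_RUN=${DRY_RUN:-echo}                 # remove in a real job
ghprbPullId=${ghprbPullId:-42}           # injected by the GitHub PR Builder plugin
APP="presentation-pr-${ghprbPullId}"     # one Deis app per open pull request

# `deis create` fails when the app already exists (the PR was updated,
# not opened), so fall back to just (re)adding the git remote.
$DRY_RUN deis create "$APP" \
  || $DRY_RUN git remote add deis "ssh://git@deis.example.com:2222/${APP}.git"

# Pushing the PR's HEAD to Deis triggers the build and deploy.
$DRY_RUN git push deis "HEAD:refs/heads/master"
```

Because the create step degrades to adding a remote, the same job handles both opening and updating a pull request, which is exactly the idempotence described above.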
So then the reviewer comes along and reviews it: the code looks fine, the tests passed, and they go to the URL and see the actual presentation up there and it looks good. So they merge that pull request into the develop branch, and the develop branch in this case is basically our staging environment. When that merge happens, we kick off two tasks in Jenkins. The first is to destroy the dev environment; it's no longer needed because it's been tested. The second is to actually update our staging environment. At each stage it runs tests, which as I said are a no-op for this. So it deploys up to the staging environment, and that way you can go to staging and look at your changes along with anyone else's changes that have been staged, so it may contain other people's work as well. And then a release manager, or someone acting in that sort of role, will come along and say, okay, it's time to push these features up to production. They will again go to GitHub, take the develop branch, and merge it into the master branch, and the master branch is basically our production environment. GitHub will again send a webhook through to Jenkins, which runs your tests one final time and then deploys out to production, cutting a new release of your application using Deis and pushing it out. And the nice thing is you can have multiple dev environments, right? That's right. There will be an active dev environment for every pull request that's currently open. And now I'm just going to quickly change laptops, and I hope everything survives. All right, so this is our Git repo here. I have a pull request I made in case we didn't get any others. Oh, we have a bunch! All right, we've got to decide now which one. Let's look at this top one here. So we can do some review here, right? We can see there's one commit and some changed files, and we're changing... there we go.
So we're changing the Blue Box there. So if we click across to Jenkins itself... You want to go look at the other pull request, or just go with this? Let's go with this one. All right, let me take it off full screen; it's not showing the whole thing. We can see here, let me refresh. Okay, it hasn't done them all; there must be a queue or something happening. It hasn't managed it yet. But we do have this one, so let's look at it. All right, we have some changes there. So this is basically the Jenkins job having run: it's deployed, and it's created a URL we can click on. And so if we switch across, we have Boaty McBoatface as a presenter now, and our HTML is going wonky. I think it's the Boaty McBoatface change that broke the HTML file. Right. Is it just our screen going weird? I'm trying to figure out... yeah. Okay, so here's this Boaty McBoatface one, and we're going to merge that. Yeah, it's happening there too. So that's merged into our development branch. So if we come back to Jenkins, you can see we've got a new job there, which is deploy staging. Let's click on that and see what it's doing. You can see it's updating our staging application, and this is Deis basically streaming back output: it's currently doing a build inside itself, and then shortly it will show itself pushing that up to the registry. This was much faster earlier when we tested it. You're right, it's pretty slow, right? So what it's doing behind the scenes is talking to the Deis control plane. Basically it does a git push, and Deis has a Git server with a bunch of post-commit hooks that go and look at the Git repo itself. It sees there's a Dockerfile in that Git repo, so it goes, okay, I know how to build a Dockerfile; I'll build it and then deploy it. So it did the build, and now it's pushing it to its private registry before it then updates the staging application. All right, okay, so it's successful.
So if we come across to our staging environment here and refresh it, and just give it a second... oh, that's actually production, I went too far. Here's staging. There we go, we can see our new presenter there. And then we would merge that into master, which would kick off another build, and we would then see it in production. Given that it just took about five minutes to run through, we probably don't want to run it again and waste a bunch of time. So with that, I think we can pretty much call it a presentation and answer any questions if we have any. You want to put the links up? Oh yeah, we do have a bunch of links, both through the slides and also here, and we'll probably update it with some more stuff going forward. Everything we have done is up in that Git repo: all of our Terraform configs and everything. So if you want to look at all the underlying work that we did, it's all in that repo, along with the presentation. Questions? If you want to come to the mic, that'll be great, I appreciate it.
I have a question about Deis. Is it intended as a single-tenant or a multi-tenant application?
It is multi-tenant. I'm not sure you would want to use multiple tenants from, like, disparate companies, because it's still running in Docker, right? So you still have the "do I really want Docker to be multi-tenant?" question. But you definitely register users, and each user gets their own password, pushes their SSH keys to it, and can deploy through Git and so on.
Excellent, thank you.
Any more questions? We still have some time. All right, thank you very much, everyone. Thank you very much.