Thank you all for coming. Our talk is on automating your automation. I'm James Hunt, the chief architect at Stark & Wayne. We do lots of Cloud Foundry stuff, BOSH stuff, Concourse stuff, and I like to tell people that they pay me to make less work for myself. So far it's worked out; I'm not sure how long I can keep that up.

Hi, I'm Kevin Yutman. I also work at Stark & Wayne. I help with the community pipelines and open source projects, and I go on site to help customers get their pipelines up and going.

So we're going to start with Act 1. When you're on site or working with someone, you're pairing, you're working on your pipelines, you're typing away, and then you get to the point: okay, I've got some code changes, I want to update the pipeline. So, what's the command? Oh yeah, Ctrl-R. Let's go. Here we go. Unpause pipeline — no, that's not the command I want. Okay, let's go through the history. Okay, here are the commands I want. If you look at the commands you're usually running, you're typing them quite a bit. They are long lines with lots of information in them, and it's easy to fat-finger them. If you're working on multiple pipelines, there's duplication — you have to type it all again. And you know pretty much which pipeline it's going to be; the pipeline you're working on knows what it is, but you're typing it in anyway. So what if we wrap this up in a script? You put the details into a YAML file, the script pulls them out, and now you have one simple command. And while we're at it, why don't we also check for missing commands, run the set-pipeline and the unpause, and handle the secret stores. So we wrote a script. It's not too big — a little small to read here, but you can get it from the website afterwards. At the top, we check for the required commands.
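A minimal sketch of what such a repipe script might look like — the file names, the `meta.target`/`meta.pipeline` keys, and the use of jq are our assumptions here, not the literal script from the talk:

```shell
#!/usr/bin/env bash
# repipe (sketch) - merge settings into the pipeline config, push it, unpause it.
# Assumptions: pipeline.yml and settings.yml sit next to this script, and
# settings.yml carries meta.target / meta.pipeline keys for fly to use.
set -eu

need() {
  # fail fast, with a useful message, if a required CLI is missing
  command -v "$1" >/dev/null 2>&1 || { echo "repipe: missing command: $1" >&2; exit 1; }
}

build_fly_cmd() {
  # compose the long fly invocation that nobody wants to type by hand
  echo "fly -t $1 set-pipeline -p $2 -c .deploy.yml"
}

main() {
  need fly; need spruce; need jq
  # merge the base pipeline with local settings; secrets stay in Vault/CredHub,
  # so nothing sensitive ever lands in git
  spruce merge pipeline.yml settings.yml > .deploy.yml
  target=$(spruce json .deploy.yml | jq -r '.meta.target')
  pipeline=$(spruce json .deploy.yml | jq -r '.meta.pipeline')
  $(build_fly_cmd "$target" "$pipeline")                  # set the pipeline
  fly -t "$target" unpause-pipeline -p "$pipeline"        # ...and wake it up
}

# main "$@"   # call main to actually run; left commented in this sketch
```

The real scripts in our repos vary a bit from project to project, but this is the shape of the idea: all the long-lived detail lives in YAML, and the command you type stays short.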
And here we handle secrets, with a Spruce merge — whether you want to handle secrets with Vault or CredHub or pull them from wherever, we just make it standard. It takes away the whole problem of: hey, I'm about to push — oh, I don't have the credentials, can you send them to me on Slack? Can you send them to me on Skype? Just put it in the script. And then finally, we do the set-pipeline and the unpause and get going.

So now you have just one command. You pair with your partner, you make your changes, you get to the end, and all you do is type repipe. It figures out everything for you. When I started doing this, productivity went up, because you're not fat-fingering anything. You can focus on editing your pipelines, you can focus on the code, and all this other stuff gets out of the way. We have this on pretty much all of our pipelines now. If you want to have a look at an example, we have one in our sample app here, but you can also go to any of our other pipelines. They will all be slightly different — every pipeline may have something special about it. Maybe there are two pipelines, one for Azure and one for AWS, or one for Windows and one for Linux. So you take this code, make whatever modifications you want, and just commit it into a ci directory, like here. And the wonderful thing is, if you have Vault or CredHub or some other store, all the credentials live in there and nothing is leaked. That makes pairing and working with pipelines very nice. But...

So, Act 2: we're going to talk about pipeline templates. We've been using Concourse for, well, as long as there's been a Concourse to be used. I'm sure several of the people in the audience have learned Concourse by starting with the Stark & Wayne Concourse tutorial. Dr Nic has done a pile of work on keeping that up to date, keeping it useful, and keeping it relevant.
We are huge, huge believers in automation — automation of our software, automation of our packaging, automation of our deployments. And out of that, we've started to see a couple of patterns. For example, this is every software release pipeline ever. We write a lot of open source code, and we like to release it with release notes and versions and binary assets, so you can download the CLIs that we build and know what's changing when you do the updates. So we have a lot of commonality between all these software projects. We have a lot of version management in the pipeline. We need to pull in commits from master. We need to run our test suites. We need to fail the pipeline if the test suites fail — because otherwise, why have test suites? We need to keep track of our versions as they push through the pipeline, and we need a manual operator involvement that we like to call "ship it". The bump-patch step there on the end — right about there — is actually a recent addition to our pipelines. It's a neat little hack: when we're done pushing version 1.0.0, we just have the pipeline bump us to 1.0.1, because it saves you from those embarrassing "well, which 2.0 are you running?" conversations — there was one in February and then another one in March, and they are very different.

We used to write these by hand. We used to have a lot of YAML files, we used to have Spruce merges, we used to have copying and pasting from repo to repo. And now we do this: we go into our code base — the next big thing — we pull down our pipeline-templates repo, and we say, I need a new pipeline, and this is a Go project. That means a couple of things. It means we run our tests via go build. It means that when we do our actual packaging and ship-it, we build for Darwin and we build for Linux and we build for Windows. Then, once I've got my pipeline skeleton, I go into the CI settings and give the pipeline the very specific bits of information.
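Before we get to those settings: the bump-patch hack from a moment ago is just semver arithmetic. In our real pipelines a Concourse version resource does this work; a throwaway shell version, just to show the idea:

```shell
# bump_patch (illustrative) - after shipping X.Y.Z, move the version on to
# X.Y.(Z+1) so nobody can accidentally ship the same number twice.
bump_patch() {
  major=${1%%.*}       # "1.0.0" -> "1"
  rest=${1#*.}         # "1.0.0" -> "0.0"
  minor=${rest%%.*}    # -> "0"
  patch=${rest#*.}     # -> "0"
  echo "${major}.${minor}.$((patch + 1))"
}

bump_patch 1.0.0   # -> 1.0.1
```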
What GitHub repo are we talking about? Here's the S3 bucket. Where in the Vault are you going to find the creds to do all this? And then I run ./ci/repipe. I've gone from nothing to a pipeline in about 60 seconds.

This is every BOSH release pipeline ever. We take a lot of our software and package it up as BOSH releases so that we can deploy it in all of our environments, because we love BOSH. These all look alike too. You run test flights, where you pull down the code, try to build a release, and upload it to a BOSH Lite somewhere to see if it will deploy. And if it deploys, then we can go through the ship-it. We used to write these by hand too. And now we do this: I go into my next-big-thing BOSH release, I call the pipeline-templates setup — now I want a BOSH template instead of a Go template — I go into the CI settings, I tell it where GitHub is, I tell it where Vault is, and then I run ./ci/repipe. Now I'm releasing my software with Concourse and I'm packaging it with BOSH on Concourse.

This is every Docker image pipeline ever. There's a pattern; there are pipeline templates. We build images and then we promote them manually, similar to our ship-it. Same story: go into the code repo, pull down pipeline-templates, say I want a Docker pipeline, update my CI settings, run repipe. These are all things that you can use today. You can use our BOSH pipeline templates, you can use our Go pipeline templates, you've got Docker images — you can use those too. They're all available on the Stark & Wayne GitHub org, under the pipeline-templates repo.

So: we have repipe, which makes pairing and pushing stuff trivial. We now have a whole bunch of templates that handle all the resources. So let's go for something bigger. What if you're running Pivotal Application Service — Pivotal Cloud Foundry, Ops Manager? What if you're standing it up on an IaaS — can you pipeline that?
Yes — Pivotal has written the pipelines for that. Pivotal wrote Concourse; they love it, we love it. And they have a large repo with basically tons of things. So once you've gone through the Concourse tutorial and gotten your feet wet, and you've started playing around with the pipeline templates (which are great), and you've gone through the documentation — well, it's always good to have more examples, more things to look at. Because a lot of our environments are snowflakes, but they're not really that different from one another; they're not as big a snowflake as they could be. Somebody else probably has an environment very similar to yours — same firewall, same network. So having a look at some working examples is a great way to expand your understanding of pipelines and of Concourse.

And PCF Pipelines has examples for things like govc, the utility that talks to vSphere. So there's a pipeline that will import Ops Manager, right? It imports it, pulls out all the configuration into a JSON file, and you can use jq — they have a nice little example — to twiddle some settings, update them, and put the file back in again. Well, that's the way they've written it. What if your team wants to use a different tool? Do you have to start from scratch? No — there are examples. You can just go in, make some changes, and you're 80% of the way there. They have examples for things like Terraform, and AWS, and Azure. And they've got things for downloading stemcells and pulling from Pivotal Network. So it's a great resource for doing things that are a little bit unique. And if you're working with Pivotal, they can help you with it, right? The official builds are on Pivotal Network, but there's a GitHub repo that you can look at, and it's a great training resource. They're already written, and for most people they just work. For some people they get you 80% of the way and you have to make some modifications. But now that you're not starting from scratch, you can do a little bit of innovation.
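That export/tweak/import loop is mostly jq in the middle. A toy version of the tweak step — the JSON shape here is invented for illustration, not real Ops Manager configuration:

```shell
# Pretend this came from an export step (e.g. govc pulling Ops Manager
# settings down into JSON). The structure is made up for illustration.
config='{"network":{"name":"infra"},"ntp_servers":["0.pool.ntp.org"]}'

# Twiddle one setting with jq, leaving everything else intact...
updated=$(printf '%s' "$config" | jq '.network.name = "deployment"')

# ...then the real pipeline would push the updated JSON back in.
printf '%s\n' "$updated" | jq -r '.network.name'   # -> deployment
```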
One of the pipelines we were standing up, we did the standard thing of contacting the IaaS team and saying, hey, we need to get the datastore name, the URL, the username, the password — all the information. And the IaaS team responded: why? Well, we need to put it in the pipeline so it can pull the information out. "Why don't you pull it from the API?" was their answer. "We're giving you, in this case, a full vCenter to yourself. No one else is sharing it; all the resources are yours." And so we were like, okay. So now that we had that, we started writing a little script that grabs the username and password and logs into vCenter — in this case I'm using the Ruby gem, but there's a Python one, there's a Go one. And okay, let's start pulling out some information. What's the datacenter name? What's the datastore name? We put it into a hash, we dump it to YAML. This is just something very quick to fit on the slide — hey, here are the params: vCenter, datacenter, right? So now we've generated some YAML that will feed into one of the existing pipelines.

But James would look at me and say: that's not good enough. No — because what are we doing? We're pulling secrets from somewhere. That sounds like more work for me to do, so let's automate this. So we went a bit further and wrote a little wrapper. James has a wonderful program called safe that talks to Vault. So we just have the IaaS team say, here's where in Vault it is; we pull out the values, print out what we found, and call our script and pass them in. So we went from having the IaaS team send us a spreadsheet with all the values, to a script that can pull them out when we give it a little bit of information, to basically a script that pulls out all the values for you. And if the IaaS team changes the password and puts it into Vault, you don't have to go and update yours. It just spits it out.
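A sketch of that wrapper idea. The Vault paths, key names, and the extract-vcenter-params script are all hypothetical; only the pattern — safe reads the values, the script consumes them — comes from the talk:

```shell
# Hypothetical wrapper: the IaaS team only tells us *where* in Vault the
# vCenter creds live; safe pulls the current values on every run.
vcenter_url() {
  # compose the connection URL the vSphere client libraries expect
  # (args: user, password, host)
  echo "https://$1:$2@$3/sdk"
}

fetch_and_extract() {
  # everything below talks to Vault and vCenter, so it is illustrative only
  user=$(safe get secret/iaas/vcenter:username)
  pass=$(safe get secret/iaas/vcenter:password)
  host=$(safe get secret/iaas/vcenter:host)
  ./extract-vcenter-params "$(vcenter_url "$user" "$pass" "$host")" >> settings.yml
}

vcenter_url ci-bot s3cret vc01.example.com   # -> https://ci-bot:s3cret@vc01.example.com/sdk
```

The payoff is exactly what's described above: a rotated password in Vault flows through on the next run with no spreadsheet and no manual edits.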
So that is one of the reasons why I'm suggesting: look at all the resources — the Concourse tutorial, the PCF Pipelines. The minute you're not focused on writing the pipelines themselves, you can start focusing on what comes further along. So thank you, Dr Nic — we stand on the shoulders of giants.

Act 4: Genesis automation. Does anyone here know what Genesis is, or have used Genesis? Raise your hands — no, not you, Stark & Wayners. Okay, a few people. So we do a lot of deployments. Our day-to-day consulting is largely helping people stand up large-scale Cloud Foundries across multiple infrastructures and multiple regions inside those infrastructures. And obviously, as with anything we've had to do more than once, we've been trying to automate it for as long as we've been manually standing things up. The culmination of all of this automation is a project we call Genesis, which brings a lot of release engineering to the table. But what I want to talk about today is the automation of the deployments. I left my clicker over here.

So, Genesis's target audience is people who have more than two Cloud Foundries. If you have dev and prod, that's not too hard to automate. But when you have dev to staging to QA to prod, and you have multiple regions — east, west, US, some in China, some in the UK — and you're dealing with vSphere teams and Amazon teams, et cetera, you are going to want a pipeline that properly vets your changes as they go down the runway: via smoke-test errands, any additional errands or automation, monitoring, et cetera. And you want something to actually put all of that together into one big picture, which is a Genesis pipeline. Now, it's really hard to read these things on screen, but basically each of those green boxes is an environment, or a notification about changes pending for an environment. This is several thousand lines of very dense Concourse YAML.
Anyone who's ever tried to do more than a couple of software release pipelines with Concourse knows it can get out of hand awfully quick. In this particular case, I think we're dealing with 13 different environments, and four of them are manual — they don't automatically propagate changes. But reading from left to right: dev goes into staging, staging goes into the QA environments, and the QA environments feed into prod. The way that works inside Genesis is: you make a change — you upgrade Cloud Foundry, you rotate some secrets, you scale up a Diego cluster — and Genesis figures out which environments it needs to deploy those changes to, and which environments need to vet those changes prior to that deployment. So if you upgrade Cloud Foundry, we're gonna start at the beginning, up here in dev, and we're gonna say: does this work in the environment that only the ops people care about? It doesn't? Great — you just saved yourself a ton of downtime in your production environments. If it does work, let's go into staging and see if it works inside a real vSphere or inside our AWS VPC. And if it doesn't work there, you've saved yourself some more downtime. And we propagate all the way through, et cetera, et cetera.

Like I said, several thousand lines of Concourse YAML. This is the actual pipeline for that visualization, shrunk down to literally the smallest point font that Keynote will accept — and it actually continues off the slide. And this is the code in Genesis that generates that pipeline: seven lines of code. It's clear, it's concise, it's accurate, and it is also documentation now. If anyone wants to know when sandbox gets deployed in US-1, we can figure it out by following the arrows in this diagram. Now, there's a whole bunch of other code under the hood — anyone who's using Genesis can attest to that. Question? Yes, of course, everything we do is open source.
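For flavor, those seven lines look roughly like the fragment below. This is illustrative pseudo-config — the environment names are invented and the exact Genesis syntax differs between versions — but it conveys the shape: glob-matched auto environments plus an arrow-drawn layout:

```yaml
# Illustrative only: roughly the shape of the seven lines described above.
auto:
  - '*-sandbox'    # concourse may deploy these at will
  - '*-staging'    # ...and these, once sandbox has vetted the change
layout: |
  us-east-sandbox -> us-east-staging -> us-east-prod
  us-west-sandbox -> us-west-staging -> us-west-prod
  eu-sandbox      -> eu-staging      -> eu-prod
```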
So Genesis is open source — I believe it's MIT or Apache, one of the two. You are free to run it wherever, whenever, and however you see fit. If you have problems, please do reach out. I know we've had a couple of people come to us with some stumbling blocks, and we are happy to help in the open source community.

So yeah, the interesting thing, I think, about Genesis is that auto line that's kind of hiding right up there. What that's saying is: any environment that ends in -sandbox, Concourse is free to deploy at will. If you scale up Diego in, let's say, just your sandbox environments, and you push that to git, Concourse is gonna deploy your sandbox. Similarly, the staging environments we've also marked as auto, so if those changes work in sandbox, they will get automatically propagated to staging. You will receive a Slack, email, HipChat, Stride, SMS, carrier-pigeon notification saying: hey, the staging deployments are good; the next environment in line is prod, and you told me not to deploy prod. So if you wanna come into the Concourse pipeline and push the button on that big green box in the corner, I'll happily deploy prod for you — on your terms and on your schedule. Because people do get a little leery when you say, yeah, we're just gonna have Concourse deploy all the things, all the time, whenever anything changes; we'll let the users know when it's down — but it's down.

So this is, as I said earlier, the culmination of our automation with Concourse, because it takes all of the things that are hard about operations and turns them into things that are easy. It is available on the Stark & Wayne GitHub org, under /genesis. It is open source, it is fantastic, and we do love it dearly. So I think now we are open for any questions. How much time do we actually have? Does anyone know? We've got about eight minutes. Okay. Anyone have any questions? I'll walk around with the mic and you can ask. I see a Zipcar guy — I'll get to you, and then you.
You guys don't count. All right.

So does Genesis now primarily work via Concourse, or can you operate it directly against the BOSH director?

You can, of course, run Genesis manually. Some of the things it does in terms of making deployments easier you don't need a pipeline for. Again — go out to the website, meet with any of the people in the red shirts, come down to the Stark & Wayne booth; we'd be more than happy to talk all about Genesis and deployments. But are you guys still on v1? Okay. You had a question?

I did, yeah. Jonathan from Garmin. We do a lot with Jenkins because we have the ability to lock down who can do what based on who's logged in. From what we've seen so far — forgive my ignorance — it doesn't seem like Concourse has gotten to the point where I can say: you can deploy, you can push to prod, and you can set up a pipeline. Am I off base, or is Concourse really not there yet?

My understanding of the current state of authentication and ACLs in Concourse is that teams are the best you're gonna get, and teams are primarily focused on whole-team ownership of pipelines. So partially divvying up responsibilities inside a pipeline is problematic. I would be happy to be proven wrong, or to learn otherwise, because yeah, that is a thing — we have a lot of people come to us about it and then go to other CI systems. As I said, we use Concourse extensively, but we are also a small team of people who pretty much all trust one another. It's a big, loving, happy family here at Stark & Wayne. And we're not under any audit or regulatory compliance requirements. What we normally do in those cases is have people spin up multiple Concourses, and then it's more of a fiefdom-style deployment where each team has its own little kingdom — and you can do that with teams.
The problem comes in with the triggering, where you wanna allow a first-level ops team to do sandbox and staging, but it takes two keys turning — nuclear-launch-code style — to get prod to go. So, yeah, there is that. You can externalize that access control by building small systems: you might have a REST API that bumps a number every time someone pushes a button, restrict who can push that button, and make that the only thing that triggers the prod deploy and pulls along all the git changes and whatnot that have percolated through the pipeline to date. It's an interesting experiment. I don't know that we've ever actually pulled the trigger on that one, because we get to that point and it's — yeah, we just trust all the teams.

Anyone else have any other questions, thoughts, interests, demands, requests, queries, concerns? All right — well, thank you for coming. I hope you enjoy the rest of CF Summit.