My name is Bryce Johnson. I'm from Atlassian, located in Sydney, Australia. The title of my presentation is "The Jira product development team: going from DevOps to NoOps." So I'm a build engineer. I guess I'm also known as a DevOps engineer, maybe a build monkey. I try to keep the builds green, do all that stuff. That's all good. But I was hired a year and a half ago to really be the first build engineer at Atlassian to accomplish one thing, and that's to really start DevOps with that team. Jira is certainly a large product. We have around 25 developers across multiple geographic locations, and we needed to add somebody to the mix as the team grew: process control, some quality control, getting a handle on the process, looking at our CI builds and infrastructure. That's what my job was, so that's really starting the DevOps part of it. But my goal in the DevOps world is really to break ourselves away from that. We don't want to be embedded completely on a team; we want to act as a consultant. So part of my work this year and a half has certainly been to help automate that whole process going from CI to release, from what I call commit to customer, even going into the realms of continuous deployment. I want to allow the Jira product team to be self-service. I don't want to be the critical path for releasing Jira. Actually, I don't want much of anything to be in the critical path for releasing Jira, because that means money for us, just like releasing software for you might impact sales and revenue. As a DevOps engineer, I'm also focused on infrastructure: what are the things that we can put in place and design to allow all these pipelines, continuous integration, and everything to flow well and be reliable?
And like I said, if you're looking at maybe hiring this type of person within your organization, one of my primary goals is to kind of work myself out of a job: to migrate away from that product team but still consult and help them out on all of these pieces when they need it. So we're gonna have two parts, with a little coffee break in between. We're gonna start on the development side, the dev part of DevOps: the product team itself designing and implementing a CI release process, one that bridges continuous integration from the development feedback standpoint all the way to release. And how do we automate some of those parts to provide that self-service, so that any developer on my Jira team can now release the product but still be within the safety bounds that I want as an operational engineer? When we come back, we'll look at the build infrastructure necessary to accomplish the design from part one, and certainly how we manage those systems and do it at scale. We're certainly a growing team; we always have been. We're doing well, but we're gonna be adding tests, we're gonna be adding features, and the CI system itself will grow, and we have to manage it at that scale. I'm gonna give kind of a big disclaimer. Yes, I'm from Atlassian, and we certainly provide a lot of the tool suites based around continuous integration, development tools, and obviously an issue tracker that's pretty nice. I don't wanna sound too marketing about it, but I also want to show you what we do. So I am gonna show a lot of screenshots using the Atlassian tool suite. A lot of what I talk about today, certainly from the design and implementation perspective, can be done with a lot of other great tools out there that aren't Atlassian. So I just wanna make sure that that's very clear. Good question, yeah, about the tools.
I'm not gonna be talking about just one tool; I'm gonna be talking about multiple tools that Atlassian makes, so I'll just be showing them. So Atlassian is the company: issue tracking and project management is Jira, and the continuous integration server is Bamboo. I'm gonna kind of touch on those pieces and show them, but what I wanna make clear is that you can do this with Jenkins, or you can do this with Trac, or you can do this with Trello for the Kanban board itself. So there's a lot of other options there, and I'll kind of talk about that as I go.

So our design kind of starts with CI. It just starts with change: we wanna make a bug fix, we wanna make a feature implementation. Like everybody else, we've got our CI build server that detects changes and commits made by my Jira developers. So I put my picture up there as a Jira developer. This immediately launches a set of functional tests and Selenium tests. That's another kind of good practice: having both sets of testing in our inner-loop CI. This is the feedback that developers get for that cycle. We certainly wanna make this as fast as possible, but we also want to run as much as possible within a certain kind of time limit. But we wanna take the answer, the result from this inner-loop CI, and do a lot more with it. We don't wanna just stop there. If we can reuse those results and reuse the artifacts from that inner-loop CI, we can do additional testing. So what we also do is run database tests against the same functional tests that we ran in CI, on a nightly basis, so that if I'm running my database testing against the same tests from inner-loop CI against a green build, I know that if this build result has a failure, it's gonna be focused on an actual database problem interacting with Jira. The same thing goes for all the distributions that we offer.
So we offer a Windows download, we offer a tar.gz to put on Linux, we've got 32-bit, 64-bit, all these types of platform combinations. We wanna retest, at least on a nightly basis, just to see where those are at. We also have a number of performance tests to tell us on a day-to-day basis whether or not our performance is degrading with Jira, because that could mean a release-critical problem if we've gone too far on the degradation. So we wanna know about it, and we have automated testing that happens at that time. We also have other components that depend on Jira. GreenHopper, for example, depends on Jira and pulls that information in, and we can now run those tests and builds to determine whether or not that companion component is broken, to see what its status is. So all of this can happen in extra builds at different timing frequencies; for a lot of this, we decided on a nightly basis. This outer CI loop kinda gives us a closer validation of whether or not this commit set, this revision number, is actually ready to go for release. One of the requirements that we have is that these builds are green in order to cut a release of Jira. This green arrow that's kind of in the middle, the one that goes between the two, is gonna be really important for us. That's the actual bridge between the two processes. So, we're a massive Maven shop—does anybody here use dependency management with Ivy or Maven? A decent amount, okay. This is gonna really give us that integration testing standpoint, like I said when I mentioned GreenHopper, for example. It's also gonna allow us, in this outer CI loop, to just depend on those artifacts that we made in inner-loop CI. So I don't have to do any rebuilding at this point. I can just depend on them, pull them down, and make these builds really fast, and again be able to pinpoint and focus on where these test failures are and what's happening with them.
I don't wanna stop there yet, because I said I'm getting close to my release validation—let's keep going. What do we wanna do? Well, we can pull this information in and now get into more of a continuous delivery process. What I want to do with that same artifact that I built in CI is deploy it to my internal production systems, because everybody inside Atlassian is using Jira for issue tracking as well. So why not get the entire company looking at this, hitting it, getting a real use case, to get proper soak time in a production instance? This will definitely tell us if we're getting closer and closer to release. And finally, I give it to the customer. So this is our design, going from commit to customer, and what I wanna show you is how we do this and make it happen, some of the designs we can use, and some of the CI servers that are out there. But I've got one little problem before we start that process. My inner-loop CI has 3,000 functional tests, and running them all at one time—which is my requirement of the team; that's our decision, to run them all—takes about three hours. And certainly the size of that test suite is growing as we add features, functionality, and bug fixes to Jira. There are certainly other techniques you can use besides running your entire test suite: you could have more of a hierarchy of tests. Maybe, based on code analysis, you could take a look at your tests, break them into categories based on coverage, and then run the smaller-coverage tests at a different frequency outside of this inner-loop CI process. We could have done that, and maybe some of you do that type of strategy with your tests as well, but we want it all. We want all that knowledge in the inner-loop CI. So we've got to speed it up. We've got to figure out how to break those 3,000 tests up.
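One way to break a suite up—a minimal sketch, assuming a flat list of test names and a runner that accepts a batch index (the function and names here are illustrative, not our actual test runner):

```python
def split_tests(tests, num_batches, batch_index):
    """Return the slice of `tests` that batch `batch_index` (0-based)
    should run. The stride keeps batch sizes within one test of each
    other and puts every test in exactly one batch."""
    if not 0 <= batch_index < num_batches:
        raise ValueError("batch_index out of range")
    return tests[batch_index::num_batches]

# 3,000 tests split across 15 parallel builds -> 200 tests per build
all_tests = [f"test_{i:04d}" for i in range(3000)]
batches = [split_tests(all_tests, 15, i) for i in range(15)]
```

Each of the parallel builds would invoke the runner with its own batch index, so the partition is deterministic and needs no coordination between build agents.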
Otherwise, three hours is astronomically horrible for developer feedback. Some people even have, as a definition of done, that the developer has to wait for their CI builds to go green. If I made each developer wait three hours, I'd be wasting a lot of time. Now, we don't quite have that rule—we're allowed to move on and work on other bug fixes—but if we did, it would be insanely horrible. So we need to break that up. What we did a year and a half ago is basically have a detection build that runs, and with some work on the test runner framework we were able to launch 15 different builds, splitting those 3,000 tests 15 ways. And this was great. This gave us our parallelization and a lot faster feedback: going from three hours, we got it down to 40 minutes. 40 minutes is still pretty long. I think most people sit in the category of about 15 to 20 minutes. We definitely want to get there, and we've got some techniques to do that. Question. So the question was: do these tests, these builds here, run on different boxes? And the answer is yes. It'll be part of the infrastructure solution in part two. But absolutely, we need different boxes to get that parallelization going. Okay. So you might not have quite this big a problem—you might not have this many automated tests—but certainly as you grow and want to do more functional testing, or even Selenium testing, with your product, you might start to run into this timing factor where your CI loop is getting a little bit long. And what are some ways to split that up? How can I do it? Now, this is great—it gave us back some great feedback—but one problem here is 15 different builds that we're launching, with 15 different results. So as a developer, I might now have 15 chat notifications and 15 email notifications going on.
My CI dashboard has 15 different builds taking up all that geography in my UI that I've got to go look at. That's not very fun, that's not very nice, but it is helping us at this point. What we need is a way to aggregate these results somehow, and certainly organize them. So how do I do that? A year and a half ago we asked our Bamboo team that same question: how can we aggregate all these results from parallel builds? They came up with a feature called Bamboo stages. Stages are similar to what Jenkins has with pipeline builds: being able to organize one larger build that has a number of builds inside of it, running in some kind of sequence. So this is really the same concept that I'm trying to show. I'm going to take a look at my CI process and structure, and this is kind of a generic example just to start us off. I have one big build plan that compiles and JUnit tests Jira. Then, if that passes, we can go on to the second stage and run our functional tests, all automatically. And if one of those functional tests fails, then I've got one red build, one result, one place for my developers to look to get all that aggregated information. That was really the power of stages, and really the power of what Kohsuke did with putting pipelines in as well. Certainly if it goes green, we've got one green—again, simple results for the developers to get back. That's what we're aiming for. But with that one result, I can do a lot more. I can do more things based on what happened here. So here's that same example now for Jira: compile and JUnit test Jira, launch the 15 test builds, and we get a green build. The other little feature that's now very popular in CI systems is the idea of artifact passing. In that first stage, we're creating all the Jira artifacts, and we can pass those on to the second stage.
And even though they're running on different servers, like the previous question raised, we can send those artifacts across, so I don't have to recompile Jira in the second stage. That becomes really efficient and saves perhaps three to four minutes on each of these build jobs. I love saving anything 15 seconds or longer—I wanna go look at savings like that when I'm working with CI. Question in the back. So the question was, am I seeing some performance improvements with this? Yes. And the question was, are these parallel tasks happening inside of one stage? So for example, T1 all the way through T15—those are running in parallel on 15 different machines. The CI server is sending the artifacts from the first stage: the first stage deploys that artifact back to the CI server, and it has to transmit it to the other 15 build boxes. Does that answer the question? And inside of that second stage, yes, I do have to create a dependency for that artifact and tell it where to go. I don't wanna really go deep into that, but yes, I have to define what happens with that artifact. Another question? The compile and unit test? Yes. What was the second part of your question, sorry? Okay, the question was, what am I doing with those 15 builds that are happening there? I'm running my functional tests—my 3,000 tests that took three hours. How do I do that? That's this T1 through T15; I'm just trying to make it a little more generic for the presentation, but it's splitting up my functional tests to run in parallel. No—so the developer commits, and this build plan will then detect it, just like any other change-detection build. Exactly. So what happens here? I've built my CI structure to compile and run all my JUnit tests first. I mean, why continue if those fail?
Let's not waste time. Let's give that feedback even faster to the developer—let it fail out before launching all these builds and taking up all my infrastructure. Yeah, exactly. Absolutely. So the question was, are all of these builds that happen in one stage independent of each other? And yes, they are. And you can do anything—you don't have to run just my functional tests; you can run any number of other builds there. So I might have a third stage that, after the testing gets done, deploys to four other QA servers, or QA to UAT, and makes it all happen right there for manual inspection. If you want to do that, you can do that. Yeah. Okay, let me try to summarize that question for everybody. The first part was: are there dependencies between, like, T1 and T2? The answer is no—absolutely no. These run in parallel, so they might launch at the exact same time, and I hope they do. They're sharing information from that first stage; that's what I'm trying to say. The second question was: do you rerun your builds—like, try to rerun failed builds during the night or something—just to see if they're flaky? Because we have that problem too at times. I don't. Now, in that outer-loop CI doing the database and distribution builds, we are rerunning the same tests, so we're getting a little more coverage with that. But I think that is a very good design improvement: to rerun some of these failed jobs at night, when the infrastructure isn't getting utilized that heavily, to see where the flaky tests are. Those are definitely interesting things that we want to do and have actually started to look at. One more question in the back then, yep. So how do we divide those tests? So, we did that programmatically.
You know, we launch Cargo to launch Tomcat and all that stuff, but our test runner is proprietary in how we split that up. Honestly, it's just dividing 3,000 into n batches, and I can set that on my Maven command line. So that's kind of a simplified version. There are ways you can make this categorical in how you split them up; there are a lot of different strategies to do that. This is just a simplified manner. Okay. That's true, and they don't change—unless, of course, we add tests. Okay, so let me keep going here. So I got a test failure in one of my builds there—what happens? It certainly goes red; easy enough concept. Now, here's a screenshot from Bamboo. It's a little bit hard to read some of the text, but as a developer, I'm going to my one spot now. On the left-hand side is the structure of that build plan, with the stages. So again, I'm compiling and JUnit testing Jira and passing those artifacts on to the second stage, where it all happens. As a developer, I got the notification back that I had a new test failure, and I've got that information at my disposal right there. I even have information about existing test failures, so I know that maybe my colleagues have broken the build previously and that they should maybe action this and help me out. But the key is having one spot, one result to get back to, so that developers can work as fast as possible. How can we use this inner CI loop, then, to actually release Jira automatically? This is some of the work that I've done to actually automate the process. I'm going to change our CI structure just a little bit. We have the same compile and functional test stages here. If it goes green, I'm going to have a child pipeline, or build plan, automatically launch and deploy my snapshot artifacts to Maven. That's that green arrow that connected and bridged the inner loop to the outer loop. And if you're knowledgeable about Maven or Ant or Ivy, these are just the development snapshots.
This is not a release of Jira. I then created a second, manual stage—and you can do this in Jenkins as well—to be able to create the release. So if you're a Maven expert, you're thinking: doesn't the release plugin do that for you? I don't want to use the release plugin at this point, because what happens with this relationship is that I keep the exact same source control revision number from one plan to the next. So I'm creating the release directly off of this exact revision. As a release coordinator, I can go out and choose which commit I want to actually be the release of Jira. Here's what that same plan looks like in Bamboo—I've got my three stages there. I'm kind of mimicking the release plugin. The Maven release plugin basically does the release for you: it tags in source control, rebuilds it, kind of re-tests it really quick. But there's one problem that I don't like: I've got to call a code freeze when I use the release plugin. So, does anybody call code freezes when they branch or tag their source? Okay, I just didn't like that. I want to keep my developers going. So I tossed the release plugin out, which any Maven fanatics here will probably not like very well. But as a release coordinator, I can go to this build plan now and just pick and choose which build I want. Up in the upper right-hand side, I've got a number of green builds that have been lining up. I can decide: okay, I want this one. No code freeze—you guys keep working, keep developing stuff; I want to keep going. All the scripting around doing that process is also committed to the Jira source, so if the product team wants to carry on and keep doing this, they can maintain it themselves as well. And I've got some more information here, as I can go look at which commit this is—I've got my Git revision number there.
On the right-hand side, I can look at what code changes came in at this period of time. Maybe my developer was trying to sneak something in—well, I can see that here with this build that I want. And I can even go to the Jira issues affected by that code change, if they put them in their commit message. So I've got all that information at my disposal as a release coordinator. Again, as a DevOps guy, I'm trying to give them that power and the ability to release Jira themselves. They can come here, choose the build, push the button, and they're on their way. Once they've got the tag made, I want to actually rebuild it and see if it works. So this is the actual release build of Jira. Now I've got the tag—for example, the Jira 4.4.3 tag. I then check it out, recompile it, re-run the unit tests, and then I launch a couple of separate sets of tests to validate that release. I smoke test my Windows installer, because I just don't trust Windows as much to build a proper artifact; I run a small set of functional tests against it just for sanity, and I can do that with my other distributions as well. I also launch all 3,000 tests again here against one of the distributions. I don't really have time to do it against all the distributions, but I want to do it against one just to get the coverage, to get the comfort level for me to do this release. And the subsequent stages inside of my release pipeline are all the activities that happen after I've actually built the release. Many of you might want to deploy this somewhere; many of you might want to put this on your website, or activate it on your website for download, or any other host of activities. I've got them here, automated again and scripted out, and those scripts are committed back into the Jira source so that I can get help maintaining them and don't have to be the critical person. Let me take a step back.
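To make the selection step concrete: here's a minimal sketch of "release coordinator picks a green build, and the tag is cut from that exact revision"—the data shapes and tag format are invented for illustration, not our actual tooling:

```python
def pick_release_candidate(builds):
    """Given build results ordered newest-first, return the revision of
    the most recent green build. Developers keep committing -- the tag
    is cut from this exact revision, so no code freeze is needed."""
    for build in builds:
        if build["status"] == "green":
            return build["revision"]
    raise LookupError("no green build available to release")

builds = [
    {"revision": "f00dcafe", "status": "red"},    # latest commit broke a test
    {"revision": "abc12345", "status": "green"},  # most recent known-good
    {"revision": "77d1e2f3", "status": "green"},
]
tag = f"jira-4.4.3-{pick_release_candidate(builds)}"
```

The key property is that the tag points at a revision CI has already validated, rather than at whatever happens to be at the head of the branch when the release plugin runs.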
Like I said, I'm trying to hand off and migrate away from the Jira product team as a DevOps engineer. I want them to be able to do the release engineering for me. So what I decided is: let's use our tracker again to coordinate and document what this release process is doing. I'm going to use Jira to release Jira—I thought that was kind of cool. What I did was build a Jira release template with all the information: an overall description of what the release does, and further documentation for further reading, for the release coordinator, whoever that is. And it rotates around the development team—developers will take turns; it might be one developer one week and another the next. When you clone an issue in Jira, you also get, on the master issue, all of the child clones. So I've got all the historical knowledge of all the past releases of Jira on my dashboard through this template. And here are all the subtasks that I have to do to actually perform the release. I'm just going to summarize what these are: doing the release tag, doing the release build, deploying to some test servers, deploying to some production servers to dogfood it, to get some soak time to make sure it's good for the customers. I'm interfacing with some other external pieces that are more proprietary, but in case you have to do such things, you can put them not only in your documentation but in your release pipeline. But if you notice one thing here: they're all assigned to Bryce Johnson. I want to get away from that. For the Jira 4.0.3 release, I was the critical path for all of these activities. I don't want that. I want to be able to hand that responsibility to other people so they can perform it. One other cool thing I did with this entire process is automate the generation of our release notes. I made a set of Python scripts to authenticate and interact with Confluence.
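As a rough sketch of what such scripts might look like: the page body is built locally, then pushed over XML-RPC. The wiki markup, macro usage, space key, server URLs, and credentials below are all illustrative assumptions, not our actual scripts:

```python
import xmlrpc.client  # stdlib client for Confluence's XML-RPC interface

def render_release_notes(version, prev_version, jira_base):
    """Build wiki-markup for a release-notes template page that embeds
    a Jira query listing the bug fixes resolved in this release."""
    jql = f"project = JRA AND fixVersion = '{version}' AND status = Resolved"
    return (
        f"h1. Jira {version} Release Notes\n\n"
        f"Changes since {prev_version}:\n\n"
        f"{{jiraissues:url={jira_base}/search?jql={jql}}}\n"
    )

body = render_release_notes("4.4.3", "4.4.2", "https://jira.example.com")

# Hypothetical upload step (server URL and page details are a sketch):
# server = xmlrpc.client.ServerProxy("https://wiki.example.com/rpc/xmlrpc")
# token = server.confluence2.login("techwriter", "secret")
# server.confluence2.storePage(token, {"space": "DOC",
#     "title": "Jira 4.4.3 Release Notes", "content": body})
```

Because the query is embedded rather than the results, the page stays live: the issue tracker fills in the bug-fix list whenever the page is viewed.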
That's our documentation and communication tool that we offer. So over the Confluence XML-RPC interface, I can authenticate and create a template release-notes page that my tech writer can use. Inside of it, I can actually put a query from Jira that will list off all the bug fixes from this release back to the previous release and display them. And I've got it wired through my CI server as a push-button activity that my tech writer can run. So again: push-button processes, or automated processes. I just saved him about 30 minutes of time, and over a year, that's a lot of money—his time is valuable. That's why we want to automate, too. Here's just an example from my test Confluence server. The neat little part, shown at the bottom, is automatically generating that query against my issue tracker and displaying the information. Are we out of time? Sorry? Time's up? All right. Before we go, hold on one second. So that's part one: really generating the pipeline from commit to customer. Part two is looking at the infrastructure and how to manage it going forward. Okay. So we'll rejoin back then—give you a break, give me a break. Hopefully come back. Thanks.

[After the break] So we've got a CI server that's hooked up to two different architectures. Number one: we do most of our builds on Linux, and we've got a set of actual physical hardware in a data center in the United States. This runs KVM virtual machines to provide about 12 virtual machines per physical hardware piece, and that adds up to about 50 agents that I've got allocated for each CI server. So this is my really core environment that I have always available. My second piece goes into the cloud through Amazon EC2, and this handles all the specialty platform builds and also handles some burst-scaling needs.
So if I've only got 50 build machines and I need to go to 90, this'll give us that capacity. So why did we choose KVM? I like to say why we did something, not just that we did it. The goal of the KVM agents was to be always available, with a single VM image carrying that core set of agent capabilities. Why KVM? Because it's fairly easy to provision, and we felt it had better operating system and hardware support compared to, say, VMware ESX. And finally, one of the most important factors in this decision: we actually had in-house expertise. I mentioned my IT staff that knew it already. I didn't personally have the knowledge, but they were able to train us up a bit—how to do kickstart files and all that good stuff. Question? No. Not yet. No. In a perfect world, I would love to have one big build farm run as a cloud-like operation, so that these build servers could just request resources and be more efficient with my allocation. We don't have that integration with Bamboo right now; we've only got integration with Amazon EC2. Amazon EC2 integration is also available in Jenkins and the other players as well—it's just the most popular and most reliable; we'll get to that in a second. Why Amazon EC2? Again, we need it for our specialty builds, our platform builds, and our burst builds for the high on-demand traffic. Creating an AMI on Amazon EC2 is, I think, fairly easy to do, certainly through its interface. We can create one AMI image and spin up any number of instances based on it. So from the provisioning standpoint, I was certainly capable of doing it, we have the in-house knowledge for it, and it's a well-adopted CI server integration. So again, whether you're a Jenkins customer or a Bamboo customer, you can use the same kind of integration to the Amazon API through the CI server. And yep, go ahead.
So we've got some more configuration through the CI server: the CI server itself will spin up Amazon EC2 instances as we need them, and it will also shut them down when we don't. After a configurable period—let's say 50 minutes—it'll automatically shut down instances that haven't been used in that time. Again, saving money by cleaning up. Then when we have that burst demand again, it'll go ahead and request instances through the Amazon API and spin up the AMI images needed for that build. So it's kind of self-managing with the startup and shutdown of those instances. Again, save money. And absolutely, I would not want to do this manually from an admin standpoint—that would take up my entire day. So that's what this integration piece does, and I think Jenkins does something really similar. Let's take a real quick look at the cost of Amazon, because I think it's a nice little story. Your default large instance that Amazon provides is sitting at about 34 cents an hour for Linux and 48 cents for a Windows image. This is important to me. A lot of other cloud providers are sitting at about the same price tag. But Amazon has one nice little feature that we've plugged into Bamboo—and I don't know yet if Kohsuke has put this into Jenkins; I haven't looked. I can now bid for spot capacity from Amazon at a much lower price: instead of 34 cents, I can bid and get it at 10 cents an hour for Linux and 18 cents for Windows. This is really nice, especially because I'm scaling so much, for my bottom-line cost—helping not to blow the Jira product development team's infrastructure budget. Looking at it as an IT manager or finance manager would: I can run 75 instances, which I think for most use cases out there would be a lot. For me, that's standard.
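As a quick sanity check on those numbers—this is just arithmetic on the rates above, with a blending parameter that's my own illustration (in practice the CI server decides how much of the fleet wins spot bids):

```python
def fleet_hourly_cost(instances, on_demand_rate, spot_rate, spot_fraction):
    """Hourly cost of a build fleet where `spot_fraction` of the
    instances were won at the spot price and the rest run on demand."""
    spot_count = round(instances * spot_fraction)
    return spot_count * spot_rate + (instances - spot_count) * on_demand_rate

# 75 Linux agents: regular $0.34/h rate vs. everything at the $0.10/h spot bid
regular = fleet_hourly_cost(75, 0.34, 0.10, spot_fraction=0.0)  # ~25.50/h
all_spot = fleet_hourly_cost(75, 0.34, 0.10, spot_fraction=1.0)  # ~7.50/h
```

That works out to roughly $25 an hour on demand versus roughly $8 an hour on spot—about a third of the cost.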
At 34 cents an hour, that's about $25 US per hour, and using spot instance pricing I can get that to about $8 an hour—roughly a third of the cost. So as I spin these instances up and pay that per-hour cost, that's really helping me trim my bottom line. As for my big Amazon bill every month—I don't pay it myself, but we're sitting at about $2,000 a month, which is about a 50% reduction; my standard bill, using regular instances, would be about $4,000. So I think spot instance pricing can make a big difference for small teams starting up that might not have a big budget. They might not want to buy hardware, which is expensive, and also expensive to get going. So I think those are great use cases for going out into the cloud—if you can handle leaving your own firewall. That's one of the things you have to handle; Amazon does allow some security settings and its own firewalling on these AMIs that can help ease some of that trouble. Yes. So the question was: is it guaranteed that you'll get the spot instance pricing, and what kind of hardware? Really good question. The answer is no, I'm not guaranteed to get that price. However, with the Bamboo configuration I have, I can also set it so that after 10 minutes of trying to win the bid, it just starts up a regular instance, because I need that capacity. One other thing to note: yeah, I don't exactly know what hardware it's gonna run on. However, for my functional tests and Selenium tests, I actually don't care. We've taken all the timing issues out of the tests so they don't really care about hardware, and I'm not running performance-based tests out on the cloud—no way. For the performance testing we do, we've got an actual defined core piece of hardware that we run on, to keep things consistent. Both of these together give me some other advantages: now I've got failover resiliency.
So if I'm losing build servers in my data center, I can, again, meet those scaling needs out in the cloud. It doesn't happen very often — it's quite rare — but it does happen. The same goes for Amazon: if Amazon has trouble one day and we can't connect to it, it might queue up a few builds, but I'm not reliant entirely on that infrastructure. So it gives us a nice natural resiliency factor built in, and together they certainly give me the scaling that I really want. The final piece, which is gonna be the next section, is that I can also leverage automated systems configuration tooling, which you've actually heard a lot about at the conference already. I'm gonna spend a little bit of time on that as well.

A big focus of Agile India 2012 has been: how do you efficiently run global development teams? Here's an operational standpoint and viewpoint on that. As I said, I'm a big, big user of Maven, and I use it to really handle all my dependencies. It helps to really build that pipeline from CI to release. We happen to use Nexus as a repository manager — some of you may have heard of Artifactory and some other players; this is just what we've gone with, from Sonatype. They're actually in the same building as the guys that made Maven, so we felt that was kind of neat, and we have a pretty good working relationship with them. Atlassian actually has one of the largest public Maven repositories in the world. We've got offices in Sydney, Amsterdam, San Francisco, St. Louis, and another development team in Poland. Why is this important? Because now my build server puts up the Jira 4.4.5 release, and a plugin developer that wants to consume those artifacts is in Poland. It only requires one person from that Polish team to depend on those artifacts; they get pulled through and cached in the local proxy, so any subsequent dependency calls are served locally, closer to them. This helps speed up their builds quite a bit.
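For anyone who hasn't wired this up before, the developer-side configuration for this kind of proxying is small. A minimal sketch of a Maven settings.xml that routes all dependency requests through a nearby Nexus proxy group might look like this — the hostname and group name are hypothetical placeholders, not our real setup:

```xml
<!-- ~/.m2/settings.xml on a developer machine in a remote office.
     All repository traffic goes through the office-local Nexus,
     which caches each artifact after the first request. -->
<settings>
  <mirrors>
    <mirror>
      <id>office-nexus</id>
      <name>Office-local Nexus proxy cache</name>
      <!-- Hypothetical host; a Nexus "group" aggregates hosted and proxied repos. -->
      <url>http://nexus.office.example.com/content/groups/public</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```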
Jira and all of its distributions add up to about two gigs of data. So you can imagine, trying to connect to a proxy server in the United States from India might take a little bit longer than if you had a proxy cache locally. And then we're also utilizing a Nexus server out in EC2 to help us with some of the networking costs — again, some of the build speed costs for builds that depend on those artifacts. So that's a huge optimization.

The same story and the same kind of diagram can be told about source control. Has anybody had problems connecting to a source control server in the United States over SVN? Does it take a while? I've run into that. We had built proxy servers, in Sydney for example, just for SVN, but we've now moved Jira off of SVN to Git. So now we can build out Git clones and do some syncing across these geographic locations, so developers can be fast about source control. Again, if it's 15 seconds or longer that I can save, I wanna do it. This is huge for global development teams.

So why Nexus? Great Maven dependency management — that's a requirement for me, obviously, because I'm such a huge user. The proxy configuration is outstanding. Not only can I proxy to my internal geographic locations, I can proxy to external third-party repositories, because maybe my developer wants to upgrade Hibernate for Jira. Okay, how do we get that dependency downloaded into those caches? We can proxy out to the outside world and then not be dependent upon external internet sites, which we don't want as part of our builds, because I'm not administering those sites. I don't know what their uptime is. I don't wanna depend on them for my releases of Jira. And there's a nice, great search utility for artifacts that my developers use all the time: do we have this artifact upstream or not? Do we need it? If we need it, they can request that it be added as well.
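The Git side of that geo-syncing can be sketched with plain Git commands. This is just an illustration with throwaway local paths, not our actual setup — the idea is a bare mirror clone near each office that a scheduled job keeps in sync:

```shell
set -e

# Stand-in for the distant origin server (in real life, an ssh:// or https:// URL).
UPSTREAM=$(mktemp -d)/jira-upstream.git
git init --bare --quiet "$UPSTREAM"

# One-time setup in each remote office: a bare mirror clone of the origin.
MIRROR=$(mktemp -d)/jira-mirror.git
git clone --mirror --quiet "$UPSTREAM" "$MIRROR"

# Run periodically (e.g. from cron): fetch new and deleted refs from the origin.
git -C "$MIRROR" remote update --prune

# Developers in that office then clone and fetch from the nearby mirror
# instead of crossing the ocean for every operation.
```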
And they've got a nice REST API that we can connect to, and we've got a great relationship with them, great customer support. So we've got the infrastructure all in place now. Question before I go on — yep. The configuration base for any of the builds? An example? Okay — perfect lead-in to what I'm gonna talk about, and that's making changes to infrastructure with confidence.

Certainly, we can by hand install a JDK version on that AMI, okay, or whatever we need — we can go ahead and install that and bake a new AMI image. There are a lot of public AMI images available through Amazon to get you going: if you want Red Hat 6, they've got kind of a stock instance there. Bamboo has its own stock image that comes in the package for you, hosted by Amazon. You can update and modify that, and we've got instructions on how to do so. But I want to graduate away from that, because I don't want to do manual administration, okay? A number of people have mentioned this already, but we want to manage that infrastructure. We've got probably over 125 build machines running to service these builds, right? I don't want to do this by hand. We want to enforce that base operating system for our builds. We want to keep it consistent. We want to know what it is. We also want to protect against unintended change. Has anybody ever had a developer SSH onto their build box and just decide, "I'm going to rm -rf /opt"? /opt just happens to be where I put all these build capabilities and JDK installations. Have you ever had that happen? I sure have. I've done it myself — I've shot myself in the foot. I want to protect against myself too. I've made mistakes, and I want to make changes with confidence. So when I have a change together and push it out, I want to know it's going to work. How do we do that? Okay, you've seen this quite a bit during the conference.
If you've been attending the DevOps presentations — I think Tom showed this previously as well — we are also a big user of Puppet. Okay, massive user. Puppet is a great open-source change management tool that allows you to express your system configuration as source code. If you're using Chef or anything else, excellent. But I think the message you can take from this conference is: if you're not using something to manage your systems automatically, please take a look. It is a lifesaver, and a saver of time. The great thing is we can represent our configuration as code. We can configure any number of systems — Puppet doesn't care. And it basically translates our instructions, our Puppet code, into what each platform needs; it uses the platform's own pieces. So the package resource in Puppet will use the yum command on a Fedora 15 box, and on Ubuntu it's gonna use apt-get. It's gonna translate that for you on the fly. And it's certainly a leading and growing technology — it's gained popularity, and I think you've noticed that during this conference.

Just a quick Puppet example. This is the sshd module example from the Puppet training that I attended — I thought that was nicer to show than one of my custom modules. The three primary parts of what Puppet code looks like: first, you have some kind of package resource, some action that you want to perform. I want to install a package — let's install SSH, okay. Then, let's lay down the specific configuration I want for the SSH daemon; I can install that after the original out-of-the-box SSH package, openssh-server, has been installed. I can then tell Puppet what to do once it has applied that change in configuration: go restart the sshd service, all automatically. I'm not doing any of this manually. This is important because infrastructure is critical to my release process, as I showed in part one. I also want to document and track that change.
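The three parts just described — package, config file, service — look roughly like this in Puppet code. This is my reconstruction of the shape of that training example, not the exact module, and the file source path is a placeholder:

```puppet
class ssh {
  # 1. Install the out-of-the-box SSH package.
  package { 'openssh-server':
    ensure => installed,
  }

  # 2. Lay down our own sshd configuration, only after the package exists.
  file { '/etc/ssh/sshd_config':
    ensure  => file,
    source  => 'puppet:///modules/ssh/sshd_config',  # placeholder path
    require => Package['openssh-server'],
  }

  # 3. Keep the daemon running, and restart it whenever the config file changes.
  service { 'sshd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ssh/sshd_config'],
  }
}
```

The subscribe relationship is what gives you the "restart automatically on config change" behavior described above.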
And I now want to validate that change before I go to production on these systems — make sure it's been applied correctly. I want to communicate all that change. How do I do it? Okay, question first before I go into that. Does Puppet have a specific environment configuration management part? Are you talking about, like, Windows versus Linux, or...? Okay, there we go — that's exactly what I'm getting into right now. Great question to lead me into it. Yes — so for Tomcat or whatever; for Apache, you can put an httpd.conf right in there. You would absolutely want to use Puppet for that case, in my opinion. Absolutely. You can parameterize, in your Puppet code, what type of environment you're running on and specify that. Absolutely, that's within its power.

So I'm just going to run through what I do, day to day, for how I actually roll out change — covering different environments, how I track it, and how I use development practices. Because now it's in code, I can use development practices with what I'm doing. I'm going to create an issue for some activity in my favorite issue tracker: add capability foobar. So it's like adding the capability of JDK 1.6.31 or whatever it is — let's call it foobar. And we're going to put this on all the little agents that I've got running the builds. So that's my communication and tracking part already solved. Now, we also use Kanban for operational stuff — I think that's pretty neat, using the Agile methodology there. The GreenHopper Rapid Board is similar to a lot of products out there like Trello, where you can have a Kanban board going automatically. We need this because we want to communicate our changes, not just between each other on the team, but also back to the product teams. They can come and look at the GreenHopper Rapid Board and see what I'm doing, because I'm still servicing Jira as a consultant at this point.
So we can add in some JQL functionality from Jira. I can add a little query over just my stuff, so I can pinpoint and look at what I need to do versus getting all the noise from the team — or I can look at all the noise from the team if I really want to. Now typically, for our Kanban-style workflow, I'll pick up the oldest ticket in the queue first. But let's just go ahead and say this one is really production-critical, to get this capability in. I can drag it across, and GreenHopper, like a lot of its competitors, will kind of show you what to do there. It's showing me two landing zones of possibilities, given my issue-tracking workflow, that I can drag this issue to. That then signifies to the team, when they look at our Rapid Board, that I'm working on this issue.

Now, because my Puppet change is code, we can certainly use source control, right? That's good development practice. We can actually do style validation as well. If you're a Java developer, you're accustomed to maybe Checkstyle; Puppet has puppet-lint to help us out there, because we've got four engineers on the team. Some of us might want to put a brace on one line, or maybe on a new line. Maybe we want spaces instead of tabs. Maybe we each want to write specific Puppet modules differently from one another. puppet-lint is going to help us pull that all together into one style, so it's readable and communicates well, so that we can all understand one code base. We can also write a number of Cucumber tests — I've seen that mentioned here this weekend as well — to make sure the Puppet change has been applied successfully outside of Puppet. So Puppet has done the change; what other facility can I use, outside of Puppet, to verify it?
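As an illustration, the kind of "just my stuff" JQL filter I mentioned might look something like this — the project key here is a made-up placeholder, and currentUser() is the built-in JQL function for the logged-in user:

```
project = BUILDENG AND assignee = currentUser() AND status != Done ORDER BY created ASC
```

The ORDER BY created ASC at the end is what gives you the oldest-ticket-first ordering for that Kanban-style pull.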