That's really nice. Let's do this thing. Hi, everybody. Hi, Stef. We don't have mics, so if you need us to speak louder, just kind of, you know, make a motion. I am Jen Krieger. I am the Chief Agile Architect at Red Hat, and here is my colleague Stef Walter. And I hack on a lot of open source projects and have probably worked on many of the things you guys have. You maybe got a patch from me. Maybe. Maybe just a little patch. So, hence today's talk. Yeah, a couple of months ago, Stef came to me and said, you know, I've been talking to all these people internally about this topic, and it seems very similar to the things that you have been talking to teams about, and wouldn't it be kind of cool if we had a conversation? And I will preface this with, actually, one of the bosses in our organization said, you really should do a talk together. So I try to make the story a little bit better. So, right? Yeah, so really we're here today to talk about Agile and how to do self-sustaining Agile in your projects. So, throw out some words. What does Agile mean to you guys? Yeah, what else? The process; we'll keep adding process until the crying stops. Yes, okay, so the beatings will continue until morale improves? What else? Trendy. Flexible. Flexible, okay. Meetings. I feel like people from my teams are scattered around the room and are giving me content to mock them with later. So this is really what Agile is about. Thank you, Wikipedia, for the definition, and also all of the really long words up there. I wanted to bring your attention to some key words that are actually in this definition. The first being: it is a set of principles. Agile seems to suggest, or at least most people's interpretation of it is, that it's a rigid set of rules, things you have to do in order to be faster. But in reality, it's actually just 12 principles that talk about a mindset shift that you want to achieve in order to actually get better results.
And so it really doesn't say you must do this to be that. It gives you principles that you have to try to apply. It's also about having requirements and solutions that evolve over time. And what that really talks about is that you're not supposed to go out with that big bang release. So the one, two, three, four year releases where you've got about 40,000 lines of code and all of a sudden you're delivering that to your customers. Really what Agile is saying is just, hey, stop doing that, and try to get shorter releases and more stuff out there, so that you can actually find out whether what you thought your customer wanted is actually what they were asking for. And the final thing is that whole concept of the self-organizing, cross-functional team. That's a lot of buzzwords that just basically means, hey, that person who sits next to you, or who is on your team across the world, you're supposed to work with that person. And so maybe you want to actually have a conversation with them to achieve results. It is not doing more with less. And a lot of times I hear, especially boss types, say something to the tune of, great, now we can do more with less. Anybody hear that before? Because I certainly hear it all the time. And so really what Agile programming refers to is the concept of doing the right things at the right time with the right people. And so it's not always about just putting a massive amount of code out there into production and seeing whether maybe 5% of it might actually be what your customers asked for. It's about actually doing little swaths of things to make experiments, to see whether or not that's actually what they needed. It's also not easy. And that is another thing that I hear people say a lot. It's just, oh, it's just a couple of meetings that you put into place and then all of a sudden you're going to go faster. And oftentimes teams only focus on that concept of meetings.
And so they say, well, I've got a standup and so therefore I must be doing Agile, or I am grooming so I must be doing Agile. But the reality is that I actually am asking my teams to do both. I'm asking them to inspect their process in terms of how they're moving things around in systems to report to project managers or the boss type. But I'm also asking them to inspect their technical workflow. And so I want them to improve the way that they are delivering code, or the way that they're integrating, or the way that they're using their tools. I don't want them spending too much time over here, or too much time over there. And it is certainly not Cat Boss telling you what to do at every point of the way. And so I've got several Cat Bosses in the room, and what I tell them quite frequently is that, yeah, they might have an opinion about what work needs to be done, but I never want to hear them say how it should be done. And so the very big distinction about that is: I want feature A. That's great. You want feature A? I don't care if you want to use JBoss. I don't care if you want to use Cockpit. I don't care if you want to use this test framework. Those are the things that I want the team to discover on their own. So thank you, Cat Boss, but be quiet. So the point of Agile is to be able to react to changes in the market in a sustainable way. And that sustainability is really something that teams often forget. And so we have this concept of getting a release out the door, and we're going to use Agile to get faster, and we're going to push more out the door, and more out the door, and more out the door. And then what really happens to teams is that you have a huge core of people who kind of, I don't know, maybe get a little tired, right? Anybody tired? I'm tired. I'm tired all the time. And then there are what I would call the sweepers, who come in and fix after everybody else.
And they're the guys or gals who spend 80-hour work weeks, and they're up all night, and they're the ones who people go to when they say, we've got this really bad bug and no one knows what to do about it, so we'll call engineer A and they're going to fix it for us. And that really isn't what Agile is about. It's about trying to figure out that correct balance of features being delivered to customers in a sustainable way. And so, what you really need to know. I'm going to go over two points. The first one is focusing on your network of people. These are the things that I think you guys need to walk out of here understanding. And so what does focusing on your network of people really mean? It is my theory that if you know the person that you're working with, or you know the person you're going to toss your code over the wall to, it is more likely that you will be a better corporate citizen, or a better work buddy, than if you didn't know them. And I have watched this happen for 20 years in my career, where one engineer doesn't really know who's going to actually take that code and put it into production, so they really don't care what the quality is. They really don't care what the result is. All they really care about is telling their boss that the work is done. Anybody want to be honest and raise their hand? Yeah, I got one hand raised over there. At least one person out of 50 is honest. And so what I like to encourage teams to do is actually participate in that ghastly thing called communication and social bonding, so that you actually do establish relationships with the people around you, and ensure that you are actually doing this proverbial group hug, which is kind of like, you actually feel like you have a community at work, and you feel like you are participating in something that is greater than that code on the screen. And you want collective ownership of work. So this is my favorite: you know, there is no I in team.
And so a lot of times in open source communities, this is what I see happen. An engineer decides they are going to do something. And they go off and they write their code, and they are doing whatever it is that they are doing. They decide this is the right way to do it. And then they put the code somewhere, and it goes to a community, and then they are getting all this feedback, right? Is that something familiar? People say, yeah, I am not going to take this; or, okay, I am going to take it. So Agile just simply says, hey, you know that point where you have that idea? Maybe you want to open your mouth and use words and talk to the people around you and say, what do you think? This is the idea I have. What am I not thinking about? What did I miss? What is your idea? How can we collaborate together on this to make it better? And the point here is really, how many of you have heard the quote: you may be smart, but you are not as smart as everybody in the room together. That is the foundational point of this. And so I really like "there is no I in team", but sometimes there is. And so what I am imploring all of you to do is to simply understand that your default operational perspective is going to be this. And so keep remembering that there are people around you who have thoughts and feelings, and try to be empathetic to what they are doing. And the second part, and the most important part of Agile: if you bump all those buzzwords off the screen and you think about the foundation of delivering smaller things faster, the feedback loop is the foundation of all of the methodologies in Agile. And really what it is talking about is a very simple concept. If you are working in a waterfall software development methodology, your feedback loop is likely going to be about a year. And the feedback loop is the point of idea creation to the point that it is delivered into the hands of a customer. That is the loop. And so the point of Agile is to tighten those loops.
And so Stef in a second, he is hovering right here, he is ready to go. He has had espresso and he is going to go for it. He is going to talk you through those feedback loops. And one of the most important feedback loops, in fact, Colin Walters, if you see him in the hallway, he will tell you the first question I had for him was about this feedback loop, and when he said he didn't have it, I said, are you out of your mind? And so the bottom line here is that if you are not getting feedback on what it is you are doing, and you are not constantly looking to inspect and adapt and shorten those feedback timelines, it is going to be really hard for you to understand whether what you are doing is actually what you need to be doing to make your product successful. Okay, I am going to let you go now. No, that is totally awesome. And so what are the feedback loops? Think of them. There are lots and lots of them. You can think of things like retrospectives after a sprint, if you have done scrum. You can think of backlog grooming, you can think of sprint planning, you can think of sprints themselves, and all these different things, various forms of process and meetings, as feedback loops, where you figure out what you did last time and then you try to figure out how you should do it differently the next time. The same applies to handing something off to customers and seeing what sucked, what was great, and adjusting to that. But let's arrange these in order of effect. Which ones get you the most mileage? And you are going to see a list like this, with code review right at the top. That is the feature of code review: you post some code, it sucks, or it does something awesome; someone checks it, and you get better, better, better, and then you merge it. That's a very, very short, small loop, and you can get massive mileage out of that. I'm not even going to go into details there. I'm going to assume that you do this.
But if you don't do this in your project, it is probably the biggest thing that you can change that will get you massive benefits, hands down. It beats everything else. The second one down the list is continuous integration. A lot of the documentation you see on Agile, and the inspirational rah-rah stuff, assumes you do this. And when you're building a website or publishing stuff online directly, it's kind of easy to do this. It's almost a given that it's part of your process. But for those of us building parts of an operating system, it's much harder. Yep. A question about the first point, about code review: if you use extreme programming, or pair programming, do you still need to have code review? Pair programming is one way of doing code review, pair-programming together. That's a good point. And you can make that feedback loop between two people looking at the code very tight. And that's the goal there. So let's look at continuous integration. Because it's harder for us, for a lot of us, where we have to bring this in, think about how we're going to do it and bring it into our projects, into our open source projects, into our pieces of an operating system, or products, or different pieces that we're building. And then we'll look at continuous delivery. A lot of the other stuff we don't have time to look at today, so we're going to focus a lot around continuous integration and delivery for the remainder of this talk. If you look at the Agile manifesto that Jen was referring to, of the 12 principles, 5 or 6, depending on how you read them, are related to delivering working software, testing, delivering. This is the meat of the thing. There's lots of other stuff in there, but the majority of those principles relate directly to this. That further underscores that these are the foundation, the fundamentals, of Agile. What is continuous integration? You see this thrown around a lot. It means a lot of things to a lot of people. But I'm going to give you a very rigorous definition.
So, integration. Let's start from the back. Assemble everything together in your system, as it would be in production, or a reasonable facsimile thereof, and then drive it like a user, test it like a user. You don't get to put your hand inside the puppet and play it from the inside; you get to drive it from the outside, like a user would. What is continuous? Do that for every single change to the software. Every single change. The changes in your software are like a quantum in physics: you can't divide any further down. Think about when it changes from this state to this state. There's nothing in between. Often this is a pull request. In the case of an operating system, it might be pushes to dist-git. These are changes where you go from this state of the software to the next one. Each of those changes needs to be integrated and tested. Then you have continuous integration. What's continuous delivery? Continuous delivery is picking one of those changes that you integrated completely and saying, yeah, I'm going to deliver that one. Yeah, I'm going to deliver that one. Whether you deliver them all, whether you ship them out to all your users, or to some of your users, those are decisions that you end up making. But you're able to deliver any one of those changes. That sounds kind of dangerous to me. I'll show you how it works. I'll show you it in action, and I'll show you how to make this work in your project. All right? And that's what we're going to do right now. The remainder of the talk is these two steps: in action, showing you what's happening, and how to make it happen. So, to start with: what in the world does this look like in real life? In action. We're going to look at the Cockpit project. We're going to use this as our example of continuous integration and delivery in action. What is the Cockpit project? It's an admin interface for Linux. You can drive it in your web browser. You click around and you can do all sorts of stuff.
I would love to talk to you for an hour about this, but unfortunately I cannot. You can drive containers. You can configure your networking. You can do troubleshooting, diagnosis, all sorts of stuff there. So one of the interesting things about Cockpit that's really relevant to this is that it talks to the system directly. It's a real Linux session in your web browser, and the JavaScript interacts with the system directly. Here we're running a ping command directly out of JavaScript. This is supported. This is just part of the normal flow of how Cockpit works. Each of those things is implemented in different ways like this. That was executing a command. Here we're going to execute a call to a D-Bus API. So get that: from a browser, we're calling each part of the system directly. Here we are: we open a proxy to a D-Bus object. This JavaScript has no idea about this API. It doesn't know about it natively. It doesn't have any intuition about it. It figures out what the API consists of, and here we're calling a method on that API and we're changing the hostname of the system. So there you go. And just to prove that it actually does it, we're going to check that from the command line. Again, I would love to talk about this. This is really cool, and this is why I work on Cockpit, because this is amazing capability. But to summarize: Cockpit is only the presentation layer. There's no mid-tier. We talk directly to the Linux system. In fact, we talk to so many different projects and so many different parts of the system. This is just a list of projects we have contributed patches to and fixed. The list of things we talk to is about three times this long. If you want to know more about Cockpit, there's a hackfest going on right now. Go outside, around the corner, upstairs in this building, room C236 at the end of the hall. If you're leaving this talk and walking out, I'm assuming that's where you're going.
If you talk to like 100 different things all the time, on all sorts of different Linux systems, you can expect a disaster. You can expect those things to version at different rates and just to be an absolute combinatorial explosion. So how do we do this? It would not be possible without continuous integration. This is an example of something that is made possible by continuous integration. It lets you do amazing things you would not otherwise be able to do. We bring up 10,000 instances per day, VMs, real operating systems, per day, to test the changes that go into the GitHub pull requests we create on an average day. When you open a pull request, you will see something like this. You can see the test suite running on all sorts of different operating systems. There's Ubuntu, there's RHEL, there's different versions of Debian, CentOS; there's different browsers, Firefox, and so on. And each pull request will bring up about a thousand different instances in all those different ways. We found this way cheaper than finding these bugs later. And this happens on every single change. And we have built in scaling recently. I didn't start with this. We built scaling so that you can bring up additional machines and it will just make the tests go faster. And if some of those go down, it will just make the tests go slower. This kind of stuff scales, it's distributed. This is really cool. I'd love to give a whole talk about this, but we're going to move on. To continuous delivery: we release 50 times a year, weekly. Like I said, we pick an arbitrary change and that's the one we release. We sign a tag in GitHub and it becomes a release. What does that mean? It creates tarballs, uploads tarballs, does scratch builds, pushes to Koji, does a Bodhi update, COPR builds, Debian packages, Ubuntu packages, Docker Hub containers, documentation uploaded and versioned for that release. I think someone's bringing in Vagrant image creation. All these things happen when, like I said, we sign a tag.
That's the only action that the user does, or rather, that one of the project contributors does. And then all that stuff happens. So obviously you can't do all of this by hand, even with thousands of people from all over the world working on this; people can't help you there. You need robots to do all those tasks. Otherwise it'd be extremely wasteful. And again, if you want to figure out how we do this, continuous integration and delivery, the actual implementation details of what I just showed you: go to the hackfest. Go to the hackfest in room C236. Is that the live stream? Yeah, that is the live stream. And this is, yeah, it's very productive up there right now. Okay, so, the point of this talk is not to say, oh, this is amazing. Imagine if, in the winter, people came and took down all the leaves, and in the summer and springtime they put them all back up. It would take hundreds of people to maintain one tree. That's just not how nature works. This is how it works. You have a seed. Put the seed in the ground. And it grows. Sometimes it's ugly. Sometimes it's pretty. But it grows by itself. And that's how CI will grow in your project. And here is the seed. I'm going to show this more than once. It's important. This is the seed. One, make the tests changeable by the people who make the changes. Make the tests in your project changeable by the people who made those changes. For every change, you can change the tests. And two, post rapid feedback back on each change. Post the results of running a test, even one test, back on each change. If you have one test, even just yum install the thing, start with that. Make that test changeable and post the results back. And everything else follows from that. If you miss either of these points, you're going to put a lot of effort into CI and it's going to be carving the tree by hand, painting the leaves by hand. If you get this part right, the tree will grow.
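To make the second half of that seed concrete, here is a rough shell sketch of what "post rapid feedback back on each change" can boil down to with GitHub's commit status API. The repository name, commit SHA, and status context are invented for illustration, and the script only prints what it would send, so no token or network is involved.

```shell
#!/bin/sh
# Sketch: posting a test result back on a change via GitHub's commit
# status API. Repo, SHA, and context are hypothetical placeholders.
# We print the request instead of executing it.
set -eu

post_status() {
    # $1 = commit sha, $2 = state (pending/success/failure), $3 = description
    url="https://api.github.com/repos/example/project/statuses/$1"
    body="{\"state\": \"$2\", \"context\": \"verify/fedora\", \"description\": \"$3\"}"
    # A real driver would run:
    #   curl -H "Authorization: token $GITHUB_TOKEN" -d "$body" "$url"
    echo "POST $url -> $body"
}

# A test driver would do something like:
post_status 1234abcd pending "Tests running"
post_status 1234abcd success "All tests passed"
```

That is the whole trick: a status per change, linked to the change itself, is a small amount of glue once your tests exist.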
This is how we started with CI, and all sorts of things I didn't imagine at the beginning grew out of it. So, the corollary: we're talking about the changes. We're not talking about master. Testing things once they're merged together in git, or testing nightly, we'll come back to that in a second. But we're talking about the changes, and that implies you're testing before you merge. You're testing a change on its own. CI then lets you identify: that's the change that broke it. Because the tests are testing all sorts of stuff. But CI grows from that: the fact that you're testing every change, by the person who made the change, and you allow them to change the tests. Your CI tree will grow all sorts of wonderful, scary and horrible and beautiful, amazing things. And this is some of them. You'll use a test framework. We can all share test frameworks if we want. But you might grow your own. A lot of people do. It's not a surprise. Some trees do different things than others. You can even change some of these pieces as you grow. You'll probably move your packaging: if you use spec files and RPMs, you'll probably get those upstream at some point. Why? Because you're integrating it as deployed in production. This will be part of the growth of your tree, getting those spec files upstream. You'll start to gate. You'll start to say, I'm not going to merge this until the tests pass. That usually happens pretty early on in this growth of CI. But it happens after you start. You don't necessarily start with that. Scaling out, distributing your stuff, tracking known issues that you can't track down: it happens, every thousand test runs something goes bump, and you'll probably invent a way to track that. We can share knowledge on each of these things and look at what other projects have done, how they've solved them, and the infrastructure to run your tests as well. You might grow into infrastructure like CentOS CI or OpenShift Online, where you have this kind of stuff available to you.
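As a toy illustration of that corollary, testing the changes themselves rather than master, here is a shell sketch with entirely invented names. It builds a throwaway git repository with three commits and runs a trivial "test suite" against each commit on its own, the way a CI driver tests each proposed change:

```shell
#!/bin/sh
# Toy sketch: run the tests against each individual change, not just
# the merged result. Repo, commits, and "test suite" are stand-ins.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "CI"
for n in 1 2 3; do
    echo "change $n" > file.txt
    git add file.txt
    git commit -q -m "change $n"
done
# Walk the changes oldest-first and run the "test suite" on each one
for commit in $(git rev-list --reverse HEAD); do
    git checkout -q "$commit"
    test -s file.txt && echo "PASS $commit"   # the entire test suite
done
```

When a test fails here, it fails on a specific commit, which is exactly the "that's the change that broke it" property described above.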
You might start with Semaphore. There's lots of different options there, but this will grow as you do. You don't have to be scared of choosing the right one. I could talk about all these things a lot. I have opinions about these things. Sometimes strong opinions. Sometimes ugly opinions. But I'm not going to. And that further underscores the fact that it's the seed that matters. These are secondary. It's that seed that matters. This one. Those two things: make the tests changeable by the people changing the software, and provide feedback on every change. So what are the things that you can do to destroy your tree? Come at it with a flamethrower and just burn it to the ground. Imagine Linus Torvalds said, hey, I'm going to take the build stuff out of the kernel source; you can just give me a tarball telling me what that thing built. Well, we'd pretty much fork Linus Torvalds. I mean, everyone would fork it right away. This wouldn't work. That's the kind of flamethrower I'm talking about. Here's the short list. There's probably more. And it turns out one of my slides hides the bullets. Nice. I wonder if I can fix this, because we're going to have problems with this. Just give me a second. Can you see them now? Yeah, I'll do that. So: hide your tests. Don't make them available to the people making the changes. That's a pretty surefire way to make your CI die. You might use them in other forms of testing. You can use them in acceptance testing, regression testing, all sorts of other testing. But if you want CI to die, do that. Schedule your tests. Don't do them on every change, schedule them nightly. Make your CI die. Sure, you can do that for other forms of testing, acceptance testing, performance testing, lots of stuff like that. But it's not CI. Hide the results. Store the results in a cool, dry place. Make people look for them. That's going to make your CI die. Test after you merge, not before. That's going to make your CI die.
And one last thing that somehow didn't make it up there: force everyone to rewrite their existing tests in your beautiful new framework. That's going to make your CI die. So, this is DevConf, not BullshitCon. So let's actually look at some code. Here's an example for getting started. Some of you have already started, so pardon us while we go through the basics, but I want to show you how simple it is to get started. And we even have an example repo ready for you. So if you want, get out your laptops and we can actually do this. I'm not kidding. There's a repo called cockpituous on GitHub. You can type it into Google to find it. I'll also have the URL posted towards the end. There's an example directory in there. And there are two test suites. They're very simple. You can add your stuff there. And a setup file for Semaphore. What this does is it brings a Fedora userland into Semaphore. There are no excuses here. You can't say, oh, Semaphore only has Ubuntu. Well, tough. It's trivial to bring in another userland and integrate it the way your user would in production, or against multiple ones. And then run these test suites. You see this on every pull request. This is what the output looks like. And here in our case we're just proving that it is Fedora. And then we have one job here that fails. And that's posted on every change. And if you go back, I can bring this up, actually. Let's do that. So here's cockpituous. We'll come back to this repo a lot. This is the URL here. Here's the example directory. These are trivial: prepare pulls the image. We have 10 minutes. Run mounts the image and runs the test suite. And the test suites, again, are really simple shell scripts that you get to put whatever you want in: yum install, start with that. This is so easy to get started, but it matches that seed. You get results back on every change the developers make, and the tests are hackable by those people.
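In the same spirit as those example test suites, a first test suite really can be a few lines of shell. This is a hedged sketch, not the actual cockpituous example: the install step is commented out because it needs root and a network, and the binary being smoke-tested is a placeholder for your own project's.

```shell
#!/bin/sh
# A deliberately tiny first test suite: install the thing, report what
# OS we are on, and smoke-test that a binary runs at all.
set -eu

# Step 1: "yum install the thing" -- a real suite would actually run:
# yum install -y my-project

# Step 2: prove which OS the test is really running on
if [ -r /etc/os-release ]; then
    . /etc/os-release
    echo "testing on: $ID"
fi

# Step 3: smallest possible smoke test ("sh" stands in for your binary)
command -v sh >/dev/null && echo "smoke test: OK"
```

Start with something this small, post its result on every change, and let the suite grow from there.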
So, because we're running out of time, we'll move on. If you already have a CI system that doesn't meet those rules, you're probably not posting the results back to the person who made the change. There's a way in. Nice. Cut off. In that same repo there's something called the sink. You can pipe your test results... Smaller. There we go. You can pipe your test results through the sink, and at the end put a JSON line, and it'll actually update GitHub for you and link to the URL that the results end up at. Again, this is in that same repository. Also, if you're interested, come to the hackfest and we can look at this more. What about delivery? Automating the delivery. That's not magic. It's a bunch of shell scripts. In this case we are going to make a tarball and patches from a git checkout. We have a command called release-source: we point it at a directory called source, and it creates a tarball with the right name, using make dist... I'm sorry, make distcheck, and creates some patches on top of it. This is the current state of the git checkout, and it represents the thing that you want to release.
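For intuition, here is roughly what the tarball half of a release-source style step boils down to, sketched against a throwaway git repository. The project name and the tag number are invented, and git archive stands in for what would be make distcheck in a real autotools project.

```shell
#!/bin/sh
# Sketch: from a tagged git checkout, produce a release tarball whose
# name comes from the latest tag. All names are placeholders.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "release@example.com"
git config user.name "Release"
echo "hello" > README
git add README
git commit -q -m "initial"
git tag -a 130 -m "release 130"

version=$(git describe --abbrev=0)    # the latest tag, here "130"
# A real project would run "make distcheck" here; git archive stands in
# to show the shape of the output.
git archive --prefix="project-$version/" -o "project-$version.tar.gz" HEAD
ls "project-$version.tar.gz"
```

The point is that every artifact name is derived from the signed tag, so no human has to type a version anywhere.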
Here we have release-srpm, which takes those sources, updates the spec file, puts the git tag in the changelog, puts those patches in, sets the version and revision number correctly, all without human interaction. Here we have release-koji, which actually does another update to the spec file to make sure that the revision matches what gets checked out from dist-git, increments there, merges changelogs together, and commits it, pushes it, makes sure it works, and so on; all of the tasks that you would do manually just happen without any human interaction. Then release-bodhi makes a Bodhi update using that same git tag text, which hopefully was descriptive, that's not hard, and makes an actual Bodhi update so that it goes out. There's a bunch of scripts in this directory that give you the idea that releasing software, delivering the software, is not a manual task. Imagine you see a cobbler by the side of the road and he has a beautiful shoe. He's polishing it, he's done such beautiful stitching on beautiful leather and everything, and he shows you this shoe, and it's amazing, it's an amazing shoe. You're so impressed. And you're like, where's the other shoe?
He's like, no, I made this shoe, and I spent my whole lifetime making this shoe. Just so. That's what a lot of people's attitude towards packaging is, whereas packaging should really be: you can get it out, and get it out regularly. And that ends up being one of those feedback loops in Agile: you get it out to people who can then use it. So all of this is in a repo called cockpituous, and again, come to the hackfest. Let's review. Continuous integration: assemble everything together as it would be in production, and test it like a real user, from the outside. Do that for every single change. And here's your seed to get this started, and from which the rest grows: make the tests changeable by those who make the software changes, and post rapid feedback on each change. All right, fire away. Yes, how many tags, or tests, does the repo have? Our repo has git tags; we have the numbered tags for each release, and we're at around 130 of those. Plus, for each RHEL release, we also track branches for downstream releases, with all the patches that are in them, so we can run our CI against them as well. So in the case of an upstream project, I imagine we currently have about 200 tags, but git handles that very well. As far as I've seen, we haven't found that the tags slow down the tests, though. Eventually, as your tree grows, you might have such a big body of tests that you start to optimize them, and in the case of a project like Fedora it is important to choose the right tests, because hopefully there'll eventually be a forest of tests. Like I said, rapid feedback is important, and so at some point in your tree, not at the beginning, you maybe don't run all of these tests on every change. Yep, so it's a question about the release-source tool and how it makes those patches and changes. It turns out that parsing is, well, you can look at the implementation, you can look at it afterwards, it will be fun. And people have been contributing back; we already have some people using this, the security guys are using it, you're
using some of this, and so people are contributing back cases where it doesn't work for them, and we're merging them, getting that tool to work better. But yeah, it is possible to completely automate editing a spec file. Yep. So the question is how many lines of code versus how many lines of test code we have in the Cockpit project, and it does look like about 30% to 40% test code. Many times, when fixing a bug that gets reported by a real user, we look at the bug, and the first question that people ask in the team is: why didn't we find that before? This becomes part of your system once you have CI. And so it's almost always a one-liner fix for a bug; yeah, it's a one-liner fix, and it's a 20-30 line integration test. So when you're fixing bugs, that ratio goes the other way. In addition, time-wise, you do end up spending around 30% of your time writing tests, at least that's what we found in our project, because we've integrated so much stuff. But that allows a small team to do what would otherwise require a massive QE team. Yep. No, this is question time; actually, I'm talking way too much. Go for it. It's a good point: you want the results as soon as possible, ideally. How long does it take to do this, and how long does it take for one pull request? So, we have 13 test suites currently, and it depends. If there's one pull request open, like I said, this is distributed and scalable, and we have machines that come up and find tasks to do, and they will parallelize all of this. You'll start to get simple things like feedback from Semaphore quickly, within about 5 minutes. Each of these test suites, depending on how heavily loaded the machine is, in our case takes about 10 to 15 minutes, and multiple of these are done at once. If everyone goes right now and opens a pull request at once, obviously that would take a long time to drain, but if there's one or two or three pull requests that have been opened recently, the test drivers will
share the load across them, and you'll probably get feedback all the way across within about 30 or 40 minutes, given the current capacity. We have about 5 or 6 test runners; some are dedicated machines, some are OpenStack instances, so they're all over the place. Like I said, it's distributed, and the runners pick up tests based on what needs to be done. You get around 90 minutes to 2 hours of a developer's attention before they start totally disregarding your CI.

I have a question for you: how did you get the time to do this, alongside all the features you were required to put into Cockpit? Cockpit lives and dies by its testing. One of the cool things about CI is that it lets you do things you could not do otherwise, whether that's fast-paced software development or completely new software that could not exist without CI, and Cockpit is in that bucket: it absolutely cannot exist without CI. The whole architectural model of talking to those systems directly, rather than inventing yet another management layer on top of every single one, is only possible because of CI. You should have continued... what?
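The pattern described above, a one-line fix that ships together with a much longer regression test, can be sketched as follows. This is a hypothetical example, not code from the Cockpit codebase: `parse_size`, the bug, and the inputs are all invented for illustration.

```python
def parse_size(text):
    """Parse a human-readable size like '10M' into bytes.

    Hypothetical reported bug: input with surrounding whitespace
    ('10M\\n', as pasted by a real user) raised ValueError.
    The one-line fix is the .strip() call below.
    """
    text = text.strip()  # <- the one-liner fix
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text and text[-1] in units:
        return int(text[:-1]) * units[text[-1]]
    return int(text)


# The regression test is deliberately longer than the fix: it pins down
# the user's exact input, plus the neighbouring cases that "we should
# have found before".
def test_parse_size_regression():
    assert parse_size("10M\n") == 10 * 1024 ** 2  # the reported input
    assert parse_size(" 512 ") == 512             # plain bytes, padded
    assert parse_size("1K") == 1024
    assert parse_size("3G") == 3 * 1024 ** 3
```

The test sits in the same file as the fix would, which is exactly why the line ratio flips when you fix bugs: one changed line of code, twenty-odd lines of test.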
Right, so the question is my opinion on what to test. My opinion is: start very simple. Start with installing the thing and making sure it comes up. Don't get too ambitious, or you will never actually plant that seed, and you have to plant that seed, even if it's just one or two tests. Then you start to expand your coverage, and some of that comes naturally: once the developers on a project see the value of the tests and hate finding bugs later, they want to write the tests earlier. As coverage grows, test the things people actually do: if you need a login test, write the login test; if you have command-line tools, test your command-line tools; and so on. There is no one answer here, but especially: write a test for every bug you find. And what you notice as a developer yourself is that for the manual checking you do to verify that your thing works, you think, why am I doing this over and over again? I'll just write a test for it, even just for myself, because otherwise I'd have to do it five times by hand.

If we go back to code review for a bit: do you have rules about who can do a code review? For example, should code review be done by more experienced people, or is there some other strategy? So the question is about policy for who does code review. The answer is that when you are starting out, any code review is better than no code review. In fact, if you're in that position, reviewing your own change is better than no code review at all: even just seeing it in patch format, you will find bugs that you would not otherwise have found. Once people do code review and you have continuous delivery, ship it, and you will converge on what feels right for your project. Having someone less experienced do the review is absolutely not wrong; that is fine. In the back there.

The question is about automating the merge: gating the pull requests. That is one
thing we haven't done in our tree yet; other projects have. Some projects decide to serialize things: they always rebase or merge the change onto master, run the tests against that, and only then merge; they have bots for this. We have decided, and there are tradeoffs here, to run our tests against the current master, and if they pass, someone can choose to merge. Both approaches are valid and both have tradeoffs and negatives. So that is something that is not in our tree yet, but it is part of what belongs in your CI.

Yes, so do we test the software as it would actually be deployed? I would suggest running your software as you would in production, assembled as it would be in production. In our case, that is why we have that list of various operating systems. He has his hand up right there.

Yeah, I'm curious whether you have found any limitations on this. Going back to the seed that was planted at the beginning and how well it worked out: communication has actually suffered quite a bit, and it seems like humans can't really deal with more than about 25 people. I wonder if you have found any strategies for this. Cockpit is a relatively small project right now, 6 or 7 people: a two-pizza team, maybe three or four pizzas. The foundation of Agile is actually built on the premise that smaller teams are better than larger teams. The amusing part is that the industry has this notion that you can scale this process and make it work. How many in this room think that is a terrible idea? I am going to seed the room, because, yeah, it is hard. It all boils down to the idea that you need to scale communication, and typically, in my experience, and I will be very clear here, engineers think that the best method of communication is email. How many agree? Thumbs up. Communication experts suggest that email is not the only method of communication, surprise, and human
brains do not actually process information simply by seeing one email. People need three different methods of communication to really digest what you are saying: text could be one of those methods, voice is another, and something visual is another. So at Red Hat, get this, we send an email, we have monitors all over the company that put the information up on screens so you can see it wherever you are, even in the kitchen, and we also do conference calls where people can dial in and listen to what somebody wants to say. Relying on a single channel is likely the foundational problem. So you're saying that when I ping you on IRC to say I want to talk about an email I just sent you, I should also call you on the phone, and also WhatsApp you, Telegram you, Facebook message you, actually SMS you, and then send you an email, and you might respond. And a pull request is another form, yes.

So that would fall under performance testing. The question is: do we have some way to detect when something is implemented extremely poorly, for example an SQL query that is just wrong for the situation and takes way too long? That is what's called performance testing. I've found that performance testing is hard to bring into your CI because of the rapid-feedback requirement, so that's not something we've done automatically yet. I would love to find a way to do it, and I feel like it is missing from our tree.
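One coarse way to approximate the performance testing discussed above, without a dedicated benchmarking setup, is a "performance canary" with a deliberately generous time budget. This is a hedged sketch, not anything from the Cockpit tree: `lookup_naive`, the budget, and the data sizes are invented, and on shared CI hardware a loose budget like this only catches pathological regressions, not small ones.

```python
import time

TIME_BUDGET = 1.0  # seconds; generous on purpose to avoid flaky failures

def lookup_naive(records, wanted_ids):
    # Stand-in for the "implemented poorly" case discussed above:
    # a linear scan over all records. Fine at this size, but if someone
    # later makes it quadratic, the canary below should trip.
    return [r for r in records if r["id"] in wanted_ids]

def test_lookup_within_budget():
    records = [{"id": i} for i in range(50_000)]
    wanted = set(range(0, 50_000, 1000))  # 50 ids spread across the data
    start = time.monotonic()
    result = lookup_naive(records, wanted)
    elapsed = time.monotonic() - start
    assert len(result) == 50
    assert elapsed < TIME_BUDGET, f"lookup took {elapsed:.2f}s"
```

The design tradeoff matches what was said in the answer: a tight budget gives better detection but fails randomly on loaded CI machines, so a canary like this trades sensitivity for the rapid, reliable feedback CI needs.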
The question is whether we decommission tests that are no longer appropriate because some implementation has changed, and that is definitely the case. The best way to handle it, and the first part of the seed, is to store the tests in the same repo as the software itself if you can; that is by far the best way. Then, in the same pull request, you update the tests for the feature that has significantly changed. Last question.

How do you deal with authentication? A lot of the things you have shown require authentication because they are supposed to be done by a real user, I mean building packages and pushing and things like that. The Fedora infrastructure team is trying to prevent bots from doing this, and you are developing a bot which does exactly that. So the question is how we automate that much of the continuous delivery. You are correct that the current mentality is that of the cobbler; you see it even in changes rolled out in our infrastructure, for example ones incompatible with what happened last week. Everyone is forced, like the cobbler, to read the trade magazine and figure out which cows are going to be good for the next season, and to stay up to date with all of this stuff by hand. Which cows are good for the next season: the metaphors are a gem. But the reality is that many of the tasks you do can be performed automatically, and they can be encoded in scripts. Fedora has Kerberos for authentication right now: if Fedora requires that a script be run by a person, then put your credentials in at the top on the machine and run it that way. There you are: you are logged in and it's running, and you are not interacting with it at all. The last place I would suggest you interact with your CD is when creating and signing your release tag; everything else goes automatically from there.
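The "credentials at the top, everything else unattended" pattern described in that answer can be sketched like this. Everything here is a hypothetical illustration: the principal, the keytab path, and the `fedpkg build` step are placeholders, not a documented Fedora workflow. The only real mechanism assumed is that `kinit -k -t <keytab> <principal>` obtains a Kerberos ticket from a keytab file instead of prompting a human for a password.

```python
import subprocess

def kinit_command(principal, keytab):
    # Non-interactive Kerberos login: -k uses a keytab, -t names the file.
    return ["kinit", "-k", "-t", keytab, principal]

def run_release_step(step_cmd, principal="builder@EXAMPLE.COM",
                     keytab="/etc/cd/builder.keytab"):
    # Authenticate once at the top of the script...
    subprocess.run(kinit_command(principal, keytab), check=True)
    # ...then every later step runs with the ticket, no human involved.
    subprocess.run(step_cmd, check=True)

# Usage (would actually contact the KDC, so not executed here):
# run_release_step(["fedpkg", "build"])
```

A keytab on a locked-down CI machine is the usual way to keep this from being the "bot with a password in a file" the infrastructure team worries about: the credential never leaves the machine and can be revoked independently of any person's account.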
Thank you very much. Thanks, everybody. Oh, and go to the hackfest. Yeah. But only if you can type like this.