All right. Hey, the numbers are going up. We're recording. All right. Hello, everybody. We're here to talk about testing leading edge kernels. This is a good talk if you're the type of person that looks at Fedora or Rawhide and thinks, you know, that's too stable, I need something more exciting in my life; then you're in the right spot. So first off, some quick introductions. I recognize a lot of you in the audience, but there are some of you I don't recognize, so I'm guessing you may not recognize me. My name is Paul Moore. Hello. Hello to the video people as well. Those are my email addresses; you can get ahold of me at either one. In fact, a lot of times you'll see me reply from one or the other, depending on what email I'm checking at that particular moment. That's my Twitter handle up there as well. I'm not as good as a lot of people are about Twitter, so it might take a day or two for me to realize that you've sent something by Twitter, but I will get it eventually. So what do I do? Well, for the past three years or so I've been an SELinux maintainer, for the past couple of years I've been maintaining audit, and I've been a labeled networking maintainer for probably longer than I care to remember. I also created and maintain the libseccomp project. What we're talking about today isn't going to be specific to any one of these projects, but if you'd like to talk about one of them, feel free to grab me in the hallway. I'm here until Friday morning, and I'm happy to talk about anything. If you realize you have a question but the conference is over, that's why we have email. Send me an email; I'm more than happy to talk to you about it. So what are we talking about today? Well, we're talking about kernel testing.
We're talking about some of the problems I see that we currently have, some of the things I think we can do to improve our kernel testing, and also finding a way to get our development patches into the hands of people that may not be comfortable dealing with patches. We have a great community of people that really want to help and contribute however they can, but some of these people who want to contribute aren't always comfortable dealing with patches. You can't just say, hey, here's a patch, because they'll say, well, what do I do with this? I can't do this. So how do we make our cutting edge development, those bleeding edge changes, more accessible to people? The first place we need to start: a quick show of hands, how many people are familiar with kernel development? Are you kernel developers? Okay, so a couple. So most of the room isn't. I'm going to quickly go over what the kernel development cycle looks like, since most of you may not be familiar with it. The first thing, as with any sort of development: the developer sits down in front of their computer and says, okay, I want to add this feature, I want to fix this problem. So they come up with patches. Maybe it's one patch, maybe it's a patch set that's got a dozen patches in it; you never know. Everybody sits down and works on their patches, and once they're happy with them, they post them upstream, usually to a mailing list. Sometimes it's a pull request, but for kernel development it's typically the mailing list. There, the patches are reviewed, by the maintainers usually, but other people sometimes chime in too. Maybe they say, okay, thanks, this looks great; or maybe the patch that you thought was awesome turns out to be not quite as flawless as you thought it was.
But that's okay; you go through the review process, you make some changes, and after a few rounds you get to the point where everybody likes it, it's great, and the maintainer says, hey, this is perfect, I'm going to merge it. Awesome. Unfortunately, that's not when you're done. Well, you as a developer are pretty much done: you got your patch in, and that's great, congratulations. But this is where the maintainer's job really starts. What usually happens is they'll take that patch and merge it into their next branch. This is going to vary a little bit, each subsystem does things a bit differently — the branch might not be called next — but the idea is the same. When Linus releases his kernel — we just had the Linux 4.7 release, what, a week ago — once he pushes that out into the world and everybody says, oh, you created a new kernel, that's when all the maintainers' work really starts. They take everything in their next branch, all the work that's happened since the last kernel release, and they send an email out to Linus saying, hey, okay, here are all our changes for Linux 4.8, the next kernel. Linus takes a look at them, and hopefully he just takes all those patches and puts them in his tree. Great. Now your patch has finally made it into Linus's tree, so it's on its way to actually being out in a released kernel. This goes on for about two weeks; we call this the merge window. We're actually in the merge window for 4.8 right now, because like I said, 4.7 came out about a week ago, so there are a few more days left in the window, I think, unless I've mixed up the dates. Anyway, once the merge window closes, Linus puts out the first release candidate for the kernel. So the next one we'll see is 4.8-rc1.
So he puts that out into the world, and sometimes it's in pretty good shape, but more often than not there are some bugs in it, because this is the first time all these new patches have been tested together in one kernel. If you use Rawhide kernels, you probably know the rc0 and rc1 kernels; you know those might be ones where your machine might panic on something. It doesn't happen often, but it can happen. Then we go through about eight weeks of testing, where we try to shake all these bugs out so we can get to the actual proper kernel release. Linus releases a new release candidate every week, usually on Sunday. And like I said, we do that for about eight weeks. Sometimes it changes: if Linus is traveling he might shorten or extend it a week, or if things are looking particularly bad he might go an extra week or whatnot. But in general, for most releases, it holds. So what does this mean for testing? Like we talked about, we've got these eight weeks. And we're lucky enough... I apologize, I know names, I don't know faces. Is there anyone in here from Fedora that packages up kernels? No? Okay, I was going to say they do a great job — and they do do a great job — I was going to give them a round of applause, but they're not here to hear it, so they lose out. Anyway, the Fedora kernel team does a great job. They package up all the RC releases for Rawhide, and they issue multiple Rawhide kernels per week, so they're really trying to stay pretty close to Linus's upstream tree. It works out really well. The one gap, at least for me as a kernel developer and maintainer, is these next branches we talked about. These are the ones that have all the really leading edge development. We've heard it a few times today: leading edge, bleeding edge. Fedora wants to be leading edge. I want to be leading edge. Like I said, Rawhide's too stable for me; I want more excitement. But these next branches aren't packaged up for Fedora. And that's okay.
For most people, it's probably a little too unstable, a little too adventurous. But there is the linux-next repository. So Linus maintains his kernel repository, and the linux-next project maintains their own kernel repository. What they basically do, every day — if you're on the east coast of the U.S. it usually happens in the afternoon, so that would probably be the evening for most of you — is go through and pull from all the different maintainers' next branches, merge it all together, and try to do a kernel build. It's actually a lot of work, and it's really impressive that they're able to do it on a daily basis. It's very cool. The only problem is, it's primarily focused on finding merge problems. If you've done any software development and you've been applying patches from multiple people, you know that sometimes the patches conflict; they don't always go in together cleanly. Two people are changing the same file in the same area, and sometimes you have to do some manual fixups. linux-next is great for that. It gives you a heads up: okay, there's some other subsystem that's playing in the same files here, so it might be a red flag that you may need to do some extra testing there. There is some automated testing that happens; there are test robots that go out and try to build everything, and they'll report build failures and whatnot back to us. But the coverage of those tests isn't always clear to me, so I'm not sure how much automated testing actually happens. At least, I don't know of any test robots that actually run our test suites against the linux-next kernels. Also, general user testing is rare. I don't know of a single user that takes a linux-next kernel build and runs it on their machine on a regular basis; at most it's occasional, when problems are reported.
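As an aside, for anyone who wants to poke at linux-next themselves, pulling it down looks roughly like this. The remote URL is the standard kernel.org one; the `latest_next_tag` helper is just a hypothetical convenience of mine, not part of any official tooling:

```shell
fetch_linux_next() {
    # Run inside an existing clone of Linus's kernel tree: add linux-next
    # as a remote and fetch it, including the daily next-YYYYMMDD tags.
    git remote add linux-next \
        https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
    git fetch linux-next
    git fetch --tags linux-next
}

latest_next_tag() {
    # Pick the newest daily tag from a list of tag names on stdin,
    # one name per line; next-YYYYMMDD tags sort correctly as strings.
    grep '^next-' | sort | tail -n 1
}
```

From there, something like `git checkout -b test "$(git tag -l 'next-*' | latest_next_tag)"` gets you a tree to build and play with.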
Somebody might try it out, but it's not happening weekly or day-to-day; it's not regular or current. And because of that, we usually don't catch interaction problems. For example, if the network stack does one thing and SELinux does something else, independently each might work just fine, but when they both end up in the same kernel, your networking might break. Things happen, and that's not great. So at this point, I think most of the points on this slide should hopefully make sense or be obvious. Like we just talked about, widespread user testing really doesn't happen until these patches land in Linus's tree and rc1. And that gives us eight weeks, or less, to find the problems and fix the problems. And that's eight weeks if we find it in rc1. If we find it in rc7, in week seven, we've got a week. If the problem is really tricky — maybe it requires seven or eight machines in a very specific configuration, and you have to let them run for three or four hours and move the mouse just right in the 77th minute — good luck trying to solve that in a week. And this gets to the second point: if it's a serious problem, like if your patch breaks Linus's laptop, that's going to be a rough day for you. A rough couple of days, probably. Kernel developers don't like it when you break their machines. Linus really doesn't like it when you break his machine. So we really want to try hard not to break anything in the kernel. Really, really try hard. Otherwise, you will get lots of mail, and it's generally not a nice ride. The other problem — this sounds a little funny, and as I was typing up the slides I was trying to figure out how to convey this — is that the next branches are based on the last released kernel. Linux 4.7 was just released, and we're in the merge window for 4.8 right now.
Once we get past the merge window, after we push everything up to Linus, all the subsystem maintainers will start accepting patches back into their kernel trees again. We base our next branches on the most recent release we can; Linus only wants you to base them on released kernels, no RC releases, just things like 4.7 and 4.6. So for example, for SELinux and audit, I'll go ahead and base mine on 4.7. By the time you get to the next merge window, that's eight weeks from now, which means all those patches are based on kernel code that's eight weeks old. Now, you're thinking to yourself, that's two months, that's not very old, right? But in the kernel, a lot can change in eight weeks. A lot can change in eight weeks, especially in other subsystems, and those interactions are exactly what we're trying to catch. Since we're not always testing the most current code on our next branches, that can be a problem sometimes. The next thing is a simple one, and it isn't necessarily a kernel problem; it's just a software development problem. Most developers, and I include myself in this, we love to write code. We don't always like to run tests. When we're developing some new piece of functionality, we'll do some testing to make sure it works; that's just how things go. But what about six months down the line? Are you still running those same regression tests on that piece of functionality? What about three years from now? The functionality is still there; are you testing it? Some do, but a lot of the subsystems don't. We rely really heavily on code review and, like I said, ad hoc testing: someone is submitting a new patch, and here's how we tested it. A lot of subsystems don't have regular regression testing, whether it's whenever somebody submits a new patch, or for every kernel release, or every week, or day, or whatever. There's no real easy way to see: when did we break something?
Or, like I said, when did another subsystem change something that causes side effects and problems in your subsystem? So that's kind of where we're at; those are some of the problems. So I sat down and thought, how could we fix this? These are my aspirations, some of my goals. I want to make sure we have regular testing of the latest upstream development. I want to test as close to development as we can, so we can catch the bugs early. I think everybody knows: the sooner you can catch a bug, the easier your life is going to be. I definitely want to catch it before it gets out and normal users start banging on it and we start getting those problem reports; that's not good. I also want to make test kernels more accessible to people. Like I said, I interact with a lot of people, and a lot of people send in problem reports, and that's great. One of the things I like most about open source isn't so much the open source aspect — I really like that — but I love the community around it. Everybody really is excited about it. They'll go out and try new software, and they'll let you know when it breaks. That's wonderful. And some people even send really detailed reports: here's how I broke it, here's how you can recreate it. That's wonderful. But sometimes these users, they're enthusiastic, they love open source, but maybe they're not comfortable building their own kernel and installing it and whatnot. If you can provide them an RPM, though, they're happy to test out the RPM. So, how do we take all these patches, all these fixes, and put them in an easy form so that people can put them on their systems and try them out? Oh, and the last one: it needs to be cheap. That's important, largely for selfish reasons. But I also want to make whatever we come up with easy for other people to use as well.
You know, if it's very complicated, requires difficult setup, or is written in some weird special language that nobody knows, nobody else is going to use it. It's useful for me, but I'm hopeful that maybe some of you out there will find this useful as well. And as we'll see when we get a little bit farther down, there are very few things here that are specific to the kernel. So if you have a project that you're maintaining, you might be able to take some of this and use it for your own projects, if you're looking for a relatively easy way to do some regular testing. So, as you can see, it's got to be automated, right? If you have to do a lot of stuff by hand, you're just not going to do it. If it's automated, though, you can run it very often, and the more often you can run these tests, the better your chances of catching something. Near-zero cost: I'm cheap. I don't want to spend a lot of money putting this in the cloud. I also don't want to spend a lot of my time maintaining complex infrastructure, making sure patches are applied; and when someone comes out with a new version of the test infrastructure, do I have to go and update all my systems? I don't want to do that. And if you travel, you want this system to be able to run when your connectivity is not great, so you can pretty much run it on your laptop. That was kind of my goal. When I started looking around, you hear people talk about continuous integration, and I thought, this is great, this is exactly what I was hoping for. It connects up the development, puts it close to the test environment, and it provides some mechanisms to deploy things, so that once everything passes the tests, you can go ahead and hand these shiny new kernels off to users and say, hey, play with this, try it, break it, and send me the pieces back.
Everybody sort of knows continuous integration, right? Yeah, okay. So, continuous integration: it watches your repository, you put a new commit in, it takes that commit, builds a new package, runs it through some regression testing, and if everything looks good, it pushes it out and deploys it. Life is wonderful; a constant stream of new packages. I've talked to some people where it's something like a release every five minutes. It's absolutely insane. I'd be happy with once a week; if we can get to once a day, that'd be great, but we don't need much more than that. And there are lots of services and products that have popped up around this. They're very popular with web and app developers; I'm sure you all know that, so we'll move on. However, as I was looking into some of these continuous integration things, Jenkins in particular, which I'm sure you've heard of, I found they were heavy. They were complicated. They weren't exactly well suited for what I was trying to do. They were tightly connected systems with elaborate back ends: you have a builder, you have a test system, you have management systems and orchestrators that oversee everything. And I'm looking at this and thinking, how am I going to run that in my basement, let alone carry it around on my laptop? So yeah, the management cost, the hardware cost, it just wasn't working. The other thing is, it wasn't clear to me that it was well suited for kernel testing. Because if you develop kernels, the kernels tend to panic every once in a while. Not your code, of course — everybody else's code is great — but my code? I always manage to crash the system when I write code. So I needed a test framework that could handle that.
It needed to gracefully deal with a crashed system, so the system wouldn't be wedged for a couple of days until I came back and was able to do something about it. And as part of that process, it also needed to be able to handle rebooting into different kernels and whatnot. I couldn't find any continuous integration systems that met all those requirements. If anybody knows of one, please let me know; I'd love to hear it. Not all at once, guys. All right. So I said, okay, there's no continuous integration solution, none that I could find anyway. If I was going to try and come up with one, what were the things I needed? I tried to simplify it, because I didn't really want to start another project to do this; is there some easy way I can do this? So I figured, okay, the first and most important thing is I need to either create or find a set of regression tests that I can run that are relatively simple, self-contained, single system; so I could have a VM or a real hardware system, and I would only need one. I didn't need some elaborate network of systems or anything. I also wanted something where I could take individual tests out of the test suite: if one of those tests was failing, I could post it to Bugzilla or something and use it as a reproducer, and I could hand it off to other people. I didn't want to have to pass around a whole huge heavy test suite; I wanted to be able to isolate individual tests to demonstrate the problem. I also wanted to come up with a way to automate the patching of Fedora Rawhide kernel RPMs. Because, as we talked about earlier, in Rawhide they already do a good job packaging up the RC releases. So I figured, okay, that's pretty close; it's basically what we see in Linus's tree. I don't want to try and duplicate all their work; they've already got a team of people working on that.
I just want to take the patches that are in my development tree, find a way to put them on top of the existing Rawhide kernels, and do that in a way that's automated, so I don't have to sit down and spend an hour each day patching everything by hand. And once I had this new kernel source RPM, I wanted to automate the building of it, and then find a way to host the resulting binary RPMs somewhere where other people could try them; preferably somewhere people could just point DNF at and pull down the freshly built kernel and go from there. And then the last part: I obviously want to automate the testing, right? I have my fresh new kernel, I have my regression test suites; now I need to actually run the regression test suites against the kernel. It's funny, this is probably the least involved piece of everything, compared to all the other stuff that needs to be done, but without this part the other stuff isn't quite as interesting. It's the most important bit. Now, I wanted to do this with VMs, not real hardware, simply because, you know, I don't want to crash my laptop and potentially lose things while I'm here at a conference; that's no fun. And if I did it in a VM, I could either host the VM on my laptop while traveling, or, if I'm back home and have good network connectivity, I could host it on one of my systems. It gives me some options. So, progress: what have I been able to do so far? Well, the good news is I was able to find a test suite for SELinux that worked pretty well. That's great. Whenever I can reuse code I'm very happy, because my whole goal was not to have to write another test suite. Unfortunately, I couldn't find a good one for audit that was simple. Audit does actually have a really good test suite.
If you've ever heard of Common Criteria, there's a really good test suite that goes along with it for audit. Extremely comprehensive, very involved. But it's kind of heavyweight for what I wanted to do. It was hard to isolate individual tests, and some of the tests required involved network setups. It wasn't quite what I was looking for. So I did have to create the audit test suite project, very similar to the SELinux test suite project. I've also created some scripting that handles all the Fedora kernel patching: it grabs the source RPM and applies whatever patches you like. The same scripts that handle the kernel patching will also, optionally, submit the result off to a Copr project that you have. Copr is great. I don't see anyone here from Copr, right? Copr is awesome. If you haven't looked at Copr yet, if you're maintaining packages for Fedora, or trying to maintain a package for Fedora, check out Copr. It'll handle building your package for all the different platforms, and it'll deploy it in a repository so that you can point DNF at it. It couldn't be easier. Honestly, if it wasn't for Copr, I'm not sure I would have made as much progress on this as I did. It was really an inspiration: once I saw it, I realized some of these ideas I had would be much easier to do with Copr. So for the past year or so, I've been doing weekly builds and test runs with these new kernels and the test suites. Unfortunately, while the build is all automated, actually running the tests is still done by hand. I haven't quite finished the scripting for that, but it's in progress; I had a little bit of time, so I made some more progress in the past week or so. Hopefully in another month I'll have something that works well enough that I can post it on my GitHub.
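To give a feel for what that patching automation involves, here's a rough sketch in bash — not the actual scripts. The `# END OF PATCH DEFINITIONS` marker is an assumption about how the Fedora kernel.spec of the time was laid out, and `myuser/kernel-next` is a placeholder Copr project name:

```shell
add_patch_to_spec() {
    # $1: patch number, $2: patch file name.
    # Reads kernel.spec text on stdin, writes the modified spec on stdout.
    # Assumes the spec carries a "# END OF PATCH DEFINITIONS" comment
    # marking where local patches can be spliced in (an assumption here).
    local num="$1" file="$2"
    sed -e "s|^# END OF PATCH DEFINITIONS|Patch${num}: ${file}\n&|"
}

build_and_submit() {
    # Rebuild the source RPM and hand it off to Copr for building;
    # "myuser/kernel-next" is a placeholder project name.
    rpmbuild -bs kernel.spec
    copr-cli build myuser/kernel-next \
        "$HOME"/rpmbuild/SRPMS/kernel-*.src.rpm
}
```

On the consumer side, once Copr has built it, anybody can pull the kernel down with something like `sudo dnf copr enable myuser/kernel-next && sudo dnf update kernel` (again, a placeholder project name).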
Some of the lessons I've learned as I've gone through this process — some of them are pretty straightforward, and I think you're all going to go, yeah, obviously. Regular regression testing does work; you just have to do it regularly. Over the course of the year, I used the regression tests to verify all the new patches that people sent me, and caught a few things with them that maybe I wouldn't have caught otherwise. It also caught problems in other subsystems: just a week and a half, two weeks ago, something changed in the network stack that broke SELinux labeled networking. But because we caught it so quickly, it was a relative breeze. I looked back through the git log, looked at the commit, and said, that doesn't look quite right. We fixed it in a couple of days, whereas if it had gone unnoticed, we might not have caught it for a couple of months, perhaps longer, and we would have lost all that context. We would have had to bisect it, and it would have been a pain. So I would encourage you: even if you don't automate the process, at least set some regular schedule, at least monthly, to go through and run whatever tests you have. You'll be happy you did it. I've also noticed that my stress levels during the merge window and during the RC releases have gone down a lot, because I have much higher confidence in the code that I'm asking Linus to pull. When the merge window opens up, I don't worry: did I run all the tests, is everything okay? I'm running these weekly, at least weekly, sometimes more than once a week. So I'm much more confident when I send things up, and I don't think we've had any serious issues during the RC cycles. There have been a couple of small things, but easily fixed. So it's been pretty good. The automated patching works a lot better than I thought it would.
I was afraid that trying to automate patching the Fedora kernels would be difficult, but I haven't had to do much manual merge conflict resolution. So based on the experience of the past year, I'm actually thinking about automating this further, just because it generally works so well. Like I said, Copr: great tool, and I'm probably its biggest fan. Unfortunately, I've noticed some reliability issues recently — there was a talk, I can't remember if it was earlier this morning — so I asked them about it, and they explained there were some reasons and they're trying some different things. Copr is still being developed, of course, like everything, and they're aware of the reliability issues, and it should be getting much better soon. I'm very happy to hear that. But anyway, those are kind of the things I've learned. Here are some of the things I'm still working on. I talked about more automation, right? I already mentioned that last step, where you actually run the tests in your VM; that's the thing that's still done by hand for the most part. I hope maybe within the next month or two, assuming nothing else catches fire, I'll get that test script done. Once that's done, I also want to automate the triggers. Basically, I want to have something that watches the git repositories for SELinux and audit, and when it sees that I've merged a new patch, that kicks off a new kernel build: take those patches, put them on top of the Rawhide kernel, and build up a new kernel. And then another thing that watches my Copr repository, so that when a new kernel shows up there, it says, hey, throw that on the VM, reboot the VM — hopefully it doesn't panic — run the tests on it, and then probably send me an email with the test results afterwards. That part won't actually be too difficult, I think; it's the test execution that's the harder part, not the triggers.
Getting the triggers going should be pretty simple: a cron job that fires every hour or something. And then, obviously, whenever anybody talks about testing, there's always better test coverage; that's kind of a common theme, although I want to keep it within reason. We talked about the Common Criteria test suite: it's great and very comprehensive, but it was too heavy, it involved multiple systems, it wasn't something that could be easily and quickly run. I want these tests to be something that anybody who's developing kernel patches can take, put on a test system, and run quickly. So I try to keep the dependencies small and, like I said, keep it all self-contained. These are not going to be comprehensive tests, and that's okay. I just want them to catch all the stupid little silly mistakes and give us some level of assurance that the kernel patches we're putting in are working. Once we get all that done, I'd like to expand this out. SELinux and audit aren't the only security subsystems in Fedora; we've got things like IMA and friends. Once things are in good shape for SELinux and audit, I'd like to start taking those other subsystems, putting them into the test kernels as well, and running tests against those, hopefully to make sure that when we have new releases of Fedora and new kernels, those are in as good shape as we can make them.
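That cron-driven trigger could be sketched roughly like this — a hypothetical shape for it, not my actual scripts; the state-file location, repository URL handling, and the build steps are all placeholders:

```shell
needs_build() {
    # $1: state file holding the last-built commit id
    # $2: current remote head commit id
    # Returns 0 (true) when the remote head differs from what we last
    # built, i.e. there is something new to test.
    local last=""
    [ -f "$1" ] && last=$(cat "$1")
    [ "$last" != "$2" ]
}

check_and_build() {
    # $1: git repository URL, $2: branch name to watch.
    local state="$HOME/.kernel-ci/last-build"
    local head
    # Ask the remote for the current head of the watched branch.
    head=$(git ls-remote "$1" "refs/heads/$2" | cut -f1)
    if needs_build "$state" "$head"; then
        echo "new commits on $2, kicking off a build"
        # ... patch the Rawhide SRPM, submit to Copr, reboot the VM
        # into the new kernel, run the test suite, mail the results ...
        mkdir -p "$(dirname "$state")"
        echo "$head" > "$state"
    fi
}

# A crontab entry firing once an hour might look like:
# 0 * * * * $HOME/bin/check_and_build.sh
```

The nice property of keeping the state as a single commit id in a file is that a crashed test VM can't wedge the trigger: the next hourly run just sees the same head and tries again.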
So, project links. Here are all the links I promised you earlier. The first link is the SELinux test suite, the second is the audit test suite, and the third link there is the Copr kernels. Those are publicly available, and you can go grab them; if you go to that link, there are even instructions on how to set it up on your system. It's pretty simple: a couple of DNF commands and you're good to go. If you have any problems with them, you can send me an email, pmoore@redhat.com or paul@paul-moore.com; you can always get in touch with me there. If you're looking for the scripts, or you're thinking about doing this yourself, building your own kernels, or you want to try and do something similar for your own package — like I said, there are very few things in there that are actually specific to the kernel — that last repository is where you can grab those scripts. They're bash. I'm not fancy; I don't really know Python all that well, I don't really know Perl all that well, I don't know many scripting languages, but bash is easy. The nice part about bash is I can pretty much guarantee a system has bash; even if you've got the smallest system, you can run a shell script. That's pretty much it. I can go into a bit more detail about how you actually go and install and set this up, but I think we have about 14 minutes left, so I just wanted to ask if anybody had any questions before I go into that. [Audience question about expanding this to other subsystems.] I think the big thing is, I'm definitely open to adding more subsystems; you just have to have a test suite I can run against it that, like I said, doesn't require a lot of test infrastructure and can easily be automated. So I'm definitely open to other subsystems, and even if it's not patches but just enabling kernel options, we can do that, as long as we can automate it somehow and there's a test suite for it. But to get on with it: I had hoped to have the VM test part working
before I came, but an audit release came out just a couple of weeks ago and that sucked up all my time. It really shouldn't have taken as long as it did, but, you know, sometimes when you're writing code you're going, why on earth is this not working? You end up chasing silly little bugs; I burned days on something I should have been able to fix in a couple of hours. But anyway.

As I mentioned, there are those scripts which automate the process. If you go to the GitHub repository, there's a README file in there which has all this information in more detail than I put in the slides. I didn't want to risk trying to bring up the page on the Wi-Fi here, it's been a little shaky, but if you're one of the lucky ones the Wi-Fi is working for (I had the URL on a previous slide), you can go there and read the README in much better detail.

So the first thing: hopefully you've got a Fedora account. If you don't, go get a Fedora account. Once you have that, get the copr command-line tool. If you're not going to use COPR, you don't have to; the scripts will work without it, you'll just have to do the builds either locally or with Koji. But if you use COPR, it'll host the repository for you, which is really nice. Once you've got all that set up and you've got copr-cli installed, go clone the script repository. There are two scripts, a README, and a LICENSE, so we're talking four files. They're all bash scripts, so there's nothing to build, and you'll see the dependencies in the file; obviously they need the normal sed, awk, and so on, but nothing elaborate, no big dependencies.

Once you've got that, eventually you're going to want to start using it. The easiest way to do that (and maybe as we get farther down the road I'll package this up as an RPM, but for right now this is the way to do it) is to create a
new project directory; I call it the COPR project directory here. It doesn't really matter where it lives. Soft-link in pcopr_patch and pcopr_srpm_kernel (we'll talk about those in just a minute). Soft-link those in so that you can do a git pull and all of your project directories get updated at once. But don't soft-link in the configuration file, pcopr.conf; copy that over, because it's going to be specific to the individual project you're working on. We don't have time to go over all the configuration options in there, but it's just a file that gets sourced by the bash scripts, variable equals value and so on, and there's documentation in the configuration file that should be pretty straightforward, plus examples. If you have any questions about it, send me an email or catch me in the hallway; and obviously, if you want to help out with any of this, more documentation and so on, that's all very welcome.

Once you've got those soft-linked, copied the config over, edited the configuration file, and everything's good, it's time to actually get the upstream repository. I assume it's git, since everything is pretty much git these days; the scripts don't support anything but git at this particular moment. That's not an inherent limitation, and it would be very easy to add additional capabilities to the scripts; it's just that all my projects are git, so I haven't had a need. If you have something that's in Subversion or Mercurial or something else and you're interested in using these, patches are welcome.

So go ahead and clone your project, add any remote repositories that you might want to pull from and create patches from, and get that set up. Then clone the Fedora package for the source RPM and get that set up. And then you run the scripts. It's pretty simple: pcopr_patch takes the upstream repository and, based on the configuration file, where you tell it which branches the
patches live in, it'll take everything in each branch and create one patch for that branch. You can have multiple remote repositories and multiple branches; it will create a patch for each one and dump them into the directory for you. Then pcopr_srpm_kernel will take those patches and apply them on top of a Fedora Rawhide kernel. It doesn't have to be Rawhide, I should say; it puts them on a Fedora kernel. And if you use the build option, it'll automatically go ahead and submit the build to COPR. If it's a kernel, come back in three hours and you'll have a kernel to test. There are plenty more command-line options for all of these; my luck with live demos is pretty poor, so I'm not going to try one here, but you can check out the README and see the other command-line options for doing different things.

pcopr_patch will pretty much work with anything; there's nothing kernel-specific in it whatsoever, so you can use it for userspace projects, a system library, any library, it doesn't really matter. The SRPM script is 90 percent generic. The only problem is, as you know if you've ever tried to apply patches to source RPMs, and especially if you've ever looked at a kernel source RPM: it's huge, it's a mess, so knowing where to put the patches in there can be a little tricky, and the spec file is very large. So there is some logic in there that is specific to the kernel source RPM; that's why it's actually called pcopr_srpm_kernel. It would be relatively easy to make a generic version of this. It might not work for the kernel, because there are a lot of special cases in there, but you could always custom-tailor it to whatever package it happens to be and just spin off a pcopr_srpm_mypackage, perhaps. As I play with this more and get a bit more time, maybe I'll make a generic one alongside the kernel one; I just haven't gotten to it yet. If you know bash and you're familiar with spec files, it should be relatively trivial to cover that.

And that's pretty much it. Any
questions?

[Audience question, paraphrased: the Fedora kernel team builds kernels for several architectures, but we're doing our testing right now just on Intel; do you have a solution for other architectures, or can we pull builds from you?]

Yeah, so we generate an SRPM, so you can build it for anything. Originally my primary architecture was x86_64, like most of us in the room here, right? That's what I care about the most; that's what I use. There were some cases where for a while I wanted x86 specifically, a 32-bit kernel, so I was building for that too, but Rawhide wasn't super stable on 32-bit x86, and there were some problems with COPR which, after having talked to the COPR people, ought to be resolved now. But basically, if COPR has support for those architectures, we can do it. And even if COPR doesn't support it, like I said, you can still build with Koji, or you can build straight from the source RPM with rpmbuild --rebuild. I just like COPR because it builds the kernel for me and then hosts the repository; it's already there.

[Follow-up, paraphrased: so we could pull the kernels from your COPR repository?] Yeah; I just heard in an earlier presentation that you're already pulling from my COPR repositories. If a build isn't reliable, though (I had to turn 32-bit x86 off just because all my builds were failing and it was driving me crazy), we can deal with it; COPR makes it easy for us, and like I said, we've also got Koji, you can always build it there.

All right, well, thank you very much, everybody. I hope this was useful. I hope it didn't put you to sleep; or if it did, I hope I spoke quietly enough that I didn't wake you up. So there we go. Thanks a lot.
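As a closing recap, the project-directory setup described in the talk can be sketched as a small runnable script. The file names (pcopr_patch, pcopr_srpm_kernel, pcopr.conf) are as I heard them in the talk and may differ from the actual repository; the key point is: symlink the scripts, copy the config.

```shell
#!/bin/bash
# Sketch of the project-directory setup from the talk. The file names are
# stand-ins for the real script repository layout.
set -e

# Stand-in for a git checkout of the helper-script repository.
scripts=$(mktemp -d)
touch "$scripts/pcopr_patch" "$scripts/pcopr_srpm_kernel"
echo '# per-project settings, sourced by the scripts' > "$scripts/pcopr.conf"

# Create a project directory; the location doesn't matter.
proj=$(mktemp -d)
cd "$proj"

# Symlink the scripts: one `git pull` in the script checkout then
# updates every project directory at once.
ln -s "$scripts/pcopr_patch" "$scripts/pcopr_srpm_kernel" .

# But COPY the configuration file: it is specific to each project.
cp "$scripts/pcopr.conf" .

ls
```

From there, the workflow in the talk is: clone the upstream git tree and the Fedora package, edit pcopr.conf, then run pcopr_patch followed by pcopr_srpm_kernel.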