Hello, good morning. Welcome to our third day of DevConf and the first live session in this room. I think you already know all the important stuff, so let's just begin. Our speaker is an engineer from Red Hat.

Hi, all. I'm Dennis Gilmore. If you don't know who I am, I work on the Fedora release engineering team and look after making sure we can actually ship Fedora, and do some cool things. Let's jump into it.

First, a quick step back over the last ten years of release engineering in Fedora. We merged Core and Extras, and that brought about a big amount of change. We had to write Pungi, because the tooling that was used to make Core was not allowed to be open sourced, so we had to start from scratch on how we make the distribution and the tools used to pull it together. It was a big change. Then along came live CDs, and we said, oh, this is really cool, let's add some live CDs — and we went from one live CD to about 50,000 or something right now, a huge amount. Then we added cloud images when EC2 became a thing; Fedora 14 or 15 — I think 15 — was the first release where release engineering produced the cloud images for EC2. And then in Fedora 21 we got the editions, where we had to figure out how to go from making an install DVD with a repo of packages — at the time, I think, about six or seven live CDs and the cloud image — to making three different editions with three install trees, and we had to revamp what we did.

We added OSTree and Atomic in the last couple of years, and then we started doing two-week releases for Atomic in Fedora 23. So now every two weeks we push out a new release of Fedora Atomic, which includes an install DVD; we update the cloud base image, the Atomic image, and the Vagrant box images, though we really only push out the Atomic side of it. We also create an updated Docker base image every two weeks as part of that process, which we don't push out to the greater world.

It sounds like a lot of stuff, but it's really not: the way we produce Fedora today, up through 23, has not really changed a whole lot since the Core and Extras merge. We have tweaked things, we've grown organically, we've added stuff, but it's the same basic process.
So in Fedora 24 we have been working on a whole bunch of changes, radically redoing the way that we make Fedora. It's the biggest amount of change since the Core and Extras merge: a whole bunch of organizational changes, switching from livecd-creator to livemedia-creator, and so on. We're pulling people in from different teams as we go. We're like, hey, we really want to do this thing, we don't have the people on the team right now, let's go reach out — to internal DevOps, to Fedora Engineering, Paul Frields' team — because Adam Miller reports through Fedora Engineering. He's not a rel-eng person; he's just working full time on doing rel-eng stuff to meet the needs of the greater Fedora organization. So those are some of the organizational changes we've made.

We've got a big change in refactoring Pungi. The old Pungi we would run in serial: feed in the kickstarts, get out all the install media. It was clunky and unwieldy, but it's what it was. As part of the changes we're making, we're building the install DVDs and the boot media as tasks within Koji — we use the runroot plugin in Koji to enable us to do that — so we can distribute the work and do it in parallel, which greatly speeds up the compose time. The Fedora 23 GA compose took, I think, nine hours from the time I kicked it off to the time it was done; we're doing a compose right now in about three hours. It varies a little bit, between two and three hours, so it's much, much faster.

We also run a single process. Instead of having to go off and spawn a whole bunch of different processes to do things, we run one command and it triggers everything, keeps track of everything, and actually makes sure that what we're supposed to get is what we get. We also emit fedmsg messages at all the steps, so if you're at home you can follow along and see, hey, the compose started and it's at this stage.

Part of the black box of rel-eng was that all the log files were on the compose box and were never exposed to anybody but rel-eng; you'd have to dig into the machine. It was terrible. So now we have central logging, we have a whole bunch of metadata about the compose — we produce a whole bunch of JSON files that say these are the artifacts in it — and all the logging is available in a central place. If you care about why something went wonky, you can go look at the log files and figure it out yourself, because it's there, it's open, and it's available.
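If you want to follow along from home, here is a minimal sketch of tailing the bus with the fedmsg library and watching for compose messages. The topic filter is an assumption for illustration, not something stated in the talk.

```python
# A minimal sketch, assuming compose status changes are published on a topic
# containing "pungi.compose"; the exact topic name is an assumption.
import fedmsg

# tail_messages() yields (name, endpoint, topic, msg) tuples from the bus.
for name, endpoint, topic, msg in fedmsg.tail_messages():
    if "pungi.compose" in topic:
        print(topic, msg.get("msg", {}).get("status"))
```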
Pungi is also taking over the management of the other images: we had a bunch of shell scripts that would fire off live CDs, and Pungi now does all of that as part of the config file. The work on all this was mostly done — I've done some stuff, but it's mostly been done by Daniel Mach — because what we ended up doing with Pungi was taking the compose tool that is used internally to make RHEL, pulling out all the bits we wanted, grafting it into what we had, and coming up with a new Pungi. A big part of the goal is unifying the way we make RHEL and the way we make Fedora, to be more flexible and more agile, and to let internal release engineers be pulled in easily when they need to work on a feature or we need a feature done. There's more knowledge, and we're sharing more resources.

livemedia-creator is a pretty big change as far as live CD creation goes. I don't know if anyone knows how livecd-creator worked — it's probably me and Brian Lane who are the only ones that actually know — but it basically does a yum install into a root and then does a whole bunch of stuff to it, and at the end you magically get a live CD. livemedia-creator is a tool that has actually existed for a long time; it ships in lorax, which is part of the Anaconda stack of packages, and it calls into Anaconda, and Anaconda does the install to create the live media. So we get the benefit that we've instantly switched live CD creation from yum to DNF, and it's gone from Python 2 to Python 3, which is a nice thing to have. And it has somebody who is responsible for it and wants to own maintaining livemedia-creator and its tools — we've been trying for probably four years to get rid of livecd-creator, and we have not managed to do so until now.

We integrated it all into Koji. I have the last patches I need to deploy in production, and then there are about five lines of kickstart changes to apply, and then we'll be switching to livemedia-creator for all the image builds in Fedora. Mike McLean did all the Koji development work, Brian Lane enhanced livemedia-creator to suit our needs, and I did all the testing. If you have questions at any time, feel free to ask.

[In response to a question:] Yep, it's running in a chroot. The big problem historically with livemedia-creator is that running Anaconda in a chroot didn't work — it would blow up in weird and wonderful ways — at least for the live ISO. Initially the plan was to make disk images and enable PXE-to-live for Atomic, but the features for that don't actually run in a chroot, so we're working with the Anaconda guys on ways to enable livemedia-creator to make all the things that we need to make.

The next thing I'm going to talk over real quick is the Product Definition Center. It's a new piece that Ralph Bean has worked on. It's basically a web front end and API for tracking everything that's in the compose — all the RPMs, the artifacts, and so on. That gives us the knowledge, the information we can work with, to look at automating things like the compose and the configs that define what goes into the compose. We can say, something in the Docker base image has changed, let's rebuild the Docker base image but not everything else; or something in Workstation changed, just rebuild that. It gives us the foundation to be able to implement rings in Fedora, because we can now have programmatically accessible information to figure out what's in the base, what's in the Workstation ring, what's in the Server ring, what's in the different pieces.
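As a feel for what that programmatic access could look like, here is a minimal sketch that queries a PDC-style REST API with Python requests. The endpoint, query parameter, and field names are assumptions for illustration, not taken from the talk.

```python
# A minimal sketch, assuming PDC exposes its data under /rest_api/v1/ with a
# composes endpoint; the URL, filter, and field names are assumptions.
import requests

resp = requests.get(
    "https://pdc.fedoraproject.org/rest_api/v1/composes/",
    params={"release": "fedora-24"},  # hypothetical filter
    timeout=30,
)
resp.raise_for_status()

for compose in resp.json().get("results", []):
    print(compose.get("compose_id"))
```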
When we first proposed this, Josh Boyer on the rel-eng list asked why it needed a change proposal: it doesn't need FESCo to approve it, it's self-contained, it's a rel-eng thing — should FESCo have the responsibility to approve it? And the answer is really no, but we've filed changes for all the big things we're changing in release engineering because we want to create visibility into what we're doing and enable people to come along and say, hey, that's really cool, with that information I can now do this really cool thing that I want to do. Or, I want to mine the data and see how much a package changes over the cycle of a release, over a six-month period, or in Rawhide, or whatever the case may be. It gives us the ability to do that, and release engineering as a whole is trying to be much more accessible and transparent in what we're doing.

The next feature I'm going to go over real quick is Koji signed repos. Jay Greguske from internal release engineering has been working on this. The simple idea: today, whenever we make repos and we want to make sure the RPMs are signed, we use a tool called mash. You can run it from anywhere, and it's really useful, but there's a lot of value in being able to say, Koji, give me a repo from this tag, with all of these packages, signed by this particular key — and not have to run external tools on external machines. So that's what we're going to do, and it unifies how we make the repos. Bodhi is going to be adjusted so that instead of calling mash in a chroot on its back end, it just says, Koji, go make me a repo for the f24-updates tag. We'll use the same feature in Pungi over time as we redefine Pungi. It gives more transparency into how things work: the logs for mash from Bodhi are not publicly available today — there's a whole bunch of logging that we get in rel-eng that no one ever sees, and probably no one wants to see it, but that one time you do want to see it, it's going to be there. And it allows us to scale the work out across multiple machines.

Layered image builds are the next change; that's something Adam Miller has been working on. It will enable us to make layered Docker images, integrate that into Koji, and ensure that we know what goes into them. It contains a whole bunch of stuff: a command-line interface, a Koji extension that actually has its own deployment of OpenShift v3 that it uses to build the images, and the output is fed back into Koji. As part of this we're also looking at putting in our own Docker registry, because the upstream Docker registry is somewhat terrible — they don't provide an API or any really great way to interface with it. So we'll probably run the Crane registry; whether that happens in 24 isn't solidly nailed down yet. The work for this has been done by a whole bunch of people, not just Adam — Colin Walters, Tomas Tomecek, and others; I'm not sure if Tomas is here — and they got the resources to make sure we could get everything in place. It's probably one of the bigger changes in how we do stuff, and it's going to enable us to look at many more deliverable artifacts, which means that we need to be able to adapt and deliver more and more and more. So we have a priority pipeline.
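As a rough sketch of the 'Koji, give me a signed repo from this tag' idea: the subcommand name, arguments, and key ID below are assumptions for illustration, and the real interface may differ.

```python
# A minimal sketch: ask Koji to generate a repo of signed packages for a tag,
# rather than running mash externally. The "dist-repo" subcommand and the key
# ID are assumptions, not confirmed details from the talk.
import subprocess

subprocess.run(
    ["koji", "dist-repo", "f24-updates", "81b46521"],
    check=True,
)
```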
This is stuff that we're looking at working on in Fedora release engineering. We're trying to document all of the rel-eng processes so that, one, internal release engineers can come and help us as need be, and two, anyone in the community who wants to help can get involved easily. Traditionally we've been pretty terrible at taking drive-by contributions; we want to get to where we can actually take those drive-by contributions, and we're working towards enabling people to come along and say, here's this thing, and we can accept it because it's really clear what you need to do. It's an ongoing project we're working towards. We're trying to develop standardized processes for pretty much everything, to make things clear and easier and to enable people internally as well. In my ideal world, when Red Hat wants to hire a new release engineer, they come and say, hey Fedora, who in the community has been doing really cool stuff and helping you do release engineering? And we go, Joe's been helping us do some really good stuff, he's proactive, and he understands how stuff works. Then he gets hired into Red Hat and he's immediately useful, because he doesn't have a three- or six-month period where he just has to learn how stuff works — he already knows, and Red Hat internally uses the same processes as we do. It's an area where I think Fedora is going to take a lead and define how Red Hat as a company does stuff.

We need to work out how we can drop i686 for different things: either stop making it entirely, or build more flexibility into how we compose Fedora so that we can promote aarch64 for Server to primary and demote everything for i686 to secondary, but keep multilib enabled — because people like to run Skype and Steam and other things that come as 32-bit, and Wine, and most Windows applications are still 32-bit. As much as I really, really want to get rid of multilib entirely — I keep chipping away at it, saying let's drop s390, let's get rid of the 31-bit stuff, we don't want that — that only leaves us with i386 on x86 as the only arch that has multilib. Multilib is a pain in the butt; I would really like to kick it out the door. So we need to work out the flexibility and the tooling to enable us to send pieces of the compose to different places, and ultimately redefine what is a secondary arch, what is a primary arch, what is a primary deliverable, and what's just a secondary deliverable — one that we ship if we make it and, if we don't, whatever the case may be.

We're also going to be working heavily on release automation for Docker and Atomic, because in Fedora 25 we're expecting probably three different Atomic, OSTree-based things: the IoT stuff, and the Workstation working group came to me last week and said, hey, we want to make an Atomic, OSTree version of Workstation, because they see a use case for that. We make the Docker base images today, but we don't make any layered images; we'll use Cockpit as a test case for all of the layered image work, but we're probably going to end up with maybe 100 or 200 or more layered images, plus the QA we need to make sure that what we ship is right. And we need to work out things like: do we put that layered image in the same Bodhi update as the RPMs, or do we do separate updates for layered images? There's a lot of stuff that needs to be worked out as far as the release and creation of Docker and Atomic goes.
We have a few things in our priority pipeline related to switching to Python 3. We need to make everything use createrepo_c, because it has a Python 3 API and createrepo never will. We need to move things from yum to DNF, because yum is never going to have Python 3 APIs. We want to move DVD creation into Koji as a task, so you would say to Koji, make this repo with this stuff and make sure it's signed; Koji, go make this DVD from this repo — abstracting the release work out of the tooling and putting it into Koji. In the end Pungi is likely to still do the stuff it does today, but also just be an orchestration layer that says, Koji, go do this stuff for us. And we need to figure out a great way to make the Docker images for all the architectures, including Power, s390, and aarch64.

Continuing on the priority pipeline — I actually have three slides of this, I'm terribly sorry. Two years ago I got asked, how can I help, what things can you give someone to do to help you? I could think of lots of stuff, but I didn't have the time to sit down and write it, so I think it's awesome that we now have a big list of the kind of stuff we want to get done. A big thing we want to do is port everything to Python 3. The distribution as a whole is moving to Python 3, and we really haven't done any work at all on that: Pungi; mash, which we're probably just going to take out back and shoot; all our rel-eng scripts; fedpkg; rpkg. We also want to make sure that as we do that, we write test suites so everything works. A big thing today that we need to work on is test suites and automation everywhere, because we're going to be doing more and more, and doing it twice as fast as we did yesterday, and we can't do that the way we've been doing things.

We need to work out how to deliver zero-day fixes quickly. When Shellshock comes along, we need to not be saying, oh yeah, we'll get that out in 24 to 48 hours, because that's how long it takes to go through the Bodhi process and get the update out to the world. It's a problem we need to solve: we need to ensure that we can rapidly get critical security fixes — the ones that can compromise machines — onto users' boxes quickly.

As part of redefining secondary arches, we want to bring all the Koji hubs together. The rel-eng people who do the ops stuff, particularly on the secondary arch side, spend a lot of time babysitting shaky processes that mostly work — they get it done, they mirror the builds properly — and we don't have the same kind of issues as Debian, where one arch will link against different sonames because something failed to build and they just carry on; we try to ensure a level of consistency across all the architectures.

We also want to pull in a lot of the processes RHEL uses, like license checking, ABI checking, and rpmdiff, and automate them. So when you do a commit and you push a new tarball to the lookaside cache, I want to check the license of everything and make sure the licenses haven't changed, that everything is licensed okay, that these two files in this tarball are actually license-compatible with each other so you can link them together — and automate those checks, because people are terrible at doing stuff like that. There's a whole bunch of them, and it's a big part of being flexible and faster.
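On the createrepo_c point, the draw is that it ships Python bindings that work on Python 3. Here is a minimal sketch of reading repo metadata with them; the repo path is made up, and the exact calls should be treated as an assumption to check against the library's documentation.

```python
# A minimal sketch, assuming the createrepo_c Python bindings; the repo path
# is illustrative only.
import createrepo_c as cr

md = cr.Metadata()
md.locate_and_load_xml("/srv/repos/f24-updates")  # hypothetical repo path

for key in md.keys():
    pkg = md.get(key)
    print(pkg.name, pkg.version, pkg.release)
```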
We also want to make sure that whenever Red Hat decides to go do the next version of RHEL, they can take what's in Fedora — the tooling and the processes — and it's just a lift and shift: they lift it wholesale, take it internally, and then make the changes that define RHEL, add their extra bug fixes, extra polish, integration, and so on. Make that process smooth, so that they start on Monday and by Friday they've actually got a compose, rather than starting on Monday and having a RHEL compose six months from now. Have Fedora lead what is going to be RHEL.

We've got Koji 2.0 coming. And I'd really like to see, for the places where we interact — Koji with Fedora processes and Bodhi, and Koji with internal processes and Errata Tool (Errata Tool, for those that don't know, is a tool internal to Red Hat that pushes all the errata) — the same tooling talking on both sides. Rather than writing two different functions with two different APIs, we put a common API into Bodhi and Errata Tool for the shared functions, which makes it simpler to write tools that interact with the different systems, so we can share processes more. (I've got two minutes.)

At some point we actually want to solve the embargo problem. [On a question about Koji 2.0:] The headline summary is that they're still trying to figure out exactly what it's going to be. The initial thought was that they would just rewrite it from scratch, and then they said, no, that's actually a bad idea; let's work out how we can change things iteratively and keep Koji functional while we go from the 1.0 model to the 2.0 model. So they're still planning exactly how to do that, because it's not going to be the ground-up rewrite they initially thought would be a great idea — that would be a bad idea.

For a long time people have wanted to be able to deal with embargoed issues. OpenJDK is a big example: when there's a security bug, we know about it beforehand, and internally we build everything and run it through the TCK and so on, which takes about a week before you can actually ship something. We can't do that in Fedora; we've got to wait until the CVE is public before we can build it and then go through the process. In case anyone doesn't know, for every OpenJDK build the internal Java team runs it through the TCK and the Java certification to make sure it complies — so the OpenJDK in Fedora is a compliant Java — and it takes a week. So we can't deal well with those security bugs, and there have been a few other times when dealing with embargoed issues would have been nice; because everything is open, it's somewhat difficult.

We'd like to look at ways to leverage internal support people to help us deal with the basic day-to-day ticket sorting and pointing people in the right direction; we need to figure out how to do that and what it actually means. We want to clean up all of our scripts. One of the issues with fedmsg is that it doesn't guarantee delivery; I complained to Ralph Bean enough that he wrote a thing he calls gilmsg, which ensures that a message gets delivered. So we need to evaluate which parts of the process can rely on fedmsg and where we need to use gilmsg to ensure the messages get through. We want to look at doing some sort of rebuild automation, so that if a soname bumps we have a process set up that automatically deals with it, or sets up side tags, things like that.
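On the fedmsg versus gilmsg point: this is not gilmsg's real API, just a hedged sketch of the idea — publish over fedmsg, then wait for acknowledgements from the consumers you care about. The ack topic, field names, and helper are hypothetical.

```python
# A hypothetical sketch of "guaranteed" delivery on top of fedmsg: publish,
# then wait for acknowledgement messages. None of these names are gilmsg's
# real API; they are made up for illustration.
import fedmsg

def publish_and_wait(topic, msg, expected_consumers):
    fedmsg.publish(topic=topic, msg=msg)  # plain fedmsg is fire-and-forget
    pending = set(expected_consumers)
    # tail_messages() blocks between messages; a real implementation would
    # also need a timeout so a silent consumer turns into an error.
    for _name, _endpoint, t, m in fedmsg.tail_messages():
        if t.endswith(topic + ".ack"):                 # hypothetical ack topic
            pending.discard(m["msg"].get("consumer"))  # hypothetical field
        if not pending:
            return

# Hypothetical usage: require Bodhi and the signing service to confirm receipt.
publish_and_wait("compose.complete",
                 {"compose_id": "Fedora-24-20160601.0"},
                 {"bodhi", "autosign"})
```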
We need to make sure that signing works and have Rawhide signed — we'll need to work on tooling, on a service, to be able to sign Rawhide and sign the Atomic repos, and there's a whole bunch of other things besides. We need to work out a way to do that without a person having to type in a password, because that's a big bottleneck. We want to actually implement Fedora rings and enable us to do different things with the layers, and perhaps look at pulling non-RPM content into layered images. [On a question:] Yes — that's what's in the wiki; I copied it from the wiki page. I nearly deleted it, but that's what's in the wiki, and that's the priority.

We want to finally satisfy Richard Jones's request to have signed metadata that virt-builder can use, implementing the Simple Streams stuff so that tools can discover the images — I'm not sure of the exact details. [On a question:] A big part of the issue with it today is the signing: it has to be clear-signed, like we sign the checksums — the text and all the GPG signature stuff live inside that one text file, just like when we sign checksums. We have no way to automate that, and apparently shipping it unsigned is not an option. That's the big issue, more than actually producing the metadata it needs; it's the signing.

Also, there's already work started on a platform library that takes all the common release engineering processes and puts them behind, basically, an API. There are only about nine functions there right now, but we're adding to it as we go, and all of them have tests; all the functions are mocked, and if you create a pull request for a change, the tests must pass — basically ordinary due diligence. We want to build that for the future, because a lot of stuff right now needs permissions to be able to test properly, and the goal is to allow people who have the skill set to do those kinds of things. We started changing slowly, and we're now getting to a point where we're rapidly changing how we do stuff; we'll get there. We now have a requirement that if you're submitting something new, you have to put documentation in and document that thing, so that our docs cover everything.

[On a question:] Yeah, it's been open for two years. Yes, exactly — that's why it's on our list, it's why it's here. Part of it is automatic signing, and part of it is that we need to work out a way to actually do it, because, at least if I understand it correctly, it needs to be done at the distribution level, not at the per-release level. So it involves changing a little bit of how it's integrated in the process, but it also needs to be somewhat separate, because it shouldn't really happen at compose time — it needs to happen when we're pushing the content live. We could probably look at PDC as the integration point for how to do that.
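To illustrate what clear-signing means here — the text and the signature end up in the same file that a consumer like virt-builder can verify — here is a minimal sketch with GnuPG. The key ID and file name are made up for illustration; the hard part described above is doing this automatically without someone typing a passphrase.

```python
# A minimal sketch of clear-signing a checksum file with GnuPG: the output
# contains the original text plus the signature in one document.
# The signing key and file name are illustrative, not Fedora's real setup.
import subprocess

subprocess.run(
    ["gpg2", "--local-user", "0x81B46521", "--clearsign", "CHECKSUM"],
    check=True,
)
# Result: CHECKSUM.asc, with the checksum text wrapped between
# "-----BEGIN PGP SIGNED MESSAGE-----" and the signature block.
```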
I've only got a couple of minutes left, so this is really the last slide, and then there's just questions. As a whole, we want to be more flexible and proactive; we don't want to be reactive anymore. We need to be able to move with the changing landscape as new content types and new things come along. I think I said a couple of years ago that Docker is not the last shiny thing — something else is going to come along. Atomic came along, and there's going to be something else after that as well, and we need to be able to flexibly deliver Fedora, rapidly deliver Fedora, and ensure that we don't compromise our foundations. That just means everything we do needs to be auditable, accountable, and reproducible — done in a way so that we can be confident that what we're shipping is a good thing. To do most of this we need to work really closely with the QA team, to make sure there's continuous integration and automated testing of everything; the QA guys are putting in automated QA to test install media, and we're getting there — it's just going to be lots and lots of work.

So, any questions? Or come talk to me afterwards, and feel free to yell at me, or not. I have three scarves — let's see if we can get three really quick, good questions and I can give them out, or I'm going to keep them.