I'm going to stand in front of the microphone. Thank you for coming to hear us talk about Bodhi today. I'm Randy Barlow. This is Pierre-Yves Chibon — Pingou; he doesn't like to use his real name. I'm bowlofeggs, if you know me from IRC. And we're going to talk a little bit about what Bodhi is — although I see a lot of familiar people who probably know what Bodhi is, but just in case. I'm going to tell you a bit about the recent history, just sort of what's been going on lately in Bodhi. And I'm going to tell you about our immediate future plans leading up to Fedora 27. And then I'm going to hand over to Pingou, who will tell us about a really cool idea he has for automating the packaging side of Bodhi to make all of your lives easier. So if you're not familiar with Bodhi, it is a tool that really has two primary users. The number one user of Bodhi would be packagers, and hopefully most people in this room are packagers. So if you're a packager and you want to push out a new update for your package into a stable release of Fedora or a branched release of Fedora, you go to Koji and you make your build. If everything goes well, you can submit your update to Bodhi. And Bodhi makes it easy for your users to give you feedback before the update goes out into the wild. So it gives people the ability to try your package and report back whether it worked or it didn't work. They can leave a comment for you, and they can upvote and downvote. The other major user of Bodhi, however, is release engineering. And I think that's something that a lot of people may not realize: there's this whole back-end side to Bodhi that's meant for the release engineers to use. This tool — it's called the masher — creates the repositories that your systems are all subscribed to and pull the updates down from. And that's a major behind-the-scenes piece of Bodhi where we've been doing a lot of work lately.
So I'm going to talk a little bit about that as well as we go through. So we're going to talk a little bit about the recent history of Bodhi to begin. So, I am new around here. I'm not new to Red Hat — I've been at Red Hat for about four and a half years. I worked on the Satellite product for a long time, and specifically I focused on Pulp, which is the component of Satellite that distributes packages to Satellite users. In June of last summer, I joined the Fedora team, which was an amazing opportunity, because I've always been quite passionate about open source software and upstreams and being involved in Fedora. And actually, I had been a Fedora contributor for a while, so working full time on Fedora was sort of a dream come true for me. So I joined in the summer, and I started making a few patches here and there in various pieces of our infrastructure. One of the things I made a few patches for was Bodhi. And many of you may know Luke Macken, who created Bodhi. Unfortunately, he moved on, and so when he left and Bodhi was vacated, let's say, I was asked to — because I'd made some commits, they were like: hey, Randy, you made some commits on Bodhi, why don't you work on Bodhi? And I was like, OK, that's cool. You touched it, you own it now — yeah, I touched it, I own it now, as Matthew said. So I own it now. And so I started looking around and trying to focus on some things. I started talking with the Fedora release engineering team, because I wanted to find out: what are some challenges? What are you doing? What are the problems? And I heard a lot of feedback from them, and one of the biggest problems was that the masher has a lot of stability issues. So I immediately started trying to analyze this and figure out what's going on there. And that was a very big piece of code that is difficult to test — so an interesting piece to work on. And so I made some commits.
And then I started looking around and I realized that Bodhi had not been released for a pretty long time. I think the most recent release had been in February, almost a year before. And so I said, we should release Bodhi — it's had a lot of commits since then. So one day in October, I think it was, I made Bodhi 2.2.0, which was a snapshot of the development branch, and I deployed it to production, and everything broke. Yeah. That is true as well: we had migrated — the Bodhi back-end masher had been on RHEL 7, and we migrated it to Fedora 24 at that time. That's right — to get rich and weak dependencies in the metadata, because — which piece does that? Newer RPM? Yeah. So, for the recording, we wanted to use Fedora 24 for some newer dependencies so that we could get the rich and weak dependencies into our repo metadata. But the stability issues that resulted that day were difficult, and I think I worked 16-hour days for four or five days in a row, because there were so many things that happened. And I think there are some lessons learned, because after I finally got to sleep and things were mostly working again, I got to sit down and analyze what happened. Why did 2.2.0 blow up production so much? One of the lessons learned is that it seems there had been a lot of hot fixes placed into production that had not been placed into our Git repository. So by releasing 2.2.0, I erased a lot of fixes, because they weren't in our repository — and I had not been aware that this practice had been happening. So one lesson learned is: put your patches in Git. We all like Git. Another lesson learned, I think, is that going a long time between releases also means that you've got a lot of code that maybe you've tested on your development box, and maybe you have some automated tests for, but it's not been running in production. And this is how development works — but the longer you go, that diff gets larger and larger.
And the chances that something you wrote that passes all your tests doesn't work in production grow. So another lesson learned is that we want to release more often. And in fact, in October, I think I made four releases in three weeks, but those were all bug fixes. So those are some of the issues that I encountered when I initially joined the team and took over the Bodhi project. Why did I say take over? Started to lead the Bodhi project, let's say. Since then, I've really been focusing again on testing and stability. I personally have not been working on new features yet, because of these mashing issues and production crashing issues — I want to make sure that it is solid before we go and start adding new things and create more chaos. So I've been really focusing on automated test coverage a lot. When I joined the project, we were around 73% test coverage, and I have a personal goal: I've been trying to raise it 1% per month. I've got it to 81% in three months, which is a lot more than 1% a month. And I'll put in a side note: test coverage isn't necessarily a measure of test quality, but lack of test coverage is a statement about test quality. If you don't have test coverage at all, you know for sure you're not testing that code. But just having 100% test coverage does not mean your tests are good either. So, just to be clear. What does that mean? Oh — the percent of lines of code that the tests touch. So we run our test suite and we measure — I don't do this; there's a program I use — whether each line of code got executed at some point during the test suite. Now, this doesn't mean I have any assertions that it did something correctly, but it does help with things like catching typos that might explode or cause an exception. So there's some value there. Quality assertions are very important, and I assure you that I've also been making reasonably good assertions as well. But you can't put numbers on that.
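To make that coverage point concrete, here's a toy example (not from Bodhi's actual test suite): both tests below give the function 100% line coverage, but only the second one would catch a wrong result.

```python
def parse_karma(value):
    """Convert a karma string like "+1" or "-2" to an int."""
    return int(value)

def test_touches_the_line():
    # Executes the line, so a coverage tool reports it as covered...
    parse_karma("+1")  # ...but this test passes even if the result were wrong.

def test_asserts_behavior():
    # A quality test also asserts what the code should actually do.
    assert parse_karma("+1") == 1
    assert parse_karma("-2") == -2
```

Running a tool like coverage.py over either test reports the same line coverage; only the assertions make the number mean something.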
So I'm just going to promise you that I did that. I don't have numbers on the mashing reliability, but I will make the bold claim that it is subjectively more reliable than it had been before. Especially with the 2.2.0 release, the masher was very difficult to get working again. And I want to give credit to Patrick Uiterwijk — I know I mispronounced his name, and it's in a recording now. If you don't know him, he is an amazing engineer and admin and debugger; his debugging skills are unbelievable. And he's really, really contributed a whole lot to the masher. So I don't want to claim too much — I've done a little bit there, but he's really done the lion's share of the work in that area. And I was talking to a release engineer recently who told me that the masher does work better today than it did before. It still has a lot of issues. One of the current problems is that rpm-ostree often causes it to crash, and then Dusty comes talking to me and asks what's going on, and I'm like, I don't know. So we still have some stability issues in our daily mash. That's an area of focus, and Dusty's been doing a lot of work, I think, on tracking down those problems, and he's been really helpful to me. We're trying to make the feedback — yeah, for the recording, Dusty said we're making the feedback loops tighter. So he's really been focusing a lot on tracking down these issues and helping improve the quality there, because this is very important for the near future. Well, actually, I'll get to why that's important in a second — just trust me, we'll talk about that again. So now I want to talk about what we're going to be doing in the short term, and by the short term I'm thinking the next few months. I still want to continue focusing on stability, raising that test coverage. I think 100% test coverage is a good goal. I do not plan to do that in two months, but over a period of years, I think that's an achievable goal.
So I will continue writing tests, continue chasing down production problems. And as we do this, I think every Bodhi release will be higher quality than before, and then we'll have more confidence as we introduce new features. And speaking of features: you've probably been to many talks at this conference about all the crazy new things that are happening in the Fedora space, and there are a lot of things that are really going to affect the Fedora infrastructure team. The introduction of containers. We already do OSTrees, but we don't gate them with Bodhi. And Flatpaks. So with these new content types, we want to introduce the ability for users to test them like we do with RPMs. So you can see a new version of an httpd container and try it out, and report back that it worked, or that it ate your data for some reason, or that you can't find your cat since you installed it. So that's something that I think we're going to be focusing on a lot in the next six months. By the way, feel free to ask questions throughout the talk — I meant to say this at the beginning; I'm open to questions at any point. So, a question about the masher: is it running as a separate service? It is, yes. Is it running on the same VM or machine? So, for the recording, the question is: is the masher a separate process, and is it running on the same machine? One thing that's awesome in the Fedora infrastructure is we have fedmsg. It's a message bus, and almost everything that happens in Fedora emits a fedmsg message. This is how you get badges and all kinds of awesome things. I really like badges, by the way — I'm obsessed with badges. In fact, the only reason I submitted a talk here is I want the badge. So the way the masher works is: there's a CLI tool that the release engineer will use, and they will say, I would like to mash Fedora 25 updates-testing.
And it will query the database for all the packages that have been pushed — to stable, basically — either through karma or through manual pushing. Then it will present a list of those packages to the release engineer for them to review. The release engineers like to study this list to look for obvious mistakes. One example might be: if they see a GCC update but not a libtool update, they don't want to release that. So after they hit yes, that emits a fedmsg message with this huge list of packages. There's another process on a back-end server that is called fedmsg-hub, and fedmsg-hub is a plug-in sort of system. So we have a Bodhi plug-in that sits there; it receives this message and then starts the process and creates the mash. It actually runs its own instance of fedmsg that's isolated on the box and doesn't talk on the main fedmsg bus. So it's, well, it's separate. The front end, which is what you interact with, is one box that talks to the DB, and then there's the back end. Yeah. So the comment, for the recording, was that the CLI tool is used on the same machine as the back-end masher. So that's another interesting detail. So, you said you're going to focus on stability — when are you finally going to fix the workflow for updates with auto-karma disabled? There are two old bugs that have been open for months, and before that, there was a bug that was open for even more months. And it's still not working. Basically, it doesn't work at all if you disable auto-karma: you always have to wait for the timeout; you cannot push anything. Yeah. So the question is about auto-karma. There are several issues, actually, when auto-karma is disabled — I've seen these. One of the biggest problems with Bodhi, and we're going to talk about this on our last slide, is manpower. We don't have very many contributors working on Bodhi.
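The flow described above — the CLI emits a message with the package list, and a consumer plug-in on the back end reacts to it — can be sketched roughly like this. The topic and field names here are illustrative guesses, not Bodhi's actual message schema:

```python
# Hypothetical sketch of the masher's message handling, based on the flow
# described in the talk. The real Bodhi consumer runs inside fedmsg-hub,
# and its topic and message fields may be named differently.
MASH_TOPIC = "org.fedoraproject.prod.bodhi.masher.start"

def handle(topic, msg):
    """Return the list of builds to mash if this is a mash request, else None."""
    if topic != MASH_TOPIC:
        return None  # not for us; ignore everything else on the bus
    return msg.get("updates", [])
```

The consumer sees every message on the bus, so the first thing it does is filter by topic; only the mash-request messages carry the package list the release engineer approved.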
There are a few of us, and not very many people that are full-time focused on Bodhi. And release engineering has been the biggest focus, because if release engineering is broken, we can't release any updates at all. So that's the highest priority issue, and that's what I've been putting my time into. Not that this issue is not important — it's very important. In fact, I think all of your issues are labeled high priority in GitHub, and they really are high priority. But we really need help, and we really would appreciate pull requests from anyone. Well, Pingou has also been asking me this question. It's just that we haven't migrated to Pagure yet. We do plan to do that. So the remark, for the video, is that two, three years ago, Pagure was very new. Well, I'd just like to correct that: two, three years ago, Pagure did not exist. Oh, really? Yeah. So I think it's just history. We want to do this, but again, manpower, right? That's a task; someone's got to do it. But yeah, we seriously are doing a lot of work — it's just that there aren't very many of us. So I'm going to move on to talking about release engineering automation, which goes back to what I promised Dusty I would talk about: the OSTree stuff again. So one of the things that I think would be very interesting to do with Bodhi is to try to reduce the amount of time that release engineers have to spend with it. And I think that most of them would be happy to spend less time with Bodhi, especially given how much it used to crash in the past — and to some degree, it still crashes today. So Patrick Uiterwijk, whose name I have now mispronounced twice, has done this amazing thing where he has made a tool called RoboSignatory. And RoboSignatory signs all of your packages when you create a Bodhi update. This is also fedmsg-driven: when you create a Bodhi update now, a message is emitted, and RoboSignatory receives this message and goes and looks at your RPM build from Koji.
It signs the package, and then it can go into the update and be pushed to updates-testing. This is a very recent change — I think this happened in November or December. Before that, it was impossible to automate Bodhi, because the release engineers used to go and sign all the packages, because we didn't have RoboSignatory. So this enables us to consider automating everything about Bodhi, because now we don't need a human to go and sign all these packages. So we started thinking about bodhi-push, which is that script that runs on the back end that you asked about. And the main thing that is blocking us here is the stability issues. The rpm-ostree crashes that Dusty's been working to stabilize are probably the number one issue we hit these days, where the mash will crash and we have to go look at the logs and figure out what happened. So I really appreciate your efforts there, Dusty, because that's probably the number one thing. There are other issues as well, but I think that's something that we'll be focusing on in the coming year, to try to free our release engineers to focus on tasks that they would probably rather be focusing on instead of our tracebacks. So that's an area of automation focus. I'm now going to hand this cool clicker over to Pingou, who's going to tell you about a very awesome idea that he would like to present to you. So thank you, Randy. So one of the things that the Fedora infrastructure team is meant to be is the place where we can research and develop how to make your life as a contributor easier. We get to work on your crazy ideas that you don't have the time to work on, and we also get to work on our crazy ideas to make your life easier. So one of the ideas is: how can we make the life of packagers easier? And what are the tools around? So we should start by looking at what we want to do and how things currently work.
So one of my ideas: basically, I want to be able to do a git push — you know, in dist-git, I want to be able to do a git push and then go for a coffee. My work is done. So basically I want to be able to go from dist-git directly to updates-testing, and I don't do anything with it; it just happens. So the current process is basically: you start in dist-git — you update your spec file, you update your patches, you upload the new sources, everything's done, you git push. Then you need to trigger the build in Koji, and then you wait for the build to be finished, and then you actually go to Bodhi and you create the new update, and then you wait, and then release engineering picks it up and pushes it to updates-testing. It is possible to do the update from the command line, but whether you actually click through bodhi.fedoraproject.org or do fedpkg update, you still need to actually do it. The problem is you have to wait; you can't do everything at once. So Till just pointed out one of the issues in that process: you basically need to wait. You do a git push, you need to trigger the build, you need to wait for the build to finish, you need to create the update and wait for that update to get created. And then, well, if we actually want to do auto-pushing, we want to have some testing in there. We want to make sure that the update that you just pushed is not going to break the entire repository. So we definitely need some testing, and that's something which we already have in place these days: it's Taskotron. And so once we have our Bodhi update created, once we have our tests run, rel-eng basically picks it up, mashes, pushes, and it lands in updates-testing. Well, we can actually do part of this automatically nowadays. So I'm going to introduce you to the FedoBuild idea — that would be, you know, the big new thing, the new hotness, number two. Then we have Taskotron, so that's already in place.
We have unit tests, and then we have the Bodhi part that Randy already spoke about — that can actually already try to automate the push, try to make the rel-eng life easier, and try to speed things up a little bit. So what is FedoBuild? The idea of FedoBuild is not to change your current workflow, but to make your life easier if you want. I mean, if you like your life the way it is, I'm perfectly fine with it, and you just keep on going the current way. So the way we do this is entirely opt-in, on a volunteer basis: we introduce a new file in dist-git which is going to be named changelog.yaml. So it's a YAML file, and it basically needs two pieces of information. One is the NEVR — name, epoch, version, release. So it needs to know basically which package you're building and which update you're creating. We could actually drop the name part of it — it's still a work in progress, so we might change that a little bit. And then it needs a changelog. Now you're going to complain: why does it need a changelog? I already have a changelog in my spec file, I already have a changelog in my Git repository, why do I need yet another changelog? Well, this changelog is the one that you're going to expose to the people that are accessing the update in Bodhi. It's the part where you want to tell your testers: well, I've been updating this package, it has this and this change, so this is what you need to test. This is the important piece — the fact that you updated to 1.0 doesn't matter; what matters is that between 0.9 and 1.0, X, Y, and Z were changed, so this is what we need to test. Can we change that to say something else? Matt and others are proposing that we change the changelog terminology — I'm definitely open to that. End-user release notes, or just release notes. We can bikeshed about the terminology afterwards, that's no problem. We can even vote if needed.
I've added a third field here, which is the update type, because Bodhi can support security updates, enhancement updates, new package updates. The only two fields that are mandatory are the NEVR and the changelog; everything else is going to default to whatever Bodhi defaults to. So if you don't specify the update type, it's going to be an enhancement update, because that's the default. You can also specify Bugzilla tickets to close, and you can specify whether you want the Bugzilla tickets to be closed when the update is pushed. So pretty much all the fields that Bodhi exposes in the UI, or in fedpkg update, you will be able to fill in there. We will need to document this a little bit. Couldn't you read this from the spec file? So, the first thing is that I don't want to read it from the spec file, because I don't want to change the way things work currently if you don't want to opt in to FedoBuild. Basically, I want a way of interacting with dist-git and fedmsg and Koji and Bodhi such that if you want to update spec files, build them in Koji, and then gather the builds and push them as one update — I'm thinking of the large GNOME updates that we have once in a while because there is a new GNOME version — well, when they do this process, they probably don't want to use FedoBuild, because FedoBuild currently does not support putting multiple builds in one update. So I want this to be entirely opt-in, which means that I need a key at the beginning of my YAML file that uniquely identifies which release we are targeting. And the NEVR there is also what is used to ask Koji: was this build already done? And to ask Bodhi: was there already an update for this one? Because of course, if you update the changelog and the package was already built in Koji, there is no point in building it again.
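A minimal sketch of that mandatory-key check, assuming the changelog.yaml has already been parsed into a dict. The key names and the example values here mirror the talk, but the real proof of concept may spell them differently:

```python
# Hypothetical validation of a parsed changelog.yaml entry, as described
# in the talk: NEVR and changelog are mandatory; everything else (type,
# bugs, close-bugs) falls back to Bodhi's defaults.
MANDATORY_KEYS = {"nevr", "changelog"}

def validate_changelog(entry):
    """Raise ValueError if a mandatory key is missing; return the entry."""
    missing = MANDATORY_KEYS - set(entry)
    if missing:
        raise ValueError("changelog.yaml missing keys: %s" % ", ".join(sorted(missing)))
    return entry

example = {
    "nevr": "python-argcomplete-1.8.2-1.fc25",       # which build, which update
    "changelog": "Update to 1.8.2; what testers should look at.",
    "type": "enhancement",                            # optional; Bodhi's default otherwise
}
validate_changelog(example)
```

In the demo this check is what happens right after the git push: the consumer validates the YAML before it ever talks to Koji.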
If you update the changelog and there is already an update, then there is no point in creating yet another update. Also, because of the way RPM works, you can't actually get the NVR out of the spec file without building the RPM, and it might not come out the same way it does in the build system — unless you build it on the build system. So Matt also points out that RPM is designed in such a way that you can't really extract the NVR from the spec file, because of macros, because of how it works. Basically, you would need to get that from the RPM itself, and you might have situations where an RPM built locally would have a different behavior or output than an RPM built on the build system. So, you did mention that it does not support grouping multiple builds in a single update — have you thought about how you would do it? Yeah, so the question is: how would we support doing multiple builds in one update? I think the simplest way is probably to expand the NEVR field here to include all the packages that we want in that update. It's going to be a tricky one to support, but I think that's probably the easiest way to do it: just do a space-separated list of the builds you want, and then it would eventually combine the different changelogs, or you just make one changelog for everyone, and when it's finished building all the packages, it just creates the update. I'm not entirely sure, but that's basically my first thought on the question. There are probably easier or better ways to do that, but we'll see. So, I'm not selling you a wavy future that's coming a couple of years from now if there is some fairy dust and everything goes fine — it's actually currently a proof of concept. We actually have something, and to show you that I'm not lying, I'm actually going to do a demo. Well, wait, wait, wait.
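Before the demo, a quick sketch of how that space-separated multi-build idea might look. This is entirely hypothetical — the proof of concept supports a single build — and the function names are illustrative:

```python
# Sketch of the multi-build idea floated above: a space-separated NEVR
# field, with the Bodhi update created only once every build has finished.
def split_builds(nevr_field):
    """Split a space-separated NEVR field into individual builds."""
    return nevr_field.split()

def build_finished(pending, nevr):
    """Remove a finished build; return True once the update can be created."""
    pending.discard(nevr)
    return not pending

# e.g. for a large grouped update:
pending = set(split_builds("gnome-shell-3.22.2-1.fc25 mutter-3.22.2-1.fc25"))
```

The open question from the talk — one combined changelog versus per-build changelogs — is untouched here; this only shows the "wait for all builds" bookkeeping.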
I'm actually going to show you a movie of the demo. There we go. So, as I said, it's a proof of concept, and you can very nicely see it at the top here, because it asks for a FAS username and FAS password — it was running on my laptop, and it basically uses my credentials to act. So what we have is a fedmsg consumer that just listens to the fedmsg bus. Here I'm in the dist-git repository of python-argcomplete. I'm adding a changelog file. I'm just showing you what's in there, so we have the same structure as presented before: NEVR, changelog, update type. I'm doing a git push — well, git commit first, because changelogs are important in Git also. And that's zsh, zsh. And then I just do a git push. I'm waiting, and — oh, what happens here? So, a fedmsg from the dist-git receive hook: there was a git push. In that git push, there was a change to the changelog file. So it's looking at the changelog itself. It's checking that the mandatory keys are present: it's looking whether there is an NEVR and whether there is a changelog. So the YAML file is validated. And then it's basically calling Koji: was there already a build for this one? It finds out that Koji says, well, no, this package wasn't built. So, OK, I'm going to build it. I'm now opening the page on Koji to show you that it actually just triggered a build. And now it's like, OK, so what do I do? Because I need to keep you busy while it's building, and it's a three-minute video, so, you know, I need to fill in the blanks. I'm reloading the page; it's still ongoing. What do I do? Oh, wait, I was actually working on an F25 branch. So maybe I can, you know, kick off the build on the master branch as well. So I'm just going to the right branch, I merge my F25 change, I git push. And then the consumer — thank you.
The consumer received the message, figures out that there was a change to the changelog file, figures out it's the master branch. So it kicks off the build in Rawhide. And what we see is that we have two builds now running in parallel with the same consumer. How do you get the target? So the question is, how do I go from branch to Koji target? Currently I'm using — basically, so — the branch name, plus candidate? Yeah. I think it's currently hardcoded, especially for the master branch, so that it goes to Rawhide. Proof of concept, again. It uses the branch name, and for master it goes to Rawhide — it works. So what does it actually do? For master, I'm actually going to PkgDB, and I ask PkgDB: what is the Koji tag corresponding to that branch? And I could do that for the other branches as well, since PkgDB has that info. PkgDB makes the assumption that master equals Rawhide. Yeah, OK, that also works. I just wanted to pause it here, because what happens here? We got another build state change from Koji. So it says that something changed in this build, and it's basically that it finished building the SRPM. But it's not exactly the message we're waiting for — we're waiting for the build to finish. So we recognize this as being one of our tasks, but it's not done yet, so we just don't do anything; we drop the message. And what we can see here is that indeed the first task is done. I'm going to go a little bit faster now, because — yeah, well, the coffee comes after the git push, so that you can go for coffee. I'm just babysitting the process here because, you know — it's a video. Yeah, yeah, but I was away. There could be two coffees, and then you'd get two different coffees.
Of course, demonstration effect. The video was working, but now the presentation is gone. The terminal — that terminal? This one? It's too fast. Yeah. Go back a little bit there again. Yep. I need to go back to where we were. OK. So, we got our task. Let's go a little bit faster. We keep on building. We get Rawhide here: the Rawhide build also finished building the SRPM. So we got another message saying that something changed in the build, and we recognize again that it was one of ours, and we're still not doing anything with it. And let's go a little bit faster. We still need a — there we go. So we finally got the message saying that the build was finished, and we recognized that this is one of ours. So we start processing it, which basically means sending the update to Bodhi. This is in debug mode, so we got the entire answer from Bodhi — the entire JSON that Bodhi returns when it finishes the update. But what we see at the end is that an update was created at this URL. The only thing which I missed in that video is that I didn't actually click on the URL — I stopped the recording just before, so you don't see it. But, I mean, trust me, the update is present in Bodhi, and it does work, and it is an enhancement update, and it does contain this build. And if you want to look for it, it's python-argcomplete, which was updated, I think, to 1.8.2. So I'm not selling you pixie dust; I'm selling you something that actually works. There are some things that we still need to do. It has a very small memory leak, in that if your build fails, we don't remove the task ID from our list. So we need to check not only that the task is complete, but also whether the task was cancelled or failed, and then just stop watching it, because there is no point. It needs a readme file. So if you want to contribute to the project, just a readme file in a pull request would do.
And we need to figure out a way of making that an actual service and not something that runs on my laptop. Adam? So, you still have to update the spec file yourself, you still have to update the changelog in the changelog.yaml file yourself, and that's about it. But there is one thing that we are working on: making Pagure a front end for dist-git, which means that in the very near future, we're going to have release-monitoring.org figuring out that there is a new upstream release. Then we have the-new-hotness, which listens to release-monitoring.org messages and finds out that, hey, python-argcomplete has a new release, and python-argcomplete is packaged in Fedora, and the version that is in Rawhide is not the version that was just released, so we can update it. So it will update the spec file, and it will open a pull request on Pagure. And then we're going to have some CI integration there — that's where it's going to happen. It's probably going to do something like kick off a scratch build in Koji, see if it builds, run a couple of tests to see if it doesn't break the entire world, and then report on the pull request that, hey, this is building fine, this is not breaking the entire world. And you as a maintainer are going to be able to merge that pull request. And if we do things properly in the-new-hotness — not only updating the spec file, but also the changelog.yaml — it means that it's going to kick off the build and the update in Bodhi. Well, it's not actually going to trigger the update in Bodhi, since we're using Rawhide, but it's basically going to end up in Rawhide. So it still allows you to go for coffee, you know? So in the entire pipeline, it would make things quite a bit nicer. Dennis asks: how do we deal with the lookaside cache upload? That is a good question, to which I do not have the answer right now.
Well, if we're going to allow fedobuild to actually kick off builds, we need to have a service that does the upload of the sources when we do the merge. Validating them, yes, we will definitely need something to do that. We could ask the-new-hotness to at least update the sources file, and then we need something to retrieve the sources and upload them to the lookaside cache when the prerequisites match. Yeah. I mean, pull requests work really well for a patch to a package, but less so for new versions. Yeah, that is definitely a good point; we need to think about it. There are a bunch of technical solutions that could fix this issue; the question is to find the one that pleases all of us, or at least the one that displeases the least number of people. So all the builds kicked off this way in Koji will show up as being run by this service? That service, yeah. Do you have any plans to somehow map the builds back to the people who kicked them off? So the question here is: if fedobuild does all the builds in Koji, how do people get badges? I actually asked that same question, and Randy was really, really worried about it. I really like that: you would mess up my statistics. Yeah, that's also true. So one way would be to exclude fedobuild from the statistics, as we exclude Dennis. I don't do that anymore; he made a service account. So we could reuse that service account. The other option is to find a way of saying that the build was triggered by fedobuild in the name of that person. With the Kerberos authentication, we should be able to proxy the user, with a method similar to what we did for Koji Web, which has logins disabled but where the service can authenticate on your behalf: when you're logged into the service and you click merge to trigger the build, it proxies your user credentials through, so the build actually happens as the user that merged it rather than as some service account.
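Before uploading anything to the lookaside cache, the hypothetical upload service discussed above would need to check the tarball it retrieved against the dist-git `sources` file. This is a minimal sketch: it handles the newer `SHA512 (file) = hash` line format, and the function names are ours, not any real tool's API.

```python
import hashlib


def parse_sources_line(line):
    """Parse one line of a dist-git `sources` file.

    Sketch: handles the 'SHA512 (filename) = hexdigest' form; the
    older two-column format is not covered here.
    """
    algo, rest = line.split(" ", 1)
    filename = rest[rest.index("(") + 1 : rest.index(")")]
    digest = rest.split("=", 1)[1].strip()
    return algo.lower(), filename, digest


def tarball_matches(data, algo, expected_digest):
    """Check a retrieved tarball's bytes against the digest recorded
    in `sources`, as an upload service would before pushing the file
    to the lookaside cache."""
    h = hashlib.new(algo)
    h.update(data)
    return h.hexdigest() == expected_digest
```

Only after this check passes would the service push the tarball up, which keeps a merged pull request from smuggling mismatched sources into the cache.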
So Dennis mentions that, using the Kerberos credentials and the system we have rolled out in infrastructure recently, when people are logged in on Pagure and press the merge button, we could actually proxy the authentication so that it kicks off the Koji build as that person and not as the service account. So you can keep on getting your badges. Basically, badges are a very valid use case that is on our minds as we approach this problem. It's the most important goal of all. I have a question. Good. Signing RPMs: what does it mean, what's the purpose? So the question is what signing RPMs means. Signing RPMs is basically the way for users to be sure that the RPMs they download come from the Fedora project. Every single RPM that we distribute is signed with a GPG key, which is unique for each release. So when you install Fedora 25, you get the Fedora 25 key. And if you install, say, the RPM Fusion repository, the first time you install something from it, it asks you: are you okay with importing the RPM Fusion GPG key? That key is then used by RPM to make sure that the RPMs you're downloading do come from Fedora, RPM Fusion, or whichever repo whose GPG key you have accepted. That's the idea. Okay, so I guess that more or less concludes this talk. We've seen a little of what happened to Bodhi in recent months and what's going to happen to it, and I've made you dream about coffee and how you could just git push and forget about it. But we definitely need help. So if you are interested and willing: fedobuild is pushed to pagure.io. It has no README, so that's an easy fix, and we should get a README in there. Bodhi has a few more open issues, some of them easy fixes and some less easy, depending on how deep you want to dive into Bodhi.
But we welcome every patch, every idea, RFEs, and bug reports. Although there are no bugs in Koji, only random features. There are no bugs in fedobuild yet because it doesn't do very much. There are bugs in fedobuild; I already mentioned two of them. Even in a small project, you've got bugs. Alright, are there any questions left? Rotten tomatoes, flowers? Tomatoes, flowers. I'm really lazy and I don't want to update the changelog file and check it in. Can you make it so I don't have to do that? So the question is how we handle people being lazy and not wanting to update the changelog file. The idea is that we could push people to write an actually useful description in the RPM changelog and reuse that automatically. One of the issues I have there is that I don't want to change the current workflow for people, so I really want this to be opt-in. We could change that eventually. But in the longer run, if release-monitoring.org and the-new-hotness are doing their job properly, all you have to do is review a pull request and press the merge button, which doesn't imply editing files or even interacting with git in any way. So you can sip your coffee and press buttons. The thing with changelogs is that we actually have three different use cases for changelogs. The package itself has its own... I'll let this make it into the video and then you can point people to it afterwards: watch this talk, go to the 45-minute mark, and you have Matt speaking about changelogs. I care about this and I don't know the answer. So every piece of software that doesn't suck has its own changelog as part of the software: this is what happened in this release, these are the bugs fixed, all that kind of thing. There's also a changelog in the spec file, and the audience for these is often developers of the software and users of the software, both kind of mixed together.
The spec file has a changelog, and its purpose is really to inform other packagers what you've done to the spec file: I've moved these macros around, this is when and why you made that change to the spec file. A lot of that is stuff the user never cares about, but we keep track of it there. So that might make sense as the git commit message for dist-git. But then the thing that goes into Bodhi should never be the changelog from dist-git or from the RPM spec file, because the audience for those is totally different. What should go into Bodhi is why I made this update and why you as a user or a tester should care that this update exists. And we are really bad about that in Fedora, so I would like to do something that makes it easier, because some of the very best Bodhi notes are like "updated to new version". Like, thanks. I could see that. Yeah, "bump release". We see that one a lot. And there's no idea: maybe it says bug fix, and maybe it's linked to a bug, but you don't know how important it is. If I'm trying to test it, what should I look at? I'll pick on Docker: a Docker update comes out with critical things in it, and Dusty is telling us, everybody go test that, and I look and it says "bump version", you know, so you don't know what to test or what the reason was. So anyway, the spec file changelog doesn't tell you that, so if we dump that into here, we're not helping. Which means that the-new-hotness and Anitya are not entirely going to solve your problem, because we still need an actual human-readable changelog to put in there. Right, and maybe the upstream changelog helps. Yeah, that was me. Yeah, that's a good guy. You need to give a badge for a nice update. A badge, yes. You're only looking for more badges. Fedora mission statement two: Fedora is a game by which you can obtain badges through software configuration. Professor.
Yes, the %changelog macro, which, well, we can't separate from the changelog file because it's shared between dist-git and the RPM part. So the discussion going on is how we can improve the RPM changelog so that it is consistent across packages: you always have a space where you expect a space, the dates are correct, and the information provided in there is useful. There are two ways proposed. One is to follow the Debian rules, which are very, very picky about the date: they ensure that there is a space where there should be one, that the email address is actually an email address, and that the version numbers are consistent. rpmlint already complains about dates that are off: it wasn't Monday morning, you wrote it was a Tuesday morning, you got the rest of the date fine, and then rpmlint is going to complain that that date never existed and therefore there is a problem. It will do that for the entire changelog, which means that if you were not awake five years ago when you updated that package, it's still going to complain today that five years ago you were not awake when you updated that package. And then you need to edit your spec file and note in the changelog entry for your update to 1.0.4 that you also fixed the date of an old changelog entry. So then there are two other approaches. There is the one that openSUSE uses, which splits the changelog out of the spec file, and that changelog is then shared between the RPM spec file and what's presented to the end user, if I'm correct.
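The date-consistency check rpmlint performs can be sketched like this: parse the weekday, month, day, and year out of a %changelog entry header and verify they agree. A minimal sketch: real rpmlint checks much more (email format, version ordering), and this assumes a C locale for the weekday names.

```python
from datetime import datetime


def changelog_date_is_consistent(entry_header):
    """Check that the weekday in a spec %changelog entry header
    matches the actual calendar date, the inconsistency rpmlint
    complains about.

    Expected header form: '* Tue Aug 29 2017 Name <email> - 1.0-1'.
    """
    parts = entry_header.lstrip("* ").split()
    weekday, month, day, year = parts[0], parts[1], parts[2], parts[3]
    # Parse the calendar date and recompute which weekday it fell on.
    date = datetime.strptime(f"{month} {day} {year}", "%b %d %Y")
    return date.strftime("%a") == weekday
```

Run over every entry in a %changelog, this is exactly why a single wrong weekday from five years ago keeps producing warnings until someone fixes it.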
And then there is the other option, which is to basically get rid of the entire changelog in the spec file and rely on the git commit information, because we hear a lot of complaints from people that they have to enter a changelog three times: once in the spec file, once in the git repo, and once in Bodhi. And they have a point about git commits and the spec file: there is definitely a different use case for Bodhi updates, as Matt already nicely explained, but the spec file and the git commit are probably closer together than the Bodhi update notes, so maybe it would make sense to merge those two. Yeah, and for what got stuck into the RPM spec file: is it a big problem if you actually have an entry that says "I fixed the changelog"? As a git commit, that's not really a big deal. So there is eventually a way for you to do that. We're going to stop here, because Randy has another talk right now; if you want to follow him, he's going that way. Thank you. I want to ask: how does it feel to be here and say that you have taken over the maintenance of a troubled, abandoned project, to work 16-hour days, and be treated to a few hostile questions? Oh, thank you.