We can introduce ourselves — this is the microphone that's supposed to work. There were supposed to be a couple of antennas on that access point up there; it's more like six of them. Are the slides up? There they are. Are we ready?

This is a talk on reproducible builds — on building software reproducibly everywhere, not just RPMs, but all sorts of software. I'm Dennis Gilmore and this is Holger.

A little bit about me. Most of you probably know who I am, so I won't drone on too much. I'm Dennis Gilmore. I have been a Fedora user since day one, and I used Red Hat Linux before that. I've been involved in Fedora since Fedora.us, which was building extra packages for Red Hat Linux. I'm currently the platform team lead at Red Hat, so I look after release engineering for RHEL and Fedora, and that's been my day job for the last eight years: helping make sure we get the bits out there and built, and checking the sanity of them.

And I'm Holger. I work on Debian; I will spare you the Debian details. I last used Fedora in 2008 or so, for OLPC, to build for the schools over there. I've been working on reproducible builds for the last two and a half years now, funded by the Linux Foundation. The Debian branding of the slides is my fault, not Dennis's. This talk mostly tells the Debian story, with some outlook on what's to do for Fedora. And it's not just my work — I'm one person in a huge crowd of Debian people who worked on this.

First, before we start: who are you? Who has seen a talk about reproducible builds before? Some people. Who has contributed? Yay. Who uses Debian? Yay. And who uses Fedora? — maybe that question should have come first. So you're all awake, that's good.

Now a bit about the motivation, why we do this. Free software is great.
You can share it, study it, modify it, use it — that's really cool. But we can only do this with source code, and we use binaries. And we have to believe that the binaries come from the source they are said to come from. There's no way to programmatically, automatically verify that a binary comes from a given source. Maybe you can analyze one small binary, but that's manual work, so in general it's not possible. And I don't want to believe — I want to be sure, I want to know. That's what this is about.

The problems this causes are explained in great detail in a talk from two years ago at the 31st Chaos Communication Congress by Mike Perry and Seth Schoen. I'll give some of the examples they presented there.

One example was a root exploit in sshd: there was a single-bit difference in the binary, turning an "equal" comparison into "greater or equal", and that one bit in 500 kilobytes of binary was a root exploit. So if you change one bit in a binary, you might get root. They also had a live demo of a kernel module which modifies source code in memory: if you look at the source code, the source is fine, but when the compiler builds it, you get a backdoor. You cannot really be sure what's running on your machine. You can install a machine and it's probably a good system then, but after it's been on the network for some time you cannot really be sure, especially if the attacker's motivation is high.

Fedora has a build network; Debian too, though Debian developers can also do binary uploads. But even if everything is built on the build network, that build network is a great target — and you don't have to bribe the admins, you can bribe the janitors, or whoever else is involved there. If you really have motivation and money, you can compromise these systems. And there are also legal challenges.
You could be forced to put a backdoor in your system, maybe just for some customers, and with a gag order you're not allowed to talk about it. This is what we are trying to prevent.

Another example was a CIA conference described in the Snowden papers: they had the idea to backdoor SDKs, compromising users by going after the developers. And while that was a CIA paper, there was XcodeGhost, which was found in the wild: a trojaned iOS SDK put on mirror servers which people in China mostly used when they wanted to build iOS applications — they downloaded the SDK from another server which was faster, but compromised. Thousands of applications were backdoored: good source code, bad binaries. This is happening; it's not fantasy.

Our solution is that anybody should be able to independently verify that a given binary comes from the source it claims to come from. You should all be able to verify the same thing. We call this reproducible builds — not repeatable builds, but reproducible in the sense of bit-by-bit identical. You build a package five times and you get five times the same checksum, not a different one. And if you do it in two weeks or in two years, you still get the same checksum. That's really all it's about: bit-by-bit identical binaries.

And it's the same whether it's Debian packages or RPM. For RPM there's the extra challenge that there's a signature in the RPM; you cannot just recreate the signature, you need to replay it, but that's doable, because if you have the same binary contents the same signature will still apply, and you can get bit-by-bit identical packages. So what we want is to just checksum the result, and the checksum should always be the same.
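Bit-by-bit identical is easy to check and surprisingly hard to achieve. A tiny sketch of the goal, using GNU tar and gzip options that keep varying metadata out of the output (the file and archive names here are made up for the example):

```shell
# Build the same archive twice with metadata normalized:
# fixed ordering, fixed ownership, fixed mtimes, and gzip -n
# so no timestamp or filename is embedded in the gzip header.
mkdir -p demo && echo 'hello' > demo/file.txt

build() {
  tar --sort=name --owner=0 --group=0 --numeric-owner \
      --mtime='@0' -cf - demo | gzip -n > "$1"
}

build out1.tar.gz
sleep 1            # prove that wall-clock time no longer matters
build out2.tar.gz

sha256sum out1.tar.gz out2.tar.gz   # identical checksums
```

Drop any one of those flags and the two checksums will usually differ — which is exactly the class of problem described above.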
We don't want to analyze the contents, because tools which analyze the contents can be buggy, and those tools can be exploited. There should be no special tool to ignore some differences; we want the whole thing to be the same. Everything. This should become the norm.

When I put this slide together, I wrote that we want to change the meaning of free software: it's only free software when it's reproducible. I gave this talk at the Free Software Foundation last year — I was a bit curious what they would think — and in fact they liked it: the FSF has a priority list of important software projects, and as part of their security priorities they added reproducible builds two weeks ago or so. So the Free Software Foundation basically agrees.

And there's more than just security; there's a lot of QA benefit. We found many, many subtle bugs by building twice and verifying the results. One example: a Python shebang line changed depending on build speed, because if you built on a fast machine in under a second, the make timestamps were the same — so a fast machine and a slow machine gave different results. We found many QA issues like that. If you try to make your software reproducible, you find a lot of other stuff which is strange and funny.

Google does reproducible builds to save time — developer time, and thus money — and made all their internal software reproducible. You get smaller deltas, so faster updates, both for packages and for images, because the content changes less and you can reuse the data. And as a side effect you can do a meaningful binary diff between two versions: if you change one function of a huge program, only that small part of the program should change. If there are lots of other differences from timestamps and whatnot, you cannot see this.
But if everything else stays the same, you can see that just this one function changed. There are probably more benefits, but these are the main ones.

So, what we've done. For a year now we've had the website reproducible-builds.org. We started with information in the Debian wiki, but at some point we wanted more distribution-agnostic information. If you take one URL from this talk with you, please take this one. There's documentation there covering common mistakes and common pitfalls when making stuff reproducible. We have IRC channels — one for Debian, one general — and we have mailing lists.

And we wrote diffoscope. Diffoscope examines differences in depth: it recursively unpacks an archive or an ISO or whatever, goes through everything, and will show you that some RPM has a PDF inside, and the PDF has an image, and the image's timestamps vary. There's HTML and text output, and it's available in Debian, in Fedora, on FreeBSD — it's quite widespread now.
This is how diffoscope's HTML output looks; the easiest difference to see is the lower line with the version string, which differs in this example. In the beginning diffoscope only worked on two Debian archives, but now it also works on ISOs, you can point it at directories, and you can use it to compare two different versions. Just try diffoscope — it's really, really useful. But diffoscope is not the tool to verify that something is the same: to check that, it should really be the same checksum. Diffoscope is for debugging — for finding the reasons why something is not reproducible. There's also a web service, try.diffoscope.org, where you can upload two files without having to install diffoscope. If you install diffoscope with all its recommends, it pulls in about 1.5 gigabytes of binaries on a base system, because it can also unpack Android APKs and whatever else; all the different helper tools make it quite big.

Then I started to set up tests.reproducible-builds.org, which in the beginning was just called reproducible.debian.net. At first we only tested Debian, by building a package twice and comparing. Then we started to add more variations: we changed the user name, the time zone, whatever parameters. Now we constantly test all three Debian suites on four architectures, and we also test OpenWrt, NetBSD, and FreeBSD — we used to have tests for Arch Linux, but at the moment they're not working. It's a giant Jenkins setup with 44 nodes and a terabyte of RAM, building all of Debian twice every two to four weeks, constantly, and there are many people working on this setup now.

This is how we vary the builds. The varying is done to try to catch variations; there will be more variations in the wild, but we assume that if we vary these, we will find the most likely cases. We vary the host name, the domain name, the time zone, the
locale, the user. We build with different kernel versions — on i386 we build with a 64-bit kernel and a 32-bit kernel — and on some architectures we have different CPU types. The file system order is also important, because we fear the readdir order is not deterministic, or depends on the inodes, which can change. And we also have systems running in the future — a year, a month, and a few days ahead — so we can see what breaks. Which is also funny, because we find bugs where certificates expire; and if you want to reproduce something in two years, that's bad. Maybe we should not just build one year in the future but five years, but then we'd catch more errors, and for now we want to find fewer errors.

What we found is mostly timestamps. Either file timestamps, or — also very common — a timestamp written into the build product: not so much in the program itself, but in documentation. Man pages often include "this was built on this date", which is really meaningless if you can rebuild on any day and get the same result. Also the time zone: if you unpack a zip archive from 20 years ago, the local time zone is applied, and that goes into the build product again. Locales are in there too, and the build path, and then some other corner cases — but it's mostly these four things that make stuff unreproducible. Sort order also differs depending on the locale — not for the alphabet, but for special characters. And if you don't sort hashes, you get arrays in whatever order they happen to come in. Often the fix is easy: you call gzip -n, or emit timestamps in UTC. This is all in the documentation on our web page.

Besides the documentation on the web page, Lunar gave a talk at the CCC camp two years ago with many examples of problems and their solutions — if you want to watch a video, that's another good one to watch.
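The locale pitfall is easy to demonstrate: collation order for case and special characters depends on the active locale, so builds on differently configured machines can emit lists in different orders. Pinning `LC_ALL=C` (byte-order collation) is the usual fix; a minimal illustration:

```shell
# Three names whose relative order depends on collation rules.
printf 'a\nB\n_\n' > names.txt

# The C locale sorts by byte value, which is identical on every
# machine: 'B' (0x42) < '_' (0x5f) < 'a' (0x61).
LC_ALL=C sort names.txt    # prints B, _, a — one per line
```

Under a UTF-8 language locale, `sort` may instead interleave case and ignore punctuation, so the same input yields a different order — and an unreproducible file list.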
Then we came up with SOURCE_DATE_EPOCH. The build date is usually not meaningful, but this epoch is the last modification time of the source code, in seconds, and it can be used instead of the current date — it can even be used as a random seed, so randomness becomes pseudo-random. In Debian we set it from the latest Debian changelog entry; it could also be the last git commit or whatever. We wrote a spec, which is two or three kilobytes of text, so really short — you can read it in three or five minutes. It has been adopted by many distributions and tools: there are patches for RPM, dpkg supports it, GCC, Ghostscript — it's more than 50 tools now which support SOURCE_DATE_EPOCH. If SOURCE_DATE_EPOCH is not set, tools use the current time and date; if it is set, they use that.

Then we wrote two more tools. strip-nondeterminism removes timestamps from images and other timestamps which are clearly meaningless — they're just set to zero. In the Debian case we call it from debhelper, so all packages which use debhelper — which is 90% — use it automatically. strip-nondeterminism is not used by other distros yet, but it would increase the percentage of stuff which is reproducible. The other tool, which we started writing in the last half year, is reprotest: it builds something, applies variations, and builds it again — what we do on the big Jenkins setup, but as something for you to use locally. I'm not sure anybody has tried it for building RPMs yet.

Now, the status in Debian. Debian testing is now 93.3% reproducible, which I think is fairly great. We're progressing more slowly now, but on the other hand, three months ago we were still at 91%, so there is still progress happening. Then we had the idea: hmm, we don't vary enough. So for unstable we started to vary the build path; for testing we still use a deterministic build path.
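Honoring the spec in a build script takes only a couple of lines: use `SOURCE_DATE_EPOCH` when the caller set it, otherwise fall back to the current time. A sketch (the changelog/git derivations are common conventions, not mandated by the spec itself):

```shell
# Honor SOURCE_DATE_EPOCH if set; otherwise fall back to the
# current time, as the spec requires.
build_date() {
  date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%d'
}

# In a git checkout, one common way to derive the value is:
#   SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)

# With the variable pinned, every rebuild embeds the same date:
SOURCE_DATE_EPOCH=0 build_date   # prints 1970-01-01
```

Note the `-u`: the date is rendered in UTC, so the builder's time zone doesn't leak into the output either.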
That deterministic path is basically /build/⟨source⟩/⟨version⟩; for unstable we use a random build path, and the build path gets embedded in lots of stuff, so we only get a lower reproducibility percentage there. For now we say: use a deterministic build path. But we want to be able to build in any path, because developers will do just that — if I build in /home/holger and Dennis builds in /home/dennis, I still want to be able to compare the results. This might take some time; we have patches for GCC, but GCC is only used for half the archive or something, and there's also lots of build-path leakage in documentation generators. So at the moment we still recommend not varying the build path and just using some standard path.

Then, because just looking at those 25,000 Debian packages is too much to grasp, we have package sets as well: the GNOME package set, the base set, all the Perl packages, all the Ruby packages, and so on. The Debian essential set — which is 20 or 30 source packages — is now reproducible except for Bash and GCC-6, I think, and for GCC it's the documentation. The other package sets are not that reproducible yet, but this helps people focus.

And we categorize issues. We have a system to take notes, and we found 282 different issues: Erlang embeds the build path, Pandoc puts a timestamp in there, locale variation, and so on. We found 7,000 packages which still have some kind of issue, and we've looked at almost all packages — there are only a hundred-something packages left where we don't have an idea yet. We also want to make these notes cross-distro, because the issues are most likely the same on Fedora, and even on FreeBSD — it's all the same upstream software.

Then, for rebuilding, we came up with .buildinfo files, which record the source that was used, with checksums, and the checksums of the generated binaries — the
stuff you want to reproduce — and the environment, so it can later be used to recreate the same environment, because you need the same versions installed; otherwise you cannot expect to get the same results. For Debian it's quite easy, because all old versions are available on snapshot.debian.org. I have no idea how to get old Fedora versions, especially during development. — It's all in Koji. — It's all in Koji, in 30 terabytes of disk. — That's the same size as Debian's.

On this graph, the green ones are fixed bugs and the red ones are open bugs — these are all bugs about reproducibility issues. We only file bugs with patches, because a bug just saying "this package is unreproducible, please fix" is not a meaningful bug report. So for all these 800 — what is it, 700, I think — that are left open, we have patches to make the software reproducible, and most likely you can take them. While doing this we relied on the Debian maintainers to forward things upstream, which some maintainers do and some don't. Now Bernhard Wiedemann from openSUSE has started another effort where he actively sends patches upstream — I think he has sent about 100 patches so far, and he only started mid-December last year, which is not long ago.

Until very recently all this was a proof of concept: our changes to dpkg, the central packaging tool, were only accepted in Debian in November. Since November, dpkg in Debian unstable can create reproducible packages, and since December it stores the .buildinfo files. Debian stretch will be frozen in nine days, and we will not have many reproducible packages in there, because Debian doesn't do rebuilds — at the end of the freeze cycle Debian doesn't rebuild everything, for historic reasons. I think that's crap, but that's why Debian stretch will not be the 90% reproducible. On the other hand, if somebody else takes Debian stretch and rebuilds it, they will get the 90% — but Debian itself doesn't have it.
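The idea behind a .buildinfo file can be sketched in a few lines: record checksums of the inputs and outputs plus enough of the environment to recreate it. This is a simplified stand-in, not the real Debian format — the field names and file names here are made up for illustration:

```shell
# Stand-ins for a real source archive and build product.
echo 'source' > pkg.tar
echo 'binary' > pkg.out

# Write a minimal buildinfo-style record for one artifact:
# input checksum, output checksum, and environment details.
{
  echo "Source-Checksum: $(sha256sum pkg.tar | cut -d' ' -f1)"
  echo "Binary-Checksum: $(sha256sum pkg.out | cut -d' ' -f1)"
  echo "Build-Path: $PWD"
  echo "Toolchain: $(gzip --version | head -n1)"  # one example input
} > pkg.buildinfo

cat pkg.buildinfo
```

A rebuilder consumes such a record by reinstalling the listed toolchain versions, rebuilding in the same path, and comparing the output checksum against `Binary-Checksum`.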
I'm not so happy about that. We also don't distribute the .buildinfo files yet — we save them, but we don't have a mechanism to publish them, and it's still quite unclear how to do that. Still: Debian unstable is now partly reproducible, and others can take it — Ubuntu may make 17.04 or 17.10 partly reproducible — and Debian 10 in 2019 will be partly reproducible, yay. For Debian 11, in 2021 or whenever, I think we should have a policy mandate: if it's not reproducible, it will not be released. But it's a long way.

Basically from the beginning we realized it's not enough to only care about Debian; these changes need to go upstream. So we started to write weekly reports — a blog where we document our progress, development, and discussions. And we've held two summits, one in December 2015 and one in 2016; Dennis joined us at the 2016 one, and Bernhard Wiedemann from SUSE was also there. There were about 25 projects represented, up from 20 or so in 2015. We will do another summit this year — if you want to go, please talk to me; we would be happy to have more Fedora people there. It's sponsored mostly by the Linux Foundation and Google. We also did two Google Summer of Code rounds and are in our third Outreachy round, which brought new contributors, which was really good.

Now for the stuff which probably interests you more. On this Jenkins setup — the Debian Jenkins — we now also build FreeBSD and NetBSD with variations. coreboot — which used to be called LinuxBIOS — is 100% reproducible with the SeaBIOS payload; the problem with coreboot is that they don't do releases at all, so only git is reproducible. NetBSD and FreeBSD are in the 70-80% range; it's usually the base system which is more problematic — the packages are basically the same as elsewhere.

And there's more history. Until last week I thought Bitcoin was the first, because the
Bitcoin developers — this was in 2011, when Bitcoin had a market capitalization of four billion dollars — worried that a compromised Bitcoin binary could steal the bitcoins and send them somewhere else, and the developers could not prove that they hadn't made such a binary. So they made Bitcoin reproducible, so you could be sure the binary you're running really comes from the source code. The same with the Tor Browser, which is Firefox, as you probably know — that made clear you can also do this with really big binaries, and that motivated Debian to start in 2014. And last week I learned that Cygnus — the company which worked on GCC and other things in the 90s — had bit-by-bit identical releases in 1992, for eight architectures. But that code bit-rotted and it's now completely forgotten; when I read this last week I was like, wow.

ElectroBSD had a talk last year at FOSDEM: a small FreeBSD variant with a small base system that is 100% reproducible. Tails is working on their image — I think they're almost there — and Webconverger is another live distro where the images are 100% reproducible. Google's Bazel is a build tool which aims for reproducible builds, and there's a build tool for Windows so you can create reproducible Windows binaries. There's also a bit of commercial, proprietary software which is reproducible. For Windows there's at least source code somewhere — Microsoft says there's source code — while for iOS there's not even that. And think about it: medical devices in your body, arms, drones, nuclear power plants, self-driving cars — they are all not reproducible. Gambling machines, though, are reproducible by law in Germany and France, for tax reasons.

Then Bernhard started in December to build openSUSE twice in the Open Build Service. He has built 3,000 packages by now and managed to get 2,000 of them bit-by-bit identical. And I set this up with Fedora 23 — that was in 2015 —
but I didn't have a patched RPM there, so there was no result, and that setup bit-rotted as well.

So, the basics in Fedora. Diffoscope is available — there was a new release of it yesterday, so we don't have the latest in Rawhide, and Fedora 25 and 24 have the previous version — so you can test your binaries today and see what the differences are between builds, which is useful.

We have a few potential issues in recording all the buildinfo kind of data. YUM and DNF do different things: if you create a build root with YUM and I create one with DNF, we potentially have a problem, because they may resolve dependencies differently, which may result in different binaries. Hopefully not, but it's an area that could cause problems.

In RPM 4.13 they added a macro to set the build host name, so you can define the build host to be "fedora", and then every build will have "fedora" as the build host instead of whatever host you actually built on — in order to help make the whole thing bit-by-bit reproducible.

We also have a few issues with signed RPMs. Koji itself strips the RPM header: the build produces an unsigned header, and when we sign, we strip the header off, tell Koji to write the signed RPM, and it takes the payload and shoves the correct header in there. In theory we could reuse that: take the binary RPM you want to reproduce, strip off the signed header, reproduce the build, shove the signed header back in, and it should be bit-by-bit reproducible. It's probably a bit ugly — I'm getting dirty looks from the RPM folks back there — but it should work; it's a bit of theory.

We also have an issue where, in the Debian case, they're looking at recording the checksums of the packages that went into the build environment. In Koji we use unsigned RPMs in all the build roots, but if you're using Mock at home, you're going to get signed RPMs for everything going into the build root. So that may be something where we just say we're not putting checksums in there, because you can go from signed to unsigned — the contents are exactly the same minus the signature header — and the build is reproducible or it isn't.
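That build-host macro can be set system-wide in an rpmmacros file. A sketch of what that looks like — the macro name `%_buildhost` is my assumption from current RPM releases; check the documentation of the RPM version you're on:

```
# /etc/rpm/macros or ~/.rpmmacros — pin the recorded build host
# so every package says "fedora" instead of leaking the real
# machine name into the RPM header.
%_buildhost fedora
```

With that in place, rebuilding on two different machines no longer produces a BUILDHOST difference in the headers.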
We could potentially strip the signature before we checksum, but it depends on where we fit it all in, and on the cost of stripping that out and getting all that information. We certainly don't want anything we do in Fedora to be overly burdensome or too intensive — where the cost of enabling people to reproduce our builds is so high that people want to turn it off because it adds 5-10 minutes to their builds. People already complain that the init of a chroot takes too long, and they want us to make the chroot appear faster so they can get their build two minutes earlier. If we add something time-consuming, that's not going to help at all.

That's the basic part; then there's a whole bunch of to-do type things. For the equivalent of the .buildinfo file, we have probably 80-90% of the data in Koji already — it's just not easy to get at. For a long time we've said we do things the way we do so that we can reproduce the builds, but in practice no one has ever actually sat down and said: okay, I want to reproduce this build from Koji. It's really about going from the theory of reproducibility to practically being able to reproduce stuff, and to say: hey, you can trust that Fedora is what we say it is, because here's the information you need to reproduce the builds.

The format in which we present that information to users needs to be decided. I think there are probably two places where we want to do it. We don't want people who want to reproduce a build to have to stand up Koji — though it will be really nice and convenient that, since Koji has all the information, we can have an API call: I'm rebuilding
foo-1.1-1, give me the build info — and you get that information. And I think Mock needs the ability to write out a file with all of the same information you would get from that Koji call. I say Mock and not rpmbuild because, in the case of building, say, a RHEL 5 RPM in Koji today, the chroot is set up by the build host, which in the Fedora case is Fedora 25. So you're setting up a RHEL 5 chroot on Fedora 25, and RHEL 5's RPM cannot read Fedora 25's RPM database — if you wanted to do rpm -qa inside the build, it would blow up and choke and you wouldn't get builds. So I think Mock needs to use the system's RPM to tell you what's installed — which it does already, in root.log — but put it in the right format. If you wanted RPM itself to do it, you'd have to read the RPM database from inside the chroot, and there's a reason why the packaging guidelines ban any call to RPM inside the buildroot: it's not because we want to be jerks, it's that we can't guarantee it works — it works sometimes and other times it won't.

There are some variables that need to be set in the environment, like SOURCE_DATE_EPOCH. The SUSE guys told me yesterday they're working on a patch that sets a macro derived from the RPM changelog and sets SOURCE_DATE_EPOCH from it, and they're going to send it upstream, so hopefully that will come with time.

We already record the sources, we already record the output, we know what RPMs went into the build — so we have most of the data; we just need to put it together in a consumable format. That will then allow us to write tools that wrap around Mock and say: hey, we're going to reproduce this build — it sets up the chroot, sets up the build environment, builds everything, and hopefully gets the reproducible build out at the
end. We've probably got 90% of what we need already with Koji, without any thought or effort; the last piece is going to take some work and people to work on it, but I think it's reasonably doable in the next 12 to 18 months. The hardest bit will be setting up something — either using tests.reproducible-builds.org, or getting Google or NASA or whoever to actually reproduce builds and provide verification, or even setting up something ourselves where we can reproduce the builds. But then, do you trust us to provide the verification of ourselves? Still, it's something I think is really doable with reasonably minimal effort, because we're doing most of what's needed already. SUSE uses zypper; I looked at their equivalent of Mock, and it's about a 1,400-line bash script doing the equivalent of what we do in Mock — so I'm not sure there's going to be a lot of overlap in that portion between SUSE and us in Fedora.

So far this was mostly about making reproducible builds possible at all — enabling us to do this. For the future we first need to constantly test whether it bitrots or not, which we are doing. And then we still need tools, infrastructure, and policy for this to become meaningful for users. We need these .buildinfo files and infrastructure to distribute them: Debian has 25,000 source packages and 10 architectures, so that's probably 100,000 files for one release — how do you distribute that? We want to sign them, and there's almost no work being done on that yet; so far we've only been focusing on making reproducible builds possible. And there will probably be very different solutions in different projects: making builds reproducible is shared work on the same source code, but the tools around distributing and verifying will not be so shareable. We need something like .buildinfo files everywhere — we have them for Debian, and some for coreboot and
OpenWrt, and Dennis just explained how to do it for Fedora — but for the other projects this hasn't really been done. And then, having individuals do the rebuilds — we don't think that scales. So maybe large organizations rebuild and compare: the ACLU, Greenpeace, NASA, the NSA, the Russian army, the North Korean army, the US army — they all rebuild and agree. Or we in Fedora rebuild Debian, and Debian rebuilds NetBSD, or big customers rebuild it themselves. There could be several different models. You also need to decide whom you trust, and what you do if you get one checksum which doesn't match. This is all future work for the years to come.

Then we need integration into user tools. Do you really want to install only reproducible software, or do you want to build it yourself first and then compare the checksums? How many signed checksums do you need to call a package reproducible, and from whom?

— Isn't that what Guix is doing all the time? — Kind of; there's guix challenge, which is a tool to rebuild and compare.

Once you get the RPMs and packages reproducible, you can then also look at things like live CDs, the Anaconda installer, cloud images, and all the other binary artifacts that we deliver. A couple of people have been working on a lot of the live CD stuff. mksquashfs, apparently, does things such that you can install everything exactly the same way and still not get a reproducible ISO out of it, because it does stuff differently each time — so they're working on fixing that. Once we've got the base done, there are places higher up in the stack that can be made reproducible too. And once you create images with file systems, the file systems need to be reproducible, and they all put timestamps in there. SquashFS was recently fixed, but there are other file systems which are not fixed yet.

So, if you want to get involved: as a software developer, please stop using build dates — if you really have to, use SOURCE_DATE_EPOCH. And you could also
You could also look at your distribution's builds, or just build the packages you maintain twice yourself, half an hour or two minutes apart, and see whether the results are bit-identical. There is lots of documentation, and I'll be here for the next two days, as will Dennis, so you can also talk to us. That's it. Questions?

For that there is reprotest. reprotest is our build tool for developers, and the idea is a bit like bisecting: it builds twice with all variations, and if the result varies, you stop varying one thing at a time, for example the locale. If the build is then reproducible, you know it was the locale. That might mean you need to build the software ten times, and maybe you still don't find the reason, but it should help.

We want to flag things as unreproducible, a bit like we all know from invalid HTTPS certificates. We don't want to ask the user; we just want to set sane defaults and then say: sorry, there is something wrong with what you are about to install. It could be a DNF plugin that does something like that, and there could be a system-wide policy that says that by default unreproducible software can't be installed. The metadata is already quite big, though. I totally agree we need to think about this more; these are all old thoughts, and we've not spent much time on user tools in the last year.

What is the biggest thing besides build dates which makes builds unreproducible? That is build paths. For build paths we can use an easy workaround: just use a deterministic build path. The next biggest is probably locales and time zones, which again we can fix easily by forcing the time zone to UTC before the build. In Debian we want to allow developers to build in their environment as it is, and the same goes for locales: we want developers to get error messages in French or German or whatever, not force them to English. So if you normalize the build environment, you get a lot of things done.
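The normalization just described can be sketched roughly like this; it is a hypothetical illustration, pinning the variables the talk names as the biggest culprits after build dates (time zone and locale), with the build path handled by building in a fixed directory instead.

```python
import os

# Hypothetical sketch: build an environment dict that a rebuilder would
# pass to its build command (e.g. via subprocess.run(..., env=env)).
def normalized_env(base=None):
    env = dict(os.environ if base is None else base)
    env["TZ"] = "UTC"          # fixed time zone for every rebuilder
    env["LC_ALL"] = "C.UTF-8"  # fixed locale, overriding LANG/LC_*
    env["LANG"] = "C.UTF-8"
    return env

# A real rebuilder would additionally run the build under a fixed path
# such as /build/<package>, so no user-specific directory leaks in.
env = normalized_env({})
print(env["TZ"], env["LC_ALL"])
```

Note the tension mentioned in the talk: Debian prefers to make builds reproducible despite varying locales, whereas a build service like Koji can simply force values like these everywhere.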
Koji is a tool that normalizes the build environment. Debian has such tools too, but in Debian they are less strict. If you strictly enforce the build environment, you get a lot of problems out of the way.

How many problems do you get from different file systems? We have only looked at packages so far, and only a few packages include a file system image. The question was: what differences do you get by using different file systems, like XFS or ext3/ext4? You can get quite a lot if the sorting of directory entries differs, depending on how the file system lists things out. It's case by case, but mostly it's sorting issues, because the readdir order is different. We wrote another tool which I didn't mention, called disorderfs: it's a FUSE file system which randomizes the readdir order, and it's in our test framework, so at some point we catch those cases.

The question was: do we plan to update the packaging guidelines to include something for reproducible builds? Probably, at some point. The patch that the SUSE guys were working on for RPM sets a macro called source_date_epoch_from_changelog. The idea is that it takes the last changelog entry and uses its date as the build timestamp; changelogs only carry dates, so the time of day is set to 00:00:00. If we do something in Fedora, we would probably put it in fedora-release and set that macro there, so it is set globally and anyone doing a build would just get it turned on. We would also need documentation: if you get failures or other issues, this is how you debug them and work on fixing them. But I don't know that there is anything package-specific we need to set; having every spec file set the macro is not going to work, it needs to be set somewhere globally.
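The readdir-order problem has a simple fix on the tool side, sketched here as a hypothetical example: never trust the order the file system returns (which is exactly what disorderfs randomizes on purpose), and sort entries yourself before packing them into an archive or image.

```python
import os
import tempfile

def sorted_tree(root):
    """Yield all file paths under root in a deterministic order,
    independent of the file system's readdir order."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                 # fixes the traversal order in place
        for name in sorted(filenames):
            yield os.path.join(dirpath, name)

# Demo: whatever order the files were created in, the listing is stable.
tree = tempfile.mkdtemp()
for name in ["b.txt", "a.txt", "c.txt"]:
    open(os.path.join(tree, name), "w").close()

order = [os.path.basename(p) for p in sorted_tree(tree)]
print(order)  # ['a.txt', 'b.txt', 'c.txt']
```

Running an unreproducible package's build under disorderfs is how the test framework mentioned above surfaces tools that forgot to do this sorting.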
As far as possible the fixes go upstream, so that the tools that produce artifacts produce them reproducibly: if GCC gets changed, if the tools get changed, it's not about fixing individual packages, it's about fixing the whole system we build with. That way you don't need to change anything in a package, except maybe temporarily until you get the patch upstream. So in the end it doesn't really belong in guidelines. He said that it's not about changing the packages but rather about changing the tools upstream, so that we don't need so many guidelines.

One thing we did in Debian is change policy. Well, at the moment there is nothing in policy yet, so the first change we want is that packages should be reproducible, and later that packages must be reproducible. It's a long process.

Last question: how do we deal with build systems that are not GCC based? The same way as with GCC: we try to find the cause and send a patch upstream. There is an example of where things went wonky there. The build hardware in Fedora has, I think, 16 cores and 64 GB of RAM, while the build VMs have 4 cores and 8 or 16 GB of RAM, something like that. When GCC was built on the build VMs it would consistently get one hash, but if you built it on the build hardware it got a completely different hash that it used in the shared objects, and then you'd have to rebuild everything else. That difference was introduced into the package purely based on what machine you built it on and how fast it could do certain things. That's something we've seen in Fedora. So thank you very much, everyone.