So, hello everyone. Thank you for joining our talk. My name is Tomáš Hrnčiar and with my colleague Miro Hrončok, we'll be talking about how we are upgrading Python in Fedora. Slide please. Yeah, I was just slow, sorry. So, in the past, before Python 3.9, a new Python was released every one and a half years, which means every three Fedora releases. But later, Python developers adapted their schedule to match the Fedora release cycle. So nowadays, they release Python every year, which allows us to upgrade Python every second Fedora release. The synchronized release cycle benefits both sides: Fedora can rely on the regularity, and Python developers receive our feedback from the very first alpha versions of the upcoming Python. Moreover, Fedora is the driver of fixes for many upstream projects. Next slide, please. As you can see on the slide, Fedora offers multiple Python versions. It's because we aim to be developer friendly, so we offer a broad spectrum of Python versions. We try to keep up with upstream, and pull requests with new versions are usually open the same day as they are released. The availability of different Python versions allows developers to use Fedora as their development machine. And we also provide other Python implementations such as PyPy, MicroPython, or Jython. For every Fedora release, there is one main Python. Fedora has a stack of thousands of packages, and this is the Python it runs on; Fedora itself runs on it. So whether you install Fedora, install packages, or make packages for Fedora, you are using the main Python. Also, this is the Python you get when you run the python3 command in your terminal. For Fedora 35 and 36, it is Python 3.10, and it's a critical component of Fedora.
And for many users, it's the only Python they will ever see. Of course, you can install different versions, but this one is the main one. Now let's move on to some statistics about Python packages in Fedora. As you can see, the number of Python packages is steadily growing. When Miro gave this talk for Python 3.9 in Fedora 33, there were about 4,700 packages. Two years later, we have 800 more. It makes up about 10% of all packages in Fedora, which is probably the biggest stack. This is the number of packages, not components; in the case of components, Python makes up about 20% of all components in Fedora. There is one catch: the Fedora 37 number is not final. There will probably be new packages by the time Fedora 37 is released, so the growth will probably end up at about the same 5% as in previous Fedoras. You can also see that the growth at the beginning is a little higher than the growth now. That's probably caused by the migration from Python 2 to Python 3: every time a package migrated from 2 to 3, it started to show up in the statistic, and around here, the migration is basically finalized. We still have some Python 2 cruft, but it doesn't move anywhere. We are around 5% growth for every Fedora release. Most of those packages, not all of them, but about 4,000 out of the 5,000, require a specific Python version at runtime. Either they require the python(abi) virtual provide, which is versioned, or they require the versioned Python library. Usually this is for reasons such as the location on disk: if a package is installed in the Python 3.10 directory, you cannot use it with Python 3.11, and you need to rebuild it to gain the new dependency. There are also other cosmetic things like the bytecode cache and stuff like that, but it's not really important.
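That runtime coupling can be illustrated with a small check. This is a minimal sketch, not the team's actual tooling; the requirement strings mimic what `rpm -q --requires` prints, and the function name is made up for illustration.

```python
# Sketch (not the actual Fedora tooling): decide from an RPM's Requires
# whether it is tied to a specific CPython version and thus needs a
# rebuild when the main Python changes.
import re

def needs_python_rebuild(requires, old="3.10"):
    """True if any dependency pins the old interpreter version."""
    patterns = [
        rf"^python\(abi\) = {re.escape(old)}$",  # versioned virtual provide
        rf"^libpython{re.escape(old)}",          # versioned shared library
    ]
    return any(re.match(p, req) for req in requires for p in patterns)

# Example requires as `rpm -q --requires` would print them:
print(needs_python_rebuild(["python(abi) = 3.10", "glibc"]))           # True
print(needs_python_rebuild(["bash", "glibc"]))                         # False
print(needs_python_rebuild(["libpython3.10.so.1.0()(64bit)", "glibc"]))  # True
```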
When we need to upgrade the main Python, for example now from 3.10 to 3.11, it means we need to rebuild 4,000 to 5,000 packages, and you can't just rebuild them all alphabetically or all at once, because most of the packages have build-time dependencies on other Python packages, and before you rebuild those, you can't rebuild the others, so you need to build the packages in the proper order. What's problematic is that packages in Fedora, and especially in Rawhide, don't always build. We will talk later about the reasons for this, but for now it's only important that this really complicates things, because when one package doesn't build, anything that needs that package to build can't be built either; you have a chain reaction, and then you have a cluster of packages that you can't really build. One of the problems with the rebuild is that the new Python version is not backwards compatible. This is intentional, not just buggy behavior. For the purposes of Python, 3.10 to 3.11 is a major upgrade. The versioning of Python predates semantic versioning, so some might think that the second digit is a minor update, but for Python, this is major. Updating the first digit is beyond that, and we won't talk about that here. So much for that. Okay, so we covered some general intro, and now let's go back in time to fall 2021 to see what we were doing when the very first alpha of Python 3.11 was released in October. We packaged the new Python version so people could start testing it. We created the package as a fork of python3.10 to preserve the git history, and we also had to adapt some macros, like variables in the definition of the package. For example, there is the base version variable, which was 3.10, so we edited it to 3.11. We rebased our Fedora-specific patches; they are stored in git, so it's pretty easy. And we also removed some old cruft, such as old patches.
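The "proper order" above is a topological sort of the build-dependency graph. A minimal sketch with the standard library follows; the package names and dependency edges are made up for illustration.

```python
# Minimal sketch of ordering packages by build dependencies using the
# standard library; the packages and edges here are illustrative only.
from graphlib import TopologicalSorter

# package -> set of packages that must be rebuilt before it
build_deps = {
    "python-setuptools": set(),
    "python-wheel": {"python-setuptools"},
    "python-pip": {"python-setuptools", "python-wheel"},
    "python-requests": {"python-pip"},
}

order = list(TopologicalSorter(build_deps).static_order())
# Every package appears after all of its build dependencies.
print(order)
```

Note that `TopologicalSorter` raises `graphlib.CycleError` on a build-dependency loop, which is exactly the situation the talk later solves by building one package in the loop without its tests first.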
Sometimes the Python maintenance team develops new features in Fedora, and then we work on upstreaming them. Once they are part of Python, we can remove them from Fedora. Interestingly, we can build the new Python package for Python 3.11 in two ways: either as an alternative Python or as the main Python. It has a bcond, so the same spec file can be built differently based on the environment. We don't need to care which Python version is the main one. Since Fedora 33, there is no python3 source package or component anymore. We only have python3.9, python3.10, and so on, and only one of them builds the main python3 package. What we do next is write the Fedora change proposal. Usually, it's the very first change for the Fedora version, because at the time we are writing it, people are still working on the previous Fedora version. Last but not least, we created the Python 3.11 Copr repository. Copr is a place where everyone from the community can build their packages. Users can then possibly use the repositories to install additional software on their Fedora machines. In addition, Fedora packagers can use it as a playground to test new features and ensure that everything is well integrated. Next slide, please. We use this Copr repository to do several things. One is that we bootstrap the initial package set. We build packages in the proper order, which means we sometimes need to build some packages without tests or documentation first, and later build something else using that package. I would also like to mention that we use only the x86_64 architecture for building in Copr, to make things faster. This brings a risk, because the packages can fail on different architectures and we won't know about it, but at this point, in the very beginning, we just accept this risk. Once we have the initial bootstrap, we start building the rest of the packages.
In the past, we used to add all packages into Copr and kept building them repeatedly until their missing dependencies were ready. This has changed, and now we have a script detecting which packages have their dependencies available and can be built at the moment. Miro will be talking about this in a while. Building packages with an unstable development preview of Python ultimately leads to problems. The most common ones are missing dependencies and Python 3.11 incompatibilities. As Miro already mentioned, since Python isn't backwards compatible, when they remove some features, packages stop working. We are trying to identify such problems and report them to maintainers by opening Bugzillas. Then there are Python 3.11 bugs, since we are working with development versions. So it's not so uncommon that we find some bug in Python itself, which we report, and sometimes we fix it if we can. And there are also some unrelated reasons. Personally, those are the worst failures, when a package fails for a reason unrelated to Python, at least for me. For example, when a package doesn't build with the new glibc, I can't fix it, so we have to communicate with other maintainers, because we need to rebuild the package in Copr so we properly test it with the new Python. And sometimes failures happen only in Copr and not anywhere else, so we also need to somehow work around that. There are about 4,000 packages that we were rebuilding in Copr. Since Rawhide is ever-changing, we need to keep up with it. The good thing is that Copr supports webhooks, so every package we have there is rebuilt whenever there is a new commit in dist-git. And another great feature is that package pull requests also trigger builds in Copr, and those builds are in an isolated environment. So you can use it to check if your package will build with Python 3.11 at the time when you're updating your package.
And if it's not working, it will help us if you fix it right away, so you don't have to wait for us to open the Bugzilla. Yeah, and I want to explain something that we did in the past, and now we are doing it differently, for various reasons. I think we are doing it better, but in order to see the improvement, I need to explain how we used to do it. This slide is called the past. We did this the last time with Python 3.10, and we used it also for Python 3.9, and I think we initially started with 3.8, but I am not sure; it kind of blends together because it's always the same. When we needed to build the packages in dependency order, we used a tool that is kind of made for building RPM packages in a defined order. It's from our friends and colleagues from Software Collections, and it's called RPM List Builder. Basically, you create a YAML file with all the packages in the order in which they are built, and the YAML file supports RPM conditionals like bconds or other kinds of macros. So you can have in the list: build this package without tests, then build another package, and later build the first package with the tests enabled again. This should, in theory, be able to solve cyclic graphs, cyclic build dependencies and build dependency loops. I'm just going to skip to the next slide so you can see what the YAML file usually looks like. It's really small here, but the file itself is like 500 packages, and some of the packages have multiple lines. So here we say that we rebuild Python 3.9 with some macros set to a different value than normal, and later we build it again with all the macros set to their initial values. And what we did is that we created a list that contains about 500 packages, and we called it the first critical set, and it included packages that are really important for us, like setuptools, pytest and stuff like that.
Also, it includes packages that are really important for Fedora Linux, for the distribution, like RPM itself, DNF, Anaconda, FreeIPA and stuff like that, and also for the Fedora infrastructure: Koji, Bodhi, fedpkg, Pungi. This is necessary for us to be able to create a compose later, when we actually ship this change, and also to make sure that even if some of the packages are broken, Fedora packagers who actually run Rawhide will be able to fix them. And it also includes a couple more important packages that a lot of other packages require, like NumPy, SciPy, Jupyter Notebook, but also packages that the other, non-Python world is using, like Boost libraries, or SELinux, or GDB. The thing is that this is not really 500 packages, it's like 20 packages maybe, but they all have a lot of transitive dependencies, and the YAML file needed to have all of them in order. The problem with that order is that it's sequential. Normally the dependency graph is a graph, and if you remove the cycles, it's a tree, but in this case it's a line, or a sequence, which means that if something doesn't build, you're blocked. In practice, you can work around it by rearranging the list so that the stuff that's not fixed yet is moved to the end, and something else is moved in front. So this is, in theory, entirely automated if the list is perfect and everything builds, but the list isn't perfect, because the dependency data changes. Rawhide moves fast, and the data is from the last Python update, and you need to update it because everything changed in the meantime. So in practice, there is a human operator; it used to be me, now it was Tomáš recently, and the human operator sits behind the YAML file, runs the builds in order, and if they are stuck, they need to figure out what to do. It turned out to be really tedious, and some of the packages in the list were just lurking there.
We never knew if they were really important or just remnants from the past, and it was also really, really slow because of the sequential nature, rather than doing it in parallel, approaching it like a graph or a tree. In theory, we could create this kind of list for the entire collection of Fedora Python packages, but that would be ten times longer. So what we did there is we used the brute force method, or shotgun method, where basically you take the list of all the packages that need to be rebuilt, which is quite easy to get with dnf repoquery, and you build them all at once in Copr, which is quite easy because, as Tomáš said, Copr has automation to rebuild Fedora packages. So you define the package to use the Fedora sources, and you build it. You set up all the automation, this was already set up, and you build the packages in waves. Basically, you rebuild everything that wasn't rebuilt yet, you wait while it builds, and some of the packages actually finish, so you have more dependencies available, and then you keep rebuilding the rest until it moves. You're basically kicking it until it moves. Sometimes, if there is a dependency chain which is 20 deep, you will try to rebuild the last package 20 times before you actually succeed. And of course, it takes a while before the build is started, before it runs and figures out it has a missing dependency. So this worked, because submitting the packages was easy. You could do something else while you do that. You could watch your favorite TV show, read a book, or work on something else. But it took a lot of time, and honestly, it was wasting a lot of resources in Copr. What was done, and we still do that now, is that when some of the packages failed, you needed to figure out whether they failed for dependency reasons, and you try again later, or whether there was actually a failure to report. And while everything else was building, we could still triage and report failures.
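The wave approach above can be modeled in a few lines. This toy simulation (not the actual Copr automation) shows why a dependency chain 20 packages deep needs 20 waves, with a lot of wasted submissions along the way.

```python
# Toy model of the "kick it until it moves" wave rebuild: each wave,
# everything not yet rebuilt is resubmitted, and only the packages
# whose dependencies are all rebuilt actually succeed.
def rebuild_in_waves(build_deps):
    done, waves = set(), 0
    while len(done) < len(build_deps):
        ready = {p for p, deps in build_deps.items()
                 if p not in done and deps <= done}
        if not ready:      # nothing buildable: a cycle, it never resolves
            break
        done |= ready
        waves += 1
    return waves

# A chain 20 packages deep needs 20 waves; under the brute-force
# approach the last package is submitted 20 times before it succeeds.
chain = {f"pkg{i}": ({f"pkg{i-1}"} if i else set()) for i in range(20)}
print(rebuild_in_waves(chain))  # 20
```

This also shows the point made later about loops: a cyclic dependency makes `ready` empty, and no number of waves will ever resolve it.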
And Tomáš will talk later about how we did that. One problem here, or let's say a challenge, is that you always need to update the list of packages, because new Python packages appear in Rawhide almost daily, at least weekly. And if there is a new dependency loop, you need to figure it out somehow, because if you do it in waves, the cyclic dependency will never solve itself, even if you do it a million times. So what we switched to now, for Python 3.11, and I really like this approach, especially compared to the Copr thingy, is a tool that I called whatdoibuild. You can find it on my GitHub: whatdoibuild. Basically, it takes a list of packages, which it also generates; you don't have to use repoquery, the script does that for you, or for us. And we resolve the buildroot with the DNF Python API, which means we don't actually initialize a build, try to install everything and then see what was installed, because that's tedious. We only use the dependency solver and the repo metadata to figure out what's going on, what's supposed to be installed. So basically, we get the build dependencies from the repo, and then we get the initial set of packages that are in all the buildroots, and then we tell DNF: hey, install all this into an empty root, but don't actually install it, just tell me what you would install at the end. And it spits out a list of packages, and then we cross-reference this list with the packages that we want to rebuild. For each of the packages, we can say whether we have already rebuilt it in our Copr or not. And if all the packages were already rebuilt, we know we can rebuild this one. If not, we note it down and move on to the next package. So basically this tool, that's why it's called whatdoibuild, tells us what we can build and what's pointless to build.
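A stripped-down sketch of that cross-referencing idea follows. The real whatdoibuild asks the DNF Python API what the buildroot resolution would install; here that resolution result is simulated as a precomputed dict, and all package names except module-build-service (mentioned later in the talk) are illustrative.

```python
# Sketch of the whatdoibuild idea: given what each component's buildroot
# would install, report which components are buildable now, i.e. whose
# buildroot only pulls in Python packages already rebuilt in the Copr.
def classify(buildroot_contents, python_pkgs, rebuilt):
    result = {}
    for component, installed in buildroot_contents.items():
        # Python packages in the buildroot that were not rebuilt yet:
        blockers = (installed & python_pkgs) - rebuilt
        result[component] = ("buildable" if not blockers
                             else f"blocked by {sorted(blockers)}")
    return result

# Simulated DNF resolution results (hypothetical buildroots):
buildroots = {
    "module-build-service": {"python3-requests", "python3-koji", "glibc"},
    "python-six": {"python3-setuptools", "glibc"},
}
python_pkgs = {"python3-requests", "python3-koji", "python3-setuptools"}
rebuilt = {"python3-setuptools", "python3-requests"}
print(classify(buildroots, python_pkgs, rebuilt))
```

Running this reports python-six as buildable and module-build-service as blocked by python3-koji, which mirrors the script output described later.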
It can also detect loops quite easily, because for each package, if it's not ready yet, we can put what blocks it into a dictionary, and then we can recursively walk the dictionaries, and when we detect a cycle, a loop, we can report it. We also need to solve it somehow. So instead of a huge list of how to build packages with various bconds flipped, we only note down the bconds that we think are interesting, or that we know are interesting. So when we find a loop, we find the bconds, or we create a new one, and we note down that this particular package, if it's built without tests, can break the loop. And then we need to resolve the package without tests. So what the script does is it submits a scratch build with the bconds flipped, and when you run it later, it downloads the resulting source RPM, queries it for the BuildRequires, and stores the source RPM on your disk in a cache folder. So every time it figures out whether you can build a package: if it's possible to build it as is, it's reported as possible, and if it's not possible yet, it queries the source RPM to figure out whether it can be built in some modified version. Among other things, this can also produce some interesting statistics. So at the end, the script tells you: hey, this package is the most significant blocker, it blocks the biggest number of packages. Or it can also say: this package blocks a smaller number of packages, but for most of them it is the last blocker, so if you fix this package, it will actually allow 40 more packages to be built. There are problems with this approach. You can't always figure out why you can't build a package: if DNF reports that the buildroot is unresolvable, it will not tell you why. So for some packages, it just says: can't, no idea. And there is this problem with random architecture BuildRequires. If you use repoquery a lot to figure out build dependencies, you probably know this problem.
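The loop detection described above can be sketched as a recursive walk over the "what blocks whom" dictionary. This is an illustrative sketch, not the actual script; the two hypothetical packages form the classic test-dependency loop that a bcond breaks.

```python
# Sketch of detecting a build-dependency loop by recursively walking
# the "what blocks this package" dictionary, as described in the talk.
def find_cycle(blockers, start, path=None):
    path = path or [start]
    for dep in blockers.get(start, ()):
        if dep in path:
            return path + [dep]  # loop found, return it for reporting
        cycle = find_cycle(blockers, dep, path + [dep])
        if cycle:
            return cycle
    return None

# Hypothetical packages: python-a needs python-b to run its tests,
# and python-b needs python-a to run its tests. Building one of them
# with the tests bcond flipped off breaks the loop.
blockers = {"python-a": {"python-b"}, "python-b": {"python-a"}}
print(find_cycle(blockers, "python-a"))  # ['python-a', 'python-b', 'python-a']
```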
And that is that if a package is built on multiple architectures, the actual binary packages that users install on their machines are stored in five or six different repositories, one per architecture, but the source package is stored in just one. And since there is one source package built per architecture, Koji picks just one of them, which turns out to be random, and puts that into the repo. So if you repoquery, which is what the script does to figure out the build dependencies, you don't always get all the information. For Python, this turned out to matter in very few cases; there was one package that we needed to figure out manually. Somebody asks if it would be easier to use networkx to do this. Yeah, we use dictionaries. The good thing is that while building 5,000 packages is huge, figuring out the dependencies between 500 packages is small. You can have that in memory in plain dictionaries, and it works like a charm. So this is the output of the script from yesterday, or the day before. It goes through the packages. This is one of the packages that can't be rebuilt yet: module-build-service. It figures out from the repo that there are 31 build requirements, and in order to resolve the buildroot, it needs 369 packages installed. It goes through the packages and cross-references them with our problem set, and figures out that 98 of those are Python packages from 95 different components. And for each of them, it goes: I've already built this, I've already built this, I've already built this, and blah, blah, until it says: yeah, this one package is not yet rebuilt in the Copr. So we can't really build module-build-service yet; don't even attempt it, it would be futile. And it does this for all the packages. At the end, it will tell you some statistics and also spit out a list of packages that can be built but have not yet been rebuilt.
We store it in Git, so Tomáš and I can work on it together, because I tend to work in the evenings and he tends to work earlier, so we can take shifts, and then we get the data and see what's new, whether there are new packages that can be built or not. Okay, so once we have built all the packages in Copr, or some of them, we need to somehow triage them and report the failures. We try to report all packages that fail to build in Copr, and we report the Bugzillas. We are also trying to explain the cause of the failure. It's not easy and not always possible, but we are doing our best. For similar errors, we're using regexes to locate them in the logs, and we have scripts that help us open multiple prefilled Bugzillas at once. We are trying to automate as much as possible, but there is still a lot of manual work. The goal is to report all broken packages before the next alpha comes out, because it will likely break another batch of packages. This is very difficult, especially with the growing number of Python packages in Fedora and also due to the plenty of changes in Python itself. And because of this, we can't fix all affected packages ourselves, so help from other maintainers with reporting issues upstream or writing patches is crucial. And thank you all for doing this. Next slide, please. Yeah, I wanted to say one more thing here. Sometimes packages don't run any tests, and it turns out that that's not a good idea. Some of the packages actually succeed to build, but are absolutely useless at runtime because they crash immediately. And then we open Bugzillas for the packages that are actually failing to build, and later we realize that ten packages don't build because this other package was built successfully but there were no tests, or the tests were not good enough. So don't just comment out %check to make it build; it's not really helpful. Okay, so on the slide, you can see all the Bugzillas that were assigned to the Python 3.11 tracker.
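The regex-based triage can be sketched like this. It is a minimal illustration, not the team's actual scripts; the error patterns and their assigned causes are assumptions made up for the example.

```python
# Sketch of triaging build logs with regexes so that similar failures
# can be grouped and reported as prefilled Bugzillas. The patterns and
# cause labels here are illustrative, not the actual rules.
import re

KNOWN_ERRORS = {
    r"ModuleNotFoundError: No module named '(\w+)'": "missing dependency",
    r"ImportError: cannot import name '(\w+)'": "removed in Python 3.11",
    r"error: implicit declaration of function": "C extension incompatibility",
}

def triage(log_text):
    """Return (probable cause, matching log line) or ('unknown', None)."""
    for pattern, cause in KNOWN_ERRORS.items():
        match = re.search(pattern, log_text)
        if match:
            return cause, match.group(0)
    return "unknown", None

log = "Traceback ...\nImportError: cannot import name 'Callable'"
print(triage(log))
```

Failures that match a known pattern get a prefilled explanation; everything else still needs the manual work mentioned above.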
It was about 1,200 Bugzillas during the testing of Python 3.11, and I opened most of them. But this number doesn't contain only Python 3.11 problems; it also contains various other Bugzillas for packages that just failed to build from source. Basically, every package that wasn't building was marked as blocking Python 3.11. We are doing this because of some of our scripts, and so we see some statistics. And yeah, in addition, some packages were broken on many levels, so fixing one error would just uncover another; there are various problems sometimes. The timeline looks like this: November, December, and January, every month one alpha release. And every time, we were updating the package in Fedora, so developers could use it for testing. And the good thing about the Copr is that new commits are immediately built, as is the main Python package itself. But we have to do some basic checks, because during the alpha stage, there is no guarantee of binary stability. So sometimes packages' C code for extension modules starts to segfault with the new Python version. If this happened, we had to do a partial re-bootstrap, meaning rebuilding all the arch packages that the affected package depends on. One tricky thing about Copr is that when there are multiple builds with the same name, epoch, version, release, Copr uses the oldest one, likely built with a previous alpha. So instead of bumping releases, we have a script deleting old succeeded builds. And we also triggered a rebuild of all packages in Copr to discover new failures or possible fixes. Copr is significantly faster than it used to be in the past, but rebuilding all the packages still takes about three days; it really depends on whether there's another big project building in it at the same time. Finally, in February, something else happens, and that is that Fedora 36 has branched, which means that Rawhide is Fedora 37.
Up until this point, we can't really change anything in Rawhide, because it would affect Fedora 36, and we only want to do this in Fedora 37. I think we are one of the very few teams that start to work on Fedora 37 before Rawhide is Fedora 37. Until this happened, Rawhide was Fedora 36. And if you report Bugzillas for Fedora 37 when Rawhide is Fedora 36, many maintainers don't consider the Bugzillas really urgent. And it's really hard to explain: we need it now, because we want to test everything and this package is blocking others, when the maintainers have a full plate of other bugs that are actually happening right now in Rawhide and not in some playground Copr. So this point in the Fedora release schedule really helps us get more attention to the Bugzillas. Unfortunately, it's not good enough. Some of the maintainers still don't consider Python 3.11 Bugzillas important, because Python 3.11 is not yet in Fedora, right? It's only in some Copr. We'll get to that later. What happens later, March to May, is we do the same stuff. If there is a new release, we must rebuild everything. If there are new Python incompatibilities, we report new Bugzillas. If there are new breakages in Rawhide, and there are always some breakages in Rawhide, we report those. We had some problems here with the stuff that's in dist-git versus the stuff that's in the repos. Sometimes what happens is that maintainers commit something to git with no intention to build it yet, because it's work in progress, and we can't know that; we trigger a build for every commit. Also, if they commit something to the package in order to build it in their own side tag, we try to rebuild it immediately in Copr, because it's a commit. And sometimes we have problems with that, and we need to figure out whether the problem is temporary or whether we need to report it.
And obviously, there are dependency issues in between all the breakages, because sometimes packages get updated in Rawhide without checking the impact on other packages. So a library is updated, and suddenly five packages that were fine, we can't really rebuild them with the new alphas anymore, because the problem needs to be fixed first. Ideally, we would freeze everything and tell everybody: don't change anything. Then this job would be much easier. But considering it takes a year, it would mean that Fedora could never change anything but the Python version, and that's not realistic. Okay. So we are in May. In May, the first beta was released, and it's an important milestone for us, because at this point, Python entered the feature freeze phase, and we started to plan the Python mass rebuild in Fedora. First, we needed to figure out which critical breakages were fixed; some were ready and some weren't, so we began to focus on them, offered help to their maintainers, and even fixed some of them ourselves. And by the time the second beta was released, we were prepared and could announce that we were starting the mass rebuild. Miro already described the process of resolving package dependencies. We also used the same approach for rebuilding in Koji. We didn't build directly in Rawhide; we were building into a side tag. And maybe you remember, as was mentioned, that we were building only on the x86_64 architecture in Copr. So this is the time when we were discovering the failures on different arches, because Koji builds on all the different architectures. And we also sometimes had to step in and work with maintainers, or fix things ourselves so we could continue, if it was blocking for us. And now we will be talking about merging the side tag. So when we build packages in a side tag, it's because if we don't, then as soon as we update Python, suddenly everything is on fire and nothing installs. So we do it somewhere else.
And every time you do something somewhere else, the original place is still moving, and Rawhide is really rolling, rolling, rolling. So we fight this urge to do it properly, because it would take a month, and during that month, all the packages would be updated again in Rawhide, and we would need to synchronize those things over and over again. So we have another tendency, and that is to keep the side tag open for the bare minimum of time. Ideally, you rebuild everything immediately and merge, merge, merge, and nothing happens. That's not possible, because even with the whatdoibuild approach, it takes a while before everything is rebuilt. And ideally, when this is done and everything is rebuilt, we merge the side tag back to Rawhide, but it's never done. There are always failures, and even if failures are often fixed, new failures pop up somewhere else. So there was never a point, when we did this, when every Python package in Rawhide would successfully build. That's just not realistic. So we need to solve the too-soon/too-late problem. If we merge the side tag too soon, there will be a lot of breakage, because not all the packages would install. If we merge the side tag too late, it would create new problems. So we try to make it around two weeks tops. And there is also the Fedora mass rebuild, and if the Fedora mass rebuild happens before we merge the side tag, then we need to start over. Fortunately, the mass rebuild is like one month or two months after we start this, I think it's one month, so we never needed to keep it open that long. And when we decide to merge the side tag, it's basically like a point of no return, because suddenly Rawhide is using the new Python version, and it's really hard to get back. But during this release cycle, we were considering doing that even after the merge. So it's not really a point of no return, but rather a point of a very, very complicated return. You can always revert stuff.
The funny thing is that the whatdoibuild script was completely broken when we merged the side tag, because it expected that Rawhide has Python 3.10 and you can resolve stuff there and figure out what depends on what. But suddenly we had 3.11 everywhere, also in Rawhide, and it just reported: I can't resolve anything that's yet to be rebuilt. So what we do is that we cache the metadata from the day before the side tag was merged and use the cached metadata, which obviously gets very, very outdated. So whatdoibuild is not that useful after the side tag is merged. In reality, this time the side tag was merged in ten days, which, I've checked, is the same number of days as the last time, so it's probably just normal. And we managed to build 3,700 or so packages, and about 500 packages failed to build. But we went through the list, and it was all libraries that nothing depended on, or isolated clusters. There was nothing that the Fedora installation media would use, so we said it's not important. And I know that for the package maintainers, all of their packages are important, but on the scale of the distro, we identified nothing important. And over the years, we have already broken a lot of important stuff, so we have some experience with this, and we know what's important in this regard. When the side tag is merged, suddenly the Koji buildroot has Python 3.11, but the Rawhide repositories don't; it takes a while before that happens. And the longer this takes, the more complicated it is for package maintainers and people who use Fedora Rawhide in their CI, et cetera, to actually consume this. So we tried to make a compose really fast after we merged the side tag. If you don't know what a compose is, a really, really big simplification is that every night, the repo is created, and when the repo is created, the installation images are also created, and a lot of stuff is checked. And if something doesn't check out, there's no repo.
The compose failed, and the packages in the Rawhide repositories stay the ones from the previous day. So if we create some large disruption, we don't get a compose, and then it's really complex for us to fix it. This time, there was a problem in Anaconda with Python 3.11 and the compose failed, but we managed to fix it immediately, so the next compose happened the next day. So after 11 days from when we started building, people could consume 3.11 in Rawhide. This slide is also mine — I was waiting too long to speak and only now noticed the little M in the corner. What's important is that when we actually put 3.11 into Rawhide, the bugzillas that are still open finally correspond to actual breakage. Before it's broken, maintainers don't have much tendency to fix it. Now it really is broken: your package doesn't install, your package doesn't build, you can't update it, you can't update other packages that depend on your package. So now the response time on the bugzillas gets much better. But still not from everyone. Unfortunately, many maintainers don't consider a Python 3.11 bug a priority until it actually breaks something. Many maintainers, although they are active in Fedora, just don't read Bugzilla, or I don't know what's going on — you never get a reply there. And as Tomáš said, we opened 1,200 bugzillas, so we operate at scale. Unfortunately, the nonresponsive maintainer policy does not really scale, because it assumes that you try to contact the maintainer, talk to them and figure out what's going on, then you write emails somewhere and open another bugzilla for them. You can't really do that for 20 or 30 packages at a time. And unfortunately, despite the intention of the policy, many maintainers do take the nonresponsive maintainer procedure as a personal attack, and it's not really pleasant when people assume ill intention. So what we use now is the fails-to-install policy, which is much easier to handle because it targets packages, not people.
If you run a script that says this package is bad, this package is bad, somehow that's much more acceptable to maintainers than saying this person is bad. You don't do that, right? So this policy targets packages, and if somebody does not reply, it allows us to orphan and eventually retire the packages that were not rebuilt with the new Python version. That is a good thing, because if they were not rebuilt, users would not be able to install them anyway; they just take up space in the repositories and skew statistics, but they are not useful. There is existing tooling to open bugzillas for packages that fail to install; I'm running it regularly for all of Fedora, not just for Python, and I keep it maintained. This really saves us a lot of trouble trying to communicate at scale with people who don't respond. Still, unfortunately, the policy targets bugzillas that are in NEW state, indicating that nobody is looking into them, and then there are reminders. The reminder says: if you set the bugzilla to ASSIGNED state, we'll assume you are working on it and we'll stop bugging you. And what still happens is that people set the bugzilla to ASSIGNED to avoid the reminders, but then nothing happens and you need to figure out what's going on. Fortunately, those are individual packages, not hundreds or dozens. As I said, about 500 packages were left when we merged, and most of them are already fixed at this point. Unfortunately, right after we merged, started opening all the bugzillas and getting some progress, upstream Python decided that they had too many upstream release blockers. They said: if all the blockers are not fixed by, I don't know, the end of the week, they'll delay the release of Python 3.11 by two months. With the yearly release cycle, Python is really tightly coupled to the Fedora release cycle: the Python final is in October, and the Fedora final is in October.
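The core of a fails-to-install check is simple: a package fails to install when some of its requirements are satisfied by nothing in the repo. A minimal sketch of that idea, with invented package names and a simplified provides/requires model (the real tooling uses an actual dependency solver against real repo metadata):

```python
# Sketch: a package "fails to install" when a requirement is satisfied
# by nothing available in the repo. Names and the provides/requires
# shape are invented for illustration.

def fails_to_install(requires, provides):
    """Return {package: missing requirements} for broken packages."""
    available = set(provides)        # package names provide themselves
    for extra in provides.values():
        available.update(extra)      # plus their virtual provides
    broken = {}
    for pkg, reqs in requires.items():
        missing = [r for r in reqs if r not in available]
        if missing:
            broken[pkg] = missing
    return broken

provides = {
    "python3": ["python(abi) = 3.11"],
    "python3-six": [],
    "python3-oldlib": [],
}
requires = {
    "python3-six": ["python(abi) = 3.11"],
    "python3-oldlib": ["python(abi) = 3.10"],  # not rebuilt yet
}

print(fails_to_install(requires, provides))
# only the package still requiring the old ABI shows up as broken
```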
We basically all target the same month. So if Python were released in December — yeah, this year it might have been December — then what the hell would we do? We would not even have a release candidate when Fedora is released, and before the release candidate, there is no binary compatibility guarantee in Python. We would release Fedora 37 with a Python that would not be compatible with any future updates. That's just too bleeding edge. So we considered several scenarios. One of them was to revert immediately to 3.10. But that would be a lot of work and a lot of mess, because Rawhide went from 3.10 to 3.11; we would make it go back to 3.10 and then back to 3.11 again, which means two more mass rebuilds, and it would triple the amount of work. The other possibility was to wait until Rawhide became Fedora 38 and only revert it in Fedora 37, which means Fedora 38 would remain on 3.11. But that would complicate pre-beta testing, because everybody would say: yeah, this is buggy, but we are going to revert the Python version next month, so let's wait. So that was also unfortunate. What's awesome is that the CPython upstream developers pulled it off and all the blockers were fixed in time, so all this revert conversation was actually moot. But it gave us a lot of things to consider, and we will be less nervous when this happens again, I guess. Okay, so betas three, four and five — because we're really still in July. There still wasn't a guarantee of ABI stability: no new features were allowed, but possible reverts could still change it. So before bumping it in Fedora, we upgraded it first in Copr and checked the ABI. In case it had changed, a minimal rebuild of all arched packages — that's about 600 — would be needed. For Python 3.11, it wasn't required. But what did change was the bytecode magic number. Next slide, please.
The pyc magic number is part of the bytecode files. When you run a Python script for the first time, imported modules are compiled into bytecode, so the next time you run the script, it can skip the compilation and runs faster. These bytecode files have a magic number in their headers, which specifies which Python version they were compiled with, and it's not possible to regenerate those files without a package rebuild. So when the fourth beta of Python 3.11 introduced a breaking change, all Python packages had to be rebuilt again. Luckily, a few days after that release, the Fedora 37 mass rebuild happened, so it took care of the rebuild. But still, some packages failed to build. So when release engineering opened the failed-to-build-from-source bugzillas, we set them as blocking the pyc magic number bugzilla, and so it's tracked. It was about 60 packages; some of them are already fixed, so it's about 15 now. Yeah. Now we are in August, and we expect some things to happen, and I hope they will happen on time. One of them is the first release candidate of Python, which was scheduled for Friday — and it's Saturday — which is not uncommon. If a Python release is scheduled for a day, it usually means it will happen that week, or around that week, or maybe that month, or it might happen someday. We hope it will happen in August. There is the Fedora 37 beta release in August, and we really, really want to get a release candidate version of Python into the beta. So if the release doesn't happen before the freeze, we will ask for an exception to make it there. The idea is that when the beta is released, it should have the same Python as the final. Release candidate versions of Python are pretty much the same as final versions, because up until the release candidate, all of the committers to Python can commit almost anything, and after the release candidate, only one person, the release manager, decides which commits go into the release. And they are very careful not to break anything.
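The magic-number check itself is easy to see with the standard library: the first four bytes of a .pyc file are the interpreter's bytecode magic number, and `importlib.util.MAGIC_NUMBER` is what the running interpreter expects. A small sketch (the module content and paths here are just an example):

```python
# Sketch: the first 4 bytes of a .pyc file are the bytecode magic number.
# If it doesn't match the running interpreter, the cache must be
# regenerated — which in RPM terms means rebuilding the package.
import importlib.util
import os
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")  # trivial example module
    # Compile to bytecode with the current interpreter.
    pyc = py_compile.compile(src, cfile=os.path.join(tmp, "mod.pyc"))
    with open(pyc, "rb") as f:
        magic = f.read(4)  # the magic number lives in the header

matches = (magic == importlib.util.MAGIC_NUMBER)
print("magic matches running interpreter:", matches)
```

Since we compiled with the same interpreter we compare against, the magic number matches here; a .pyc produced by a different Python (or by a beta that bumped the number) would not.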
I've never seen a real break land right after the release candidate. So the plan is to ship Fedora 37 beta with at least the first release candidate, or a later release candidate if that happens. As of this morning, 85 packages in Fedora still fail to build or fail to install with Python 3.11, and we'll make sure to either rebuild them or retire them before the final release. Once the beta is released with the new Python, we hopefully get some users — beta testers who will run Fedora with Python 3.11 and report runtime-related problems. We get a lot of problems discovered when the packages are built, and the fails-to-build-from-source checks determine that something is broken. But there is still a huge area of problems that can only be discovered when you actually run Python on your system, not when you build the packages. Usually, some small problems are discovered during beta, but nothing really critical, like "the new version of Python ate my computer" — that has never happened to us. Okay, so we are back in September and October and it's time to finalize this project. We are expecting the last release candidate at the beginning of September, if everything goes according to plan. Afterwards, there are a few weeks before the Fedora 37 final freeze to get the remaining packages either built or retired, as Miro mentioned, because there is no point in shipping broken packages — it's better to retire them and reintroduce them later once they are fixed. Python 3.11 final is planned for the 3rd of October, one day before the Fedora final freeze. So in case it's delayed, we will be asking for freeze exceptions so we can ship it in Fedora, and there will also be obsoletes for the retired packages, to unblock upgrade paths. And maybe you're asking when Python 3.12 will be released: the first alpha is planned for the same day, the 3rd of October.
So once we finish these finalizing steps, we can go back to slide number seven of this presentation and start all over again. It's a never-ending story, but fortunately it takes a year and the steps are quite diverse, so it's not that boring. That was all from us; this is how we do it. If you have some questions, we have five minutes — I'll switch to the questions-and-answers tab. But if you want to see some links or communicate with us, go to Fedora Loves Python. We don't have stickers printed with the new artwork yet, but hopefully we'll get them printed before an actual in-person Fedora conference. Okay, questions and answers. Will Python 3.11 be on schedule? Well, we can't tell before it actually happens, but it looks that way. There is always a risk: there was a huge blocker this week and fortunately it's fixed. We really have our fingers crossed for the release candidate, because shipping the final release of Fedora with a release candidate would probably not be perfect, but it would be okay; shipping the final Fedora release with a beta version, that would be a disaster. Then there is a question: how do we handle maintaining all the Python 3.11 components across Fedora releases? I'm going to leave that to the end because it's complicated. How do we ensure build ordering in Copr, and do we start build batches manually? I think this talk actually answered that question; if not, we can talk about it later in some social room or something. But yes, we start build batches manually when we need to do it for the new Python version — we basically still brute-force it. And when we have a problem, we delete some of the builds and then we use what-do-I-build, which preserves the order. I'm trying to do this fast because we don't have that much time. Alexandra asked a very excellent question: can we test, and do we test, the compose before the side tag is merged? Is it possible?
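"Preserving the order" when rebuilding is, at its core, a topological sort of the build-dependency graph: a package is only built after everything it build-requires on has been rebuilt. A minimal sketch with the standard library's `graphlib` (the package graph here is invented for illustration):

```python
# Sketch: rebuilding in dependency order is a topological sort of the
# build-requires graph. graphlib is in the stdlib since Python 3.9.
# The package graph below is invented for illustration.
from graphlib import TopologicalSorter

# pkg -> set of packages that must be rebuilt before it
build_requires = {
    "python-setuptools": {"python3.11"},
    "python-pip": {"python3.11", "python-setuptools"},
    "python-requests": {"python-pip"},
}

order = list(TopologicalSorter(build_requires).static_order())
print(order)
# the new interpreter first, then the bootstrap chain, then leaf packages
```

In practice, batches of mutually independent packages can also be built in parallel; `TopologicalSorter` supports that too via its `prepare()`/`get_ready()` interface.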
I've been told for three years now that some people might know how to do that, but then they don't remember how they did it the last time. So it's not something we can just do. We ask every release: can somebody run a test compose for this? And then somebody from Fedora QA or Fedora Infra is sometimes able to run a partial compose or something like that, but it's not that easy. I wish it were. I wish I could click a button or run a command, run the compose, see the result and say: this is okay, let's merge. I hope it will happen someday, but not yet. Is it better to ship 3.10, or a beta, rather than an RC? Let's sort it: the best option is to ship the final release; the second best is to ship a release candidate; and then we'd face a decision whether we ship the beta or whether we revert. That would be a very hard decision at this point, and I hope we don't have to make it. It would depend on a lot of factors: when does it happen, when do we need to make the decision, what ABI compatibility do we expect for the next beta, why was the RC not released yet, is the blocker that blocks the RC likely to change the ABI or not? Okay. So I have exactly one minute to explain how we are maintaining all the 3.X components: enthusiastically, cleverly, in a complicated way, and whatnot. We use a tool called ferrypick that allows us to cherry-pick commits between different Fedora packages. When we create a new Python 3.X component, we fork it from the previous one and preserve the git history. The tool is quite clever, and since we keep the spec files similar, cherry-picking is not that hard for it. So when we open a pull request in dist-git for one Python, we can run a command that creates the same change in dist-git for a different component, and then we push there. And we maintain our patches in GitHub as commits, to be able to cherry-pick them easily without dealing with patch files. I hope that answers the question at least partially. Okay, that's it. We are now at the end and out of time. Thank you very much.
Thank you and thank you. Thank you. See you around. See you. Bye-bye.