So, my name is Frédéric Crozat, I work for SUSE, and I will talk today about distribution building and delivery styles. Can we have one solution that fits every use case? Let's see.

First, a warning, mostly for French people — it might work for Belgian people too. SUSE doesn't do that. But we could. Of course you can drink it; be careful, there is alcohol in it. But we don't do that. On the other hand, we do do that — I hope you took a bottle at the booth.

Anyway, on to distribution delivery styles. Before I forget: if you have questions, don't wait until the end. Raise your hand and ask. I like this to be very interactive. If you think I'm saying something completely crazy, shout.

So, delivery styles. I classify the delivery styles of Linux distributions into three. You could argue there are more or fewer, but roughly I think there are three: rolling; regular, for want of a better term — the standard point-release model; and LTS/enterprise.

In the rolling style, everything is bleeding edge — the latest version of everything. In that bucket you could put openSUSE Tumbleweed, Arch, Gentoo, you name it.

Regular is the kind of distribution people have been used to for, we can now say, a couple of decades or more: releasing every so often — every six months, every nine months, every year, depending on the pace of each project. Usually, distributions following this style upgrade all their components, then work for a specific period of time to stabilize and ship. Here I would put Ubuntu, Fedora, Debian — those are the ones that came to mind.

Then there is the LTS, long-term support, and enterprise style. Here you have what could be seen as a very slow cadence: a release every year, or sometimes even less often than that.
And usually, between releases, as little as possible should move. That's what people typically have in mind with LTS releases. There you could put openSUSE Leap, Ubuntu LTS, SLES, SLED, RHEL — and CentOS, which I forgot.

So now we have three styles, but can a Geeko adapt to everything? (Sorry, the animation is not in a loop, so it's already out of frame.) Can we have one way of working to create distributions that suits all of those styles?

Before jumping into that, some terminology, because I'm going to use it a lot. How many people in the room are not familiar with the openSUSE and SLE releases? About half the room — so let's go through it; I inserted this slide for that.

I will talk a lot about SLE, that's SUSE Linux Enterprise. It can be Server, it can be Desktop, whatever — the enterprise distribution developed by SUSE. I forgot to say that I'm one of the SUSE release managers for SLE.

I will also be talking about openSUSE Factory. Factory is our development repository: basically where the latest version of everything lands on the openSUSE side.

Then we have openSUSE Tumbleweed. Tumbleweed is the openSUSE rolling release. It's done by the openSUSE community, it only uses Factory packages, and it's tested by openQA. People often mix up Factory and Tumbleweed; you will see there is a slight difference, but if you read "Factory" or "Tumbleweed", they are roughly equivalent.

Then we have openSUSE Leap, which is the openSUSE stable release — or LTS release, if you prefer. It combines part of the SLE common code base with packages from Factory: stable, enterprise-ready core packages plus the latest version of all the rest. More about that later.

So, the integration process. Sorry, my drawing skills are not as good as others'. I should have stolen Richard's slide — it's way nicer than mine — but I only saw it this morning.
And I said, nah, I'm not going to redo my slides. So here is roughly the process we use in openSUSE, and in SLE, when we get package changes.

What we call an SR is a submit request. Quick survey: who has already tried the Open Build Service, OBS? Not a lot of people — so, a small advertisement. The Open Build Service is a build service provided by openSUSE where you can build packages not just for SUSE and openSUSE distros, but for almost every distro on the market. You create a single spec file, or a single package description, and you get packages built for Fedora, Arch, Debian, Ubuntu, Mageia — I probably forgot a few. You don't have to deal with "I need to install this distro or that other distro"; it builds everything, for all the distributions, in one place.

Anyway, in OBS, when you make a change, you submit it from the branch you created back to the project you branched from. We call that a submit request. For instance, yesterday after seeing the Matrix talk, I thought: oh, I should fix a few bugs in the Matrix package we have in OBS. So that's exactly what I did. I created a branch, I fixed the systemd services, which were kind of broken, I committed my changes, and then I created a submit request. The submit request went to the devel project, where it gets reviewed and then pushed to the distribution — in our case, Tumbleweed, or Factory.

There it lands in what we call a staging environment. What does that mean? It recreates the distribution — Tumbleweed, essentially — but with the packages modified by the submit request replaced by exactly those new versions. And then it rebuilds, within that distribution, every package that has a build dependency on the modified packages.
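That rebuild step — finding everything that transitively build-depends on the changed packages — can be sketched roughly like this. This is only an illustration of the idea, not OBS code; the package names and the toy dependency graph are invented:

```python
from collections import deque

def rebuild_set(build_deps: dict[str, set[str]], changed: set[str]) -> set[str]:
    """Compute what a staging must rebuild: the changed packages plus
    everything that (transitively) build-depends on them."""
    # Invert the graph: package -> packages that build-depend on it.
    rdeps: dict[str, set[str]] = {}
    for pkg, deps in build_deps.items():
        for dep in deps:
            rdeps.setdefault(dep, set()).add(pkg)

    result = set(changed)
    queue = deque(changed)
    while queue:
        pkg = queue.popleft()
        for consumer in rdeps.get(pkg, ()):
            if consumer not in result:
                result.add(consumer)
                queue.append(consumer)
    return result

# Toy graph: curl and python3 build against openssl; matrix is a leaf.
deps = {
    "curl": {"openssl"},
    "python3": {"openssl"},
    "requests": {"python3"},
    "matrix": set(),
}
print(sorted(rebuild_set(deps, {"openssl"})))
# → ['curl', 'openssl', 'python3', 'requests']
```

A change to a leaf like `matrix` rebuilds only itself, while a core change like `openssl` fans out through the whole graph — which is exactly why core submissions get their own heavyweight stagings.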
Let's take another example, because Matrix is not a core package. Let's take OpenSSL. We have a new version of OpenSSL, and we have to push it into the distro. How can we be sure it's not going to break the entire distro? The developer works on OpenSSL, submits it, it goes to staging, staging rebuilds the distro with just this OpenSSL change — and then it breaks half of the packages. You can see that if you accepted those packages, your distro would be in a very bad state.

So what we do is block the staging, and we tell the packager: either fix your package, because it might be a packaging bug, or — to the people maintaining the packages that would be broken by this OpenSSL change — there is a new OpenSSL waiting to be accepted, fix your damn packages. This way we keep it in a staging. We have tons of stagings in OBS — 150 — and we wait until things get green.

If it's green, at least at the build stage, then we generate an ISO image of the distribution, and on this ISO we run openQA. At this stage we don't run the entire test suite we have for the distro; we test maybe five main cases. Does it boot, does it install, does it run GNOME, because it's the default desktop environment? Does it boot, install, and run a minimal X server with XDM? Does it boot, install, and run with LVM, because, you may have noticed, people tend to break that very often? And a few others — about five things.

Then we see whether it breaks. It could break because there is a real bug introduced by the change — like the example we heard of the FreeIPA maintenance update, with fixes that were not really fixing anything. Or it could be that a UI element changed, a background was updated, et cetera — so it's not a bug in the package,
it's the test case that needs to be updated. We check that. Then there is a review of the changes by humans. At openSUSE we have a four-eyes principle: two people have to review each submit request.

If it goes in — everything is green — then we accept the change, and it gets integrated into the distro. A new image is generated, and that image goes through openQA again, but this time with not just five test cases, but more like a hundred. Or maybe more — 210 for Tumbleweed — which can take half a day or a day; it depends.

What happens at this point if some tests fail? That's the difference. In Tumbleweed, we gate the release of the snapshot to the public. If it's a minor leaf package breaking one small test, the release manager might decide it's good enough and we can ship. But very often we catch regressions very early, and if it's really something that was not caught by the initial staging gate — say, half of the big test suite is failing — then the release manager can decide we stop there. We don't release it. It's still internal to the build service, so we can say, nah, it doesn't pass, we revert it, and we go back to staging.

[Question from the audience.] So the question is: taking the OpenSSL example, it needs fixes from this, this, and these packages — what do we do with that? Our answer is that we aggregate several submit requests into a single staging, and then we check the result. We do the same for a GCC upgrade, whatever. And sometimes we also group submit requests that are unrelated, just to save time, build cycles, and power.

[Question.] The question is about the second round of openQA tests: what does it cover? It covers the entire distribution, which was rebuilt with this change.
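The gating call described above — publish the snapshot, or revert back to staging — can be caricatured in a few lines. This is a toy model: the real decision is human release-manager judgment, and the rule and labels here are invented for illustration:

```python
def release_decision(failed_tests: list[str], leaf_only: bool) -> str:
    """Toy model of the Tumbleweed release gate after the full openQA run.

    Everything green -> publish the snapshot. A single failure confined to
    a minor leaf package -> a judgment call (modeled here as releasing).
    Anything bigger -> the snapshot is never published; the change is
    reverted and goes back through staging.
    """
    if not failed_tests:
        return "release"
    if leaf_only and len(failed_tests) == 1:
        return "release"   # minor leaf breakage: release manager may wave it through
    return "revert"        # core breakage never reaches users

print(release_decision([], leaf_only=True))                    # release
print(release_decision(["leaf_app_test"], leaf_only=True))     # release
print(release_decision(["boot", "install"], leaf_only=False))  # revert
```

The key property this models is that a bad snapshot is cheap: since nothing is public until the gate passes, "revert and retry" costs only build time, never a broken user system.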
And sometimes we accept several stagings at the same time, so we test everything, not just one change — we test the distro as a whole. The point is that openQA and our staging process are really used as an integration testing framework. That's the power of a distribution: we provide a coherent set of packages. It's not just random packages thrown against a wall, keeping only the ones that stick.

So we have this staging system, but we also enforce some rules. This one is mostly for SUSE Linux Enterprise — so it's just for SUSE employees — but it's still interesting to know. We have a policy we call Factory First, in effect since SLE 12 SP3, which is two or three years ago now. What we tell SUSE people is: whenever you can, do your development upstream. Upstream in the upstream project is a given. But very often, development has to be done not at the upstream level, in the project itself, but at the package level — the SLE or openSUSE level. And what we tell people is: please do your work directly on Factory, on Tumbleweed, and then push your changes back to the SLE version.

Why are we doing that? Simply because when we release the next major version of the SUSE Linux Enterprise product, we want to be sure we don't lose anything that was done — because we base our SLE product on Tumbleweed when we branch a new code stream. And I have a very good example. When we worked on SLE 12 a few years back, we branched again from Tumbleweed — Factory, at the time — and we went through all the packages to check whether we were missing anything from our SLE 11 code base in Factory. We found one package which was missing a patch in Factory; it had this patch in SLE 11.
Then we went back a bit further, to SLE 10, then SLE 9 — which means that for roughly ten years, one guy was redoing his work every two or three years, backporting a patch which would have applied without any change directly to the openSUSE package. Some people might say he was making sure he kept his job; I hope not. But we didn't have this enforcement of: please push every change you make into the openSUSE distribution — so that, first, all openSUSE users can use it, test it, report issues, maybe improve it; but also because it will save you a lot of work in the future.

It's a process which is still ongoing internally, and we still have a lot of arguments about it, because, as you know, working upstream — whether "upstream" is the distro or the upstream project — is difficult: you have to get your code reviewed, and sometimes people tell you, no, redo it.

So we have a lot of automated checks for this. In particular, when somebody sends a submit request against the SLE code base, we automatically check: is this change already in openSUSE or not? If it's not, a bot adds a comment on the submit request saying: we could not find your change in openSUSE — please comment, or please fix it. And then sometimes people argue with the bot, which can be quite funny. But very often it has proven very effective: they just fix it, because they were new in the company, or they didn't know they were diverging — by mistake, unknowingly. One thing we discovered is that when you implement quality checks or code-review checks with bots, as much as possible, people are not as pissed off as when a human tells them the same thing directly.
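In spirit, that divergence check might look like the sketch below. Function names, field names, and the message text are all invented for illustration; the real bot works on OBS submit requests:

```python
def factory_first_review(sle_change_patches: set[str],
                         factory_patches: set[str]) -> str:
    """Rough sketch of the Factory First review bot: a change submitted
    to the SLE code base passes silently only if its patches already
    exist in openSUSE Factory; otherwise the bot comments on the
    submit request and asks the submitter to upstream or justify."""
    missing = sorted(sle_change_patches - factory_patches)
    if missing:
        return ("needs-comment: could not find %s in openSUSE Factory; "
                "please submit there first or explain the divergence"
                % ", ".join(missing))
    return "ok"

print(factory_first_review({"fix-cve.patch"}, {"fix-cve.patch", "other.patch"}))
# → ok
```

The point of the comparison being patch-level is that it catches exactly the case from the SLE 11 story: a patch carried in the enterprise code base that never made it to Factory.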
With a bot, they say: ah, I'm going to fix it, because otherwise the bot will keep complaining. If it's a human, you get a mail, then you reply, then you get another mail, and then you spend half an hour arguing. I'm derailing a bit, but — yes, this is done, this is done, this is done.

Thanks to this Factory First policy and everything around it, we make sure that all development is always done in the open. So if you want to follow what's going on in SLE — since we are in the beta phase — you can see it on OBS.

As I said, we have a bot making sure we follow this Factory First policy, and bots doing a lot of other reviews. We have the legal bot, which crawls the entire code base of a package and makes sure there isn't something proprietary hiding in a few header files, that kind of thing. It detects whether there was already a legal review in the past on the same code base, or whether something has changed a lot. For example, recently a community guy redid the entire icon set for YaST, our installer, and it triggered the legal bot a lot: those are SVG files generated by Inkscape, I think, and they contained a lot of template text about the license of the image. After that, you get a human — preferably a legal person — to press a button and say: yes, that's okay, or no, you have to fix it. And sometimes we catch errors in upstream tarballs this way, because this header file, or that file, is not under the proper license compared to what is declared in the license file of the tarball.
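The core of that legal check can be caricatured as below. This is a deliberate oversimplification: the real tool keeps a pattern database and a review history, while this sketch just does naive substring matching on an invented marker list:

```python
# Naive marker list; a real legal-review tool uses a curated pattern database.
KNOWN_LICENSE_MARKERS = ["gpl-2.0", "gpl-3.0", "bsd-3-clause", "proprietary"]

def legal_flags(files: dict[str, str], declared: str) -> list[str]:
    """Caricature of a legal-review bot: flag any file whose embedded
    license text disagrees with the package's declared license, so a
    human (ideally a lawyer) can approve or reject."""
    flagged = []
    for path, text in files.items():
        lower = text.lower()
        for marker in KNOWN_LICENSE_MARKERS:
            if marker in lower and marker != declared:
                flagged.append(f"{path}: mentions {marker}, "
                               f"package declares {declared}")
    return flagged

files = {
    "src/main.c": "/* SPDX-License-Identifier: GPL-2.0 */",
    "icons/logo.svg": "<!-- template license: gpl-3.0 -->",
}
print(legal_flags(files, declared="gpl-2.0"))
# → ['icons/logo.svg: mentions gpl-3.0, package declares gpl-2.0']
```

Note that the bot only *gates* — everything it flags still goes to a human for the actual yes/no, which matches the workflow described in the talk.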
So it's a safeguard not just for SUSE as an enterprise distribution vendor; it's good for the entire upstream community. I mean, Debian is also checking licenses of tarballs, but having something automated, which gates — which blocks things from entering — makes sure we will pester upstream until they fix their tarball.

Question in the back. [Question.] Yes, the bot we use is called Cavil. It's free software; we have released it, and it's available at github.com/openSUSE. And all the contributions that SUSE and openSUSE people make either follow the upstream license, of course, or are usually under the GPL.

So, I talked about the legal bot. We also have a maintenance bot which makes sure that people submit something which at least builds — because if people submit a package which doesn't even build in their home project, that's not great. We have a changelog checker: for instance, we want to make sure there is always a reason why people are touching a package — a bug number, a feature number, that kind of thing. Or my favorite: ensuring that patches are mentioned in the changelog. We have a bot making sure that people don't drop patches without saying why they dropped them. When that happens, the bot declines the submission directly — within five seconds — and as if by magic, two minutes later you usually get a new submission with the changelog fixed. Compare that with a human doing the review: you lose time, and people complain — "but I didn't need to write this changelog entry".

And then we have what we call the leaper bot, which makes sure that when a submission is made against SLE, we check: is it available in Factory? If it's not, then we decide what to do.
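The changelog-checker bot mentioned a moment ago reduces to a simple rule: every patch a submission adds or drops must appear in the accompanying changes entry, or the submission is declined on the spot. A sketch, with invented names and return values:

```python
def patch_changelog_check(old_patches: set[str], new_patches: set[str],
                          changelog_entry: str) -> tuple[str, list[str]]:
    """Sketch of the changelog-checker bot: any patch added or dropped
    by a submission must be mentioned in the .changes entry, otherwise
    the submission is declined immediately."""
    touched = (new_patches - old_patches) | (old_patches - new_patches)
    unmentioned = sorted(p for p in touched if p not in changelog_entry)
    if unmentioned:
        return "declined", unmentioned
    return "accepted", []

# Dropping a patch silently gets an instant decline...
print(patch_changelog_check({"old-fix.patch"}, set(), "- update to 1.2"))
# ...while mentioning the change in the .changes entry passes.
print(patch_changelog_check({"a.patch"}, {"a.patch", "b.patch"},
                            "- add b.patch to fix boo#1100000"))
```

The five-second decline the talk describes is the whole trick: the feedback loop is so fast and impersonal that contributors just resubmit with a fixed changelog instead of arguing.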
Either we wait for a human to say, yes, it's okay that we diverge, or we say no, it's not okay, and we either reject the submission or ask the submitter: please make sure your change is upstreamed to openSUSE.

So, lessons learned — lessons learned over the course of years. As I said: use bots, use bots, use bots for reviews whenever you can. It's really effective; people are less emotional about it. Second, if you reject, give people a reason, not just a rejection. Give them a link to the policy; tell them: yes, we reject because you forgot to say why you dropped this patch, or you added a patch without mentioning why it's there, that kind of thing. Bots are good, but it's always good to have a way to override them. We do that — I shouldn't say very often, but we have to, because otherwise we get other problems, like keeping the schedule. Sometimes you have to bend the rules a bit while still making sure that, in the end, you follow them. And, as I said, empower your contributors: bots help people because they review things very fast and decline things very fast, so people can learn from their mistakes and make better contributions.

So, I talked about our processes. Thanks to those processes, we can do pretty much whatever we want. When I created my slides I forgot about Kubic — for those who didn't attend Richard's talk this morning, it will be available later on the FOSDEM website.

Anyway: openSUSE Tumbleweed, the rolling release, which means it's rolling all the time. It has tons of stagings. On Friday, when I double-checked my slide deck, we had 14 stagings — 14 staging areas where core packages can land and rebuild the entire distro. Core packages meaning GCC, glibc, GTK, Qt, OpenSSL, Python, Ruby, Perl, whatever. Oh, and I forgot the kernel — that one is kind of important.
And then we have 150 stagings for what we call leaf packages — packages which are not core to the distribution. I would tend to say application packages: whatever application you could run, Matrix or others. For those packages we don't rebuild the entire distro, because we know they are leaves: the distro doesn't depend on them, they depend on the distro. So we just rebuild those packages in a staging, make sure they still don't break anything — saving computing power by not rebuilding everything — and make sure they still pass openQA, that they can still be installed, and so forth.

As I said, I took the example of OpenSSL earlier, but when we get a new GCC, a new Python, a new Perl, whatever, it can take time to land those things. The good thing is that because we have 14 stagings in parallel, yes, we are going to block one staging for the latest version of — you name it — OpenSSL, but it doesn't matter. It's not going to block the entire openSUSE Tumbleweed release, because we still have a lot of other stagings, and we make sure we don't slow down the pace of releases because of that. People keep working on it, but in parallel, if we get the latest version of GNOME or KDE, it doesn't matter: we put that in another staging, we check — does it build, does it pass openQA? Yes, yes, yes — then we can ship it.

Some numbers, stolen from Richard's slide deck from a few years back, so I'm not even sure those numbers are accurate anymore — it's faster now. So, a quiet week, and in red a busy week — and I guess busy would be higher these days. There were between three and five releases of Tumbleweed during the week. The entire distro was released three to five times — roughly one release a day keeps the bugs away. In those weeks, between 300 and 550 packages were updated, maybe even more these days.
Twenty to fifty packages added and/or removed from the DVD — because if a package has not been building for six months or so, we drop it from the distribution. It means nobody cares about it: we cannot rebuild it, we cannot ensure it builds with the latest GCC and the entire stack. Of course, we warn people in advance on the development mailing list: this package is not building. We warn the maintainer: this package has not been building for several weeks, please fix it. Please really fix it. You should really, really fix it. You don't want to fix it? Okay, we drop it from the distro, which means we don't release it to users anymore. It's still available in the build service, and it can come back whenever it's fixed, but we don't want to ship packages that don't build. And minor things, like one to three kernel updates during the week — who cares?

So again, this entire process allows us to ship a new version of the distro basically every working day. I would say every day, but people are still supposed to not work sometimes. ["We just shipped a snapshot while you were talking."] Okay — a comment from the audience, from an unknown guy who happens to be the openSUSE chairman: while I was talking, apparently, the openSUSE Tumbleweed team released a snapshot.

SLE. So, Tumbleweed was one extreme, the rolling edge of things. Now the other extreme: SLE 15. Currently we are working on SLE 15 SP1, a service pack. Initially, when we started to work on SLE 15, we took Tumbleweed, we froze it, we took a snapshot of it. And then we kept only a subset of Tumbleweed, because Tumbleweed is 10,000 source packages — I have the numbers here — while SLE is 3,300 source packages. So roughly one third of Tumbleweed is in SLE.
And the thing about what we call a service pack in SLE is that we do not rebuild the entire distro when we do one. We rebuild only the packages which are modified — we don't even rebuild their build dependencies. We import the binaries from SLE 15 SP0 into SLE 15 SP1, and if there are changes on top of that, we layer them on top. Think like an enterprise customer: they don't like change at all. Even a package changing from one service pack to the next just because we rebuilt it worries them, because you don't know what might have happened — a newer GCC might introduce changes somewhere. And when you are certifying an application — a big database, whatever — on top of those packages, you want as little change as possible.

So the way we do it with the Build Service: we have a core distribution, SLE 15 — what we call GA, or SP0 — and when we release a service pack we change a few things on top, but most of the distro doesn't change. When I checked the statistics last week, only about 30% of the distro in SP1 had been branched, modified, and rebuilt. Two thirds of it is the exact same binary packages as SLE 15. And we are on a yearly cadence: we release a service pack every year. So, as I said, SP1 is the SP0 binary packages plus the SP1 changes — and when I say the SP0 packages, that includes all the maintenance updates released up until we release SP1.

Then we have Leap. Leap is a hybrid beast, I should say — something a lot of people have a bit of difficulty understanding, because people tend, of course, to compare us with our red friend. Our red friend has their enterprise distro, and a rebuild of the enterprise distro which is almost exactly the same. I'm not a specialist in it, so I'm trying not to say anything wrong — correct me if I do. The way we do openSUSE Leap is slightly different.
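Before moving on to Leap: the service-pack layering just described boils down to an overlay lookup — serve a package from the newest layer that has it, falling back to the GA/SP0 binaries. A sketch; the package names and version strings are invented for illustration:

```python
def resolve_binary(package: str, layers: list[dict[str, str]]) -> str:
    """Sketch of service-pack layering: look the package up in the
    newest layer first (the SP1 overlay of rebuilt packages), and fall
    back to the imported GA/SP0 base binaries otherwise."""
    for layer in layers:  # ordered newest first
        if package in layer:
            return layer[package]
    raise KeyError(f"{package} not found in any layer")

sp0_base = {
    "glibc": "glibc-2.26-150000.1",
    "openssl": "openssl-1.1.0-150000.1",
}
sp1_overlay = {"openssl": "openssl-1.1.0-150100.3"}  # only what was rebuilt

# glibc is served unchanged from SP0; openssl comes from the SP1 overlay.
print(resolve_binary("glibc", [sp1_overlay, sp0_base]))
print(resolve_binary("openssl", [sp1_overlay, sp0_base]))
```

The "two thirds identical binaries" statistic from the talk falls out of this design directly: anything absent from the overlay is, byte for byte, the SP0 package.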
We are not just handing over all the source packages from SLE — they are available on OBS if you want, so you can do whatever you want with them. The question was: with those sources given to the community, would it make sense to just rebuild them and get a very small subset of a distro, with roughly a third of the packages Tumbleweed has? Or instead use that as a core — stable, maintained by a company that takes care of all the security updates, bug fixes, et cetera — and add on top of it all the packages the company is not interested in shipping and supporting?

That's what Leap does. In Leap, there is a core set of packages inherited from SLE, and on top of that, all the latest-and-greatest packages from Tumbleweed, rebuilt against this core. It gives the community something that is stable in the sense that it will be maintained — you don't need a lot of community people focusing on the kernel and the other core bits, that's handled by SUSE — while for the rest, KDE, or LXDE, or whatever desktop environment or other package you want, people just have to contribute it to Tumbleweed and then ask the openSUSE Leap release manager: please grab those packages into Leap. And they will be available in Leap.

In a sense, Leap is released at the same time as an SLE service pack: Leap 15.0 was released, mostly, at the same time as SLE 15, and 15.1 will be released at the same time as SLE 15 SP1. And as I said, we add Tumbleweed packages when they are not available in SLE 15. One slight difference — because our openSUSE Leap release manager likes a challenge — is that he wants to be sure that Leap can be rebuilt by anybody, all the time. Those extra layers of service pack on top of service pack are fine for an enterprise distribution,
but they might be seen as overkill for a community distribution. So in Leap, all the packages are rebuilt all the time — which sometimes causes challenges for the openSUSE Leap release manager, because he discovers that, yes, there is a maintenance update which causes some packages to fail to rebuild, since on the SLE side we don't rebuild all the time. He notices things before we notice them. But in the end, it works quite nicely. And it's also very important — I forgot — because we take part in the reproducible-builds effort. We have people working on making sure that what we build, people can reproduce on their side. So it's very important to always be able to rebuild the distro and get exactly the same thing that was built in the first place.

So, I think this is my only graph in the slides — to give an idea of where things come from in Leap. In 15.0, because it was based on SLE 15, you have roughly two thirds of the packages coming from Tumbleweed and one third — the core — coming from SLE. For 15.1, because it's a stable distro, it's up to the community to decide what they want in the new version. Do they want the latest and greatest of everything — which means the maintenance burden, the bug fixing, and the security handling are on their side? Or do they want to keep using the resources of SUSE and inherit most things from the SLE side? Which means most of the packages in Leap 15.1 come from: SLE 15 SP0, basically unchanged; SLE 15 SP1, for the few packages which were not shipped in SP0 and are now shipped in SP1, where it's relevant for Leap to grab them; and a good chunk of packages that are exactly the same as the ones shipped in Leap 15.0. Which is what it means to be an LTS distro: people are not looking for the latest version of everything there. If they want that, we have Tumbleweed for them.
Yeah, so this slide is exactly the same thing in written form, so I'm going to skip it.

In short, with the Open Build Service, with all our processes — the staging process, openQA — we are able to deliver any style of distro we want. We can even now create Kubic, a distribution tailored for containers and Kubernetes. And currently there is one flavor of Kubic, which is Tumbleweed-based; if some people really wanted to, they could do a Leap version of Kubic. It doesn't really matter, because these tools are so versatile that you can basically create whatever you want. I didn't give an example of doing a regular-style distribution, simply because it would just be Tumbleweed, but very slow. And we don't do that anymore in openSUSE: we either go very fast or we go very conservative. That's it.

Questions? [Question.] So the question is: is there some automation to inform people in the distribution that there is a new version of some package, so you could grab it, package it, test it? No. That was kind of Freshmeat's job — I think it's dead these days. What we do have in the build service are source services, which can automatically fetch new versions of packages from a URL specified in the package, or do a git checkout, or grab the tarball. But we don't run that automatically; it's still up to maintainers to take care of it.

[Question.] So the question is: I talked about build dependencies — what about runtime dependencies? Runtime dependencies you are going to see break very quickly in the openQA tests, because they won't be met. And we have a consistency check as part of the few tests we run in staging, so you will see that the distro you are about to release is not consistent: it's asking for a version of something which is not even available. [Question.] So the question is: how do you fix it — do you copy the package over, or do you just not accept the submission?
I would say it depends. Sometimes we see that we need a new version of something, and then we put the submission on hold until the fix is available. This happens all the time with, I don't know, a new version of GNOME or a new version of KDE, which needs a new version of libfoo, a new version of libbar, whatever. The good thing is that we can do this not at the last minute, but prepare it during the development cycle of those projects, because they are packaged as they go. That's why, for instance, a GNOME release is usually available in Tumbleweed within the week of the upstream release — which is not bad.

[Question:] "Just to make sure I understood the build dependency system: in the .0 release, everything is built with the same toolchain; in the .1 and .2 releases, even if the compiler changes, only the packages that changed are compiled with the new one, and the unchanged ones remain built with the old one?" So the question is: in a major code base, we build everything with the same compiler — what do we do when we move to the next service pack? First of all, we don't rev up the compiler: we backport changes to it instead. We do provide the newer compiler version in addition, for customers who want the latest and greatest, but the distribution itself sticks to the same GCC. That is exactly the point: we want to make sure we don't break things by upgrading the compiler.

And I'm out of time for questions — sorry. If you have questions, we can talk outside. Thank you.