Hello everyone. So this project started out at last year's Akademy. Harald and I sat down and tried to figure out what we were going to do over the next year in terms of continuous integration, integrating the git master branches of KDE Frameworks and Plasma 5. This is where we are after a year, and we're going to present our work. Hope you like it. Next. Someone needs to do the keyboard handling, man. Well, I just did. Oh, you did? Right. All right, cool.

A couple of years ago, when I got involved in KDE SC packaging, it was more or less 60 or 70 sources. The way KDE releases used to work was that packagers would get the source tarballs a week before the release to prepare all the binary packages. Being the optimistic person that I am, I just thought: hey, that's enough time to get the binary packages up and tested well enough. Turns out, not so much. It took about four days to actually get through all of the packaging, which left about two days for testing, and at times we couldn't do enough QA to catch all of the problems.

This got me thinking about two points. The first is that there is a lot of work to be done in KDE SC packaging: massive piles of source tarballs get released, and it doesn't scale. It wastes time and lots of energy, and this was when we had five or six people working on KDE SC releases in Kubuntu. The second thing I realized was that there was a lot of room for improvement in terms of automation. It was mostly serialized work that could be automated away, so that all of us could go to the beach and enjoy a mojito.

So things kind of had to change, right? Felix Geyer and Philip came up with scripts called Kubuntu Automation, which popped up on Launchpad and massively parallelized all our work. What these scripts did was prepare the source packages, upload them to Launchpad, let them build, and then a human would actually look at the failures, go through them, and fix them. This helped a little bit, but there was still a lot of work to be done before we could just press a button, have a release, and go to the beach with a mojito ready. So the process really had to change, and that's where our CI tooling comes in.

Enter Pangea. Pangea is about 14,000 lines of Ruby code, and it has 44 unit tests. It is spread across 13 servers: three Jenkins servers, nine slaves, and one mobile imaging server for the Plasma phone stuff that you saw yesterday. We distribute packages for Debian and Kubuntu, and for Debian, since Debian does not have a PPA-like structure, we use Amazon S3 to actually distribute the packages themselves.

So, how it works. Can you open KCI perhaps? There's a figure. Sorry. Wait, that one. Right. Which one is it? It's this one. Yes. We have merger jobs. The packaging is stored on git.debian.org, and we have multiple branches over there: branches like kubuntu_unstable, which tracks git master of KDE Frameworks and Plasma, plus kubuntu_vivid_archive and kubuntu_wily_archive. All of these changes need to be merged properly, and that's what the merger jobs do: they merge the appropriate branches amongst each other.
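Below is a minimal sketch of what such a merger job boils down to, assuming a packaging repository that carries both branches. The repository URL is a placeholder, and the real jobs run inside Jenkins with proper credentials and conflict handling.

    #!/usr/bin/env ruby
    # Sketch of a merger job: merge the archive packaging branch into the
    # integration branch that tracks KDE git master. The URL is hypothetical.
    require 'tmpdir'

    REPO   = 'git://anonscm.debian.org/pkg-kde/example.git' # placeholder
    SOURCE = 'kubuntu_wily_archive' # packaging as it is in the Ubuntu archive
    TARGET = 'kubuntu_unstable'     # packaging tracking KDE git master

    def run(*cmd)
      puts "+ #{cmd.join(' ')}"
      system(*cmd) || raise("command failed: #{cmd.join(' ')}")
    end

    Dir.mktmpdir do |dir|
      run('git', 'clone', REPO, dir)
      Dir.chdir(dir) do
        run('git', 'checkout', TARGET)
        # A human only needs to step in when this merge conflicts.
        run('git', 'merge', "origin/#{SOURCE}")
        run('git', 'push', 'origin', TARGET)
      end
    end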
So, concretely, things that go into the archive land in the kubuntu_wily_archive branch on git.debian.org, and they get merged into kubuntu_unstable automatically by these jobs.

Then we have the builder jobs, which prepare the source packages. By preparing source packages I mean they grab git master, or whatever branch you want, of KDE Frameworks, merge the packaging in, and prepare the initial source packages. No binaries are produced at this time; it's purely the source packaging. Then, depending on which CI you are looking at, we have the binary jobs. For the Kubuntu CI, we upload them to Launchpad to build them; for the Debian CI, we have slaves that actually build the binaries. Then we have publishing jobs, which publish them to Amazon S3, or, in the case of the Kubuntu CI, there are QA jobs which run through some tests to make sure the packages are actually semi-usable at the very least. Can you go to the next slide, please? No, that was fine.

I'm going to talk about DCI a little bit now. DCI currently targets Debian unstable; it does not target anything else. It builds for two architectures, amd64 and armhf, and it has a PPA-like repository setup. Can you? What do you want? Tell me what you want. DCI. That's this one. Right. So it has a PPA-like structure. Each of these folders, so to speak, has a separate repository mapped to it. We currently build Qt 5 from git as well, the 5.4 branch. For example, if you just want to use the 5.4 branch of Qt, you can add just the Qt repository; you don't need to add any of the other ones. If you want to use the frameworks repository, you add the frameworks repository and the Qt 5 repository, and so on and so forth. Each of these is actually a separate repository on Amazon S3. Can you go to the next slide? I can't find my mouse. Right.

So the architecture for DCI, as I explained before, is source, binary, publish. For the source and binary stages we currently use schroots to build the packages. The plan is to synergize with KCI and have Docker build the packages, primarily because Docker has an API, instead of me calling schroot commands via Ruby, and APIs are just a nicer way to interact with a system. Let's see. Right. Some statistics: we have 780 Jenkins jobs on DCI, of which 257 are sources, so source packages that DCI builds. And that's about all from me. You can take over now. No, I don't want to. Well, you have to, unfortunately.

So, Rohan talked about the Debian side of things, and I'm going to talk about the Kubuntu stuff. We have the Kubuntu CI, and the Kubuntu CI was the first CI that we built. Therefore its architecture is very wicked and mighty and complicated. But it shouldn't be a surprise that, since it was the first thing we built, it runs most of the core services. As we have seen, the mergers run on the Kubuntu CI, all the QA runs on the Kubuntu CI, and a lot of additional management stuff that happens in the background to facilitate the CI work happens there too. In a way, it is the heart of the operation. Right now it is using Launchpad, as Rohan has mentioned; it is using Launchpad for builds, and it is using it excessively. Launchpad has around 50 build servers, I believe, and at peak hours we would try to get 36 of those just for the Kubuntu CI. It is a massive thing. In fact, it is so massive that we have 600 distinct build jobs for about 200 sources, all of them KDE, of course.
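To give a feel for where a number like 600 comes from: every source gets one job per combination of Ubuntu series and integration type. A toy enumeration, with made-up source names and an assumed naming scheme:

    # Three example sources standing in for the real ~200.
    sources = %w[kcoreaddons kio plasma-framework]
    series  = %w[vivid wily]      # latest stable and upcoming Ubuntu
    types   = %w[unstable stable] # git master vs. stable upstream branches

    jobs = sources.flat_map do |source|
      series.flat_map do |s|
        types.map { |t| "#{s}_#{t}_#{source}" }
      end
    end

    puts jobs      # wily_unstable_kio, vivid_stable_plasma-framework, ...
    puts jobs.size # 3 x 2 x 2 = 12 here; not every real source exists in
                   # every combination, so ~200 sources come out at ~600 jobs.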
Those 600 jobs are split up, as Rohan also mentioned, across two versions of Kubuntu: we always integrate against the latest stable version, which currently is 15.04, and the upcoming version, which currently is 15.10. Additionally, we integrate both git master and, if applicable, a stable branch. So for KDE Applications, right now we would integrate master and Applications 15.08, which is going to be the next stable release.

Next up is the mobile Kubuntu CI. As you heard yesterday, Blue Systems has been working on making a phone kind of software thingy. And obviously, CI systems are very advantageous and very awesome. As you should have heard in Alex Fiestas' talk yesterday, feedback is very important. In fact, feedback is so important that I think we could not have pulled off the Plasma phone stuff had we not had this particular CI. Now, this CI had a bit of a rushed development, in a way. Originally we wanted to use the regular Kubuntu CI, since Launchpad can do ARM builds, except they didn't work. So we did what we always do when we run into a problem: we sit down, run our heads against the wall, and come up with a new solution. So yeah, whiteboards were touched inappropriately, arguments were had, flowcharts were drawn, and at the end of the day we had a completely new delivery pipeline that is sort of based on what the Debian CI does. You create a source package, then you build it on multiple architectures, and once the builds on all architectures are done, additional QA jobs run. I will talk more about the QA later. Basically, you have different QA depending on the architecture and the relevance thereof; on a phone you might want to do different QA than on a desktop. So this CI is very much the future.

I'm not going to show you these Jenkinses, because I think it's not very interesting to look at; it's just a bunch of mostly red things. What's that? Oh, no, I think we don't have a build simple enough to finish in time, unfortunately. So with the mobile CI we have, all in all, eight build hosts: four of those build for 64-bit and four of those build for ARM. And the ARM ones are ridiculously slow, because they are ARM. That's why it takes so long. The 64-bit ones usually finish in two minutes for most frameworks, just raw build time, and the ARM ones usually take twenty minutes to half an hour. KWin, I believe, takes one hour; it's one of the biggest things, along with KIO and plasma-framework, I believe. But yes, if you're interested in this stuff, go take a look at all the CIs on pangea.pub. That's the domain where we host all this stuff, and if you have questions, you can ask us about it.

The architecture: we put a lot of thought into how the architecture of the jobs aligns and how it all comes together into actual packages. So how is this all relevant to KDE? And how is it relevant for a distribution to have a CI? After all, the topic of this talk is continuous package delivery. This is also something that Alex talked about yesterday, so if you have not seen his talk, I would encourage you to watch the video recording of it later on when you're at home. There's CI, which is continuous integration: what you're doing is essentially taking a piece of software, building it, and perhaps performing a series of tests on it to see whether it is good enough. But that in and of itself is just a very automated thing. You have done automated tests.
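In a nutshell, that automated part is no more than this. A hedged sketch with stand-in names and URLs; the real setup is a graph of Jenkins jobs rather than one script:

    # CI boiled down: check out, build, test, turn the result into pass/fail.
    def step(name, *cmd)
      puts "--- #{name}: #{cmd.join(' ')}"
      system(*cmd) || abort("#{name} failed")
    end

    step('checkout',  'git', 'clone', 'git://example.org/somelib.git', 'src')
    step('configure', 'cmake', '-S', 'src', '-B', 'build')
    step('build',     'cmake', '--build', 'build')
    Dir.chdir('build') { step('test', 'ctest', '--output-on-failure') }
    puts 'green'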
There is, of course, also manual testing, and in order to do manual testing you need continuous delivery in various ways. There are two ways to do continuous delivery: there is the testing delivery, and then there is the release delivery. Ideally, they would be the same thing, and that is where we ultimately want to be. Right now we aren't there, but there are plans and discussions; stuff is happening.

So how is it relevant right now? We're doing continuous delivery, and we're also doing CI for the packaging aspect. The obvious, or perhaps not so obvious, thing is: we have build.kde.org, which is our general KDE CI, and it integrates in a very liberal environment, right? We want our builds to succeed and not fail for weird reasons like a missing dependency on some obscure library, where we then have to bug the sysadmins to install that piece of software and whatnot. So it is a generally very liberal environment, whereas distribution package building happens in very strict environments. Most distributions have, in the packaging, a very concise list of which dependencies are needed to build a package. So there's a distribution-level integration aspect to this: integrating things in a packaging sort of environment adds additional value, as you have tighter checks on things. You would detect things like using a new header without looking for whatever provides that header; most of the time one of our CIs is going to trip over it, because the dependency for that header has not explicitly been declared. That is one advantage. Arguably not the biggest one, but it is an advantage nonetheless. I think Martin highlighted a couple of weeks ago that KWin in particular currently sees a lot of development with regards to Wayland, and sometimes dependencies are forgotten in the CMake checks; distribution integration then highlights this and enables much faster integration and iteration on the issue.

The bigger, and in my opinion biggest, advantage right now is that Debian, and by extension Kubuntu, is doing ABI checks on the libraries, unlike build.kde.org, unfortunately. What that means is that when any one of you breaks a library, breaks the binary interface of a library, I will know. I will know because my CI turns red and I get screamed at by my own tooling because the library is broken. And then you will get an angry mail from me saying that your library is broken, and I will tell you to fix it. So far, I think there have been three instances where we were able to prevent binary incompatibility between Frameworks releases thanks to that stuff. Ideally, I want this to go into build.kde.org, because it is really awesome and it is pretty much quintessential, particularly with the monthly releases that we do with Frameworks, that we have tight control and tight verification of our ABI. One of the key offerings of a framework is that we do not break compatibility. So that is very important, it is very awesome, and I hope to talk with Scarlett about this in a BoF.

Another thing, which is again good to have but not quintessential, is the verification of installation paths. For those of you who don't know how complicated packaging works: a single source generates a number of packages. The way we know which files go into which package is that we have a list of all the files that come out at compile time, with each file allocated to one package or another.
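A toy version of that installed-files check follows. In Debian packaging the per-package lists live in *.install files; here they are inlined, and all paths are made up for illustration:

    # Which files each binary package claims (normally *.install files).
    declared = {
      'libkf5example5'    => ['usr/lib/libKF5Example.so.5'],
      'libkf5example-dev' => ['usr/lib/libKF5Example.so',
                              'usr/include/KF5/example.h'],
    }

    # What the build actually installed into the staging directory.
    built = [
      'usr/lib/libKF5Example.so.5',
      'usr/lib/libKF5Example.so',
      'usr/include/KF5/example.h',
      'usr/share/dbus-1/services/org.example.service', # new, unclaimed file
    ]

    claimed   = declared.values.flatten
    missing   = claimed - built # declared but no longer produced by the build
    unclaimed = built - claimed # produced by the build but not in any package

    puts "gone from the build: #{missing.inspect}"  unless missing.empty?
    puts "not in any package: #{unclaimed.inspect}" unless unclaimed.empty?
    exit(1) unless missing.empty? && unclaimed.empty?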
If these file lists are strict enough, our integration will fail when you change a file path. There are two reasons why we want that to happen. First of all, we want to know when you move a file from one repository to another. The other, perhaps more valuable, thing is that we know when you accidentally break something. There has been a case with BluezQt, I believe, recently, where the installation destination was changed and some D-Bus file, I believe, was installed to an incorrect path. The CI turned red because the file was not there anymore, so we looked at it and, yeah, the destination was incorrect, and that got resolved. Any questions about the advantage KDE is getting from this?

Then there's a QA advantage for the distribution, and there's a development advantage for the distribution. The problem with packaging something like KDE is that we have 50-odd frameworks, and each of these frameworks has minimal changes that have to be made to its packaging. You have to do that for every single one, so you would spend at least a day every month doing these changes. I don't know about you, but I don't really fancy spending a day doing weird packaging stuff. Boring, tedious. Continuous integration of all this against our packaging enables us to do atomic changes: something changes in a KDE repository, the packaging gets adjusted the next day, and everything's green again.

In addition to that, we can of course do automated QA; in fact, an excessive amount of automated QA. If you introduce a new CMake dependency, we will know about it, because I have written a nice tool that complains if there's a missing CMake dependency. If it's optional, that is; if it's required, the build would fail anyway. I also have tooling that checks whether all QML runtime dependencies are available. There's tooling that checks that all the files are actually installed into a package; as I was explaining, we have these lists saying where installed files should go in the packaging, and we have checks for that. And perhaps the most important one for us is the check that everything installs, everything can be upgraded, and everything can be removed again. This is a test that takes about one hour to complete, and it's literally installing, I think, 600 packages, upgrading those 600 packages, and then removing them again. Yeah, so that's the cool stuff. I'm done.

So, you do a lot of testing of the builds, checking that the packages contain what you expect or what should be there. To what degree are you testing? Do you have tests that also check the functionality of the software you have packaged, like actually running the application and checking its behavior?

We currently don't. I have been wanting to do it for the last six months, but I haven't found the time for it. There's some provisioning for that, so we can, in theory, do it. All the builds are isolated in Docker containers, so we can easily try to run an application. It's just a matter of sort of setting up the environment, getting a minimal KDE session, as it were, to run, and then having the application start and seeing whether it exits with zero, for example. So it's on the list, but it's not currently done.
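A sketch of the smoke test floated in that answer, assuming xvfb-run is available inside the container; the application name is only an example:

    require 'English'

    app = ARGV.fetch(0, 'kwrite') # hypothetical application under test

    # Start the app against a headless X server.
    pid = Process.spawn('xvfb-run', '-a', app)
    sleep 5 # give it time to either come up or crash

    if Process.waitpid(pid, Process::WNOHANG)
      # Already gone: it crashed or bailed out during startup.
      abort "#{app} exited early: #{$CHILD_STATUS.exitstatus.inspect}"
    end

    # Still running after five seconds; shut it down and call that a pass.
    Process.kill('TERM', pid)
    Process.wait(pid)
    puts "#{app} survived startup"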
Related to that: have we looked at OpenQA? Aleix just mentioned it to me. So I need to go and talk to the openSUSE guys and see what's going on over there. openSUSE is using it, and it basically does automated acceptance testing of the full installation, including the graphical side of things and so on. So it is pretty powerful, and I think Fedora is also starting to use it, so maybe there's something there that would be very easy for you.

Yeah, so one thing that we definitely want to do long term is testing of applications. What I actually want to happen is for someone to figure out how to do application testing through the accessibility layers that we already have, and then add actual functional testing to the software, right? So we could have a QML app that would be tested automatically. So yeah, definitely something to look at.

A short comment on the graphics part: OpenQA needs an X server running, and it runs on virtualized hardware with software rendering at best. The experience I had with OpenQA was that you get errors you have never seen before and will never see afterwards, because KWin simply couldn't start on the OpenQA infrastructure. Because there, yes, you don't have OpenGL, you don't have anything you'd actually expect. And that's the really hard part to rely on. I don't think it makes much sense to test, for example, Plasma's startup there; it just assumes OpenGL, it assumes a working graphics stack. KWin, for example, also detects such virtualized hardware and just bails out on llvmpipe, because it doesn't make sense; without proper OpenGL you just fall back to the XRender path there. Yeah, great, you have an XRender fallback, but that isn't the end of it.

Actually, OpenQA can do real hardware as well; it's possible. Maybe ask the openSUSE people about it, because when Plasma 5 was introduced, I was very concerned that it was going to be stuck in OpenQA for four weeks because of the OpenGL requirements and the Plasma transition that was underway.

I think that, in general, these sorts of problems, they are blockers, but I don't think we should just give up because of them, because testing is just so important. Again, Alex was saying yesterday: feedback is incredibly important, right?

As for what can be shared between the distributions and KDE: there is not much to be shared, to be honest. Distribution packaging will always implement its own build directives, so to speak; they call CMake somehow and they call make install somehow. So there isn't much to be shared there. Other than that, QA-wise, I would like there not to be a lot of overlap between a distribution CI and the KDE CI. Ideally, a lot of the things that we are currently doing on the Kubuntu CI, for example, should really be done on build.kde.org. So there isn't much that can be shared in that regard. But there's a continuous integration BoF at some point, and we will definitely talk about it.

For the ABI checking, is that an existing tool, or something you wrote? It's already available; I think it's maintained by the Linux Foundation. It's called ABI Compliance Checker, ACC for short, and it does exactly that. Currently we are not actually using it, though; we're using a Debian-specific solution that literally dumps all the signatures of all the functions in the library into a file and then compares them against the new version.

What about reproducible builds? What do you mean by that? The project to ensure that the package you compile can be rebuilt in such a way that you get the exact same binary when you compile it again with the same toolchain and so on. Can it be done? I am not sure there's much point in it.
Well, there is a point, because then we know that nobody has tampered with GCC to insert a vector into your binary. Yes. I don't see how that could happen, but should someone present a reasonable argument for why we would want to have that done on the CI system, then it can be done. Maybe the point is not that it's easy to do on the CI system, but that the CI system could help you show it. Oh, yes; in a way, no. The thing is, the packaging on the CI system is always ahead of what is in the distribution, right? That's sort of the idea: you continuously integrate the packaging against what is in KDE upstream. So the packaging would always be different from, and ahead of, what is actually in the distribution.

We have three more minutes for questions. Then we'll get to the follow-up.

What is the status of the other platforms, like Windows, Mac OS X, and the mobile platforms that are coming? That mostly goes into the area of build.kde.org, I believe. What we're doing is exclusive to Linux packaging, in a way. Technically, you could do the stuff we're doing with any operating system and any Linux distribution. It's just that Windows builds, for example, since KDE is essentially the distributor for Windows, would make sense to have done on build.kde.org. Yeah, same for mobile, for that matter. And the actual mobile stuff is a bit tricky anyway.

One more question: in Debian we have lots of tools to check our packages. For example, we have lintian to check for all sorts of packaging issues, we have piuparts to check installation and upgrades, and we have tools to check for obsolete and orphaned files and stuff like that. Do you integrate those? We're currently integrating lintian; we're not integrating the other ones, for various reasons. But yes, lintian is actually also adding value to KDE software: it's essentially a static analyzer of what the package is, and it also catches a lot of the issues that you would have in the upstream software, like your desktop file being incorrect or some stuff like that. As for piuparts, we have our own tooling for that, which has mostly historical reasons because of how Launchpad PPAs work. Ideally, this would go away. So yes, we are aware of it, and we'll look into it.

Okay, we're out of time, so thank you.