There's a flash drive with all the files that you will require. I also have a qcow2 image for QEMU/KVM that you can use if you don't want to mess up your computer, because otherwise you need to install about six or seven packages on Fedora. So, yeah.

The goal that we are trying to achieve across Red Hat is to share as much tooling as possible with Fedora. That means open sourcing anything that we can, because Fedora can't take it otherwise; they have a rule that they cannot use anything which is not open source. It should also be published on an open source platform; GitHub is not fully open source as far as I know, so it should be on Pagure and so on. The list of upstream projects that I selected is not the complete one; I think we have about, let's say, five times more. But these are the ones that I think matter as far as a Linux distribution goes, because we are also building some JBoss products and so on.

Koji. Who hasn't heard about Koji? No, I don't believe it. So, Koji is the build system that we are using at Red Hat and also in the Fedora project. I don't know about any other bigger projects, maybe you guys know, but I don't think that CentOS is using it. If I told you that there is nothing we publish to customers that wasn't built in Koji, I think I would be right. If you have the presentation, all of these should be links which will take you to the project site or something else. So, this is how it looks. I don't want to go through the whole of Koji, because that would take two hours. Basically, think of it as a build system, and it's also a tool which enables you to execute commands remotely. Let's say that you would like to build your own distribution on all architectures, and architectures means not just the one in, let's say, your laptop. So, you might use Koji to run commands on different hosts based on inputs; it could be anything which is registered as a host within the system. The remote execution is called, basically, runroot. So, if you are interested in build systems, if you think that you might need one for your company, definitely look at the project. I'm not using it in this demo, because using the build system would require having all the packages needed for the rebuild inside the build system; that would take several gigabytes, and my tarball has 150 megs. So, my source of packages is actually a repository, not the build system. The good thing about Koji is that everything you build in Koji should be reproducible. There are time events in its history, and you can be sure that if you resubmit a build, the build root, the set of packages used for building the package, will be exactly the same. Also, within the company, we are using Koji as storage, linking all the packages that have been built and stored in Koji to, let's say, composes, repositories, and so on. If you have any questions, ask, but I don't want to spend much time going through each project.

Pungi. This will be the main tool in the demo that I would like to show you. I would call it a compose tool. If nobody knows what a compose is, I would describe it as a set of repositories, images, ISOs, basically deliverables, somehow connected through metadata. You can process the metadata and iterate over all of these things that you are building. It could be RPMs, ISOs, Docker images, and so on.
ProductMD. So, I mentioned metadata, and you don't want to parse those on your own. We have a library, productmd, for parsing all of the metadata that is used in a compose; don't try to write something new, it never works. All of these projects are hosted either on GitHub, so this one, for example, is on GitHub, and some of them are on Pagure, like Pungi. Also, we would be really happy if you would try the tooling, and when it fails for some reason, just don't throw it away. Report an issue: hey, I was running this and it failed, traceback, whatever. Just submit whatever you can; it really helps. Because we are using these tools in a very specific way, and we are probably not going to change the way we use them, you guys will have different use cases and it might just not work for you.

PDC. Has anybody heard of PDC? Oh, awesome, awesome, more than I thought. PDC is the Product Definition Center. Again, release engineering uses the tooling in a different way than quality assurance and so on, so I will talk on behalf of release engineering. I've told you that we have a compose, which is a set of repositories, ISO files, Docker images, bootable images, and we have metadata, but it's just some JSON sitting locally. It does have an API through productmd, the library for parsing the metadata; however, it's usually stored on a destination which is not accessible to everyone. And PDC, for me at least, is a service which stores all of this metadata. You can filter it, it has a big API that you can use, and it's very comfortable to use. Lubos actually told me that the Fedora project has a testing instance. I don't think it's fully populated, but it might give you some idea of how it could look.

How we track things in PDC is that you have a product, for example Fedora; then you have a release, like Fedora 24; and this release links composes, which are sets of repositories; and each compose has subsections, RPMs, images, and you can basically see everything which was built inside the compose in PDC. It could be used for CI: CI can just listen, hey, a new compose was imported, let me see if there was a Docker image; yes, there was a Docker image; what's the metadata for the Docker image; and it can decide whether it wants to test it or not. So, this is quite useful, actually. Oh, sorry guys, wrong link, wrong link. That's not all of it, definitely not. So, let me go from here. Will it work? Click, PDC. No? Okay, so, HTML5 is beautiful, but I'm not sure if it's always useful. Oh, back. PDC.

So, basically, we have a compose, a set of RPMs and images; we have the metadata; we imported it into PDC; and now we would like to publish it to customers. It doesn't have to be an external customer, it can be a team within your company. PDC can also store mappings, saying, let's say, that all the RPMs from this section of the compose go to this location. So, you can store all of these mappings, and then you can use, for example, Pulp to distribute them. Internally at Red Hat, we have a layer in between: there is another layer of tooling, basically parsing the data from PDC, and then we actually create some sort of transformation on top of the compose and push it to the individual delivery channels. They can be FTP, RHN, so many other services, you name it. And Pulp is basically the repository manager that we are using; it's also being used in one of our products.
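Coming back to productmd for a second: reading a compose looks roughly like this. A minimal sketch, assuming productmd is installed; the compose path is hypothetical and the attribute names are from memory, so check the productmd documentation before relying on them.

```python
# Minimal sketch: read compose metadata with productmd.
# The compose path is hypothetical.
import productmd.compose

compose = productmd.compose.Compose("/mnt/composes/DevConf-1.0-20160306.n.0")

# High-level compose info: release name and version.
print(compose.info.release.name, compose.info.release.version)

# Iterate over all RPMs: variant -> arch -> SRPM -> binary RPMs.
for variant, arches in compose.rpms.rpms.items():
    for arch, srpms in arches.items():
        for srpm_nevra, rpms in srpms.items():
            print(variant, arch, srpm_nevra, list(rpms))

# Iterate over produced images (ISOs and friends).
for variant, arches in compose.images.images.items():
    for arch, images in arches.items():
        for image in images:
            print(variant, arch, image.path, image.checksums)
```

This is exactly the kind of iteration a CI system listening for new composes would do before deciding what to test.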
So, about Pulp: I believe that Fedora might actually consider using it as well. I don't think they have decided; I'm just saying that it's out there, it's open sourced, and it's actually originally from Red Hat. Feel free to use it. I'm not using it in this demo; that would be so much overkill. And I think that's all about it. Coffee break? No. I think that we went through that quite fast, actually. So, if anybody would like a coffee... I would like to move on to the more technical part with the demo. It might actually take some time to get the tarballs with the data. So, guys, if anybody wants coffee, I think it's the right time; if you are all set, it's even better.

So, does everybody have an internet connection? And how many of you actually have Fedora? Because I have a virtual... awesome, awesome. So, those of you who want to try it: there is a 50 megabyte tarball, you can just download it. You will have to install packages like yum-utils and createrepo; these are basically used for creating repositories, and the tooling that is located in the tarball requires them. If you don't want to mess up your computer, feel free to use the qcow2 image. It's working, I was testing it today. It's on my flash drive, or here as well, so just ask for it. So, let me put it on the screen, actually. Are you downloading it? Sure, sure, I will wait, no worries. Yeah, the flash drive; I actually have the tarball here. If you want to borrow it, it will be much faster; just pass it over. The presentation is called, I believe, Building Distribution the Red Hat Way, while the data is called devconf-2016. So, yeah, have fun.

I was thinking of a small package set, something like 50 packages, where the depsolving would be complete. And I was thinking of a distribution which consists of three different shells: Bash, the Korn shell (ksh), and zsh, the Z shell. So, there is a setup which allows you to build a distribution including all of the dependencies. It's not going to be a bootable distribution, definitely, because that would require Anaconda and a few other things; however, if you iterate on the configs, you can basically build it up into a bootable distribution. It does have ISO files, though. So, yeah. If somebody is still looking for the tarballs, come on up; it's here. I know, I know. So, wait for the flash drive; I'll wait a few more minutes.

Meanwhile, I can just get my desktop ready. It would maybe be cool to show you our Pagure, what sort of projects we have there, as far as release engineering goes. Pagure is basically an open source version of, I would say, a source control management web UI system, something like GitHub or SourceForge. It's quite new, actually. The developers are from Red Hat, and they are super responsive if you have some issue or something doesn't work. It was really crappy at the beginning, but each issue that I filed was solved in one day; it was a really awesome experience. So, the tool that we are going to use is called Pungi. It's the compose tool. It's located on Pagure, under pungi; feel free to check it out. We have documentation, sort of; it's more of an RST with example configuration. This one is for Fedora, I would say. Yeah, definitely. Oh, here, cool. So, one thing: Pungi itself can actually gather data from multiple locations. One of them can be the build system that I was talking about, Koji. As I said, Koji would be overhead here; it would be a few additional gigabytes in the tarball.
So, I decided to use just a repository with a set of RPMs; it should be part of the tarball, as far as I remember. Otherwise, you can easily use Koji. If you are using just the local environment, you are actually lacking runroot, so you can only run it on the native architecture of your host. But for this purpose it will be sufficient, I would say. So, let me sort out the terminal; you'll see it soon. Okay. You guys will have to imagine the very first character, because it's not fully visible. Yeah, but then it won't be full screen. Yeah, that could work as well.

So, this is the readme file. Oh. I know. In the install requirements you can basically see that it's createrepo_c, pykickstart, syslinux, and isomd5sum, because we are producing ISO files for DVDs. Pungi can create both bootable and non-bootable images. And kobo, here, is a set of wrappers; we are using so many functions from it, and it's also open sourced. So, as I said, what I would like to achieve is to create some sort of distribution with ISO files. It will have just the shells plus dependencies. You have full control over the config files; you can add some more packages, or you can generate some fake ones. Yes, I was thinking whether to generate a thousand fake packages from a dictionary or just take something which makes sense; for me, shells were nice and small. Everybody set? Also, I believe the flash drive also has a bootable image; if you are using the bootable image, you just boot into Fedora, which already has everything under /root.

So, what I would like to go through, and what I believe is the main goal, is to show you how to use it and tell you all the features, everything the compose, or Pungi, can actually generate, because it's not just RPMs, it's not just ISOs, it's so much more. And I believe it can be useful for people who are building a distribution and might not even think of everything that should be inside. This tooling is currently being used to build Red Hat Enterprise Linux 5, 6, and 7, and also many layered products; a layered product is something that you install on top of another product. So, we are all set for layered products as well. Of course, we are stuck on different versions of Pungi across the company, because we can't change it during a product's lifecycle, so the code base is a little bit different, but this is the newest one.

If you have any problems with the tarball, just let me know; I have it all locally, and it takes a few seconds to generate a new one with fixes. Also, the tarball is basically a git repository, so if you feel like you messed something up, just reset it to HEAD and it should be clean; you can start again. Those of you who already have the tarball extracted, feel free to read the readme file. You might actually want to go to the conf directory, which hosts basically all of the configuration. As I said, it should be... oh. Yep. So, the default paths are /root/devconf, because I created the tarball for the qcow2 image, which only has the root user. So, the first thing that you will have to do is to change these two paths to the location where your directory is, and that should be it, voila.

So, we are still waiting for a few guys. Let me quickly go through the config file; I will go through it once more when everybody's set up. So, this is probably the most minimal config that you can come up with. I called the release DevConf Linux; I think it matches the purpose. release_short: this is basically the string that will appear in the ISO file names.
Also, the directory structure will use DevConf inside the names. is_layered: this is basically whether you are building a product which is supposed to sit on bare metal, or a product which is supposed to sit on top of another product. bootable: false; we can't really boot Bash. The comps file: this is an XML which basically sets up the building blocks for the whole compose. You can say: my product is going to consist of a variant called, let's say, Fedora Workstation, and Fedora Workstation will consist of several groups; let's call them core, network utils, network services, web server. And web server is going to consist of httpd and a few others, and you can iterate like this. This is all defined in a single XML file, or multiple ones; we call it comps. It also ends up in the repodata as package groups, basically. It can come either from SCM or from a local file. I've just found another bug, which is kind of... it will fail if you use just a path instead of the SCM dictionary, so please use git in this case. We'll fix it; there is a guy who will probably fix it, he's responsible for it.

The variants file. So, comps is just grouping sets of packages: let's call them web server, SQL server, core, I'm just making up names, network services, and so on. And the variants file is actually grouping those into variants. Let's say the variant Server will consist of, say, 24 comps groups. So, the variants file defines a more abstract layer above the comps groups.

sigkeys. If you are building RPMs for your company, or if you are working on some product, you might know that these RPMs are signed with some key which identifies you as the vendor. sigkeys is a set of keys with priority from left to right. None means that you allow unsigned RPMs inside your distribution. You shouldn't do that in production; it should always be signed, ideally. If you are doing something internal only, it can be unsigned.

Then, architectures. As I said, we are not using the build system, Koji, which means that we can run only on the native architecture; basically, this is going to be the host that I'm going to use, so it's Intel. We could also do 32-bit; I decided not to. If you are doing both 32-bit and 64-bit, then you can decide to use multilib. Does everybody know what multilib means as far as distributions go? It means shipping RPMs of multiple architectures together; usually we call them primary and secondary. On 64-bit it would mean that, for example, you are shipping both the 32-bit and the 64-bit Firefox, or some libraries, because, as you may remember, back in the days when you were supposed to use Flash Player on Linux, it was only 32-bit, so all the libraries it needed on your 64-bit system had to be there in 32-bit as well. We cover these scenarios too. Java is the typical case: you might actually need to run a 32-bit Java on a 64-bit machine. So, if you decide to use multilib, it will basically put these 32-bit packages into the 64-bit repositories. I left it empty.

runroot: this is the remote execution of commands. Let's say that you would like to run make on a non-native architecture. You would call something like koji runroot, where -p means a package which should be installed, so for make we would probably need, I think, make and some utils, and -m would be some filesystem that you might need mounted inside the runroot. It's all inside a chroot; something like the sketch below.
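A hypothetical invocation, just to give the flavor; the tag, architecture, and paths are made up, and the runroot plugin must be enabled on your Koji instance, so check koji runroot --help for the real option spelling:

```
# Sketch of a runroot call: run a command in a throwaway chroot
# on a builder of the requested architecture. All names hypothetical.
koji runroot \
    --package=make \
    --mount=/mnt/koji \
    f24-build ppc64 \
    'make -C /mnt/koji/work/mysources && echo done'
```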
So, you might actually need /mnt/koji mounted. And then there is use_shell: you can actually pass a string instead of, oh sorry, instead of a list, use shell, and then you would just execute whatever you need, like, I don't know, echo hello. It would just be executed, and you could collect the results afterwards; they would probably be stored somewhere on that /mnt/koji mount. So, this is how we are using remote execution. The best approach is when everything is a task in the build system: whether you are creating a repository or building an ISO file, it should be a task. If it's not a task, then we use runroot. I think that tasks are much more elegant, because they have a defined API and it's not something that you write on your own. So, that's runroot.

pkgset_source. It should be either koji, the build system, or repos, which means that you have some storage with all the RPMs and you are reading the packages from there. Then pkgset_repos: this is basically the source for each individual architecture. You can see that x86_64 is going to be read from my home directory. Inside this test data, sorry, you can't see it; inside this test data, there is actually just a directory with RPMs and repodata, and we are using it as the source for our distribution. You can set multiple ones; just read the documentation, everything is described in there. These next options are specific to using the build system, and we are not, so let's just keep them commented out.

gather_source: comps. Comps means that we are actually reading the building blocks of our distribution from the comps.xml file. It can as well be none, and then you can use the section below, called additional_packages, and just say: hey, the Server variant consists of these and these and these packages, and you can completely skip comps groups. That allows you, I think, to be more flexible, but it's more hackish, because you can create something that nobody else will be able to read, several conditions deep. So, I would always recommend using a valid file like XML, which can be validated and everything, and using the sections below it, which I will show you, only for corner cases, when you need some exception that is not achievable through comps files. I'm not sure if I should show you our internal configuration files, but it gets messy as you join the dark side.

createrepo_c: this is whether you are using the old createrepo or the new createrepo_c. I believe the old createrepo was written in Python; the performance difference is huge, so we are using createrepo_c. buildinstall_method: this is basically... let's say you can tweak Anaconda to be product specific... no, sorry, that's the product image. buildinstall is basically the phase which generates a bootable image. The createiso section: these are really optional; let's not talk about them here, maybe later, because our demo doesn't cover that and it would be so much additional logic.

multilib_blacklist. I've told you that you might decide that you would like to ship 32-bit packages on a 64-bit system. The decision about which packages become multilib sits in some logic, and we have several approaches; before we get to those, the config as walked through so far looks roughly like the sketch below.
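To recap, here is roughly what the minimal config we've walked through looks like. Pungi configs are Python-syntax files; this is a sketch from memory, with hypothetical paths, so double-check the option names against the Pungi documentation:

```python
# Minimal sketch of the demo's Pungi config (hypothetical paths).
release_name = "DevConf Linux"
release_short = "DevConf"        # shows up in ISO names and compose IDs
release_version = "1.0"
release_is_layered = False       # bare metal, not on top of another product

bootable = False                 # we can't really boot Bash

comps_file = {                   # building blocks, fetched via SCM (git)
    "scm": "git",
    "repo": "file:///root/devconf-2016/conf",
    "file": "comps.xml",
}
variants_file = "variants.xml"   # reportedly can also be an SCM dict

sigkeys = [None]                 # None = allow unsigned RPMs (demo only!)
arches = ["x86_64"]

pkgset_source = "repos"          # read RPMs from a repo, not from Koji
pkgset_repos = {
    "x86_64": ["/root/devconf-2016/test-data/repo"],
}

gather_source = "comps"
gather_method = "deps"           # pull in dependencies automatically
greedy_method = "build"          # plus other RPMs from the same SRPM

createrepo_c = True              # much faster than the old createrepo
```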
One approach is that you decide to keep only libraries multilib. Or you can say you would like to have everything multilib by default. Or you might just set some exceptions that should be multilib, and the rest will be 64-bit only, let's say. So, you have a blacklist for it, and you have a whitelist, and I think they are pretty much clear: you can either say this package shouldn't be multilib, or this package should be multilib.

additional_packages: this is a section that you can basically use either on top of comps or without it. You can decide to add some specific packages that are not listed in the comps groups into some sections of your compose. filter_packages is the exact opposite: you can just say that these packages shouldn't be in these sections. And yeah, we won't really use product id for our product; it doesn't make sense unless you are actually building Docker images.

So, I would say let's launch the script and go through all of the outputs that the script generated; then probably most of this configuration file will make sense. If you go to the devconf-2016 directory, there is a readme. Always read the readme before running anything. You should have these dependencies installed; they might be called differently if you are using, let's say, RHEL 7 or CentOS or some other distribution. If you are, then my recommendation is to iterate through the executions and install whatever is missing until it runs. And let's just run the script.

Yeah, a good thing would probably also be to show you the conf files that I've mentioned. conf, conf. So, I basically told you that we are building a product which consists of shells plus dependencies. Here is the definition of the group shells. You can have multiple groups; for example, if you would like to create a web server, you would have a similar group, but with different packages. You are supposed to list only the base packages that really matter as far as your distribution goes; all the dependencies will be gathered automatically. And then the variants file, which basically says: hey, we have a variant called Core, just a coincidence, and it consists of these architectures. Each variant can be supported on different architectures, so you can have Server, which is for PPC, and then you can have Client, which is only for x86_64, if that makes sense to you. And our Core variant consists of a single group called shells. You can also create a variant inside a variant, and then there is something called an integrated layered product, which can be a single thing that you can install on top of basically anything; it doesn't have to be variant specific, okay? So, basically, what this says is that our compose is going to have the variant Core, and the variant Core is going to consist of the shells which are defined in the comps group, right? Both files are sketched below.

So, let's actually see how it looks. I'll use the browser, because I believe it will be much more readable: devconf-2016, trees. Here we go. So, basically, if your script ran right, I'm not sure if you guys executed it, hopefully yes, and it finished, you should have a structure, trees, under the devconf directory, and it will contain one or more composes. You can run it as many times as you want; it will just iterate: it will create a new directory, and it will create a new symlink pointing to the latest one, okay?
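For reference, the two files just described might look roughly like this; treat it as an illustrative sketch of the comps and variants formats rather than a verbatim copy of the demo files:

```
<!-- comps.xml sketch: one group with the three shells -->
<comps>
  <group>
    <id>shells</id>
    <name>Shells</name>
    <default>true</default>
    <uservisible>true</uservisible>
    <packagelist>
      <packagereq type="mandatory">bash</packagereq>
      <packagereq type="mandatory">ksh</packagereq>
      <packagereq type="mandatory">zsh</packagereq>
    </packagelist>
  </group>
</comps>

<!-- variants.xml sketch: a single Core variant on x86_64 -->
<variants>
  <variant id="Core" name="Core" type="variant">
    <arches>
      <arch>x86_64</arch>
    </arches>
    <groups>
      <group default="true">shells</group>
    </groups>
  </variant>
</variants>
```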
So, each compose gets some sort of compose ID, which is created by merging the product, or release short, which is defined in the Pungi config that you've seen, plus the timestamp, right? And the number at the end: basically, first you have zero, one is the second iteration, and so on. The n actually means that it's a nightly. Each compose can have a type. It can be production, which basically says: hey, this distribution build is marked as production; otherwise, it is just some testing compose. So, the n is basically saying: hey, this is just a testing compose being run on a nightly basis. Status: FINISHED. If it's finished, all is set. It can be DOOMED as well, which means that something got broken, or it can be RUNNING. So, if it's broken, Pungi actually allows you to, let's say, repeat some specific phases. You might not actually want to rerun createrepo on all the packages; if you know that only the ISO file creation failed, then you might as well consider rerunning just the ISO phase and see what was wrong. So, Pungi runs everything in phases; I will show you how that looks after I go through the whole structure.

And the actual tree structure is in the directory called compose. As I said, a compose has metadata. This metadata is created by productmd, and you can read it back using productmd; it's basically very simple. It should tell you the directories for all the, let's say, package sets that you have in your distribution; in our case, it's just Core. You can have debug packages, ISO files, jigdo files. And we know that it's only x86_64. Then this should be a list of all images. As I said, this compose is going to produce ISO files; they are not bootable, because shells are not bootable. And here is basically some metadata for the image. In the ideal case, you would just take this metadata and upload it to PDC, which I already spoke about. It's definitely better than just relying on some files that can be deleted by some cleanup; at least you have some history. And rpms: this is basically a JSON with all of the RPMs which are located in the compose. As you can see, the structure is variant, architecture, then SRPM, and then the binary RPMs which were created from that SRPM. So, as long as you are using productmd, you can easily iterate through it.

The question was whether the extra packages in there are just all the dependencies of those three shells. Exactly. And I'm going to show you how it was generated, I mean the dependency tree. There are several methods: you can say gather all the dependencies, or you can just skip them, or you can say just include everything which is in the repository. Different approaches; you can configure it. So, I believe that here, if I look into the configuration file, you can see that we are using comps, and the gather method is deps. This means: gather all the dependencies as well. It can be nodeps; then it means: hey, don't take the dependencies. That might actually make sense, for example, if you have a layered product which is, let's say, installed on top of Fedora, and the layered product is somehow expected not to contain the same packages which are already in Fedora. You can set something which is called a lookaside repository, and this says: hey, if some of the dependencies that this package in my comps group requires are present in this repository, just don't include them, because they are part of a different product, if that makes sense.
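Before we get into the gather details, here is roughly the on-disk layout we were just browsing; the compose ID is hypothetical but follows the release-short-plus-date scheme described above:

```
trees/
└── DevConf-1.0-20160306.n.0/     # <release_short>-<version>-<date>.n.<respin>
    ├── STATUS                    # FINISHED / DOOMED / STARTED
    ├── compose/
    │   ├── metadata/
    │   │   ├── composeinfo.json  # variants, arches, directories
    │   │   ├── images.json       # all produced images (ISOs, ...)
    │   │   └── rpms.json         # variant -> arch -> SRPM -> RPMs
    │   └── Core/
    │       └── x86_64/
    │           ├── os/           # the repository itself
    │           └── iso/          # the DVD images
    ├── work/                     # intermediate files, gather logs
    └── logs/
latest-DevConf-1.0 -> DevConf-1.0-20160306.n.0   # symlink to the newest run
```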
So, you can also set some... I think it's here. I've seen a hand there; what was the question? Ah, yeah, the greedy method; it's actually build here. If I show you the source... we have plenty of time, so yeah, let's do it. So, what the greedy method is saying... let me show it. There are several approaches. One is called best match: basically, it will take the package which matches best, which will be the newest one with all the provides that you need; and I think it's actually sorted alphabetically as well. If you have multiple versions: if you are using the build system, and the build system links packages into tags, and a tag can reflect only a single version of a package at a time, then it's super simple; you always get the version which was tagged into the tag last. It doesn't matter whether that version is older or newer; it's just the timestamp of when it was tagged into the system. If you are using a repository as the source of data, that's kind of risky, and you are right: the method will take the best match, but it doesn't mean that it's the package that you really wanted, okay? As far as the build system goes, it's super clear: if you ran a Fedora compose and you would like to know which version of Bash is going to be in it, it's going to be this version; there is a strict rule and you can't go around it. If you would like to see all the versions which are in the build system... see? So, it's really a matter of which one was tagged. The latest tagged usually means the latest built, basically, because tagging is an operation which is done after each build. You can say: hey, if I'm building this package from this branch in git, it's going to be tagged with this tag; let's call it f24, for example, which represents your distribution, okay?

So, yeah, the greedy method. I wanted to show how it looks, because the good thing about the tarball is that you actually have the sources of all of the tools that we are using here. There is also a method which, if you have multiple versions, will just drag them all in, which might be the behavior that you actually want. See: none means only the best matching package; all means all packages matching the provide that you've mentioned. A provide is basically the record you've set in the compose configuration. Say you are looking for system-release, and there are several packages providing system-release, like fedora-release, fedora-release-workstation, fedora-release-server: if you set none, you will get only the best matching one. As far as system-release packages go, there is an ugly hack inside Pungi which always checks the name of the variant against the name of the system-release package; this is something that we would like to remove. If you use all, that means it will include all packages matching the provide, which will be all the versions that happen to be in the repository. And build is the best match plus the rest of its build, which is what we are using here, okay? What that includes: say you have an SRPM, and you want only Bash, but the Bash SRPM actually results in several RPMs; it's bash, bash documentation, probably Bash man pages, and so on. If you are using the build greedy method, it's going to include also the rest of the RPMs which were created from the same SRPM as the one you required.
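In config terms, the gather knobs just discussed look roughly like this; option names as I remember them, so double-check against the Pungi docs:

```python
# Sketch of the gather-related options discussed above.
gather_source = "comps"   # or "none" plus additional_packages
gather_method = "deps"    # "deps" = pull dependencies, "nodeps" = don't

# How to resolve a provide matched by several packages:
#   "none"  -> only the best matching package
#   "all"   -> every package matching the provide
#   "build" -> best match plus all RPMs from the same SRPM build
greedy_method = "build"

# There is also a lookaside option (gather_lookaside_repos) for layered
# products: anything already present in the lookaside repos is not
# pulled into the compose. Its exact structure is per the Pungi docs.
```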
So, in this case, bash-doc should be part of the compose, even though we didn't really ask for it and it's not really a dependency of Bash. This can be useful because, at least in how we build RHEL, if you really want to include a single RPM from an SRPM, you usually want to include all of the rest too. They might not be supported, they might be delivered in a different way, but they will be part of the compose, okay?

So, yeah, let's go here. As you can see, our compose consists of all of these packages, which are basically dependencies, and there was a question about dependencies, whether it's going to include them or not. So, Pungi: the binary that I have been executing is actually called pungi-koji, and pungi is just a tool which gathers your dependencies and creates a single repository based on the inputs. Let me show you how it looks, because I think this is the cornerstone of the whole tooling. If you go to the root of the compose directory, there is a directory called work, and under it, pungi. Here is the configuration, which is basically the group shells defined in the comps file that you've seen; it consists of ksh, bash, and zsh. fedora-release is added automatically; this is logic in Pungi which I dislike and would like to see removed, but it is what it is. And here is the log. Pungi is actually saying: hey, I just found the definition of Z shell, or its provide; I found Korn shell and Bash as well; let's include them in the distribution. I am skipping the SRPMs, so ignore the SRPM messages, because if I wanted to include all of the build dependencies as well, it would be huge. Here you can see: added glibc because Bash requires it; added ncurses-libs because Bash requires it. This is the way Pungi gathers all of the dependencies. So, it's not the compose tool itself; it's a binary from the same package. It's called pungi, and the compose tool is called pungi-koji. I know it's really tricky, and I don't know why they picked these names, but it is what it is. So, if you are asking: hey, why the hell was ncurses-libs included in my compose? You can see: added ncurses because of... here, it's because of Bash, right? This is really useful, because if you are trying to get rid of some extra dependencies and you are saying, hey, why the hell did I get Firefox in a compose with only shells, you will see why it's there; you may consider dropping the dependency from the spec file if it doesn't make sense, and you can shrink your distribution to the minimum. So, this is pungi.

I also told you that it's going to produce a DVD. This is just the work directory, sorry; we should go back to compose. See, and the DVD is here. As I said, it's recorded in the metadata, so if you would like to see all the ISO files for the variant Core, you can just iterate through the JSON, through productmd, super simple.

So, I've said that I would like to go through how Pungi works, and I've told you that it has several phases. If something goes wrong, you can turn on the debug mode and repeat just one specific phase. So, yeah, let's go into trees and break a compose on purpose. Yeah, that is one thing: there is a safety check, let's say; if your compose finished successfully, it prevents you from rerunning it again, and there is a message saying: dangerous, the data is going to be unsupported.
So, it's really used only for debugging purposes, and you shouldn't destroy a working compose which is working just fine. So, let's put DOOMED into the status. Let's remove the... are we still okay with time? I hope so. Let's remove the ISO file; let's say that something failed, I don't know, a dependency issue or so. I think that 'unsupported' will in this case mean that our image manifest JSON will be completely broken after this change, but still, you can debug what was wrong and so on.

If you would like to do some debugging, there is actually env.sh, which sets up your environment, because these two tools are not installed, so they won't be on your path and so on. source env.sh. pungi-koji, which is the compose tool; pungi is just the gatherer, you know, or whatever. So, pungi-koji, help: what does it offer? We have a debug mode. You might want to set the compose dir; this is going to be trees, well, ah, devconf, yeah; at least you can see which product I'm working on. And you need to set the config which you would like to process; that is going to be the pungi conf in our case. And you might want to say: just the phase createiso. One good thing: if you skip the debug mode, it will rerun everything, so always keep in mind that you want the debug mode. Also, it specifically tells you it's dangerous. So, pungi-koji, debug mode, compose dir, config... yep. And, of course: CRITICAL: checksums must not be blank. Oh. So, I think this is a bug, actually. Okay, it should be working; maybe let's try a different phase, let's try createrepo. Nice, nice. That's why it's called dangerous, guys. So, if I also do just-phase init... I think it should actually work it out. Okay, nice. Okay, so we can just keep rerunning composes, or we can debug it, and we can fix it and submit a patch, which would be the ultimate goal of this presentation, I would say.

Yeah. So, what to do in case something like this actually happens on your system; this is a real life scenario. Pungi, issues, new issue. Description: let's go for the traceback. But I would like to have the createiso one again. Let's just upload the traceback file: devconf-2016, trace... One question: do you know why I'm not using latest in this case? I know it would work, but in real life, if you have multiple composes, latest always points to the last working one, so you want to be sure that you are actually touching the one that really failed. Oh, go. Look, it's not there. Yeah, images.json, okay. I wonder what I was doing last time. Well, go: pungi, traceback. So, what really happened? The JSON couldn't be decoded; so the rerun resulted in invalid JSON, right? Let's see what is actually in the compose metadata: images.json. Ah, there you go. It still needs to have some metadata, so what about removing it? Bang. Something like that. And it will be fixed. So, this is the real life scenario; this is the profit. In this case, I think the traceback will be enough. You can see that rerunning actually resulted in an empty file instead of an empty JSON, so there is definitely code that needs to be fixed. The rerun sequence we just went through is sketched below, roughly.
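Roughly the sequence from this part of the demo, with hypothetical paths; the exact option spellings are in pungi-koji --help:

```
# Sketch: rerun a single phase of a finished compose (paths hypothetical).
source env.sh                      # put pungi/pungi-koji on $PATH

# Mark the compose as failed so pungi-koji agrees to touch it again,
# then simulate the failure by removing the ISO.
echo DOOMED > trees/DevConf-1.0-20160306.n.0/STATUS
rm trees/DevConf-1.0-20160306.n.0/compose/Core/x86_64/iso/*.iso

# Rerun only the createiso phase, in debug mode. Without the debug
# mode flag it would rerun everything.
pungi-koji --debug-mode \
    --compose-dir=trees/DevConf-1.0-20160306.n.0 \
    --config=pungi.conf \
    --just-phase=createiso
```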
So, if you want to play with the compose, I think I have a scenario for you which is not so easy to achieve, if you are looking for challenges, and if you would like to do a deep dive into the config files and you are really interested in it: there is a bash-doc RPM, and I would like to have it in a separate variant, or a separate group, which won't be called shells, but shell documentation, and it will consist of only these documentation files. It should be quite easy to achieve. If you want to play with it, go ahead; if you don't, I can again go through the phases of the compose, because I've already promised you that I am going to do that. Meanwhile, feel free to play with the configuration. If you go into the test data, 2016, there is a bunch of RPMs waiting for you, right? Feel free to create whatever configurations you want. If it fails, just report a new issue; it'll be awesome. Otherwise, let's go through the actual code.

Hey, this is the original owner, or the creator, of the tool. He just found a bug. Daniel just found a bug: if you rerun a compose in Pungi... yeah, if you rerun, yeah, exactly. See, that's what I told you. So, just to repeat for him: what we've done is that we went through a list of tools that we've open sourced, which somehow make sense for a Linux distribution, and we did a testing compose with several shells, bash, Korn shell, and so on. And we did some rerunning, and it failed.

So, I've told you that Pungi actually runs things in phases; you've seen the failure that we had, and you've seen that we actually decided to rerun just one phase, which is createiso. Basically, if you go here, you can see all of the phases which we have. Each phase is represented by a Python module, I would say. If you need to add some deliverable, like Atomic trees, which are actually already requested in Pagure, and they are quite, I don't want to say urgent, but they are: feel free to submit either an RFE or, you know, just implement the new feature; it's really welcome.

So, you can see pkgset, which is basically, I believe, getting the base package set for the distribution, right? The buildinstall phase, which builds the boot images. The gather phase, which gathers all the deps. Extra files: you guys might need some readme file or some non-RPM content on the DVD, like documentation or release notes; these are defined in a config section called extra_files, and the source can be either a git repository, or partial content of an RPM, for example some man pages or whatever, or some images from, let's say, in our case, fedora-logos. You have the createrepo phase, which creates the actual repositories; product image, which basically creates a product-specific installer image; the createiso phase, which is the phase responsible for creating the images, right? Not the boot image, which is in buildinstall, but all of the DVDs, like the DVD that we've just created. Live images, if you want to have some live images; we don't, in this case it wouldn't make sense. The image build phase: this is for Docker and qcow2 images; what else do we have in image build? I believe appliance images as well. The image checksum phase: this calculates, at the end, the checksums of all the ISO files and images that you have, and stores them in the image manifest, which you, in the ideal case, would import into PDC, right? And pass to CI for testing. And the test phase: this just runs repoclosure on the repositories to verify everything was fine. I believe the behavior is that if some dependencies were not satisfied, it's not going to fail the compose;
it's going to write errors into the logs, which can then be processed by QE, and that's it. So, these are the phases. If you want to create a new one, you would just write a few lines here. If you go into the source code, it is quite nicely structured, actually; basically, all of these files represent phases. You can see that, for example, this is the image build phase; it's not that long, I don't know, 200, 250 lines.

So, would you like to go through some configurations that you might actually want to achieve? Or maybe you have some solution in your head which you are not sure Pungi would be able to cover; I think some people might have such questions. Feel free to ask; we can go through it, we can try to map it. As I was already saying about variants: in our case, we had a single variant called Core. You might have variants called Server and Client; each can consist of several groups. You might have some additional repositories for Client; let's call it, for example, a web server module, and that can consist of different packages. And then you can have something called an integrated layered product, which might be less variant specific, so it can be used on any variant. I think that somebody here said that he was working on his own distribution. I'm not sure if it was the gentleman here, or somebody on the right side; somebody was raising a hand when I was asking who was already building his own distribution. Oh, yeah, sure. So, I have a few scarves here, and I would really like to share them with you. So, ah. Sure. Sure.

I would say that if you used it just to create repositories, and not for a distribution, it would still be the right tool. You said, are we not taking the dependencies, or are we including them as well? Yeah, we're including them. Exactly, exactly. Sure, sure. Well, what you can say is, for example, if you have a scenario where you want just some dependencies and not the others: if you are using EPEL, you can set EPEL as the lookaside repository, for example, and say, hey, just don't include anything which is already in there, and include just the dependencies which are not in EPEL. That could actually work for you. Yeah, I would say that it's the right tool.

I think you could have just the depsolving and just print the results. So, if you're looking for something that does the depsolving without all the big overhead which is part of Pungi, because Pungi is basically building a whole distribution rather than something simple, then you can just verify that it's there, in your browser. Yeah, because, you know, after that, it's done. Is it in your private branch, or? Yeah, it's my private one. So, in this case, his name is Daniel; you can go to pagure.io/pungi, go to forks, find Daniel here, let's look at his fork. Branch, yeah. This one, right? I'm not sure if we are actually able to navigate to a different branch here. Yeah, yeah, okay. So, there is a gather script in bin, right? pungi-gather, I guess. So, as Daniel said, it's basically a slimmed-down Pungi, which skips that overhead and just prints the dependencies, right? I believe that's quite accurate, I'm not sure. Oh, yeah. It would be nice if it produced some JSON file, for example, that you can process. Yep. So, whatever your use case is, you can have it covered here: you can call the function which is going to return you the packages, or, if you want the repo creation as well, you can use the whole Pungi.
I can actually see a use case for this script. Also, I don't know, I didn't want to mention it here, but basically, below the build system, we have an additional layer called dist-git, which is management of git repositories, and there are conditions under which you may or may not commit to a repository; like, you need to have an approved Bugzilla in order to push some fix. And I have seen that on the release-engineering GitHub, it was good luck, actually, that we were able to acquire the release-engineering name, there is this dist-git, which is the same name as the tooling that we are using internally. I just checked it, and it looks a little bit different, and I'm not sure what state it's in, but you might consider having a look at it if you are looking for a solution inside your company for, let's say, some fine-grained level of access to git repositories as far as commit policy goes. It might be a nice place to look, so you don't have to create something of your own. You might just use, like, a GitHub fork, right, with some other address, but this gives you more powerful rules and conditions under which you can or cannot commit into a repository. I've seen some diagrams here of how it works: the client would be the developer, right? You have a local git, you have the dist-git server; basically, you can cook anything locally, but you can commit it only under some conditions, like your commit message needs to, you know, reference some bug and so on. So, I'm not sure what the state is, but it looks well documented, so feel free to check it out. That's something that I wasn't aware of, actually, until yesterday.

There are also several other projects, some of them actually related to Maven and so on; however, I wasn't mentioning them, because Maven is not necessarily the deliverable that you are looking at as far as a Linux distribution goes. By the way, if you are interested in the build system, which I believe is really advanced, you might just use it in your company, especially if you are interested in reproducible builds, which might be critical. There are several tools which allow you to get a testing instance up in a few minutes; because last time, when I was setting it up from scratch, it took me a week, which is terrible, and these scripts will really do it in a few minutes. One of them is Kojak. I just tested it yesterday; it's not working, so the code is not really current. But I believe that we have a solution which generates Docker images for the build system, so let me just check it; it should work. Have I seen it here in release-engineering... here: koji-dojo. A Docker image for testing services that need to have Koji around; that could be it, I believe. Yes. Sure, so it looks like it's basically a Koji image that you can use.

So, as I said, Pungi can have multiple input sources. One of them can be a repository; this is what we use. Or you can use the whole build system. Then the input for, yeah, for Pungi would be the tag which is holding the builds. If you do list tags, koji list-tags, you can see all the tags; with Fedora... sorry, let me go full screen, you can't see it. So, basically, there is the whole history of Fedora in there. If you would like to see one of the latest packages in Fedora 19, you can easily search for it here. The compose is then just taking the image, sorry, the tag, and the tag consists of, let's say, thousands of builds.
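If you want to poke at Fedora's Koji yourself, the tag queries from the demo look roughly like this; the tag name follows Fedora's scheme, but treat the exact output as illustrative:

```
# List all tags known to the build system (huge on Fedora's Koji).
koji list-tags

# Show the latest tagged build of bash in the f24 tag...
koji list-tagged f24 bash --latest

# ...or every build of bash ever tagged there.
koji list-tagged f24 bash
```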
And it just iterates through all of those instead of through the repository. And Pungi will then actually link the files instead of copying them, as in this case, because Koji has its own directory structure for the data, and if you are, let's say, really picky about saving space, then linking is the best choice, I believe. If the filesystem with the packages is well protected, it's probably the best solution you can go with. So, if you were curious what packages are actually stored in the tag... the tag, it's going to be huge. But as I said, it's one of the options. For a small company, it might actually make sense to use a repository instead of a build system; for a company which is really worried about reproducibility and security, the build system is the way to go, and the integration is really seamless. Oh no, it's going to take a long time; yeah, thousands and thousands of packages. Just saying, you know, if you try it at home, it will probably be much faster.

So, any questions? There was a question, and I believe it was a good point, so I think that you deserve a scarf; or file some issues on Pagure, that would be even better. You mentioned that you are also building images with this? Yes. Yes, we can build them; it's part of the image build. If you check the ImageFactory project: we have an implementation which supports basically anything within ImageFactory. Basically, our implementation is... let me show you some example. Oh, here is the configuration RST. The section here says: for the variant Server, consider it Fedora Server in this case, we would like to have a Docker image; the format is going to be docker, and the suffix is going to be this one. The reason why I'm actually specifying the suffix as well is that a single task can result in, let's say, 20 images, and these images will all have different suffixes; this is a way to map them, because, unfortunately, Koji doesn't provide any metadata which would say this one is Docker, this one is qcow2; it's just the output of a single task. There is definitely space for improvement, but as of now, that's what we have. So, this is the image name. Target: this is the target in Koji; a target consists of several packages which are going to be used for building such an image. It also has some comps groups which say: hey, for a Docker build, use this set of packages; for a qcow2 image build, use this set of packages. Then, of course, you need to have a kickstart file, which says what the package set of such an image is going to be. Such a kickstart file may look like... thinking about what is safe to show; I think this one is production, so... six, seven... this one, for example. So, this is how the kickstart file is going to look. And it says: hey, this qcow2, was it qcow2? no, Docker; this Docker image is going to consist of bash, libselinux, Python, blah, blah, blah, and we don't want libss, upstart, or the kernel and firmware packages. And then, for the image build, Koji, or sorry, Pungi, actually runs koji image-build with a config file, the config file which was generated by Pungi; basically, it's just a transformation of this config section. So, here, it will be image-build. This looks quite absurd, to be sure. It's basically going to transform this into an ini-style file which we pass to brew image-build or koji image-build, and this is going to produce the image. Pungi then gathers the images from the Koji tasks, so it saves the tarballs inside the compose. And that's it.
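The kickstart he flashes on screen would be along these lines; the package list here is illustrative, not the production one:

```
# Minimal kickstart sketch for an image build (packages illustrative).
# The %packages block defines the image's package set; a leading "-"
# excludes a package that would otherwise be dragged in.
lang en_US.UTF-8
rootpw --lock
%packages --excludedocs --nocore
bash
libselinux
python
-upstart
-kernel*
-*-firmware
%end
```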
So it's just a matter of configuration. We already know how to do that for quite a long time actually. I believe that Fedora is adapting this tooling right now. So they are using actually some huge batch script, but the goal, as I said, is to use the same set of tooling. And I believe that they might actually be using it for running the QCAP images, but it's not the images which are actually going to reach production yet. I know that they had several RFEs regarding the image build. But I believe in next Fedora it is definitely, but I don't think it's quite there yet. But as far as our internal products goes, this is how we generate images that we ship right now. Yeah, like yesterday and day before. Since you're using Q-Start, it means that it actually uses Anaconda too. Exactly. That's how it works. So we have several approaches in Koji how you can actually generate these images. One of them is just use Anaconda inside QCAP image which will generate some tree structure. You are just going to create archive from the structure and generate metadata. The other approach is create QCAP image and then run a tool. I think it's directly Docker which can turn the QCAP image into Docker image. The only problem is that the solution using Docker takes about one hour while the installation and just making terrible stream the metadata takes 10 minutes. So I believe that people actually, the general mode is go towards in direction which is the QCAP transformation into Docker. However, the time spent on a single image which is like 50 or 80 megabytes, one hour, this is terrible. It's not worth it yet. But we could be there. I got some promise that in year or so we could be at 20 minutes and then it might be actually considered. Then it might be considerable. Yeah, but your kickstart means that we are actually using Anaconda. Cool, I think that this is kind of... So, any other questions? I believe this, the gentleman here deserves it as well because he had a lot of activity. So, any other questions? I built from source with this tool. Oh, no, no, no, no. This is actually just creating repositories and deliverables but the actual build from source is done in Koji. You can use Koji to rebuild something and then punch it to create repository out of RPMs built from the SRPM. So we are thinking about layer below this tool. But that's what we are doing on in one unnamed product or project right now. You can use Mock instead of Koji. Exactly, and Koji is using Mock. Yep, exactly. If you have your own kind of needs to build something in the same way, and Koji's or kind of build systems are truly... I would recommend Mock because it actually puts down just everything in a change route which is always the same. Exactly. So the output is much more predictable than the dependency-wise as well. Yeah, I can also recommend it for like testing purposes because it's just saving you so much time and you don't need to have the whole chain of infrastructure on your laptop. The only problem is it shouldn't go into production because you could have injected anything. While in Koji, you can actually see what was in the build route and everything. But yes, definitely, it's the same technology which is actually being used in Koji. It's just being somehow locked. Yep, that's what I thought it... It's RC. Oh, instead... So you can go back and show you... So let me show you such example actually. It's difficult to install. Oh no, no. If you will use like the... How was it called? Dockard Dojo or Koji Dojo? 
Yep, that's what I thought. And Koji itself, is it difficult to install? Oh no, no: if you use, how was it called, koji-dojo, or Kojak, it takes a few minutes; it's out of the box, basically.

And... the RPM Fusion build is based on CVS, right? Well, I don't know; I don't really know about RPM Fusion. Yeah, that's their problem, exactly. Many people prefer using it, so it's also a problem for users of Fedora; but RPM Fusion is not affiliated with Fedora in that way. Yeah.

I don't know what this is going to be about, so let me just show you this use case with the history in the database. I believe it should have an option to actually show also the events, right? This is the task ID. But basically, if you would like to... yeah, so let's do this: history. Help. list history. And you want a build. No. Where am I? Here, build. And you would like to see when exactly this build was built and what was in the build root; it's super simple. Build... no, the address... oh, yeah. Damn, I'm sharing so many secrets here. So, here: it was tagged here. But this is not exactly the build time. If you actually check the full listing, koji list packages, or list builds, or list tagged, and you look at the build, you can see the target it was built against, which in this case was the Fedora 23 candidate, and you can actually see the actual build root of the build. And you can specify the event. Tag... oh, yeah, you can actually specify the event, I believe, which is the event at the build time: tagged into updates testing, and tag, tag. You can use, for example, this event, and that will give you the exact content of the... yeah, you want the inheritance for it: inheritance, like this, as of this event. And that will show you the whole build root, all the RPMs that were available at the build time of the package at that certain moment, which is really cool. It's just going to take quite long. The commands are roughly the ones in the sketch at the end of this section.

So, what I consider really nice here: Koji, finally, list tag inheritance. If you would like to see how the structure of the repositories goes, you can see that the Fedora 23 build environment inherits basically from Fedora 22 updates, this one inherits from there, there, there, and it goes back to Fedora 18. This is quite nice. You can do quite the same for the actual tag with builds. You can create such structures if you have multiple products. Let's say my product is going to be, I don't know, httpd, like some web server: you might want to inherit, from either Fedora or CentOS, the packages which are actually used for building it, and you can add some more in your own tag, which is really nice; you can just reuse all the bits that you want. There are also certain configurations which might allow you to prevent inheriting from certain tags: like, hey, I'm working on, let's call my tag, development, and I want to prevent any production product from using my bits. So, I can actually tweak the policy to prevent inheriting from tags which, for example, have development in the name, or something; there are several conditions, and you have to think of many scenarios for how to prevent an actual user from consuming your data. And there is a priority of rules that you can use, and it resolves them as of the event.

Oh yeah, sure. So, I think that we are basically done here. If you wait, it will probably finish by the rest of the presentation, but it's going to be 12,000-plus packages, you know; so, huge. Which is okay; it listed it all in the end. Yep. And you can actually compare it later on against different builds, to see whether the build root actually changed. You know, this is just, like, the end of the presentation.
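For reference, the event archaeology from that demo looks roughly like the following; the build NVR and event ID are hypothetical, and the option spellings are per the Koji CLI help:

```
# Inspect when a build was tagged and what its build root looked like.

# Tag/untag history of a build; shows tag operations with event IDs.
koji list-history --build bash-4.3.42-1.fc23

# The tag inheritance chain as of a given event (i.e. at build time).
koji list-tag-inheritance f23-build --event 14158958

# Everything that was in the build root tag at that event.
koji list-tagged f23-build --event 14158958 --inherit
```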
You know, this is just about the end of the presentation. I basically showed you the pages on GitHub, so you know where to look for the bits, you know where to report issues, and you've seen how to report them. So if you guys would like to play with this in your company, consider starting with Koji. If you are into Docker, check out Koji Dojo; that will help you. If not, check Kojak, "Koji in a box", on GitHub. I believe it works with Fedora 22, and it works with RHEL 7 and RHEL 6. Just run it; there are options to install locally, to create a VM, or to create a Docker image, so you can have it on your machine, inside a VM, or in a Docker container. It should be quite simple. Authentication is done through certificates, or Kerberos, or password. Take your pick.
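On the client side, pointing the koji command at whatever hub you set up is just a small config file; a minimal sketch, assuming a hypothetical hub at koji.example.com:

    # write a minimal client configuration; authtype can be ssl (certificates),
    # kerberos, or password, depending on how the hub is set up
    cat > ~/.koji/config <<'EOF'
    [koji]
    server   = https://koji.example.com/kojihub
    weburl   = https://koji.example.com/koji
    topurl   = https://koji.example.com/kojifiles
    authtype = kerberos
    EOF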
So I guess this is all from me. Please keep in mind that you can rate this talk on the website, and we are still looking for lightning talks, so you can propose your own if you want, or vote for the next one. Thank you. Yeah, thanks a lot.

We have a document here... where do you have the document? I already found it, but there are actually two problems. One is that not all the phases generate checksums, so if you want to add a new signature to the image manifest, there is no other way to do it. Yeah, it's not a big deal, but there is a danger there.

So yeah, hey, good morning. Good morning as well. Yeah, you can prepare yourselves; I noticed that there are already quite a few people here. A colleague of mine is probably joining in a couple of minutes. Well, actually, this is the second part of the workshop. We already had the first part yesterday, and probably many people who came today were already there yesterday, so there was an introduction and you already know some of the answers. Would it be possible to ask you to do a presentation on this topic? I already did, yesterday. One more? Can I use it to get some guests to the party tonight, or should I leave this badge here? Okay.

You guys are doing a workshop in here? Yes. I'm going to leave you some USB sticks. I know, but there were some already; we can replicate the new version which a colleague provided, but only if you need them. Someone just dropped the new version this morning, so can you just check that you have the current one? Yeah, thanks. Let me double-check that it's the current version. Okay. Well, I don't know if everybody still has the stick from yesterday or if they gave it away. Probably everybody who was there yesterday already copied the content over to their local machine, so there was actually no need to give them the updated USB sticks; only if something changed on them, then of course they need it, but otherwise it's not necessary. Hey, how are you? I'm good. So if you get a USB stick, copy the content over, and then you have it available and can pass the stick on. Now I need one more USB stick. Okay. Thank you very much.