Welcome to my talk. My name is Yuri Vasilevsky. I'm a Gentoo developer who sometimes comes to Debian conferences and finds them quite interesting, because I can learn tons of stuff. What I've been doing for the past couple of months is Libus. It's a distribution generator which takes Gentoo Portage and converts it into Debian, so we get nice debs that can use all the wonderful Debian tools: the installer, debootstrap, probably Debian Live, and so on.

Before I start I really would like to send greetings to Joe on his bike tour, because the last time I was at DebConf, which was actually my first time, I went: what the hell am I doing here? I love Linux, I know how Gentoo works and so on, but it would be nice to understand how Debian works and why it works that way. So we had a couple of five-hour-long conversations where I was squeezing Joe's mind about absolutely everything: why is that? And why is that? Can you do it better than that? Oh no, there is a glitch with that. Thanks to that I was really able to start understanding the why behind how Debian works and its ideology. I would also like to thank Daniel Brown, who has helped me a lot during this DebCamp with tons and tons of tiny details that I wasn't able to get from the Debian Policy Manual and was doing my own way. He would say: no, that's not what Debian policy intends at all; you should do it this way, it's much better. So thank you all, guys.

Well, what is it? If you go to the Gentoo about page, you will see a nice summary that says Gentoo is a free operating system that can be automatically optimized and customized for just about any application or need: servers, desktops, workstations, embedded systems, customized applications. Because of that we call Gentoo a metadistribution. So let's make that happen, because so far we have all the flexibility to do it, but the tools just aren't there. We can build binary packages with Gentoo, but there are no tools to maintain them. They have dependency problems, because they use the same dependency system as Gentoo source packages, which, as I will come to a little later, ignores ABI; for binary packages you usually need to take care of ABI as well.

So, the motivation for the work I have done. I love Gentoo; it's really close to what I want. The problem is that it's source based, so I need to compile everything. It's really hard to have two identical systems, because you will tune and tweak each one in a slightly different way, so it's hard to reproduce bugs when a bug is there, and it takes a lot of time to maintain a Gentoo system. Then there is Debian, which I now know a little bit about thanks to you guys, and it's also quite nice. It has absolutely wonderful tools to manage it, but the problem is that it's not source based. So I cannot tweak it the way I would like to, and I cannot have the consistency in my system that I would like, because one maintainer decides to compile his package with this particular option or these settings, some other maintainer picks some other option, and yet another maintainer makes five flavors of a package so I can manually choose which one I want.
It takes time, and it is not even close to the customization possibilities that there are in Gentoo.

So, a little bit about Gentoo. It's source based. What does that mean? We compile everything. Yes, we do. We can optimize things, so most people know us as crazy optimization freaks, but that was really only true at the origin: Gentoo was using some GCC patches that weren't upstream yet, and that made Gentoo like 20% faster, but by the time the next GCC was released those patches were already upstream, so the advantage was gone; we still retained the fame, though.

There are really good things about being source based. We are ABI independent. What does that mean? Some package upgrades some library; I have a nice slide about that. We have a library with some soname, say libfoo.so.3.1; we update it and it becomes libfoo.so.3.2. The API has stayed the same, so every package that worked with libfoo 3.1 still works with libfoo 3.2, but there is a big problem: the old soname has disappeared. So what do we do? It's easy, one common fix: we just recompile every package that broke because of that. It's actually reasonably fast, because on a given system you won't have all the thousands and thousands of packages that are available in Gentoo, in Debian, in every distribution; you just rebuild the ones you need. You don't have the problem of hundreds of other binaries depending on that soname: if you update libfoo to a new version, in the whole Debian archive you would have to rebuild hundreds of packages, but on your system there are just a couple of packages to rebuild and that's it. And it's quite elegant: every package uses the newer version, so there is less possibility of bugs, less possibility of security errors. So it's perfect? No. I actually have some patches, which I need to polish a little and try to get integrated, that will inform you that this and that package broke, so please recompile them, and will show you the command you should type. In Gentoo we have the philosophy that nothing should be done automatically; it's all under the user's control. We do provide really nice tools to manage a system, but running them is up to you.

Someone asked: so you still have issues where the API is not compatible? Yes, exactly. There was also the comment that in many cases in Debian this problem doesn't really happen, because the soname is part of the binary package name: when libbar goes from soname 3 to 4 it simply becomes a new package, both can be installed at the same time, and a transition is only needed when the soname actually changes, not for an update from 3.1 to 3.2 under the same soname. Yes, but for upstreams that don't do it that way you need patches; in many cases you end up patching sonames and doing things like that, which requires a lot of manual intervention. And if something is linked against the fully versioned name rather than the soname, you still have to have a transition, because then the soname has effectively changed. In any case, it is still ABI dependent.
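To make the "recompile whatever broke" idea concrete: Gentoo ships a helper for this (revdep-rebuild, from gentoolkit) that does essentially the check sketched below. This is only an illustration of the principle, not the real tool, and the paths scanned are just examples.

    #!/bin/bash
    # Walk some installed binaries and libraries and report anything that is
    # linked against a library that no longer exists (e.g. after a soname bump).
    for f in /usr/bin/* /usr/lib/lib*.so*; do
        [ -f "$f" ] || continue
        if ldd "$f" 2>/dev/null | grep -q "not found"; then
            echo "broken: $f"
            # The owning package can then be looked up in /var/db/pkg (each
            # package's CONTENTS file lists the files it installed) and
            # rebuilt, e.g. with: emerge --oneshot <category>/<package>
        fi
    done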
So how do we manage that? We allow certain packages to have more than one version installed at the same time; we call it slotting. If the maintainer sees that this is an important library, for example db, which is one case where several versions of it are installed at once, we slot it so both versions can be installed at the same time; the headers just get installed in slightly different directories. It works reasonably well. And if a new library version would break the API and break some packages, we have the policy of not stabilizing it until all packages that depend on it can work with it. So we do take care of that.

Something else I really like: we have a live package tree. There are no releases. Well, there are releases, but they are just for the installation media, not for the actual packages you get installed on your system. So it doesn't matter whether you installed the system four years ago or yesterday: if you update it, the result is exactly the same in both cases. You don't need those problematic transition stages where you have to update a lot of interdependent packages at the same time; we do it a small piece at a time, which means things almost always go very smoothly. We have up-to-date packages, and we have a huge package database, absolutely huge and still growing, which is nice; there are tons of package addition requests for the Portage tree.

A little bit more about Gentoo. This is a strong point for me, and probably for about 99% of Gentoo users: Gentoo is extremely customizable. You have this central Portage tree with all the main information for all the packages, and then everything else is dynamic. We have profiles, which define what the system packages (in Gentoo terminology; essential packages in Debian terminology) are on a given instance of a Gentoo installation. In the profiles we provide the defaults for individual packages and for the USE settings with which we will be compiling things (USE flags I'll explain on the next slide). So we can have exactly the same tree whichever architecture we use, whether we use a FreeBSD kernel with a GNU userland, or a GNU userland with the Linux kernel and uClibc; it's all managed by the profile. If you need to make a very intrusive change that would break everything, you just create a new profile, for things like switching from glibc to uClibc, and set the right things up there. You build stages with that profile (which I will explain a little later as well), so that you have a minimal Gentoo system which you can unpack and start the installation from. Excellent.

USE flags. If there are any questions, feel free to interrupt me at any time and ask whatever, or make any comments. Many packages in Gentoo provide a lot of USE flags; in some cases it's absolutely insane. MPlayer provides like 80 USE flags, so it can be built in two to the power of 80 possible ways; PHP provides like a hundred and something. So it's absolutely insane: you can customize almost any package in almost any way that upstream provides.
We try to expose the flexibility that many upstream packages provide through USE flags. For example, we have use_enable gnome: if the user has the gnome USE flag set, then in the package's build script the configure call gets a parameter like --enable-gnome; and there is use_with, which turns into --with-gnome or --with-libfoo, whatever it is. So we try to export as much of upstream's build-system interface as possible through USE flags. There are some cases where that is just not possible, or it would break things, so we do hard-code some choices.

Then we have compiler flags, CFLAGS, CXXFLAGS and so on, which the user can set. Optimizations are nice to play with, but they actually don't give you too much of an advantage, except for cases where it really matters, like being able to set -Os for embedded, or compiling particular packages with -O3, which can be done freely with a little hack. That hack involves Portage's hooks. If there is anything that Gentoo does not let you customize by default, there are plenty of hooks. Ebuilds, which are the build scripts of Gentoo packages, are basically bash scripts containing the instructions you would have to type if you compiled the package manually, just a little smarter. And being bash, there is an instance of bash running that we can hook into with a bashrc file. That way we can add things like an auto-patching facility: for example, after the unpack stage, our bashrc gets entered at the compile stage of the build process, and you can say: aha, here is a package with this name. We have variables like PN and PV automatically set in ebuilds, so we can match: if there is a patch file whose name matches the package name and version we already have, let's apply it. So there is no need to modify the ebuilds that come from Gentoo in any way. You can do the same kind of thing for stripping unneeded libraries, documentation, whatever, although there is actually a smarter way to do that, but it doesn't matter.

Internally, when you install something, you run emerge foo; that finds foo in some category under /usr/portage (these are the packages) and finds the ebuild script there, which we can look at. There is the unpack and patch stage, which I will talk about later as well; there is src_compile, which is actually quite easy: emake is just a wrapper around the make command, you pass it some flags; and there is install. Someone asked why we need a wrapper over make: because sometimes we need to pass additional parameters that are not there by default, for example when we are cross-compiling or doing other funny things; we also have a wrapper around configure, econf, for the same reason. If Portage detects that this is not the default situation where you type make and are done with it, it adds the right flags to the make and configure invocations.

So internally, when you do emerge foo, it calls ebuild with the actual foo.ebuild and some command. That command can be any of: setup; clean; fetch, when we download the source files; digest, when we generate the checksums, the MD5 sums of the source tarballs; manifest, which is the same kind of thing; and then the interesting part starts: compile, test, preinst, install, postinst, merge (and qmerge, the wrapper around merge), unmerge, prerm, postrm. We can hook into any of those actions from our bashrc file and do stuff there, with access to the source tree, to the installation image and to the live system. So it's really easy to customize things.
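For illustration, here is roughly what those two mechanisms look like in practice. The snippets are minimal sketches, not taken from a real ebuild: the package and patch names are made up, and the exact phase at which a bashrc hook fires varies between Portage versions.

    # Part 1: inside an ebuild, USE flags are usually forwarded to ./configure
    # with helpers like use_enable / use_with:
    src_compile() {
        econf \
            $(use_enable gnome) \
            $(use_with   sdl)      # becomes --enable/--disable-gnome, --with/--without-sdl
        emake || die "make failed"
    }

    # Part 2: /etc/portage/bashrc is sourced for every phase of every ebuild,
    # so a site-wide auto-patching hook can look roughly like this:
    case "${EBUILD_PHASE}" in
        compile)
            for p in /etc/portage/patches/${PN}-${PV}-*.patch; do
                [ -e "$p" ] || continue
                echo ">>> auto-applying $p"
                patch -d "${S}" -p1 < "$p"     # ${S} is the unpacked source dir
            done
            ;;
    esac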
Let's see where we are going. Basically, you can do whatever you want with it. And I'm still on time, almost.

Nice extras. We have automated security tracking: GLSAs, Gentoo Linux Security Advisories, which are computer-parsable. When someone from the security team finds a bug in a package, or notices that someone else has found a security bug in a package, he writes a GLSA that says: packages starting from this version are safe; packages below this version, or these exact versions, are affected. And there is a tool, glsa-check, that checks the list of all these security advisories against the packages you have installed and updates them automatically to safe, secure versions. We have nice dependencies in init scripts, which specify what they use, need and provide, so they can be started in parallel and do other fancy stuff, which I kind of like.

We have native support for cross-compilation. We can generate a ton of cross-compilers, basically for all these architectures with all these C libraries. Some of them are quite broken, but that's okay. There are some special targets too, like AVR chips, microcontrollers and stuff like that, and we have Cell support for the PlayStation as well, if you have one. Everything can be built with soft or hard float, and the support is automatic: you usually just pass something like CHOST=whatever CTARGET=whatever and emerge foo. If there is already a cross-compiler for that target it will use it and set up the right options, and it will merge the dependencies. You usually won't want to merge all of that into your own root, so you pass a ROOT as well; build dependencies get installed into your system with your native compiler, and runtime dependencies go into that root. It all works quite nicely. There are obviously many packages that are broken for cross-compilation, but we have at least the whole system set (in Debian terminology, the essential packages) working with cross-compilation. So if we need to port something to a new architecture for which GCC already exists, we just cross-compile a set of system packages for that architecture, then we can install it on that architecture and continue from there.

We have really nice configuration tools, like eselect, which can select things such as the OpenGL implementation or the profile. Right now I'm using the Gentoo desktop profile of this version; I could choose another profile with it, although that's really just a symlink and it's easy to do manually. There is also gcc-config; on this laptop it's somewhat broken and I have no idea why, but normally it would bring up the list of all the GCC toolchains you have. Let's check whether the cross ones also work. Yes, excellent: I have only one for this target, and if I had cross-toolchains built on this machine I would also get the list of all the targets and could select independently which one to use, so I can have as many installed as I want, and the symlinks are managed for me automatically for the versions I want to use.
There are quite a few more nice configuration tools, like dispatch-conf, for updating configuration files. It basically scans my /etc directory for configuration file updates, then checks against all the stored revisions whether a change that shows up was already accepted or rejected by me earlier; if so, it sticks to the choice I made and rejects or accepts that setting again. So it will not ask me every time. For example, I have enabled X11 forwarding in my sshd configuration file and the default is "no": it asks me only once, do you want this to be reverted to no? I say no, and it never asks again. Beyond that you can merge changes however you like, or accept the new file; I usually accept the new file for everything where it doesn't matter.

Excellent. And we have a plain-text database of the packages we have installed, so it is easily parsable: no binary format, no linking against some special tool. Let's look at GCC, for example; I have two versions of GCC installed. We have all the important Portage settings stored in here. These are variables that were used during the build process: IUSE, the USE flags this package supports, and USE, the ones the system actually had set, and so on, so every package that supports an option I have set gets built accordingly. What else do we have? We have support for soname checking: when the system is using a library that is being updated and the old soname would be removed, Portage keeps the old library file around for a while until the new one is usable, and only then removes it, so the system isn't broken in the middle of an update. We keep the ebuild that was originally used for the build, so we know exactly which version we used to merge. There are nice things like CONTENTS, the list of files that were installed by this package. And it records all the variables that were set automatically during the build process, so we can basically reconstruct the build environment; and in case those variables are not enough, there is a nice dump of the whole environment, with everything that was actually set for us and all the helper functions at the end, like these ones. I really like this part, because everything is exported in a nice way and it's easy to build tools on top of it. Any questions about that part?
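For the record, that plain-text database lives under /var/db/pkg, one directory per installed package version. The commands below are just an illustration of poking at it by hand (the gcc path assumes gcc is installed, which it is on any Gentoo box):

    # Everything Portage knows about an installed package is a plain file here:
    ls /var/db/pkg/sys-devel/gcc-*/
    cat /var/db/pkg/sys-devel/gcc-*/USE          # USE flags it was built with
    cat /var/db/pkg/sys-devel/gcc-*/CFLAGS       # compiler flags used for the build
    head /var/db/pkg/sys-devel/gcc-*/CONTENTS    # list of files it installed
    bzcat /var/db/pkg/sys-devel/gcc-*/environment.bz2 | less   # full dump of the build environment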
Okay, so now, what do I have to do with all of this to make it work? Let's see how Gentoo is installed. You boot from a CD; there is an installer, but it's boring and I won't talk about it. We boot from a CD and then do everything manually; that's the fun part. We configure the network in case we need to fetch some files, which I always do. We set up the disks and extract the base files; I downloaded them all just yesterday. We can install Gentoo from several stages, which is useful if you want to add a new architecture or sub-architecture: you usually start with the stage files that work for your architecture, and if they are not optimal you bootstrap with your own settings and your own profile from there. I'll explain that a bit here. So we have four types of installation.

Stage one is basically a system that runs on your target, that's the only requirement, plus the tools necessary to build a new toolchain. So it starts building; I can show you what it builds: linux-headers and the things needed for binutils, gcc and glibc. It also builds baselayout, which is something like base-files in Debian, and zlib, which is needed by most packages. That's the bootstrap process, and that's the difference between stage one and stage two: with stage two your system is already bootstrapped, and after bootstrapping you are not supposed to change your CFLAGS or related settings, while during the bootstrap you still can. Then you just have to compile your system packages, the essential packages, and do the rest of the setup. You also have the option of a stage three install, which already has your essential packages compiled, so you just add a kernel, a bootloader, some tools like cron and a syslog, and you can boot. Or you can do a GRP install, the Gentoo Reference Platform, which is basically precompiled packages for most of the things you would probably use: X, GNOME, common applications and daemons, whatever.

Someone asked: does that mean you don't need to compile? Yes, but the problem is that Gentoo is not really designed to manage binary packages; it is very easy to break them on update, because it doesn't keep track of ABI dependencies. That's the problem. And the format is really crap: it's basically a tarball of your image, after the tarball you put a couple of zero blocks so tar will stop, and after that you append the metadata that would live in the /var/db/pkg directory, with some magic to extract that part back out. It's really not nice at all. What you actually do to install one is to use that environment dump: you can recreate the exact state of Portage in between the install and merge stages. You unpack the install image into a separate directory, recreate the environment with which Portage was running, and then do the merge with the usual Portage code, as if it had been compiled from source. So it's really hacky.

Excellent. So which one would you like to see? Should I show what bootstrapping would do and then jump to an already bootstrapped system? Stage 2? Okay, something like that, so we can get to what I have actually worked on.

Someone asked: how much of this do you need to know, by the way, in order to use Gentoo? You need to know how to use a shell, and you need to know that there is gentoo.org with documentation on it. The handbook is very good: it explains what you are doing and why you are doing it, and gives you suggestions about sensible defaults for your first install. Well, Gentoo is known for having good documentation and good documentation writers, but how much do you end up knowing afterwards? You end up knowing a lot; that's another good thing about Gentoo. In my experience, I started back with Red Hat, because in Mexico we love Red Hat because of Miguel de Icaza, but that's another story. And it was: no, I have troubles, I want to do things, I want to understand what's happening to my computer, not just do three clicks and pray that it works, with no idea of what happened otherwise. So I tried Red Hat, I tried SUSE, and no, that was not for me.
Then I found out about FreeBSD, where you can customize things; there was good documentation as well, and you know what you are doing, what's happening to your system. But the problem is that FreeBSD had, and still has, very bad support for cheap, crappy, sometimes half-broken hardware. I wasn't able to get some things working under it, so I was switching to FreeBSD for some stuff and back to Windows for other stuff, and so on. Then someone gave me the tip about Gentoo, and it was really wonderful. The first time I installed it I was completely lost: what should I do? There are so many configuration options that it's absolutely impossible to get it right the first time. But the good thing is that because you do everything so manually, you know exactly what you are doing to your system. So when you see something is broken, you go: aha, I did something related to that at that stage; you go back to the handbook, which is the install manual, and see: aha, it was related to this file, let's see what happened. Because of that it's really easy to learn how things work. Someone asked whether that means you end up having to know how every piece works; no, and I don't want to. It's more that you know how the parts of your system work together, not the internals of specific packages.

Excellent. So this is basically a stage install. It's quite similar to the manual installation in the appendix of the Debian installation guide, where you use debootstrap or cdebootstrap to generate your chroot; here we do it by unpacking the stage tarball and then continuing most things manually.

Someone asked: do you guys compile for size? You can. Does Debian compile for size? Sometimes. One problem in Debian is that every translation gets installed; if you get around that and only configure the translations you want, you can slim things down a lot. These are the settings I have on this laptop, I'll show them to you: you have the basic information, your CHOST, your CFLAGS and so on, and you have LINGUAS, which sets the list of locales that many packages support. Because it's applied at the configure stage, you select at compile time which translations get built, so you don't even build the message catalogs for the others. These are the LINGUAS this package supports; I have only English enabled, so the others don't get installed, and we don't waste time on them at compile time either. If upstream's build system supports that, fine; otherwise we just build everything and work with whatever that upstream version provides.

Okay, excellent. Once you have unpacked that (which I will do in a slightly different way), you can chroot into it. Basically you have the system installed already; you do all the updates and so on, and then you need to configure Portage, which is the important part for me: you set up a profile, which is basically a symlink to a profile under the Portage directory, and you set up your make.conf options, CFLAGS, whatever you want, plus local overrides that are not global but only apply to some packages.
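The settings being shown are the ones that live in Gentoo's make.conf. As a rough illustration (the values below are invented, not the ones on the speaker's laptop):

    # /etc/make.conf (in newer Portage versions also /etc/portage/make.conf)
    CHOST="i686-pc-linux-gnu"
    CFLAGS="-O2 -march=pentium-m -pipe"
    CXXFLAGS="${CFLAGS}"
    MAKEOPTS="-j2"
    USE="X gtk nls -kde -qt"
    LINGUAS="en"                 # only build/install English message catalogs
    ACCEPT_KEYWORDS="~x86"       # follow the unstable branch on this architecture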
That's the important thing that has to be done manually for every Gentoo system: unless you want the defaults, this is where you customize, and that's where the manual work is. After that you install the kernel, you configure fstab, hosts and so on, you install the services you need and the bootloader, you reboot the system, and you have a working Gentoo system, like the one this laptop almost has, with exceptions such as that broken gcc-config thing.

Excellent. So how do we maintain Gentoo? We update the Portage tree: emerge --sync fetches a new version of the tree. Then we update the system: we can update just the packages we explicitly selected, the ones that end up in the world file, or we can do a deep update, which updates all their dependencies as well. We can check whether USE flags changed, and sometimes check for ABI breakage, or we just leave it, and if we find something broken we rebuild that package at that point; there is a tool for that. We merge configuration changes, and we repeat all of this every now and then.

Someone asked: if you did that every day on a little laptop like this, how much of your daily usage of the system would it take? I am running unstable, so packages change quite often; on average it's maybe up to an hour a day, not really much. If you use the stable tree it's much less. So it's actually not too much, but it's still not nice to have to do it, especially on laptops, which you don't have powered on all the time.

Another question: you said that Gentoo has a live tree and doesn't do releases, so what's the difference between stable and unstable? In every ebuild we have to specify the KEYWORDS variable. If an architecture keyword carries a tilde prefix, the package is unstable on that architecture; without the prefix it is stable on that architecture; and if it is marked negatively, it is known to be broken on that architecture and you shouldn't even try to install it there. That's how we do it. But this one is marked stable on amd64 and x86, so why do you run unstable? I run unstable for the packages that have a newer unstable version; if the most recent version is a stable one, I obviously run the stable one. As someone pointed out, there aren't really separate trees: there are packages, and a package doesn't always have an unstable version; some have only stable versions and some also have unstable ones, which is closer to Debian's concept of experimental. Yes, something like that.

So my point is: if this is so easy that I was able to write a basic program for it, why should I have to do it manually every time? Maybe I should rather set up some build daemons that do it for me and generate binary packages, Debian packages, so that I can use all the tools that exist in Debian to manage them. So, basically, what do I need to know? What is Gentoo from this tool's perspective? It's just a set of configuration files: which profile is used for a system, which CFLAGS, USE flags, LINGUAS and so on are used for it, and a set of packages that I would like to have on that system, compiled with those settings. Aside from that, everything should be automated. That's what Gentoo is for me at this stage.
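Concretely, the manual routine that the tool is meant to automate boils down to roughly the following commands (the spelling varies a bit between Portage versions, for example world versus @world, but the shape is this):

    emerge --sync            # fetch the latest Portage tree
    emerge -avuDN world      # update the world set plus dependencies, honouring new USE flags
    revdep-rebuild           # rebuild anything left linked against vanished libraries (gentoolkit)
    glsa-check -t all        # compare installed packages against the security advisories
    dispatch-conf            # review and merge configuration file updates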
The world file is the list of packages that you explicitly installed. There are also packages that were installed as dependencies; they are considered less important than the world entries and are managed automatically, so if nothing needs a dependency any more it can simply be removed.

If someone wants to have some fun and tries to cdebootstrap the packages I have built with my bootstrapper on this laptop, from the HTTP server running on this laptop, it will fail, because there is a tiny bug in cdebootstrap: it always tries to install the important (not essential) packages, and in my case that set is empty, so it ends up calling dpkg without any package arguments, which is an error. But you can get a working Gentoo chroot that way, built completely from Debian packages. Someone pointed out that I could fix that just by putting one of my packages into that set; yes, I could just do that, but there are some other small things, so I decided to leave it like this for now and fix it the right way later.

Excellent. At this point, ask me whatever you want about this part; then I will move on to the next part, which is the problems I need to work around and where I would appreciate ideas.

Someone asked: so you actually created a Gentoo chroot by packaging the Gentoo tools and everything in Debian style, not the other way around? Exactly. My script uses emerge, going up to the install stage. In truth that part of the bootstrap is not fully automated yet; I bootstrapped it in a slightly different way, because I needed to start from something. The idea is that my tool calls ebuild on the ebuild file with the install command, which produces an image directory with what should go into the root filesystem, and another directory with all the metadata that would go into /var/db/pkg, the database we just looked at. From the image I can build the data tarball, and from the metadata I have enough information to extract the dependencies, the package names and so on, so I can create a control file and everything else needed to produce a .deb. Someone asked whether that means I create only binary packages, without a corresponding source package: yes, at this stage they are binary packages only.
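A very rough sketch of that pipeline, as described (package names, paths and the control fields are illustrative; the real tool derives dependencies, maintainer scripts and so on from the Portage metadata):

    #!/bin/bash
    # 1. Run the ebuild up to the install phase; Portage populates an image
    #    directory with the files that would go into the root filesystem.
    EBUILD=/usr/portage/app-editors/nano/nano-2.0.6.ebuild     # example package
    ebuild "$EBUILD" clean install

    IMAGE=/var/tmp/portage/app-editors/nano-2.0.6/image        # where the image ends up
    PKG=$(mktemp -d)
    cp -a "$IMAGE"/. "$PKG"/

    # 2. Turn the Portage metadata into Debian control data (heavily simplified).
    mkdir -p "$PKG/DEBIAN"
    {
        echo "Package: nano"
        echo "Version: 2.0.6"
        echo "Architecture: i386"
        echo "Maintainer: gentoo-to-debian converter"
        echo "Description: generated from the Gentoo ebuild app-editors/nano"
    } > "$PKG/DEBIAN/control"

    # 3. Wrap it all up as a .deb.
    dpkg-deb --build "$PKG" nano_2.0.6_i386.deb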
Another question: someone wanted to generate a live CD to run on a 286 or similarly tiny machine with very little memory, running just one program; can Gentoo help generate a system like that? Yes; the smallest bootable system, a little router image, is about 600 kilobytes in size if you use uClibc and BusyBox. For such things you create the cross-compiler, and then you install baselayout-lite, which is made for embedded use: it relies on BusyBox and provides a System V style init with its scripts. You link that against uClibc, cross-compile BusyBox into your target root with that compiler, cross-compile baselayout-lite, and you can make an image from that and be done. Whether that helps an actual 286 is another question, but maybe someone still finds it useful, if only for fun.

Okay, so, the problems I ran into. The first one is a fairly trivial one: because we have categories in Gentoo, we can have duplicate package names, category-a/foo and category-b/foo; and because we are allowed to have more than one version of a package installed at the same time, and we use that, we also end up with duplicate package names that way. For the categories case it's trivial. Someone suggested handling the versions case by using the soname the way Debian does, generating a package with a different name, and that is exactly what I do: my Debian package name is the package name from Gentoo with the soname appended, unless it is the default soname. So that is reasonably trivial. For the category clashes I can probably keep a small manual mapping, because the cases are all simple, like sudo versus the vim plugin for sudo, which I would just rename to something like vim-sudo.

Then there are the slot mappings I mentioned, which are a dependency problem as well, especially at the bootstrap stage, because I do not yet know what the package name of a package I have not generated yet will be, so dependencies need care. And there is another problem: in some cases a package may work equally well with several slots, and if I depend on just one slot, my dependency is not as flexible as it should be. So this is not a perfect mapping, but I don't actually think it is a problem, or at least I hope not, because I'm not going to repackage all of Gentoo into Debian; I'm just going to repackage the stuff that I use. From that point of view I already know against which packages things were built, and I just depend on those. If that changes during an update I simply rebuild; and because it is all local, there are no mirrors involved, I have a build system on my local network, so one more rebuild of a package is not a problem. That should work, as far as I understand.
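A toy sketch of that naming scheme (not the actual tool; the category mappings and example packages are made up for illustration):

    # Map a Gentoo category/package plus soname onto a Debian binary package name.
    gentoo_to_deb_name() {
        local category=$1 pkg=$2 soname_suffix=$3 default_suffix=$4
        local name=$pkg
        # append the soname unless this slot is the default one
        if [ -n "$soname_suffix" ] && [ "$soname_suffix" != "$default_suffix" ]; then
            name="${pkg}${soname_suffix}"
        fi
        # duplicate names across categories are disambiguated by a small
        # hand-written table (hypothetical entry):
        case "$category/$pkg" in
            app-vim/sudo) name="vim-sudo" ;;
        esac
        echo "$name"
    }

    gentoo_to_deb_name sys-libs  db   4.2 ""    # -> db4.2
    gentoo_to_deb_name app-admin sudo ""  ""    # -> sudo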
The next one is a very big problem, where I will have to do some really hard work: starting with an unsupported architecture. This is where I need help from you guys. In Gentoo we have tons of architectures, and many flavors of each: ARM little or big endian, soft or hard float, MIPS in its various flavors, with uClibc or glibc. What do you think would be a good way to map my huge set of architectures onto Debian architectures? And how problematic is it if I just don't distinguish, say, soft float from hard float, call everything that is ARM "arm", and simply make sure I point at the right repository?

The suggestion from the audience was to do something like what was considered for Emdebian and use a dependency for it: have every package that is built for ARM soft float depend on something like "arm-softfloat", which is provided by one package in the core. That way you cannot mix things, because you basically have libc depend on, or provide, that marker, and two different libcs with the same name won't both install. My worry was that this makes things harder for the user, but the answer was that the user just sees that a package depends on arm-softfloat or whatever and that it is provided by something; it's done with Provides and Depends rather than with different package names for soft float and hard float. You would still have one package, the libc, which is always named the same, and it would provide arm-softfloat or whichever variant; that would also prevent these debs from blowing up real Debian systems.

A related question: do I have in mind creating Debian packages of mixed flavors on the same system, say some binary packages built for soft-float ARM and some without, installed together? I plan to support whatever Gentoo supports. Right now Gentoo does this for amd64 and x86: we do something really ugly that we shouldn't do, because we haven't figured out the right way; we have these emul-linux-x86 packages, which are x86 binaries provided for amd64, and such a package is still named as an amd64 package and not x86, which is kind of what happens on amd64 systems today, because you don't have any other means of getting a 32-bit libc and 32-bit binaries onto a 64-bit system. The harder case really is mixing soft-float and hard-float binaries on the same system, and there the Provides trick alone doesn't help: you could have a package provide both variants somehow, but that is exactly what it is meant to prevent; you would need different package names in that case, and probably a different dynamic linker for each, which is hard as well. So yes, I hope to do that eventually, and I really hope Gentoo solves the problem of how to do it properly anytime soon, but I don't think it will be soon, and I personally do not have a solution. My main goal is to first provide working packages of Gentoo as it is, and then we can start thinking about how to extend that.
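As a sketch of that Provides/Depends idea (all package and virtual names below are invented for illustration): one base package advertises the ABI variant it was built for,

    Package: libc6
    Architecture: arm
    Provides: arm-softfloat-abi

and every package built for that variant depends on the marker, so a hard-float build of the same library (which would provide arm-hardfloat-abi instead) cannot satisfy it and the two sets cannot be mixed:

    Package: foo
    Architecture: arm
    Depends: libc6, arm-softfloat-abi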
Another question came from someone who works with an organization that refurbishes donated hardware for disaster victims: would this be a way to generate systems for that, and is there anything that would be a problem? Certainly it could be used, and probably not many things would be a problem, but Gentoo is not some magic thing: it just exposes the functionality that other packages provide. As long as the Linux kernel supports something, it will run with Gentoo; if the Linux kernel doesn't support it, it will not run with Gentoo, unless you patch the kernel yourself, and then it would be really nice if you sent that patch upstream. We try to do as much as upstream makes possible and to expose that to the user, but not much more than that. So what you could get out of this is optimized binaries, packages optimized for the hardware you have, so you might see some performance benefit and you might be able to run on somewhat older hardware, but it really depends on what upstream supports.

The follow-up was that some old versions of Linux, from roughly the same time frame as Windows 3.1, did run on such machines. You can, absolutely without any problem, create a profile that is based on an old version of the Linux kernel and on the version of glibc you would need with that old kernel, without disturbing the rest of the Gentoo tree, so that is perfectly possible, provided you can still find the old sources. For most cases the limit is glibc: a very old glibc will break some things, and you need access to the sources of the world from around that time; if you can get an image of some distribution from that period, you would be set. Of course, then you could just use that distribution, but the point was to use it only for the bootstrapping phase, because a newer kernel may not work on that hardware any more, and a newer glibc may not either, so you would have to stick with those versions forever; and for bootstrapping you would actually need an old binary glibc and an old binary gcc. So yes, you would bootstrap with the old sources and stick with the old sources, though not necessarily from old binaries, since you can simply cross-compile for i386 from another machine. I'm not sure our current glibc still works on hardware that old; there are things that have built up on each other over time, and you cannot quite go back and get exactly what the old versions did. There was a little side discussion about whether Linux still supports the 286 (it doesn't, it needs a 386 or later), while Windows 3.1 could run on a 286, so there are scenarios where Windows 3.1 runs and Linux never will; 386 machines are fine. Anyway, let's get back to the real topic.

There are some packages that do postinst magic, which you are all perfectly aware of; probably the most common case is creating users, or doing some chown or chmod, things like that. What I was thinking would probably be the best way to handle it is to build the package on top of unionfs: before I merge the package from the image where it was installed into the real root, I start a new unionfs branch, and then I just look at the differences: okay, the passwd file was modified, let's see what was done to it.
I think that would cover the majority of scenarios: the passwd file, some permissions, some special directories, things like that. For the rest I would probably have to write snippets manually for those kinds of packages and just copy them into the generated postinst scripts. Any ideas, is it possible to do this better, without that black magic?

The obvious alternative, parsing the ebuilds themselves, has problems. First, the ebuilds may be far more specific than what needs to end up on the Debian side: mine is a binary-based distribution, so I don't need many of those details and I can be more relaxed about dependencies too. Also, I'm making the Gentoo-ish parts optional: for each package I currently create two Debian packages, the real package and a package-vdb companion that carries the Gentoo database entry. If I install both, I can at any point start using emerge on that system; if I install only the real package, which contains the actual program, I cannot use the Gentoo tools, and that should be fine. The second problem is that in Gentoo, ebuilds have inheritance, multiple inheritance even, and the eclass scripts they inherit change from time to time; it would be really hard to automatically extract just the right bits from an ebuild and all the eclasses it inherits. That's why I am not considering extracting this information from the ebuild itself.

Someone asked: what about making a special package in the base system that provides the Portage classes you need, so the postinst could actually call ebuild and Portage? That's actually trivial to do, but I would like to be able to do without it, so that I can have a real Debian system without the Gentoo machinery; I want to use this for embedded systems where I don't want a compiler and all the rest, so I would prefer something more flexible than shipping a huge Gentoo system with all the tools you would need to build everything. And even if I bundled it, there is still the problem that I am not able to automatically parse all that multiple, sometimes diamond-shaped, inheritance. Someone else was thinking out loud about LD_PRELOAD tracing and similar tricks, but agreed it doesn't really get you there. To be clear, the idea is not to apply the diffs blindly, but to extract semantic knowledge from the diff: if I see that the passwd file was modified, I know that a user was created, and I know how to express that change properly. That's the idea.

There was also the question of how Gentoo's own binary packages handle this. That's what I described earlier: they just extract the metadata, recreate the bash environment, and keep going as if it were a normal source build; and that breaks if your eclasses have changed in an incompatible way. You could theoretically do the same from a .deb package, bundle that part of the ecosystem and call it, but it would still break when eclasses change, and then I would need the complete Portage tree on each system, which is not fun. Okay, if you come up with better suggestions, my email is around.
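To illustrate the unionfs/diff idea in the simplest possible terms, here is a toy sketch limited to new users in /etc/passwd; it is only meant to show the shape of "diff the system state, then emit postinst commands", not how the real tool works:

    #!/bin/bash
    # Snapshot /etc/passwd, merge the package image into a scratch root
    # (unionfs branch, chroot, whatever), then turn the difference into
    # postinst commands.
    SCRATCH=$(mktemp -d)
    cp /etc/passwd "$SCRATCH/passwd.before"

    # ... merge the package image into the scratch root here ...
    # afterwards the (possibly modified) file is at "$SCRATCH/etc/passwd"

    if [ -f "$SCRATCH/etc/passwd" ]; then
        comm -13 <(sort "$SCRATCH/passwd.before") <(sort "$SCRATCH/etc/passwd") |
        while IFS=: read -r user _ uid gid _ home shell; do
            # a line only present afterwards means a user was created
            echo "adduser --system --uid $uid --home $home --shell $shell $user"
        done >> postinst.snippet
    fi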
Okay, there is another problem: kernel images and modules. The way we do it now is that we provide sources for a huge number of different kernel flavors: special mips-sources, vanilla-sources, suspend2-sources, gentoo-sources (which are the sources with Gentoo's patch set), and at various times there were ck-sources, mm-sources and so on. Usually you then do a manual installation of the sources, configure the kernel by hand and build it, but that is no good for a completely automated system, and each flavor has a different patch set. There is also genkernel, which will generate a kernel that boots on most systems: a huge kernel with piles of modules, but still not nearly as slimmed down as the Debian kernel, which builds a nice initrd, probes the hardware, includes the modules needed to reach the hard drive, and then does the rest of the magic. I personally like that way of doing things, and I would like to write an automated tool that takes gentoo-sources, or whichever flavor, and builds all those images. I think it might be possible, but right now I don't know how to do it and need to read more. It would be nice if I could have something like a gentoo-kernel-image or gentoo-kernel-binary package that has gentoo-sources as a dependency, includes the build script, and does everything for me from within a Debian package; then I would just package the result as a kernel package and be done with it. I'm not sure yet what the best way to do that is.

From the audience: what are you trying to achieve, to get a usable Debian kernel image out of each of the kernel flavors you have? Then why not use make-kpkg? There is a tool, make-kpkg, that can produce from any kernel source tree the image package, the headers package, the source package and the documentation package, so you would have your .deb packages and be done with it. My question back: how friendly is that for embedded use, for customizing the kernel, choosing which modules to build and so on? The answer: it takes a .config file. So it's basically like genkernel. Okay, I could reuse it the same way as genkernel, but I would like to explore a little further: I would like not to have to create a kernel by hand for every possible .config flavor, but rather to say, if this is a desktop, this set of options should be in there, and so on; ideally kernel packages would behave like a Gentoo package with a set of USE flags, where the minimal flag gives you only what that architecture needs to boot, for embedded use. Another suggestion from the audience: maybe use a regular kernel image package and leave it modular, but strip the modules out, so you create a .deb package per module, or per set of modules, each depending only on your image package, which itself has support for just the filesystem, disk access and the initrd, something like that.
That way, if you want the minimum number of modules, you get a compromise between having just the image with the built-in support and having a maximally configured kernel with every module available: you install supplemental module packages only when you want another module. Extending that reasoning, another suggestion was to prune the kernel tree down to the bare basic kernel and take the modules out, so they can be built as packages handled with module-assistant. My worry is that there is probably no way to do that automatically and keep it updated for new kernel versions; those packages are written manually, which is what I would like to avoid. The counterpoint: with module-assistant you do the build on the target machine, so you would trigger the compilation in the postinst, if you are okay with that; you only provide the source package, and module-assistant compiles it on the target when you want it, so you do not need to create hundreds of thousands of packages. That is a good point for Debian, where you really do need to build on the target machine, because Debian is huge and provides quite a few kernel versions, so building all the possible combinations would take ages. But here I am maintaining almost one system; I am doing a tiny distribution, very customized but tiny, so I need only a very few flavors, and it would be nice to have them built on the build server. The easy option, I was told, is still to create packages for specific modules and depend on those per system, or to ship the configuration alongside; either way, module-assistant would work for this.

Then came the remark that since I am using a Gentoo-ish system on the target, it would probably have GCC and everything installed anyway. Absolutely not necessarily. In Gentoo we have DEPEND, which are the build dependencies; RDEPEND, the runtime dependencies; PDEPEND, which are dependencies on packages that themselves need this package in order to be built; plus virtual provides, which is how we do our conflicts, but that's another topic. My Debian packages only have run dependencies and post-install dependencies; they do not depend on any build tools at all, because they are already binary, so there is no need. If someone does that cdebootstrap experiment, they will end up with GCC, make and several flavors of autoconf and automake anyway, but only because those are system packages, the equivalent of essential packages, for the profile that was used to create the stage file I am repackaging. Someone suggested something like configure-time dependencies, so you could pull in GCC and strip it out afterwards; the package format doesn't support that, though it could get there.

So, what do these dependency types mean? DEPEND is the set of packages that need to be installed on the system for this package to be built; that maps to Debian's Build-Depends, exactly. RDEPEND is what the package needs at run time, so together these two roughly give you Debian's Depends for a source-based system. And PDEPEND is a hackish way to work around situations like a package that uses parts of itself for building: you can sometimes split what is really one package into two, and make the part that is needed for building the other part PDEPEND on that second part, so the first part gets installed first and the second part can then be built against it.
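In an ebuild those three variables look roughly like this (the package names below are just examples):

    DEPEND="dev-util/pkgconfig
            sys-devel/flex"        # needed on the build host to build this package
    RDEPEND="sys-libs/zlib"        # needed at run time
    PDEPEND="app-misc/foo-data"    # may be installed after this package; used, among
                                   # other things, to break circular dependencies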
Someone asked whether PDEPEND is then like Build-Depends plus Depends; no, it isn't. Someone else thought it was more like the Pre-Depends we have in Debian, where a package must be fully configured before another one can be set up, but there is no exact mapping: the problem PDEPEND solves only arises when you are dealing with source, so there is no exactly equivalent situation in Debian. There is a similar situation in Emdebian, where you need to identify the dependencies that the cross component needs during the build, as a cross package. We actually do not do it exactly that way, because doing it that way gives you circular dependencies in the toolchain; we have other hacks for building cross-compilers: we just build things in the right order, basically hand-coded, with the right USE flags. So no, those toolchain dependencies are not really listed anywhere in the interface; the toolchain is simply built in stages with the right USE flags. There is that problem for me as well, and I need to look at the possibility of recording the configuration of our packaging in some additional, not very Gentoo-ish way, maybe as a package per model or something like that, if I really need to express those kinds of dependencies.

Another question: since I'm doing some things differently anyway, do I want to stick with the Debian formats? I would like to stay as close to Debian as possible, because there are far too many tools used on Debian and Debian-like systems that take those formats for granted, and the fewer tools I break, the better for me. The suggestion was that if there is only a small set of packages that needs something like a PDEPEND, I could add an XS-style field to the control file, some XS-pdepends or similar. My first reaction was that nobody will be able to parse it, but of course everyone else will simply ignore it, and nobody except me needs it, so it's not actually a problem. In fact I'm already adding many Gentoo things to the control files in such X- fields, so my users will be able to see how a package was compiled without digging around: the USE flags that were used, the homepage of the package (I still have no idea why Debian doesn't have that field in its control files), things like that, for me to parse.
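Those extra control fields might look something like the stanza below; the field names and values are purely illustrative, since anything outside the standard set would be carried as user-defined X- fields anyway:

    Package: nano
    Version: 2.0.6
    Architecture: i386
    Depends: libc6, libncurses5
    XS-Gentoo-Use: nls spell -debug
    XS-Gentoo-Cflags: -O2 -march=pentium-m -pipe
    XS-Gentoo-Homepage: http://www.nano-editor.org/
    Description: generated from the Gentoo ebuild app-editors/nano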
That's about it. There is also the possibility of restarting services automatically: if I want to do it in a really Debian-ish way I would have to stop the installation at some point and ask the user, as with the configuration-merge stage, so I need to decide whether I prefer strictly Debian-ish behaviour, fully automatic behaviour, completely unattended updates, or maybe something in between.

Someone asked whether I have thought of contributing something back. Yes; there are some things in the packaging tools that it would be lovely to see extended, and I will file bug reports, which would also make my packages better. The real question was whether there is any use case where Debian could benefit from what I have done. Yes; I have been talking with some of the people present here about wacky ideas. Gentoo's dependencies are a lot more flexible, and Debian could really use some of that; it would be lovely. For example, in Gentoo we can have nested OR dependencies and that kind of thing, while in Debian you have to write out a crazy string with all the possible permutations and it becomes completely unreadable. It would be nice to have that kind of dependency expression as well. So there are some things that could improve Debian, and I hope they will at some point.

Any more questions? Thank you for being here, thank you for your attention, and I welcome more questions now, later, or whenever you can reach me. Thank you very much.