Hello everybody, my name is Michal Schorm. I work at Red Hat as a software engineer. I'm a package maintainer for Fedora and RHEL, and I take care of the MariaDB and MySQL databases. Today I will be talking about installing Fedora from scratch. When you can't use Anaconda for any reason, you can try this approach. I will cover the basic steps you need to install basically any operating system; you can install any other Linux distribution in a very similar way. I started this project because I wasn't able to use Anaconda, which was the default installer for Fedora at the time. I found it really difficult to use Anaconda on older hardware and on some not-so-common hardware. Nowadays, three or four years later, the situation is much better, and I haven't found a device Anaconda won't work on for a while, so this might not be needed anymore. But it's still a nice overview of what you actually need to do to get a system running. Along the way, I found quite a number of smaller and bigger issues, from minor bugs in the DNF installation to situations where specific versions of the kernel and systemd wouldn't work together and the whole system would crash. I worked on the installer scripts for about a year. There were long stretches in between when I was stuck at a blocker bug that I first had to resolve with the people maintaining the affected components, so it was quite a long run to create a working installer. As I already said, Anaconda works pretty well nowadays. The one drawback might be that if it crashes, and you don't know Python or don't want to dig deep into it without a good idea of what the error message means, it can be difficult to troubleshoot. And since we are talking about an installation, you might get into a situation where you don't know what to do next, because you don't have a running system yet, so there's not much you can do.
Yeah, some other distributions, like Arch Linux, Gentoo and so on, are usually installed this way, so I'm not really discovering anything new; I'm just trying to get the same thing working on Fedora. The hardware I used this on primarily is this cute single-board computer called UDOO, and for me specifically it's cool because it has an Intel processor inside. I had issues with Fedora on ARM years back, so that really helped me get everything working. You can check it out on the UDOO website; they make a lot of really cool boards like this. What you will need is a Fedora installer USB or something similar. We need to be able to run Bash and to install packages, so RPM and DNF or something similar. My install scripts you can find on GitHub; there will also be a link at the end of the presentation. This is how the folder looks; I can show it on the device. There are only quite a few scripts. I have two configuration files where I set everything I need: how the device should be partitioned, which partitions should be there, which operating system and version, which packages, and so on. I will just start it now; it seems to be working. It will install a lot of packages, so we will get back to that later. Okay, I will now try to go through the steps you need to take and point out some pitfalls and some difficult parts you need to understand to get your system working. First, you need to partition your disk, which would be easy if there were only one way to do it, but on a new device you will find that the layout can be either MBR (BIOS) or GPT, and the partitions can differ based on that. You may find that you can use several combinations, for example the MBR boot scheme on a GPT-partitioned disk, but you will need some additional partitions, which must be formatted or typed in a specific way, and most of them have partition type codes that you have to use to get the partitioning done.
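To make the type codes concrete, here is a minimal sketch of such a GPT layout written as an sfdisk script. The sizes and the layout are example values of mine, not the exact ones from my scripts, and to keep it safe to try, it runs on a scratch image file; on real hardware you would point it at the block device (e.g. /dev/sda), where the same commands are destructive:

```shell
# Demonstrate on a scratch image file; on real hardware this would be the
# block device (e.g. /dev/sda), and the same commands wipe its contents.
truncate -s 100M disk.img

sfdisk disk.img <<'EOF'
label: gpt
# 1 MiB BIOS boot partition, so GRUB can boot a GPT disk in legacy/BIOS mode
size=1MiB, type=21686148-6449-6E6F-744E-656564454649
# EFI System Partition (tiny here to fit the image; 512 MiB is a common real size)
size=48MiB, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
# Linux filesystem partition taking the rest of the disk
type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
EOF

# Print the resulting table back as an sfdisk script to verify it
sfdisk -d disk.img
```

On a real disk you would then create the filesystems, for example `mkfs.vfat -n EFI /dev/sda2` and `mkfs.ext4 -L root /dev/sda3`; note that even the label flag differs between the mkfs binaries (`-n` for vfat, `-L` for ext4).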
You can find those, for example, on the Arch Linux wiki, which has nice pages about partitioning, so the type codes are there, and there are nice Wikipedia pages on that too. You just need to know what you actually want to have partitioned. Before we start, we create a folder, and that will be our chroot; we will install the new system inside the chroot. Before the installation, we should mount the /proc, /sys and /dev directories inside it, because many installation scriptlets rely on what's inside them. There was a bug, I think it is resolved now, only a few weeks or a month back, where you had to check the SELinux contexts on the mounted directories. What I'm using is calling DNF with --installroot, which tells DNF: install the packages, but install them inside that folder, take it as your new root. But the behavior can be tricky, because if there is any configuration inside the chroot, DNF will use that configuration and those repositories. So if you have an empty chroot and you call dnf --installroot, it will take the host's configuration and repositories, because inside the chroot there isn't anything yet, and it will install the base packages, for example. Then you call the same command again with different packages to install, but now you find out the behavior is different, because DNF has switched to using the configuration inside the chroot. You have to create the /etc/fstab file, and preferably copy /etc/resolv.conf from the running system; for me that's the easiest way. Sometimes I even got into situations where I chrooted into the system but the kernel or GRUB wasn't installed. If you want to check that you really have everything installed, you may need a line like this one. So, yeah, we are still running here. I was using the sfdisk utility to partition the disk and the mkfs utilities to put filesystems on the partitions.
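Put together, the chroot preparation and the DNF calls look roughly like this. This is only a sketch under my assumptions: the chroot path, the release number, and the fstab labels are example values, it needs to run as root on a Fedora host, and it assumes the target partitions are already mounted under the chroot path:

```shell
#!/bin/bash
# Sketch of preparing the chroot and installing the base system into it.
# CHROOT, the release number, and the labels below are example values.
set -euo pipefail

CHROOT=/mnt/newsys   # the new system's root partition mounted here

# Mount the virtual filesystems first; many package scriptlets expect
# /proc, /sys and /dev to be populated inside the chroot.
mkdir -p "$CHROOT"/{proc,sys,dev}
mount -t proc  proc  "$CHROOT/proc"
mount -t sysfs sysfs "$CHROOT/sys"
mount --bind   /dev  "$CHROOT/dev"

# First call: the chroot is empty, so DNF silently uses the HOST's
# configuration and repositories.
dnf -y --installroot="$CHROOT" --releasever=42 group install core

# Later calls behave differently: DNF now picks up the configuration and
# repositories that the core group installed INSIDE the chroot.
dnf -y --installroot="$CHROOT" install kernel grub2-pc

# Give the new system a working resolver and an fstab (labels are examples).
cp /etc/resolv.conf "$CHROOT/etc/resolv.conf"
cat > "$CHROOT/etc/fstab" <<'EOF'
LABEL=root  /          ext4  defaults    1 1
LABEL=EFI   /boot/efi  vfat  umask=0077  0 2
EOF

# Double-check nothing essential is missing before rebooting:
rpm --root="$CHROOT" -q kernel grub2-pc
```

The final `rpm --root … -q` line is the kind of check I mentioned: it queries the RPM database inside the chroot, so it fails loudly if the kernel or the bootloader never made it in.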
The tricky part is that every mkfs binary works in a slightly different way, and its arguments have different meanings for each filesystem. I was using labels on all of my partitions, and I saved that information into the fstab file, so at the end I copied the resulting fstab file into the chroot system. What is good is that you can install groups with DNF. For example, `dnf group install core` is cool because it gets you mostly everything you need; you only need to check the kernel and GRUB at the end. You usually need to set up a root password, and it's tricky, because the passwd and chpasswd utilities don't work well inside the chroot. So right now I use sed to insert the hash of the root password into /etc/shadow. I am trying to find out why those utilities don't work as I expect, but it may take some time. Also, you need to tell the system to relabel all the files upon the first boot. So this is a quick overview; I think that's mostly all I had. I can show on the device that it will be working, or at least it worked every time I tried it. But it's still installing. So right now, if you have any questions, you can try shooting them at me.

[Audience] I have a feeling that you're returning to basics; this is how it looked back then.

Can you repeat that, please?

[Audience] I have a feeling that you're returning to basics; this is how it was done back then. Is that the point, or why?

Well, the question was that I'm returning to basics, that I'm rediscovering how the Red Hat installer looked 20 years ago, and how else should it look. I tried to minimize everything the Fedora installer did. At the time, I wasn't able to read a single line of Python, so I took a different approach. I was also curious how to do it. I knew that other distributions are usually installed this way.

[Audience] Before you started with all this, what was installed on the device? What was the running system?
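The root-password workaround can be sketched like this. To keep it harmless, the example below works on a throwaway copy of a shadow-style line; in a real run, SHADOW would point at /etc/shadow inside the chroot, and the password is of course a placeholder:

```shell
# Demonstrated on a temporary file; in a real run SHADOW would be
# "$CHROOT/etc/shadow" and you would pick your own password.
SHADOW=$(mktemp)
echo 'root:*:19000:0:99999:7:::' > "$SHADOW"   # locked root entry, as a fresh install leaves it

# Generate a SHA-512 crypt hash of the (example) password.
HASH=$(openssl passwd -6 'changeme')

# Swap the password field of the root line for the hash. The crypt output
# uses only $ . / and alphanumerics, none of which is special in the sed
# replacement text when | is the delimiter.
sed -i "s|^root:[^:]*:|root:${HASH}:|" "$SHADOW"

grep '^root:' "$SHADOW"
```

On the real system you would also run `touch "$CHROOT/.autorelabel"` afterwards, so that SELinux relabels the whole filesystem on the first boot.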
Yeah, so what was the running system before I installed the Fedora? The running system is, well, Fedora. I get it from the Fedora website, the installation images; I'm just not using the Fedora installer that's on them.

What problem was I solving? Mostly the Python errors in the Fedora installer. When I first tried to do something on Linux, I had really old hardware, like motherboards from around 2000. I wasn't able to use Anaconda on any of them. There was always some error, and since at the time I didn't understand Linux, nor Python, nor anything, I was blocked by it. As time went on, the number of devices that Anaconda wouldn't install on was shrinking, but even two or three years back I still had some such hardware at home, and getting Fedora running on it was the purpose for me.

The question was how long it took me to get it all working. Right now the installation itself takes about 15 minutes, mostly because of DNF and the packages, but getting the scripts together took about two years. I started by reading an interesting blog somewhere on the internet, named something like "Fedora from scratch". It showed me the way it should work, but in general it didn't. So I started there, got the basic information about what is probably the way I should take, and from there I started learning all the things. Most of the time I spent doing mostly nothing, because I was blocked by the bugs I encountered on the way.

The question is where I will go next with this. I'm not sure; I would try to get familiar with Gentoo next, maybe, which has some similarities.

[Audience] From the beginning, were you doing a dd of an existing image and then installing on top of that?

No, no, no. That was a joke.

[Audience] Which brings me to the question: why don't you do that?
So why wouldn't you take an existing image, dd it onto the disk, and then just reinstall the packages to fix configuration or architecture issues? This was actually a joke; I have a friend of mine here. I gave the same talk at OpenAlt a few months back, and he was asking me what I'm actually doing, why I just don't dd an existing installation. That's it. Well, that's a very nice question: why wouldn't you do that, and what is the purpose of this approach versus that one? You could dd a very, very minimal image, near a hundred megabytes, you would have a functional system, and you could use DNF to reinstall packages where you have other kinds of issues. On the other hand, you have to have the image first, and I wouldn't know where to take it from. This way, you at least know how it was installed in the first place. Thank you.

[Audience] I understand the principle behind it, because until five years ago Anaconda was really bad; it used to crash all the time. And the Python back then, as you said, it was really awful. I mean, even today the live installer and its GUI have a lot of problems, though nowhere near as bad as it used to be. So it seems like this is a much more complex approach.

So, any other questions? That will probably be all from me. I hope that you got some interesting information or found out something you didn't know. The link is here.