So, I'm an Information Technology student at the Faculty of Information Technology at Brno University of Technology. I'm also an intern on the Desktop Team at Red Hat, and I've been an intern since June 2019. It's been almost two years now; time flies. And among many other things, I'm also a maintainer of Toolbox.

So, the agenda for today: I'll introduce you briefly to next-gen operating systems, what containers are, and what Toolbox actually is; then we'll have a quick, well, not really quick, demo; and then I'll lead you through recent accomplishments and our plans for the future of the tool.

So, the first thing we should ask ourselves is why Toolbox exists. There are many tools out in the wild, so why is there yet another one? Well, maybe you have heard of image-based operating systems. Maybe Project Atomic rings a bell, or you've heard of systems like Fedora Silverblue, CoreOS, and IoT, or even NixOS, CoreOS itself, or its fork Flatcar. These systems are quite different from classic operating systems, and we could describe them as immutable, which is not exactly accurate; you can check out the blog of Colin Walters at blog.verbum.org, where he has an article about this "immutable" term. I will not go really in depth, because that is a topic for yet another talk, but I would describe them as secure, reliable, but clunky, clunky to use. And I can say that, because I've been using those systems daily for almost two years now.

Here's a screenshot from my machine, where I run Silverblue. There is no DNF. There is this tool called rpm-ostree that allows me to work with images, and those images are different deployments of the system. Whenever I update my machine, I need to reboot, and this makes things hard if, let's say, I wanted to quickly install GCC to compile some project. Normally, I would just install the package, compile, and delete it, but on Silverblue this would require at least two reboots. So what's the solution? Containers.
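To make the two-reboot problem concrete, a sketch of what the "quickly install GCC" workflow looks like on a Silverblue-style system, where package changes go through rpm-ostree and each change creates a new deployment:

```shell
# On an image-based system, layering a package creates a new deployment,
# and each new deployment has to be booted into.
rpm-ostree install gcc    # layer gcc on top of the base image
systemctl reboot          # reboot #1 to activate the new deployment

# ...compile the project...

rpm-ostree uninstall gcc  # remove the layered package again
systemctl reboot          # reboot #2 to drop it from the running system
```

This is the friction the talk refers to: a throwaway compiler install costs two reboots, which is exactly what a container sidesteps.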
Containers are a means to isolate applications with their entire runtime environment, but unlike virtual machines, they are much more lightweight, and part of that magic is that containers share the kernel with the host system. The difference is kind of huge: instead of a virtual machine that takes up gigabytes of your disk and needs at least several hundred megabytes of RAM, you have a very minimal container that takes up at most tens of megabytes.

So when I say containers, all of you, unless you're a huge Podman diehard, will ask me: so it's Docker, right? And I say: oh yes, but actually no. What's the difference? Well, we are not really using Docker; we use Podman, which is "a daemonless container engine for developing, managing, and running OCI containers on Linux systems". Yes, I can read; that's the description of the tool from the official Podman site. I would describe it very quickly as an implementation of the Docker CLI. It complements tools like Buildah and Skopeo, and mainly it was implemented with rootless containers in mind. What does that mean? In the past, when Docker showed up, you had to use it as an administrator, as root. So even though you created a container that isolated the app, once there was a security hole and some malicious software inside the container got out, it had access to the whole system, because the container ran as root. If, instead, you have your service in a container that is unprivileged, then even if that malicious software gets out, it will not have greater privileges than the unprivileged user.

So what is the classic usage of containers? They are mainly used as easily disposable entities, which is good for rapid development, so CI/CD. They are used in orchestration; all of us have heard about Kubernetes. And overall, it's about running services: let's say you want to spin up a web server, or even a game server, let's say for Minecraft. You don't want to hunt down all the binaries and whatnot.
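The disposable, rootless usage described above can be sketched like this (the nginx image is just an illustrative example; any service image works the same way):

```shell
# Rootless Podman: run a throwaway web server as a regular user.
# No daemon, no sudo; the container's root maps to your unprivileged UID.
podman run --rm -d --name web -p 8080:80 docker.io/library/nginx

curl http://localhost:8080   # the service answers on the forwarded port
podman stop web              # --rm disposes of the container on stop
```

Even if the web server inside were compromised, the process on the host still runs with nothing more than your ordinary user's privileges.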
You find yourself an image, set it up, and you are good to go. But this is not the only way to use containers. Yes, they are disposable and non-interactive, but can we shift that? Can we go around it? Yes, we can. And that is the reason why Toolbox exists. As we heard, it's an interactive container environment. It's built on top of Podman and brings the concept of toolboxes, which are "pet" containers. These have two attributes: they are persistent and they are integrated. I'll talk about these two points a bit later on. So they are the solution: pet containers, toolboxes.

So imagine I have my system where there's no DNF, and I need to compile something. For that, I need my package manager, because I trust Fedora's packages. So I say toolbox enter, entering a toolbox for Fedora 34. I enter, and look, there is DNF, unlike on my host system. And with DNF, I can do anything.

So what's the catch? Why does everybody use containers in a disposable manner, but not the way Toolbox envisions? Well, once you create a container, its setup cannot be changed. The container is isolated by default, and you can share things with it, let's say a volume, a part of your file system. But once you share it, you cannot unshare it or share anything more. And you can set up mounts, the entry point, the network, seccomp, resource limits; that list goes on and on. So there are lots of things to configure. Is there a solution? Yes, at least partially. First, we can mount as much as possible up front and then cherry-pick, and use a mechanism that allows dynamic change. Yes, that's something I did not mention: apart from mounts, you can also set an entry point, which means that when you start the container, there is some command, some entry point, that is executed every time the container is started. That cannot be changed either. But we can put there some mechanism that is dynamic on the inside, even though we don't change it on the outside.
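The "enter a Fedora 34 toolbox and use DNF" scenario boils down to a couple of commands (flag spellings may differ slightly between Toolbox versions; container names follow the fedora-toolbox-&lt;release&gt; convention):

```shell
# Create a pet container from the Fedora 34 toolbox image, then enter it.
toolbox create --release 34     # pulls the image, creates the container
toolbox enter --release 34      # interactive shell inside the toolbox

# Inside the toolbox there is a full Fedora userland, including DNF:
sudo dnf install -y gcc make    # installed in the container, not the host
```

The container persists between sessions, so packages installed there stay installed, without touching the host image and without any reboot.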
So the key is the initial configuration and the entry point. This is something that Toolbox does, and as the entry point, it uses itself: it calls a toolbox binary that is shared inside of the container. That binary actually sits on the host machine, so when we update it, it gets updated inside of the containers too. Then we mount parts of the host file system, let's say /boot, /etc, /dev, and so on; you can read the list. Those mounts go under /run/host, and if you want to work with them, you simply bind-mount them or symlink them inside of the container. It's all up to you; the resources are available.

Apart from those, Toolbox takes care of setting up a user. That means it maps your current user inside of the container, and to make it more accessible and easy to use, we remove the password. Let me remind you: by default, we use rootless containers, and in those containers you'll never have more privileges than the user creating the container. Then we also set up Kerberos, core dumps, the journal, and so on; there are several things we do. This gives us persistence and interactivity.

For those of you who know Podman a bit more, you can check out this small glimpse of the configuration. You can see all the bind mounts that we currently do; this is something I had in one of my containers. And on the next slide, you can see the entry point: there is the toolbox binary and an init-container command that has several switches. With this, we set up the container.

Okay. So that was the introduction, and now the main part, the main course: demos. I have an extensive list of demos I want to show you, so I'll switch workspaces and we can get to it. Can everybody see my terminal? And if I type something, can you see what I'm writing, or should I make it a bit bigger? Yes. Nobody's complaining, so I think we can start.
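If you want to see this wiring on your own machine, you can ask Podman directly; a sketch (the exact mount list depends on your Toolbox version and host, and "fedora-toolbox-34" is the conventional container name used here as an example):

```shell
# List every bind mount of a toolbox container, source -> destination.
podman inspect \
  --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' \
  fedora-toolbox-34

# Host resources are exposed under /run/host inside the container:
toolbox run --release 34 ls /run/host/etc
```

The /run/host tree is the "integrated" half of the pet-container story: the host's files are always reachable, and it is up to you whether to symlink or bind-mount them into place.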
Those of you who use Podman may know there is this file in /run called .containerenv. On the host, there is no such file, but if I were to run a container with podman run, it would be there. Sorry, different thing; let me just show you Toolbox itself. There is this command, toolbox list, and you can see all the images that I have and all my containers. So I have four different images, and these containers use all of them. If I enter a container with just toolbox enter... because I'm on Fedora Rawhide, which is 35, and there is no image for 35 yet, because Bodhi and Koji are doing something fishy, I'll enter a fedora-toolbox-34. As you can see, my host name has changed; it's now "toolbox". This indicates that I'm inside of a container. So let me just check /etc/os-release. And yes, I'm inside of Fedora 34, the variant is "container image", and it's a prerelease, because I did not update my image after Fedora 34 was branched from Rawhide.

As I was saying before, there is now /run/.containerenv, and yes, the file is there. And because Toolbox is a bit special, we also add a /run/.toolboxenv file. So if you use both Podman and Toolbox, you can differentiate in your scripts, for example when setting bash aliases, which aliases you want in each environment. As an example, I can show you what my .bashrc looks like; even though I use fish, the .bashrc will be enough. There is this little block in it where I just check if the file exists, and these two variables change depending on whether I'm in a toolbox or not.

Now, a different demo will be with GNOME Builder. For those of you who don't know Builder: it is a graphical application for developing software, an IDE meant primarily for developing GNOME graphical applications, but it can be used for other projects too, let's say Toolbox. On Silverblue, I don't have golang.
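A .bashrc fragment along the lines of what the talk describes: branch on the marker files that Podman and Toolbox drop into /run (the variable names and aliases here are illustrative, not the speaker's actual dotfile):

```shell
# Pick per-environment settings based on /run marker files:
# /run/.toolboxenv exists only inside a toolbox,
# /run/.containerenv inside any Podman container.
if [ -f /run/.toolboxenv ]; then
    PS1_TAG="[toolbox] "
    alias pkg='sudo dnf'        # dnf is always available in a toolbox
elif [ -f /run/.containerenv ]; then
    PS1_TAG="[container] "
else
    PS1_TAG=""                  # plain host shell
fi
PS1="${PS1_TAG}\u@\h:\w\$ "
```

Because Toolbox creates both files, the order of the checks matters: test for the more specific .toolboxenv first, then fall back to the generic .containerenv.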
I don't have all the packages installed to actually compile Toolbox, but Builder has integration with Podman, and I can choose my runtime, which is the golang toolbox where I have all my dependencies, so let's just try to build it. As you can see, we use Meson with Ninja as the build system, and it actually does its job. So if you are on such a system, you can install all your dependencies in a container, connect it to GNOME Builder, and just do your work.

What's the time? It's 10:02? Good. A different example would be gaming. A very special game I really love is Dwarf Fortress; maybe some of you know it. This game is a bit special, because it's kind of hard to play and, mainly, hard to configure. There are graphical tools for this, but those are not really usable on Silverblue, at least in my experience. So I downloaded Dwarf Fortress, but I don't have libSDL installed. Well, I have my container. I hope it works. It should. Oh, I hope I did not delete it. It's not "game" but "games". And I hope it works. Yes, it does. As you can see, we are again in the toolbox, and my shell changed to bash, because fish is not installed by default in Fedora images, so Toolbox automatically falls back to bash. So let me try to launch the game. And it's actually starting, and we are in game. Because the game does not like my multi-monitor setup, it scales all over the place, but it is possible to install the dependencies in the container and play the game. And because it's in the container, I can just delete it later if something changes, and I can simply recreate the environment.

Now, I have a point here where I wanted to show you how to get a stack trace, and actually, I can show you. For this, I have a special toolbox prepared, a rootful toolbox. Maybe I did not mention it, but Toolbox can be used in both rootless and rootful mode.
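The gaming demo is really just the named-container pattern; a sketch of it ("games" is the container name used in the talk, SDL2 stands in for whichever libraries the game needs):

```shell
# Keep game dependencies in their own named toolbox.
toolbox create --container games
toolbox enter --container games
sudo dnf install -y SDL2        # the game's libraries live here, not on the host

# ...launch the game from inside the toolbox...

# When you are done with it, the whole environment goes in one command:
toolbox rm --force games
```

Deleting and recreating a named toolbox is the single-command cleanup the talk keeps coming back to: no hunting for stray packages on the host afterwards.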
Rootful mode is mainly good for debugging the host system, because the capabilities inside of the container are the same as on the host. So if, let's say, you want to attach strace to a running process, you wouldn't be able to do that in a rootless container, but you can in a rootful one. That I'll show a bit later on. Now let me enter a container that I have prepared for this. I am in a directory that is not inside of the container, because normally, when I run rootless toolboxes, my home directory is mapped inside of the container into the same location. That does not happen for the rootful use case, because if, let's say, there were ten users, I wouldn't know which one to map inside. So we don't map anyone, and I just have to go back to root's home. So now I should be able to enter, and you can see I'm the root user inside of a toolbox.

So let me clear this. I have a very special core dump; I was looking for it yesterday. It's gsd-datetime. It's been crashing on me quite frequently on Rawhide, and it's been causing stutters of my system. So let's take a look at it. What do we need to do to actually be able to see the backtrace? I'll show you my DNF history. The most important part is to install gnome-settings-daemon and then install the debuginfo packages and GDB. The moment you type coredumpctl debug and, oops, /usr/libexec/gsd-datetime, you'll get GDB, and it will provide you with suggestions of debuginfo packages that you should install to get a meaningful backtrace. So I did; that is the last part of the history, which is quite long and cost me about a gigabyte of disk space. But I don't have to reboot my machine and layer all those packages on top of my base system. So let's open the core dump. And there is a bunch of not-pretty stuff printed at me. I'm not proficient with GDB, so I actually don't really know much about what to do with it.
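The debugging workflow just described, sketched end to end (package and binary names match the gsd-datetime example from the talk; `debuginfo-install` assumes the usual DNF plugin is present, and a rootful toolbox is created by running toolbox under sudo):

```shell
# Create and enter a rootful toolbox for debugging the host.
sudo toolbox create --container debug
sudo toolbox enter --container debug

# Inside: install the crashing component, GDB, and the debug symbols.
dnf install -y gnome-settings-daemon gdb
dnf debuginfo-install -y gnome-settings-daemon   # can cost ~1 GB of disk

# coredumpctl picks the host's core dump up through the shared journal;
# GDB will suggest any further debuginfo packages it still needs.
coredumpctl debug /usr/libexec/gsd-datetime
```

All of the debug tooling and the gigabyte of symbols stay in the container, so nothing has to be layered onto the host image and no reboot is needed.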
But what I do know is that I can do "thread apply all backtrace", and actually what I see is not that bad of an output. I'll zoom out a bit to be able to see something. Let's look at this line here: there is the gsd timezone monitor, and it checks the settings; something is happening there. And right now I can see what is going on, even though I'm inside of a container and debugging software that is on my host machine, which is kind of dope in my eyes.

So while RHEL starts, do you have any questions that I may answer? Well, RHEL 8.3 does not have Toolbox preinstalled, so I had to build it locally and put it into my path. But 8.4 will have Toolbox preinstalled. Let me switch to root, and I have a toolbox prepared here. In the meantime, I'll launch a very simple process here. When I enter the toolbox now, which, by the way, is not a Fedora image but a UBI image; on RHEL, we have UBI images mapped to the system. Now, let's say I want to get the PID of watch. I can run strace right now, because I'm root. So let me get the PID of watch, strace -p, and as you can see, every two seconds we get a bit of a bump, because watch runs every two seconds. This wouldn't be possible in a rootless container; I would just get an error message that the user does not have sufficient capabilities to use ptrace.

Okay, just one more thing. When I'm back to my user, you can see that when I type toolbox list, which shows all my images and all my containers, I have a RHEL toolbox, but I also have my Fedora toolbox. So if I quickly show you my os-release now, you can see I'm on RHEL 8.3. But if I enter my Fedora toolbox of release 33 and read /etc/os-release there, you can see I'm inside of Fedora 33. So even if you are running RHEL, you can use this to get an integrated environment to compile, or to do anything you do on Fedora but cannot do on RHEL. And that is all for the demos; I hope they were clear and easy to understand.
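The strace demo condensed into a few lines (a sketch; `watch date` is just a convenient process that wakes up every two seconds):

```shell
# On the host: start a trivial long-running process.
watch date &

# Inside a rootful toolbox (entered via sudo), ptrace against host
# processes works, so strace can attach:
strace -p "$(pidof watch)"
# In a rootless toolbox the same attach fails with a permissions error,
# because the container lacks the needed ptrace capability.
```

The burst of syscalls every two seconds in the strace output is watch firing, observed on the host from inside the container.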
So, before I go to the questions, I'll just go through our recent accomplishments. Toolbox will be shipped in RHEL 8.4. We enabled the rootful use case; in the quite recent past that was not possible, because we relied on a piece of software that did not work. We added support for RHEL, respectively for UBI images, which also lays the foundation for adding support for other distros, which is something we want to do in the future: Ubuntu, Debian, openSUSE. It doesn't matter which image comes our way, we will try to use it. We added Toolbox-specific tests to Podman; in the past it happened that some change in Podman broke Toolbox, and now this should be much more resilient. And overall, even though the tool did not receive a lot of, let's say, fancy new features during the past months, we added a lot of polish and bug fixes, and Toolbox should now be much more reliable.

As for future plans: customization. You cannot really change the default behavior of Toolbox, which is something a lot of users complain about. We hear them, and we want to change that in the future. We also want to make the CLI user experience more intuitive, because, as you could see, it's not that bad, but I bet you have already worked with CLI tools that are much more fun to use. We also want to add a means to invoke commands on the host. Imagine a scenario where I open my terminal and I don't get the host shell, but a container. For this use case to work, there have to be means to comfortably type, let's say, rpm-ostree if you are using Silverblue or CoreOS. With that, you would be able to control the host from inside of a container. And of course, more tests, because we broke things a few times along the way; we fixed them every time, but testing is important.

So, questions. Let me look at the Q&A. Any progress on running containers from inside of a toolbox?
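One mechanism that already allows reaching back to the host today is flatpak-spawn from flatpak-xdg-utils, which forwards a command over the session D-Bus; a sketch, assuming that tool and a session bus are available inside the container:

```shell
# Run a host command from inside a container via the session D-Bus.
flatpak-spawn --host rpm-ostree status   # executes rpm-ostree on the host
flatpak-spawn --host hostname            # prints the host's hostname
```

Whether Toolbox's future built-in host invocation would use this exact mechanism is not stated in the talk; this just shows the kind of escape hatch the feature is about.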
Not officially, and I don't think that is a very good idea, but maybe I'm wrong. But no, we don't work on this. Is it possible to use Toolbox on Fedora Workstation? What are the benefits? Yes, it is possible; Toolbox is normally available through DNF as a package. Just type dnf install toolbox, and you'll have Toolbox. What are the benefits? Unlike on Silverblue, on Workstation you can install all your packages with DNF on the host. But imagine a scenario where, let's say, a year ago you installed some software and did not delete it immediately. I bet that now you don't remember what the packages were. Now you may be thinking: well, if instead I had used a toolbox, a container where I installed all my dependencies, and I had it named, then I would just have to delete the container. And that is the benefit on Workstation: you can categorize and containerize your workflow, and when you're done with it, you just throw it away with a single command, and you don't need to remember where the specific dependencies were.

Is there a feasible way to automatically add fish to my toolboxes and containers? I know I can install it manually, but I'm hoping for some integration, maybe all the way to my custom container. No, not yet. And I wonder how that could be done. One way would be to integrate with package managers in some way: let's say you enter a Fedora container, Toolbox would know that it's a Fedora container and can use DNF, or on Ubuntu or Debian it could use apt. But considering the size of the project and the staff, I don't think that's very realistic. Some people are using Ansible playbooks, some people use scripts, and some people even create their own images. So if you are proficient with images, you can set up, let's say, a pipeline on Docker Hub and rebuild your own images with your own packages layered on top.
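The "create your own image" route from the last answer can be sketched like this (the image tag and container name are illustrative; the base image path is the standard Fedora toolbox registry location):

```shell
# Build a custom toolbox image with fish layered on top of the stock one.
cat > Containerfile <<'EOF'
FROM registry.fedoraproject.org/fedora-toolbox:34
RUN dnf install -y fish && dnf clean all
EOF

podman build --tag my-fish-toolbox .

# Create a toolbox from the custom image instead of the default one.
toolbox create --image my-fish-toolbox --container fishbox
toolbox enter --container fishbox
```

Rebuilding this image in a registry pipeline, as the answer suggests, means every new toolbox you create from it comes with your shell and packages preinstalled.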