Hello there! I'm delighted to be giving this talk at the Open Source Summit today. I will be sharing my experience building Linux distributions, and I will discuss a little bit about the future of Linux as an OS. Before jumping into the contents of the talk, let me introduce myself. My name is Marga Manterola. I am a long-time open source developer. I work at Kinvolk, a startup company based in Berlin that focuses on Linux and Kubernetes. This talk is a collection of my experiences working with Linux over the past 20 years. It's been an interesting and exciting journey, and I'm thrilled to be sharing it with you today. I will first tell you about how my career with Linux developed, working at small and big companies, as well as being a Debian developer for 15 years. Then I will try to summarize the lessons I've learned throughout this time. These are my experiences and my opinions, but I'm sharing them with you hoping that you will find them interesting and perhaps learn something new.

So, let's start at the beginning. I grew up in Buenos Aires, Argentina, where I studied electronic engineering, but I never really worked in electronics. I've always worked in IT. My very first job was as an IT support technician. I then moved on to software engineering, system administration, and eventually site reliability engineering. I installed my first Linux distro back in the year 2000. This was a Mandrake installation, a Linux distribution that hasn't existed for a long time. A couple of months later, I replaced it with Debian Potato, and I've stuck with Debian ever since.

While at university, I was quite actively involved in the local Linux user group. Working with this group, I delivered a bunch of Linux courses, helped organize a few conferences, and tried to expand the use of open source software throughout our university classes. We lobbied for professors to not require proprietary software for their coursework, and to instead allow students to use open source alternatives when they existed. This sounds kind of obvious today, but 20 years ago most teachers were skeptical about the whole idea of open source software. They didn't expect the tools to be good enough to use in their classes. Still, little by little, we managed to convince more and more people that this was a viable alternative.

For a couple of years, I worked as a Java developer at a consulting company. I was already running Linux on all my home computers, so after a while I managed to convince my boss to also let me run Linux on my workstation. But I was still working mostly with proprietary software, and this made me unhappy. So, in 2003, I quit that job and joined the IT department at my family's company. This was my first job doing Linux.

At this company, we were deploying Linux installations to all users, replacing legacy proprietary software with open source software. Our budget was pretty tight, so we had to make sure we spent as little as possible on new hardware. The computers we deployed on the network were mostly thin clients. These were old, used machines that would be considered useless by most people. But we gave them a new life by running a very small Linux image on them that would just control mouse, keyboard, video, and network, and run a local X server. This X server then connected to another machine where all the X client applications were actually running. That way, we needed one powerful machine for every 10 or so thin clients.
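Just to make that setup a bit more concrete, here is a minimal sketch of what such a thin-client boot script might look like, assuming an XDMCP-style session. The host name is made up, and the real deployment details were more involved; this is just the shape of the idea.

```python
#!/usr/bin/env python3
"""Illustrative thin-client boot sketch (hypothetical, not the actual setup):
start a local X server and point it at a remote application server via XDMCP,
so all the X client applications run remotely."""
import subprocess
import time

APP_SERVER = "appserver.example.internal"  # hypothetical application server

def start_thin_client_session() -> None:
    # The thin client only drives keyboard, mouse, video, and network.
    # "-query" asks the remote display manager for an XDMCP session, so every
    # application process actually runs on the beefy server, not locally.
    subprocess.run(["X", ":0", "-query", APP_SERVER], check=False)

if __name__ == "__main__":
    # Loop so a fresh login screen comes back whenever a session ends.
    while True:
        start_thin_client_session()
        time.sleep(5)
```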
This setup allowed us to deploy computers to many more users than would have been possible with standalone workstations on our tight budget, and we were able to quickly expand our deployment to more and more machines. Some of the people getting these thin clients hadn't had a workstation assigned to them before. In fact, I recall one person telling me with pride that they had never used a computer before. Like, at all. The upside of that was that people didn't care about Linux or Windows. We taught them how to use the system to do what they needed, and that was it. The downside was that when something went wrong, the reports were usually not very helpful. One typical complaint was, "it doesn't give me access." This expression could mean anything. Maybe the keyboard or mouse weren't working. Maybe the network was down, and so the connection to the X clients had failed. Maybe the login screen told them that the password was wrong. Really, it could be anything.

Anyway, we kept deploying more and more thin clients, and some standalone workstations as well when the users needed some dedicated processing power. After a few years of this, we had deployed some 350 computers. And at the time, this felt like a lot to me. I was really proud of how I was adding more Linux users to the world. I didn't feel like we had our own Debian distribution. We were just using Debian. But looking back, we had basically all the traits of a Debian derivative except for a cool name. We had our own internal repo where we kept additional packages that we needed. These could be packages that were not yet in Debian, forked packages with special patches applied to them, or packages that we had developed specifically for our needs. We also maintained the configuration of the machines in the fleet using Puppet, keeping our customizations in sync. So yeah, I think we basically were maintaining a derivative, but it's a bit of a philosophical question: where does a customized deployment of Debian end and a Debian derivative distribution begin? Anyway, that's not really the subject of this talk, so let's move on.

I've mentioned Debian a few times. I already said that I'm a Debian developer, so you can imagine Debian has played an important role in my journey. As I called out earlier, my first steps with Debian were at the end of the year 2000. And as the years passed, I became more involved with the distro. By 2003, I considered myself a Debian bug reporter. I took great pride in creating good bug reports with clear reproduction instructions, logs, et cetera. And for a while, that was enough for me. But this changed in 2004, when I attended the Debian conference in Brazil. Up until that DebConf, I considered Debian developers to be some sort of elite geeks that were kind of superhuman, completely out of my league. I felt that maintaining packages was such a difficult and complicated task that I wouldn't be able to do it. I was already sending patches to bugs, but I felt that that was the maximum of what I could do for Debian. But then I met all these awesome people. They made me feel welcome. They valued my input. They explained how things were not so complicated. They became my friends. And so when I came home, I was a different person. I wanted to become a Debian developer, and I no longer thought that maintaining packages was reserved for elite hackers. So a few months later, I started maintaining my first Debian package. And the following year, I officially became a Debian developer. I didn't do this on my own, of course.
I had the help of the Debian community that welcomed me and encouraged me to keep learning and keep growing. Once I was a little bit more settled, I helped others get started as well, which was also very satisfying. Throughout the years, I've done a lot of different things for Debian, not just maintaining packages. I particularly enjoy fixing bugs, so for a while I spent a significant amount of my free time fixing release-critical bugs in other people's packages. I also helped organize a few DebConfs. In particular, I was part of the main org team for DebConf8 in Argentina and DebConf15 in Germany. I'm currently the chair of the Debian technical committee, the body that helps resolve conflicts among developers when they can't agree on their own. Sometimes I think back to those days at my first DebConf, when I felt like the only human among a bunch of superheroes, and it's pretty amazing how my journey has been.

Anyway, back to my professional life. I worked at my family's company in Argentina until 2012, when I moved to Munich to work on the team that maintains the internal Linux distribution used at Google. There's actually more than one internal Linux distribution at Google. My team was in charge of the one used by humans on their workstations, not the one used by containers running on servers in data centers. Obviously, moving to Google was a huge change. One thing that was very different was that new hardware was not an issue. All our Linux users got beefy workstations, and they could even refresh them after a few years, even if they were working fine, just because the hardware was old. It took me a while to get over the shock from this. To be fair, lots of Googlers are conscious about waste and won't really upgrade their hardware until they actually need it. But still, the fact that people could replace a perfectly working machine just because it was a few years old was bonkers to me.

Even more mind-blowing was the size difference. In my previous job, we started with a team of two people, which had grown to five of us by the time I left. And as I said, we were in charge of something like 300 to 350 computers. So we had less than 100 computers per team member. After nine years working there, I knew all these computers by name. I knew who used which host, which type of keyboard was connected to it, whether the mouse had had issues in the past few months. And I even knew the internal IP addresses of most of them. At Google, my team of 12 people was in charge of tens of thousands of computers. And while the team grew a bit over time, the number of hosts in our fleet grew a lot faster. At any point in time, we had more than 5,000 computers per team member to care for. This meant that it was no longer possible to know the hosts by name. And more importantly, that everything needed to be automated. Not just the obvious, like managing configuration with Puppet and automatically upgrading packages to their latest version, but also other things, like automatically checking the health of the machines and reporting back when there were problems, or automatically creating backups for our users and giving them tools to restore data when they needed it. As our team couldn't possibly deal with all incoming user requests, we provided a bunch of tools and documentation to help users deal with issues themselves without the need to contact us, including being able to reinstall their computer without having to fill in a single prompt. Some of this automation was already in place before I arrived.
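To give a flavor of what that kind of automation looks like, here is a minimal, illustrative sketch of a health-check-and-report job. It is not the actual Google tooling; the paths, thresholds, and report location are all hypothetical, and it assumes a Puppet-managed machine.

```python
#!/usr/bin/env python3
"""Illustrative fleet health-check sketch: run a few local checks and drop a
JSON report that a central dashboard could pick up to flag unhealthy hosts."""
import json
import shutil
import socket
import time
from pathlib import Path

PUPPET_STATE = Path("/var/lib/puppet/state/last_run_summary.yaml")  # typical older Puppet path
REPORT_SPOOL = Path("/var/spool/healthcheck/report.json")           # hypothetical location

def run_checks() -> dict:
    checks = {}
    # Is there enough free space on the root filesystem?
    usage = shutil.disk_usage("/")
    checks["root_fs_free_ok"] = usage.free / usage.total > 0.10
    # Has configuration management run recently (within the last day)?
    checks["puppet_ran_recently"] = (
        PUPPET_STATE.exists()
        and time.time() - PUPPET_STATE.stat().st_mtime < 24 * 3600
    )
    return checks

def report(checks: dict) -> None:
    # In a real fleet this would be shipped to a monitoring system; here we
    # just write a JSON report where a collector could find it.
    REPORT_SPOOL.parent.mkdir(parents=True, exist_ok=True)
    REPORT_SPOOL.write_text(json.dumps({
        "host": socket.gethostname(),
        "timestamp": int(time.time()),
        "checks": checks,
        "healthy": all(checks.values()),
    }, indent=2))

if __name__ == "__main__":
    report(run_checks())
```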
Some of that automation I helped build myself. Part of what my team did was keep on top of developing new automation tools as needed. And of course, automation is not enough if it's not complemented with robust and thorough testing. In my team, we had developed a pretty thorough test suite that verified that a lot of different use cases worked as expected. This allowed us to push changes to the fleet on a weekly cadence without panicking that we would break everything. Of course, not all of the possible problems were caught by our tests, but whenever bugs slipped through, we would make sure we added any necessary test cases so it wouldn't happen again in the future.

At Google, there are a lot of teams maintaining different parts of the stack. As I said, my team was in charge of the Linux platform, but there were a lot of other teams providing software on that platform, and an important part of what we did was enabling them to do that. This meant that some changes needed to be coordinated across many different people, across many different time zones. I'll admit, this could sometimes be quite challenging. Finding agreement across stakeholders is not always easy, but what helped was remembering that everyone was a capable engineer looking to find the best possible solution.

One difference that I really enjoyed compared to my previous job was that our users were technical people. Many of them were also using Linux at home. This meant that bug reports were usually pretty good, including good reproduction cases and sometimes even a patch to fix the issue at hand. No more "it doesn't give me access," but rather good technical reports. This could sometimes backfire, like, say, someone wanting us to change the global settings to match their personal preferences. But most of our users actually understood the challenges of building a distribution that would need to satisfy the needs of tens of thousands, not just their individual taste.

Putting aside the differences, there were many things that were the same. When asked what I did before working at Google, I would reply: mostly the same, just at a much, much smaller scale. In both cases, we were keeping a repo of packages on top of what the distro provided, managing the configuration with Puppet, and keeping all our changes and configurations in version control, with the goal of enabling our users to do their jobs. And all of this using a distribution that was based on Debian. Well, sort of. The team I joined in 2012 was called Goobuntu. It was a Linux distribution based on Ubuntu that followed the long-term support releases. Right when I joined, my team was going through the migration from Lucid to Precise. A couple of years later, I led the migration from Precise to Trusty. I mentioned our fleet had tens of thousands of computers, so you might imagine how big an effort it was to migrate such a large fleet from one LTS to the next. Through several iterations, we had developed tools to help us with that, but even then it was painful. Those two-year jumps meant that too many things had changed in between. Lots and lots of issues needed to be fixed before we could get the new release out. We spent so much effort getting the fleet updated, only to have to do the whole thing again two years later. So after going through this painful migration twice, I convinced my teammates and other interested parties that we didn't want to keep doing that anymore. Instead, we should switch to a rolling release model where packages got updated progressively.
We would no longer have a huge jump every two years, but rather small increments every week. And what better target to follow than Debian testing? Ha! Well, I don't know, other people might disagree with that last part, but I convinced the powers that be that this was a good idea. We decided to stop tracking Ubuntu LTS and track Debian testing instead. So, in 2017, we renamed our team and our product to gLinux. We migrated the fleet from Ubuntu Trusty to what was then Debian Stretch, or a kind of Stretch, because as we were working on it, Stretch got released as stable and testing became Buster, and we kept tracking testing as it kept changing. And I'm not going to lie, there was a lot of work involved, and not all of it was easy or fun. But in the end, when Buster got released as stable, gLinux users at Google had received the updates progressively, without having to take any manual action whatsoever, just as part of the normal weekly release process.

All of this was pretty awesome, and I learned a lot and grew a lot in the almost eight years that I worked at Google, but eventually it was time for a change. So, earlier this year, I took on a position at Kinvolk. As I mentioned, Kinvolk is a startup based in Berlin. It's a small company dedicated to working with open source software, with a special focus on Linux, Kubernetes, and containers. In my case, I've been spending most of my time on the development of Flatcar Container Linux. Flatcar is a container-optimized OS, based on the now-deprecated CoreOS Container Linux, which was the first container-optimized OS. CoreOS was itself based on Chrome OS, which is based on Gentoo. So, yeah, currently I'm not working on a Debian derivative. For now.

Moving to Kinvolk was, again, a big change for me. The thing that impressed me the most was the speed at which one could get things done. At Google, whenever I started a new project, there was a long ramp-up time of getting used to new technologies, putting all the complex pieces together, getting buy-in from all stakeholders, and so on. A medium-sized project could easily take a year to get launched. Working at a startup, there's a lot less red tape, less complexity, fewer stakeholders to consider. So things just move faster. Projects usually take a number of weeks, maybe a couple of months if they are large. And, sure, timelines might slip a bit, but everything just goes faster.

The other big change was that while I was still doing Linux, I moved from working on a distro used by humans to a distro used by containers. There are a lot of differences there. A big one is how security weighs against usability. With any software out there, there's usually a struggle between security and usability. You want things as locked down as possible, while still being able to get things done in a reasonable amount of time. And when security and usability clash, you need to make some hard choices. I found that the choices made for gLinux were different from the ones made for Flatcar. One example of this is the read-only /usr partition. In Flatcar, as in CoreOS and Chrome OS, the /usr partition is read-only. You can't install new software on it. There's no package management system of any kind. The OS image stays exactly as it was shipped. You can still customize things in /etc and sideload software via containers, but you can't install new programs in /usr, nor can you tamper with the programs that are already installed. For a human user, this might be really limiting.
But for an OS that is used to run containers, as long as it has the necessary tools to run those containers, this might be exactly what you want.

All right, so here we are. I've taken you on a quick journey through my career as a Linux engineer. I started as a young, inexperienced techie, and through time and effort, I became an old-timer. Now, my goal with this talk was not to admit how old I am, but to share some of the lessons that I've learned through all of this with you, and also to look into the future and see what it might bring. So let's get to that.

Let's start by talking about automation. As a software engineer, system administrator, and site reliability engineer, I love automation. Automation is what enables us to do interesting things with our time. We automate the boring stuff and move on to do things that are more challenging and more fun. But there is such a thing as too much automation. How can we tell? Say it takes you five minutes to write the script that will save you one minute of manual work. In this case, it makes sense to write the script if you are going to do this task more than five times. What if it takes you two hours to write the same script that will save one minute of manual work? The math is simple: it only makes sense to do it if you will run it more than 120 times. This example is pretty obvious, I know. Unfortunately, in real life, we typically don't know how long it will take us to write the automation, nor how many times we will execute it until it becomes obsolete. Still, we can try to use some rough estimates, knowing that they won't necessarily be accurate, but they can help us decide how we spend our time. When faced with a boring task, some of us will start thinking about how to automate it before we complete even the first manual run. So if you are in that camp, it's a good idea to ask yourself: roughly how long do I think it will take me to write and debug the script that will automate this? How much time will it save? And how many times will I use it before it's obsolete and I need to rewrite it from scratch? If after asking yourself all these questions the math tells you that you should spend time writing the automation, then, by all means, go for it. Automation is indeed awesome.

Now, let's illustrate this with an example from my Linux experience. Handling upgrades is a challenging task that I had to deal with in each of my shops. The kind of automation applied differed with the operations and sizes involved. At the small company in Argentina, we were tracking Debian stable. So whenever there was a new stable release, we would start by upgrading an unused test machine, followed probably by our own workstations, followed by the computers of people that, for some reason, needed the new tools first. Throughout this process, we would document any issues that we ran into, creating a script that took care of most of the work. And then we would move on to applying this script to, say, the workstations of people that were on vacation, which would help us find and fix some bugs. And finally, we would spend a weekend or two upgrading all the other machines. So the script I mentioned automated the most common basic steps. But it was a very optimistic script. If anything broke while upgrading, it was up to the person doing the upgrade to figure out how to fix it. It just wasn't worth it to have the script take care of all possible problems.
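By the way, that break-even arithmetic from a moment ago can be written down in a few lines. This is just a toy sketch of the reasoning, not a tool we actually used:

```python
#!/usr/bin/env python3
"""Toy sketch of the automation break-even math: automating pays off only if
the expected number of runs exceeds the cost of writing the automation
divided by the time saved per run."""

def break_even_runs(minutes_to_automate: float, minutes_saved_per_run: float) -> float:
    # Number of runs after which the automation has paid for itself.
    return minutes_to_automate / minutes_saved_per_run

def worth_automating(minutes_to_automate: float,
                     minutes_saved_per_run: float,
                     expected_runs: int) -> bool:
    return expected_runs > break_even_runs(minutes_to_automate, minutes_saved_per_run)

if __name__ == "__main__":
    # The examples from the talk: a 5-minute script pays off after 5 runs,
    # a 2-hour script only after 120 runs.
    print(break_even_runs(5, 1))         # 5.0
    print(break_even_runs(120, 1))       # 120.0
    print(worth_automating(120, 1, 50))  # False: not worth it for ~50 runs
```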
Handling each and every problem through code would have taken way longer than the amount of manual work involved, multiplied by the number of times those problems appeared. At my shop at Google, this, of course, wouldn't work. There wouldn't be enough weekends in the year for my team to go manually upgrading the machines in the fleet. Back when I joined in 2012, the main way of upgrading from LTS to LTS was to reinstall, and it was done by the users themselves. As I mentioned earlier, the install itself was fully automated. The user just decided when to do it, and the machine would get re-imaged. Users would be asked to reinstall their machine with the new LTS during a window of time, and if they failed to do it before the deadline, they would lose access to network resources. But reinstalling their workstations each time an LTS came around was not popular among our users. So during the transition from Lucid to Precise, one of my teammates started a project to develop an in-place upgrade that would let users upgrade without reinstalling. This was an experimental tool, so not everybody used it, but it did get some traction. When the migration to Trusty came around, this script got improved. We released it as an official way to upgrade, and plenty of users used it. It got around a 95% success rate in upgrading machines without any manual intervention, which is quite good given how diverse our fleet was.

Finally, when we moved to our rolling release based on Debian testing, the in-place upgrade tool got improved even more, and it was able to handle the migration from Ubuntu Trusty to a moving target of Stretch and Buster, also with more than a 95% success rate. This was three years later, so not only was it a bigger jump in the software stack, the fleet had also grown significantly. And this, with a fleet of machines where users were allowed to install all kinds of different software. So yeah, it was no easy feat to get to that 95% success rate. It was possible thanks to the script being extremely pessimistic. It expected that any action could fail, and had ways to recover from these possible failures. It included code to fix the most common problems, and when an unknown error was encountered, it had a bunch of different healing techniques to attempt to get the machine back into a working state, kind of like a broad-spectrum antibiotic.

And it was not just this extremely pessimistic coding. It wouldn't have been possible to achieve these high success rates without a ton of testing. We had a battery of automatic tests that ran the in-place upgrade from different initial Trusty states to whatever the current state of the rolling release was, so that if something that had worked in the past stopped working, we could detect it before it reached any users. And finally, the tool included automatic reporting of what it had done. This allowed us to investigate any unsolved failures and add whichever solution we found for them to the battery of fixes that it applied. Every morning, we could look at the dashboard of updates done the previous day and check if any of them had ended in failure. Looking at the logs and information collected by the tool, we could try to solve issues like weird dependency problems, file collisions, or whatever, and these fixes could get incorporated into the next version of the tool. Unless the package management system was completely out of commission, the affected users would then be able to update to the latest version of the in-place upgrade tool and rerun it.
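To illustrate that kind of design, here is a rough, hypothetical sketch of a pessimistic, resumable upgrade runner. It is not the actual in-place upgrade tool, just the shape of the idea: record progress, try generic recovery steps on failure, and report anything it couldn't fix.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a pessimistic, resumable upgrade runner: every step
can fail, failures get recovery attempts, progress is recorded so a rerun
resumes where it left off, and unknown failures are reported for analysis."""
import json
import logging
from pathlib import Path
from typing import Callable

STATE_FILE = Path("/var/lib/upgrade-tool/state.json")  # hypothetical location

def load_done() -> set:
    return set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()

def mark_done(done: set, step: str) -> None:
    done.add(step)
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(sorted(done)))

def report(step: str, err: Exception) -> None:
    # In the real tool, results like this fed a dashboard reviewed every morning.
    logging.error("upgrade stopped at %s: %s", step, err)

def run_upgrade(steps: dict[str, Callable[[], None]],
                recoveries: list[Callable[[], None]]) -> bool:
    done = load_done()
    for name, action in steps.items():
        if name in done:            # resume: skip steps that already succeeded
            continue
        try:
            action()
        except Exception as err:    # pessimistic: assume any step can fail
            logging.warning("step %s failed, attempting recovery", name)
            for heal in recoveries:  # broad-spectrum healing attempts
                try:
                    heal()
                    action()         # retry the step after healing
                    break
                except Exception:
                    continue
            else:
                report(name, err)    # unknown failure: report and stop
                return False
        mark_done(done, name)
    return True
```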
The tool was able to restart from where it had left off and get the machine successfully upgraded. Of course, developing this tool took quite some time, a few years of work from different engineers on the team, and as I mentioned, it took three iterations to get there. But by allowing users to upgrade their machines in place, my team saved those users the time they would have spent reinstalling their applications and reapplying their custom settings if they had started from scratch. Given the sheer number of users, even if you estimate only a couple of hours saved per user, the total number of hours saved is huge. I don't have the numbers, but one of my colleagues calculated roughly how much had been saved, and it was significantly more than had been spent developing the tool.

My point with this long story is to show how the effort spent on automation pays off when the numbers are big enough. When the fleet I was in charge of was 300 computers, it made sense to have a simple, optimistic script that would save us most of the work, and then we would fix any one-off problems manually. With a fleet two orders of magnitude larger, it made a lot more sense to spend time developing robust automation, because the time saved got multiplied by a much larger factor.

Now, the case of Flatcar is a pretty different story. As I mentioned earlier, Flatcar uses one of the main ideas behind Chrome OS, which is a read-only /usr partition. Not only that, it actually has two /usr partitions, USR-A and USR-B. When a new release comes around, it's deployed to the partition that's not currently being used. When the machine reboots, it boots into the new version. If for some reason this new version fails, the system automatically falls back to the previous partition, the one that was known to be working before the reboot. So it's not like there are never problems with upgrades, but whenever there is a problem, the machine keeps working by reverting to the previous version automatically. This minimizes breakage and is completely transparent for the user. I don't know how many hours were spent by Chrome OS and CoreOS engineers developing this mechanism, although I'm sure it was several months of quite a few people's time. What I know is that this has saved time for tons of people, because the users of these operating systems don't have to spend hours trying to recover from a bad upgrade that breaks their machines. That just doesn't happen.

So, I've mentioned a couple of times that we need to decide how we spend our time, whether spending time on automation is worth it or not. This is one of the most important skills people in IT need in general: managing priorities. Let's talk more about that. One thing that I've learned in my different jobs is that there's always more to do than time allows. You're never going to get to the bottom of the backlog, because new things with higher priority will come around before you have time to get to the old ones. I mean, I've heard a small number of people claim to be idle and bored in their jobs with nothing to do, but my experience, and the experience of others working in the IT industry that I know, is that this mostly doesn't happen. There's just too much stuff to do, too many bugs to fix, too many new features to add. So we need to learn to prioritize. In a business that's trying to make a profit, the priorities are generally going to be driven by whatever brings in the money. Back at my job in Argentina, the first priority was to have the invoicing system online.
If invoicing wasn't working, anything else had lower priority. If there was a new government regulation that needed to be in place in order to generate invoices, and this being Argentina, that wasn't a rare thing, we would drop everything and make sure that worked. And after invoicing, bug fixes and features were generally prioritized according to how much they would influence the bottom line. As my IT team was in charge of anything IT related, we had to do this balancing act of setting priorities for everything related to system administration, software engineering, and user support.

At my job at Google, I was thankfully removed from invoicing, but we still needed to deal with a huge backlog and struggle with priorities. In that team, the usual criterion for what priority to give to different issues was how many people the issue was affecting. Was it 1% of the fleet, 5%, the whole fleet? Of course, it was not the only criterion. If an issue was making a team of 10 people completely unable to work, it would have a higher priority than if it was only mildly inconveniencing a hundred or even a thousand people. One thing that we struggled with was laptop-related issues. If you've dealt with laptops running Linux in the past, you probably know that they can have a myriad of different hardware issues. Maybe the Wi-Fi driver doesn't work. Maybe the fancy touchpad doesn't respond correctly. Maybe the laptop drains the battery at high speed while it's suspended. Or, worst of all, maybe it comes with this horrible 2560 display that's higher than HD but smaller than 4K, and so looks horrible when running GTK applications no matter if you set them to single or double scaling. And if you're wondering, yes, those were all issues that we had to fix throughout the years. Fixing these issues takes time and expertise, and sometimes you are not even sure that you will be able to fix them, which can be really frustrating. In our case, laptops were a small part of the fleet, and so sometimes it felt like a bad investment of our resources. But we had taken the explicit decision of supporting these Linux laptops, and that meant that they were a priority regardless of the relatively small fleet size.

Working with the Flatcar team now, the issues that we deal with are pretty different. But as usual, there's too much to do and too little time to do it, so we need to find a way to prioritize what to work on. As Flatcar is container oriented, most of our work is focused on enabling users to run their containerized workloads and on supporting more cloud platforms. And because we are security oriented, keeping the system as secure as possible takes the highest priority of all.

So what happens when you just can't get to do something because other things have higher priority? One thing that I learned throughout my journey was that even if I agree with the person reporting an issue that it's a bug and it should be fixed, it's better if I'm upfront about the fact that I just won't get to it. If someone files a bug about a problem they encountered and I respond, "yeah, this is a bug, we should fix it," but then put it at the bottom of the backlog and never get to it, that someone isn't going to be very happy. If instead I tell them that it's totally a bug and I understand their frustration, but that unfortunately I won't have the time to solve it because there are other things with higher priority, it's almost certain that they will understand and thank me for my honesty.
The user can then figure out how to solve the problem themselves, without waiting for a fix that will never come. This is a skill that takes time to master, or at least it took me time to master. Admitting that I won't be able to do something feels like I'm admitting I'm not good enough, when really it's just recognizing that there are only so many hours in the day and only so much that can be done in that time. Some people summarize this as "patches welcome," which kind of works, but it's short and it might be misunderstood as passive-aggressive, maybe because it has been used in that way by some people. I prefer to be a bit more verbose and make sure the other party understands what I'm trying to say.

This touches on yet another lesson that I learned with time: empathy and human connection go a long way. When the person that you're interacting with is just a nickname, a handle on your screen, it's very easy to dismiss their ideas, to treat them as bots rather than humans. When you've met them in person, this changes a lot. You know what they look like, how they sound, but more importantly, you've shared time together doing something different than typing on a computer. This creates a bond that is very hard to replicate remotely. Yes, you can have a video conversation, see their faces, hear their voices. It is better than nothing, but it's not the same as sharing a meal, talking about hobbies, visiting a museum, or playing a game together. That's when other developers become important people in our lives rather than names on a screen. And so when we don't have the opportunity of flying around the world to meet them, we need to make a conscious effort to create these social bonds, humanizing our fellow developers as well as our users. This takes work, but it will lead us to much better results in the end.

All right, I've shared my journey and a few nuggets of wisdom that I've acquired through my time. Let us now look into the future of Linux as an OS and think about what's to come. Of course, I'm not a fortune teller. I don't really know what the future will bring, but there are tendencies, and we can talk about those tendencies. I hope this doesn't come as a surprise to you: the tendency is that Linux distributions are becoming less and less important. Over the past 10 years or so, installing libraries and modules outside the main OS became the norm rather than the exception. Python has PyPI, Ruby has Gems, Node.js has separate packages for everything, including one to check if a number is even. And it's not just about programming languages. This tendency to ignore the underlying distro means that a growing number of programs come with their own package management system for extensions and plugins. This includes a wide array of programs like Firefox, Cinnamon, and even Vim, which has like four of them. Including their own way of installing add-ons has two main advantages: it avoids the bottleneck of packages getting ingested by the distribution, and it decouples installing these add-ons from the underlying distribution. Of course, there's a reason why Linux distributions exist in the first place, so bypassing them also comes with disadvantages: less or no quality control, less or no security updates, in particular for stable versions, and no integration with the rest of the system. In other words, by making it super easy for users to install the latest flashy thing, it also makes it super easy for them to install random, buggy, insecure, or even malicious software.
But users want the flashy thing, so they will mostly ignore these concerns. The next step in this trend of making the distributions less and less important is running some form of contained applications on top of a thin layer of OS. The contained applications could be Docker containers, snaps, Flatpaks, or what have you. I remember back in 2015, I attended a talk by a Docker developer who showed how she ran each of her graphical applications inside a separate container. Back then, this sounded rather over the top. Today, this sounds normal. I do this myself for applications that I don't trust. Running everything inside containers also comes with some bad consequences, which could probably fill its own talk. Personally, the thing that worries me the most is security, because many people forget that all the software inside those boxes also needs to be kept up to date.

So in this world, what's even the meaning of a Linux distribution? Will distributions still matter in a couple of years? For desktops and laptops, probably yes, but less and less so on servers. If you are running containers on a VM, or a Kubernetes cluster on a bunch of nodes, you care very little about which distro is running on those machines. What you do care about is that this distro is as secure as possible, and that it allows you to run your workloads on top. The less software you have running in the base OS, the less you have to worry about possible security issues. On top of that, the less you have to worry about keeping it up to date, the better. So distributions with a minimal footprint and automatic upgrades are clearly going to be favored. Of course, we will still need to get the software installed in those containers, so it's not like distributions will stop existing, at least not in the immediate future. But as more workloads become containers or containerized applications, the underlying OS will matter less and less. In summary, for server-side applications, distributions will either be a minimal layer to run containers on top of, or a repository of packages to create the content of those containers.

What about desktops and laptops? For desktops and laptops, a general-purpose distribution still makes sense today, as we don't yet have the tooling to make sense of running each application in its own container, or to avoid wasting tons of bandwidth and disk space re-downloading the same dependencies for different applications. The tooling that will be the most relevant going forward will be the tooling that helps us make sense of all these different origins of software: things like keeping our PyPI packages up to date, managing all our Node.js libraries, understanding what's installed in the system and what's shipped by a container, ensuring that our containers have their security issues patched, and so on. Some of these utilities are being built today; others are yet to come. When these tools are mature enough, it's likely that even desktop and laptop distros will stop being relevant. They will become a thin layer on top of which we will run containers.

If you're part of a community that's building a distribution, you might not like this perspective. I understand; as a Debian developer, I'm not a fan either. But I don't believe that closing our eyes to reality is the way to go. Instead, we need to accept that the world of Linux distributions has changed, and build the tools that we need to make sense of this brand new world. I invite you to be part of this challenge. And with that, we come to the end.
This was my presentation on building Linux distributions for Fun and Profit. I hope you enjoyed it and that you learned something new. Thanks for listening.