We wanted to kick it off with a little tribute to Linux, right? Everyone knows Linux. It's so large, so huge, and it turned 30 last year, but last year there was no conference to celebrate it. So we wanted to celebrate it here. But it's also pretty hard: how do you celebrate something that is so large, so huge, just everywhere, and so diverse? We were lucky to find a few panelists who each know some part of the overall Linux ecosystem, and I'm really grateful that they accepted to be on the panel and tell us something about their areas of Linux. So this unfortunately won't be the full history of Linux, which might take a little bit more than five minutes, but hopefully it will at least be something interesting for you. Thank you for being here. Would you like to introduce yourselves?

Hello, I'm Vitaly. I'm from the Virtualization Engineering team at Red Hat and I work mostly on the Linux kernel, taking care of Linux as a guest on various hypervisors as well as the KVM hypervisor in the Linux kernel.

Hi, I'm Michal. I work in the Plumbers team, Core Services group, and I work mostly on systemd and other low-level user space components of the operating system.

I'm Veronica. I'm the CKI tech lead, and for those who don't know the project, CKI delivers continuous integration as a service for the Linux kernel, both upstream and downstream.

Thank you so much. We have very limited time, so let's jump right into it. We have Vitaly on virtualization, Michal on systemd (hopefully folks have heard about systemd, it's one of the big ones), and Veronica is our master of testing, which is also a big thing. Vitaly, can you tell us what virtualization is about? Why did it happen, what projects have there been, what options, and how did it evolve?

Oh, the first was probably the Xen project, which started in the early 2000s when some smart guys at Cambridge University found a way to run virtual machines on x86 processors, which had no hardware capabilities to do so back then. It was very successful because at the same time a small company called Amazon started offering their infrastructure as a service, something which is now called Amazon Web Services. It was fairly small, and they took the technology which was there and started providing virtual machines on Linux. That's actually how Linux conquered the world.

Then, shortly after, the KVM project got started, when hardware vendors were already providing some hardware virtualization capabilities. The idea was, contrary to Xen, not to write a whole separate operating system just for virtualization purposes, but to take Linux, which already had almost everything, and just add the missing capabilities. This project was fairly successful, and over the years it basically took over. Nowadays all major hyperscalers run KVM underneath. Only Microsoft doesn't; their Azure runs on their own hypervisor. So even if you go to Amazon Web Services today, run some very old instance type, and Linux tells you that it's running on Xen, don't blindly trust it: there is an emulation layer in KVM which allows KVM to pretend that it's some other hypervisor, and in that case Amazon did the work to make KVM look exactly like Xen to its guests. So here we are, and the majority of the world's computing is running virtualized, and it's running on Linux.
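As an aside on not blindly trusting what the guest sees: from inside a guest you can read the hypervisor vendor signature through the CPUID hypervisor leaf, but that signature is simply whatever the hypervisor (or its emulation layer) chooses to advertise. A minimal sketch, assuming an x86 Linux guest and GCC or Clang with <cpuid.h>:

```c
/* Illustrative only: print the hypervisor vendor signature seen by a guest. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    char sig[13] = { 0 };

    /* CPUID leaf 0x40000000 is reserved for hypervisors: EBX, ECX and EDX
     * carry a 12-byte vendor signature such as "KVMKVMKVM" or "XenVMMXenVMM".
     * (Strictly, CPUID.1:ECX bit 31 should be checked first to confirm a
     * hypervisor is present at all.) */
    __cpuid(0x40000000, eax, ebx, ecx, edx);

    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);

    printf("hypervisor signature: \"%s\"\n", sig);
    return 0;
}
```

On a KVM host configured to masquerade, this can just as well report a Xen or Hyper-V signature, which is exactly the point Vitaly makes.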
That's pretty cool. So where is Xen today? You mentioned that KVM took over, but what about Xen?

Xen is still alive. It found its place in some regulated industries and in some appliances where it's very important to be able to audit the hypervisor: KVM is part of the Linux kernel, and auditing such a huge project is very hard, while Xen is a much smaller operating system, so it's being used there.

Thank you. Let's move to systemd. That's hopefully also well known, and I think many people are aware that there used to be init scripts, which were able to boot a system just fine, and then suddenly there was a lot of talk about systemd. Michal, can you tell us what that was about?

Yeah, we didn't like init scripts that much. The systemd project was started in 2008 or 2009, I think, and the idea was to develop a new init system, or rather, it is an init system, but more importantly it's a service manager for Linux. Some of these developments had already happened on other operating systems; we saw that in the community, and Linux had a lot of technical debt in that regard: the System V init scripts had existed since the 80s, essentially. So the project was started, it got introduced and fully integrated into Fedora in 2012. That's around when I joined the project, so I've been working on it for roughly 10 years now.

The main idea was that systems had started to become different, in the sense that you had smartphones, traditional servers, desktops; it wasn't just the computer sitting under somebody's desk all the time. Systems were becoming dynamic: hardware can come and go, stuff can be plugged into machines. One of the other problems we wanted to address was parallelism. At that time, as almost everyone does today, almost everyone already had, at least on the desktop, a CPU with multiple cores, but the previous software wasn't able to take any advantage of that. So yeah, that's why we did it. And we didn't like init scripts because they were just hard to debug and hard to maintain.

So are you saying that systemd was only addressing those problems, dynamic systems and parallelism, or is there more, and why and how did it happen?

Well, we did write a bit more code after that. As I mentioned, the main idea was service management. Previously on Linux you didn't have any service management at all; System V init didn't even have a notion of what a service is. And if you ask yourself why you use a computer, why you turn it on, especially if it's a server, it is supposed to serve something, and that is done by services. So how can you have a system where the core component that is supposed to manage all of these services isn't even aware of what a service is, of what constitutes a service? That was one idea. The other one: since there was no service management, there was essentially nothing on top of which you could build an API. We wanted to provide APIs, command-line tools, and generally a good user experience. Those were the motivations, essentially.

So it eventually became something like a unified interface for the Linux operating system?

Yes, something like that. That was one of the problems the project also started to address a bit later, once the init system was mostly stable and sort of done, in big air quotes. We wanted to provide basic building blocks for people who want to, let's say, build a distribution or some appliance or embedded system: essentially a project that provides an init system with service management, but at the same time gives you a lot of other bits and components that you can just take, put on top of a Linux kernel, and have a more or less complete system.
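To make the notion of a service concrete: under systemd a service is declared in a small unit file instead of an init script. A minimal sketch (the unit name, path and command are hypothetical, purely for illustration):

```ini
# /etc/systemd/system/hello.service  (hypothetical example unit)
[Unit]
Description=Example service managed by systemd
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/sleep infinity
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Something like `systemctl enable --now hello.service` then starts it and hooks it into boot, and `systemctl status hello` reports its state, which is the kind of uniform API and tooling being described here.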
Thank you. And Veronica, what about testing? What did it look like 10 or 15 years ago, and why do people care about testing? It just works, right?

Well, you probably don't really like your machines being broken. I'm told that's not really funny if you actually have work to do. From the start, testing was very much community-based. You had power users grab a configuration from their machine, or something they wanted to run, compile the kernel, boot it, hope that everything works, and report the results back. This is still being done, just on a completely different scale. Where you had a single user trying to run something on their machine, you now have, for example, maintainers with larger labs, or various CI systems. And the way of reporting is still kind of the same: it's still done on the mailing lists. Just the content of the reports is different; that has really not changed from the past.

Also, in the past, before we would even get to this user-based testing, you had developers and maintainers who hopefully ran some more thorough testing. Unfortunately, usually everybody had their own custom scripts that the majority of people didn't have access to, so good luck trying to find out what was actually tested, how it was tested, or how to reproduce the failures people found. Some of the pretty old test suites are LTP and xfstests, and to this day these are still among the largest, most widely known test suites and are still being used. But beyond that, automated testing wasn't really a thing in the past. Given that Linux is 30 years old, it's relatively recent that we have seen the development of a lot of different automated testing and a lot of different CI systems. And it's basically just scaling out the power user.

So what has changed since then? Where are we today? You mentioned automated testing, and you mentioned that we didn't have a good definition of what the actual test case is or what environment we are testing in. Do we know this now?

Yes, you have a large number of different automated tests. I did mention LTP and xfstests, but now you also have KUnit and kselftests, which are integrated into the kernel sources, so you don't even need to browse the internet to find out what tests to run; you have some of them directly in the code. We have a lot of CI systems running different workloads. Maybe somebody is thinking: why do you need multiple different CI systems? But the Linux kernel can be built in a lot of different configurations, it attempts to handle a lot of different hardware, and you have a lot of different workloads. So unless you are really just trying to build a kernel with defconfig and run it in an x86 VM, chances are there is still room for more testing.

Thank you.
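As a flavour of those in-tree tests, a KUnit test is just a small piece of kernel C code registered with the framework. A minimal sketch (the suite and test names are made up for illustration, and this builds as part of a kernel tree, not as a standalone program):

```c
/* Hypothetical KUnit test, living in the kernel sources, e.g. lib/example_kunit.c */
#include <kunit/test.h>

static void example_addition_test(struct kunit *test)
{
	/* KUnit reports each expectation per test case. */
	KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_addition_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example-suite",
	.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);

MODULE_LICENSE("GPL");
```

Running `./tools/testing/kunit/kunit.py run` from a kernel source tree builds a small kernel and executes the registered suites, which is also how such tests typically get wired into CI systems.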
How do things look today in virtualization, and how will it evolve? Is virtualization done, hopefully, or not yet?

Well, it's never done, right? I think KVM has a fairly bright future, because currently all the major hyperscalers, the infrastructure providers, have found this common ground in Linux and they all use KVM. They use it for different purposes, from providing infrastructure as a service to running emerging workloads; for example, Amazon uses it to run function-as-a-service offerings like Lambda, which is also done on KVM. It's not going anywhere, and we have new and exciting hardware capabilities coming for virtualization. For example, I can name confidential computing, so-called encrypted virtualization, which makes it possible to not trust your cloud provider. Nowadays, if somebody is running your virtual machine, you kind of have to trust them, because they own everything: they own your disk, they own your memory, they own your CPU. You may not want to trust them, but in that case the only advice for you would be to run on your own infrastructure. Cloud providers don't quite like that, and CPU vendors came up with new technologies like AMD SEV, which allows you to encrypt guest memory, and SEV-ES, which also encrypts CPU register state; Intel is working on TDX. I think this is going to be the next big thing in virtualization in the next couple of years.

Thank you. So, confidential computing.
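For a sense of what this looks like in practice, an SEV-encrypted guest is requested through the virtual machine definition; with libvirt on a capable AMD host that is a launchSecurity element in the domain XML. A minimal sketch (only a fragment is shown, and the policy and bit-position values are host-specific placeholders):

```xml
<!-- Fragment of a libvirt domain definition; values are illustrative only. -->
<domain type='kvm'>
  <launchSecurity type='sev'>
    <policy>0x0003</policy>            <!-- example guest policy bits -->
    <cbitpos>47</cbitpos>              <!-- C-bit position reported by the host -->
    <reducedPhysBits>1</reducedPhysBits>
  </launchSecurity>
</domain>
```

The right values come from the host itself (for example, `virsh domcapabilities` reports whether SEV is supported and which cbitpos to use), and the guest's memory is then encrypted with a key the host operating system never sees.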
Michal, what about systemd? Is it finally done, or not yet? Are any more changes coming?

No, no, no, we are far from done, I think. And it's kind of similar to the virtualization space: I think the biggest set of challenges in the next couple of years will be connected to security, and specifically the security requirements will be an outcome of the different deployment scenarios for Linux. Linux is now being deployed in cars and in manufacturing, in industry, and all of these industries have very different security requirements than, let's say, a cloud provider. And because systemd is a central component of the Linux operating system today, or of a Linux distribution, I mean, we will have to deal with that somehow. We will have to deal with questions like: how do you make sure that not only the kernel and the firmware are trustworthy, but that you can also trust the operating system itself? systemd is responsible for decrypting encrypted devices, mounting file systems and whatnot; we do a lot of stuff during boot, and it would be nice if you were able to trust that the system you have in your car is actually what you think it is, or what the manufacturer put there. So that will be one set of challenges.

The second one is also related to security: systemd is written in C, and as much as we don't like it, and as much effort as has gone into testing, we still have a lot of correctness issues, memory safety bugs and things like that. So we are looking into other programming languages that could potentially address this situation, or at least improve it. Maybe some of the newer components will at some point be implemented in Rust, let's say. This is still to be decided, but it is something the community is working on and it is one of the directions.

So you're saying security, and rewriting systemd in Rust? That sounds pretty cool.

Not really. I don't think a rewrite of the whole thing is something anyone should try, because it's just too big at this point. But some of the newer components, like systemd-resolved, which we introduced in recent years and which is not even used, let's say, in RHEL, are written in C, and there are very few reasons to keep it that way. So, for example, one of the team members mentioned that maybe at some point we would like to rewrite that in Rust. But at least for the foreseeable future, the core of systemd, like PID 1, will stay in C.

Thank you. Veronica, what about the future of testing?

Well, that's a really broad topic, but right now I think we are in a pretty good spot when it comes to the types of testing. We have unit tests, we have tests targeted at various subsystems, and we have, for example, syscall fuzzing. What I would really like to see come in as a different type of testing is broader user-workload testing, because, as we know, we don't break user space, and we need to find the issues before the users hit them. So I would imagine this is something that will come.

When it comes to the scale of the testing, we talked about how many combinations of configurations and hardware there are. There is just no physical possibility of centralizing that testing, so we need to keep the distributed model it has right now. I mentioned a lot of different CI systems, including the one I'm working on, and we talked about the email reporting of test results, which is really difficult to track. So a few years ago KernelCI was promoted to a Linux Foundation project, and one of its first goals is to build, for example, not just a database of tests but a database of results, where it would be easier to track when an issue occurred, which test case found it, and what happened. I imagine this will become more widely used; right now it's kind of in a proof-of-concept state, but I imagine we will make all of the results and testing more discoverable, because it's easier to work with a database than to try to put together an analysis from free-form emails, and if we want to make developers care more about testing, then this is a really needed change.

Thank you so much. We are out of time, but I really want to say thank you for all your thoughts and for being here. It was quick and easy, but hopefully there was at least something interesting in what our panelists told us. Thank you. Enjoy the film. Thank you.