…devroom today with Olivier's talk about the challenges in modern virtualization. The stage is yours.

Thank you. Hello everyone. So I'm Olivier and, as you said, I will talk a bit about the new challenges in the virtualization world: challenges that in fact already exist, but that are also still coming. First, a few words about me. I'm the CEO of Vates, a company doing open source virtualization software. I'm a former system administrator, I've been using Xen for a while now, so I'm more of a Xen guy. I created Xen Orchestra, a backup and management solution initially for XenServer and then for XCP-ng, which is another project I started, as a fork of XenServer. As I said earlier, that project is now hosted inside the Linux Foundation, thanks to Lars. If you want access to the details behind this presentation, they are available at this link; sometimes I will only skim over the deep details, and if you want to learn more you can take a look at those links.

I'm not someone deeply technical, not a kernel hacker or a Xen hacker; I come more from the user side of virtualization. And because my position is selling support for open source solutions, I'm in the field a lot: at conferences like this one, but also meeting developers or the people who actually decide to pick this or that solution. That gives me, I think, a feel for what's going on in this world. So this is a non-technical talk, more of another view on what's happening.

The funny thing I hear a lot in the field is that people seem to forget that virtualization is almost everywhere. People, or to be more exact people in general, say they are reading or seeing a lot about the trend of moving everything to the cloud, using orchestration and so on, and without even noticing it, the things they rely on are very likely running on top of virtualization. I think it's a good sign, in a way, that it's pretty much everywhere, even in places we didn't suspect a few years ago. But it also means there are new challenges that come with this position. Those challenges fall into three main topics: the big one is obviously security, and I think you can guess why, then performance, and finally all the new use cases.

So, what about security? When I use the word "we" here, I'm not talking about very technical people, security-aware people or security researchers, but about the average admin, or maybe the average exec. The trend we all saw starts from layer one, the hardware, and then adds more and more layers on top. There is an explanation for that: it's easier to get things done because you have more abstraction. But by going higher and higher, you tend to forget what's going on at the bottom. For example: we started with virtualization, then public cloud, then containers, then container orchestration, then serverless, and yet somehow your code still has to run on some hardware somewhere in the end. There is even "developerless": there's a thing on Twitter, if you click on the link, where some people imagine you can get rid of all the developers forever and, in a few clicks, create a great application that scales, et cetera. So in the heads of a lot of people, we are going far away from the metal.
And in the meantime, another phenomenon is happening, which you probably know: there is more and more software inside the hardware. It's like car manufacturers: a modern car is full of software that it didn't have twenty years ago. Companies focused on hardware are delivering more and more software, and the teams writing those low-level features don't always have security as a real priority. We will see an example of that later.

In short, we (again, not especially you) thought that our hardware was secure. We thought that when you purchase something from Dell, HP or whoever, you just put it in your infrastructure, never touch it again, and add all your layers on top. But if you do that nowadays, you are forgetting about the foundations. The foundation is layer one, and what happens if you have weak foundations with a big stack on top? The result is not great, as you can see here; it's exactly the same metaphor. With a weak foundation, all the layers on top come down, because if someone can attack the low level, they can get to everything you have above it.

So what are those hardware problems? I think you know them. Right in the silicon, you have all the side-channel issues affecting at least Intel CPUs, but also some others; the biggest impact was on Intel. I won't go into details: you know some of them, and if you don't, Google them. The point is that these flaws are right in the silicon, and I insist on that because it means they cannot simply be changed tomorrow.

What about BMCs? If you don't know the term, that's your iLO or iDRAC: the extra little CPU and network interface on your motherboard that lets you do very useful things in the data center, like installing an ISO remotely, updating firmware or power cycling the machine. That's great. But over the last ten years there has been a huge number of CVEs for security flaws in that hardware. And again, from that low level you can access a ton of stuff even if your top layers are secure.

So we learned that even your hardware, what you think of as purely hardware, must be updated often, just like your operating system, just like your applications, et cetera. You might also decide to disable some features of your CPUs, for example hyper-threading, to protect against some side-channel attacks. And I repeat this because it's really important: security should be considered as a whole. If you have a problem in one layer, the lower that layer is, the bigger the impact will be.

These side-channel attacks and hardware bugs are here to stay. By that I mean you won't change all your hardware tomorrow, and even if you could, I'm pretty sure we will continue to find flaws in current silicon designs, for a lot of reasons; one of them is that x86 is very complicated, so researchers will keep finding new issues. So it will stay; the problem won't vanish tomorrow.
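To make that "patch and check your layer one" idea a bit more concrete, here is a minimal sketch of the kind of check an admin could run on a Linux host. It only reads standard Linux sysfs and procfs entries (SMT state, microcode revision, per-vulnerability mitigation status); treat it as an illustration rather than a hardening tool.

```python
#!/usr/bin/env python3
"""Quick look at the low-level security state of a Linux host.

Minimal sketch: it only reads standard Linux sysfs/procfs entries so an
admin can see at a glance whether the "layer one" hygiene discussed above
has been applied.
"""
from pathlib import Path


def read(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "unknown"


def microcode_revision() -> str:
    cpuinfo = Path("/proc/cpuinfo")
    if not cpuinfo.exists():
        return "unknown"
    # The "microcode" field is repeated for every core; the first hit is enough.
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("microcode"):
            return line.split(":", 1)[1].strip()
    return "unknown"


if __name__ == "__main__":
    # "off" or "forceoff" means hyper-threading has been disabled,
    # e.g. against the SMT-related side-channel attacks mentioned above.
    print("SMT control:", read("/sys/devices/system/cpu/smt/control"))
    print("Microcode revision:", microcode_revision())

    # One file per known CPU vulnerability, with the kernel's view of its mitigation.
    vulns = Path("/sys/devices/system/cpu/vulnerabilities")
    if vulns.is_dir():
        for entry in sorted(vulns.iterdir()):
            print(f"{entry.name}: {entry.read_text().strip()}")
```

On a Xen host, what ultimately matters is what the hypervisor itself enforces (for instance its own smt= boot option), so the view from dom0 is only a first-level check.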
So I talked a bit about the general picture. What about the impact of all this on virtualization? Funnily enough, the first answer might not be especially technical: because you know that new flaws will keep coming, you first need a solid security workflow for your project.

For example, the Xen project has a really good security process. If you want to see what it is exactly, it's essentially a flow chart, a checklist, a process that guides you: if you get hit by a big problem tomorrow, this is how the project handles it. So the first answer may simply be organizing the project so it can react to these problems.

Something else that's really important is good design documentation: how your hypervisor is built, how it works. If you have good design documentation (I don't mean every implementation detail), then people from outside, for example security researchers, can tell you: I don't work on virtualization, but I do work on CPU security, and from your design documentation I can see that you could probably mitigate this or that by removing this piece or adding that one. That's why a clear explanation of how it works is more important than ever.

And obviously there's the usual community work we know from open source, which for hypervisors means, for example, good communication between different projects such as KVM and Xen, because more brains means more ways to tackle an issue, and even where there are design differences, the approaches to a problem and its solutions influence each other. I have core scheduling in mind, for example, an interesting idea that has been worked on by people in the kernel, in KVM and also in Xen; mixing ideas is always great. And external people too, which goes back to good design documentation: it becomes easier to get people on board telling you what's wrong with your solution.

On the technical side, the first answer is that, because everything is more and more complex and there are more and more flaws, you need to modularize the code. That means letting users decide which parts of the code to use or not: mitigations, but also hypervisor features. People do use stripped-down builds of, in this case, Xen; OpenXT, for example, is a project used in the defense sector, because the attack surface is much smaller than the full-featured build. The flip side is that if you give users a lot of choice, you have to guide them: if you want security, you might choose this or that, and if you don't care, you can use all the features, and so on.

We also saw that updating the low-level software is pretty important, and to do that you need ways of doing it that won't disrupt your everyday operations; we know that if updating is hard, people won't update. To give you a few examples: applying microcode to your CPUs without a reboot, which is called late microcode loading in Xen, is interesting to avoid any production disruption. There is live patching. And there is live upgrade, I think it's called live update by the AWS people who contributed it to Xen, which allows a major upgrade of the Xen version on a live physical machine without even rebooting or disrupting the VMs. That's really interesting; if you want the details you can click on the link.
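As a rough illustration of what this kind of non-disruptive patching can look like from the admin side, here is a small sketch that wraps Xen's livepatch tooling. It assumes a host where the xen-livepatch utility shipped with a recent Xen is available; the subcommand names (upload, apply, list) are taken from the upstream livepatch documentation, so check your own version before relying on them.

```python
#!/usr/bin/env python3
"""Apply a Xen livepatch without rebooting the host.

Sketch only: it shells out to the `xen-livepatch` tool assumed to ship
with a recent Xen (subcommands `upload`, `apply` and `list`, as documented
upstream), using a payload built with livepatch-build-tools. Run as root
on the host.
"""
import subprocess
import sys


def livepatch(*args: str) -> None:
    # Print and run one xen-livepatch command, stopping on the first failure.
    cmd = ["xen-livepatch", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit("usage: apply-livepatch.py <payload.livepatch> <name>")
    payload, name = sys.argv[1], sys.argv[2]

    livepatch("upload", name, payload)  # copy the payload into the hypervisor
    livepatch("apply", name)            # switch the running hypervisor code over
    livepatch("list")                   # show the state of loaded payloads
```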
As for the other technical solutions, I will be brief, but in short you've probably heard of some of them. The goal is to have better isolation and to avoid all those side-channel attacks. There are a lot of strategies; it's a non-exhaustive list, there are a lot of different approaches, but we have plenty of ideas on how to improve that.

So, what about performance? There is a link between security and performance, and the link is that you must keep working on the performance side if you are also working on security, because if you forget about performance while piling on more and more mitigations, you will have a problem: there is always a balance between security and performance. Sometimes you can have both, but that's not the usual case. I didn't write it on the slide, but the idea is also to have benchmark tooling, for example in a CI platform, to be sure that your modifications won't ruin the performance of your hypervisor.

On the compute side, as you may know, CPUs are getting bigger and bigger, at least in the x86 world, so you may also have to rethink some parts of the code to adapt to the massive number of cores that exist in today's CPUs.

On the storage side, it's not a new trend per se, but we now have cheap NVMe drives, while some storage stacks were designed back when we only had HDDs. The drives used to be the bottleneck; now an NVMe drive can end up waiting because the CPU is too busy to feed it. So you have to think about how to write to your disks without paying a syscall for every operation, using modern interfaces like io_uring and so on. But it's also something you need to do carefully, because it's always that balance between isolation and performance. So again: benchmark whenever you make a modification there.
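A benchmark gate like the one mentioned above can stay very small. Here is a minimal sketch of such a gate, assuming fio is installed with its io_uring engine available (fio 3.13 or later and a recent kernel); the file path, job parameters and the 10% threshold are arbitrary placeholders to adapt to your own CI.

```python
#!/usr/bin/env python3
"""Tiny performance-regression gate for a CI pipeline.

Sketch only: it runs a short fio job (here with the io_uring engine),
extracts the write IOPS from fio's JSON output, and fails the build if
the result drops more than THRESHOLD below a stored baseline.
"""
import json
import subprocess
import sys
from pathlib import Path

BASELINE = Path("baseline_iops.json")   # recorded by a previous "good" run
THRESHOLD = 0.10                        # allow up to 10% regression


def run_fio() -> float:
    out = subprocess.run(
        ["fio", "--name=gate", "--filename=/tmp/fio-gate.dat", "--size=1G",
         "--rw=randwrite", "--bs=4k", "--iodepth=32", "--direct=1",
         "--ioengine=io_uring", "--runtime=30", "--time_based",
         "--output-format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # fio's JSON output reports per-job statistics; take write IOPS of the first job.
    return json.loads(out)["jobs"][0]["write"]["iops"]


if __name__ == "__main__":
    iops = run_fio()
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps({"iops": iops}))
        print(f"no baseline yet, recorded {iops:.0f} IOPS")
        sys.exit(0)
    baseline = json.loads(BASELINE.read_text())["iops"]
    print(f"baseline {baseline:.0f} IOPS, current {iops:.0f} IOPS")
    if iops < baseline * (1 - THRESHOLD):
        sys.exit("performance regression beyond threshold, failing the build")
```

The same idea applies to CPU or boot-time benchmarks: store a baseline from a known-good build and fail the pipeline when a change eats more than an agreed margin.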
Now, about the new use cases. It's pretty funny, because some of them we didn't see coming. For example, the embedded world: people in embedded really love virtualization, because it allows isolation at the software level, which can be cheaper in some cases. But the challenge in doing that, for a project like the Xen project, is compliance, because you need to meet security and safety standards, especially in aerospace or automotive projects. So you need some kind of framework to validate against, or to be compliant with, those standards without disrupting the way you build software every day. Again, finding the right balance.

You also have new architectures, and I think that's really interesting because there are a lot of great things to get from there, RISC-V for example. There's a lot of talk about it, and I think porting Xen to it, for example, might really be something great in the future. We also see more Arm deployments in the server world; I personally thought that could have been a thing maybe eight years ago or so, and it didn't happen, but maybe this time it will be a bit different, because the performance level is pretty great now.

Okay, so, in conclusion. My opinion, from seeing all of this, is that virtualization will see more challenges than ever, because everything is getting more complicated, especially in the server world and in the x86 world. We have to take into account that security is maybe not great on some hardware, on the BMCs for example, so the whole physical layer should be able to be updated quickly without disrupting everything. And as the layer sitting just on top of that, virtualization carries a heavy responsibility to make this something that just works, because otherwise the overhead won't be contained. Virtualization is a great tool because it's really flexible: you can do a lot of great things with it, like migration, hardware abstraction, isolation, et cetera. But it will stay relevant only as long as the work is done to deal with all the issues in the layer at the bottom.

And this work, which I think is bigger and more important than ever nowadays, because as I said there is the porting to new architectures, all the security work and so on, will need broader collaboration between security researchers, benchmarking people and multiple open source projects. I think that's the challenge in the open source world: getting everyone on board to find solutions together, because one project on its own won't be able to keep up. So it's about continuing to nurture those communities so they work together; that's really important. But the reward is pretty high, because if we can keep up the pace, and if we can also work with the people operating at the limits of hardware and software, we can not only use virtualization in new use cases but maybe also bring a lot of the innovation that will help virtualization thrive in this world. So there are a lot of opportunities around the current KVM and Xen projects, and I think we will see more and more things happening at those hardware and software limits, because in the last six months or so I've seen more and more companies being created to work on very low-level things, like CPUs, architectures, or even building x86 servers, people that aren't Dell or HPE and so on. There are people who want to disrupt this world a bit, and I have the feeling it will continue, so we should take a look at it, because if we are there pretty early in this stage of research, it can lead to really interesting things.

I'm done, thank you. So if you have a question, go ahead. So, that was pretty clear.