Good evening. This is a talk about virtualization in Debian, present and future, by Jan Lübbe. Please, if you want to ask a question or make a comment, ask for the microphone so everyone can hear you, and so the camera over there picks it up.

Yeah, thank you. Thank you. So I'm here to talk about what is possible with virtualization in Debian right now, and how that will change in the near future. I want to thank every one of you for coming to my talk. I've been a Debian developer for only one year now, so maybe I don't know all the details of the other projects that we'll see.

First I'll talk about what virtualization is and how that compares, for example, to emulation, and which projects use which approaches, so we can see which approach and which project is best for each use case. At the end I want to talk about how all that is usable right now in Debian, and how that will change in the future.

So why do you want to virtualize a computer? There are very many different use cases for it. At least in Debian, for a developer,
it's mostly useful for testing software and networking. For others it's mostly useful for consolidation, or for increasing availability and redundancy by supporting the same software stack on several different machines, so you can switch easily. It is also used, for example by Amazon and other big companies, for the so-called cloud computing: you have a software stack that runs on any machine, but you don't really need to worry where it is running, and you can rent the computing time by the hour. Something I haven't seen yet myself is the follow-the-sun model. That is for, for example, global corporations who need really low latency in the part of the world where the working hours currently are, but who don't want to shut down their service, so they use virtualization to move it between the different shifts to another location, without a big downtime.

Yeah, something else: what are you using virtualization for? Anything interesting?

[From the audience:] For example the browser, to keep it separate from the rest of the system. Any internet-facing software: if it is in a virtual machine which doesn't have anything real on it, then if you get broken into, all they have had access to is your toy virtual machine with a minimal installation. We also use SELinux in our virtual containers, so it is very hard to break out of it, even if you compromise Apache or something.

[From the audience:] Okay, I really like to use snapshots on virtual machines. I can just say, okay, I want to make an update of something, so I take a snapshot, which is done in one second or something like that, and then I apply the update there. If it works, then I can say drop this snapshot; otherwise I can go back.
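That snapshot-then-update-then-revert workflow can be sketched in miniature. This is a toy model in Python, purely to illustrate the idea from the talk; real hypervisors implement snapshots with copy-on-write disk images (e.g. qcow2), not deep copies, and the names here are made up for illustration:

```python
import copy

class ToyVM:
    """Toy model of a VM whose disk state supports snapshot/revert.

    Only illustrates the workflow: snapshot, try a risky update,
    then either drop the snapshot or roll back to it.
    """
    def __init__(self):
        self.disk = {"pkg_version": 1}
        self._snapshots = {}

    def snapshot(self, name):
        # Near-instant for real copy-on-write images; here, a deep copy.
        self._snapshots[name] = copy.deepcopy(self.disk)

    def revert(self, name):
        # Throw away current state, go back to the saved one.
        self.disk = copy.deepcopy(self._snapshots[name])

    def drop(self, name):
        # Update worked, so the safety net is no longer needed.
        del self._snapshots[name]

vm = ToyVM()
vm.snapshot("before-update")
vm.disk["pkg_version"] = 2      # apply the risky update
update_ok = False               # pretend it broke something
if update_ok:
    vm.drop("before-update")
else:
    vm.revert("before-update")  # "otherwise I can go back"
print(vm.disk["pkg_version"])   # → 1
```

The point is that trying an update becomes cheap: the decision to keep or discard it is deferred until after you have seen the result.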
So I really like this feature.

[Speaker:] Containing badly behaving applications, yes. So basically, containing anything you don't want to have physical access to your hardware.

There are a lot of different methods to reach this. I'll go from full virtualization over the more complex stuff to hardware emulation. With full system virtualization I mean something like VMware, VirtualBox or KVM, where the guest system doesn't need to know that it's being virtualized. Then there is paravirtualization, which has been done by Xen, and even before that by UML. There's container virtualization, something like Linux-VServer; yeah, there are different terminologies, but you use them for similar use cases, so I've listed them here together, and you have Virtuozzo and OpenVZ which support that, and on the most basic level jails and chroots. Then there's API translation, for example Wine. And there's system emulation, where we don't even need to have the hardware the guest software was written for; there is QEMU, PearPC, DOSBox, Hercules, and a lot more in Debian.

So for emulation, the host has software which replicates all the hardware that is necessary to run the guest software. The advantage is that you can run any guest software on any hardware system, but it's really slow, so it's mostly useful for testing, not for running services. But even with emulation it's sometimes faster than the real hardware which is available: for example the arm buildds run on really fast AMD64 machines, which are faster or have more memory than the real machines.

Full system virtualization is for the same goal, but you need the host to have the same architecture as the guest, except for AMD64 and i386, so you can run i386 guests on AMD64. It has higher complexity, because the guest expects to have access to the hardware, and you need to emulate these parts of the system. This really depends on the hardware you're running on, because if you're on an architecture that was actually designed for it, like the good old mainframes, the complexity is not actually that
high. But even on x86, as long as the actual emulated hardware the guest expects is simple enough, it's not significantly more complex. A very good example for that is VMware, which very carefully picked the hardware they emulated to avoid this complexity. But it's still harder to get right than just emulating.

Then we have paravirtualization, which Xen is doing. That is the first time we tell the guest that it is running on virtual hardware, so it can cooperate. So we have lower overhead, because we don't need to emulate things like disk controllers, network devices and so on, but of course the guest needs to be modified, so we can't run just any operating system.

Then we have container virtualization, which may not be virtualization at all, because we run just one kernel, but the applications see different environments. With minimal overhead we have better resource sharing, because we can share the page cache, and one guest can use the memory when another guest is using less. But of course we can only run one kernel, so all guests have to use the same kernel, and at least with the current implementations, the isolation is not as strict as with paravirtualization or full virtualization.

API translation: we've seen Wine. This has taken a very long time to reach the current state, because you need to reimplement a whole API, which is really complicated. But you don't need to run the original OS once you've reimplemented the API, so it's low overhead.

So now we'll see what projects there are. There's QEMU, which is an emulator. It has been around for a long time, emulates a lot of hardware for different architectures, and runs on different architectures. It uses something called dynamic binary translation, which means it looks at the code which is running in the guest, translates it to native code, and stores this native code somewhere else. Its device emulation is also used by Xen, KVM and VirtualBox, so it's some sort of base for some of the others. It can already do snapshots, and it can do USB pass-through to the
guests, so you can use all sorts of USB devices in your guests. And there's an additional kernel module, kqemu, which can accelerate the guest on x86 and AMD64 hardware; then it's a lot faster than just emulating. There's a basic management interface called qemu-launcher, which is also available in Debian.

Then we have VirtualBox. It was started by innotek, a German company, and is now developed by Sun. It is the core of a commercial product: there's VirtualBox OSE, the Open Source Edition, and there's a commercial edition which has more features; for example, USB pass-through is only in the commercial variant. There are some interesting features: you can use the so-called seamless windowing mode, so you have, for example, Windows windows on your Linux desktop, without having a Windows desktop, only the windows. Sun bought VirtualBox earlier this year, when they acquired innotek. Only the VirtualBox Open Source Edition, which is the core, is free; the rest is commercial. Yes, you can download it for free and try it, but you are expected to buy it then. The commercial edition also has things like RDP access to the virtual machine, and so on.

Yes, so, Xen. It has also been around a long time, was open source from the start, and currently supports x86, AMD64 and the IA-64 Itanium architecture. They started by paravirtualizing Linux, and probably started the paravirtualization trend in Linux. It also supports full system virtualization on modern CPUs which have the Intel VT and AMD SVM support, which is on the current CPUs. You have a small hypervisor which starts before the kernel, then loads the Linux kernel and runs that as the so-called dom0, which has access to all the hardware; and from there you start the guest systems, which are called the domUs, and they use virtual devices which are backed in dom0. Currently Red Hat is pushing for inclusion in mainline. For the server space it's probably the most mature open source variant of
virtualization. There are a lot of management tools available. You have live migration; that means if you have two systems with access to the same disk devices, you can switch the running machine from one to the other in less than half a second or so, so it's not noticeable to users who access these machines over the network. You can pass through PCI devices, so if you have special hardware that needs to be used in the guest, you can pass it through. The management tools, for example XenMan, xen-tools, xen-shell and Ganeti, are all available in Debian.

There are currently some problems with using Xen. There is no support for the dom0 in the current mainline kernel, and only basic support for the domU in mainline. In the current kernels they are merging mostly the domU side; in 2.6.28 we will hopefully have dom0 support. They are switching their virtualization interface from the proprietary one to the one that is used in mainline Linux, which is paravirt_ops, but that's not ready yet. The problem with using a separate hypervisor below the Linux kernel is that you don't have suspend, you don't have power saving; you can't really use it on notebooks, because you lose suspend. There's also the problem that 3D acceleration on the host is difficult, because you lose some direct access to the hardware. And in general there's a lot of code duplication: they took some device drivers and ACPI stuff from Linux and copied it into the hypervisor.

Then, more recently, we have KVM, which is a hypervisor completely based on Linux. It currently supports x86 and AMD64, but there is work going on for IA-64, for PowerPC 440, which is an embedded PowerPC also used in some large IBM boxes, and there is support for s390, which is not yet ready. In contrast to Xen, you need the hardware support, even if you use the new paravirtualization stuff, which is only used to speed up I/O. The guests run as user-space processes on the host, with some kernel support. It of course uses QEMU to emulate all the
hardware. The company that sponsored this development is called Qumranet, and it's now selling its first product, SolidICE, which is some sort of desktop virtualization: you run the employees' desktops on servers and they just access them over the network. And with the last release, Ubuntu has chosen to use KVM as their default virtualization method, and only uses Xen if you have no hardware support.

The features are similar to QEMU, so you have the snapshotting and so on, as with QEMU. You also have live migration, as in Xen. You can swap the guest pages, because they're just normal user processes. And there's a migration path for Xen users, which is an experimental project called Xenner, which can run Xen guest kernels on KVM. And of course, because you're just running normal Linux, it really works well on the laptop. Also, it's compatible with the real-time extensions which are currently being merged, so you can run a hard real-time Linux kernel and, for example, a Windows GUI on top. I've seen people use that for controlling welding lasers.

There are still some problems, because it's under heavy development.
They don't have stable releases, so sometimes you have regressions; the performance is not really predictable from version to version, and there are no integrated management tools yet.

So if you don't want to run a whole system in the virtualization, you can choose something like VServer or OpenVZ, where you have only the Linux kernel and several containers below that. Linux-VServer was a community project, which has stalled a little bit lately. OpenVZ was first a commercial product by Parallels and is now open sourced; they are pushing for mainline integration and are further along with that than Linux-VServer. It isolates file systems, users, processes, networks and devices per container from the other containers. So we have scheduling inside each container, and each container has limits on what sorts of resources it can use; they are shared among the containers. Using this approach has some advantages compared to full virtualization, because you can just access the file systems from the host, and you can see all the processes on the host. Because it's just one kernel, the page cache can be shared between the virtual containers, so you don't need as much memory and can run more guests on the same hardware compared to KVM or Xen. It also now supports checkpointing and live migration, but it's not really as seamless as with the others, because you first need to save the snapshot to disk, transfer that, and boot it on the other machine, so that takes a lot longer than half a second.

Yeah, VServer had, until two days ago, not been ported to the current kernel; now there is a patch, so that may come into Lenny if it's stable enough.

Now, for managing all this stuff, there has been an initiative by Red Hat: that is libvirt. It is an abstraction layer over the different virtualization projects, so it supports Xen, QEMU, KVM, LXC, which is Linux containers, and OpenVZ. There is a GUI interface for managing the guests: creating guests and virtual networks, attaching them, rebooting
them, and accessing their consoles. It is split in two parts: the libvirt daemon, which runs as root of course, and the user interface, which can connect to the daemon in various ways: locally via D-Bus and PolicyKit, or over the network via SSH with different authentication methods. It also announces the presence of its service via Avahi.

Yeah, where do we stand with Debian? We right now have no Xen dom0 kernel in Lenny or in unstable, which is a problem for all the people who installed Xen with etch: they have no clear update path. It is possible that you can use the old etch kernel with the current user space, and we hope that when Xen has been integrated upstream, maybe for Lenny-and-a-half if that happens, we will then have a current kernel again which can run as Xen dom0. There's no decision yet whether we want to have the old etch kernel supported in Lenny, or what should happen there. Then the VServer support: it is supported of course in etch, but because until two days ago there was no patch, it's not yet in Lenny; it seems that it will be supported in Lenny, though. There are OpenVZ packages in unstable, which will probably migrate to Lenny, so we have support for that. And QEMU, VirtualBox, KVM, OpenVZ and libvirt are already in Lenny and are really easy to install and use.

Yeah, where will that go? Xen will probably be merged into 2.6.28. OpenVZ is currently being merged step by step; it's a large set of changes. Red Hat will continue to develop libvirt, so we have a consistent console to manage all of the virtualization stuff. Then there's some development in QEMU going on; I don't know what the roadmap is. Perhaps somebody here knows more?

[From the audience:] About QEMU, there is a lot of change upstream currently. They've switched from using GCC as the code generator to something called TCG, the tiny code generator, so we will be able to remove the build dependencies on GCC 3.4 and support more architectures. It's also a bit faster.
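The dynamic binary translation idea behind QEMU and TCG, mentioned earlier in the talk, can be sketched in miniature: translate each guest code block into host code once, cache the result, and reuse it on later executions. This is a toy illustration in Python with a made-up three-instruction guest "ISA", not QEMU's actual design:

```python
# Toy dynamic binary translator: a tiny invented guest instruction set.
# Each guest block is translated once into a Python function (our
# "host code") and cached, so re-executing it skips translation.

GUEST_BLOCK = [("addi", 5), ("addi", 7), ("muli", 2)]  # (0+5+7)*2

def translate(block):
    """Compile a guest block into one host-side function."""
    steps = []
    for op, arg in block:
        if op == "addi":
            steps.append(lambda acc, n=arg: acc + n)
        elif op == "muli":
            steps.append(lambda acc, n=arg: acc * n)
        else:
            raise ValueError(f"unknown guest op: {op}")
    def host_code(acc):
        for step in steps:
            acc = step(acc)
        return acc
    return host_code

code_cache = {}

def execute(block, acc=0):
    key = tuple(block)
    if key not in code_cache:      # translate only on first execution
        code_cache[key] = translate(block)
    return code_cache[key](acc)

print(execute(GUEST_BLOCK))  # → 24
print(len(code_cache))       # → 1; a second run reuses the cached code
```

The real thing translates machine code to machine code, of course, but the shape is the same: the translation cost is paid once per block, which is why translated emulation is much faster than instruction-by-instruction interpretation while remaining fully architecture-independent.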
And QEMU had been in standby for a few months, but since the end of April there is actually a lot of new code being merged, mainly from KVM. So I hope we'll have, for Lenny plus one, SPARC64 guest support; that will be useful given that Debian doesn't support SPARC32 anymore. And maybe also the paravirtualized devices, like a network card and a disk.

[Speaker:] Thank you. And in the long term, which is maybe two or three years, we will see that most of the OSes are available paravirtualized, so we will have a common paravirtualization interface, which can of course improve resource sharing and performance. You can have ballooning, which means adding and removing memory from a running guest, and maybe, hopefully, pass-through for 3D graphics, but there's still a lot of research to do in that area. Yeah, does somebody have ideas about what will happen besides this list?

[From the audience:] Dom0 in 2.6.28 seems very soon from my perspective as a kernel developer. We've not seen any patches yet, and given what a bunch of crap all that Xen code always was, it will take a while to get it into shape. And OpenVZ as-is will not be in mainline. What actually is being merged mainline is the work of the container folks, which is a lot of the OpenVZ folks, a lot of IBM folks, people like Eric Biederman, who was not part of any of those projects, and a little bit of the VServer folks, but not much, actually. They are basically doing everything from scratch, or sometimes taking good parts of the existing solutions, and this has been ongoing for at least a year and a half, if not more; it's not really going to be finished any time soon. That's the perspective from kernel land.

[Speaker:] The number 2.6.28 was from a status report from the Xen developers. So will it all be merged by then?

[From the audience:] I'm not going to say no, but I would be very surprised. But at least there is development going on to get it better supported, or at least rebased on the stuff that is
currently mainline.

[Speaker:] So hopefully we will have current Xen support in the kernels again. Yeah.

So that is a call for help: we probably need to do some improvement in Debian regarding virtualization. You have all the tools there, but you still need to install them and tweak stuff manually. libvirt will help there, but I think we can still do something to make it easier to use Debian as a platform for virtualization. Currently most people probably use Red Hat for Xen, and maybe Ubuntu for KVM, but it's not that well integrated in Debian yet.

To make that happen, we really need to have more testers, at least for the paravirtualization stuff, because it depends a lot on what hardware you have; some bugs are only visible on one processor with a specific guest. So it's not really possible for one or two developers to test all the possible combinations. I would really like to encourage every one of you who has some use case for this to try some of this stuff, report bugs, and try it with different guests, so we can get a lot more coverage that way. Because I can't do that on the porter machines for the different architectures; for example on PPC it's not possible on the porter machines, because you need root access, you're restarting the machine, you get kernel crashes, and they don't like that on the Debian porter machines. So if someone has access to that hardware and can run those tests, that would be really helpful.

I also want to make a call for help for the QEMU package. There is currently a team, but it seems that currently I'm the only one working on it. So I would like to find some co-maintainers; if someone is interested, please come talk to me.

Any questions?

[From the audience:] Yeah, the current situation with Xen is basically that we only have the kernel supported...

[Speaker:] Supporting the etch kernel, yes.

[From the audience:] Yes, so when we switch to Lenny, hopefully soon, the only way to keep running our Xen servers would be to use that kernel?
[Speaker:] Yes.

[From the audience:] And is there any... Because of the problems that, I don't remember his name, mentioned: it's basically that they forked too far from the mainline kernel, and that's why Xen hasn't been merged back in?

[From the audience, the kernel developer:] Yeah. The Xen developers started designing their own virtualization interface, and they are the only ones that use it. Right now in the Linux kernel there's this paravirt_ops interface, which the old Xen patches don't support. They didn't even add an interface, but instead they claimed they were a new architecture supported by Linux; think of it like PowerPC, x86, and then Xen, which was at that time of course only Xen on x86. So this would have duplicated even more if they had supported more architectures, and we kind of said: this is really not maintainable, because 90% of the code is copy-and-pasted from x86 anyway. So we had better find some interface, and that's part of where paravirt_ops came from. That's what KVM is running on, that's what the Xen domU support in mainline right now is working on, that's what the strange VMware VMI binary blob runs on, and that's what they finally, after ignoring it for a year, have started porting their dom0 to. But the problem is that just porting to this interface alone is not enough; there might be parts not yet covered by that interface, because no one has done as strange and drug-induced things as the Xen developers did. And it's of course not just that interface in the machine-specific code, but a lot of device drivers which are very different for dom0 versus domU, so it's still a lot of work. And the last time I saw the Red Hat patches,
they were peeing all over the kernel tree, and that's something generally not looked upon very well.

[From the audience:] Well, from what I've seen, and reinforcing what you're saying: Xen was the great buzzword some years ago, but I think it's going to fade away, because on one hand it's a lot of code duplication, and it's much dirtier. And, well, you were asking what we should do with our Xen hosts after Lenny is released. I think the easiest way out will be to advise our users, when it's possible, to migrate: when they're using full virtualization, the best thing is to migrate to KVM; other users may migrate to VServer. Why? Because in the end they are much cleaner: instead of adding an external operating system, well, the hypervisor is just the kernel. The approach is quite saner, in my opinion. Of course, there are many things that Xen offers which are not integrated in the other solutions, but I think that migration is already happening by itself.

[Speaker:] The way it is right now, the switch, at least for the Lenny release, will be quite hard for people using Xen, because KVM only runs on machines which have the hardware support, and it's not as stable yet as Xen on etch was. And if you need to run, for example, Windows, or systems where paravirtualization is not easy, then KVM right now isn't really a full replacement for Xen.

[From the audience, the kernel developer:] I mean, if you have the virtualization-capable hardware, there actually is a pretty neat migration path, and that's the part that you already mentioned, Xenner, which works very well. But about the stability,
I don't know. I've only heard that stability argument from people who are not actually using it. I use KVM on my development and travel laptop all the time, because that's where I have my virtual machines and where I do all the kernel testing, filesystem testing and distribution installs, and I never ever had a single KVM-related incident in the last year and a half. I mean stability-wise; from the source code side there's still a lot of change going on, but even that is not a problem, because the user space is backwards compatible. So I still have a KVM user space from over a year ago, and I just put the new mainline kernels under it; I ran one built yesterday, and it still works perfectly.

[Speaker:] Yeah, so we probably need Xenner support in Debian soon.

[From the audience, the kernel developer:] And the other part is: this is why I, back then in etch times, as part of the kernel team, was very unhappy that we released with Xen at all, because I was expecting exactly this, that no one could be bothered to maintain it anymore. It was shipped anyway, and now we have this debacle, and we'll see the same debacle with VServer and OpenVZ and whatever else people want to push in, because at some point that's just not going to be maintained anymore. So everyone who's thinking about it: I would strongly recommend against starting to use it, at least for a production system; for playing around it's perfectly fine, of course.

[From the audience:] Hello. Right now I'm in the process of updating the Debian kernel handbook for Lenny, and if people who know about this stuff would like to write a paragraph or a chapter for the handbook to document the current situation with Xen, that would be a very good contribution.

[Speaker:] Where can you be reached?

[From the audience:] That's jurij at debian.org.

[Speaker:] Yeah, something else? Oh. Thanks for your attention.