Okay, let's start then, it's almost time. Thank you for coming. I have a tendency to overrun, so start looking at the clock early. I'm going to give you a brief overview of the Xen Project, of how the hypervisor actually works, and some of the technology around the Xen Project. You've all been here, I guess, for the last two days, so I'm not going to dwell on the news that we've become a Linux Foundation collaborative project; I'll go straight into the guts of things. Here we go.

So, let me talk a bit first about what the scope of the Xen Project is. We have a number of teams. Originally, in the old Xen.org world, we called them sub-projects, but "Xen Project sub-projects" sounds a bit odd, so I think "teams" works a little better. There are really four areas, four teams. One is the hypervisor project, which does the core hypervisor. Another is the Xen API, or XAPI, which is a tool stack on top of Xen. Then we have the ARM hypervisor, which comes in two flavours, one for servers and one for mobile devices; the mobile side was originally pioneered by Samsung, who presented on it on Monday. And a new addition is Mirage OS.

From a governance perspective, the Xen Project is pretty much a mixture of how the Linux kernel works with a few elements of Apache governance. From a code perspective it's GPLv2, with sign-off on patches via the DCO, and all discussions happen on a mailing list. But then there are a number of additional things that are a bit more Apache-like: we have consensus-based decision-making; we have a life cycle for sub-projects to be included in the scope, with an incubator that projects have to graduate from, and if they don't take off they eventually get archived; and each sub-project can have a PMC-style leadership structure, if it chooses to.

Before I explain a bit about the architecture, I wanted to show you some of the progress the community has made over the last two years in terms of diversity. If you look here on the left, in 2010 about 60% of the contributions to the project came from Citrix. In 2011 there are more colours, but not quite as many as in 2012. That's been driven to a large degree by new feature development, but also by a process of formalising the governance of the project. In 2010 we had pretty much no written-down rules; it was just "come to the mailing list and figure out how it works by watching other people", which is a pretty bad way of running a community and encouraging people to make contributions. You see the 2012 Citrix proportion go up a little; that's mainly because the Citrix team almost doubled between 2011 and 2012, and also some of the committers at Citrix retired some old code, which then got counted towards that proportion, so in reality it's probably a little less.

So let's talk about hypervisor architecture a little. Typically there are two different architectures. Type 1, where the hypervisor sits directly on the hardware, essentially replacing the host OS entirely: you have device drivers in there, all the functionality for scheduling, memory management and so on, and then VMs running directly on top of it. That's the ESX type of model. And then we have type 2, of which KVM is an example, where the hypervisor basically runs as a set of device drivers within the host OS.
I'm not going to make any judgement; I think there are good use cases for both of these architectures, and it gives everybody within the Linux community different choices and different trade-offs they can make.

So let's look at Xen a little more. I call Xen a type 1 with a twist, and let me explain what I mean by that. On the left you see your traditional type 1, which has all the device drivers re-implemented in the hypervisor; it also has to implement functionality to emulate IO and so on if you don't use PV. In Xen, the device drivers and device models aren't actually part of the hypervisor. They sit in a thing we call dom0, which is a special VM: it's the first VM that gets started, and it's privileged. Within that VM we just reuse the device drivers of the host operating system, which typically is a Linux distribution, and the device models live there as well. And then on the right (I'm confused about left and right because I'm on stage) you see all the guest VMs as usual.

Let me say a bit about the Xen Project and Linux. The Xen hypervisor itself isn't in the Linux kernel today, but everything you need to run Linux on top of it is: you can install any Linux distribution and run it as domain 0 or as a guest. From a user's perspective, the way this works is that you choose your favourite dom0 distro, install it on your machine, and then download the Xen package and install it. That changes your boot order such that, once you've rebooted, the Xen hypervisor gets started first, it kicks off your domain 0, your host OS runs within that domain 0, and then you start configuring the system.

I wanted to cover a few more concepts around Xen. We've seen some of the basic building blocks: hypervisor, control domain, VMs. Then we have the thing we call a tool stack, and the console. The tool stack contains all the management functionality (create VMs, shut down VMs, do migration, all that kind of stuff), and via a remote protocol it talks to some sort of console, which could be a command line tool, a cloud orchestration stack, or some other sort of UI. (I forgot my clicker, so I have to keep going back and forth, unfortunately.)

Xen also has this idea of taking functionality out of the control domain, and we call that whole approach disaggregation. The idea is that I can, for example, take a device driver which normally runs within your control domain and put it into a de-privileged VM, for example an Ethernet driver; or I can take the device model, which is typically implemented in QEMU, and run that in a separate VM, which is then called a stub domain. That whole concept is called disaggregation, and I'll get to it in a bit more detail later, because it's quite interesting from a number of perspectives.

At this point it's probably worth talking about a number of Xen variants. Xen has been around for a while, and what's happened over time is that we have a number of different tool stacks which work together with Xen. That gives you choice, but on the other hand it also adds complexity. Xen itself comes with a default tool stack, whose command line tool is called xl.
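To make that install flow concrete, here is a rough sketch of what it looks like on a Debian or Ubuntu based dom0. This is illustrative only; package names, bootloader handling and the exact xl output differ between distributions and Xen versions.

```
# Install the hypervisor and tools on the distro you want to use as dom0
# (Debian/Ubuntu metapackage shown; other distros often just call it "xen"):
sudo apt-get install xen-system-amd64

# The package adds a Xen entry to the bootloader; reboot into it.
sudo reboot

# After the reboot, the same distro is now running as domain 0:
sudo xl list     # shows Domain-0 plus any running guests
sudo xl info     # hypervisor version, memory, capabilities, and so on
```

From there, guests are defined in small config files and started with xl create, as sketched in the config examples later on.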
There's also a deprecated old one called xm, and that gives you single-host functionality. There's a libvirt integration as well, which gives you a bit more functionality, and then we have a tool stack called XAPI, with a command line tool called xe, which allows you to manage multiple hosts, clusters of hosts grouped together, from one console. Then there are a number of commercial products built on top of these: Oracle VM, Huawei UVP and Citrix XenServer all use Xen, and all of them use different combinations of tool stacks as well. And of course we also have big service providers, cloud providers, using Xen today; a couple of examples are Amazon Web Services, SUSE Cloud and Rackspace Cloud Servers.

At this point I wanted to talk a bit about types of virtualization and how it works under the hood in Xen. What I wanted to show here is what we call paravirtualization (PV). In that picture you see the Xen hypervisor at the bottom, a guest VM on the right, and the control domain as well. For IO, what happens is that your application talks to a PV front end, which is basically a shim driver that passes your IO through to a PV back end, which then talks to the native hardware driver in your Linux distribution and passes it on to the hardware. That's what we typically do for disk and network, and it's a really fast approach to virtualizing. It also works on really old hardware without virtualization extensions.

I mentioned disaggregation before, and the concept here is exactly the same: the only difference is that your PV back end and hardware driver sit in a separate VM. If, for example, you have an Ethernet driver running in there, we'd call that an Ethernet driver domain. That has a number of advantages, because you get extra security, isolation, and also additional robustness. You could, for example, imagine regularly wiping and restarting your driver domain, and that way protect against any infection of that domain. There are actually a couple of really interesting papers which look at this from a security-attack perspective, and I'll have links to those later in the presentation.

The second approach to virtualization in Xen is what we call HVM, where you do IO without paravirtualization. Fundamentally, what happens is that when your guest accesses an IO block, it traps into the hypervisor; the hypervisor passes control to dom0, in which you then run QEMU to emulate the device, and the result is passed back. That's pretty much also the approach which KVM uses. And as before, we can also use disaggregation here: the device model runs in a separate VM which we call a stub domain. There's just one setting in a config file where you enable that approach (I can't remember what the option is called, I'd have to look it up), and then, every time you start a guest, it also starts a stub domain which runs the device model. QEMU is actually quite a big code base, and for that reason there are potentially flaws in there which could be exploited, so putting it in a stub domain gives you additional protection.
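As a rough illustration of that "one setting", here's what a minimal HVM guest config for the xl tool stack might look like. The option we believe is being referred to is device_model_stubdomain_override, but treat the names, paths and values below as an illustrative sketch rather than a reference, since they vary between Xen versions.

```
# /etc/xen/hvm-guest.cfg  (illustrative sketch)
name    = "hvm-guest"
builder = "hvm"
memory  = 2048
vcpus   = 2
disk    = [ 'phy:/dev/vg0/hvm-guest,xvda,w' ]
vif     = [ 'bridge=xenbr0' ]

# Run the QEMU device model in its own stub domain instead of inside dom0,
# so a compromised or crashed device model cannot take dom0 down with it:
device_model_stubdomain_override = 1
```

Started with xl create /etc/xen/hvm-guest.cfg, such a guest then gets a small companion stub domain alongside it, which is exactly the behaviour described above. Ah, there's a question.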
Audience: Yeah, just looking at your picture, I wanted some clarification. When you run a stub domain, is that essentially a process talking to the hypervisor, or are you still utilising dom0?

No. If you use a stub domain to emulate your device, dom0 isn't really in the data flow at all. What happens is that you access your IO block, control gets delegated to the hypervisor, the hypervisor knows there's that stub domain running the device model, and the result goes back that way.

Audience: So for each virtual machine you have a corresponding stub domain there?

Right, that's what gets started alongside the guest. There's another question behind you.

Audience: Yeah, just to clarify a bit: you do still need a dom0 to do management?

Oh yes, you do need a dom0; I just wanted to be clear about that. And you might also decide to have multiple guest VMs using a single driver domain. But the easiest way is to just have it set up automatically with one config line option.

Audience: Last one: on the previous slide you also showed the device domain. What's the relationship now between device domains and guest VMs, one to one, one to n, or n to one?

Hang on, this slide? Basically you can set this up and configure it however you want. Mike, do you want to take that? Just take it, then.

Okay, sorry. My name is Mike Bestel, I'm a product manager for XenServer and I also used to be the network architect for XenServer, so I know a bit about this. A classic way you might do it, for instance, is to have a driver domain per NIC, so you'd basically have a one-to-one network mapping; or you could have two NICs bonded into a single driver domain, that sort of thing. You might have storage driver domains in a similar setup. But it's up to you how you set it up, and there are a variety of different ways you might want to do it; you might even think about layering them in the future, but that's just a thought.

Any more questions on this? One more. Sorry, Russell, for making you run with the mic.

Audience: I'm almost deaf, so that's not going to work. Can you hear me?
Audience: I was wondering if all three of these scenarios still don't require VT support on the bare metal. So for the dom0 solution, the paravirtualized one, neither the kernel nor the hardware has to be virtualization-aware, or what have you?

For HVM you need the virtualization extensions; for paravirtualization you don't. If you look at the history of Xen, it started out with PV, because when Xen started there was no hardware virtualization, and now you can mix and match. It's also one of the reasons why there's a lot of complexity around Xen today, because we're carrying around some of that history.

Audience: And for the stub domains, is it the same thing? The operating system has to be aware of the virtualization, and it has to be enabled in the hardware?

Yes, but if you use a modern Linux version all of that is integrated, so you don't even have to think about it; you just configure it.

Audience: What about a VT-enabled processor and all of that, any of the direct IO type features? Are we even at that level with any of these solutions?

I'm not really covering this in this presentation, but you're talking about the VT extensions from Intel: Xen can benefit from them, but it doesn't require them. Xen has a broad history of how it got to where it is, from where the hardware was at the time to where we are now, so you can take advantage across the breadth. Another example is the TPM, the Trusted Platform Module: one of the first implementations we did was virtualizing it, so that the hardware TPM is tied to dom0 and you can then have virtual instances of it. So you do not require the VT extensions, but, as was mentioned earlier, given the history of Xen you can benefit from them depending on how things are implemented.

Audience: For people who are setting this up for testing, or just as a proof of concept, are there any links to recommended best practice on which solution to choose based on the desired outcome?

There is some best practice out there, but in my personal view that's something we need to improve; we need a bit more guidance as to which option is best for which workload. So there could be a lot more around that, and actually Russell, who has just been handing you the mic, is going to help me fix some of that in the coming months. Anyway, I have to move on, otherwise... oh, I'm really behind. Here we go; we'll have more questions towards the end.

So, we've talked about the two fundamental approaches to virtualization, and what this table shows is how that has evolved over time. We started off with paravirtualization. Then, when hardware virtualization came along, full HVM was added, what we call the fully virtualized mode. After a while it became possible to add PV drivers to that, another mode called PVHVM. And being implemented right now (it was originally supposed to be in 4.3, but it slipped into the next release because the patch review process took a lot longer than anticipated) is the PVH mode, which we believe will be the optimal mode going forward. You see that matrix of where software is used to virtualize, where hardware is used, and where paravirtualization is used, and I'm overlaying a colour scheme to show how good each combination is from a performance perspective and where there's still scope for improvement. It looks like a scary picture, but from a user's perspective you really just choose either HVM mode or PV mode, and Xen then figures out, based on the capabilities of the operating system and the hardware, what the best combination for you is; you can still override some things if you want to.
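In config-file terms, that choice roughly comes down to a line or two in the guest definition. The fragments below are an illustrative sketch for the xl tool stack of that era, not exact syntax for every version, and the paths are made up.

```
# HVM guest: requires VT-x/AMD-V; the platform is emulated, and a modern Linux
# guest kernel will pick up PV (PVHVM) drivers on top of that automatically.
builder = "hvm"
boot    = "c"
disk    = [ 'phy:/dev/vg0/hvm-guest,hda,w' ]

# PV guest: no hardware virtualization extensions needed, but the guest kernel
# must be Xen-aware. Either point at a kernel directly...
kernel  = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest"
# ...or let pygrub pull the kernel out of the guest's own disk image:
# bootloader = "pygrub"
```

Everything finer-grained than that, PVHVM and later PVH, is then negotiated from the capabilities of the guest kernel and the hardware, as described above.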
Anyway, I'm going to move on a little bit and talk about the XAPI tool stack. Just coming back to that earlier picture with the different tool stacks: XAPI is the one on the right, which gives you the capability to manage multiple hosts and pool them together. So what is it, and what do you get? I'm not going to read this list out to you; I'll just pick out one really interesting feature, which is what we call Storage XenMotion. It allows you to migrate VMs from one host to another while they're running, without having to set up shared storage. That's really interesting if you didn't plan your infrastructure well enough, because you can still do migration. There are a lot of usability features in there as well, so if you're interested, follow the link at the bottom; I will also post the slides on the Xen Project website, and they'll obviously be posted by the Linux Foundation as well.

XAPI really comes in two variants. One is an ISO: the current version, 1.6, contains Xen 4.1.3 with the XAPI tool stack, a CentOS 5.x based dom0 with a number of kernel patches, and Open vSwitch as well. That gives you an out-of-the-box solution: you put in the CD, install the entire appliance on a fresh machine, and you're up and running. The second variant is that you get the XAPI packages via your Linux distribution and build the system up yourself: use the package manager, get Xen, get the XAPI packages, and get up and running that way. Right now we only have that in Debian Wheezy and Ubuntu, but there's work going on for other distros. I just talked this morning to the project lead; I was hoping we'd have it in CentOS 6.4 already, but it's not there yet. You can build it, and I guess it will go in at some point in the coming weeks.

What's also interesting about XAPI is that it interfaces with the major cloud orchestration stacks: you can drive XAPI via OpenStack, Apache CloudStack and OpenNebula. And as a management UI there's a neat little project which started recently called Xen Orchestra; they're releasing on a high-velocity cycle of two to three weeks and getting a lot of user feedback to make it a really nice management UI.
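For a feel of what the multi-host side looks like, here is a small sketch of the xe command line that XAPI provides. The commands exist in XAPI-based products, but check the help on your version for exact parameters, especially for Storage XenMotion.

```
xe host-list          # hosts in the resource pool
xe vm-list            # VMs across the whole pool, not just one machine

# Ordinary live migration of a running VM to another host in the same pool:
xe vm-migrate vm=my-vm host=other-host live=true

# Storage XenMotion (migrating without shared storage) uses the same command
# with additional remote-* and destination-sr parameters; see
# `xe help vm-migrate` on your installation for the exact flags.
```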
Let me talk a bit about challenges for open source hypervisors; it links back to this whole point about security. I did a bit of research on various surveys, and what typically comes up as the top blockers for cloud adoption is security and reliability. We did our own survey as well, with about 5,000 community members replying, and what they felt were really the top issues were robustness, performance, scalability and security, in that order. So that's a nice point to come back to the idea of disaggregation, which I covered conceptually earlier.

There are a couple of interesting talks and papers to look at if you're interested in this in more detail. One paper, called "Breaking Up is Hard to Do", takes this whole idea of disaggregation to an absolute extreme: in that architecture and implementation, every time the tool stack is accessed, for example, they spin up a VM with a new tool stack in it, and when it's not needed any more it's shut down again. They then did some benchmarking and measurements around whether this is a feasible architecture going forward, and some surprising lessons came out of it: even though it's maybe a little counter-intuitive, some of these disaggregation approaches increase scalability as well as security, robustness and performance. I'll get to that in a bit more detail.

And this isn't all pie-in-the-sky stuff; disaggregation is actually used today by a number of products, not necessarily in the traditional server space. There's an open source project called Qubes OS which uses the disaggregation approach to sandbox applications, the Ethernet driver and so on. You basically have applications running in different VMs, and a modified X Windows version creates the illusion of a normal Linux UI. In the little screenshot down there, the different coloured window frames belong to the different VMs your applications are running in. That's quite cool and really worth looking at. I think they're at release two now and planning release three, and they're also experimenting with sandboxing not just Linux VMs but also running a Windows VM in that environment if you want to. So there's some really interesting innovation going on around that.

It's also happening in tool stack land: this is being prototyped for XAPI as well, to make it almost an out-of-the-box, smooth experience, such that you don't have to deal with config files and you get predefined configurations for it. I'll show you what that means in a second.

So what are the benefits? More security, because you have more isolation. It's also really interesting from a serviceability and flexibility perspective: in a complex running system, you have your dom0 running and you can just update your network driver domain, or your tool stack domain, or any one of these service domains, without impacting the rest of the system, and that gives you increased flexibility as well as serviceability. Also, because you have a much more distributed system, it's a lot more robust: say your network driver domain goes down, well, you just restart it and the rest of your system is unaffected. Counter-intuitively, it also increases performance and scalability, because you don't have a bottleneck where everything goes through dom0; you have a much more distributed architecture. I put a little picture up there: we ran an experiment where, on a cron job, we killed and restarted the Ethernet driver domain, and it takes about 275 milliseconds to do that restart, which is quite acceptable in your typical use case.
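That experiment is easy to picture as a small loop in dom0. The sketch below is purely illustrative, with a hypothetical driver domain called netdrv defined in /etc/xen/netdrv.cfg, and it assumes guests simply reconnect through their PV front ends once the back end comes back.

```
#!/bin/sh
# Crude stand-in for the cron job described above: tear down and re-create
# the network driver domain once a minute and observe how long guests lose
# connectivity (the restart itself was measured at roughly 275 ms).
while true; do
    xl destroy netdrv
    xl create /etc/xen/netdrv.cfg
    sleep 60
done
```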
So let me show you what XAPI actually looks like before and after disaggregation, and what impact that has on the tool stack architecture. In a previous diagram I simplified XAPI quite a lot as just "the tool stack", but it's actually a tool stack plus a lot of other components which interact with each other. There are quite a few components, agents and so on which are part of XAPI; in a traditional environment they all run within domain 0. After disaggregation the system looks like this: you have the absolute minimum running in dom0, and you have, for example, the network drivers, the vSwitch and so on running in separate domains, and on each host we introduced a fast communication channel, based on D-Bus, between them. That's one of the reasons why performance and scalability improve: it removes the bottleneck and gives you an additional back channel for communication between those specialised domains. Russell, is somebody having a question over there?

Audience: So, Lars, does this mean that when I do a live migration from my left host to my right host, all these service domains need to be migrated too?

That's a good question. The nice thing about this is that it's generally completely transparent to the guest VM at the top. The guest doesn't care, because all it cares about is that its front ends can talk to something which provides the functionality they need. So one of the good things about this is that it should be very easy to migrate from a disaggregated host to an old-style aggregated host, because it should make absolutely zero difference to the guest VM at all.

Audience: I agree that for the VM it's completely transparent, but I guess underneath the covers it would mean that all these service domains need to be started on the new target host during a live migration?

Not necessarily at all. There's no reason the host you're migrating to has to have the same underlying architecture as the host you're migrating from, as long as it can serve the front end drivers that are in the guest VM in whatever way it wants. As long as they're available to that guest VM, there's no need for the hosts to share the same architecture. One way you can almost think about this is that the PV front end sits within your guest VM and gives you a data plane leading somewhere, and the hypervisor handles the control plane, knows where everything is, and just sets it all up for you. One more question over there.

Audience: If it's all right time-wise, I was wondering if the user VM can actually see the VM-to-domain communication, you know, the communication going between the VM and the network driver domain, for example. Can I run, say, tcpdump or whatever and see any of that communication, or is it all at a lower level where it's obfuscated or hidden?

It is at a lower level. You could insert something in there, but you generally don't expose any of that to the guest at all. If you need to see that stuff, it should be in dom0's area, because if the guest could see it, there would be a number of attacks you could make, and that's a bad thing from a security point of view.

Audience: What about a case where a user VM on host one could contact a domain on another host?

I won't take that one directly. I think it goes more into paravirtualized versus other architectures in general; it would depend on the architectural choices made about whether communication between the various user domains is allowed or not. But across two different hosts, it sounds like it's probably not applicable; maybe chat to Mike afterwards. I don't want to derail the presentation either.
So, I didn't expect to have to talk through this diagram in that much detail; you caught me out a little bit. No, that's okay, it's actually good to have interaction, and we have a few minutes left, but I do have to rush now.

The key point about this slide is that one of the implications of disaggregation is that your trusted computing base becomes a lot smaller, and you get boundaries between the different domains, which gives you quite a lot of security advantages. But Xen also has the concept of Xen Security Modules, and the Xen Security Modules are the Xen equivalent of the Linux Security Modules. Then we have something called FLASK, which is the Xen equivalent of SELinux. That's been developed and is maintained by the NSA, which is one of the bigger contributors to the Xen Project; on average they contribute between 4 and 6% to the project every year. If you want to set it up, it's basically entirely compatible with SELinux: you use the same tools and the same architecture, you just have object classes which map onto Xen interfaces. What that means is that, for example, I can implement a FLASK policy for a network driver domain and say that any network driver domain can only access interfaces related to networking, and I can also say that any guest VM can only perform specific operations when it's talking to the network driver domain. And of course, within that domain I can configure SELinux to tie things down even more. So that gives you the possibility to build a really hyper-secure system, but of course it comes at some cost, mostly in flexibility.
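Mechanically, once Xen is built with XSM/FLASK enabled and a policy is loaded, the labelling itself is just another line in the domain config. The labels below follow the style of the example policy but are meant as an illustrative sketch, not as the exact type names shipped with any particular release.

```
# In an ordinary, unprivileged guest's xl config file:
seclabel = 'system_u:system_r:domU_t'

# A network driver domain could carry its own type, and the FLASK policy can
# then restrict that type to networking-related hypervisor interfaces only:
# seclabel = 'system_u:system_r:nic_dev_dom_t'
```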
So, we've talked about the hypervisor itself, we've talked about XAPI, and we've talked a bit about disaggregation. I wanted to cover the ARM hypervisor quite quickly, and as we're running out of time I'm going to skip through some of this. As you may know, Xen 4.3, which is about to come out, has functionality for ARM-based servers, but it does require the virtualization extensions in ARMv7 and v8. The current port has been validated against the Versatile Express board and the Arndale board, it also works on the Samsung Chromebook, and we have it running on the ARMv8 fast model simulator.

One of the things which is really interesting: I have a bit of history with ARM, I worked for ARM for a while, so this was a little bit of a pet project of mine, to nudge the community into getting started with it. If you look at an ARM SoC, what you have is the device tree, which describes the system; you have a generic timer; you need a generic interrupt controller, version 2; and a two-stage MMU. All of these you typically have on a virtualization-capable ARMv7 or v8 platform. From an architectural perspective, ARM has a number of architecture features related to virtualization: there is a hypervisor mode at the bottom, then obviously kernel mode and user mode, and it has a nice, neat hypercall interface as well. If we now map the Xen components onto that, you have the Xen hypervisor running in hypervisor mode, making use of the generic timer, the generic interrupt controller and the two-stage MMU translation. A regular guest runs in kernel and user space, as you would expect, and the two talk to each other via the hypercall interface. And then we have dom0, which handles all your IO via the PV interfaces. Right now, I think Xen is really the only open source hypervisor which makes use of all of the ARM architecture features for virtualization, and that has a number of really interesting consequences.

If you look at the virtualization modes, I picked out the two optimal ones for x86, PVH and PVHVM, so there we have two different approaches; whereas for ARM we only need one mode, and we don't have that distinction between PVH and HVM any more. A consequence is that it really simplified the code base and the architecture of Xen for ARM. There's a brief table which shows the size of the ARM versus x86 specific portions of the code: for ARMv7 and v8, including 32- and 64-bit support, it's only about 17,000 lines of code. And what's happening now is that some of these lessons are being applied back to the x86 port, to really simplify the architecture there as well in the future; that's going to create some really interesting innovation.

So, 30 minutes. I wanted to talk about the last sub-project a little bit, and to do that I really want to introduce the concept of a library OS first. If you look on the right, you see your typical domain 0 architecture with Xen, and then a guest VM where the block at the bottom isn't Linux, for example; it's basically just a library which your application talks to, and that talks directly to the Xen APIs as well as the PV back ends. There are a number of examples where this has been implemented: there's a project called Erlang on Xen, which takes the Erlang runtime and has it talk directly to the PV and hypervisor interfaces, so you can run Erlang applications directly on any Xen-based platform. That has a number of really interesting benefits: a really small footprint, very fast startup, fast migration of VMs, and it's also ultra-secure, because it cuts a lot of complexity out of your system. That approach has been implemented for Erlang, for Haskell, and for a language called OCaml. Just recently, at the beginning of the year, Cambridge University, which developed one of these library OSes, Mirage OS, approached us and said they wanted to become a Xen incubation project. We went through the community process, there was a vote on it, and they got accepted; they're in beta right now and the first release is on its way. I'm not going to go into this in detail; if you're interested, just check out the slide deck linked at the bottom.

So what's next? Well, Xen 4.3: it's Wednesday today, and just today we went into code freeze. There are a lot of new features in Xen 4.3, and typically it takes the community about six to ten weeks to create a release from the point when code freeze starts. I think there will be a blog post tomorrow, on Friday, detailing exactly what's coming, what's in and what's out; I haven't been able to keep exactly on top of it because I've been preparing the move of Xen to the Linux Foundation. As for what's coming in the longer term, and some of this is already happening: there's a push to establish a shared test infrastructure for the Xen Project, because right now we're in a situation where most of the major vendors and contributors are duplicating some effort.
That's actually also where Mirage OS is really interesting: this whole idea of a library OS, having these sealed VMs, potentially provides a really neat way to test the infrastructure. We're seeing a focus on more usability and better distro integration; there's a project going on right now to get Xen and the Xen API into CentOS 6.4. There's also an increased focus on downstream projects, for example OpenStack and Xen Orchestra, but also other cloud orchestration stacks. I already covered disaggregation. And another thing where I've started to see momentum building is better libvirt and virt-manager integration, which I think will ultimately embed Xen a lot more into the Linux user ecosystem: you wouldn't have to learn anything special, and it would create a much better user story.

So, just to conclude, a few easy ways to get started. We have monthly document days, where the whole community comes together and fixes up documentation; I usually get loads of contributions during those, and the next one is in two weeks. We'll start having test days as part of the Xen 4.3 release cycle; I'll sit down with the team in the next week to start scheduling these around the release. Obviously, development happens via the mailing lists and IRC, and if you're interested, find me and I'll hook you up and show you how to get started easily. We also have a hackathon planned in May in Dublin, where all the core developers come together and start planning the next release, and we'll probably also fix some problems around the existing release, so that's also a good point to get started. And that's it; there are just a few links to resources, and we have a few minutes for questions before the next speaker starts.

Audience: On the features available in Xen, security was mentioned, so I wanted to mention, for those who are interested in using the Trusted Platform Module, that there's a complete implementation of a virtual TPM on Xen. That was done together with IBM and Intel, when I was at Intel actually, and it's fully compliant with the TCG's TPM 1.2 specification. What it does is map your hardware TPM to dom0, and then you can create instances of what we call vTPMs; it kicks off vTPM daemons for each of your VMs. So for anybody who has that level of security requirement, I just wanted to mention that the option is there, and it's an excellent implementation, if people are interested.

Any other questions? Go ahead; I'll find you after. One more question there.

Audience: Going back to the slide where you showed the driver running in dom0, and where the guests have that, what do you call it, the little stub. I'm just wondering, for a given piece of hardware, is the choice of going through dom0 versus the stub either one or the other, or can you have some VMs that go through dom0 and some that use the stub?

I'll let Mike take that.

Mike: There are various options, but it's a tricky question. There are actually some interesting bugs in the Linux PCI stack which create problems around sharing devices securely. There are ways you can share them sometimes, and with things like SR-IOV NICs there are obvious ways of doing that; whether you'd want to is an interesting question. So there are opportunities to do that, and the good thing about the way it's been designed is that you can cut it pretty much any way you want; there are very few constraints imposed by the architecture.
If you want to map three NICs into two driver domains, or whatever you want to do, it's very open; there's very little constraint in terms of the way it's been written.

Looks like we're done. Thank you, thank you for coming, and enjoy the rest of your sessions.