Let's start. Okay, ten minutes, we'll do it very quickly. My name is Jens, I work for Red Hat on virtio. I just want to give an update on what's happened since last FOSDEM, what we've been working on. We'll skip this slide; everybody knows what virtio is, right? Last year I was here and talked about the virtio 1.1 specification, and went into more detail on what we did for the new feature called packed virtqueues, among other things. You can go and watch that. Now I want to talk about what we did for the virtio specification in the meantime, and later in my talk I'll go over some other things that happened in the virtio space.

So, virtio 1.1. Virtio is standardized under the OASIS umbrella, and for version 1.1 we're focusing on performance improvements, but also on making it easier for hardware vendors to implement virtio in hardware. There are a few new feature bits for that. Other things changed in virtio that are not related to hardware, and we have some new virtio devices that I will talk about. Lots of companies, vendors and people in the community contributed to virtio, and the current specification is under final review for 30 days, maybe a little less now. People can look at it now, and after this period it will be published.

So what did we do for hardware support? Last year here I talked about packed virtqueues and how they are a simpler way of implementing virtqueues. We went from having two ring buffers and many other data structures in shared memory to one ring buffer. That makes it more cache friendly, but also more friendly for hardware implementations, because there is only one place in memory they have to look at to see if new descriptors are available, things like this. We also introduced some feature bits. One is basically for memory barriers.
Hardware devices have different requirements for memory barriers than a software-implemented device has, so there's a feature bit for that. If the device turns it on, that means the driver has to use different memory barriers. There's a feature bit for restricted memory access: on some platforms the device cannot access all of memory, for different reasons. There could be an IOMMU, or some other address translation from bus addresses to physical addresses, something like that. So this is a way for the device to say, hey, I'm restricted with respect to memory, and then platform code has to handle this kind of thing. There's another feature bit for enabling notifications only for specific descriptors. So in our descriptor ring, I can say I want a notification when descriptor X becomes available; only then is a notification sent. And another thing is that we can attach additional data to notifications, things like the latest available descriptor, the latest used one, and in the case of packed virtqueues also the wrap counters. I talked in more detail about hardware accelerators and what we did there at DevConf; the slides are online and I think there will be a video as well soon. If you're interested in that, you can go and look at it.

Another thing we're working on is the virtio failover device. This is basically automatic failover for networking in the guest, in a VM. You have three devices: a failover device, a primary device and a standby device, and the failover device handles switching from one to the other. The primary device could be a passed-through SR-IOV device, so you have a very fast datapath, and then you have virtio-net as the standby device. This is nice for a few things. One is that you can do hypervisor-controlled live migration.
You basically unplug the fast SR-IOV device and it automatically switches over to the virtio-net path, which is slower, but you can use the virtio live migration framework to migrate to a target system, and there you can plug in another fast SR-IOV card for fast networking. The guest part of this is upstream. In QEMU we're still working on it; there's still some discussion about different approaches, whether to involve a management layer or not, things like this, so it's still work in progress.

We have some new virtio devices that were worked on during the last year. We have virtio-iommu. We have virtio-crypto, which basically lets your guest make use of a crypto accelerator in the host. We have virtio-vsock, which is basically for guest-to-host communication via a socket: you open a socket with a new address family, and you can use that to implement guest agents or services in the hypervisor, things like this. Virtio-gpu we had before, I think, but now it has 3D support: it basically pushes graphics data down to QEMU, which translates it to OpenGL and then passes it to the GPU. The virtio-mem device is still being worked on; it's for memory hotplug for the virtual machine, and it's basically a unified approach to handle all of this in QEMU, handle different page sizes, support NUMA, things like this. Virtio-balloon was there before, but it has a new feature for free page hinting: the guest can report to the hypervisor, hey, these pages are free, and one thing this could be used for is faster live migration. There's virtio-fs, still in progress, to share files or folders between guests and the hypervisor. And then I also heard someone is working on virtio-audio, but there are no patches yet, so we'll see if that ever shows up.
Virtio hardware accelerators: so I talked about what we did for the specification, and there's actually one device that supports this now, from Intel. It was announced last year at the Open vSwitch conference. It's an FPGA-based card; it's very powerful, and one of the things it can do is virtio-net offloading in hardware. And not only for the new virtio 1.1 packed virtqueues: it can support virtio back to 0.95, I think.

There's also work going on on the software side. Intel has been doing a lot of work here. They basically implemented a new framework so that you can make use of the hardware card with the existing virtualization stack. They have patches for QEMU, and they introduced a new mdev-based driver in the kernel to have a generic device interface that works not only with the Intel card but also with other accelerator cards. The basic idea of this framework is to decouple the datapath and the control path, so that the datapath is passed through to your VM while the control path still goes via QEMU and the VFIO interfaces. So you have the advantage of a very fast datapath, with passthrough-like performance, but features like live migration can still work rather easily. There was a talk about vDPA at KVM Forum last year that goes into much more detail than I can now in my ten-minute slot, so this is something I will not cover now, and you can go and look at it.

Coming to my summary: there's the new virtio 1.1 spec. The final review is only open for a few more days, so if you're interested in this then go and look at it now. If you don't have time now, we can always add features later. It has a lot of changes, especially for hardware accelerators, but also for lots of other things that I didn't cover now. And what I think we will see in the future is more work on page hinting, more hardware implementation features, and I also think there will be work on vDPA for containers.
At the end of my talk: we have a monthly meeting that we do on the phone. There are mailing lists and everything, of course, but if you, for example, work for a hardware vendor and are interested in implementing virtio in hardware and have specific questions, you can also join us in this monthly meeting that we have. The next one will be February 13. Just contact me and I will put you on the invitation list. I think that's it. Thank you.