Thanks all for coming, especially if you're not just sheltering from the rain. I think it's stopped, so now's your chance. All right, so I'm here to talk about Xen and its relationship with Debian. Before we get into that, I thought I'd give a brief history of Xen, talk a bit about its architecture, and define some of the terminology that I'm going to be using.

So Xen was spun out of the XenoServer project, which was a research project at the University of Cambridge. They tell you not to read off the slide, but this is from the research project's web page: "The XenoServer project is building a public infrastructure for wide-area distributed computing. We envision a world in which XenoServer execution platforms will be scattered across the globe and available for any member of the public to submit code for execution." So it was spun out into a separate project. Version 1.0 was in late 2003. It went fairly quickly to 2.0, which was a major rearchitecting, and then after another major rearchitecting we had version 3 towards the end of 2005. That's the architecture we still use today; we've gone past 4.0, but the compatibility and the architecture remain the same. Currently we're at Xen 4.1, and we're frozen for 4.2.

So, some basic Xen concepts. Xen is essentially what we call a type 1 hypervisor, which means it runs directly on the hardware, as opposed to a type 2 hypervisor, which runs inside a host operating system. Now, there's a little twist with Xen: it's not quite exactly a type 1, because rather than having all the device drivers and what have you in the hypervisor itself, we have one or more privileged domains which are able to see the hardware. You run your drivers in those, and they use those to provide services to the actual guest VMs. Typically, in a normal Xen system, you have one such privileged domain, which we call domain 0 or dom0. This is the first domain loaded at boot, and it contains your NIC drivers, your storage drivers, the toolstack, host console access, and things like that. And as well as the control domains, you have your actual guests, which is where your customers or your users run their workloads. One of the interesting things about Xen's architecture is that you can actually split up this privileged domain and take services out of it into a variety of different types of service domains (driver domains, stub domains, all terms we'll come across in a bit), which lets you deprivilege those pieces down to their minimum privilege level. That gives you good properties for robustness, isolation and security.

Hello. What do you think? How's that? Yeah, so that's that. So, guest domains. There are basically two forms of guest domain. The first, and the longest-standing, is a para-virtualized domain, or PV domain. These have been around... that slide's gone all horribly strange, hasn't it? These have been around since basically the very early days. The key thing about a para-virtualized domain is that the guest knows it's running virtualized, and rather than doing things directly with the hardware, it makes hypercalls to the hypervisor and does things in a virtualization-friendly way, which means it can be pretty fast.
The disadvantage, though, is that you have to modify the guest kernel, and that's a lot of work and means you have lots of patches to get upstream, which is something we'll talk about later. So a PV domain has access only to para-virtualized devices, and the way those work is that you have a front end in the guest and a back end in the control domain, and they communicate over shared memory. There's a PV block protocol and a PV network protocol, and data gets transferred to the back end, which then bridges it onto a real NIC, or writes it to an LVM volume, or does whatever the appropriate back-end thing is.

So, a little about driver domains. As I said earlier, you can take functionality out of dom0 and put it into its own domain. One of the easiest things you can do is take drivers out of dom0 and put them into their own domain. So you might take your disk or your network and put them in a domain, and then the guest, instead of talking to dom0, talks to this driver domain. That gives you good security, isolation, all that kind of good stuff. It also means the driver domain doesn't even have to run the same kernel as dom0. So maybe you like the BSD pf firewall and want to run that instead, while keeping dom0 as Linux; you have that option. Or maybe you've got a bit of a shonky driver from some vendor that crashes a lot, so you can put that into its own domain and restart it without having to take the whole host down, which is a pretty good thing.

The other form of guest domain is an HVM domain, a hardware virtual machine. These use the hardware virtualization extensions, which para-virtualized domains don't require, to provide a complete PC-emulation style of virtualization. The guest operating system thinks it's running on a normal PC. There is emulation here, so it is slower; mainly I/O is pretty slow. But the advantage is that you don't need any special knowledge in the guest kernel; it doesn't really need any special code. For the purposes of I/O emulation, when the guest tries to do some I/O, we trap that in the hypervisor and shuffle it off to what we call the device model. The device model is a process per HVM guest domain which runs alongside your control stack. It's QEMU-based, basically QEMU with all the CPU emulation ripped out, so it just emulates NICs, PCI buses, storage controllers and what have you. But there's another opportunity for disaggregating here: you can take that device model and link it against a thing called MiniOS, which is a little monolithic kernel that runs directly as a PV guest on the hypervisor, and you can have one of those per domain. That takes all of that emulation code, which is notoriously tricky and prone to bugs, and sticks it into a domain which is only privileged to act on its partner guest domain. So that encapsulates that privilege out of your main toolstack.

So I said there were two kinds of domain; that's a bit of a lie. There's actually a sort of spectrum, and one of the things we have is PV-on-HVM. That's taking a standard HVM guest and giving it the ability to use several PV interfaces. It gives you a mix-and-match of the advantages of both the other types: you get the same install experience as native, with what looks like ordinary PC hardware.
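Just to make the split-driver model described above a little more concrete, here is a toy, purely illustrative Python sketch of the shared ring a PV front end and back end communicate over: the front end publishes requests, the back end consumes them and publishes responses, and each side only ever advances its own index. Real Xen rings live in a shared memory page and use event channels for notification; none of the names below are the real interface, this is just the shape of the protocol.

```python
RING_SIZE = 8

class ToyRing:
    """Toy model of a single-producer/single-consumer PV I/O ring."""

    def __init__(self):
        self.slots = [None] * RING_SIZE
        self.req_prod = self.req_cons = 0   # requests: front end produces, back end consumes
        self.rsp_prod = self.rsp_cons = 0   # responses: back end produces, front end consumes

    # Front-end side: publish a request if there is a free slot.
    def push_request(self, req):
        if self.req_prod - self.rsp_cons >= RING_SIZE:
            return False                     # ring full, wait for responses to drain
        self.slots[self.req_prod % RING_SIZE] = req
        self.req_prod += 1                   # real code would now kick an event channel
        return True

    # Back-end side: consume outstanding requests, do the real I/O, publish responses.
    def service(self, do_io):
        while self.req_cons < self.req_prod:
            req = self.slots[self.req_cons % RING_SIZE]
            self.req_cons += 1
            self.slots[self.rsp_prod % RING_SIZE] = do_io(req)
            self.rsp_prod += 1

    # Front-end side: collect completed responses.
    def pop_responses(self):
        out = []
        while self.rsp_cons < self.rsp_prod:
            out.append(self.slots[self.rsp_cons % RING_SIZE])
            self.rsp_cons += 1
        return out

ring = ToyRing()
ring.push_request(("read", 0, 4096))            # e.g. a block-read request from the guest
ring.service(lambda req: ("done",) + req)       # back end "performs" the I/O
print(ring.pop_responses())                     # front end sees the completion
```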
I mean, the main thing, if you're not going to do any other PV in your HVM guest, the thing you really should do is PV device drivers for your NIC and your disk. We're talking gigabits per second instead of megabits per second on your NIC if you use PV rather than emulated. There are some other ones. You can have PV interrupts, which are designed to avoid exiting back to the hypervisor in order to do EOIs. PV spinlocks mean you're not spinning waiting for another CPU which isn't even running; you can actually sleep instead. So PV-on-HVM gives you an interesting set of trade-offs on the spectrum between PV and HVM.

An interesting one of those is memory. On PV, one of the main things you do to para-virtualize a guest is have it drive the real page tables directly, via hypercalls. That means page-table updates are relatively expensive, but things like TLB misses run at native performance. Whereas if you've got nested paging, where on HVM the hardware provides the illusion of a second level of page tables, then page-table updates are actually pretty cheap, but TLB misses are very expensive, because for every level of the guest page table you have to walk another page table in the second-level paging. With four levels on each side, each of the four guest levels needs a four-step nested walk plus the access to the guest table itself, and the final guest-physical address needs one more nested walk, so 4 × 5 + 4 = 24: a TLB miss can be something like 24 memory accesses instead of four. Something stupid like that.

Okay, so I mentioned that for PV you need to modify the guest kernel. Originally what we had was what we call the classic XenoLinux port, which was a very heavily modified Linux kernel: basically you rip out the MM subsystem and replace it with hypercalls. But that gave you a compile-time choice for a kernel to either run on Xen or run on bare metal, which is all fine, but for distros it's not that great. You have to have two kernel packages, you have to have special flavours, you have extra QA, extra testing, and it's confusing for your users because they need to figure out which kernel they want. Also, when someone tried to upstream this to the Linux kernel, the kernel maintainers said no. Quite rightly, I think.

So later on, around 2006, came the idea of what's now called paravirt_ops, or pvops. The idea here is that you take many abstractions that already exist inside the kernel, and add some new ones, in order to allow boot-time selection of whether to run PV or native. So there are hooks into the existing APIs for interrupt handling, so we plug in the Xen interrupt handler at boot time instead of the APIC interrupt handler, and new ones were invented for doing MMU updates via hypercalls instead. They can get swapped out at boot time, and one of the goals was that it wouldn't perform any worse on native when running with this configuration option turned on. So there's some quite clever patching in the kernel: basically it's an indirection via a function pointer, and for hot paths the kernel will patch the actual instruction that does the work into the five or nine bytes at the call site, and thereby avoid a lot of that indirection. There are some incredibly complicated macros which avoid GCC spilling registers and clobbering things that you don't want it to. But yeah, the goal of no performance loss on bare metal is pretty much there.

So pvops: the domU work started around 2.6.22, and 32-bit became usable around 2.6.24. In 2.6.27 domU support was completed for 64-bit, and then 3.0 is when we eventually got dom0 upstream.
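As a rough userspace analogue of the paravirt_ops idea just described, the sketch below keeps a table of function pointers for a low-level MMU operation and fills it in once at "boot", either with a native implementation or with one that would make a hypercall. The function names and the table are illustrative only; the real hooks live in the kernel's paravirt code, and hot call sites get binary-patched, which this obviously doesn't show.

```python
# Illustrative-only stand-ins for the two implementations that could back the hook.
def native_set_pte(ptep, pte):
    print("native: write %#x straight into the page-table entry at %#x" % (pte, ptep))

def xen_set_pte(ptep, pte):
    print("xen: issue an mmu_update-style hypercall to set %#x at %#x" % (pte, ptep))

# The ops table; the compiled-in default is the native implementation.
pv_mmu_ops = {"set_pte": native_set_pte}

def setup_paravirt(running_on_xen):
    # Done once at boot; after this, callers just go through the table.
    if running_on_xen:
        pv_mmu_ops["set_pte"] = xen_set_pte

setup_paravirt(running_on_xen=True)
pv_mmu_ops["set_pte"](0xDEADB000, 0x1234)
```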
There are some other operating systems that have been para-virtualized; NetBSD and FreeBSD are the big two that are still going today, but there's also, I think, a Hurd port to Xen. One of the side effects of the PV device stuff is that you only need one set of drivers for your guests, so you're not forever porting drivers to the Hurd: you run a Linux dom0 with a Hurd domU in it. It's kind of cool.

All right, so that's roughly what Xen is; hopefully that defines the terminology you need. So let's talk about how Debian and Xen fit together and what they've been up to. Xen arrived in Debian pretty early in its life. Adam Heath packaged it; the earliest I could find was 1.2 in the changelog in March 2004, which is not long after the 1.0 release, and I think he actually had versions available quite a bit before that. Really, though, it was when Julien Danjou uploaded version 3 in 2006, and etch was the first release that really contained Xen support, both dom0 and domU. Guido and Bastian have been the maintainers since then, and mainly it's Bastian these days.

Right, so Debian as a guest. In etch we had a special kernel flavour which had the XenoLinux patches applied, so you had a linux-image (well, kernel-image back then) with a -xen suffix. Installation: a virtual machine root filesystem is a lot like a chroot, so you would use debootstrap. Unlike a chroot, you do have to set up fstab, and you have to set up consoles and networking, so there's a little bit more to it than that. So somebody, I don't know who, wrote xen-tools, which is a bunch of scripts to help manage this: they run debootstrap, they tailor the resulting filesystem, they help manage your LVM volumes, they output the necessary config file. People still use that today; it's a really useful, quick way to deploy a Xen guest.

Then in lenny we got our first pvops kernel for i386. Lenny had a 2.6.26 kernel, so the 686-bigmem kernel flavour was enabled for Xen support out of the box; 64-bit support didn't arrive until 2.6.27, so lenny still had a classic XenoLinux flavour for amd64. But because the standard kernel now supported Xen, we could use debian-installer to install guests, which gives you much the same experience you'd get on native, and you can preseed and set things up the way you like. Obviously debootstrap and xen-tools were still available.

In squeeze we got pvops (squeeze was 2.6.32, I think) for both amd64 and i386. We did some more work on d-i, so you can install both 32-bit and 64-bit. We added netboot CD images, and the multi-arch DVD images now support Xen, so if you're offline and you want to install a Xen guest, you can boot from one of those and install in the normal way. Debootstrap and xen-tools are still available. For wheezy, that's a 3.2 kernel: a newer, more featureful pvops kernel, basically. The other interesting thing was that someone noticed the Blu-ray image didn't have any Xen support on it, so we added that.

All right. So, Debian as a host. Dom0 is really just a special kind of domU with a little bit of extra code, and basically it lagged about a release behind. We had a XenoLinux flavour in etch, as I mentioned. In lenny we still had a XenoLinux flavour, a special flavour, for dom0 usage.
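As a very rough sketch of the debootstrap-based guest setup described above (the sort of thing xen-tools automates with xen-create-image), the following assumes a root filesystem has already been created and mounted; every path, the volume group, the kernel version and the config filename are placeholders rather than a recommended layout, and it would need to run as root.

```python
import subprocess

target = "/mnt/guest-root"                 # hypothetical mount point for the guest's root LV
mirror = "http://ftp.debian.org/debian"    # any Debian mirror will do

# Populate the filesystem, roughly what xen-tools does for you under the hood.
subprocess.check_call(["debootstrap", "wheezy", target, mirror])

# Unlike a chroot, a guest needs its own fstab (plus console and network config).
with open(target + "/etc/fstab", "w") as fstab:
    fstab.write("/dev/xvda1 / ext3 errors=remount-ro 0 1\n")

# A minimal PV guest config file the toolstack can boot; filenames are illustrative.
with open("/etc/xen/guest.cfg", "w") as cfg:
    cfg.write('name    = "guest"\n'
              'memory  = 512\n'
              'kernel  = "/boot/vmlinuz-3.2.0-4-amd64"\n'
              'ramdisk = "/boot/initrd.img-3.2.0-4-amd64"\n'
              'disk    = ["phy:/dev/vg0/guest-root,xvda1,w"]\n'
              'vif     = ["bridge=xenbr0"]\n'
              'root    = "/dev/xvda1 ro"\n')
```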
Then in squeeze we switched to a pvops-based kernel for dom0, but it was still a separate kernel flavour rather than the mainline one. And finally in wheezy there are no more kernel flavours, and the main standard kernel packages are where the Xen support is, because 3.2 supports dom0 out of the box, and that was quite an achievement for us.

Okay, so where are we now? We've spoken a little about wheezy in the previous section, but for wheezy it seems pretty certain, now that we're frozen, that we're going to be shipping Xen 4.1, which is the current upstream stable release. As I say, there are no more Xen kernel flavours; we're using the pvops code throughout. That's really good; it means there's much less overhead for the kernel team to keep that stuff going. It means I just fix a bug upstream, CC it to the stable kernel maintainers, the patches flow, and eventually Debian gets the fix, which is really nice.

One other big thing that's changed in wheezy is that we now ship XCP's xapi toolstack as part of wheezy. If you weren't in Tamar's talk earlier, you're probably asking yourself what all this means, all these three- and four-letter acronyms. So XCP, the Xen Cloud Platform, is an appliance virtualization solution. It ships basically as a CentOS-derived installation ISO: you bung it into your machine, you hit go, it chugs away, and out the other side comes a host capable of running virtualization. It's based on CentOS, and maybe I'm preaching to the choir, but I think we all know that CentOS isn't as good as Debian, so that's not really great as it stands. So Project Kronos, some guys at Citrix, decided they wanted to split out the toolstack used by XCP, disentangle it from its XCP roots and its CentOS roots, and make it into something that as a project could be shipped and packaged by any distro, and they initially did their work targeting Debian. We're all big fans of Debian at Xen.org. So the goal was to be able to apt-get install the XCP xapi packages on wheezy and turn wheezy into what looks like an XCP host, but running Debian and not CentOS. And that's pretty much there; that works today. I guess I should big up Mike McClurg, Thomas Goirand and Ritesh Raj Sarraf, who did a lot of the work on this, and Jon Ludlam as well.

Now, you might be asking yourself why we would want this. Xapi, the toolstack in question, supports an XML-RPC interface, quite a rich, powerful management interface for Xen hosts. It was designed to be pretty programmable, and there are bindings for lots of languages. That API is the preferred API for various cloud management stacks: OpenStack, CloudStack, and OpenNebula, I think, use it. There are other cloud management layers; you're going to hear about Ganeti later, which I think doesn't use this interface, it goes in at the lower-level interfaces. But there are plenty of stacks out there that do use it. So by supporting this in Debian, it means that not only can Debian be a good Xen hosting platform, it becomes a really useful building block for your cloud infrastructure. And building a completely free cloud infrastructure is, I think, something to strive for.

So, the future. That's kind of where we are today; now, what's happening as we freeze, and freeze some more, for wheezy, and what's going to happen in the next release. Hypervisor-wise, we're continuing to track upstream releases in sid. The pkg-xen project on Alioth has 4.2 snapshots in it; I don't think they've been uploaded even to experimental.
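To give a flavour of the xapi XML-RPC API mentioned just above, here is a minimal sketch using the XenAPI Python bindings that ship alongside xapi to list the guests on a host and their power state; the hostname and credentials are obviously placeholders, and error handling is omitted.

```python
import XenAPI  # the XenAPI Python bindings shipped with xapi / XCP

# Placeholder host and credentials; point this at your own xapi host.
session = XenAPI.Session("https://xcp-host.example.org")
session.xenapi.login_with_password("root", "secret")
try:
    for ref in session.xenapi.VM.get_all():
        rec = session.xenapi.VM.get_record(ref)
        # Skip templates and the control domain; list real guests and their state.
        if not rec["is_a_template"] and not rec["is_control_domain"]:
            print(rec["name_label"], rec["power_state"])
finally:
    session.xenapi.session.logout()
```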
Going forward upstream, we're just starting to think about the Xen 4.3 release. There's going to be a Xen Summit in late August, co-located with LinuxCon, Plumbers and Kernel Summit; everyone's kind of descending on San Diego in late August. So if you're there and you're interested in Xen, you should come along. One of the big things happening from 4.2 onwards upstream is the transition away from the old xend toolstack, which is frankly an unmaintainable mess that no one wants to go near. We've been working on a replacement toolstack with a nice, clean architecture that we can actually maintain. If you look in the Debian bug tracker, you'll find there are tons of bugs that are really xend bugs, and there's nowhere to send them, because nobody wants to maintain xend, upstream or in Debian. Hopefully, by having something that's maintainable, the new toolstack will allow those sorts of things to get fixed.

Also in the future: better documentation. It's always something everyone wants and very rarely something you get. There's a pretty good wiki page on Debian; the Xen wiki has a category of Debian-related stuff: how to install a host, how to install a guest, how to do this, how to do that. And upstream has regular documentation days, on the last Monday of every month: we all down tools on our compilers and edit wikis for a day. You make quite a lot of progress that way; it's surprising.

Kernels. All I've really got to say about kernels is that we get a lot of support from upstream now, so there's nothing really special, no special flavours, and less work for everybody. There are some other kernels in Debian that I think could have PV support fairly easily; kFreeBSD's upstream does. So if there's anybody who's interested in that, or in the Hurd, in making that stuff work well in Debian, that's something I'd be really keen to talk to you about. That'd be really cool.

So, xapi. The XCP xapi thing works today. It's quite a new project, and unwinding it from its history is still ongoing. I'd encourage everybody to try it and report bugs in the usual way, with reportbug, or to the packaging team on Alioth. There's also an upstream wiki page on the sorts of information that are useful in these kinds of bugs. Going forward, there's more work to be done to separate xapi a little more from XCP and integrate it better with Debian. I believe there are some limitations in the current thing. There's plenty of work to go on there, and I bet Tom would love it if you came and talked to him and offered to help out and what have you.

So, guest support. Something I personally want to work on is integrating PVHVM support into the installer. With squeeze and wheezy you can do an install as an HVM guest from the usual media, and then you can mess around, install special kernels, tweak stuff a bit, and get yourself PVHVM on reboot. It's all made quite easy; we default to using UUID-based mounting and things, so the fact that your device names change under your feet doesn't cause as much trouble as you might imagine. In fact, I think this, by coincidence, works with the wheezy amd64 kernel today; it's just a side effect of the pvops thing. It just needs tidying up, the rough edges filing off, and making it, you know, properly automatic. And, as I say, there are trade-offs between PV and PVHVM.
So depending on your workload, you might want one or you might want the other, and we should try to make both available to our users, I think. There's another interesting thing coming down the pipeline from upstream, which is what we call hybrid guests. If I described PVHVM as adding PV features to HVM guests, then hybrid is kind of coming at it from the other end: you take a PV guest and enable the use of more hardware features, again to get a good mix of the best of both worlds. The reason this is interesting compared with PVHVM is that it takes QEMU completely out of the picture, and emulation completely out of the picture. And that removes a whole load of code, which is always good. There have been some initial prototypes running hybrid as both dom0 and domU. I expect that would land in 4.3, which ought to mean it would be ready in time for wheezy plus one.

So, disaggregation. Essentially, Debian does none of this today, so there's plenty of room for improvement. I think a really easy one would be to make it much easier to set up a network driver domain with Debian. If you could install a Debian VM, apt-get install the network backend or something, and fettle around a little bit inside your dom0 so that guests attach to that domain instead of to dom0's usual bridge, that would be most of it. It's a bit of a harder problem if you're talking about storage: obviously, booting from a thing that you later want to shove into a driver domain is pretty tricky, but by no means impossible, and these sorts of domains could generally run from RAM for the most part. We have initramfs generators; maybe that's an interesting avenue to approach that in the future.

A more difficult problem is the MiniOS-based stub domains. MiniOS is this monolithic, single-application kernel that we use: MiniOS plus newlib plus the application get linked into one blob and run as a single-address-space, kernel-mode thing. It doesn't really fit into the usual distro model, so I don't really know how to approach this. If anyone has any good ideas... maybe multi-arch; we talk about partial architectures, so you'd be talking about maybe what you might call a pico port: half a dozen libraries, enough to link QEMU against, in a cross-buildy kind of way. Maybe, I don't know. If you have any smart or cunning ideas about that, I'm very much all ears.

Okay. Another new thing we've got coming upstream is an ongoing new port to ARM. There have been ARM PV ports in the past, but here we're targeting the new virtualization extensions which were announced for the v7 architecture and, going forward, the v8 stuff that Steve was talking about yesterday. Currently, I don't think you can buy one of these; I've had some leads this week about places where I might find one, but for now we're targeting the Fast Models emulator, which Steve demoed yesterday. The v7 Fast Model is actually pretty fast and quite usable. Eventually, though, we're going to be targeting the Cortex-A15. Because these processors have virtualization from the beginning, we're going directly to the hybrid-style approach. One of the big problems getting the Xen code into the upstream kernel was the MMU stuff.
So the fact that we've got nested paging available in the processor from day one means you kind of skip that whole section. It was lucky that, when we started doing this, the Linux ARM people were just busy discovering that all these multiple kernel images were a pain, and so they've been moving to device tree, and we figured we ought to learn a lesson from that and try to get things right from the beginning. So, just before I came away, we booted our first guest from a PV disk to a console, which was quite exciting. There's still lots to do, lots to clean up and what have you, but I kind of hope we can get this done upstream in time to hit wheezy plus one. Debian has a long history of good ARM ports, and it's got a long history of good support for Xen, so it seems like a bit of a no-brainer to me. I think in the short term that's mostly going to be upstream work; xen-devel will be the place to come, or come and speak to me later. I'm particularly interested if anyone knows much about UEFI and that sort of thing on ARM, what's going on there, how we're going to boot these things, and all that good stuff.

So, hopefully you've seen that Debian is an excellent distribution if you're interested in Xen. It's been one of the most consistent in terms of support for Xen over the years; it's had a pretty good story. But also, I think there's a good opportunity for us here to become a leading cloud infrastructure operating system. There's a whole layer there that Debian could fill really well; Debian is really good in the data centre, and there's no reason why it couldn't be an excellent choice for the cloud as well. But there's plenty of other interesting stuff too; if there are any other projects I've talked about, I'm more than happy to wax lyrical. So that's it. Here are some places where people hang out in Debian and Xen land; if you want to come for a chat or ask questions or whatever, you can either do that now or come and find us in any of those places. Any questions?

Hello. We have one question from IRC, from Indy: have you used the Remus high-availability system recently, and do you think it could be supported in wheezy plus one? So, for those who don't know, Remus is a high-availability project based on Xen from the University of British Columbia, a couple of guys there. It's basically rolling checkpoints: you do continuous live migration of the guest to a remote site, and you fence the externally visible I/O, so you don't commit disk writes or transmit network packets until the checkpoint has been acknowledged. So if your site goes down, the other site can come up. So the question is, have I ever used it? The answer is, I think I've run it once, when we integrated the patches for it into the new toolstack. At the moment I would say it's sort of proof-of-concept in the new toolstack; it's been well supported in xend for quite a while. It's not in itself a complete solution: you need to build an HA failover and election system around it to actually do the failover in a safe way. You need to make sure the source really has died before you start doing stuff at the destination. But yeah, I think wheezy plus one would be eminently doable for anybody who wanted to do that.

There's another one on IRC, another question, from DemonKeeper: with Xen 4.2 arriving and the decision to phase out xend, what's the future of the Xen API? Is there an xl transition plan in the works? What's the future of what, sorry? Xend.
Xend, so, xend is effectively unmaintained. If someone wanted to step up and maintain it, then I guess we'd be happy for them to do so. It's truly horrible internally. It started off as a Twisted thing, then it got un-Twistified, then it got this XML-RPC thing and kind of got half turned inside out, and then that guy wandered off, so maintenance-wise it's not ideal. So, 4.2. xl is a command-line-compatible replacement for xm. So if you have a script that says "xm create" or whatever, you should be able to write "xl create", and we would consider it a bug if you couldn't. So it should be fairly easy to... Sorry? Almost. Yeah, it's nearly there; more on this in my talk about compatibility nightmares, which is up next. Yes, so Ian has got some stuff to say about this later. So there is an initial version of xl going to be in wheezy, which doesn't quite meet these goals. It's not yet the default upstream, and it's not yet the default in the packaging, but you can switch to it and use it; I don't know, maybe it's 80% compatible rather than the 90-odd we're aiming for. But I would think that upstream 4.3 will have completed that transition.

So, the question that was relayed from IRC was actually about the Xen API, not xend. The Xen API? Yeah. Ah, okay. So xend supported an initial version of the Xen API interface, but never particularly well, and it's not really been maintained for several years. Xapi, on the other hand, has been. That is the maintained, supported, useful way to get a remote API protocol. So basically xend's is sort of a 0.something version, and xapi these days is the 2.0, 3.0 thing.

Yeah. So you said xl is almost compatible with xm. It should be. What's not there? So the big thing that's missing, deliberately, is support for xend's managed domains. Xend kind of supported two ways of creating a domain. There was this idea of "xm create", where you just give it a config file, and this thing comes into being, and when you destroy it, it's gone; it's kind of ephemeral, although you keep its disks. But it also had this idea of "xm new", where you kind of introduce the domain, and it has a life cycle, and you can start it and stop it. And so that's gone. There are other ways of doing this. There's actually an init script that makes a pretty good effort at starting domains at the start of day, which, for some people, is all they want out of managed domains. Xapi is really the excellent option for that kind of functionality; if you want it, I'd recommend that. Or, if you're so inclined, libvirt and the associated tools do a similar thing, and they support that. Libvirt is Red Hat's management stack.

Will there be a wrapper, so that when we use xm it calls xl instead? Because that would be a very good way to keep compatibility. So I think Bastian has made a wrapper now in wheezy. I think it's just called xen, rather than xm or xl, and it will call the appropriate toolstack at the appropriate time. You can configure which toolstack you want to use in /etc/default/xen. And in wheezy plus one, I should imagine the default behind that script would turn to xl from xm. And, yeah, with time, xend will eventually die and be removed upstream. You could also make a symlink. Or run sed on your scripts. Hello. So you just touched upon libvirt, but didn't really mention it in the rest of the talk? I realized that as I said it.
Is that because it's a Red Hat thing and more tied to KVM, or is there any commitment from Xen to actually be involved in making sure libvirt works well with Xen? So, one of the big upstream contributors is SUSE/Novell, and they have a bunch of guys working on a libxl backend for libvirt. One thing I didn't mention is that xl is kind of a toolstack, but there's this thing called libxl, which is a library for writing toolstacks. The intention is that xapi will use it, and xl will use it, and libvirt will use it, as their kind of backend, because at the moment, with xapi, there's a bunch of duplicated code for building a domain and migrating a domain, and that should be the same stuff. So the target there is for libxl to be that common layer, and this is something Ian will be talking about in his talk. Is that what Ian was going to say, or do you have another point? One of the things we've been doing in 4.2, Xen 4.2, is making some significant improvements to the infrastructure in that libxl library, with a view to libvirt being one of the main consumers of it. So certainly upstream, I think it's fair to say, is if anything much keener on libvirt than Debian is. We see it as, you know, certainly another way that people can use Xen, and we have no intention of deprecating it or anything. Yeah, so, libxl in the 4.2 release: we're committing to keeping that API stable, almost precisely so that people like libvirt can continue to consume it as time goes on. And as Ian says, there's a whole bunch of infrastructure in libxl for doing event-driven toolstacks, which, if you look at the libvirt event-driving model and the libxl one, fit together with a couple of thin shim functions. Any other questions? It looks like we have only a couple more minutes. All right, well, thanks very much. Thank you.