So, for those of you that are doing high-performance workloads, the aspect ratio is a bit off, but we'll live with it. Okay, thank you. So, my name is Mark Baker, I'm part of Canonical, I work on the product team there with a special interest in OpenStack. My name is Vincent, I'm from 6WIND, and I work a lot on packet processing. Good. So, given we're a little behind, we'll have to go fairly quickly, so please bear with us, but afterwards please come and ask questions if you wish.

So, in terms of the layers we need to operate and optimize: hardware, of course. I'll show you in a minute how we engage with hardware — optimizing hardware from different types of vendors, taking advantage of particular accelerator technologies and particular hardware features that are in there. Then there is, of course, the platform. Now, here we're talking largely in an OpenStack sense, but other platforms exist: if you were here in the last session, you saw Kubernetes, the Canonical distribution of Kubernetes, running on top of an OpenStack environment, and you can also run that natively on bare metal. And, of course, we have Canonical OpenStack, one of the most popular, if not the most popular, OpenStack platforms out there today. As you probably know, around 55% of production OpenStack is on Ubuntu, and 65% of large OpenStack deployments are on Ubuntu, so we have the platform to optimize. And then we have the applications, right? Those can be VNFs, that could be data analytics, that could be more traditional-style applications. These are the layers that we need to engage with.

How do we deal with that? The first piece is a technology called MAAS. It's an extremely popular product; it's a bare metal provisioning product that maintains an asset inventory — and I'll go into more detail about that — an asset inventory of the capabilities of all of your hardware, the features and functions it is capable of, plus IP address management and some other pieces. Once we have that asset inventory and we have the hardware being managed and controlled by MAAS, then we are able to do some smart things via the API: give me a machine that has a particular capability, one that matches my workload.

Then, at the platform level, we use a tool called Juju. Juju is a modeling tool that allows us to model complex applications — big software, as we call it — so OpenStack or Kubernetes, Mesos, Docker, lots of other applications that we model. Through tight coupling, we'll show how MAAS interacts with Juju to provide machine resources, in this case physical machine resources, with the capabilities that we have tagged within our bare metal environment.

And then, of course, we have the application environments themselves. Now, if you're a VNF vendor or an application vendor, then optimization of that is really up to you, but you will need to optimize for a general cloud environment, whether that's an OpenStack environment or a container environment. And we feel very strongly, actually, that the platform should remain the same: general purpose, with consistent optimization. Of course we can optimize, but we don't want to be optimizing lower down the stack for particular VNFs, as otherwise we're building areas of isolation. So, then we have the three core primitives.
The things that OpenStack is very good at — other platforms too — is providing compute, storage, and network resources. So we look at those areas: how do we optimize them? And again, not necessarily optimizing for one very specific VNF, but optimizing to ensure that we're getting very fast storage, very fast network throughput, and that we're really able to take advantage of the compute horsepower that exists in the hardware — perhaps by removing layers that introduce latency or performance problems.

One of the ways that we address this performance is by giving you architectural flexibility. There isn't necessarily a one-size-fits-all architecture for your environment, because your environment is your environment: it has its own characteristics, its own network characteristics, its own data center characteristics, and your management tools will have their own properties. And so we, at Canonical, provide you with full architectural flexibility. Via the tooling — and again, I'll show you this in a second — you're able to architect and place services. Architecture is all about placing services, choosing where you place services on the available resources you have. We can architect that, deploy it, do some measurement, do some testing, move services, re-architect, place them in different areas, redeploy — and we can do all of that within minutes. Now, whilst this isn't a scientific-paper level of optimization, it is the optimization that people actually do in the real world every day: try it one way, re-architect, try it a different way, measure the deltas. I'm sure all of you do this on a daily basis in your testing, and it's key for us to make that super easy for you to do. Try it one way, try it another way, measure the difference, understand how an architecture is going to work in your environment.

But we also come with opinions. We're Canonical; we work with some of the biggest customers running OpenStack today — in fact, the biggest customers running OpenStack today — people like Walmart, people like Deutsche Telekom, people like Box. And from working with those people, we get a lot of input, a lot of experience as to what a sane, good architecture is, one that's going to provide availability and performance. We've taken a lot of that knowledge and experience and encapsulated it into a tool called the Autopilot that will deploy, manage, scale, and operate your cloud in a very real way, and do that to a reference architecture. That reference architecture is fully converged, but the important thing for you is that it's not a white-paper-level architecture. It's not a thing that we're handing out down on the booth. It's actually encapsulated in code, because that's where architecture, as software, lives and breathes.

So, we talked very quickly about MAAS. MAAS is Metal as a Service. The reason this is important is that to take advantage of bare metal performance, you have to understand what you have, and you have to have a means of making that easily available.
MAAS, Metal as a Service, is an extremely popular tool used by a great many different customers — as I've already said, Walmart, Best Buy, and Sky, who you saw on stage on Tuesday, for example — running all sorts of infrastructure, not just OpenStack infrastructure, using MAAS. It's bare metal provisioning and dynamic allocation of workloads. It has very sophisticated IP address management — IP address management is a problem for most people, and with MAAS there's no requirement for a separate tool. And the key piece here is that MAAS is not an Ubuntu-only tool. Whilst I'm stood here with an Ubuntu shirt on, running an Ubuntu laptop, and you're all wearing Ubuntu lanyards, we know that there's a lot of CentOS out there, a lot of Windows, a lot of other operating systems. So MAAS, as a general-purpose tool, allows us to manage that hardware inventory and support it with many different environments. Like many of you have probably put these things together using PXE boot and TFTP and other services, MAAS allows us to commission hardware, maintain an inventory of it, bring up an operating system, and allocate workloads to it. In the interest of time, I'll show you that very, very quickly.

So let's get rid of my email. There we go. Here's a MAAS system. It contains a number of nodes, which you'll see listed down here. Some of them are allocated, some of them are deployed. If I go and show you any one of these — fingers crossed that we're still connected to the network — yes, we are. You'll see that we have some machine information here, including the cluster name. MAAS is able to break things into regions and clusters, so we can provide redundancy and start to map our physical resources into our cloud resources very effectively. You see we get a lot of basic information, but if we then scroll down, we'll see the network interfaces, different events that have been associated with the machine, and as I scroll down further and further, you'll see some of the commissioning information — information that we have started to flesh out during that build time. That is what allows us to see what type of NIC this machine contains. How much memory does it have? What type of CPU does it have? Is the NIC capable of SR-IOV, for example, or some of those other network acceleration technologies? Which Vincent will come on and talk about in a second.

So very simply, it allows us to commission and manage this hardware inventory and — the piece I was just about to show you before it flicked back — tag it. Here it's relatively simple tagging: you'll see that we've got tags about which sort of installer to use, and whether it's a virtual resource or actually a real physical resource. And likewise, we could tag it as SR-IOV capable or DPDK capable, for example. If I want to bring one of these systems up — you'll see that some of them are deployed, some of them are allocated already — I can very simply take an action here, go and deploy, it'll ask me some questions, and I'll put Ubuntu 16.04 on, the most current LTS release of Ubuntu, which contains a lot of the technologies that you will want in high-performance environments. Go and deploy that, and off it will go.
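For reference, here is roughly how that tagging and "give me a machine with this capability" workflow maps onto the MAAS command line. Treat it as a hedged sketch: the CLI profile name admin, the tag names, and the placeholder system ID are illustrative only, and the exact subcommands differ slightly between MAAS releases (older versions say "nodes acquire" where newer ones say "machines allocate").

    # Create capability tags in MAAS (names are just examples)
    $ maas admin tags create name=sriov comment='NIC is SR-IOV capable'
    $ maas admin tags create name=dpdk comment='NIC is DPDK capable'

    # Attach a tag to a particular machine by its system ID
    $ maas admin tag update-nodes sriov add=<system-id>

    # Later, ask MAAS for any ready machine carrying that capability...
    $ maas admin machines allocate tags=sriov

    # ...and deploy the current Ubuntu LTS onto it
    $ maas admin machine deploy <system-id> distro_series=xenial

Tags can also be given a definition so that MAAS applies them automatically from the hardware details gathered at commissioning time, which is how "SR-IOV capable" can be derived rather than hand-assigned.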
So maintaining this inventory of hardware, understanding what we have, and then making that available to a higher-level tool to start modeling these complex applications is really the first step in optimizing for performance. That hardware is essentially waiting for instruction — instruction from a higher-level tool. That could be Juju for modeling and service orchestration, but it could also be other tools: you could use Chef, for example, or go directly in via the interface that we just saw.

Problem number two, as we move higher in the stack, is modeling big software. What do I mean by big software? Well, OpenStack is a great example of big software. It is very many different components — six core components, but then a great many other components around them, many of which aren't ready and we wouldn't necessarily recommend for production, but some of which may be interesting to you. The way that you deploy and connect those is too big and too complex for traditional package management. What we need is a system that allows us to model it: not just the deployment, but the modeling, the operations, and the connection of those things. We have Juju; in the last session you will have seen some of that. This represents a service model of what OpenStack looks like. And if I — how are we doing? — if I look through that, we've got a number of the different services, Keystone, Nova, Horizon, et cetera, and the relationships between them, all defined in something that we call a charm. So when we want to deploy and manage an OpenStack environment, we will put this either onto the canvas in the GUI environment that we have right now, or via a command line, or via an API if we wish, and say: go and deploy to this endpoint. An endpoint in this case would be MAAS, our physical environment, which maintains that asset inventory.

But the best way is to go and show you these things. So if I go in here, into OpenStack, you'll see we've got OpenStack bundles — a bundle is a collection of services with the relationships between those services predefined. I'll go and choose one of those, add it to my canvas, commit, and deploy. Now, in the interest of time, I'm not doing this for real now, I'm going on to the backup system, so you can see that — off we go. Boom, boom. And if I zoom out — this is the "here's one I prepared earlier" moment of the demo — you'll see that this is what our OpenStack environment looks like. Now, if I drill into Neutron, for example — sorry, not Neutron, Nova — I can see we've only got one instance of Nova running right now, one Nova host. If I want to scale that up, I can add two units very, very simply, confirm, and add more Nova compute. It's able to do that very simply because we have a model — a model that can manage these systems. You'll see that on one level it's a service model, and on this level we have the physical view: which services are mapped to which machines. This is what allows us to rejig the architecture and test, if we wish. But you'll see that it's also giving us information about the machines, coming up from our MAAS environment, so that we know how much memory they have, the number of cores they have, the number of CPUs they have, the type of disks they have.
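The command-line equivalent of what's being shown in the GUI looks roughly like the following. It's a sketch rather than a recipe: openstack-base is the name of Canonical's reference bundle in the charm store, and the exact bundle and container-placement syntax varies between Juju releases.

    # Deploy the whole OpenStack model in one go from the reference bundle
    $ juju deploy openstack-base

    # Scale Nova out by two more units, as in the GUI demo
    $ juju add-unit nova-compute -n 2

    # Placement is explicit if you want it: pin a unit to MAAS machine 4,
    # or drop a service into a container alongside an existing machine
    $ juju add-unit nova-compute --to 4
    $ juju deploy mysql --to lxd:0

    # Watch the model converge
    $ juju status

Re-architecting is then a matter of removing units and re-adding them with different placement, which is the "try it one way, try it another way, measure the delta" loop described earlier.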
So this allows me — in this case I'm just scaling up very simply — but if I wish, via the use of a constraint (see the sketch below), I can say: bring up a Nova compute host that has X amount of memory and is SR-IOV capable. Because we maintain that asset inventory, and because we have the connections between those systems, this is what allows us to do that and take advantage of it. So we're not necessarily optimizing at a code level and tweaking, which some people may advocate; what we're doing is making it extremely easy for you to take advantage of the horsepower that's there, in a very automated, modeled way. Let me flip back — I want to make sure that Vincent gets his time to talk through the great technology that he has.

So with Juju, in this case, we're modeling applications. If we look at the higher level, how do we then deploy applications into our OpenStack environment, or even into public cloud or containers? Whilst we're talking primarily about OpenStack here, we can do this in other ways too. We're working with a great many of the VNF vendors — people like Affirmed, people like Metaswitch, Xpeto, Huawei, Ericsson, Nokia, for example — to enable their technologies to be deployed, managed, and scaled across these platforms in exactly the same way. And again, we work with them: we have something called VPIL, our VNF performance and interoperability lab, where we work with them to test and optimize their applications for a general-purpose OpenStack cloud. That wouldn't necessarily look exactly the same as your environment, but it gives you a nice benchmark, a nice way to compare. Because when you get to your environment, you're going to want to deploy, re-architect, test, redeploy, and test again, to start to understand how the performance characteristics come through.

The next piece — actually not quite the final piece before I hand over to Vincent — is that one of the key parts of this is that not only are we managing the deployment and managing the relations, the connections between the services, but we're also managing the operations. A simple example here is a set of actions that we have on an OpenStack service — in this case, the Horizon dashboard. These are the things that we want to be able to do: upgrade, for example, or pause whilst we perform some maintenance on the box and then resume — things that we do in very automated ways, through a set of actions. Now, the interesting part here is that the actions are not defined solely by us. Yes, we start some of the work on that, but a lot of the input on the best way to perform these operations comes from our customers. We have had people like Walmart and Deutsche Telekom and Best Buy and Sky and others feeding into this process, telling us how they best run these operations.
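A minimal sketch of what that constraint and those actions look like on the command line, assuming the sriov tag set up in MAAS earlier and assuming the dashboard charm exposes the upgrade/pause/resume actions mentioned above (older Juju releases spell run-action as "juju action do"):

    # Ask for a Nova compute host with at least 64G of RAM that carries
    # the (hypothetical) sriov tag from MAAS
    $ juju deploy nova-compute --constraints "mem=64G tags=sriov"

    # Charm-encapsulated operations on a running service, e.g. Horizon
    $ juju run-action openstack-dashboard/0 pause
    $ juju run-action openstack-dashboard/0 resume
    $ juju run-action openstack-dashboard/0 openstack-upgrade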
So, compute — I'm going to go through this very quickly; Mark covered it in the last session. LXD is a container hypervisor; it's a way of managing machine containers. This is a container that looks just like a VM: a full Linux environment, not a process container like a Docker container, which is very much dedicated to one particular task. We've seen great traction with this for running containerized machine workloads, and it's super fast — we're seeing super fast start times and 15 times the density, so much greater density and therefore efficiency for you operating them. We have numerous different ways to interact with LXD: via our REST API, or via the OpenStack Nova scheduler, for example. Ooh, what's happening with the build there? It's fully integrated into OpenStack, so if you were here in the last session, you will have seen Mark Shuttleworth and James Page deploying workloads into a running OpenStack that is using LXD as a hypervisor, in exactly the same way as KVM. So it's still representing a machine, but instead of a KVM machine, it's pulling up a container — a full machine container.

The benefit of that, of course, is that you're getting much better raw performance. And again, if you were in the last session, you'll have seen that the raw performance is within the margin of error of measurement between running natively on the bare metal and running within the container. Why? Because the container effectively is the bare metal. So without the overhead, all of the things we've worked on with carriers, for example, who want to go tweaking away in libvirt and QEMU and OVS to try to optimize to the nth degree on a single machine with bespoke tuning — those things we do not have to do in the container, because the kernel is able to manage it much more effectively. We're also able to provide full machine containers that are exclusive on a single machine. So if raw performance at the compute level is super important to you, we can do that. Ironic is one way of doing it; we don't think that's a particularly effective way. A much more efficient way is to use the exact same framework that you have with Nova today deploying KVM machines, but to do it with machine containers via the integration with LXD. And we have a lot of data, across various different workloads, that we can provide you that shows why that is the case.

So the final piece, optimizing storage, very quickly: bcache. Has anyone heard of bcache? Yes, good. Bcache is a way of putting an SSD or NVMe front end in front of traditional rotating disks, to act as a cache. So when I perform a write operation, it goes into the SSD — it goes super fast — and then there's a write-back into the traditional spinning disk on the back end. And likewise if I'm doing reads. The benefit of this is that it allows me to get SSD performance across my entire storage, whilst only a proportion of my storage is actually SSD. Is anyone using that? The gentlemen that were nodding in the middle, are you using that? Not yet, okay. Well, certainly in our experience, and in a lot of the testing that we have done — one of the reasons that James Page, who was on stage yesterday in the Interop Challenge, was the quickest to install (as he says, it wasn't a race, but we came first) is because they were using this exact architecture: a lot of SSD, a bcache front end on top of traditional spinning disk.
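For those who haven't used it, the bcache arrangement Mark describes is set up along these lines with the bcache-tools package. The device names are placeholders, and write-back caching is shown because it matches the "writes land on the SSD first, then flush to the spinning disk" behaviour described above:

    # /dev/nvme0n1 is the fast cache device, /dev/sdb the spinning disk
    $ sudo make-bcache -C /dev/nvme0n1 -B /dev/sdb

    # The combined device shows up as /dev/bcache0; put a filesystem on it
    $ sudo mkfs.ext4 /dev/bcache0

    # Use write-back so writes hit the SSD first and are flushed later
    $ echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode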
With that, I'm going to hand over to Vincent and let him run through. Thank you.

Thank you, Mark. So maybe let me — I need to get hold of your mouse. So first, thank you. So, when you come to an event like this, everyone says we need to optimize networking performance. So just wondering, in the room, who is trying to run at 10 gig or more in their data centers? Okay. So that's what we're going to look at. Let's move on.

So unfortunately, we are not able to use these nice lunchbox machines for the demo today, because we want to use some Ixias against a real NFV system to do a lot of benchmarking. So what I'm going to show you — okay, thank you — is first how to deploy acceleration of the vSwitch that you have in an OpenStack environment, in an easy way, using the charms environment. Why we had to do it this way: still, a year ago, when customers and partners were trying to do benchmarking, they were spending weeks until they could push packets from the Ixia, because the open source solution was complex, because of the DPDK dependencies — I mean, it was just a mess. So there were two problems to be solved. The first problem was OpenStack itself, and Canonical is solving that pretty well. The second problem was to get an efficient virtual switch injected into the system. Then I'm going to show you how we boost the performance of Linux — and I will talk a bit more about that when I come back to this topic — and then we'll see some numbers out of the Ixias.

So we are going to use a simple topology. We'll start with a MAAS server, which Mark has introduced. We'll have the OpenStack Nova controllers and two compute nodes, all interconnected with two 10 gig interfaces. So it's a pure OpenStack setup, typical of an NFV environment — even if I don't pitch it too much as NFV, because it applies just as well to clouds, enterprise clouds, private clouds, anyone that needs low latency for high-transaction applications. So first we set up our local cloud; in this case it was in 6WIND's labs. Once you have the credentials for 6WIND's labs, you'll be able to connect. Then we're going to bootstrap and deploy the OpenStack environment in a few minutes. So maybe you can comment, by the way, Mark — feel free.

Sure, yes, absolutely. What Vincent is showing right now is the process of deploying an OpenStack cloud. We have the model defined — it's our standard reference bundle, the OpenStack bundle, that we're deploying. And so he'll go through — it's actually talking to real hardware, it's talking to MAAS, exactly as he said. It commissions that hardware, brings it up, and then takes the model and lays it down onto the hardware. So it'll go through that process. It's accelerated here, as it normally takes a few minutes.

Yeah, so unless we all go and get a coffee and come back — I don't know where the coffee machine is. I saw a gentleman there asking, can we increase the font size? Unfortunately, we can't, because it's not a real terminal — it's a video. Yes, I feel bad about that. So, okay, here is the bundle. Now, as you can see, we're going to load the model, the new environment, onto the systems, and we're going to run a watch — here we're running juju status every second, so you see a live environment, but accelerated. So no time to go for the coffee right now. Which means that in a few minutes we'll see all the nodes get ready, with the Novas and the Neutrons, and everything will become active. So that's running — that was running, in fact, on the servers that I introduced previously. And once it is done, on that bundle, you'll see I will try to add some NFV acceleration. So we are almost done. So now we are done. So now it's the next step.
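The "next step" Vincent is about to show — inserting the 6WIND virtual accelerator into the running model — comes down to something like the following. The charm and relation names here are taken from the talk rather than verified against the charm store, so treat them as assumptions:

    # Pull the virtual accelerator charm from the charm store
    $ juju deploy virtual-accelerator

    # Relate it to Nova so it is wired in on the compute nodes,
    # sitting between Nova and Neutron as described below
    $ juju add-relation virtual-accelerator nova-compute

    # Watch the units come up
    $ watch -n1 juju status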
So, I could have included this in the previous bundle, but I really wanted to show that it can be two independent, incremental steps, because sometimes people say they want to deploy their system first, try it, feel the performance, and then feel the benefit of adding some acceleration. So as the next step, we're going to add it from the charm store. If you go onto the Juju charm store and you look for virtual accelerator, you'll get the 6WIND charms — you are free to try them. We're going to deploy it on the environment that you saw previously. Since the virtual accelerator needs to get the packets out of the QEMU/KVM virtio, we don't want these packets to go through the Linux kernel; we want to send them straight to the physical link. So we are going to sit in between Nova and Neutron. That's why we have relations between this virtual accelerator and Nova: when Nova bootstraps the QEMU vhost back end, we will plug in the vhost-user — which you may be familiar with from DPDK — and then we'll process the packets. And anything that Neutron commissions and configures — Linux bridges, some OVS settings, some bondings — we will take care of it. Since we just have to deploy the virtual accelerator on the compute nodes, and OpenStack is already set up, it's going to be quicker. So as you can see, it's here: the virtual accelerator is being deployed. This is almost real time now; it's not an accelerated video.

So, since it's Juju, and everyone likes to see the Juju GUI — which is pretty nice, by the way — here we are. As you can see, we get this virtual accelerator inserted right in between Nova and Neutron.

I just want to underscore what has happened here. Vincent has deployed an OpenStack environment and then, via a single command, has added in the 6WIND accelerator technology. And because all of the integration that's required between Neutron, OpenStack, and the 6WIND technology is defined and encapsulated in the charm, it's just that single command. That's how the icon that you'll see there on the right, the 6WIND technology, has arrived in our OpenStack environment. It hasn't required any consulting; it hasn't required any experts in either OpenStack or 6WIND to add this in. It's just part of the model.

From there you could try to deploy some DPDK applications in the VMs, with OVS-DPDK or some of the other solutions which are out there — maybe not that many. Because yesterday, and every time, we get the same question — I remember there was a presentation from a gentleman yesterday, and right after the presentation the first question was: how did you make it work? Here, as you see, as Mark just said, it was just thanks to the charm. In fact, it works right away. And of course you can then play with different tunings and settings of the services being deployed.

Okay, so once we are done — now, the nice things. The second point about the 6WIND technology is that it's not like just any DPDK application. We at 6WIND strongly believe that the Linux networking data model is very important. We want to use iptables. We want to use ovs-vsctl. We want to use brctl. We want to use ethtool. We want to use iproute2. This data model for networking is key — we don't want you to learn yet another CLI, yet another shell.
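To make that concrete, these are the kinds of stock Linux commands Vincent walks through next; the interface names (eth4 and the tap device) come from his demo environment, and the offload settings are only an example of the low-latency tuning he mentions:

    # Still an ordinary netdevice, with statistics, even though DPDK owns it
    $ ip -s link show eth4

    # Driver information and offload flags via plain ethtool
    $ ethtool -i eth4
    $ ethtool -k eth4

    # Disable LRO/TSO for latency-sensitive workloads, if that suits you
    $ sudo ethtool -K eth4 lro off tso off

    # And tcpdump still works, which plain DPDK-bound interfaces don't allow
    $ sudo tcpdump -i eth4 -c 10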
I mean, as you know, on Linux and in Linux environments, you can use Python to manage your networking, and that's exactly what Neutron is doing: through some Python, some very sophisticated logic, you can configure networking with VXLAN overlays, GRE overlays, with security groups. You can use Calico logic if you want Layer 3 for your model. You can use different SDN logic — for instance, you can use OpenDaylight, which is some Java that will configure the networking on Linux. So it's already there. Why should we add complexity? So let's keep Linux.

So I hope the display is readable — I hope you will trust me. If you see number 25 here, which is eth4, it's a netdevice — a netdevice which has, in fact, been stolen by DPDK, but it's still visible when I do an ip link show, which means I can do ip -s link show and I will see its statistics. If we scroll down, you'll see on line 42 the tap netdevice, because I've already spawned a virtual machine, and that tap interface is, in fact, a vhost-user. And I still see it from my ip link show. Which means, in the same way, I can run SNMP, for instance, and see the interface MIBs on it. I can see the benefit of Linux namespaces. Everything is just like it would be in Linux. But then you'll see, using ethtool, that it's not like just any netdevice: ethtool is telling me it's now a DPDK netdevice. That's the case for eth4, and that will be the case for the tap interface.

Other things that sometimes you have to do, depending on the workload you have in your data centers: if it's very low latency, you don't want to have any of the TCP offloads — you want to disable all the LRO, TSO — because sometimes they can add overhead to your latencies. Sometimes you want to have them. So if you want to check what the NIC can do, what you do on Linux is use ethtool -k, just like you always do. Here you have it: as I said, we have a netdevice, it behaves just like Linux, so you can use ethtool -K to turn the different hardware offloads of your NIC on and off. Same, of course, on the tap device.

Another thing that I did capture here: have you ever tried to debug your OpenStack like that? It doesn't work. Unfortunately, as soon as you start a DPDK NFVI, that's what happens. If you've ever tried to debug your SSH — why can't I do my first SSH to my VM? — you try that, but you don't have a netdevice anymore; you cannot run tcpdump on your interfaces. For people who are, let's say, used to running OpenStack environments on top of Linux, this is obvious; for many people, it is not. So here, running on this eth4 interface, we are going to see ten packets from the Ixia, proto 661. As I showed previously with ethtool, this interface is one of the DPDK interfaces, but it's still managed through the Linux data model.

Okay, now let's move on with a few benchmarks. So remember, I had two compute nodes. We're going to run some benchmarks, in one case through OVS — we could use OVS or a Linux bridge, it doesn't really matter, in fact the performance is about the same — since we have this VXLAN framework, and VXLAN is very costly on a Linux node. Then we're going to do the same benchmark, but accelerated by the virtual accelerator, and you'll see the numbers and the latency for both cases. And we'll monitor the workload using Grafana. So, about the throughput: we start with the first case, where we push packets from the Ixias to a VNF.
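For context, the VXLAN overlay being exercised here is ordinary OVS configuration rather than anything accelerator-specific; a tunnel port of the kind Neutron programs looks something like this (bridge name and remote IP are placeholders):

    # Add a VXLAN tunnel port to the tunnel bridge
    $ sudo ovs-vsctl add-port br-tun vxlan0 -- \
        set interface vxlan0 type=vxlan options:remote_ip=10.0.0.2

    # Inspect the resulting configuration
    $ sudo ovs-vsctl show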
For that VNF, we used another 6WIND product, which is a virtual router. And you'll see here that the maximum we can get into this VNF — which is, by the way, running as another DPDK application — is 5 gig, which is very frustrating. And if you look at the Linux CPU usage, I have four cores which are busy on the host side just to keep pushing 5 gig towards the VNF. So, no way — it's not optimal. You cannot deploy that for telecom applications. So now, okay, we jump to the compute node that runs the virtual accelerator. We didn't change anything on that one — exactly the same networking, same VXLAN overlay. And then we get to 20 gig. In fact, we are at full line rate, because we don't have more ports on this setup. And if you look at the CPU usage from Linux, we are 100% busy on one single core — that's because of DPDK, that's it. And we are good: 20 gig.

So now let's check what that means for latency. Why does latency matter? Of course, for transactions, for databases, things like that; but in telecom it matters because when we make phone calls, latency is very, very critical. So here, we're pushing traffic through this Linux virtual switch, and the same traffic through the virtual accelerator. As you can see, the mean latency for the previous case is about 112 microseconds. In our case, without changing anything — still VXLAN — we are at 9 microseconds, which means a big improvement in latency, which is very important for the VNFs.

So that's the key message that I wanted to bring you: you don't need to break your networking model. Keep it as it is — it's very important — and just add some acceleration. Some people think hardware acceleration only, but hardware acceleration like SR-IOV brings complexity: you lose security groups, you lose your iptables, you sometimes lose live migration. Here, we can keep a software model for packet processing but, at the same time, keep it as Linux. Keep in mind that OVS and iptables are Linux things; we don't want to lose the Linux story. And DPDK is not here to kill Linux — it's here to accelerate some Linux applications. That's what we do with the virtual accelerator from 6WIND. So thanks. Thanks, Mark.

Well, thank you. Thank you, Vincent. And thank you all — I know we've run over, so I appreciate you staying here and sacrificing some of your coffee time. We will make this video available — obviously the video of the session will be available, and the video of the 6WIND demonstration, the integration of the virtual accelerator technology with Ubuntu OpenStack and Juju — we will make that available to you, so you can view it in your own time; I appreciate it will have been hard to see. Does anyone have any questions quickly? No, everyone needs coffee. So thank you very much, and we look forward to seeing you soon.