Hi, everyone. Thank you for coming to the last talk of the day. We're going to talk about whether OpenStack is ready for 5G. It's a big topic: there's OpenStack, and there's 5G, which covers everything. So this talk could have been about everything, or about nothing, or about anything, and I decided to take the anything approach. I'm Malini Bhandaru, a principal engineer and cloud architect at Intel. I've been working with OpenStack for about five years now, so this is nearly my ninth OpenStack Summit. I'd like to call out my colleagues Yun Hong and Brian, who helped me with the slides, and Sandra, who contributed some very good graphics.

So what are we going to cover? 5G: what is it, when will it be here, and what are the expectations and challenges? It's just two little characters, but what does it mean? We'll find out that it's all about jitter and latency, and that's the part I'm going to focus on: handling jitter and latency. And you can't achieve anything today without taking the whole open-source ecosystem into consideration, leveraging it and contributing back to it.

5G is an evolution. There's a progression: 1G, 2G, 3G, 4G, and here comes 5G. 4G keeps improving, giving you more connectivity and speed, but 5G is the true next generation. It promises higher speeds, higher capacity, and ultra-low latencies. When do we need that? You have a connected car: the map has to reach you before you cross the interstate, before the exit arrives, and you need to control your brakes in real time. If you're doing virtual reality or augmented reality, a lot of data has to come down to you, and all the processing, you've moved, you're trying to laser-zap somebody, has to happen seamlessly; otherwise there's nausea and it won't be an enjoyable experience. You want to control your drones. You have connected cities, traffic lights, you name it. So a lot of data has to move. Some of it has to be real time; some of it can be slow if it's an Internet of Things device, like a tractor on a farm somewhere. That's the space we're looking at; that's the promise of 5G.

Where are we today? In 2017 we're pre-5G. The standards come out in 2018, so people are feverishly working on what's called the New Radio standard, and we're looking at things like 20 gigabits per second and one-millisecond latencies. That's still not enough for the financial markets, by the way. True 5G we expect around 2020. So this is a good year to ask the question: is OpenStack ready for 5G? We're asking at the right time; we can get on the right path and achieve that goal.

So what does this universe look like in three years' time? We're talking 50 billion connected devices. Just about everything: not just the Nest in your house, not just your iPad and your iPhone and your other gadgets, but everything from healthcare to traffic lights to trains and planes and cars, and everyone these days has so many devices.
And we're looking at terabytes of data. In your airplane, every part is going to be sending data down, so your black box is no longer just a recording of what happens in the cockpit; it's every engine, every rotor, every airfoil out there. Smart factories are a whole new paradigm too, because when machines are coordinating, say you want to toss in a rivet, is the hole ready for the rivet? Those steps have to align at microsecond time scales; you can't have one machine punching the rivet before the hole is there.

Does everything need high connectivity? Does everything need high bandwidth? No. Think of your network space as slices: some slices need high bandwidth, some need ultra-low latency, others need just a little bandwidth, and yet others, like your virtual-reality applications, need both. All of this eventually feeds into your data centers. And here is one of the most important things, an aha moment when I realized it: it's not the north-south traffic reaching your data center that dominates. That's still rising, but it's a slow rise. What's going up in leaps and bounds, 5x, 10x, is the east-west traffic: everything that happens once your data reaches the network, whether it's deep packet inspection, a firewall, load balancing, or working out who your Facebook network is, who you should notify, who's following you, and all the analytics that go on there. We'll see how that affects us as we go along in today's short journey.

Now let's talk about latency and jitter. But first, what kind of time scales are we looking at? Voice over IP tolerates about 150 milliseconds. Radio access networks are at about a millisecond, and things are getting smaller still: we're looking at one-tenth of that for 5G, and automated stock trades are at microsecond time scales.

Now pause and look at what you have on your platforms. When a CPU talks to memory to retrieve a cache line, that link runs at about six to eight gigabits per second. PCI Express is in the same range, about five gigabits per second per lane, and you can have 16 lanes or more. RAM access, if the data isn't in your cache, is about 200 nanoseconds, and if you find it in the last-level cache on an Intel processor, you're talking about four nanoseconds. So on one side we're talking milliseconds and microseconds, and here we're talking nanoseconds. These are the limits of your hardware today, and the question is where you're going to lose time. You can't let your software stack fritter away everything the hardware gives you, so you want to take care of the software stack and balance things well.

A context switch is about a thousand nanoseconds. I could have written that as a microsecond, but why did I point out the thousand? Because that's an important number.
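To make that number concrete, here is a minimal sketch of mine, not from the talk: it bounces a byte between two processes over pipes, and every round trip forces at least two context switches. Python's interpreter overhead inflates the absolute figure, so treat the result as an upper bound on the kernel's switch cost.

```python
# A rough sketch (not from the talk) of measuring round-trip
# context-switch cost with a pipe ping-pong between two processes.
# Python's interpreter overhead inflates the number, so treat it as
# an upper bound; each round trip forces at least two switches.
import os
import time

ITERATIONS = 10_000

to_child_r, to_child_w = os.pipe()
to_parent_r, to_parent_w = os.pipe()

pid = os.fork()
if pid == 0:                         # child: echo one byte back until EOF
    os.close(to_child_w)
    os.close(to_parent_r)
    while os.read(to_child_r, 1):
        os.write(to_parent_w, b"x")
    os._exit(0)

os.close(to_child_r)                 # parent keeps only its own ends
os.close(to_parent_w)

start = time.perf_counter_ns()
for _ in range(ITERATIONS):
    os.write(to_child_w, b"x")       # wake the child (switch out)
    os.read(to_parent_r, 1)          # block until it answers (switch back)
elapsed = time.perf_counter_ns() - start

os.close(to_child_w)                 # child sees EOF and exits
os.waitpid(pid, 0)
print(f"round trip: {elapsed / ITERATIONS:.0f} ns (>= 2 context switches)")
```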
When your targets are things like a millisecond, you don't want to fritter away a microsecond here and a microsecond there on context switches. And remember, network interfaces keep getting faster: they're at 10 gigabits now, they can become 40, they can become 100, so you have to watch all of this.

So what consumes time? System calls, context switches, data copying: can I avoid any of these? Interrupt handling, and resource contention: I want it, you want it, and we keep bouncing each other out. There are going to be cache misses and the like, and those cost us too. Go back to the numbers: a context switch is a thousand nanoseconds, and when you miss a line in your cache, you can't stop at the third-level cache, you have to go all the way to RAM, and that's about two orders of magnitude right there.

Sources of latency: I'm diving deep here to show anybody using this software, the Linux kernel, your hypervisor, your KVM, where all this time goes. It's like a cookie with crumbs falling away from you, so you have to watch out in every direction. When you have something like Open vSwitch, which runs on every compute node in an OpenStack deployment, there are several tunnels out there, and they cost you time. Something comes off a NIC and sits in the incoming request queue: another place where data waits to be processed. Then there's an interrupt, hey CPU, there's a little packet for you, please come handle it: that's interrupt processing.

What else is out there? If you're using a hypervisor, there are VM enter/exits: that's an overhead. You have multiple cores and an operating system scheduling work across them: scheduling overhead. Now suppose you're inside the virtual machine at that point; the applications running there have their own scheduling, so that's vCPU scheduling, another context switch, another scheduling overhead. As you can see, time slips through your fingers every which way, so we have to address all of it. You might not get to everything, but you have to try to get to most of it. In the old days the answer was more processing power: more cores, more lanes, more buses, more capacity let you handle more throughput. But you can't handle latency without taking care of these issues; the whole software stack matters, not just the hardware.

As we said earlier, we have to deliver our solutions through the ecosystem. Whatever you do should go into the Linux kernel so everyone can leverage it. It might have to be another open-source project, like OpenDaylight or ONOS, that does your software-defined networking. Or it might be ONAP, the latest initiative to bring in NFV workloads and make sure they all work end to end.
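Back to data copying on that list of time sinks: here is a tiny illustration of mine of why avoiding copies matters, using Python's memoryview to hand a slice of a buffer to the next stage without duplicating it.

```python
# Tiny illustration of data-copying cost versus zero-copy views.
# Slicing a bytearray duplicates the data; a memoryview slice just
# points at the same memory, so the next processing stage can read
# it in place.
import time

buf = bytearray(16 * 1024 * 1024)      # pretend this is packet data

t0 = time.perf_counter()
for _ in range(100):
    chunk = buf[1024:]                 # copies ~16 MB every iteration
copy_s = time.perf_counter() - t0

view = memoryview(buf)
t0 = time.perf_counter()
for _ in range(100):
    chunk = view[1024:]                # a new view of the same memory
view_s = time.perf_counter() - t0

print(f"copying: {copy_s:.4f}s   zero-copy views: {view_s:.6f}s")
```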
So this is an eye chart, because it's OPNFV. There's a lot going on in OPNFV, and sometimes I like to think of it as the ring that rules them all. Why do I say that? Because it brings the whole software stack together. You can't just say, hey, it works in my OpenStack, or hey, it works in ODL. Do they work well together? Can I change things and configure them so that, yes, my firewall does my protection and my load balancer does its thing? That's why OPNFV is like the ring that rules them all.

You see all these boxes? There's a little color coding going on. The blue ones, like Fuel on the leftmost side, and Apex, are the ones Intel is involved in. We're not everywhere; it would be nice if we could be, but resources are limited, so we focus on some things. We've been working on Fuel for installation of the whole software stack. More recently we've been involved in Apex, along with Red Hat, again to install OpenStack. And our latest little effort out there is Kolla: we want to bring an entirely open-source installation solution, Kolla from OpenStack, containerized, into OPNFV.

What else are we working on? We're working on Barometer, and on models. What do we mean by models? A model of your hardware, a model of your VNF, and how they relate. My VNF needs something in terms of latency or jitter or acceleration; the hardware provides certain things. You need those two models, hardware and software, to see whether they match, and then launch things. Typically in OpenStack we have a definition of flavors and a definition of filters, and together they're used to orchestrate. But how nice if you could do it all declaratively: what the hardware has, what the software needs, and then just do it, instead of having an explosion of filters. A more declarative way. We've also been working on SFC, service function chaining, and you'll see that later on, plus Yardstick and VSPERF, down at the very end.

We're going to talk more about the project in the bottom rectangle, KVM for NFV: Kernel-based Virtual Machine for NFV. This is where we deal with latency and jitter and things like that. I won't talk about DPDK, I won't talk about Open vSwitch; I'll focus pretty much on KVM for NFV today.

So what does KVM for NFV do for us? There are a few things, and we'll walk through them over about four slides. The first and most important thing KVM for NFV did: yes, you have all those knobs, you have all those dials, but what should you set them to for NFV workloads? If you're running OpenStack, you have compute nodes. What OS are they running? What settings do they have? Are they NFV friendly? That's partly how OpenStack uses its filters and defines host aggregates: we can define a host aggregate of nodes whose hypervisors and settings are NFV friendly.
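As a toy illustration of that declarative idea (my own sketch, not Barometer's actual model format; every field name here is invented), picture the node publishing what it has and the VNF declaring what it needs, with one generic matcher in place of a zoo of filters:

```python
# Hypothetical sketch of declarative capability matching: the node
# publishes what it has, the VNF declares what it needs, and one
# generic rule replaces an ever-growing zoo of flavors and filters.
# All names and fields are invented for illustration.

node_model = {
    "hugepages_1g": 16,          # free 1 GiB huge pages
    "isolated_cores": 8,         # cores fenced off from the scheduler
    "max_latency_us": 50,        # what this host can promise
    "accelerators": {"qat"},     # e.g. QuickAssist for crypto/compression
}

vnf_model = {
    "hugepages_1g": 4,
    "isolated_cores": 2,
    "max_latency_us": 100,       # the VNF tolerates up to this much
    "accelerators": {"qat"},
}

def satisfies(node: dict, vnf: dict) -> bool:
    """One generic rule: enough of every counted resource, a latency
    promise at or below the VNF's tolerance, and every requested
    accelerator present."""
    return (node["hugepages_1g"] >= vnf["hugepages_1g"]
            and node["isolated_cores"] >= vnf["isolated_cores"]
            and node["max_latency_us"] <= vnf["max_latency_us"]
            and vnf["accelerators"] <= node["accelerators"])

print(satisfies(node_model, vnf_model))  # True: schedule it here
```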
One way to achieve NFV friendliness is to reduce jitter. How do you reduce jitter? You basically have to close your ears and say: I don't want interrupts. And you also have to say: I don't want to worry about power. Long ago, before I joined all the cloud effort, I used to be a Xeon power-performance engineer, trying to save every little kilowatt: if the workload isn't using much compute, I can reduce the frequency of that core. That was the birth of per-core P-states. And now, sitting on this side of the fence, we say: turn that off. Don't bounce the frequency of your core around; keep it pegged at the highest possible and let it run, so there's no jitter from that source, because the frequency the core runs at determines how fast your instructions execute. In the same bucket are C-states, P-states, and turbo. Every gamer knows about turbo: we save some energy on one core to give it to another core, the one running your game, so it can run super fast.

Another very important thing is like keeping a child happy: you don't have to share your toys. The same notion here: disable sharing. What do we mean by sharing? Long ago, when we first brought hyper-threading to a core, it was basically to hide latency, the latency of getting data from memory, and it bought you about a 30% performance improvement. But the moment you share your core between two execution threads, one can evict a line from the other's cache. Intel processors have three layers of cache; the one closest to the core is L1, the next closest is L2, and in that tiny, super-fast cache there can be contention under hyper-threading. So say: no hyper-threading. Don't throttle something that's very busy working by interrupting it for other tasks. Isolate your CPU cores: again, no sharing, don't let anybody else preempt you from this core. That reduces jitter. Then allocate memory that belongs to this workload alone, direct-memory-access style, and use huge pages so you have fewer page faults: instead of the standard 4K pages you can go all the way up to 1G pages. Those are settings you make when you set up your BIOS and your operating system. Then you can say this node is one of the NFV-friendly ones and add it to an NFV host aggregate.

What can you do to reduce latency? Reduce things like synchronizations, push some things into background tasks, and defer others: if you're doing a virtual machine migration, delay some of the cleanup tasks until afterwards. Another very important thing is affinity. If you have a multi-socket system, make sure your workloads are NUMA aware, so their data sits in the memory closest to the cores doing the work. Don't put them on two edges so they keep crossing the bridge, because the bridge has finite bandwidth.
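Here is a minimal sketch of what two of those knobs look like from user space on Linux: pinning a process to isolated cores and backing a buffer with huge pages. It assumes a kernel booted with isolcpus= and huge pages reserved in advance; the core numbers and sizes are made up.

```python
# Sketch: pin this process to isolated cores and back a buffer with
# huge pages. Assumes Linux, cores 2-3 fenced off with the isolcpus=
# boot parameter, and huge pages reserved ahead of time; the numbers
# are illustrative only.
import mmap
import os

ISOLATED_CORES = {2, 3}                  # must match your isolcpus= setting
BUF_SIZE = 2 * 1024 * 1024               # one 2 MiB huge page

# Not exposed by the mmap module on every Python version; 0x40000 is
# the Linux value of MAP_HUGETLB.
MAP_HUGETLB = getattr(mmap, "MAP_HUGETLB", 0x40000)

# 1. No sharing: move this process onto the isolated cores, so the
#    scheduler won't run anything else there or migrate us around.
os.sched_setaffinity(0, ISOLATED_CORES)

# 2. Fewer page faults and TLB misses: ask for huge-page-backed
#    memory. This raises OSError if no huge pages are reserved
#    (e.g. via /proc/sys/vm/nr_hugepages or a boot parameter).
buf = mmap.mmap(-1, BUF_SIZE,
                flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS | MAP_HUGETLB)

buf[:4] = b"ping"                        # touch it: the page is wired in
print("running on cores", os.sched_getaffinity(0))
```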
And then another very important one. As we've moved to modern processors running at three, three and a half gigahertz, the clock tick can come at a very high rate, and every time the tick happens, the kernel checks: is there anything for me to do, any interrupt handling to take care of? Not every clock has to tick at that frequency. So reduce the clock tick frequency just for the timer interrupts, and let the work itself go on at the full 3.5 gigahertz. It's really all about closing your ears: fewer interrupts. And in older processors there was some amount of drift, you couldn't trust your clock so much, so there was a watchdog process, which again consumes time. You get to turn such things off too. This is the level of detail Intel engineers have gone to in the KVM for NFV project to make sure you get the maximum benefit in reduced latency and jitter, and all of it is upstream.

Now, there's no escaping that we do need live migration. When do we need it? Your hardware is failing. Or you have a security patch, something like Heartbleed, that you want to put on those servers, so you have to migrate the workloads off. Or sometimes you just have a new operating system you want to roll out. So how do you speed migration up? Originally people thought: let me compress the workload before I migrate it. If it's compressed, there's less to move over the data-center network, so it moves faster and there's less contention for network bandwidth. We did play around with that, but it wasn't the best idea, because when you do compression in software, you're using a CPU core, which means that core is busy and can't do other work, which introduces latency and jitter. Then there was another idea: auto-converge. If a workload is doing a lot of dirty paging, that is, a lot of writes, you pause it and say, stop here, let me move you, and then you can restart. But that doesn't work well either, because it increases latency. So after those false starts, the optimizations we have upstreamed are these: defer some cleanup operations, so after the workload has moved, the cleanup happens on the old host while your firewall gets on with its job wherever it lands next; and use Intel's newer instructions. These are vector-processing instructions, and there have been generations of them. The early SSE instructions were the ones used for zero-page checking; move that to AVX and you get roughly a 2x speedup. And again, we cut out some operations.

Moving on: we mentioned service function chaining earlier. What is service function chaining? You can have, say, a firewall, followed by a load balancer, followed by a deep packet inspector. These are virtual network functions, and sometimes you want to chain a bunch of them. Do you want to chain them across different physical hosts? Maybe not. If they're sitting on the same host, can I leverage that fact to speed up the inter-VM communication? That's what we wanted to do here: speed up inter-VM communication through shared memory. And then you can add things like access control to that shared memory: VM1 can see what VM2 is putting in there, with limited sharing, sharing for a certain time or a certain amount.
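As a toy model of that idea (two ordinary processes standing in for two VNFs; the real mechanism lives at the hypervisor level, along the lines of an ivshmem-style shared-memory device), the point is that the consumer reads the data in place instead of copying it through a network stack:

```python
# Toy model (mine, not the project's code) of shared-memory inter-VM
# communication, with two processes standing in for two VNFs on the
# same host. The producer writes a "packet" into a shared region and
# the consumer inspects it in place: no copy through a network stack.
from multiprocessing import Process, Semaphore, shared_memory

REGION = "vnf_shm_demo"          # hypothetical name for the shared region
SIZE = 4096

def firewall(ready):
    shm = shared_memory.SharedMemory(name=REGION)
    ready.acquire()                            # wait until data is there
    packet = bytes(shm.buf[:64]).rstrip(b"\0")
    print("firewall inspected:", packet)       # read in place, zero copy
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=REGION, create=True, size=SIZE)
    ready = Semaphore(0)
    consumer = Process(target=firewall, args=(ready,))
    consumer.start()

    msg = b"GET / HTTP/1.1"                    # the upstream VNF "sends"
    shm.buf[:len(msg)] = msg
    ready.release()                            # signal: packet available

    consumer.join()
    shm.close()
    shm.unlink()
```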
So there's a whole lot you can do there. This is work in progress, not yet completed, but the concept is to reduce the communication time between VMs on the same host.

Okay. Now take all those ideas we talked about, reducing latency, reducing jitter, isolation, shared memory: all of that is encapsulated in an implementation called DPDK, Intel's Data Plane Development Kit. It does essentially just that. It uses poll mode, asking is there a packet waiting for me on the network interface card, as opposed to interrupt handling. It's NUMA aware. It uses shared memory to reduce locking and synchronization. We have integrated DPDK into Open vSwitch. And the same concepts are used in FD.io and VPP, which apply vector packet processing on top of the DPDK principles. It essentially skips a whole chunk of the networking stack in Linux and uses these techniques to speed things up, which helps bring our latency down. It's fast enough to keep up with a hardware packet generator.

So we've talked about KVM for NFV and the things we've done there to improve performance by controlling jitter and latency. But what else can we do? Intel is Intel: we're a processor company, a hardware company. We really aren't the experts in the network space, in the sense that we aren't the ones who create the firewalls, the load balancers, the 5G packet cores and things like that. This is where the users come in, where you vendors come in, where you come in and say: we need this, or please test our workloads. So we ask you to bring your problems and your solutions, as black boxes or as open-source solutions, and contribute them to our benchmarking space. Put them in, let's see how they run, let's see what the problems are, and let's see how we can help you. We want to understand your scalability and agility issues and your system needs. It's a win-win: you bring a workload in, it doesn't perform, and we change something in our hardware or tell you how to tweak things, and we can also help you build your total-cost-of-ownership model so we all understand things better.

So what do we plan to do with your workloads? We want to test them in a standalone Linux environment, inside a cloud, and as part of a managed workload: the whole slew, and here's why that's important. We were talking about live migration. You can do live migration from one hypervisor host to another without OpenStack in the mix: just a script that says, hey, I have 10 VMs, move them; how long does it take? Then you can have VMs of different sizes, VMs that do more writing, and ask how fast they move. Then you put the same thing in the context of OpenStack and ask: do I get the same numbers? We didn't. There were two bugs in OpenStack: one in the way it determined whether migration was converging, and the other a race condition. That's why we weren't getting those numbers.
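That standalone check is really just a loop with a stopwatch. A minimal sketch with the libvirt Python bindings might look like the following; the URIs and domain names are placeholders for your own lab.

```python
# Sketch: time live migrations of a few VMs between two hosts with
# the libvirt Python bindings, no cloud orchestrator in the mix.
# Requires the libvirt-python package; URIs and domain names are
# placeholders for your own lab setup.
import time
import libvirt

SRC_URI = "qemu:///system"
DST_URI = "qemu+ssh://dest-host/system"
DOMAINS = ["vm01", "vm02", "vm03"]           # the guests to move

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

for name in DOMAINS:
    dom = src.lookupByName(name)
    start = time.perf_counter()
    # VIR_MIGRATE_LIVE keeps the guest running while memory is copied.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print(f"{name}: migrated in {time.perf_counter() - start:.1f} s")

src.close()
dst.close()
```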
So that's one of the reasons we say we want to benchmark across standalone, managed, and native Linux: to get a better feel for it. We also want to see scale-up and scale-out: could I have put two instances there and still get good performance? Or do I need a much faster processor, and how does it perform on different generations of our platform? And we also need your key performance indicators, from the vendors and the users, so we can use them as a feedback mechanism: are we doing the right thing here? We have some open-source solutions, but we'd also like your black-box solutions so we can test on real NFV workloads.

Last but not least, part of what's awesome about the OPNFV project: a lot of people know they have real workloads, and they know they need to test them on real hardware, and so there are labs. There are about 15 labs provided by several vendors: Ericsson, Intel, Huawei, et cetera. They've all contributed hardware. I invite you to use it, and one of the things we'll do as we bring out new generations of hardware is add them to these labs. Say you have a workload you're now very happy with on some Intel processor X; how would it do on Intel Y or Intel Z? Give it a try. And if you're a big company and you'd like to contribute hardware, that works too. You can schedule these labs; they're typically given out in two-week time slices. So just drop in your workload and try it. These systems typically have at least five servers: two control nodes for resiliency and three for compute. So you can even play chaos-monkey games, knock something off, does it resurrect itself? You can use microservices, use these systems any which way, and give us feedback: it worked, it didn't work, I'm happy. And you get a sense of what you might need if you're hosting your own cloud.

We talked about reducing jitter and latency, and we talked about sharing and not sharing, CPU isolation et cetera. But being a hardware company, how else can we help you reduce sharing? The cache on Intel processors has traditionally been all shared. Why shared? So that if I don't need it and you do, you can use it. So this is something I really want to share with you: it's called Resource Director Technology, and it's all about allocating resources. On the slide you see these things called COS 0, COS 1; it might be a little hard to read, but it means class of service. If everything is in the same class of service, the whole cache is pretty much shared, but you can chunk it: I'll let my workload write in 10 megabytes, and the next one gets another 10 megabytes, so I can isolate them, or I can allow some amount of sharing. I can also put two or three different workloads into one class of service, so only those three share that chunk of cache.
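On Linux this is exposed through the resctrl filesystem. A rough sketch of carving out a cache slice might look like this, assuming a CAT-capable CPU, a kernel with resctrl support, and /sys/fs/resctrl already mounted; the mask and the PID are illustrative.

```python
# Sketch: carve out a last-level-cache slice with Cache Allocation
# Technology through Linux's resctrl interface. Assumes a CAT-capable
# CPU, a kernel with resctrl support, and /sys/fs/resctrl already
# mounted; the mask and PID below are examples, not recommendations.
import os

RESCTRL = "/sys/fs/resctrl"
GROUP = os.path.join(RESCTRL, "latency_sensitive")   # our class of service

os.makedirs(GROUP, exist_ok=True)

# The schemata line is a bitmask of cache ways per cache domain.
# "00f" reserves the low four ways of L3 cache domain 0 for this
# group; a multi-socket box lists each domain, e.g. "L3:0=00f;1=00f".
with open(os.path.join(GROUP, "schemata"), "w") as f:
    f.write("L3:0=00f\n")

# Move a workload into the group; from now on its cache fills are
# confined to (and protected within) those ways.
with open(os.path.join(GROUP, "tasks"), "w") as f:
    f.write("12345")          # placeholder PID of the latency-sensitive VNF
```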
So essentially, what was simply shared at the hardware level now has hardware support for isolation. And that's awesome, because at the cache level, remember we talked about four nanoseconds? If you find your data in the last layer of cache it's about four nanoseconds, and it can be 200 nanoseconds if you have to go all the way to memory. Resource Director Technology can help you stay at the four nanoseconds, and that makes a real difference. On the left-hand side you can see what I mean by reducing the jitter and latency: you don't want that long tail. In this chart you see CAT, Cache Allocation Technology: we're showing workloads with cache allocation and without, and you can see the shift; the long tail disappears and all the yellows are nice and huddled. That means reduced jitter. So if you're something like a high-frequency trading workload, you really want to reduce that jitter: you say, I'm going to give you an isolated CPU and this much cache, so you have more predictability.

Another thing we have coming is accelerators. Remember we talked about moving a VM from one host to another, and how it could be slow because you're doing software compression? You want hardware compression, and that's where QuickAssist Technology comes in. It sits in a PCI device, with the same notions as SR-IOV, so your workloads get a direct channel to move through and compress themselves. QuickAssist offers you compression and cryptography. Take that another step, make it more flexible and programmable, and that's where our FPGAs come in. Through our Altera acquisition, the roadmap is to integrate the FPGA more tightly with the core, so it has access to the cache lines et cetera, and that's where you can program stuff: maybe you have a new machine-learning algorithm, maybe a proprietary encryption algorithm, and those kinds of things you can start pushing to the FPGAs. That's work in progress, and the whole idea is more hardware acceleration and better performance.

And what does the future hold? I mentioned Resource Director Technology in the context of cache, but what are the next layers of this? We have three layers of cache, so allocation and isolation at each of those layers. You can also start doing memory bandwidth allocation, monitoring it and then allocating it, and then network bandwidth too. That's the next level of all this; those are still coming.

Another very important thing: I can allocate, I can measure, I can monitor, with Ceilometer or anything else. But what happens if I collect this data on a compute host and bubble it up to the cloud orchestrator to say, you know, I'm suffering, I'm not getting what I need? Ceilometer sends it up about once a minute. Then let's say the cloud orchestrator is busy, it has a thousand nodes, it does some more processing, and it takes another minute, or even a few seconds, to come back. You see what that kind of latency does to the feedback cycle? When you're talking one millisecond, it took one minute just to collect the data. That's too much. So one of the things we're working on in my team is a local monitoring and adapting process. You have Resource Director Technology, and on top of it we've defined a resource agent: it monitors, it learns what the workloads' footprints are, and within some bounds it can juggle resources around between two or three workloads on its own. And if there's some virus or denial of service happening and there are just no more resources to handle it, if things fall outside those boundaries, it escalates and says, hey, all's gone loose, try to do something up there. But within its own context it will juggle, adapt, and ensure performance.
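The shape of such an agent is a fast local control loop with a slow escalation path. Here's a hypothetical sketch; the metric source, the thresholds, and the headroom numbers are all invented for illustration.

```python
# Hypothetical sketch of the local resource agent idea: a fast
# control loop on the compute host that fixes what it can locally
# and escalates to the (slow, minutes-away) orchestrator only when
# local headroom is exhausted. Every name and number is invented.
import random
import time

LATENCY_SLO_US = 100                   # what the workload was promised
headroom = {"cache_ways": 4, "isolated_cores": 1}   # spare local resources

def read_latency_us():
    """Stand-in for a real per-workload latency probe."""
    return random.gauss(80, 30)

def grant_local(resource):
    """Hand a spare cache slice or core to the suffering workload."""
    if headroom[resource] > 0:
        headroom[resource] -= 1
        return True
    return False

while True:
    latency = read_latency_us()
    if latency > LATENCY_SLO_US:
        # Try to adapt here, in milliseconds...
        if not (grant_local("cache_ways") or grant_local("isolated_cores")):
            # ...and only bubble up once nothing is left to juggle.
            print(f"ESCALATE: SLO breach ({latency:.0f} us), no headroom")
            break
    time.sleep(0.001)        # a millisecond-scale loop, not once a minute
```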
Other things that are coming: 5G modems, which are work in progress. And given that we want to reach these kinds of time scales, sub-microsecond response times, it has to happen across the whole platform, software stack and hardware, and that's what we talk about as microsecond platform time granularity. There's an IETF standard, and we're talking across Intel about where we can shave time and how we can make these things happen.

So it's all of you and us, we're all in this together; please get involved. And 5G is not going to be just about big clouds; there are going to be a lot of little clouds, the edge clouds. There was a mention of it in this morning's keynote, and there's going to be a birds-of-a-feather session, so look for the words "edge cloud" in your schedule and join those efforts. Originally we asked, can OpenStack scale? Sure it can scale, but it also has to work at the little edge-cloud layer, as efficient microservices, so you can have things like content distribution networks deeply embedded in your cities: collect all your data, analyze it, see if something's going wrong, and bubble up only what's important and necessary. This takes me back to when I first worked on deeply embedded networks: little devices the size of a quarter, without much power or performance, so you can't transmit everything; you aggregate, you watch for boundary conditions, and you bubble up only the essential stuff. And as I mentioned earlier, please share benchmarks, and please use the Pharos labs.

So we started off with a question: is OpenStack ready for 5G? I can say we're closer; please get involved. Any questions? Come on, one question please. Thank you.

Question: if it's "closer," then you think there are a few things that still need to happen. Can you pick one? If there were one thing that needs to happen in OpenStack to make this better, what would it be?

Let me see... no, it's not hard. One of my concerns, with my five years of experience in OpenStack, is the way Nova works today, and it's a very hard project to push forward. I don't want hundreds of flavors; I don't want this NFV flavor and that NFV filter. I want what my workload needs to be expressed more declaratively. I really want that part to happen. And another issue, again coming back to Nova, is the way it schedules: it looks at everything out there, here are my thousand nodes, which is the best? You don't need to do that.
And one of the things I would propose, along those lines, to speed up scheduling: if your different nodes have different capabilities, organize the search around the rarest capabilities and knock out the nodes that aren't even viable candidates up front.

Okay, sounds good. Thank you. Any other questions? One more, yes, please.

Absolutely. And Intel is not sold on it always having to be a hypervisor; we're not sold on it always being virtual machines. Go the container path and your VM enter/exit latencies go poof, and that's really nice; that's perfectly fine. We actually have a container work group that handles exactly such things for NFV: containers and NFV. Another thing: there's some commentary that in multi-tenant systems, containers don't give you enough isolation and security. To handle that, we have an intermediate ground: very, very lightweight virtual machines, a very lightweight hypervisor with a virtual machine and then a container inside it. We call them Clear Containers. So: the spectrum. Thank you for that question. Okay. Thank you so much, everyone. Thank you.