I'm going to leave it like that, otherwise I'm not going to talk about it. Yeah, that's fine. Okay. Welcome everybody to the fourth, I think, edition of the SDN/NFV devroom here at FOSDEM. We've got a great lineup of talks today. So without further ado, given the various delays related to AV, I'd like to introduce Ray Kinsella from Intel, who's going to give a presentation on the path to data plane microservices, and some of the things to do with containers as well. Containers, yes, and I'm going to talk about containers and cloud native briefly. Thank you so much. Before I start, who here is from Brussels? Anybody from Brussels? Nobody from Brussels. That's amazing. I was at the Brussels hackerspace yesterday, and I had the most fun. I had so much fun, I bought a t-shirt. And it's soldering irons, rescued machine shop tools, open source, and beer. It's a great combination. So I wanted to give them a shout-out because I had such fun there. So I'm here today talking about data plane microservices, but I'll get to them in a little while. I want to first talk about some trends. Sorry, this is a rough agenda for what I'm going to talk to. I'm going to briefly talk about network function evolution. That's the wrong one. There we go. Network function evolution. Then I'm going to talk about the containerization of network functions: why network functions are being stripped out of virtual machines and put into containers instead. I'm going to talk about cloud-native network functions, why they're coming and what that means, and then I'm going to obviously summarize at the end. So I took great pains in making this slide to sum up the general trend that I'm seeing. Originally, network functions were all built in ASICs. A lot of companies made very good business out of designing routers and switches and BRASes and firewalls based on ASICs. And then sometime in the noughties, they became more and more software-defined, built more and more on top of general-purpose processors. The way these boxes were designed, they were using general-purpose processors under the hood, but they were running very proprietary software on top. Sometimes it would be an RTOS, sometimes it would be a custom OS; rarely was it Linux. And the software tended to be what I would describe as monolithic. That software assumed it was the only thing running on the system, kind of like the bad old days of DOS, when the software assumed it was the only thing running on the system and just grabbed all the system resources, and that was fine. And that's typically what you tend to see with monolithic software: it owns all the cores, it owns all the system memory, and it grabs all of the I/O. And anybody in this room who develops on top of DPDK, that's probably a pattern you're very, very familiar with, right? It's monolithic. I own all the cores, I own all the I/O, I own all the memory. So what we saw then was a drift from custom operating systems and RTOSes and those kinds of things to Linux, and a drift from proprietary software to more open source software. And that's when software-defined functions moved from being, let's say, monolithic on top of discrete appliances to being virtualized. Now they're being deployed in virtual machines.
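To make that greedy pattern concrete, here's a minimal sketch of how a typical DPDK application claims its resources up front at startup. This is illustrative, not canonical: the core list, memory size, and cleanup call assume a reasonably recent DPDK, and the exact flags vary by version.

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    (void)argc;
    /* Typical "greedy" startup: claim a fixed set of cores and a large
     * up-front hugepage reservation. The core list and memory size here
     * are made-up examples. */
    char *eal_args[] = {
        argv[0],
        "-l", "0-3",            /* take four cores for exclusive use */
        "--socket-mem", "1024", /* reserve 1 GB of hugepages immediately */
    };
    int eal_argc = (int)(sizeof(eal_args) / sizeof(eal_args[0]));

    if (rte_eal_init(eal_argc, eal_args) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    /* From here on, the app assumes it owns these cores outright and
     * will typically busy-poll on every one of them. */
    printf("claimed %u lcores\n", rte_lcore_count());

    rte_eal_cleanup();
    return 0;
}
```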
And this is roughly where we are today, from 2012 on. We started to take network functions, wrap them up inside virtual machines, and deploy them in OpenStack. So we're in the age of virtualized network functions. What we're seeing happen today is that more and more network functions are still getting virtualized, but some people are starting to move away from virtualization to containerization. Now, there isn't much change in the software design from here to here to here. The software design is still monolithic. It still assumes it owns all the system resources, whether that's all the resources of a virtual machine or all the resources of a physical system. The software is still quite monolithic, except in this case it's getting stripped out of the virtual machine and put in a container. It's still monolithic software. But that's where we are today: moving on from the age of virtualization to containerization. We're on that precipice today. Looking at the future, what's been happening is that network functions have been following the same patterns as data center software, because this is exactly what happened to software in the data center over many, many years. So we have to ask the question: if network functions continue to follow the trend of data center software, will they ultimately become decomposed and be deployed cloud-native as microservices? Now, when I was back here, and by the way, I've been working on this stuff for this long, when I was back here, people said these network functions would never get virtualized, because they need to have deterministic environments. They need to have assigned resources. They need environments that are very, very certain. But yet network functions were virtualized. And I think we're at a similar point now, where people are looking at embracing containerization, and I think it would be too much to say that network functions will never become microservices. And maybe that's what I'm going to talk about a little bit today. First, let's understand the trend to containers. Why are network functions becoming containerized? Why are we stripping network functions out of VMs and putting them into containers? How am I doing for time, Dave? 15 minutes. Good man. So I'll make a long story short. There's a whole bunch of reasons in here. Some people talk about a virtualization tax. I don't necessarily agree with them. But the perception is that virtual machines are big and fat and use system resources unnecessarily. I don't necessarily subscribe to that view. Other things people talk about are the software licensing costs associated with virtualization. You've got to pay for the operating systems that run inside the virtual machines. You've got to pay for the operating system that runs on the host. You've got to pay for a hypervisor. And that's certainly good business for somebody. And then finally, people talk about the complexity of getting things done in OpenStack, the brittleness of OpenStack, all those kinds of things. I don't think it's any one trend that's causing people to look at stripping virtual functions out of their virtual machines and moving them into containers. I think it's a little bit of all of these.
But if you take a virtual function out of its virtual machine and you put it into a container, its characteristics are still exactly the same. It's still monolithic. It still steals all the cores. It still steals all the I/O; it owns a network card and doesn't want anybody else touching it. And it steals all the memory; it owns all of the memory. So these are quite greedy. And I didn't really get it for a long time. I didn't really get what the core problem was, because I approached it from a microprocessor optimization point of view. I was like, yeah, it's greedy, but that's how you get the performance. It uses all the cores because you don't want anybody else using the cache. It uses all the memory because it needs certainty; you don't want TLB misses. It uses all the I/O because it needs that dedicated I/O. And then somebody made a really, really good point to me. He said, Ray, the virtual functions that people are deploying today, in terms of management, in terms of your ability to deploy them, in terms of your ability to scale them, in terms of your ability to realize scale-out and scale-up, look exactly the same as the discrete appliances that we were building 10 years ago. I've swapped truck rolls for virtual machines that are almost as hard to manage. Does that make sense? So I've gone from a discrete appliance, where every time I wanted to upgrade I had to do a physical upgrade. Moving these into virtual machines has improved the situation, sure. But if you have a virtual machine that's using all the cores in a single socket, it makes it very hard to do things like live migration. It makes it very hard to do things like high availability and failover. Those kinds of things become very hard, because the application inside hasn't changed dramatically since it was being deployed in the discrete appliance. It's still quite hard to manage. So that's why people are talking about application decomposition. And this is the 12-factor app, the template for application decomposition. Those of you who practice agile development methodologies will be familiar with the agile manifesto; the 12-factor app is the manifesto for designing microservices. It talks about 12 things; some of them are related to software development, some of them are related to DevOps. But the things that I find most interesting are the ones related to application decomposition. So your application executes as one or more stateless processes. It exports services. Microservices have a way of finding each other. You scale out by running multiple processes, running multiple microservices. And disposability: they start up and tear down very, very quickly. So containerization and cloud native are two separate trends. One is swapping the VM for a container. The other is taking the monolithic application that's deployed in the container and decomposing it into microservices. So I'm going to talk for a few minutes about what we've done to make containerization of network functions easier. We've built the CPU Manager for Kubernetes. Before we did this, getting a very deterministic execution environment for a container from Kubernetes was quite hard. Now we have the same kinds of things that you expect from virtualization: application isolation, sorry, core isolation, core pinning, those kinds of things.
These are now available through the CPU Manager for Kubernetes. We then also built, and this one is pretty simple, huge page enhancements for Kubernetes: say you want huge pages inside your container, this is basically a way that you can specify, hey, my container is going to require huge pages, because it's going to be running DPDK or VPP or a similar network function that requires huge pages. And we also built Node Feature Discovery, which is basically: if you need Kubernetes to put your container on a node that has things like AES-NI, things like AVX2, any of those kinds of platform features, or maybe a certain security accelerator. Kubernetes will now understand that a given node has those kinds of accelerators, those kinds of features on board, and will place your container on the correct kind of node. We're also adding platform telemetry. This is something I talk about a lot, and I think Emma Foley, I don't know if Emma's in the room yet, but Emma will be talking more about it as part of the Barometer project later in the day. A lot more platform characteristics influence whether you're going to have a good experience or a bad experience on that platform, and we're exposing those platform telemetries. So it's more than just CPU usage, more than just memory usage. It's how's your cache doing, how's your memory bandwidth doing, how's your PCI Express bandwidth doing, and we're exposing those up as well. And then finally, because we're talking about network functions, we get on to I/O, right, the core thing. So we have Multus, which is a CNI provider for Kubernetes that enables you to provision multiple interfaces into your container, so you can have things like control and data plane network separation, and you can do things like NIC aggregation. So for the first time, a container can have multiple interfaces passed through. There are other options beyond Multus; there are projects like CNI-Genie that do much the same thing. We have a bunch of different I/O interfaces for containers. You have the traditional one that you're probably used to, which is SR-IOV pass-through, passing a virtual function into your container. You then have purely virtual interfaces like virtio-user, for talking from a network function deployed in one container to a network function deployed in another container, to a virtual switch, or onto the wire. And then, if you need to support socket-based applications deployed in a container, we have a new approach that's called the master VM approach. It turns out that supporting socket-based applications inside a container is more problematic than you would think, because it involves hairpinning traffic through the Linux kernel. You might think: you have a user-space virtual switch, you have a socket-based application, how does the traffic get from the user-space virtual switch to the socket-based application? It gets hairpinned through the kernel, which isn't great for scaling. The VPP guys are going to be talking later in the day about a very novel approach they have to that problem. Our new approach, which is for DPDK, is that you run your containers inside one giant VM, and then you can use virtio to support either network functions running inside a container inside the VM or a socket-based application inside a container, and it all happens transparently. Okay.
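As a rough illustration of the virtio-user option: a DPDK application inside a container can attach to a vSwitch through a vhost-user socket instead of claiming a physical NIC. This is a sketch under assumptions, not a recipe: the socket path and core list are hypothetical, and the vdev syntax and port-count call assume a reasonably recent DPDK.

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    (void)argc;
    /* Attach to the vSwitch through its vhost-user socket rather than
     * owning a physical NIC. The socket path is whatever the vSwitch
     * was configured to expose; this one is made up. */
    char *eal_args[] = {
        argv[0],
        "-l", "0-1",
        "--no-pci",  /* deliberately take no physical devices */
        "--vdev", "virtio_user0,path=/run/vswitch/sock0",
    };
    int eal_argc = (int)(sizeof(eal_args) / sizeof(eal_args[0]));

    if (rte_eal_init(eal_argc, eal_args) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    /* The virtio-user vdev shows up as an ordinary ethdev port, so the
     * application's packet I/O code doesn't change. */
    printf("%u ethdev port(s) available\n", rte_eth_dev_count_avail());

    rte_eal_cleanup();
    return 0;
}
```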
So that brings me on to cloud native. What I've just covered, at breakneck pace, was the work that we're doing to strip your virtual network function out of the VM and put it inside a container. Ten minutes? Nearly. Okay. So now, let me pause for a moment. We just talked about all of the activities we have around stripping your network function out of the virtual machine and putting it inside the container. But the characteristics of your network function haven't changed at all at this point. It still looks very similar to the network function that was being deployed on the discrete appliance 10 or 15 years ago. It still makes all the same assumptions. It still owns all the I/O given to it. It still owns large chunks of memory, and it still owns all of the cores. One of my colleagues, Bruce Richardson, describes these kinds of network functions as greedy: they own large chunks of system resources and they don't share very well. So we started looking at why this was happening, and we found that a large part of the reason is that this is the way we designed VPP, and this is the way we designed DPDK. We designed them to make all of those assumptions in order to get good performance. But it turns out that if you're the poor DevOps engineer who actually needs to go and manage a network of these things executing in a cloud environment, as I said earlier, it's a very, very hard thing to do. So maybe we're just a round peg and that's a square hole. Maybe we can't solve that problem. Maybe in order to get performance we need to operate with those assumptions. But maybe we need to enable the person who's actually doing the deployment to choose. Maybe we need to stop making choices for the people who are consuming our software. And that's the understanding that I'm coming to. We make all of these assumptions for the people who are consuming VPP and the people who are consuming DPDK. Maybe we need to get out of the way and let them make the choice for themselves. Maybe in certain deployments they value flexibility over the absolute performance they can get out of the system. But today we're not giving them that choice. Today they don't have that choice at all. So we need to empower them. What we need to do is realize models that in some circumstances give the absolute best performance on the system, but in other circumstances enable the most flexible deployments, and then put the power into the hands of the DevOps engineer who's actually doing the deployment to choose whether they want the most flexible or the most performant deployment possible. So we need to develop APIs for CPU sharing, for sharing I/O, and for sharing memory. We need to be greedy when we need to be greedy, and we need to share when we need to share. My son has just finished kindergarten. His little catchphrase, which they teach him at kindergarten, is "sharing is caring". So we need to care more for the DevOps engineers of the world. But we need to maintain the same API, the same programmatic interface, so the people who are designing the software don't have to manifestly change their applications.
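One way to picture that deployment-time choice, as a very rough sketch: the application keeps a single worker-loop API, and whether that loop busy-polls a dedicated core or yields it back to the OS is decided by the deployer, not the developer. The environment-variable knob, the names, and the stubbed poll function below are all hypothetical.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <sched.h>

/* Hypothetical deployment-time knob: the deployer decides, e.g. via
 * the environment, whether this worker may monopolize its core. */
static bool greedy_mode;

/* Stub standing in for "drain one batch of packets/work"; returns the
 * number of items processed. */
static int poll_once(void)
{
    return 0;
}

/* One worker-loop API, two behaviors:
 *  - greedy: spin straight back into the poll for maximum performance;
 *  - sharing: yield the core to the OS scheduler whenever idle. */
static void worker_loop(void)
{
    for (;;) {
        int work = poll_once();
        if (!greedy_mode && work == 0)
            sched_yield();  /* give other processes a turn on this core */
    }
}

int main(void)
{
    greedy_mode = (getenv("WORKER_GREEDY") != NULL); /* deployer's choice */
    worker_loop();
    return 0;
}
```

The point of the sketch is that the application code is identical in both modes; only the configuration differs, which is exactly the kind of choice the talk argues should belong to the DevOps engineer.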
So something that we're looking at at the moment is how you break up a monolithic application, an application that's used to owning all of a core. Well, one way is to build an in-process scheduler, and it turns out that if you look at projects like libdill and libmill, they actually realize extremely fast in-process scheduling, just 140 cycles for a context switch inside the process. And by doing that, you can move from a monolithic application to an application based on an in-process scheduler, with a very, very light overhead for scheduling microservices running inside a DPDK-based execution environment. Then, if the engineer wants the most flexible deployment, the engineer should be able to flick a switch and have those separately executing microservices become separately executing processes. The APIs don't change, the environment doesn't change, but whether these become separate processes or stay separate microservices running inside the same application space becomes a DevOps choice, a configuration choice for the person doing the deployment. There are times when you want to share, when you want multiple small processes that are able to do things like migration, things like scale-out, like cloud applications, and things like high availability. And there are other times when you want the highest performance possible. But by maintaining the same scheduler API, the same APIs for sharing CPU time, it becomes a choice for the deployment engineer. We're also doing work in the 18.02 release to make the DPDK memory model lighter. I don't know if there are many engineers like me, but what I do today with DPDK is, when I run it, I give it, like, 128... I give it a large amount of memory, and if it doesn't crash, I'm generally good, right? We don't really have a lot of introspection into how much memory DPDK is using. Whereas we're moving to a model, in 18.02, or maybe it's 18.05, I think it might be 18.05, where it'll be huge pages on demand. You'll only actually get huge pages allocated as you use them. So DPDK will become much better about sharing system memory with other processes on the system. And then you can imagine there are certain use cases that don't need huge pages at all, things like virtual use cases, things like containerized microservices. There's 4K page support coming in the future to make that happen. And there's a whole bunch of ways to decompose I/O, to have more scalable I/O for things like microservices. Again, where we are today is that the DPDK-based application gets assigned all of the I/O. The guys in VPP have been doing great work to make VPP a more scalable platform, a more scalable vSwitch for containerized deployments, so you can have a very large number of containers talking to a VPP DPDK-based virtual switch. But there are also schemes, and I think there's a speaker later today who'll be talking a little bit about this, mdev-based schemes where you can keep your vSwitch model but use hardware-accelerated vSwitches, again for more decomposed I/O. So this is where we are today: DPDK owns all of the I/O. Where we're moving to, we'll use a virtual switch to do a better job of sharing I/O, and then very quickly we're going to move to models where we have hardware-accelerated virtual switches.
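For a flavor of the in-process model, here's a minimal sketch using libdill, one of the libraries mentioned above. The two coroutine "microservices" and their names are purely illustrative; the point is that handing over the core at each yield() is a cheap user-space operation rather than an OS context switch.

```c
#include <stdio.h>
#include <libdill.h>

/* Two toy "microservices" running as coroutines inside one process.
 * Each yield() is a user-space context switch, far cheaper than
 * bouncing between OS processes. Names and workloads are made up. */
coroutine void stage(const char *name, int rounds)
{
    for (int i = 0; i < rounds; i++) {
        printf("%s: handling batch %d\n", name, i);
        yield();  /* hand the core to the next ready coroutine */
    }
}

int main(void)
{
    int a = go(stage("parser", 3));
    int b = go(stage("classifier", 3));

    /* Sleep briefly so both coroutines run to completion; msleep also
     * suspends main and lets the others execute. */
    msleep(now() + 100);

    hclose(a);
    hclose(b);
    return 0;
}
```

Build with something like `gcc demo.c -ldill`. The "flick a switch" idea from the talk would then mean keeping this stage API but letting a deployment option launch each stage as its own process instead of a coroutine.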
But the important thing here is that for the application, for the DPDK-based application at least, we want to maintain the same environment. As an application engineer, you don't want to have to use different APIs to support different deployments. That's why we need to maintain the same scheduling APIs regardless of whether it's the DPDK scheduler or the OS scheduler doing the scheduling. We want to have the same memory models, the same memory allocation APIs, regardless of whether you're using huge pages or 4K pages. And we also want to have high-performance transport, whether that's based on the vSwitch or on a hardware-accelerated vSwitch. And then you empower the DevOps engineer to say: actually, I want to use in-process microservices, because that's going to be the fastest execution environment. Or: I want to use multi-process microservices; it's still fast, certainly not as fast as in-process, but it gives me a nice balance between flexibility and performance that makes it much easier to deploy. Or you might look in the future even to multi-node microservices, that's microservices running on multiple different nodes, communicating across nodes, which is exactly what happens in cloud-native deployments. So I've kind of gone at that like a train, apologies, I hope it all made sense. So just to tell you what I told you: we're moving on from an age of virtual network functions deployed in virtual machines. We've done a ton of work to support the containerization of these network functions. You can go to our GitHub and grab our enhancements for Kubernetes. And we also have guidelines, application notes, and all the source code associated with them in our experience kit, which you can grab here. In terms of cloud-native network functions, these are things that we're adding to FD.io, VPP, and DPDK at the moment. One of the first things coming out is the new enhanced memory model, which I put down for 18.02, but I'm pretty sure that's wrong; it should be 18.05, and that's coming at you pretty soon. And we're going to see more and more of these kinds of features to do a better job of CPU sharing, memory sharing, and I/O sharing. So I hope all that made sense. I don't know whether I have time for questions. You have about two minutes. More importantly, do I have any questions? One or two questions, max. Okay. Any questions? I can ask the de facto filler question. So a question I have regarding cloud native, specifically when it comes to containerization. Microservices are one thing, but cloud native mandates a service mesh as the implementation. And from my perspective as a developer in a containerized environment, I'm going to make different decisions based on whether I'm executing in a service mesh or not. How do we overcome that challenge to get to the flexibility where the user can choose? Because if I'm cloud native, I don't get a choice; I'm in a service mesh, right? And that's maybe something that I'd like to get your perspective on. To be honest with you, I think that's a good question. I think Jan Medved is talking today. I'm pretending to be Jan. You're pretending to be Jan, are you? So Jan's going to talk a little bit more about Ligato. I think it's a good point. I'm going to deflect and say I don't have a good answer for the question. We could maybe talk about it during the Ligato presentation. That would be a good time. I'm afraid that's all we have time for.
That was a long question. Thank you very much, Ray. Charles Eckel is coming up next. So a couple of minutes to let people, what's the word, defragment the room, get out and get in. Charles will be giving a presentation on OpenDaylight as a platform for network programmability. Did you use HDMI or VGA? Sorry. I'm joined. Hi.