Well, that's good. You've all seen all of my slides already, so we're done. So what I want to talk about is Ligato, which is, as we said, a platform for developing cloud-native microservices that are focused on basically VNFs, so network forwarding containers. As Ray was saying earlier, this is one of the challenges we've got: so far most virtual network functions have been delivered as virtual machines, and it's been fairly straightforward, in that we just take appliances that generally, you know, run on x86, take that code, put it in a virtual machine, and there you go. But we're trying to move this to containers, and we'll come on to why we're doing that. So really part of this is just mirroring what's happening in the rest of the world. This is interesting, I can't actually see my screen while I present, so I have no idea what's coming next, because not only am I pretending to be him, but I've stolen his deck, so your guess is as good as mine as to what follows this.

So, you know, the whole app world has been moving away from this very monolithic model, you know, physical servers, waterfall development, the rest of it. And the thing is, as I was saying, we haven't really mirrored that enough in the network, so what we're trying to do here is mirror what's been going on in the rest of the world, out there in the conference with thousands of people, with what we here in the network, all, what, 50 of us, are doing. So if you look at how this happens in applications, your apps get broken up in a way that we never did in the past. In the good old days, you literally had your... I mean, I don't know who here remembers the days of three-tier client-server, and it was the big buzz, wasn't it, back in the 90s? You would have your app tier and your database tier, and those would tend to be fairly monolithic; each one was delivered on a server and you had a network connection between them. And this is probably where I should start talking about pets and cattle, because we all love this in containers, don't we? Everything was quite bespoke, so you'd have a server on which you deployed a particular app that would then talk to a database instance, and it was managed, you know, very monolithically, but also in small numbers of things. And what we're trying to go to is a world where both the servers and the containers that run on them are cattle now, not pets. You know, we have thousands of servers, each one running hundreds of applications, different pods coming up, going down.

And as we do that, the problem becomes that, because we're disaggregating the apps into smaller and smaller pieces, naturally you're going to have more and more traffic flowing through those components. So the danger is that the network itself becomes your bottleneck, because instead of saying, well, we're going to build a network topology for a specific application, you're saying, right, we're just going to build this giant leaf-spine architecture, we're going to stick all our servers on the end of it, and then our apps are going to have to run over that, and we'll use overlays to do that. So the performance of the networking is going to be absolutely critical. And as I said, up to now this has mostly been focused on sort of more generic applications, not networking stuff.
And so, you know, a certain range of problems have been addressed, but it still leaves us with a bunch of issues as network people. So, you know, how do we actually do networking and network security over this? I think Chrissy talked about, you know, the whole service mesh thing. Service mesh, great, we want to fit in with that, but that's very much focused on what we as networking geeks would call sort of layer four and above stuff. That's going to be about TCP, HTTP. Cloud network functions, we're going to be talking layer two, layer three. We're going to be wanting to bridge things together, to stitch things almost like pseudowires from the WAN, but going through the data centers, so we're just chaining these things together at layer two. And that's very different. And of course we want to be able to deploy this not only in our own data centers, but across clouds, because it's the cloud-native thing again. We want to be able to put this in Amazon or Microsoft or whatever.

The other problem we have, I guess, is the bottlenecks of networking and how do we get around them? We know that we need to get to user-space networking. We talked about VPP briefly; that's been presented multiple times today. So everything I'm doing here is really stuff that sits on top of VPP to orchestrate it. So it's high-performance user-space networking. But of course storage has got to come into this as well, because at the end of the day anything you're storing, you're storing it because somebody wants to access it. You're not storing it for fun. And if they're going to access it, then you're going to have to get to it over the network.

So in container networking today, we already have great orchestration and lifecycle management, but the real problem, I guess, comes down to how the connectivity is done. It's all done as an overlay that has NAT in it. And then with the service mesh, we tend to put web proxies on top of that. And the NAT really isn't something we want to do for NFV use cases. You know, it's very handy, because you can have a Kubernetes cluster IP that then effectively gets load-balanced to as many pods as needed. But NAT is not something we want here. And I can see all of us sitting there in the front row. I mean, every time I say NAT, Ola throws up a little bit in his mouth.

Why did that move on? This is really interesting. It must be as I move or something, it steps on. Okay. We already had the name check for 12-factor app design; you know, we're trying to be very cloud-native here. But at the same time, as Ray said, performance is something you could choose to have or choose not to have. Well, here we're kind of laser-focused on performance. And I think Jerome may talk about some of the performance numbers later. Yeah, the performance we get out of VPP is quite extraordinary. Almost as good as the performance we get out of this projector. Maybe it's just telling me to speed up. Yeah, let's just do that. You can always look at the slides afterwards. And I hate talking about what's on my slides. I just... I did want to talk about that one, though. It's remarkable what's triggering it. Hate you people. Why did Jan do that to me? Maybe because he talks faster than me. So we're putting together, you see, Kubernetes, Contiv, Ligato and VPP, FD.io. I was sure there were no timers in here. Maybe it's a Mac/PC thing. Yeah, exactly. Now this is more fun. You can keep looking at it and every time it moves. Actually, why don't I do that?
Oh, no, of course. I can't do anything, can I, because it... I can't actually see what I'm doing. Oh, this is great. This is awesome. Let's just carry on. Actually, yeah, that works, doesn't it? Yeah. Sorry. Talk amongst yourselves quietly. There you go. Yep. Just leave it, exactly.

Yeah, so we bring these different things together. So Kubernetes for container orchestration. Contiv, which is our container networking solution; the next iteration of it will be using VPP, and we'll come on to that. Ligato, which is the main subject for today, is really all about how do we do cloud-native network functions, so you could call them CNFs instead of VNFs, if you like. And then underlying it as the data plane, VPP, FD.io.

So how does it come together? Well, you can think about the logical layering, but then also how does that get deployed? So we have this component, the SFC controller, which we'll come on to. The other component in Ligato is the VPP agent. The SFC controller basically takes input either from YAML files or REST APIs. That's a centralized component that's used to define what you want your VNF deployment to look like. Then the individual network functions run this VPP agent, which programs VPP. All of this, what we've tried to do is make it as cloud-native as possible. It's all written in Go, and we're using etcd as the distribution mechanism between the two. So if you come to look at how we build it, we use etcd as the distributed data store between the components, and then we're using Kafka as a message bus. The layering itself: for example, in the agent, we have an implementation of the VPP API written in Go, and then on top of that we've built this infrastructure. Of course, all of this stuff is open sourced and I'll show you the GitHub stuff later.

So I guess there are two or three different aspects to talk about. Firstly, Contiv VPP. So in terms of Contiv VPP, well, what is this? We were saying, well, forget the containerized network functions for a moment. Think about, okay, you're deploying just generic container applications and you want higher performance for your networking. How are you going to do this? Well, we can take the same stuff that we built for containerized network functions and say, well, we can build a V-switch, which effectively is a containerized network function, and just use it as a generic V-switch to connect applications together. So the Contiv VPP infrastructure does everything you'd expect of a CNI in Kubernetes. It allocates addresses and programs the infrastructure. Initially, today, we're using absolutely standard container networking stuff, in that it's VXLAN over IPv4. We effectively create a VNI that interconnects all of our servers, and then we route over that. And then as we come to containerized network functions, we're going to create more VNIs and switch over those. And of course we implement Kubernetes network policies, and the network services themselves, so the cluster IPs. Initially, what we did for our first demo was to let all that stuff run through the kernel for data forwarding. We've now implemented the NAT, effectively the ACL-based NAT, in VPP, which enables us to host the cluster addresses through VPP. Of course, as I said, with service meshes all this stuff gets interesting because there are proxies there anyway. So the next thing we'll come on to is how do we then tie things like Istio and Envoy into this?
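As a rough illustration of the etcd-based workflow described above, here is a minimal Go sketch of how a controller-side component might publish an interface configuration for a VPP agent to pick up. The key prefix and JSON layout are illustrative assumptions rather than the exact Ligato key-value schema; the etcd client usage itself is standard clientv3.

```go
// Minimal sketch: publish an interface config to etcd for a VPP agent to
// consume. The key prefix and JSON layout below are illustrative assumptions,
// not the exact Ligato key-value schema.
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect to etcd: %v", err)
	}
	defer cli.Close()

	// Hypothetical key: one entry per interface, scoped by the agent's label.
	key := "/vnf-agent/vpp1/config/interfaces/memif1"
	// Hypothetical JSON body describing a memif master interface.
	value := `{"name":"memif1","type":"MEMIF","enabled":true,
	           "ip_addresses":["10.0.0.1/24"],
	           "memif":{"master":true,"socket_filename":"/run/vpp/memif.sock"}}`

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if _, err := cli.Put(ctx, key, value); err != nil {
		log.Fatalf("put config: %v", err)
	}
	log.Printf("published %s; an agent watching this prefix would program VPP", key)
}
```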
And in terms of performance, we're an order of magnitude ahead of what you get from anything else, from the likes of Calico. And that really comes down to the fact that we're using VPP, staying in user space, avoiding the kernel. So it's entirely in user space, and the question now, I guess, is how do we get past the kernel? There are a couple of different approaches. We have memif, which is what we use for containerized network functions. This is just a memory interface, typically between two different VPP instances, one running in the V-switch and one running in the container. But recognizing that most of what people are running as applications in Kubernetes is TCP-based and HTTP-based, what we've actually implemented is a TCP stack inside VPP. And so what happens here is, if the two pods are running on the same server, it just sets up a FIFO between them. If they're running on different servers, then it's an implementation of TCP, but that's all happening in user space in VPP. And the work we're now doing is to hook in with things like Envoy so they can hook straight into our stack instead of hooking into the kernel stack. There are two ways to do that. One is we can LD_PRELOAD it, and we use the CRI to enable that. But the better approach is that we have, effectively, our own socket library. So you were talking about sockets earlier, weren't you? So yeah, we have our own socket library that you can hook into, and that will give you the best performance for this. But equally, you can still use the kernel stack, of course, because we can use veths and taps. But, you know, if you use the kernel stack, you're not going to get the performance benefit. So as much as possible, try and move stuff onto this stack.

So the second thing to talk about is, you know, how do we actually do the cloud-native network functions? We want to take advantage of containers to get performance. There is a challenge there in that typically what we want to do is to break applications up into smaller components rather than deliver them in these big monolithic apps. We have some use cases out there where we're deploying sort of broadband infrastructure using this kind of approach. It looks very good in terms of performance. And what it means ultimately is that your network functions then become part of your service topology. And so the network function is just another service as far as we're concerned. The question then, of course, is what does policy mean for that? Because the policy that you'd apply at layer four or layer seven is probably different from the policy you'd apply at layer two or layer three. We want to get the benefit of this whole cloud-native world in terms of velocity of implementing stuff. So Ligato, as I mentioned earlier, is the mechanism we use for this. It's written in Go, and we have this SFC controller that does all the sort of management, and it pushes stuff down to these VPP agents that run on the hosts. And so what changes compared to the last picture: you can see the whole Kubernetes space is still there, but then we have this Ligato stuff sitting out on the side as another means of getting through to the VPP agents. And so just as we can program the V-switch VPP agent through Contiv VPP, we can now have VPP agents that sit in the VNFs. And then, as I mentioned, typically what you would have is a memif effectively between two VPP instances, one running in the V-switch and one running in an application pod.
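To give a feel for what that memif wiring amounts to, here is a small, self-contained Go sketch: one shared UNIX socket path, a master endpoint on the V-switch side and a slave endpoint on the pod side. The types and naming are purely illustrative; this is not Contiv VPP's or the VPP agent's actual data model.

```go
// Toy sketch (not the Contiv VPP implementation): wiring a pod to the node
// V-switch over a memif pair. Names and fields are illustrative only.
package main

import "fmt"

// MemifEndpoint describes one side of a shared-memory interface.
type MemifEndpoint struct {
	Name       string
	Master     bool   // the V-switch side typically acts as master
	SocketFile string // UNIX socket used to negotiate the shared rings
}

// wirePod returns the two endpoints of a memif link between the node's
// V-switch VPP and the VPP instance inside the pod.
func wirePod(podName, nodeSocketDir string) (vswitch, pod MemifEndpoint) {
	sock := fmt.Sprintf("%s/%s-memif.sock", nodeSocketDir, podName)
	vswitch = MemifEndpoint{Name: "to-" + podName, Master: true, SocketFile: sock}
	pod = MemifEndpoint{Name: podName + "-uplink", Master: false, SocketFile: sock}
	return vswitch, pod
}

func main() {
	vs, p := wirePod("firewall-0", "/run/vpp")
	fmt.Printf("v-switch side: %+v\npod side:      %+v\n", vs, p)
	// In the real system, each endpoint would be pushed to the respective
	// VPP agent (for example via etcd, as sketched earlier), which programs VPP.
}
```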
Only it's a different kind of application now. Of course, the fact is that basically what you're doing here is deploying network functions in chains, or that's the goal everyone wants to get to. So it's not just, hey, here's a firewall, that's it. Let's have a chain of things, you know, firewall, web inspection, filtering type stuff, and chain them together. So you'd have ingress and egress, and then chain the network functions together. And, as I mentioned, the way we stitch them together is effectively just using a separate VXLAN instance, a VNI, for each hop in the chain. Longer term we're looking at other approaches, and I know somebody is speaking later today about SRv6; that's another future approach. But what we need to do is have this kind of logical representation of how that chain is going to be built. And then we need a rendering function, which is part of the SFC controller, that says, okay, let's figure out how to deploy that onto the network. But as I said, the servers are cattle and the containers are cattle, so as much as possible we don't want to have constraints in terms of which servers we put stuff on. Modulo the issue Ray mentioned earlier, that you might have certain accelerators on certain servers that you want to leverage. Plus also, in some cases you might say, well really, these things are very tightly coupled, it's a chain for one customer; ideally we'd like to just stick them all on one server, because actually, bouncing around the network is going to be a performance issue however fast your network is. In an ideal world, you stick it all on one server, you're good to go. And then locally they connect via the one local V-switch, or we can even have pods go straight pod to pod without going through the V-switch. Of course, in that case, you don't have to assign any networking resources; there are no VNIs being assigned, et cetera. But where they are on different servers, we'll set up a tunnel between them.

And so the sort of logical view of this is that we tend to think in terms of having some kind of management overlay, a cloud network as we call it, that basically stitches everything together, and that's where our etcd instances, our Kafka, et cetera, are running. But then we're going to have one or more data plane networks which are going to be effectively connecting to the routers on ingress and egress. What you'll probably find, I mean, we have done some work enabling us to use basically one NIC to do both these functions, but I'm guessing in most cases people might have, like, a gigabit LAN-on-motherboard NIC, and that's the one that they're hooking into the control plane. And then you would have these 10, 25, 40, et cetera, gigabit NICs that you'll be using for the forwarding plane. And I believe there's a vendor of such devices sitting in the front row. And so that's how it ends up getting rendered: we have the high-performance NICs hooking into the data plane. DPDK is the layer between the NIC and us, so that we're running in polling mode rather than interrupt mode. And then we instantiate multiple VRFs in our V-switch, which can either connect through those NICs or through the LAN-on-motherboard, et cetera. And as I mentioned, typically what we'll have is one VRF for our sort of generic Kubernetes apps, or there could be several. But then we'll actually do virtual switch instances for the data plane stuff.
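The rendering step described above, turning a logical chain into per-hop links, can be pictured with a toy sketch like the one below. It simply walks the chain and assigns a fresh VXLAN VNI to every hop that crosses servers, while co-located hops get a local (memif-style) link and consume no overlay resources. The real SFC controller's algorithm is more involved, so treat this purely as an illustration of the idea.

```go
// Toy sketch of chain "rendering": given an ordered service chain and where
// each function was placed, assign a VXLAN VNI to every inter-node hop and
// no VNI to co-located hops. Illustrative only, not the SFC controller code.
package main

import "fmt"

type Hop struct {
	From, To string // network function names
	SameNode bool   // true if the scheduler placed both on one server
	VNI      uint32 // 0 means a local (memif / direct) link
}

func renderChain(chain []string, placement map[string]string, nextVNI uint32) []Hop {
	var hops []Hop
	for i := 0; i+1 < len(chain); i++ {
		h := Hop{From: chain[i], To: chain[i+1]}
		if placement[h.From] == placement[h.To] {
			h.SameNode = true // stitch locally, no overlay resource needed
		} else {
			h.VNI = nextVNI // one VXLAN segment per inter-node hop
			nextVNI++
		}
		hops = append(hops, h)
	}
	return hops
}

func main() {
	chain := []string{"ingress", "firewall", "dpi", "egress"}
	placement := map[string]string{
		"ingress": "node1", "firewall": "node1", "dpi": "node2", "egress": "node2",
	}
	for _, h := range renderChain(chain, placement, 5000) {
		fmt.Printf("%s -> %s  sameNode=%v  vni=%d\n", h.From, h.To, h.SameNode, h.VNI)
	}
}
```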
The final thing to talk about, I guess, is that we talked about control and user plane separation. I think, Charles, you mentioned that in the context of OpenFlow. And I know it's a big thing that people talk about in the world of 5G, isn't it? Really this is just to say that's how this ends up: effectively you end up with, probably, a cluster of servers that are running your control plane for this, and that's where Kubernetes, etcd, the SFC controller, et cetera, is going to be running. But then in the data plane you'll have servers running these VPP instances. And so you have your master controller that does all that control plane, and then effectively that cloud network really sits between here and here. And then you have your data plane network hooking into the network.

So, where can you get it all? Where else but GitHub? My thanks to GitHub, because they provided me a very nice coffee this morning, which is probably why I'm managing to get through this. So just look for Ligato on GitHub. It's all up there. In terms of components, we have cn-infra. I'm sure you can't see it, it's way too small. But cn-infra is just kind of the base infrastructure, and then on top of that we layer things like the VPP agent. So cn-infra is just a platform, really just a platform for building any kind of cloud-native function, and in that sense it doesn't have to be VPP. So again, I mentioned the memifs, and typically a memif would go between a VPP instance running in a pod and a VPP instance running in a V-switch. But of course you might not always be using VPP; there could be other forwarders in use. They could be using some of the stuff from Intel. So why not have a more generic infra that we can build on. The way it works is very much that everything's a plugin. That's tended to be our approach in this, and in the VPP agent, which I'll show next, everything's just a plugin. I guess the ask for today is: get out there, take a look at it and start writing plugins. The VPP agent, again, is all about plugins; it hooks into cn-infra, and then we build things like policy, et cetera, into that. As I mentioned, there's the GoVPP work, which lets you hook into VPP itself at the bottom layer. Again, that's out there and can be consumed separately from Ligato. Oh yeah, we're done. Fantastic. Any questions? That's right, we can take questions. It's always good.

Sorry. So you're using Kubernetes and you're not using that. I think that means you're using DNS, is that right? Sorry, using? DNS. Kube-DNS gets used. Thank you.

You mentioned VPP under Envoy. Does that still use Ligato, and is Contiv involved in that stack, or is that an alternative implementation? We're using it as part of the Contiv VPP thing, but again, it's separate work, so you could presumably hook... I can't see why you would need Contiv to hook that in. That's just the user-space TCP stack in VPP, and you could just hook that into Envoy without this stuff. Cool. It's just that we recognize that Envoy, I don't know what people see out there, but Envoy just seems to be winning in this space. Always go with what's winning.

Sorry, I missed if you had a background slide, but are you a service provider? Oh, I'm from Cisco, and I guess we're just throwing this out here as open source to see who catches it. No, because we're still coding it, I guess. We were demoing this stuff at KubeCon before Christmas, but certainly we're planning to use it ourselves. Would you mind repeating the question?
Oh yeah, he was asking if there were any users of this technology yet, and I was saying we were just demoing it at KubeCon before Christmas, and yes, we plan to use it ourselves, but also it's out there as open source, so if any of you want to pick it up, then please do. Any other questions? Okay, so we've got to... Thank you so much.