Hey guys, my name is Billy McFall. I'm an engineer at Red Hat, and I'm one of the two or three Red Hat representatives on FD.io, specifically VPP. I'm going to talk a little bit today about the northbound connections of VPP for containers. As I was going through the slide deck I realized that the title has NFV in it, but I really don't talk a whole lot about NFV. A lot of the discussions earlier today, if you were in the room, talked about alternatives to DPDK, things like XDP and eBPF, but a lot of their statements were "we're not as fast as DPDK, but we're getting there." In the context of NFV, the reason a lot of companies do use DPDK is because it is fast, and NFV use cases need the high speed and the low latency. So even though I don't discuss a whole lot of NFV directly, the context is plugging user-space interfaces, specifically DPDK, into containers and Kubernetes. Luckily I looked at the agenda and saw that before my talk there was a 40-minute talk on Ligato and Contiv-VPP; I would not have been able to do justice to that project as well as they did, so I decided not to add it to my discussion here. I'm going to talk about the userspace CNI, which in conjunction requires another CNI, Multus. Both the userspace CNI and Multus were upstreamed by Intel, and I've been working a little bit on the userspace CNI portion. Then I'm going to talk a little bit about Network Service Mesh.

So what is Multus? Like I said, it's a CNI that was pushed open source by Intel. During KubeCon, I think it was around 2017, they formed the Kubernetes Network Plumbing Working Group as a way to standardize how to add additional interfaces into a container. They've released the Kubernetes Network Custom Resource Definition de facto standard 1.0, and Multus is basically a reference implementation of that standard. Multus is a meta plug-in. Kubernetes basically only wants you to have one CNI, because it only has one interface on a pod, so Multus becomes the one and only CNI from Kubernetes' point of view, and Multus then lets you define custom resource definitions that describe the other CNIs you want to use and the data associated with them. Kubernetes calls into Multus, Multus cycles through its set of custom resource definitions calling each CNI, and once they're done it returns the information for the default CNI back to Kubernetes and squirrels away, in some files of its own, the results from the other CNIs. Because this happens outside of Kubernetes, Kubernetes is not aware of these additional networks and interfaces that Multus adds; Kubernetes is only aware of the default network.

The userspace CNI was also pushed upstream by Intel, and Intel, Nokia, Red Hat and now Mellanox are working on it. The userspace CNI allows you to insert DPDK interfaces into a container. That gives you high-speed user-space interfaces, and it also allows additional layer 2, layer 3 and other tunneling protocols to be pushed up into the container instead of limiting the container to just IP traffic. It leverages Multus, so again Kubernetes is unaware of the additional interfaces and networks being added to the container. It currently supports both VPP and OVS-DPDK.
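As an aside, here is a minimal sketch of the meta-plugin delegation pattern just described: loop over the delegate CNI configurations, invoke each one, and hand only the default network's result back to the runtime. This is illustrative only, not Multus's actual code; the function is hypothetical and the libcni calls assume a recent containernetworking/cni release.

// Minimal sketch of the meta-plugin delegation pattern (hypothetical,
// not Multus source). Only the default network's result is reported to
// the container runtime; the others would be cached for the DEL call.
package metaplugin

import (
	"context"
	"fmt"

	"github.com/containernetworking/cni/libcni"
	"github.com/containernetworking/cni/pkg/types"
)

func addDelegates(ctx context.Context, cni *libcni.CNIConfig,
	delegates []*libcni.NetworkConfig, defaultIdx int,
	rt *libcni.RuntimeConf) (types.Result, error) {

	var defaultResult types.Result
	for i, conf := range delegates {
		res, err := cni.AddNetwork(ctx, conf, rt)
		if err != nil {
			return nil, fmt.Errorf("delegate %q failed: %v", conf.Network.Name, err)
		}
		if i == defaultIdx {
			// Only this result goes back to Kubernetes.
			defaultResult = res
		}
		// A real meta-plugin would persist the other results here so it
		// can tear them down later; Kubernetes itself never sees them.
	}
	return defaultResult, nil
}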
I'm working on the VPP part of the userspace CNI, and some of the Intel guys are adding the OVS part, but I believe VPP is more feature rich, so we'll be able to add a lot more features going forward from the VPP side. All right, just a little bit more information about what the userspace CNI is doing. It is very early in development; we don't even have a nice little logo or image. I noticed that when I was putting my slides together everyone else had nice pictures and I didn't have a logo, so it is very early on. When the CNI is called, it uses a Go API to call into the local vSwitch and create the interfaces on the local host. For VPP it uses GoVPP, which was discussed earlier. OVS-DPDK does not have a Go API, so they're wrapping Go calls around the OVS CLI. This lets you create either a vhost-user interface for OVS or a memif for VPP on the local vSwitch. The CNI will then add that interface into a local network based on some input JSON (there's a rough sketch of that kind of configuration below). Currently it only supports layer 2 bridging, so you can define the bridge that you want to add the interface to, but going forward it would be easy to extend that to other protocols as needed. It will also call into your IPAM CNI, and that data can be passed up into the container if need be. Once the local vSwitch is provisioned, the data is squirreled away and passed up to the pod so the pod can consume the interface.

So, a little bit on Network Service Mesh. I have to say going into this that I do not work on Network Service Mesh. One of my colleagues was going to present the service mesh portion and he cannot attend, so he left it up to me; I'm going to do my best to describe it, but I'm not sure I can answer a lot of questions on it. One of the key concepts of Network Service Mesh is that it is data-plane agnostic. It can work with multiple data planes; however, probably one of the first data planes it will use is VPP because of all the features that it has, so I'll talk about it in that context, but it is data-plane agnostic. Another thing about Network Service Mesh is that it has a strong play in running containers and Kubernetes, but Kubernetes is not required; it will be useful going forward there, but you can run Network Service Mesh without Kubernetes if you wanted to. Network Service Mesh is a service abstraction that allows you, from a Kubernetes point of view, to plug a pod into a different pod, or a pod into an external network, much like I was describing with Multus. It creates these networks outside of the Kubernetes default network, so Kubernetes is not aware of the networking that it's doing. One of the advantages of Network Service Mesh is that it enables heterogeneous network configurations. It will support a large variety of tunneling protocols and, like Multus, it brings multiple payload types into a container, whether that's Ethernet, IP, MPLS or any other tunneling protocol you might need for some type of NFV application. One of its most powerful features is that it lets container app programmers do what they do best, the workload, and not have to worry about more complex networking outside of the container. So if your workload needs to connect to a VPN, or it happens to connect through a firewall, the app doesn't have to worry about any of that while programming; it's taken care of outside of it.
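Picking the userspace CNI thread back up, here is the rough sketch promised above of the kind of per-network JSON such a CNI could consume and how it might be parsed. The field names and JSON keys are assumptions for illustration only, not the project's actual schema; check the project's repository for the real format.

// Illustrative sketch only: an assumed shape for the per-network JSON a
// userspace CNI could consume. Field names and keys are hypothetical.
package usercni

import "encoding/json"

// NetConf selects the vSwitch engine, the interface type to create on the
// local host, and the layer 2 bridge the interface should be added to.
type NetConf struct {
	Name     string `json:"name"`
	Type     string `json:"type"`     // e.g. "userspace"
	Engine   string `json:"engine"`   // "vpp" or "ovs-dpdk" (assumed values)
	IfType   string `json:"ifType"`   // "memif" or "vhostuser" (assumed values)
	BridgeID uint32 `json:"bridgeId"` // layer 2 bridge domain to attach to
}

// parseConf unmarshals the JSON handed to the CNI plug-in on stdin.
func parseConf(stdin []byte) (*NetConf, error) {
	conf := &NetConf{}
	if err := json.Unmarshal(stdin, conf); err != nil {
		return nil, err
	}
	return conf, nil
}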
Network Service Mesh, as I mentioned earlier, is a service abstraction, but I'll mention it again because it's a key point: it makes you think of your network as a service. Also, because of the up-front planning and design, the networking payloads are not an afterthought; it is designed up front to feed these additional layer 2 and MPLS payloads into a container. It plays well with Kubernetes: it doesn't require any changes to Kubernetes, and it doesn't affect the Kubernetes default network at all.

So, summary and overview. Ligato, well, you had a long discussion on it before, but it allows you to insert user-space interfaces into the default Kubernetes network. I may not be quite right on that after hearing the presentation, but it's a large, feature-rich set of projects. Multus and the userspace CNI add CNI interfaces outside of the Kubernetes default network, which allows some separation of control and data plane for your container. It's very early in development; to your earlier question, it doesn't have a lot of the network policy or cross-pod configuration. Right now, very early on, it's just doing plumbing of the DPDK interface into a container. Network Service Mesh is a service abstraction; it's data-plane independent and also inserts container networks outside the Kubernetes default network. It could leverage Multus, and maybe Ligato, going forward; we'll just have to wait and see if that's possible or needed. It's also early in development. I think they're trying to get up to 1.0, and I think they still have some work to do on integrating with the data plane; that's where it stands. Which is better? I guess it depends on your use case and what you're trying to do, but all of these leverage the high speed and rich feature set that are in VPP. One thing I would like to say as a call to action is that a lot of these projects do need help. They need coders. If you don't like to code and you like to tell people what to do, they could use some architects, as long as you do it with a smile and a please. We definitely need a lot of valid use cases so that these can be tailored for the real-life situations where they're needed. That was it. Thank you very much. I have a reference slide at the end with links to all the projects. I had to talk fast, sorry. We have time for a few questions if anyone has any.

Question from the audience: Full disclosure, I'm a CNI developer and Multus developer working on OpenShift. How much of this could be replaced? I mean, Multus is great and having multiple interfaces is awesome, but for these user-space interfaces, some of them don't even really feel like network interfaces in the traditional sense. How much of this fits more into the Kubernetes device plug-in model versus the CNI plug-in model?

Good question. One of the current things we're working on with the userspace CNI is making a device plug-in so it can handle NUMA, CPU pinning, all of the above. The discussion is whether it's going to become a full device plug-in or whether we're going to have a partial device plug-in but still use some CNI. I think it should be a little bit of both, leaning more heavily on the device plug-in, but time will tell. We're still talking through some of that stuff right now.

Audience follow-up: I think we also need use cases for that, because when CNI was founded the ecosystem was a lot simpler, and coming up with a proper boundary between device plug-ins and CNI is still... it's very ad hoc, and I think best practices for that are definitely not settled.
The way I've always been told is that you use a device plug-in if you have a limited set of resources, something that gets used up, like your NUMA... well, not necessarily NUMA itself, but once you get into CPU pinning: how many CPUs do you have and where can you place the workload. But I could see it; it is one of those things that walks the line, and which side you should mostly follow is still open. All right, thank you all. Sorry I talked so fast.
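For readers who want to see what the device-plug-in side of that split could look like, here is a minimal, hypothetical sketch of an Allocate handler that hands a user-space interface's socket into the requesting container. It is not the userspace CNI's actual device plug-in (which was still being discussed at the time of this talk); the resource name, socket paths, and plug-in struct are assumptions, and only the Allocate hook is shown, with the kubelet registration and ListAndWatch plumbing omitted.

// Hypothetical sketch of a device-plugin Allocate handler for user-space
// interfaces. Not the userspace CNI's actual plug-in; names and paths are
// illustrative. Kubelet registration and ListAndWatch are omitted.
package deviceplugin

import (
	"context"
	"path/filepath"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

// userspacePlugin would advertise a countable resource such as
// "example.com/userspace-iface"; each advertised "device" maps to one
// memif or vhost-user socket pre-created on the host (assumed layout).
type userspacePlugin struct {
	socketDir string // e.g. /var/lib/userspace-sockets (assumed path)
}

// Allocate mounts the per-device socket directory into the container so
// the workload (or a CNI plug-in running later) can attach to it.
func (p *userspacePlugin) Allocate(ctx context.Context,
	req *pluginapi.AllocateRequest) (*pluginapi.AllocateResponse, error) {

	resp := &pluginapi.AllocateResponse{}
	for _, containerReq := range req.ContainerRequests {
		carResp := &pluginapi.ContainerAllocateResponse{}
		for _, id := range containerReq.DevicesIDs {
			carResp.Mounts = append(carResp.Mounts, &pluginapi.Mount{
				ContainerPath: filepath.Join("/var/run/userspace", id),
				HostPath:      filepath.Join(p.socketDir, id),
				ReadOnly:      false,
			})
		}
		resp.ContainerResponses = append(resp.ContainerResponses, carResp)
	}
	return resp, nil
}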