Hey guys. Hello. Hello. Good morning. Give it a brief moment more and then we'll get started. Okay, so let's go ahead and get started with the Network Service Mesh community call. So welcome. Is anyone able to see the screen? I believe I'm sharing my screen. Cool, I see it. Okay. We have only two people signed in for the call, so maybe the rest would like to add their names here too. Okay, so we have this meeting every Tuesday at 8 a.m. Pacific time, and we have a bi-weekly EMEA/Asia call which occurs every other Tuesday at 10 a.m. CET, which is 9 a.m. GMT. There is a link to the meeting notes doc within the chat that was posted — thanks for posting it. We are also involved in the CNCF Telecom User Group, which occurs every first Monday at 8 a.m. and every third Monday at 4 a.m. There's also a CNCF networking working group which is in the process of being formed and had its first meeting at KubeCon. Major events coming up include DevConf in Brno; the call for proposals is already closed. We have FOSDEM 2020 in Belgium coming up, and its call for proposals is currently open. We have KubeCon Europe coming up March 30th through April 2nd. Just a reminder, the call for papers closes on Wednesday, tomorrow. But also note that there are no time zones listed, so keep that in mind — it may be a European time zone. The schedule will be announced on January 22nd, and there is also a call for proposals for a telecom networking and CNF track. Is Taylor on by any chance? I think he might be at the CNCF. This is the telecom networking and CNF set of talks that they're putting together. We'll have to ask Taylor whether this is a co-located event that they're planning to run. We also have the Open Networking & Edge Summit in North America coming up. For announcements: are there any announcements anyone would like to make? No announcements on my side.
I know that it is not joining and CNF is not joining — this is for the social media community thing; we need to cover that on our own. I would like to ask a question about the KubeCon EU submissions. Is there anyone on the call that is actually preparing a submission? Maybe we need to sync on what's going on and what the plans are. It comes quite fast, just after KubeCon NA and then Thanksgiving, so we need a quick sync-up here to see what the plans are. Actually, we discussed with Ed proposing a security talk. I guess it's in progress of discussion with Ed and some guys from SPIRE. Okay, that's good. I mean, you have about 24 hours or something like that? I already have a draft doc from the discussion with the guys from SPIRE. Okay, perfect. That sounds good. I hope it will be a success. I know that there are some other things going on in parallel calls and discussions. Okay, so Fred, are we discussing the maintainer track here or are we taking this offline? Can you repeat the question? The maintainer track sessions that we probably will have to submit. Yeah, that's a conversation that we need to have. We will definitely submit something for the maintainer track. For those of you that are unaware: what tends to happen is they have the main set of sessions, and then they have a maintainer track. So for projects that are within the CNCF — I don't want to say it's a separate submission, but there's a section specifically for CNCF projects to present. It's not a guarantee that you get in, but we're going to post to that as well. And outside of that track, I am planning on submitting an integration between OpenTelemetry and NSM. So we'll see how that goes too.
We are also working on the initial planning for NSMCon. That's something else that we would like to run in Europe if possible. We'll have more information on that once we get some more details. It'll very likely be run in the same way that NSMCon number one was run, which was a huge success, at least from my perspective. So right now this is tentative until we can bring things together. But this is also another option: if your talk is not accepted into KubeCon, there is a potential second opportunity to submit it to NSMCon. If anyone needs help with getting their talks in, I'll be around most of the day on Slack, and I'm pretty sure there are others in the NSM community who are going to help as well. So if you're putting something together and you need someone to help review or bounce an idea off — I know time is short, but we'll do our best. On to the Twitter account information. We have 597 followers, which is seven above what we had before. We're following 2,006 or 2,007, which is plus four. And we had 850 total tweets, which is an increase of 10. We posted the "five cool things" video from our KubeCon talk, and we retweeted multiple mentions. The plan is to post the NSMCon slides, retweet the various mentions, and once we hit 600 followers, send a thank-you tweet. And are we still waiting on the contributor podcast? Has that not been released yet? Do you know, Nikolai? I know nothing of any updates. I was contacted by Lucina just before the call; she didn't mention anything about it. So that's what we know. Okay. We are also starting LinkedIn updates now.
So that's it for the main status. Right now we have a relatively empty agenda — is there anything that anyone would like to discuss? Hey guys, Przemek from Intel here. Actually, I was asked by Ed and Steve Kremins to do a quick demo of the SR-IOV forwarder for NSM. That would be really cool. Yeah, love to see it. This is one topic I've been very excited about for a long time. Also recall that this video is recorded, so whatever you show here will be available for others who are not on the call today. All right. Do you want to share your screen? If now is the time to do the demo, then absolutely. Yeah, I will stop sharing, but maybe you might want to introduce the subject a little bit — what you're doing, things like that — so that we have a common understanding of what's going on on the screen. I'm sure it's pretty cool, but without commentary people might be wondering why it's cool. Yeah, absolutely. Okay, I've started sharing; let me know when you can see it. Okay. Cool. This is going to be a quick introduction to what SR-IOV actually is. SR-IOV is a technology that allows the user to expose, or create, a number of virtual function devices that represent a single physical device. So for example, you can have a single network interface controller — a single NIC — and then, based on that, present it as multiple NICs to the operating system and the applications that run there. This allows the applications to have direct access to the hardware. So for example, you can have a virtual machine or a Kubernetes pod talking directly to the hardware instead of going through some software-based solution, right?
So for example, without SR-IOV, you would have a Linux bridge or an Open vSwitch bridge that would move the traffic between the virtual machine or Kubernetes pod and the external networks. With SR-IOV, you can provide direct access to the hardware to the applications that run inside Kubernetes pods or virtual machines. So I have a single-node cluster here, a single-node environment. On that node, I have an SR-IOV capable network adapter, which is actually a pretty old Intel controller with a speed of 10 gigabits per second, and I have four physical ports available. Only one of them is connected — this would be port number four here, the fourth one. So, how would this look when SR-IOV is enabled? I'm going to start with that. If you run an `ip link` command, you expect that it will list all the network interfaces that are available to be used on the node. With SR-IOV enabled, you get something like this. For the single physical function here — which in this case is called DNS78553 — we can create up to 64 VFs, depending on the hardware. At the moment, I have 16 virtual functions enabled. Each of these virtual functions is then visible not only here under this physical function, but also as a regular network interface — for example, that would be enp2s15f7. So you get the idea: a single hardware interface that can be exposed as multiple virtual network interfaces. So, coming to the Kubernetes side. In order to use these virtual functions as network interfaces that can be attached to a Kubernetes pod, we need some mechanism that does that. In the case of the NSM SR-IOV forwarder, we need two components. The first one is the regular SR-IOV device plugin, which is maintained by Intel on GitHub. Basically, no big modifications have been done to that SR-IOV device plugin at this stage to make it work with NSM; there's only a single patch that increases the configurability of the device plugin.
It allows configuring custom resource prefixes and names for the resource pools. So what are resource pools in the case of the SR-IOV device plugin? I'll show you the configuration, which in this case is just a simple ConfigMap. There we go. The most important piece of data is this JSON config. The SR-IOV network device plugin configuration allows us to create or specify a number of resource pools — resource pool is the correct term for this. As you can see, we have an array here, and for each of the resource pools we can configure a resource prefix and a resource name. And then this is a selector that the SR-IOV device plugin uses to choose which devices are assigned to which resource pool. I have a quick question — sorry, Przemek. What I see here is that you're essentially demonstrating a different resource prefix for each of the items in the list. Last time I checked the SR-IOV plugin, it was a more or less static resource prefix. Is this the new thing that you mentioned? Yeah, this is new functionality that I added to the device plugin. And is this upstream in the SR-IOV plugin — if I download the SR-IOV plugin now? Not yet; I haven't even started the upstreaming process yet. Also, I chatted briefly with Ed before this meeting, and he had a couple of new ideas that can extend this configuration even more. But I'm expecting to push this PR probably still this week. Perfect. Just for quick context for the rest of the audience here and the viewers later: this is very crucial, because we're going to use this resource prefix to enumerate our network services in local or remote domains. So for us, it was very important that we have a way to have different resource prefixes for the various services that we would like to expose. So that's a good step forward. Thank you. So, to quickly describe and continue: on the node, the resources are created and managed by the device plugin.
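For orientation, a ConfigMap along these lines is what's being shown on screen. This is a hedged sketch, not the exact demo config: the pool names and selector values are made up for illustration, and the per-pool `resourcePrefix` field is the patched behavior Przemek describes, not necessarily the upstream plugin's schema at the time.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourcePrefix": "kernel.service.one.intel.com",
          "resourceName": "10G",
          "selectors": { "drivers": ["ixgbevf"] }
        },
        {
          "resourcePrefix": "userspace.service.one.intel.com",
          "resourceName": "10G",
          "selectors": { "drivers": ["vfio-pci"] }
        }
      ]
    }
```

Each pool then surfaces on the node as an extended resource (e.g. `kernel.service.one.intel.com/10G`) that pods can request.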
As you can see here at the bottom, we have exactly what we have in the device plugin configuration, right? We have the kernel service one — intel.com/10G — and then the same goes for the user space one. And obviously, because this comes from the configuration file, this is fully configurable, and we can have multiple resource prefixes running alongside each other on a single Kubernetes node. Okay, so — I'm interested in how the SR-IOV device plugin allocates the virtual functions to the pods. Yeah, maybe I can actually show that. So we schedule a new pod, and this pod would be the example from the NSM repository, the ICMP responder. I updated the pod spec for it — the Helm template, actually. Let me show you what the pod spec looks like now. Let's see. Yeah, so in the resources section in the pod spec, you can see we have a new addition, which is our kernel service one intel.com resource — a virtual function of the SR-IOV NIC. Another new addition, currently injected by the webhook admission controller, is this environment variable. This is a kind of hacky solution I came up with at the moment in order to be able to pass this environment variable, and I'll show you right now why it's important. So let's exec into the pod. Yeah — for the kernel-based interfaces, not using the accelerated user-space data path, the only information that we get is the environment variable. Once the resource requested in the pod spec is allocated from the resource pool, the SR-IOV network device plugin will inject its PCI address as an environment variable inside that pod. In this case — sorry, it will be visible only in this container, so not here, but — there we go. So this is exactly the environment variable that is injected by the SR-IOV device plugin. As you can see, it includes the resource prefix and resource name.
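The env-var naming just mentioned can be sketched like this. To the best of my reading of the SR-IOV device plugin, the variable name is `PCIDEVICE_` plus the full resource name, upper-cased, with `.`, `/`, and `-` flattened to `_` — but treat the exact convention as an assumption to verify against the plugin source, not a spec.

```go
package main

import (
	"fmt"
	"strings"
)

// envVarName approximates the naming convention the SR-IOV device plugin
// uses when exporting an allocated VF's PCI address into the container:
// "PCIDEVICE_" + "<prefix>/<name>", upper-cased, with '.', '/', '-' -> '_'.
// The convention is reproduced here as an assumption for illustration.
func envVarName(resourcePrefix, resourceName string) string {
	full := resourcePrefix + "/" + resourceName
	r := strings.NewReplacer(".", "_", "/", "_", "-", "_")
	return "PCIDEVICE_" + strings.ToUpper(r.Replace(full))
}

func main() {
	// A pod requesting intel.com/kernel_service_1 would see something like:
	fmt.Println(envVarName("intel.com", "kernel_service_1"))
	// PCIDEVICE_INTEL_COM_KERNEL_SERVICE_1
}
```

This is why the Helm template has to reference the variable indirectly: the name is derived from whichever resource pool the webhook picks, so it isn't known statically.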
As you can imagine, this is a dynamic thing that we need to figure out dynamically. Currently the only solution is to reference this environment variable as part of another environment variable here. So this is a kind of hacky solution I came up with for the moment. This is mostly because of the limitations of the SR-IOV network device plugin, and also because of the limitations of the NSM client applications at the moment — there are a lot of hard-coded requests and other things that I had to play with to make it work, by introducing more hard-coded stuff for now. This is something that we definitely need to improve in the future, but for the moment, it works fine. Okay, so that's pretty much it regarding the SR-IOV device plugin. Let's get to the second, more interesting component, which is the — I have just one small addition here. I mean, I get why the hack is needed, and I'm sure that we'll be able to figure out a better solution in the end. One small note about the resource limits that you showed in the YAML for NSM — just so viewers know, this is needed in order to be able to schedule the workloads. This is the way the device plugin mechanism itself works: by adding these limits, you essentially tell the Kubernetes scheduler that it needs to schedule this workload wherever this resource is available. In a normal Kubernetes cluster, you probably have nodes that have this resource available and nodes that don't, so selecting the proper worker node is very important, and that's why it's needed here. I'm saying this because in general I'm very sensitive when we are adding additional annotations to our pods, and I needed some time to assimilate this and say, okay, fine. Yeah, and the resource itself isn't even added in the Helm template. Currently I'm doing something like this: I'm just adding two new annotations with the resource prefix and resource name to the template.
And then I let the admission webhook figure out what the resource name should be and inject it into the pods. This makes it more dynamic and less hard-coded, but no, it's still not a perfect production-ready solution. It will take a couple of iterations, but I'm sure we'll get there by trying and retrying. Yeah, absolutely. Okay. So are there any other questions regarding the device plugin part? Can we move on to the forwarder? I hear silence, so I guess we can move on. So as you can see, we have a new application running here — it's the SR-IOV forwarder. It's kind of a replacement for the existing forwarders, the kernel forwarder and the VPP forwarder. So maybe I can show you some source code. This is the PR for NSM — it's actually something that's getting upstreamed to NSM; I opened the PR, I believe, last Friday, so it can be reviewed. I'll skip the obvious parts like forwarder registration and other stuff, because they work exactly the same way as in the other forwarders. What's important is how we handle the requests coming from the client applications and what types of mechanisms we support. In the case of this SR-IOV forwarder, we're going to need two new local mechanisms — I'll comment it out for the moment. One would be for the kernel-based interfaces, so for the virtual functions that are bound to a kernel module, where the entire packet processing is performed in the Linux kernel network stack. And another mechanism type is needed for the user-space connections — for example, for DPDK applications and other similar solutions. Currently, in the current implementation, only the SR-IOV kernel mechanism is introduced. I rebased this PR on Monday — you know, the initial implementation was based on when we still had the local and remote APIs. After the new unified API was introduced, I had to rebase it, and currently there's only the one SR-IOV kernel mechanism available.
But, you know, in the future we'll definitely add the second one, the user-space one, here as well. So the only new field in the request in this case is the PCI address of the virtual function. Now, if you're connecting the pieces here: what we get from the SR-IOV network device plugin is only the PCI address, right? So this is the only thing that we can use to request a new interface inside our Kubernetes pod. And that's exactly what is being done here. We take the request, we take the PCI address — which is the most important piece of information here — then we obviously have an option to configure the name of the link, and we also need the network namespace inode identifier to know into which network namespace we should inject that virtual function. Then we do some configuration — I'll show you what type of configuration is currently supported in a second. And then we do exactly the same thing for the destination interface. Now, again, I will need some help here, some brainstorming. With a purely software-based data path, it's extremely easy to create a new virtual Ethernet link pair or to inject a new VPP interface and so on. But here we are dealing with hardware resources. So if you want to attach, for example, 20 clients to a single NSE, then we have a problem, because attaching 20 virtual functions to a single Kubernetes pod seems like overkill, and probably there is some way to reuse that VF. I'm currently trying to figure it out. I found that across various requests the workspace name is matching, so I was wondering whether I could reuse that as some kind of key for the connection, so I don't need to allocate a new separate destination VF each time. But yeah, that's something to consider and something that I'll be working on, and I'm looking for feedback as well. Okay, so let's jump to configuring the VF interface. What do we do here?
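To make the request shape concrete: roughly, the information the SR-IOV kernel mechanism has to carry is the VF's PCI address, the desired link name, and the target network namespace. The struct and field names below are assumptions for illustration only — they are not the actual NSM mechanism API, which uses its own mechanism types and parameter maps.

```go
package main

import "fmt"

// SRIOVKernelMechanism is an illustrative, hypothetical shape of the data a
// forwarder request carries for the SR-IOV kernel mechanism. Field names are
// assumptions, not the real NSM API.
type SRIOVKernelMechanism struct {
	PCIAddress   string // the allocated VF, as reported by the device plugin
	Name         string // desired interface name inside the pod
	NetNSInodeID string // target network namespace, identified by its inode
}

func main() {
	m := SRIOVKernelMechanism{
		PCIAddress:   "0000:02:0f.1",
		Name:         "nsm0",
		NetNSInodeID: "4026532622",
	}
	// The forwarder would resolve the VF's netdev from the PCI address and
	// move it into the namespace identified by the inode.
	fmt.Printf("inject VF %s as %s into netns inode %s\n", m.PCIAddress, m.Name, m.NetNSInodeID)
}
```

The open question about VF reuse amounts to choosing a key (e.g. the workspace name) for looking up an already-allocated destination VF instead of allocating a fresh one per request.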
Well, first of all, we get the network interface handle — that's pretty much obvious. But then, based on the PCI address, we need to get the link name, i.e. the representation of the link in the kernel namespace. You know how a PCI address looks: for example, the network device plugin gives me something like 02:0f.1. From this, I need to know what the actual interface representation is — based on the PCI address, I need to get something like enp2s14, for example. Then, having this information, I can take this link and inject it into a Kubernetes pod. The steps are as follows, pretty similar to the kernel forwarder actually. We get the link representation based on the link name, then set the link down for the duration of the operation. We move it into the network namespace of the Kubernetes pod. Then we set its IP address, set the name again, and then set the link up so it can be used. And we do that for both the source and the destination link. Once that's done, we can see the virtual function properly injected and configured inside a pod. So, to show you that in an NSM system, using our example ICMP responder — sorry, wrong namespace. Starting with the client side, let's exec `ip address`. We can see we have three interfaces: lo, which is the loopback; then we have the interface provisioned by the CNI plugin; and then here we have our virtual function — the hardware resource that is assigned, or attached, directly to our Kubernetes pod. And on the other side, the endpoint side, we'll see the other end of the interface here as well, with a different IP address. So from the client side, I can easily ping the endpoint and get a response. We have the cross-connection established between the two pods. So this is how it currently works. Yeah. Okay. Okay, thanks. That was a very detailed and excellent presentation.
I'm sure that it will have to go through some iterations before this gets merged. But I have just a quick question here: how dynamic is the resource allocation? Is there a way that I can drop this interface now and then inject it into another pod? Yeah, absolutely. Once the pod is terminated, the VF will be released by the SR-IOV device plugin; it will be returned to the resource pool and can then be reused for another pod. Okay. So for example, we can scale the deployment. This is something that doesn't really work well yet on the network service endpoint side; however, it works for the client side. Let's scale the deployment we have here, the ICMP responder NSC. So a new pod is being provisioned now — let's watch it. The init container, which was also updated by me as part of this PR, will take that environment variable and use it to provision a new interface. Okay. Good. Are there any other questions from other people here? Because I feel like I'm the only one asking questions. So once this is done and merged, do you have any intentions of pushing on the user-space path? Yeah, absolutely. Actually, I guess these will be very connected and have to be done in parallel, both data paths — I guess there will be some overlap between the two. Okay, fantastic. Yeah, we're very excited. I can't even describe how much people have been asking for this type of functionality, so thank you very much for working on this. Yeah, I'm sure this video will go viral very soon. One question I had — this is Ryan from SUSE; I'm kind of new here, so forgive me. Yeah, first time here as well. My question is: is there any help that you need moving this forward? Well, help is always appreciated. I talked with Ed before the meeting, and there's still — maybe not a lot of work, but a significant portion of work that needs to be done.
So we need the device plugin to extend the configurability even more, beyond just resource prefixes — for example, to allow dynamic rebinding of the driver that controls the VF, right? So for example, you start with a VF that is attached to a kernel driver, and then you reattach it to a user-space interface, so now you can run a DPDK application on top of it. We have a lot of ideas like that, and we definitely could use some help with them. Also the user-space mechanism: currently, with the kernel mechanism it's easy, because you have a link representation that you can work with. But with user space, what can you configure, right? We already had similar problems in the SR-IOV CNI plugin and the DPDK drivers for OpenStack, where there was nothing to work with, so we ended up adding annotations or creating files inside the Kubernetes pod file system, stuff like that. So we definitely need some help with the design of that part as well. Of course, any help, reviews, and feedback is highly appreciated. Cool. So a couple of comments on that. On the design: on our GitHub page, we have a specs section under Projects — it's part of the Kanban board. So you go to GitHub, networkservicemesh; at the top you see Projects, and there's a board called specs. You can get a hold of me as well if you don't remember this. But if you want some help with design from the community, one thing you can do is create a Google document and link it there, so that people have a place to know where it's at, and we often will review many things through that process. A second thing we should make sure eventually gets submitted in: we have CI across multiple clouds. I know for certain that packet.net supports SR-IOV. And they support, at least I've been told, two of the cards that they have — one of them is from Mellanox, the other one is from Intel.
We should make sure that we include this work that you're doing. Once it gets versioned, we should make sure that we provision some packet.net resources so that this path stays fixed. So that's something that we can help with. And I don't know offhand what hardware they currently have, so we'll have to take a look at that. On the forwarder side, and probably on the CI side, we already have Radoslav in contact with Przemek to work together — we wanted to have someone from the community dedicated to helping him. And because Radoslav did the kernel forwarder, I believe that he's in a very good position to know what it takes to implement a forwarder for NSM today. That's my thinking, but of course everyone that wants to help in any way is more than welcome. Ryan, you made that comment about helping — if you have additional resources, definitely feel free to join in. And if you need some help with joining in, you can always talk with us; my recommendation is to join us on Slack. We have an NSM channel — yep, it's in the Cloud Native Computing Foundation Slack. Feel free to ping us in the NSM channel; you can hit me, Ed, Nikolai, Andre — there's plenty of people; there's almost always somebody on there. Yeah, I've been in touch — I'm in the Slack channel. I may be able to, you know, free up some hardware that we have to help test things and do some useful work. So if there are specific things where I can help — I'm kind of new to this project and the Kubernetes ecosystem in general, so I might be a little slow putting together PRs while I ramp up, but I'm happy to help in any way I can. No problem. Yeah. The two other things that we ask for from newcomers: take a look at our documents while you're building things up.
And if you're able to suggest some fixes through pull requests, that'd be fantastic. You'll approach it with fresh new eyes; it's hard for us to approach it in the same way. And when you run into a blocker, definitely come and grab us — don't waste time trying to fight the system; definitely grab one of us and let us know, and we'll do our best to help you with that path. Okay, great. Cool. Are there any other questions on the SR-IOV topic? Okay. Well, thank you very much for coming over here and presenting. We'll definitely make sure that this video gets circulated around — there's definitely strong interest in this. So again, thank you. Thank you very much. No problem, happy to be here. Thanks, guys. Cool. Are there any other topics that people would like to discuss? We have 10 minutes left. Okay, if there are no other topics, then we'll yield back 10 minutes of time. Let's go. I have just one quick thing. I think, Ryan, you sent a question today — a PR, an issue — about Helm 3. Yeah. So yeah, that's a good topic. I mean, I think that it's also a good newcomer issue — it shouldn't be that complex. So yeah, if you need help with this... Yeah, I'm cooking up a PR on that right now — just to help get myself over the hump and share it with others. Perfect. Okay. Great. Thank you. Yeah. A friend of mine who is involved with a lot of different open source projects told me that the magic number he has seen is around five — five patches until people start to feel somewhat comfortable submitting patches. So my recommendation is to make that the goal. Cool. Is there anything else that we wanted to talk about on that topic, or is everything good on keeping that in Slack? Okay then.
I want to thank everyone for your time, and we will see you all again at the same time next week. Thank you. Thank you. Thanks. Bye. Thanks. Bye.