Great, so we have meeting minutes shared, so thank you to whoever's sharing them. Cool, we have enough people here, so let's get started. Welcome to the next Network Service Mesh meeting. We have this meeting every week. We have two others which are currently on hiatus, the docs meeting and the use case meeting; we may reconvene them as needed. There's the CNCF Telecom User Group, which we join in on, which occurs every first Monday at 8am and every third Monday at 4am Pacific. And of course this call, which occurs every Tuesday at 8am.

We have some major events coming up. We have ONS Europe, occurring in Antwerp, with four accepted talks. We have Open Source Summit in Lyon, with one accepted talk by Ivana and Radoslav. We have KubeCon and CloudNativeCon North America in San Diego, and we have announced NSMCon as a day-zero event there. Please, please register; there's limited space. There's also a call for proposals, and the most important thing people can contribute here is going to be content, so talk about what you're doing. Please note the CFP closes this Friday, so get your talk proposals in. We also have sponsorships available. I believe the agenda has been posted for KubeCon as well, and we have a maintainer-track talk. That's the only one that I am on; I don't know if anyone else got anything accepted. Yeah, on the main program we just have the maintainer talk. Okay. It's getting harder and harder for things to get onto the main program. I reckon they have less than a 5% acceptance rate at this point, so my little math brain tells me that means we need to get 20 talk submissions in next time. Taylor, do you have anything accepted, by chance? For KubeCon? Yeah. Well, our maintainer track got in, but we didn't get accepted on the other ones, like the panel we were looking at doing with you and Ian. Okay, it might be good if you could add a link to it, because I suspect your talk will be of interest to the NSM community as well; there's a lot of good interplay here. If you could add it to the notes, that would be great. Yeah, sure.

Which reminds me, at ONS Europe there is also the TUG meetup, which I believe is going to occur Thursday at 11:45am local time. So if you're still around on Thursday, that may be an interesting place to meet up. Correct me if I'm wrong about that, Taylor; it might be a different meeting, but I believe that's what it was.

Let's see, we have the social media and community team. Lucina, I saw you on the call, so you have the floor. Good day, thank you. Awesome. So this week, and I'll start backwards, I was able to post about the OVS Orbit podcast episode. Everything is clickable, so if you have a Twitter account, please feel free to click through from these meeting notes and retweet, like, all that good stuff to promote the OVS Orbit podcast episode that's been published. There's also an announcement and a reminder of the Network Service Mesh and CNF Testbed session at Open Networking Summit this month. I've also posted the call for proposals for NSMCon at KubeCon North America. There was also a post that I retweeted about the CNCF webinar "Intro to Network Service Mesh" that will be on October 2nd.
Details on how to RSVP are available in that tweet, which is linked in the meeting notes. I also created a thread for all of the KubeCon sessions that mention Network Service Mesh. That one got a lot of traction. I tagged ten people in it, and didn't realize that when I created comments with each session listed, those ten people would be tagged again. So that was a learning experience, but it also got a lot of eyes on Network Service Mesh. If you're curious which sessions will be at KubeCon North America in November, that tweet is one place you can find them, and it's also a reference point: if we want to copy and paste the URLs, dates, and times into these meeting notes, we can do so; it's all together there. I also posted a reminder of today's working group session. And there was a really good comment from an account that said, look into Network Service Mesh, it has a winning architecture. So I put the link to that really high praise into the meeting notes. Congrats. That's really cool, thank you very much. You're welcome.

So, announcements. On the CNF Testbed, there's an announcement from Nikolai and Michael. Today we had another small breakthrough with Michael. Essentially, for the last ten days... at least a week... yeah, I think I've been working on it more extensively just in the last couple of days. Okay, so Michael has been trying to inject an external interface and create the so-called gateway, which is not the final story we want to show, but it's the beginning, the precursor to the final solution, I believe. So we have an end-to-end path: an external host going through this gateway, through a physical interface, into NSM, and you can pass pings back and forth between the external world and the chain of services, packet filter, VPP-based clients, kernel-based client, all chained together, and it's working. It's one of the examples we want to show in two weeks at Open Networking Summit, so we're kind of proud of it. At least I am; Michael? Yeah, it's a good step in the right direction. I guess it's last minute, but that's more the usual, so that's at least good. You guys should be proud of it; that's a big step forward, and now we just have to get the hardware NICs stuff working in Network Service Mesh and we can make it all NSM.

Yeah, just to add, we're doing a few workarounds right now, in particular around the kernel driver: we're loading it from the host and pretty much having a privileged container which can then access it, and using the DPDK plugin for VPP we're able to attach VPP to the interface and do whatever we need to do on there. Okay, yeah, actually I'm doing quite the same using Multus; it's an alternative, but it's working too. Yeah, and I think we discussed briefly today as well that we probably need to look into some better way of doing it, and there are quite a few options, Multus being at least one of them, where we can do this in a bit of a prettier way than we're doing right now. I think my biggest concern is just that we need to have it ready for ONS as well. I mean, one of the things coming down the line is that we're starting to get some folks who are interested in taking a look at doing the hardware NICs stuff.
I'll probably go and revisit the hardware NICs spec and pretty it up with some more recent material, because that will handle not only how you get the NIC in there, but also how you have something that will call, properly and in an orderly way, the right APIs to set up the particular VLANs and whatnot correctly for you, so that you actually get the network service you want. Yeah, it's basically bringing proper dynamicity into making sure the ToRs are offering the right network service to you too. Well, at least for me this step is valuable, because we stumbled into some unexpected problems: how do you set up DPDK? Do you need a privileged container? How do you isolate a specific device? Because we currently are not really able to do that; it just maps whatever is out there. So it's the beginning of the learning curve, I think, at least for me. Whatever we have in the higher-level specs will surely be able to take advantage of this work in any case; that's my point of view. Yeah, we'd love to have your involvement in the creation of this stuff as well, because you have experience setting up and managing these things, and that's invaluable.

So we have the SDK evolution, which was updated... sorry, Taylor, my apologies; let's roll back. Taylor, you have the CNF Testbed roadmap. No worries, and this is just building on what everyone's been talking about. I'll just send the slide via Zoom, I guess; can someone click that, whoever's sharing? I can add it to the notes as a link as well. Yvonne, can you please click it and open it up? Yeah, this one. There's also one in the repo, but I need to get that one updated with what we did here; it's a little bit off. Anyway, as you can see, most of the use cases we're focused on are going to be around NSM for the next couple of months, and what Michael and Nikolai were just talking about is that second one in September, the NSM physical NIC gateway, which we're planning to use as the example in both a tutorial walkthrough type thing and the talk that Nikolai and I are doing. So that's the big one we're trying to have ready for ONS.

We're also in the middle of refactoring a lot of different things in the CNF Testbed, including what's listed about the use cases. We're also working on the provisioning of machines and clusters; as we finish some work on that, which we're also doing in the CNF CI project, we'll be switching over to using Kubespray. This part is something we'll probably see moving towards around ONS, at least on the Kubernetes side: splitting up the use cases into reusable components. Michael, Denver, and a few of us have been working on this; it's in a different branch right now, but ideally we're going to at least have the NSM packet filter use case ready by ONS in this new setup, and if the timing is right we can also get the physical NIC gateway use case in, depending on where things are. We think this is going to be a lot nicer for people to contribute to: they can come in and work on service chains or CNFs or whatever they want and add different pieces. Further on down the line you can see DANM is on there; we've had a request to get DANM into the CNF Testbed, as well as the Nokia CPU-Pooler.
I've put it further out because I don't know when they'll have time, but I think by the time we hit November we'll be able to focus on it, with other people if Nokia is not available. That hybrid Kubernetes/OpenStack use case is the one we were trying to get accepted with Nikolai, Yangyu, and a bunch of other folks, and ideally we can still target getting that up if we can get some OpenStack help on some of the VPP GTP tunnel issues we've been running into. That would be one of the main things, and it could be a talk that would be good for NSMCon. We've got some others further out, including talking to some folks about switching to Kolla and OpenStack-Helm; there are some Packet projects and people who want to help on that. And the Multus and Intel stuff comes from Intel: they have a Container Experience Kit that Michael has already tested on Packet, so we're hoping to pull some of that in as well.

The last thing, if folks are interested in getting involved, then let me know: I've put it pretty far out, in January, kind of thinking about Mobile World Congress in Barcelona, which is in February next year. It would be a 5G type of use case, ideally with NSM connecting the pieces. Packet has facilities that are connected to Sprint's 5G network, and they're willing to work with us on access to various things. So if you're interested and would like to talk about that, let's see if we can put something together. Taylor, in the 2020 timeframe, when you think about a 5G gateway, are you thinking about the UPF, about what the UPF mode of 5G would be? Well, I'd say to be determined, based on what we have available to test. I've been talking with Packet a little to see some of that, and I think we're going to have some conversations with Sprint, probably post-ONS. Okay, but if you have thoughts on something that would be interesting or doable, or what you'd like to see, then let me know, on that or any of these. I'll let the ONS rush pass and then come back to it.

All right, that's fantastic. We did have a very minor presence at the last Mobile World Congress: basically, at the Cisco booth they had a micro-booth with some people talking about upcoming things Cisco's been involved with. But getting to something where we actually have something working there would, I think, be absolutely fantastic, so I'd love to make that happen. Is there anything else we want to talk about on the roadmap? Okay, with that, let's move to the status of the projects. For this I'll hand it off to Ed, and Ed can start poking the various people.

Yeah, so the SDK evolution work finally landed last week. This is not only stuff that's going to make it easier to write fragments, the chained pieces of your network service endpoint; it's also set up in a way that gives you internal tracing, so you can see the progression through the many pieces, which can be very helpful in figuring out what's going on, particularly when you have subtle issues with requests. And it's set up in such a way that any logs inside your SDK fragments show up in the spans. So it should make things quite a bit easier to work through the SDK.
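To make the chaining-plus-tracing idea concrete, here is a minimal sketch in Go of the general pattern described above: each composable element wraps the next one and opens a tracing span around its own step, so anything it logs attaches to the span for that fragment. The interface, names, and use of opentracing-go here are illustrative assumptions, not the actual NSM SDK API.

```go
package main

import (
	"context"
	"fmt"

	"github.com/opentracing/opentracing-go"
)

// Endpoint is a stand-in for a composable network service endpoint
// fragment (hypothetical; the real SDK interface differs).
type Endpoint interface {
	Request(ctx context.Context, req string) (string, error)
}

// traced wraps any Endpoint so each Request runs inside its own span.
type traced struct {
	name string
	next Endpoint
}

func (t *traced) Request(ctx context.Context, req string) (string, error) {
	span, ctx := opentracing.StartSpanFromContext(ctx, t.name)
	defer span.Finish()
	span.LogKV("event", "request", "payload", req) // logs land in this fragment's span
	if t.next == nil {
		return req, nil
	}
	return t.next.Request(ctx, req)
}

func main() {
	// Chain two fragments; the trace shows the progression through the pieces.
	chain := &traced{name: "filter", next: &traced{name: "connect"}}
	resp, err := chain.Request(context.Background(), "conn-request")
	fmt.Println(resp, err)
}
```

Without a tracer registered this runs against the no-op global tracer; with Jaeger or similar wired in, each fragment shows up as its own span, which is the debugging property being described.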
There is a need there for better docs, and then there's also a matter we need to discuss, either now or further down the agenda, around multiple Go modules in the main repo. One of the problems with the SDK, which I think a number of people have hit, is that because the SDK is in the main repo, it pulls in a bunch of requirements via Go modules that you don't actually need, which makes things harder than they have to be. So we're looking at solutions, and the two that have come to mind are: it's possible to have multiple Go modules in the same repo, or we could break the SDK into its own repo. Do folks have thoughts or opinions? I know you were hitting some of this, Frederick.

Yeah, my preference would be to eventually do one of two things. The absolute best case would be to convince the Go team to do some analysis of what we actually need, in other words only download that and do a pre-compile step, but I don't think we're going to get that anytime soon. The problem is not the size of it, even though that will be a problem for some. The biggest problem we're going to run into is that when you pull in something that has a very large number of dependencies, you're creating a burden on the integration of that library with others, and you're limiting the scope of what others can upgrade to, potentially unnecessarily, considering that most of the dependencies are not being used. The biggest one I can think of from the SDK side is the Kubernetes dependency. I'm not 100% positive yet, but I don't think we actually have a dependency on the Kubernetes repo within the SDK, and that's by far the biggest one we need to jump over. The second thing is a problem with go mod tooling: because of the way Kubernetes is versioned and released, it's not very go-mod friendly, so you have to put in a list of something like fifteen different replace directives to actually pin exact versions. Once you've done that it'll work, but it basically turns into a magic incantation that feels fragile to me. I think this would go away if we were to split it off. Those are my primary concerns at this point.

Yeah, and it's also just good hygiene: reducing the scope of dependencies makes everything quite a bit easier for folks. Okay, do folks have other thoughts or opinions? I mean, we have a lot of projects in the same repo already. For example, our wonderful testing framework written by Andrei: I think it should live its own life in a separate repo. We also have the AWS and various other public cloud SDKs in there, which are just inherently there because we have scripts and whatever is needed to do our CI, and as you said, when someone wants to use our SDK they're essentially depending on AWS and whatnot, which is not great, right?

I'm thinking, in the short term, let's start trying to move some of these towards multiple Go modules, because there are some complications in breaking things into separate repos at this particular moment in time: we're doing some things with API refactoring that make it a little more complicated, particularly around the multiple data plane support in the kernels and moving mechanisms to being strings instead of enums. But if we get to multiple Go modules, then once the API settles down a little for 0.2, it becomes relatively easy to break these things into separate repos, and I think that makes a lot of sense. Plus, separate Go modules will force us to think about the interdependencies between the pieces. For example, right now, if we were to break the SDK into its own repo, the first thing we'd discover is that it's pulling the APIs from the main repo, and we'd have exactly the same problem; that's exactly what we'd identify in the process of going to multiple Go modules. Does that make sense?
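For reference, here is a hedged sketch of what that "magic incantation" looks like in practice: a go.mod that pins the Kubernetes staging repos with replace directives. The module path and pseudo-versions below are placeholders, not the project's actual go.mod.

```
// go.mod (illustrative; module path and versions are placeholders)
module github.com/example/networkservicemesh

go 1.13

require (
	k8s.io/api v0.0.0-00010101000000-000000000000
	k8s.io/client-go v0.0.0-00010101000000-000000000000
)

// Kubernetes publishes its staging repos in a way plain `go get` resolves
// badly, so each one gets pinned to a matching release by hand. The real
// list runs to roughly fifteen entries.
replace (
	k8s.io/api => k8s.io/api v0.0.0-00010101000000-000000000000
	k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-00010101000000-000000000000
	k8s.io/client-go => k8s.io/client-go v0.0.0-00010101000000-000000000000
)
```

Today, every consumer that transitively pulls this dependency set has to carry an equivalent block and keep it in sync, which is part of what makes splitting the SDK out attractive.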
Yeah, so maybe that's the way to go. Okay, cool. So, anything else on this before we move on? There's one experiment I'd like to try, which is having a small "kubernetes" repo that we import. It's not really Kubernetes; it's just all the replace directives stuck into a go.mod, and that's all it is. It won't solve the problem of downloading all of Kubernetes, but it would solve the problem of keeping the pins aligned across multiple repos. So it's just something to think about. Okay, cool. That's all I have.

All right, so moving on to the in-progress stuff. We're tantalizingly close on security, so Andrei... sorry, Ilya, I think you've got just a little bit of stuff to rebase, and then hopefully we're relatively good to go. Oh yeah, it's already rebased, and I'm waiting for some good results. Okay, so CI is running, and then we've got some more things coming as we move along on the security stuff.

SRv6: Artem, I think you have a little bit of a blocker there on a bug in VPP, is that correct? Do we have Artem on the call? We don't. Okay, so when I last spoke to Artem, he was saying there's apparently a bug in VPP around deleting SRv6 SIDs, and that's the last blocking piece for the SRv6 support, so we're working with the VPP folks to get that resolved. Is it in VPP or Ligato? We think it's in VPP. Don't get me wrong, we found some bugs in Ligato that got shaken out as well, and the Ligato team has been wonderful, and the VPP team is being wonderful about engaging to sort these things out; we're just relatively more dynamic than many consumers. Well, you'll be happier when we actually get the bugs fixed, and then you can tell us all the things we've done wrong, Daniel, because I'm sure there will be a list. I don't know; I'm actually following it pretty closely, and we're working on this from other angles with VPP too. No, I understand that, Daniel, but no one fully envisions what they need until they try to use it, and then they discover all the little things. Oh yes, I'm stuck in that right now. So we appreciate them.

Okay, so we do have a discussion that's still ongoing about moving some of the remote mechanisms stuff, around VNI selection, from the network service manager into the NSM forwarder. I think everybody agrees it's a good idea; it's just a matter of working out exactly what we want to do and how, and there's some refactoring going on in the data plane that hopefully will make that simpler.
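Since the details of moving VNI selection into the forwarder are still being worked out, here is only a rough sketch of the shape such a thing could take: the forwarder tracks which VXLAN VNIs are in use per tunnel endpoint pair and hands out a free one. All names here are hypothetical, not actual NSM code.

```go
package forwarder

import (
	"fmt"
	"sync"
)

// tunnel identifies a VXLAN tunnel by its two endpoint IPs.
type tunnel struct{ srcIP, dstIP string }

// VNISelector hands out VXLAN network identifiers; in the proposal this
// state would live in the forwarder rather than the network service manager.
type VNISelector struct {
	mu   sync.Mutex
	used map[tunnel]map[uint32]bool
}

func NewVNISelector() *VNISelector {
	return &VNISelector{used: map[tunnel]map[uint32]bool{}}
}

// Select returns the lowest VNI not already in use between src and dst.
func (s *VNISelector) Select(srcIP, dstIP string) (uint32, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	key := tunnel{srcIP, dstIP}
	if s.used[key] == nil {
		s.used[key] = map[uint32]bool{}
	}
	for vni := uint32(1); vni < 1<<24; vni++ { // the VNI is a 24-bit field
		if !s.used[key][vni] {
			s.used[key][vni] = true
			return vni, nil
		}
	}
	return 0, fmt.Errorf("no free VNI between %s and %s", srcIP, dstIP)
}
```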
So, do we have Radoslav? Radoslav, do you want to say some things about the kernel forwarding plane? He's actually on PTO this week. Okay, but I know a bit, because I'm using his metrics implementation. He still has a work-in-progress PR, and I'm not sure if it's ready; he left it marked as WIP before he left, but he pushed some fixes and changes, and it seems to me it's close to done. That's awesome news, because I'm very excited about the metrics stuff, which is up next on the list. Do you want to say a few things about the metrics work? Yes: I'm currently setting up a Prometheus server on the cluster, I wrote the other implementation for tracking metrics in Prometheus, and now I'm setting up the server to test all of that. And regarding the VPP issue, I think you've seen the issue that's open with the Ligato people; they agree with having metrics configurable, configurable in periods I mean, so my impression from what they said is that they're going to implement it.

Yeah, it sounds like they're going to implement it, and I think some of the initial confusion was that VPP is able to collect metrics at a speed that almost no system is able to consume. What they thought we were asking was that every time they update their metrics they'd send us a gRPC message, and sometimes people ask for things where you think, I don't believe you really want that, because they collect so many metrics so fast. When we said, hey, how about every so often you just give us a summary, that made a lot more sense to them. They throw off so much metric data that there have actually been innovations in VPP just to make it consumable: rather than providing metric events, they'll let you share memory where they update the metrics, because that's the only way you could possibly keep up. But obviously that's not what we want to do here; we're just going to get some gRPC messages. Shared-memory gRPC messages, that's what we need! Yeah, I'm not so sure about that, but okay. Cool.
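Since the plan is periodic metric summaries over gRPC rather than per-update events or shared memory, a rough Go sketch of the consuming side might look like the following: it takes periodic interface counter summaries, modeled here as a plain channel since the actual message type isn't settled, and republishes them as Prometheus gauges. The prometheus/client_golang calls are real; everything NSM- or VPP-specific here is an assumption.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// MetricsSummary is a hypothetical periodic roll-up of interface counters,
// standing in for whatever the VPP/Ligato side ends up sending over gRPC.
type MetricsSummary struct {
	Interface string
	RxBytes   float64
	TxBytes   float64
}

var (
	rxBytes = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "nsm_interface_rx_bytes", Help: "Received bytes per interface."},
		[]string{"interface"},
	)
	txBytes = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "nsm_interface_tx_bytes", Help: "Transmitted bytes per interface."},
		[]string{"interface"},
	)
)

func main() {
	prometheus.MustRegister(rxBytes, txBytes)

	summaries := make(chan MetricsSummary) // would be fed by the gRPC stream
	go func() {
		for s := range summaries {
			rxBytes.WithLabelValues(s.Interface).Set(s.RxBytes)
			txBytes.WithLabelValues(s.Interface).Set(s.TxBytes)
		}
	}()

	// Prometheus scrapes this endpoint on its own schedule, which is why a
	// periodic summary is plenty: anything faster would be discarded anyway.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```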
Cool. So, refactoring to simplify: do you want to say a few words, Andrei? I know you're starting the chain refactoring of the network service manager. Yeah, it's in progress, but I think we'll need more time, since it's quite complicated: I need to split all the Requests and Closes into two separate hierarchies, for local and for remote, just trying to make things easier.

The other one I want to make sure we capture here is that we have a full request out for refactoring the VPP-agent data plane into a more chaining style as well; that's issue number 1569. It's essentially doing a similar kind of thing, refactoring the data plane in the hope of making it much simpler to work with, and also hopefully making it easier for various people to build their own data plane, with or without the VPP agent. Because you guys have done a great job on the kernel forwarding plane, that's great stuff, but I suspect, particularly as we start looking at hardware NIC support, where the hardware NICs may have special features, that we're going to have a lot of people wanting to write their own NSM forwarders for a variety of reasons, and we want that to be as easy as it can be.

Cool. Are there other things people are aware of that are in progress right now? Well, I've got a question on the kernel forwarding plane: are we using iptables or any related machinery in it? No. Okay, cool, because there was a tweet put out by Tim Hockin about the shift from... do you recall what the shift was, Ed? I can just read the tweet out: it was about the shift from iptables to whatever is coming after iptables; I'm trying to remember which one it was, and it's part of the progression where they keep thinking they're going to solve their problems. I just wanted to make sure that was on all of your radars, because if there's an unstable API there at the moment, that's the kind of thing that breaks things if we're relying on it; it's best to keep that in mind.

Awesome. Anything else folks wanted to bring up as in progress this week? Cool. There were a couple of things I came across that I wanted to discuss past the in-progress things. One is that we just had someone open an issue saying, hey, I went to try and get the latest of this thing, and you don't have it yet. What I realized when I went to look is that in our production repos we have a tag for the branch, like a master tag, but we do not have the released version 0.1 of any of our stuff as a tag anywhere I looked. So did that not get taken care of when we did the 0.1 release mechanics? I was on vacation that week. For 0.1 there should be tags, or I probably didn't push them. Yeah, I went and looked for them, for example on one of the images; I'm not sure you're able to release without a tag, but okay. A question on this: are these git tags or Docker tags? Docker tags. So we're missing those; that was the first thing I noticed. The second thing I noticed is that we're missing latest tags for everything as well. Well, we agreed that there are no latest tags. I don't recall the full content of that conversation, but I thought we were going to have a branch tag, then a release version, and then a latest tag pointing to the most recent release. No, we have the master tag for the ongoing continuous release from the master branch. Right, but my point is, if we've got a version 0.1 release, I thought we were going to have a latest that points to the most recent release version. Does that make sense? Yeah, I think for Docker we need a latest, because if you do a docker pull of nsm-init or nsmd and so on, it defaults to latest, and that should go to the latest release. So I think we should change that particular policy and just make sure we have a latest tag. I think we should still have the master tag, because we want to be able to get whatever was recent on master, but latest shouldn't point to the head of master; that way lies madness. Okay, so the question is, who wants to pick up getting the 0.1 tags pushed, and the latest tags pushed pointing to 0.1, for the stuff from the 0.1 release? Should that be me, I guess? That is much appreciated; I know it's not a fun job, but it is much appreciated. I don't fully understand what happened there with the release mechanics, but I did want to bring attention to it, and to some of the tagging stuff. I see that for some of the containers the tag is there, but for some of them it isn't. Okay, so it's good to know it's there for some and not for others; we should definitely get it fixed for the ones where it's missing. We should also watch it on occasion, just to make sure we don't have an overly aggressive script that cleans things up. Also, are you sure we have an nsm-init in the 0.1? Because I'm not sure it's in there.
Oh, that may be true; that rename may have taken place longer ago than I thought, so that's also a possibility. Because with all the Helm charts, when you download them, that gets explicitly tested: they download the images onto a clean local Docker cache. Okay, so here's the thing I'm seeing. I know one that hasn't changed names in a long time, which is nsc. Yeah, let's see: it has both the v0.1 tag and the latest tag. Okay, so I may have misunderstood, because I thought the nsm-init change was more recent. Well, that latest tag is three months old, from when we stopped using latest on the main images. Got it, got it. I apologize that I've created undue confusion. No, I mean, it's always good to revisit; I was a bit like, I remember pushing all these things and testing them five times, right? Well, this is why you ask the question. Good, okay. The other thing I've been periodically cleaning up a little is that we have a few remaining CI tags running loose in some of these repos, just from the switch-over and things in older places, and I'm deleting those as I come across them; there aren't many. Okay, good. I'm delighted to have been sounding a false alarm there.

All right, cool. The other one I wanted to bring up, and this is purely for discussion at this moment, is something Andrei pointed out: with the switch of mechanism type from an enum to a string, we might be able to collapse the local and remote versions of the APIs into a single API. There are some things to work out there, around how to properly limit local and remote mechanisms to the proper context, because, for example, it makes no sense to allow a pod to ask its local network service manager for an SRv6 connection: there's no way to represent that to the pod, since the pod can only be presented with local things. But I think that's probably solvable. I take it from your comment, Nikolai, that this sounds like a good idea to you in general? Yeah, I mean, I think we have discussed this already; has anything changed in the meantime? I don't remember discussing it before, but maybe I'm just having a senior moment; that happens to me sometimes. So, does that sound like a good goal for people? There are some details to be worked out. Yeah, it sounds like a good one; it should hopefully simplify a lot of stuff, and simplification is always good. It warms the cockles of my heart that we're likely getting simpler as we get more feature-rich; it just makes me happy.
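To illustrate why the enum-to-string switch opens the door to a single API, here is a small hypothetical sketch, not the actual NSM API: with the type as an open string set, local and remote mechanisms can share one message, and scoping becomes a runtime check rather than two parallel type hierarchies.

```go
package api

// Mechanism is a hypothetical unified mechanism message. With Type as a
// string, adding a new mechanism doesn't require touching an enum, and the
// same struct can describe both local and remote cases.
type Mechanism struct {
	Cls        string            // "LOCAL" or "REMOTE"
	Type       string            // e.g. "KERNEL_INTERFACE", "VXLAN", "SRV6"
	Parameters map[string]string // mechanism-specific settings
}

// ValidForPod scopes mechanisms at runtime: a pod can only be offered
// local mechanisms, so e.g. an SRv6 (remote) mechanism is rejected here
// instead of being unrepresentable in the type system.
func ValidForPod(m *Mechanism) bool {
	return m.Cls == "LOCAL"
}
```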
All right, cool. I guess this is your line, Frederick. Cool. Let's see, is there anything else anyone would like to discuss? Okay, I'll remind people that we have until next week's NSM call to work out whether we're going to have a call during ONS. For those of you who are going, what we'll do next week is ask how many people will be around, and if we have enough people, I'll leave it to Ed to organize. Does that sound like a good plan? Actually, I just realized I have a question: do you want to just cancel those meetings that haven't happened for the last couple of months? They're just sitting there, and we keep repeating that they're on break, but it's September already. Yeah, I would say we should probably check with the folks who organized them and say, look, are you going to be bringing these back? If not, we're going to cancel them, in a friendly sort of way. Yeah, I think that's a good idea, and part of it is that Jeffrey is an example: Jeffrey was out for some other things he needed to take care of, and he's back now, so he may have intentions of doing some more things. So I'm not comfortable just outright canceling them at this point without having a good discussion with the organizers. But yeah, we definitely need to reach out, and I think it's okay to put them on hiatus; it's just that if they're going to be on hiatus for longer than some period of time, then we should probably remove them from the calendar. That doesn't mean they're gone forever; we can always start them up again as needed. Is Jeffrey on the call? I'm not seeing him, so let's reach out to Jeffrey, and let's reach out to Romkey. Is she on the call? Yeah. So let's reach out to them and ask. Okay, cool. Is there anything else anyone would like to discuss? All right then: thank you all for attending, and we will see you all again next week. Take care. Yes, bye-bye.