Okay, so let's get this started. To start off, if you have not added yourself to the attendees list, please do so. The meeting notes are listed on the GitHub page and have also been posted to the chat window. So welcome to our Network Service Mesh meeting. And as always, we begin with agenda bashing. Is there anything that anyone would like to discuss that is not on the agenda? I'm gonna take that as a no. So let's jump straight into events then. We have KubeCon EU in Barcelona, Spain. The call for papers closed as of January 18th, and we have multiple submissions in. March 5th is the notifications, and the announcement should occur on March 7th. KubeCon itself will be held May 20th through 23rd. My recommendation for first-time conference goers is to book early, because those hotels fill up pretty fast. And very often they'll also have accommodations listed that are officially arranged between the conference and some of the nearby hotels. We also have a co-located event at KubeCon EU: a mini-summit, which will be on May 20th. It will be in Barcelona, but the exact venue is to be determined. And there is no call for papers listed yet, as far as I know. We also have Mobile World Congress coming up at the end of February, February 25th through 28th. This tends to be more demos on stands, as it's service provider centric, so the NFV use cases are more interesting. If you have a booth there and are willing to showcase Network Service Mesh, let us know and we'll see what we can do to help you put together something that's successful. We also have the Open Networking Summit, ONS North America, in San Jose on April 3rd through 5th. The call for papers there also just closed recently; we'll see what the schedule is. And actually, February 5th looks wrong for notifications, because we need a little bit more time than that, so I'll double-check the dates on that.
Yeah, for program committees, that's a really aggressive review timeline, especially for something this size. Yeah, they're asking the program committee to actually turn in their reviews on February 10th. So that's why I'm surprised by the deadline. I need to check with them and make sure that they have that correct. We also have FOSDEM, which is also gonna be streamed live on February 2nd and 3rd. And actually, is that correct? Sorry, there's FOSDEM streaming, and then it looks like there are multiple conferences listed under... yeah, sorry. So we have FOSDEM streaming live on February 2nd and 3rd. We have the Upperside conferences, MPLS SDN NFV, in Paris on April 9th through 12th. The deadline was sometime late last year, but it'd be interesting, if you are conveniently located, to check that out. We have Container World 2019, and we actually have a network service mesh talk at Container World being put on by Prem from Lumina Networks. Yeah, I'm excited about that; the talk got accepted. Yeah, thanks for bringing that in. Cheers. And we have Service Mesh Day; the call for papers is going to close on February 8th. This is conveniently located near me, so I will definitely be putting in a talk for that. And we have KubeCon + CloudNativeCon + Open Source Summit China in Shanghai, which will be from June 24th to 26th. So if someone wants to travel to China and bring the word of Network Service Mesh there, then February 15th is your deadline. So are there any announcements that anyone has to make? I should have asked this before: are there any events that we missed that would be good for the list? Okay, in that case, let's jump into the pull request template. Nikolai, you have the floor. Yeah, so quickly: I have put up this PR where I propose this new template. The final version, after the two remarks that I fixed, looks like this.
The purpose here is to improve our PR messages and have a better structured, better looking git history. And yeah, that's it. So I suggest we don't go into details here now. The PR is there; I don't remember the number, but the link is there. If someone has any objections, ideas, or suggestions, just put them in the PR, and then we will merge it in a couple of days if nothing major shows up. Yeah. So I mean, one of the nice things about a PR template like this is it kind of makes you think about things, right? I can't tell you how many times I have pushed a PR where I probably should have changed documentation. And if someone had just said, "have you changed the documentation?" like we do in the checklist, I probably would have. So it moves some of these things front and center. Okay. So the other thing that I wanted to quickly update, inform, announce: we had a quick chat about it, and I have proposed this a long time ago, to start having a separate examples repo. So I have something going on in this PR, which I call proxy NSC, which is essentially an HTTP proxy, which also has an NSC on the other side. So I will start by trying to push this example into this new repo, and then we can share it and see how it works: how it integrates with the main repo, how the integration testing can be done, and all these things. This is what is going to happen in the next days, I hope. So maybe by next Tuesday, we will have this new repo, and some initial feeling of what it looks like to have separate examples. Cool. That's it, if folks have comments on any of that or questions. So does that mean we're extracting the examples directory in network service mesh out to its own repository, or is this something that's new? So for the first initial version, it will be just the new examples: I have a new example, and I will just put it in a separate repo and see how this works.
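For readers unfamiliar with the mechanism being discussed: a PR template typically lives at `.github/PULL_REQUEST_TEMPLATE.md`, and GitHub pre-fills it into every new pull request description. As a rough sketch, with illustrative field names rather than the actual template from the PR:

```shell
#!/bin/sh
# Sketch: create a PR template whose checklist nudges authors to
# think about docs and tests. The fields here are illustrative,
# not the template proposed in the PR under discussion.
mkdir -p .github
cat > .github/PULL_REQUEST_TEMPLATE.md <<'EOF'
## Description
<!-- What does this PR change, and why? -->

## Issue link
<!-- e.g. Fixes #123 -->

## Checklist
- [ ] I have updated the documentation where needed
- [ ] I have added or updated tests
- [ ] Commit messages follow the project conventions
EOF
echo "Template written to .github/PULL_REQUEST_TEMPLATE.md"
```

Once this file is merged on the default branch, every new PR opens with the checklist already in its description.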
And if this goes well, we can discuss whether we want to move some of the already existing things there, or whether we should keep part of them in the current repo. But at least I think that, going forward, all the new things that we want to do should probably go in this new repo. New things meaning new examples, new use cases. Yeah. Yeah, so it'll take a little time, and there'll be a little bit of adaptation, because among other things we're going to figure out the CI as we move the examples. Yeah. It gives us a place for them to live. All right, so let's go over the... or is there anything more that we want to say on this, or should we move on to the specifications board? I'm fine on my side, if anyone has any comments. All right, so there is a brand new board; let me go ahead and open it up. The purpose of this board is to start tracking, and to make it easy to find, what discussions are going on within Network Service Mesh in relation to architectural changes or specifying new features, and also to be a place for the community to ask for help on reviewing things that the community may want to do. So for example, if you take a look at the specs in progress and you click on one, you'll see it's attached to an issue, and on the right there's a Google document embedded: a short description of what it is, plus the Google document. Inside that Google document is where we can discuss what that specific feature is. Now, the reason it's a Google document, and not being discussed directly within the issue at this point, is that the Google document is a bit easier for people to collaborate on and look at in a long format, without having to worry about markdown or about how GitHub renders things.
GitHub is not really optimal in this space. The idea is that once we have enough feedback, people are generally happy, we as a community have reviewed it, and it's something that the core team wants to incorporate, we will take this document, convert it to markdown, and commit it into a to-be-determined directory within the network service mesh project. The idea is that this then becomes part of the documentation of what that specification is, and the end result should be a document that the community can rally around in order to implement new features, where it should be very clear not only what the feature is, but what the path is to reach that particular feature. And if there are any changes as we learn during the implementation, because, as we know, specifications don't always meet reality, the ask is that by that time we'll have the markdown in the GitHub repo, and then we can iterate on that for the details. But the main idea is to drive the community towards being able to understand, ask questions, and poke holes in an easy way. And if you want a review of something you're proposing, suppose you have a firewall network service endpoint that you want to expose, or, to be more exact, you want to create a network service endpoint that exposes your firewall, then you could also ask questions on that board and create the Google document in the same format. The end result, though, will not be any markdown in the repository, because that belongs to a third-party project, and that project can then decide what to do with that documentation. So are there any questions around that? Some of this, quite frankly, is an attempt to get a little bit more transparency, which I think is good in general.
The other thing is that there are a bunch of small things here that are actually pretty easy for people to pick up, like the mutating admission controller and readiness probes, and I know we've got a bunch of people in the community who would like some small, meaty chunk of a thing they can work on. And so my hope is that we can get these to a place where folks can pick up something like the readiness probes and do that, right? Which would be super helpful, because right now our integration tests are doing the only thing they can do, which is scraping logs to determine when NSMD and the VPP agent data plane are up and going. And there are a lot of little details like this that can be broken into bite-sized chunks that people can work on. Yeah, so our hope is to try to make it as easy as possible for people to contribute, and this is also a nod to the fact that some contributions are not necessarily on the coding side but also on the design side. So for example, if we have something that's related to SR-IOV, I'm pretty sure Ian would have a lot to contribute in that space. And that would allow, if Ian does not have the time to implement such a thing, for that knowledge to be instilled in somebody who can potentially pick it up. And we do have folks; I know Daniel, you're super interested in getting the SR-IOV, sorry, not the SR-IOV, the SRv6 remote mechanism in. And quite honestly, I know you're super busy; you're not gonna have time to code that yourself. But if we can get that worked up as a spec that somebody who wants to go write code can follow, that gets to be super easy for people to pick up and get done. And I think the same thing holds for some of the stuff, Jeffrey: I know you care a lot about MPLS. And although I think you're a little more prone to write code than Daniel is, you're probably wondering where to start, right? And so we could help shake some of that out as well.
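To make the log-scraping point concrete: a readiness probe lets Kubernetes (and therefore the integration tests) ask a pod directly whether it is up, instead of grepping its logs. A minimal sketch, assuming a hypothetical NSMD container exposing an HTTP readiness endpoint; the image, port, and `/readyz` path are illustrative, not the actual NSMD API:

```shell
#!/bin/sh
# Sketch: a pod spec with a readinessProbe, so tests can use
# `kubectl wait --for=condition=Ready` instead of scraping logs.
# The image name, port, and path below are hypothetical.
cat > nsmd-probe-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nsmd-sketch
spec:
  containers:
  - name: nsmd
    image: example.invalid/nsmd:latest
    readinessProbe:
      httpGet:
        path: /readyz
        port: 5555
      initialDelaySeconds: 2
      periodSeconds: 5
EOF
echo "Wrote nsmd-probe-sketch.yaml"
```

With something like this in place, a test can simply run `kubectl wait --for=condition=Ready pod/nsmd-sketch --timeout=120s` rather than tailing logs.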
Does that make sense to folks, especially the folks in the community who are looking to pick up a shovel and write some code? Makes sense to me. And then, OGSI, you're looking for something to pick up and work on as well, and hopefully some of these are things that you'd be comfortable with. Yeah, absolutely. So basically, one quick question I wanna ask: we are now discussing adding SRv6 or something like that. What we are planning to do is implement, say, those VNF functions, or basically we just encapsulate those things and try to route things to our app pod. So I'm still a little confused about that part. Yeah, so there are sort of two sets of things here, right? The thing that is within the purview of network service mesh is the remote mechanism: basically an encap to get from A to B, right? A thing to carry an L2 or L3 payload. That's the stuff that network service mesh deals with. So when we say something like SRv6, we're actually talking about a super simple, almost stupidly simple thing in network service mesh, where instead of using VXLAN to carry your packet or frame from a client to a network service endpoint, you could use SRv6, or you could use MPLS over Ethernet, UDP, GRE, whatever encap you like, right? So these are all within the purview of network service mesh. Now, there's a ton of interesting stuff that somebody might write into a network service endpoint for a CNF that does much more complicated things with MPLS or SRv6 than we're ever going to do. And part of what I think Frederick intended with this specification process was: say you wanted to write a network service endpoint that did some complicated MPLS thing.
Then, in that case, you might say, okay, well, I know that I want to do this thing, and I actually want to do it over here in my own repo, but it would be good to get advice from the network service mesh community on what they think might be an optimal way to do it. And this gives you a venue in which to go and actually start that conversation, and basically say, look, I'm building some complicated SRv6 network service endpoint; I'd like some advice about how to make that work optimally with network service mesh. And it's not required for you to commit something to this if you're doing something as a third-party endpoint. So if you don't feel comfortable, or you don't want to, or you feel you already understand what's going on, then you don't have to post here. It's not a gate. It's something where people can ask for advice, like "I'm a newcomer," or "I have some complicated thing and I want to make sure that I'm thinking of this in the right way, so I would like feedback; here's my stuff." Yeah, I'm personally not at all a fan of hard gates; I think they end up being problematic. But I'm a super big fan of transparency, and I'm a super big fan of having a well-defined venue for people to get advice if they want it. Hey, Ed, could I ask a quick question? I've been trying to wrap my mind around network service endpoint versus, you know, NSM in general and stuff. And so when we talk about wanting to do something interesting with, say, SRv6 or MPLS, et cetera, where I get a little bit fuzzy is on where the network service endpoint lives versus, you know, just the data plane that I bring to the picture, whether that's OVS or VPP, whatever.
Am I writing my network service endpoint to call and configure, say, potentially VPP, so that I've got some intelligence living inside of this network service endpoint that is then programming my data plane? Or is VPP living inside of this network service endpoint, and I'm actually just hard coding this functionality into the network service endpoint itself? Yeah, so generally speaking, your network service endpoints would bring their own data plane, whether that's VPP or something else. So say, for example, and Daniel, I'm gonna embarrass myself here, so be kind, that I wanted to write an SRv6 network service endpoint that did some complicated routing on SRv6 headers, right? So it understood a lot about them, and I was pushing information into it about my physical network, because I wanted particular SRv6 headers added to my payload, right? So a packet comes out of a client, and I want the payload to have SRv6 headers, not just the thing that gets it to the network service endpoint. In that case, one way to approach this would be to write the network service endpoint, and the SDK is super helpful for this, by the way, and my network service endpoint would have its own VPP that processes the packets when they get delivered and does whatever the more complicated network behavior is. Network service mesh is only about, I think, managing virtual wires between things, right? So you get one end that plugs into the client, you get another end that plugs into the network service endpoint, and once that packet arrives at the network service endpoint, then you have all kinds of interesting processing you could do with it. There is literally no end to the creativity of network people in this dimension. Got it, so really I'm just gonna set up my service chain then, right? And then that network service endpoint is where, once I want it to leave this cluster, this is the final encap that I'm gonna stick on it. Or any number of those.
Yeah, maybe I can explain it another way; in the next instance, I don't wanna burn a network service mesh call on segment routing v6, but one notion of segment routing v6 is the service programming aspect. So you create a mesh of services to address the function you wanna create, and you assign them using local SIDs or IPv6 addresses. So if you look at it this way: I can create a mesh of services using IPv6 addresses with SRv6, but I still need to program the control plane to make it happen. That's where NSM comes in: it makes it easy to say I wanna create a mesh, so that the workloads know what to request and what to get. And then I can translate that back into SRv6 at the endpoint, so I know how to program the data plane to make it stick together. So I get kind of a control plane for the service discovery and how to mesh those services together. So there's really a good tie-in between service programming and network service mesh. That explains it a bit better. No, that's absolutely exactly what I would expect. And so does that answer your question, Jeffrey? It does. It's just that, reading through the documentation and stuff, it's been a little bit fuzzy to me where the actual CNF logic lives: whether it lives in that endpoint, or whether the endpoint is just some dumb forwarding plane that I'm stacking services behind and then using NSM to stitch them together. And that kind of clears it up for me. Yeah, would you be willing to help at least start some documentation that might make that clearer? Because if you're having that confusion, lots of other people are, and it's super helpful as people figure things out if we can get documentation on it, because you will not be the last one to have this question. Yeah, no, I think that would be good, and I'm definitely willing to help with that.
Just have some of you smarter people kind of make sure that my information is correct. I would agree: the docs are great, but when I go there and I'm reading about Sarah and her storybook adventures and stuff, if you really wanna start building this out in your lab and you're coming in from ground zero, there is definitely an exponential climb that you have to make before you hit that plateau and things start to make sense. Yeah, and we really wanna make that easier, and you have an advantage right now, which Buddhists refer to as beginner's mind, and that's a super helpful advantage in these scenarios. Yes, so guys, just one thing I wanna clear up. So NSM plays the role of wiring: does it connect, say, the first part, the app pod, to certain VNFs, or does it actually connect between those VNFs? It can actually do both, right? Oh, okay, cool. It connects your client to a VNF, or a CNF in this case, a Cloud Native Network Function, or even a VNF that's running outside of a cluster, because network service mesh can actually interoperate with VMs and physical networks, right? So it connects your client to a CNF or VNF, but it can also connect a CNF or VNF, as part of a chain, to another CNF or VNF. And so you can build out, in a sort of Cloud Native, policy-based way, super complicated graphs of CNFs if you want, and network service mesh will dutifully provide the virtual wires to connect them point to point to each other. And then inside those CNFs that you're building, what we call the network service endpoints, you would then have whatever it is you're going to do to the packets, and there's an infinite variety of things you might choose to do to them. That's much clearer, thanks so much. That was a really good question to raise.
So something that I would like to do as well, in regards to the architecture: if you think about the documentation in this way, the spec itself is also documentation, and vice versa. So what I would suggest is we start to more formally specify what network service mesh is through the specifications board as well; we just happen to have running code. But this also gives us the opportunity to work out not only where things are, but also to fill in any holes that we have, because we can always change the code to match the specification if any design changes occur. So what I'll do is start with some of the more simple use cases, and we'll say: what does it mean to be an NSC? What is the relationship between an NSC and an NSE? And I think if we start off with those and progressively work our way up the chain, we'll get down to: what does it mean to be a remote mechanism? What does it mean to be a data plane, and so on? So I think we can drive that documentation through the same board, and the end result is markdown, because it's exactly the same process. And that would give us the ability to discuss: does this wording make sense? Do we need to specify this out more, or is this too complex, and so on? So I think this would be a good way. Would you be willing to help participate in that style of discussion? I would say the only other piece of that, Frederick, is both an overarching spec for this and those individual components, right? Like, when I go to the git right now and I'm reading through things, I kind of piece it together in my mind, reading through Sarah's adventure. But I think, especially when you guys move away from the technical community, and all these different vendors start trying to productize this and put it in front of executives, fundamentally it needs to work for network people, because this is who we're kind of catering to.
We like pictures with circles and squares and lines connecting them, right? Like having blocks. So like you said, you make these individual specs, but then there also has to be a higher-layer abstraction, where it builds upon itself in layers, right? I mean, once again, we like the OSI model, right? When we start talking about making this consumable by network people, because once this gets more mature, less and less code-focused people are gonna use this, and more and more network consumers are gonna take this and just go grab a network service endpoint. So there needs to be a very clear representation of all the individual components, and then of how all those Lego pieces fit together, as far as the documentation goes. So once really smart people like you have written 80% of the code and it's in some usable format, some end consumer can go and say: I'm gonna grab these Lego pieces, this is how they fit together, and I'm just gonna put this into my network. Yeah, that's excellent advice. So one thing I would strongly, strongly, strongly suggest to you guys as you dig into this: you've got folks like Frederick and Nikolai and myself and others on the NSM IRC channel all hours of the day and night. So don't hesitate; jump on the IRC channel and say, this part is confusing to me, could you help me understand? Because we would be delighted to do that. True. I don't know if "delighted" is the right word; there's something above delighted, it's not strong enough. So yeah, I think what I'll do then, for the documentation specifying what's there, is perhaps we should not start off with an individual component, but start off with a high level: these are the major components. And we can set that up as the overarching document where you can say, what is network service mesh?
And if you wanna drill down, then you can click on the link, and it'll say, well, what does a network service client really do? Then you can drill down on a network service client, see the relationships, and talk about the interactions. Are you familiar with model-based systems engineering? I was, a very long time ago; I can refresh myself on it. I mean, it's not something we have to employ, just something like that concept, right? What you were just describing is that we basically need a high-level architecture document, right? That you can put in front of people to say, you know, this is what NSM is. Like I said, I'm gonna keep picking on Sarah, because I've read her like four times trying to puzzle this out in my mind. And Sarah is really good at explaining to me how NSM might make my life easier, but then when I'm ready to take that next step, I've only got these very vaguely abstract associations in my mind of what NSM is versus what the daemon is versus what the endpoints are. You know, when you start looking at CRDs in Kubernetes, if we don't have a sound explanation right off the bat, there are lots of people whose stomach literally drops to their knees when you say CRDs. So saying, no, this isn't a scary thing, and this is how it fits into all of this, I think that matters. And I just bring up model-based systems engineering because it's just this concept of, you know, basically these continuous rabbit holes that you can go down, whether you take the blue pill or the red pill: you start at the top layer, and then it's on you to decide how far down one of these rabbit holes you go. Like, if you want to get all the way to where you've clicked into the network service endpoint SDK, then so be it.
And if not, you know, you stop a layer or two up, but at least you have, conceptually, an idea of what problems this is solving and how you might potentially employ it. Yeah, and this is where your contributions are so critical, right? So if you go look at the network service mesh deep dive slides, this was my attempt to try and do that. Clearly it's not sufficient for the purpose, right? But, you know, it gives you some pictures you can start with, and then we can hash this out and get something that's actually going to be comprehensible to more people. And that's something that is much easier when you come at the problem with beginner's mind. Sure. I'm also interested in that part, so maybe I can help try to figure it out, since this is also part of the process of learning to fully understand the role NSM really plays. Yeah, and it's going to be a progressive thing, because the thing with network service mesh is that it is unbelievably powerful, right? And so it's the kind of thing that you wrap your head around in layers, right? Layer one is sort of the Sarah thing: how does it make my life easy, right? And then layer two is, okay, how do you understand how the pieces fit together? Because among the things we could do in network service mesh, even beyond what we've sort of specified for Sarah, is that network service mesh could be used to request network services from physical networks. So it provides a super easy way for somebody who's just writing a workload to ask for something that may be a super complicated thing that happens in the physical network. It can be used to give you a way to get back to VMs as well. It can be used with PNSMs to do all kinds of interesting hinting in your physical network. Or even, we've got a spec here for the create PNSM, right? We've sort of come to realize, do you guys remember the create verb that we talked about in some of the talks in passing? I do.
Yeah, so imagine being able to have a network service endpoint created on demand, at the scope that you wanted. You know, maybe you would prefer it to be on the same node as your client. The PNSM lets you do that in a very simple way. And we have done just a terrible job of documenting the space of possibilities, and they're super exciting. So if you guys are willing to help do that documentation, in progressive layers, in a way that makes sense to you, that will make sense to a lot of other people too. Hey, I'm here, so I can also pitch in; I'm preparing a few slides for the Container World talk, where my intent is essentially to give a 101, so that people who are not familiar with service mesh or network service mesh can see how all of it plays together. So I can probably work with Jeffrey to define it. In fact, when we started, we created a use case document just to showcase how network service mesh would play out with the use case. We may need to simplify it, or we can relook at it. Yeah. Excellent. So I appreciate you guys being willing to jump in on this. It's super, super important, and as I said, it helps tremendously, the work that you're stepping up to do. Sounds good. Yeah. So, Jeffrey, I'll probably connect with you and also with others who would be interested. Yeah. Yeah, probably. Yes. Cool. Shall we move on next? Yes. So, Nikolai, you have the floor for the 2019 roadmap brainstorm, and I want to remove myself from that at this point, because I've said everything I wanted to. Yeah, okay. So I don't think that we've had anything added since the last couple of times. So I would propose at this point that we start converging on at least the first deadline or first milestone. I'm not sure what would be the right format to actually fix this, but we should figure out the milestone name, and we had some comments here about semver and things like that.
But at least the proposal that is here, and where today we seem to agree, is that it would be announced at KubeCon EU. It would be the very first official release of NSM. And yeah, I don't know how we can go through this list of features, but I think the most important thing is that we should be able to demonstrate stability. I mean, somehow demonstrate that this is a project, even with the very basic features that we can offer in the beginning, that is very reliable, something that can play well in a larger environment. And we've also started working on this proof of concept for Google Cloud, so things along those lines. No, I think this is why, in addition to semver, and I think semver is sort of a universal goodness, it also gives us the opportunity to pick a release code name, because release code names are super fun. And so we need to figure out some really cool code name for the first release, but I think the timing you proposed is actually correct: we want some kind of a release going into KubeCon in May 2019. And I think you're right that the most important things are stability and resiliency. And when I say resiliency, what I mean is, I had a slide in one of my talks where the whole slide was just "pods die," because they do, right? The entire design in Cloud Native is that, and it's not just that pods die; pods restart, right? And being able to show that we can kill off the various pods in the system and stuff keeps working, with at most a small blip in network connectivity, right? If you kill off the data plane, you will get a small network connectivity blip, right? But if you kill off other components, then you shouldn't get a network connectivity blip at all; you should be able to have the system recover to the correct state and move forward. So that seems like the most important thing in my mind. And then after that, there's a lot of other cool stuff that we could do.
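The "kill pods and watch the blip" idea can be sketched as a crude resiliency check. This is a hypothetical script, not the project's actual test suite; the namespace, label selectors, pod names, and address are made up for illustration:

```shell
#!/bin/sh
# Sketch of a resiliency check: delete a pod by label, wait for its
# replacement to become Ready, then probe connectivity over the
# virtual wire. All names below are hypothetical placeholders.
NAMESPACE="${NAMESPACE:-nsm-system}"

kill_and_check() {
    selector="$1"   # e.g. app=nsmd (control plane: expect no blip)
    kubectl -n "$NAMESPACE" delete pod -l "$selector" --wait=false
    # Wait for the replacement pod to report Ready (needs probes!)
    kubectl -n "$NAMESPACE" wait --for=condition=Ready \
        pod -l "$selector" --timeout=120s
    # Probe connectivity from a client pod across the virtual wire
    kubectl exec nsc-client -- ping -c 3 -W 2 172.16.1.2
}
```

Note that the `kubectl wait` step only works once readiness probes exist, which is one reason they sit near the top of the usability list.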
And part of that is going to depend on people's interests, but I would almost say that after resiliency, usability becomes a big matter, right? So things like the mutating admission controller, to make it easier to use network service mesh. Okay, I mean, I think the bigger question that we should answer now is: what would be the right format to fix these things and start having a dashboard where we can track them? Because — so February ended, essentially. So we have — okay, sorry, January ended and February started. Don't scare me like that. Don't scare me like that. Yeah, sorry, sorry. Okay, so essentially we have February, March, and by sometime in April we should start fixing things into the release, so that for KubeCon EU we have something that has been tested for at least a couple of weeks before we go there. What might be good would be to put together a project board with things that are on the critical path to that release. I don't want to give the impression that we can't do other things, because we can, but it also gives us focus, capturing the things that are most important for getting to that release — the things that have to happen. Yeah, it's also probably worth discussing whether we want to have a release branch, or whether everything is done in the master branch; I mean, these are different release strategies. So the question is, are we fixing this within this call — this week or next — or how are we going to proceed with that? So I think there are a few things that we have effectively agreed on in this call. I think we've agreed that we want to do a release going into KubeCon EU. Does anybody have a problem with that, or does anybody have other ideas? Okay, cool. So I think we've agreed on that. I think your proposal of a 0.1.0 semver release is probably quite reasonable. And I think we have an open question that I'd encourage folks to brainstorm on: a code name for the release.
I think we have a rough notion of stability and resiliency, followed by usability, as being the primary critical-path objectives for that release. And I think we've agreed that we want to do a project board to track the critical path for that release. Does that match up with the things we've agreed on so far? That much is my view. So we focus on the primary things for the release, and that doesn't mean that the community is locked out. If there's something that's very important to you that you want in, and you're willing to work towards it, we won't say it's not part of the release, as long as the architecture has been accepted by the community and by the core development team. So if you have something that's on the roadmap, or something you want to pull the trigger on earlier, don't say "oh, that's not on the roadmap, so I won't do it," and wait till later. But other than that, yeah, I think that's a good approach. The other thing I would suggest — you talked about pulling a release branch. My experience has been that there are a bunch of different ways to do it, but one way that works really well is that you do all your work on master up until the point where you pull a single stable branch, and the point of the stable branch is just hardening and bug fixing. Then you do the rest of the work for that release — hardening and bug fixing — on that stable branch. This has the benefit that it always keeps master open, because you've pulled your stable branch; if somebody wants to do something a little risky, they can do that in master. So that may be one good way to look at this. And then we would just need to figure out at what point we need to pull that stable branch. For actual releases, I think that we should make use of tags, because the entire Go ecosystem and tooling is based upon tags and semantic versioning. I'm usually in favor of that, usually.
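The branch-and-tag flow described above can be sketched with plain git commands. This is only an illustration in a throwaway repo — the branch name `release-0.1` and tag `v0.1.0` are assumptions for the sketch, not decisions made on this call:

```shell
# Illustrative release flow: feature work on the default branch, a stable
# branch for hardening, and a semver tag that Go module tooling can resolve.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# Stand-in for ordinary feature work on the main development branch:
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "feature work"
# Cut the stable branch once feature work for the release is in;
# from here on, only hardening and bug fixes land on it:
git checkout -q -b release-0.1
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "hardening fix"
# Tag the release itself; semver tags are what Go tooling keys off of:
git -c user.email=dev@example.com -c user.name=dev \
    tag -a v0.1.0 -m "first release"
git tag -l 'v*'   # prints: v0.1.0
```

The development branch stays open for riskier work the whole time, which is exactly the benefit described above.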
Cool, so it sounds like we have a plan there. Nikolai, would you be willing to start putting together the board, and a concrete proposal for when you think might be a good time to pull the stable branch? Yeah, yeah, I can do that. So about the proposal, should I do it here in this doc, or — okay, okay, I'll figure it out. Yeah, I trust you, you'll figure something good out. Yeah, yeah, sounds fun. Okay, is there anything else we want to talk about on this, or should we jump to monitoring and metrics? Cool, yeah, I'm happy to talk about monitoring and metrics, and we have Matthew on the call, which is also good, because I think he's the one who's possibly most excited about them in the immediate term. What this really comes down to is that right now we have these MonitorConnections and MonitorCrossConnects calls that we make, and all they really do is give you information about the state of connections and cross-connects; this is how the work with Skydive is building out the topology. But it would be super useful to have metrics too. So one of the things that I've been bouncing around, and chatting a little with Matthew on the board about, is what metrics we want to report in network service mesh. My initial thought was interface stats, because from those you can derive a lot of stuff, right? If you're getting the interface stats updated periodically, you can use them to derive information about throughput and all kinds of other things. So does anyone have something they'd like to say about this? Do we have Matthew — was he here earlier? We do not have him. Oh, okay, cool. I didn't see him. You wanna say a few things? I totally agree that we need these kinds of stats, and something else that would be really interesting would be to have the delay of the packets. This is something we can deduce by observing the flows going in and out of the wiring.
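To make the "derive throughput from periodic interface stats" idea concrete, here is a minimal Linux-only sketch that samples an interface's byte counters twice and computes the rate. The interface name is just an illustrative assumption; NSM would report per-connection stats through its own APIs, not by reading sysfs like this:

```shell
# Sketch: derive receive throughput from two samples of an interface's
# byte counter, one second apart. Linux-only; "lo" is an arbitrary choice.
iface=lo
rx0=$(cat /sys/class/net/$iface/statistics/rx_bytes)
sleep 1
rx1=$(cat /sys/class/net/$iface/statistics/rx_bytes)
echo "rx throughput: $(( rx1 - rx0 )) bytes/s"
```

The same delta-over-interval arithmetic applies to tx bytes, packets, drops, and errors, which is why raw interface stats are such a useful base metric.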
And also I was thinking about developing a dedicated Skydive pod for VPP. So I think I would have the same metrics that you are going to provide me anyway. Cool, question: Matthew, would you be willing to do the metrics spec? Yes, if you want. Cool, okay, cool. So I've added that to the spec board. If you could pick that up and start an issue and a Google doc, we can get that hashed out — hopefully we'll have it hashed out and can talk about it next week, if not before. And then we can start executing on getting new metrics. I like the latency metrics idea a lot. One of the things I think we may have to do is do this in two steps, because interface stats are relatively easy. For latency, we would need something like IOAM, and we can absolutely do that — it's just a little more work. Does that make sense? Yes, it does. Cool. Or there may actually be something simpler than IOAM that we can do. It's really a matter of — you know a lot about these kinds of technologies, you're a network guy. So I'm super open to ideas on things we might do to measure the latency across these different L2/L3 connections. Yes, okay, I will write some ideas about that in the spec. Cool, that was basically it for that discussion. Cool, so we have a few more minutes — let's go over the Envoy spec. This is a first draft, and it's covering a proof of concept for Envoy, not the hardened version that we want to settle on. So for those of you who are unfamiliar with Envoy, it's an open source edge and service proxy. Basically it's a proxy that you apply a configuration to, and then it does whatever that configuration says. It works primarily at the L4 and L7 layers, with a mixture of L3/L4. Typically the deployment is a Docker image. You can build a binary from source — it's not necessary to run it from a container — but the container is the most common way.
So Envoy is the thing that actually ends up driving the interesting networking parts of projects like Istio and Ambassador. There's a series of different features it can perform, such as CORS filtering and fault injection; you can bridge gRPC, or you can write Lua scripting, and so on. So there's quite a significant amount of interesting things you can do within Envoy to implement something that's of interest to you. It's also incredibly fast, and it supports things like live upgrade of the binary without dropping connections. So Envoy falls straight into being a network service: you can ask for Envoy as a network service — at least that's what we believe the world should look like. The application could request the application proxy of the type that it wants. So the initial version should be written entirely with kernel interfaces: you have a kernel interface coming into Envoy and then a kernel interface to get back out. Actually, in this scenario we might even just use default Kubernetes networking, and not even bother with the network service client at this point, to get back out of Envoy. So there are a couple of things we'd have to add in here as well. The first one is to create a deployable Envoy deployment, so the pods don't need to be created on the fly; in other words, assume that we have an NSE that is already created — there is work coming later that will help with deploying this on the fly at a pod or node level. We're going to run the pod as privileged to start off with. In the long run we don't want it running as privileged, but in the short term we need to add some iptables rules, which requires some privilege. What the iptables setup does, very specifically, is inject what's called a redirect rule into iptables. A redirect rule does two things.
First, it redirects all incoming traffic on localhost to a specific port, so all the traffic coming in on one port is now redirected to Envoy. Envoy then uses an ioctl-style socket call to work out, for each TCP/IP stream, what the original destination was, so that it can do something intelligent with the data in the stream. Once that's correlated, Envoy can do whatever it needs to do in order to work. So in the initial version, the pod will need to have some privilege. Ideally, we'll be able to extract that privilege and remove it, and have something else that's more trusted own it — or, even better, find a way to have Envoy modified so it no longer requires privileges. But that's out of scope for this proof of concept. So the idea would be to create a network service endpoint which can accept an L3 connection. Technically you could do L2 as well, but for the proof of concept we'll focus on L3. Once we have a network service endpoint created, we should verify that it works. Then we need to create a new Envoy network service endpoint pod image. What this means is you take the original Envoy image, as released by Envoy, and add a layer on top of that which injects the network service endpoint binary. That way, when you run it, it doesn't run Envoy directly; instead it runs our network service endpoint, which then acts as the init process for Envoy. So the Envoy network service endpoint will both receive requests coming in and simultaneously control the Envoy process. I will spec that out more — I was in the middle of writing this and didn't have time to finish it. The last part was ignoring the network service client side: how do the packets get out? In this case, we just use default Kubernetes networking to communicate back out in the initial proof of concept.
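The redirect mechanics described above can be illustrated with an iptables fragment. This is a configuration sketch only, not part of the spec: the Envoy listener port 15001 is an assumption for illustration, and applying the rule requires exactly the root privilege the proof of concept grants the pod.

```shell
# Configuration fragment only; applying it needs root/NET_ADMIN.
# Port 15001 is an assumed Envoy listener port, not mandated by the spec.
#
# Redirect all inbound TCP traffic to the local Envoy listener:
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15001
#
# After the REDIRECT, every stream appears to terminate at 15001. Envoy
# recovers each stream's pre-redirect destination from the kernel (the
# SO_ORIGINAL_DST socket option) so it can proxy the stream onward.
```

This is the same transparent-proxy pattern Istio's sidecar injection uses, which is why the privilege question comes up there too.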
In the long run, we want to actually have a network service client in there as well, so it can make the type of outbound request that you want and wire them together — or we make use of the network service wiring that's coming down the road later on. I will add an image here, because it's unclear as written; as with some of these ideas, it's a lot clearer with the image, so I'll create one. So, we're running out of time, and I will comment here on the last part, but essentially with the SDK it should be pretty easy to just start the connection downstream or upstream. But probably the proper way, as you said — with Envoy you don't need to restart it to update its configuration. You can just update Envoy's actual configuration with the new IPs that you get from the NSC, and it will just work. I will write it up. Yeah. You guys — this would allow — so first, the Istio community is talking about how they would move Envoy from being a sidecar container to being a sort of separate pod; they're currently calling it a metapod. The nice thing with network service mesh is we're moving in a direction where we could do this very cleanly. The second thing is that your Envoy basically acts as an ingress to your application service mesh. You can imagine a situation where you have a pod, and it's got the application service mesh for the Kubernetes cluster. There's a separate application service mesh it wants to be part of that ties to your corporate intranet. And then you might have a third network service — an application service mesh you would want to participate in — that ties to some kind of partner network, right? Maybe you're in a partner network with a bunch of other financials, or a bunch of other parts suppliers, or something like that. So at this point you've got three different application service meshes you want to participate in.
And by using this strategy with network service mesh, we could actually wire you to all three, and you would get the correct behavior. That ends up being super, super powerful. Okay, I guess we are out of time now. Yeah, it was a very good meeting. Yeah, same here. So again, thank you everyone for attending. We'll be available on IRC. The next meeting is the same time next week, on Tuesday. Definitely looking forward to continuing this conversation on Envoy, where we'll have it more specced out by that time. And with that, let's close it out, and you all have a good day. Thanks. Thank you. Bye. Goodbye. Bye bye.