not this to the point of annoyance. We tend to have people straggling in here all the way up to 15 after, so they can catch up as we go. So one of the things I want to cover today: there are all these CNF white papers, and now there are multiple ones in flight. Frederick and I had been talking a bunch in the months leading up to this, and now it's kind of fracturing because all the vendors are getting involved. I want to talk about what the strategy is for what the two main white papers are going to attempt to cover right now, and where we might want to insert NSM. This is going to be a big piece of documentation, and with NSM making it into foundation-published white papers, we obviously want it in there in some prominent fashion as an alternative for cloud native networking models. Additionally, I'll share in a second some of the content I'm putting together. I'm pretty sure it's going into the first white paper, because it's the whole journey thing. But also, Watson, I'd like to chat with you, because a lot of the CI work you've been doing has benefited all of us in the NSM community, so I want your take on where some of the content I'm putting together should go, where some of the CI documentation and the lemmas you're putting together should go, and how we fit NSM into either both papers or maybe very prominently into one of them. Let me see here, this guy. Yeah, I'll share for a second. Sharing is good. And so on my side, just a quick remark about this. I don't know if we need to specifically embed NSM in the core concepts; maybe in some amendments to the white papers, some form of appendix somewhere, where we talk about examples and specific concrete implementations. I think it is more important for us to embed the concepts of what we are trying to do within NSM into these white papers.
I mean, to try to avoid the usual, I would say traditional, mistakes that people have made before, and to not have something that contradicts NSM. That makes sense. So I think it's going to depend on the white paper, Nikolai, because, and I'm not sure which one of them this is going to live in right now, we seem to be duplicating efforts at the moment, and I'm working with Dan on trying to get some separation, because I'll probably end up helping with both, and I want to make sure they can both stand on their own. But there have been requests from both Dan and others that at least one of the white papers has, you know, no-BS implementation information, with the disclaimer that all of this stuff is super early and we might decide these implementations are completely wrong. To your point, just getting a white paper out on what the cloud native concepts should be for CNFs is absolutely pivotal. But at some point, people like me and Daniel want to consume information where there is literally a standard set of tests that we work on in the test bed, that look at DANM, that look at Multus, that look at NSM, that look at different networking models, and talk about what performance looks like, what difficulty in operation looks like. At some point, some real meat on what it actually means to deploy this stuff is important. And it doesn't necessarily even have to go in the papers, but that's exactly why I wanted to talk, in the documentation group, about where NSM plugs in. I know Frederick has opinions on this. Yeah.
It seems like NSM has a strong position on declarative APIs. Talking with people, and a lot of this comes from Ed and Fred, some of the problems that are trying to be solved by Multus or other CNI plugins, some of the other ways of doing things, and then also the VNFs, which are being brought in on some of the papers, the old way, the journey and everything. I think even if we don't say NSM directly, you're going to be talking about NSM in a different way, for all intents and purposes, because of the way the declarative APIs are positioned with NSM; it seems like a first-order-citizen kind of thing. Yeah, the way I would try to describe that would be to describe things with cloud native principles and to be able to say: we care about things like declarative APIs, state what you want, not how to get there, and let the scheduler and infrastructure render that for you. So I'll give an early example. When I was at the first inaugural meeting of what at the time they called the multi-interface group, and they renamed it the Network Plumbing Working Group, I think, or something along those lines, their initial thing was: okay, we need to be able to specify things like what VLAN, what interface, and this needs to be a kernel interface, et cetera, et cetera. And I told them that instead of deciding those things that way, they need to describe what the end result is that they want. Like: I need a faster connection to my data plane, or to my storage system, with these quality of service metrics or service level objectives.
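The contrast being drawn here, an intent plus service level objectives versus leaked implementation detail, can be sketched as a toy model. This is purely illustrative, not any project's actual API; every name and field below is hypothetical:

```python
from dataclasses import dataclass

# Declarative: state the outcome you want and the service levels you
# need; the scheduler and infrastructure decide how to render it.
@dataclass
class NetworkIntent:
    service: str            # the thing you want, e.g. "storage-backend"
    max_latency_ms: float   # service level objective
    min_bandwidth_gbps: float

# Over-coupled: every field below is an implementation detail that
# welds the workload to one particular rendering of the network.
@dataclass
class OverCoupledRequest:
    vlan_id: int
    interface_name: str
    interface_type: str     # "kernel", "vfio", ...
    ip_address: str

intent = NetworkIntent("storage-backend", max_latency_ms=2.0, min_bandwidth_gbps=10.0)
```

Note that the second form is still "declarative" in shape, which is exactly the point made later in this discussion: a declarative API can be every bit as strongly coupled as an imperative one if it demands implementation details.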
And so I think we should aim toward similar things and say: okay, we declaratively say what you want, and we've tried to design and assemble along those principles. But I would also argue that if something else came along, another project replicating this space, my argument would be that in order for them to be cloud native, they should follow similar principles as well. So I think starting with the principles is important. And I'll usually roll that under loosely coupled, right? The reason we like declarative APIs is that they loosely couple you to the implementation. And the reason we don't want to do things like stick fricking IPs and VLANs in the declarative APIs is that they then strongly couple you to things that are implementation details. Because you can write incredibly strongly coupled declarative APIs if you do it badly. I've seen lots of people do that, where you've got to enter every little IP address and every little VLAN tag and every little everything else. And even though those are declarative APIs, they're every bit as over-coupled as anything else. So the really fundamental principle to me is loosely coupled. And that's the thing in Network Service Mesh that I think we do super well: the workloads may have an opinion about the thing that is most immediately before them, but they have no opinion about, for example, the tunneling types that go over the network, because that's not the workload's business. Why are you strongly coupling your particular CNF to a choice of tunnel that someone's going to want to make differently next year? So for the SP-led white paper, I think we're going to shift it a little bit to be less about some of the real granular details of the plumbing.
Well, I take that back. I kind of look at this first one as a book report with some technical meat, to give people who don't know anything about cloud native and how it maps to NFV and what pitfalls to avoid. That's what I'm going to try to shift this first white paper to, and then work with the vendors on the second one to talk about what we're actually trying to do right now. Unfortunately, because we have some crazy rules, I'm rebuilding a bunch of images, but I'm trying to create things that literally show, step by step, something like: this is the packet flow for virtio-net. And I've got what all these numbers correlate to, right? Packets received, packets put into the buffer, the copy from the buffer to DMA, et cetera, the different IRQ calls in between. I've got this for most of the majorly accepted virtual networking things; I just need to convert all the images into something that's not going to get me in trouble. I can't show the other images now, because I haven't cleaned them up, which is frustrating. But I'd also like to get to the point where I build equivalent images for all the container networking stuff too. Like, literally: here are pods sitting on a host; here is where things may be NATed, or where I'm avoiding that; this is what the Multus model looks like; this is what the DANM model looks like; this is what the NSM model looks like. Show the difference, the path through the host, in and out of it. Because really that's what all of us care about in these communities: how do I get packets through that NIC, have something done to a packet, and then shove it back out onto the network?
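The kind of step-by-step virtio-net walk-through described above can be sketched as an ordered list of stages. This is a simplified sketch of the commonly described vhost-net receive path, not taken from the paper's actual diagrams, and it omits offloads and zero-copy variants:

```python
# Simplified virtio-net receive path (vhost-net backend), of the kind
# the step-by-step diagrams above describe. Illustrative, not exhaustive.
VIRTIO_NET_RX_PATH = [
    "1. physical NIC DMAs the frame into a host receive ring",
    "2. NIC raises an IRQ; the host driver processes the ring (NAPI poll)",
    "3. frame traverses the host bridge toward the guest's tap device",
    "4. vhost-net copies the frame into a guest-posted virtio rx buffer",
    "5. vhost-net signals the guest (virtual IRQ injection via irqfd)",
    "6. guest virtio-net driver consumes the buffer and hands the packet up",
]

def describe(path):
    """Render the stages one per line, the way a diagram caption might."""
    return "\n".join(path)

print(describe(VIRTIO_NET_RX_PATH))
```

Each extra copy and each interrupt in this list is exactly the kind of per-packet cost the paper's numbered diagrams are meant to make visible.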
And that's what I think I'm going to focus on, unless people think that should go into the other white paper. I just see this first one as being an explanation of the technologies, and how people start to migrate and integrate what they currently have now in the NFV space toward a more cloud native approach. And then, other than high-level information and me building some really granular data plane diagrams, do we want anything else from an NSM context in this first, service provider one? And I think I'm getting a little resistance on just doing the five principles, because when I try to show the other vendors, or sorry, the other providers, they're like: let's just skip the principles, we need to come up with requirements right now. And I was like, well, then the vendors will just ignore us, say the industry's fractured, and do their own thing. We just need to come up with some general statements like: the CNI shouldn't be the choke point where you try to lock me in, things like that. But I don't know, thoughts? And Watson, in your purview, with this whole guide of things, would we put the lemmas in this first one, which is just talking about the journey, or do we put them into the next one, which is going to be very granular, this is where the industry is right now? The lemmas document I put together to describe cloud native, the word, the phrase, and to try to, I know the problem is the vendors, everyone wants to say their implementation is cloud native. We use that phrase all the time, and there's a spectrum: on one side it's a buzzword, a buzz phrase, and on the other side it does have a definition.
And that's really trying to tease out what it is that the users, the service providers, actually want, the benefits from it, then get the definitions from those benefits, and then say: okay, if you do this, then you're cloud native. Then we can say things like, okay, one of them is microservices: each service needs to be developed based on a business capability, not a bunch all within one box or one container. And we know that's one problem with putting everything in one big giant VM and saying it's cloud native, which by definition the lemmas say: nope, it's not. So I would say it might not be something you need to bring in directly; it's something to pull from for the papers, to say: okay, remember, we want this benefit, and this is its definition. If we're saying cloud native networking, then here you go. We're- I just pasted in the link to the CNCF TOC's definition of cloud native, which is really good. It is, however, several paragraphs, and so I will often cherry-pick out some things like immutable infrastructure, loose coupling, and minimal toil, which is a phrase to conjure with, right? Because if you tell me: oh, here's this TOSCA template where you specify every little detail of everything, that's clearly not minimal toil. It's also not loose coupling. And if you're mucking about screwing with things in Kubernetes to do weird things, that's also not immutable infrastructure. So you're not in the least cloud native. Right. Yeah, Mike. Right, I was trying to get a bunch of references, as much as possible, to get rid of any type of- Yeah, I mean, that's absolutely true, and that's a good way to approach it. My suggestion would be to pick out bullet points from the cloud native definition, because you can then back those bullets with the authority of the definition, which keeps them from being dismissed as stupid.
Well, if you look at what Watson's written as well, this is just generic; I'm literally probably just going to link directly to this, copy and paste it in there, and reference it. But Watson's is way more about how you actually code to these bold principles. The lemmas are actually giving developers concrete data points: it's one thing to say something is immutable, but what does immutable mean? Because I can tell you, Ed, I've talked to three different DEs at Cisco who have completely different views on even what immutable means. Oh, I'm well aware. And so I think that the series that Watson is pushing out is super critical. The reason I like the cloud native definition, and pulling from it, is that it gives more authority to the kinds of things that Watson is doing. Does that make sense? Yeah. Within the lemmas I have the cloud native definition, which points directly to what Dan was saying, the latest and greatest thing he had in his talk, but it's the same verbiage: declarative APIs, immutable infrastructure, and microservices. And I'm making arguments for each one of those things. So immutable infrastructure: why, and what? Both. You have the benefits, grabbing from all the different authorities on the subject: infrastructure-as-code books, all the other different cloud native descriptions, and so on. For declarative APIs, the Kubernetes literature has lots of arguments for declarative APIs, whether you're doing Kubernetes or not. Well, I know that's good. The one point I wanted to make sure is clear about declarative APIs, though, is that they're sort of a subset of loose coupling, because one of the things I've seen is the Kubernetes guys are really good at declarative APIs: they do them really nicely, in a way that loosely couples.
I can point you to examples over in some of the SP projects where they also have declarative APIs that are insanely strongly coupled, because they've been done so poorly. So I think declarative APIs are a means to an end, but it's important to keep the end in mind. "I'm going to declaratively state my IP and tunnel type." Yeah, I think it's an interesting thing. I had this debate, and I was on mute so you guys didn't hear me, but I was talking with Denver about Ed's position on how you talk about subnetting and IPs and things being hard-cut, and I was translating that into location, and saying that's how, not what. We don't want to put IPs and subnetting and these types of things in; they're too specific. You're saying loose coupling right now; I'm saying that's closer to imperative, not declarative, in the sense of how: here's a location, here's how you're going to set up this network, instead of what it is you want from the network. And it's like a spectrum. In the other thing I was teasing out of the other papers, there's a spectrum of declarative: there's Sarah at the very highest level saying "I want the corporate intranet," and there are the operators who are composing things, taking pieces from CNFs that expose somewhat of a declarative API and composing them into a higher-level one that Sarah can understand. The CNF makers are obviously doing things imperatively inside, but they're exposing something declarative as much as possible. Loosely coupled is the best practice all the way through on all of these things; you should only be coupling things that have similar rates of change. That's how I was looking at it, as a spectrum. But the pushback was- But it's both correct and useful.
But I have received pushback. I agree with you, but it seems like people like subnetting, and they like IPs right in the middle of everything, and they think that's still declarative. Vendors like it, I think. Yeah, vendors love it; it's in the world. I mean, if I put my vendor hat on for a moment: oh fuck yeah, because I will then sell you an incredible array of things to manage the complexity that I've now created, right? So it's great if you're a vendor; I don't know how great it is if you're actually someone trying to get something done within your SP network. Yeah, so I would simplify what I was saying, if I understand correctly, as a position that location is not declarative. That seems like what your argument is: take that out, as much as you can, of your API, your descriptions, your CI, everything, and then you have all the arguments for how declarative helps, where it's easier to build self-healing systems and such. See, part of what you're going to end up having is similar to the European safety certification, where you have the CE self-certification for the general class, except the difference is that when they self-certify, they take on liability. In cloud native, if you self-certify yourself as cloud native, you don't take on the liability of your users' complexity. And so that's one of the risks we have: these vendors basically self-certifying themselves as cloud native and ending up diluting the meaning through sheer numbers.
And so that's where things like the definition become very, very important. Pushing people toward declarative, loosely coupled environments is how we manage that complexity, and how we get the system to render your infrastructure based on your needs. We need to push these ideas forward, because the alternative, which is what's happening, as Ed mentioned, is that you either end up with a vendor selling you something very large to manage all those complexities because of how they're coupled, or you end up with somebody having to program or configure all these things together and manage them in a highly coupled way. I mean, part of it also is the need to replicate data in multiple places. If you have to specify IPs and that kind of information for your CNFs, but you also have to specify it for the infrastructure they run in, now you've had to specify the same thing in two places, and you have to manually keep it in sync, or do painful things to keep it in sync. I think we ran out of comments. Yeah, that's a good sign the conversation has run down. My internet keeps dropping, because, you know, we're super good at it here, and I keep losing audio. It shows you how phenomenally gifted we are at both video and fricking ISPs. So yeah. What's comical is I'm watching your screen and it's saying "trying to connect." So we're getting the outbound packets; you're just not getting the inbound. Yeah, it's driving me nuts. So anyways, I put together this intro to the white paper, the one that Watson, Taylor, Frederick, myself, and Daniel were all working on. I'm going to continue to build out these guys, redo a lot of these images, and then also try to start making some of these very granular diagrams for some of the cloud native stuff.
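The point above about specifying the same thing in two places can be illustrated with a sketch: if one declared intent is rendered into both the workload-facing and the infrastructure-facing views, there is nothing left to keep in sync by hand. All names and fields here are hypothetical, purely illustrative:

```python
# Illustrative only: one declared intent rendered into both views, so
# the address lives in exactly one place (the renderer) and cannot drift.
def render(intent):
    # A real system would allocate addresses from IPAM here; we fake it.
    allocated_ip = "10.0.0.7"  # chosen by the system, never by the user
    workload_view = {"service": intent["service"], "ip": allocated_ip}
    infra_view = {"route_target": allocated_ip, "qos": intent["slo"]}
    return workload_view, infra_view

workload, infra = render({"service": "radio-backhaul", "slo": "low-latency"})
assert workload["ip"] == infra["route_target"]  # single source, no manual sync
```

The coupled alternative is the user typing `10.0.0.7` into both the CNF spec and the infrastructure spec, and being on the hook when they disagree.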
But like I said, it's mainly going to be a book report for SPs: this is why NFV was really hard, and this is how we can start to incorporate cloud native principles. And in my opinion, start using those even in the VNF space, right? I mean, if you read the TOC definition again, it says right there that containers exemplify this approach. It doesn't say you have to be containerized to be cloud native. If I've got VMs that are minimal-toil, immutable, and support declarative APIs to provision them, then I would argue you could even have bare metal stuff that could, quote unquote, potentially be cloud native. So I'll continue to figure out where Thomas and GearGay and all those guys stand, and work with Daniel on the TUG white paper. But this one will be a "for service providers, by service providers" paper on stuff to look out for, and for people who are completely new to this space: from a networking person's standpoint, because that's what they'll understand, what does the path of a packet actually look like going through this virtual infrastructure? Because I think if they understand how some of this stuff works, getting stuff in and out, like, imagine now, instead of this being a generic VM right here, these are multiple pod namespaces, and what it means to actually put interfaces in those pod namespaces. I think things like Network Service Mesh will make sense to them, at least to the networking people, when they actually understand what the path of the packet means, what it takes to secure this traffic, and how to ensure that CCNA-level basic network principles are being maintained even as we move into this space. Just as a heads up, I acquired, or rather purchased, a domain name: cnf.dev.
And one of the things that I intend to do with it is to eventually see if I can get people to write about specific technologies that they're experts in. So we'll see about getting maybe someone like Ian to write about SROV or we can find, get someone like Mochek to write about, like, how do you form a good benchmark or so on and just have like a wide range of topics that people can write about these individual technologies. And that way people can have a central place. Like, what does it mean to be SROV and be able to give like the high level definition but no, what does it really mean to be that? Like, what are the details? So I have a landing page for, because I don't think MSM is the appropriate place to store all of this type of low level details, even though we have to use them. And I think that we can populate this other place as a landing spot, because these details are important regardless of like whether they're NSM or you decide to use Multis or something else. Like, these SROV still works like SROV regardless. Yeah, and I definitely won't have the same level of depth as Ian, but in this paper, like I have an entire section for SROV and it goes over like the VEP, the VEPA, like what a PF versus a VF is, like it shows the individual data paths if I'm doing like VM to VM on the same host versus what it means to like, you know, bypass the kernel that just, I have to completely like redo everything and make it agnostic because I'm not allowed to do stuff and put it out there that I've already done in the past, which that's a different discussion. But I basically have it for, you know, the first just what is virtualization, right? Paravirtualization, full virtualization type 1s versus type 2s, because our pedastas to start with like where B and Fs are, and then it'll talk. 
I've got some pretty granular diagrams on what NUMA pinning actually means, and what it means to actually get your memory lanes in line with your PCIe lanes and such. Same thing with the VPP stuff and the OVS stuff: I've got some decent material, but I kept it short and sweet for the internal paper, so I'll probably rely on the people in this group. We could go way deeper into virtual switches, just because I think we'll see virtual switches pop up a lot in the cloud native space as well. As anybody who's been in the TUG Slack channel has seen, there are already tons and tons of arguments about whether we should be purist user-plane switching, versus, well, maybe it's okay to go in with kernel interfaces sometimes; everybody's got an opinion. So really the goal of this paper is to lay out all of the information. Talk about things like, I've got a very long section on what vhost-user is, right? And then it would be nice, after that, to say: this is what memif looks like, this is how you leverage these technologies, this is why you might use one here or there, and these are the pain points you're going to encounter when you use them. But anyways, Frederick, Ed, and, you know, chat with Nikolai: let's think on where NSM might fit into some of these papers, or whether it fits in there at all. Nikolai seems much more of the opinion that it just needs to be a very implementation-agnostic approach. I'll be honest, though: the vendors deciding that they're going to qualify what the guiding principles should be makes me nervous, especially in the telco user group, but we can cross that over beer.
I think far more important than NSM-or-not-NSM is that we've got to get the mental shift from virtual switches, which are the wrong implement, to virtual wires, which are the right implement. Does that make sense? Because the fundamental sin of network virtualization has been the obsession with V-switches, which tend to be strongly coupled, enforce a common set of behaviors across workloads that may or may not need them, which then radically increases the complexity of what goes with the V-switch, et cetera, et cetera. And so I think that's the really fundamental mental shift that's crucial. NSM just happens to be one way to get to V-wires. Yeah, well, this makes sense as well, because I can sell you a V-switch, but I have a hard time selling you a V-wire. And so we've been hampered by that. I would be careful with that too, because I don't necessarily think V-switches are the devil. I think people having bad architecture, and not employing V-switches where V-switches should be employed, is what gets them in trouble. But they're not the right fundamental thing. And I'll give you a very concrete example. The V-switch attitude toward CNI is why everyone is fighting over who's going to own the CNI, because it's one shiny object to fight over. If you start with V-wires as your fundamental, then you can say: look, I actually do have a need for a V-switch, but it's something you go into a CNF to do, V-switching. Because the V-switch is no longer the fundamental piece; it's a tool, like any other tool in the toolbox, that you might use as a CNF. And you can pick the tool that works for you by using V-wires to connect to it. Sure, but I don't know, I think we're getting a little bit into semantics, because just because I shoved the V-switch into a pod and called it a CNF, it still doesn't change the fact that there are going to be instances when I want something capable of just switching for me, running inside a VM or container.
And here's the really fundamental thing that it does change, and this is crucial: the way V-switches have traditionally worked is you have one per server, right? So they have a distinguished place that is welded to, effectively, the underlying server infrastructure. By moving them to being CNFs, they no longer have a distinguished place. You can pick the one that serves you, and they're no longer welded to the underlying server infrastructure, nor is the underlying server infrastructure welded to them. Sure. Like I said, in my OpenStack world, I do Linux bridges for slow path, and I stick VPP in a container and run it the way you're saying, even in the traditional NFV sense. So, I don't know, we've just got to be careful. I don't want to say concretely "V-switches are bad, we should do V-wires," but then instantly turn around and say "but you can still put V-switches in a container," because now it's not directly dictating everything that happens in your physical infrastructure. I think you're actually making a very good point. It's not that V-switches are bad; it's that V-wires are the fundamental thing, and V-switches are tools. You've kind of switched the V-switch from being part of the infra to being part of the path layer that comes with the CNF, for example. Yes. So based on what you want to have as a framework, the V-switch implementation you use would follow that kind of path structure. You might end up with one complete CNF with its own path structure, and another complete CNF with its own independent path structure, and you could move them around. I see the point. I think having that kind of definition accepted by the industry is going to take a bit of time, because nobody right now has understood that abstraction layer.
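The reframing in this exchange, V-wires as the primitive and the V-switch as just another CNF you wire into, can be sketched as a toy model. All names here are hypothetical and purely illustrative:

```python
from dataclasses import dataclass, field

# The V-wire is the fundamental primitive: a point-to-point link
# between two named endpoints, with no per-server distinguished place.
@dataclass
class VWire:
    a: str
    b: str

# A V-switch demoted to just another workload: it consumes V-wires
# like any other CNF, instead of being welded to the server.
@dataclass
class VSwitchCNF:
    name: str
    wires: list = field(default_factory=list)

    def attach(self, endpoint: str) -> VWire:
        w = VWire(endpoint, self.name)
        self.wires.append(w)
        return w

# Only the workloads that actually need switching pay for it:
sw = VSwitchCNF("vpp-cnf")
sw.attach("pod-a")
sw.attach("pod-b")
```

Nothing here is per-server: a second, independent `VSwitchCNF` could exist alongside the first, each with its own path structure, which is the "you could move them around" point above.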
Now, let me ask you this. I think what you're really trying to circle around is that to form a connection, I shouldn't necessarily have the entire point-to-point require a standalone subnet, a standalone layer 2 domain, and all the other stuff you typically have to define in a VIM slash hypervisor world, right? Like, I can't make two VMs talk unless I fully define a subnet, fully define all these ports, and fully define all this other stuff. That's not the virtual switch's fault; that's the software's fault for saying you have to have these eight million things just to give two VMs a point-to-point connection. Yeah, I mean, I think Daniel's right that it's going to take a while to shift the mentality. The fact that you do have to do that is a side effect of thinking the virtual switch is the fundamental thing, right? You're absolutely right about what you actually care about, but the reason you find yourself in this situation is that traditionally a multipoint L2 domain was the fundamental primitive. Yeah, so cool. Cool. So I also dropped an item on the agenda for this week around the website rework. We've actually had a PR pushed; a very kind person at CNCF pushed a PR reworking the look and feel of the website. If we have time, it would be good to take a look; I'd like to socialize it fairly broadly before we put it in. Yep, cool. So, you know, this is the site. It's actually really quite impressive, because not only did they do a good job on the look and feel, but there's a lot of new copy in here that shows they really do understand what we're doing. And I apologize: I have the intro paragraph done, but I haven't worked on the example yet, and I've just been overwhelmed. So I'll do a PR soon to get in some of the stuff we worked on a couple of weeks ago, the whole generic what-is-NSM thing, and try to push it.
But this is definitely way, way, way cleaner looking than it was in the past. It seems really good to me, but I wanted to make sure to point people at it and have folks go take a look broadly, and they can comment on the PR if they have anything they want to comment on. But overall, I'm super pleased. Anyone have any comments on it? It's way prettier. Yes, I agree. Cool. I'll click through a bunch of this stuff and make sure, then. Yeah, I mean, I figured there are always going to be nits, but we can always do follow-up PRs for nits. So. Anything else on the website, Ed, or do you want to dive into the technology tree? That was it. I just wanted to make sure I raised it to people's attention. Technology tree is great for now. So I mean, the thing here, I think, is basically, this was my attempt, and I will admit that it's incomplete, right? So there are some things here that should probably be added from specs and whatnot. But this was my attempt to try and figure out, okay, what is it that we have coming up that we're sort of speccing through and working through? So what are those things, and which things depend on which? So we can sort of put together the tree of what has to happen before what, and we can start working through it. In showing this at the NSM meeting, I think folks sort of thought this looked like the beginning of a roadmap, but of course it is not actually a roadmap; roadmaps have more meat on the bone than this does. And so the suggestion was to refer it to the documentation subgroup to see if we had opinions here in documentation about how to turn it into more of an actual roadmap. I think it's a good start. I'd like to see it as a roadmap because we can make clear all the interconnection points you go through to get to a feature.
For example, the logic of interdomain, which is dependent on, and is the next step toward, hardware NICs and SR-IOV. That's where it's really good to see. I think I like it. I'd also like to see the interrelations of which underlying feature leads to what, so that people understand the relations between all the modules. I agree. And it also sort of explains things: I've occasionally had people come in and say, why aren't you working on hardware NICs right now? And the answer is, because we need to do other things first. It looks like a Civilization-game technology tree. That is intentionally the case. Khan gave a talk at KubeCon in which he used the Civilization technology tree as an example, so it was very much in my mind. So one thing I would suggest to Ed: you've already got some of these as links, but for things like, say, DNS here, right, which has dependencies on init containers, et cetera, make sure that's in the text part too as these all eventually turn into URLs. Let's click on interdomain here. I may have botched the link for interdomain; I need to go fix it. It's actually pointing to the integration test right now, but yeah, I need to go... It's not a problem. In here too, if we're going to have this tree, then in the actual write-ups there should be talk about what it means to actually be dependent on security and init containers, and then talk about how it will also provide dependencies for floating interdomain, et cetera, right? I mean, the picture gives you a quick, okay, I can trace this, and once this is knocked out, I get this. But I would definitely have that in the text too. And, like, are some of these linking to specs? Most of them should be linking to specs. Yeah, okay, perfect. So part of each spec should talk about what its dependencies are and what dependencies it may provide. Okay, now that's a good point.
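The "what has to happen before what" ordering the tree encodes can be computed mechanically with a topological sort. The edges below are illustrative assumptions pieced together from the discussion (security before init containers, init containers before DNS, interdomain and security before floating interdomain, SR-IOV before hardware NICs); they are not the project's actual spec graph.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Feature -> set of features it depends on (assumed edges, for illustration).
deps = {
    "init containers": {"security"},
    "DNS": {"init containers"},
    "floating interdomain": {"interdomain", "security"},
    "hardware NICs": {"SR-IOV"},
}

# static_order() yields every node with each prerequisite appearing
# before any feature that depends on it.
order = list(TopologicalSorter(deps).static_order())
```

This also mechanizes the "why aren't you working on hardware NICs right now?" answer: anything with SR-IOV as a predecessor simply cannot appear earlier in the order.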
In many cases it does; for instance, the draft spec for hardware NICs points to interdomain. Cool. I mean, you're absolutely right; I'm pretty sure we'll find places where that has been done less than perfectly. But I also don't expect people to have gone and read all the specs to trace back the dependencies. That's why I wanted a quick visual representation. Yeah, absolutely. And I think, because people are probably only going to click on the ones that interest them, having the caveats of "did you consider this?" will help. That way, once these specs are actually completed and someone goes in and just tries to grab a single component without getting everything it needs, it'll save you a lot of those weird emails we get on the mailing list asking, why couldn't I do this? Well, did you do this first? So, when talking about a roadmap, it's kind of hard to do that without having the priorities on who we say is working on what and what the driver is. And I know as far as the test bed is concerned, we have some priorities where we want to move things from what we're calling out-of-band to in-band and use NSM to do that, making things work in more of a cloud native way, if you will. And I think the VPP cross-connect is one of those things, and Nikolai can probably speak to it, but it's one of the things that we have. And when we do our roadmaps, we're tying them to issues and trying to make everything tie into our planning meetings and such, so we can try to pull some of that stuff in and have a trail here for maybe that block. Maybe the remote mechanisms as well. Yeah, that's actually a really good point. And the thing is, obviously, priority is always super tricky in an open source community, because you've got a bunch of different people collaborating together who have different priorities. And the beautiful thing about open source is that you can have different priorities, right? Someone can say SRv6 is the most important thing.
And we can all say, that's great, fine, without it actually causing problems. But even within that, communities are often very responsive to people saying, hey, this really matters to me. So saying, look, for the thing that we want to do over here, we really need this, is also super valuable. Yeah, so as far as the roadmap, I think those two are definitely being worked on, so we can say they're coming sooner rather than later, maybe: the VPP cross-connect and remote mechanisms. And then we can have links to some of the tickets we have open on the roadmap. One thing on a roadmap, though, right, is we're still kind of ironing out what the release schedule looks like. And obviously that will have a huge impact on a roadmap, assuming you're actually going to put rough timelines on roadmaps, right? If there's only going to be, like, an annual release, which is probably not what we're going to go with, that might mean there's just a certain number of features that all come at once and then nothing else until 2021, versus if it's a quarterly release. You know, to Ed's point, a lot of this is who's actually going to take the time to code it, and will it make it into release whatever-dot-x. Yeah, quite frankly, there was some discussion of release cadence that happened in the Network Service Mesh meeting yesterday, I think. There was definitely some sentiment towards quarterly releases. Oh yeah, I've been reading a lot about this recently. It still says vhost, though, which makes me nervous. It makes the people who've actually had to implement the damn vhost before even more nervous. It was unpleasant. I'd lean to it, though, over current SR-IOV, in my personal opinion. That's it, that's why I talked about this: if we're going to work on those things, rather than try to backport SR-IOV, I'd try to make people accelerate that part if you're not able to do MIS.
Yeah, no, this is actually good, because moving in this direction makes me super happy too. What I really want from my NICs is for them to blit the stuff I want into RX queues in memory inside, you know, CNFs, and then blit things out from TX queues and put whatever encap it was I asked for on it. So far, looking at this, the good news is that architecturally this fits beautifully into what we're doing. It's mostly just a tiny amount of different work; I don't think it actually changes the underlying architecture of what we're doing. One of the things I've been excited about in this space, too, is that Intel is working to standardize these drivers. And when I talk to a lot of different FPGA and SmartNIC vendors, a lot of the ones who aren't in direct competition with Intel are adopting some of these drivers, which means I won't have nearly as hard a time, right? Like, Mellanox versus Netronome versus Intel, trying to get different VNF vendors to support it. At least I'll have a wider range of vendors with supported hardware NICs, so the driver support model won't necessarily make me want to jump off a bridge. That's it, same problem. Cool. All right. Well, anything else on the technology tree, wherever I put it? I think it's off to a good start for sure. And I will, maybe this weekend, to be honest, sit down and try to finish the whole "what is NSM" write-up and then do a PR to the new website, though I don't know where that went either. Yeah, so hopefully, I want to give it a little bit of time to socialize, probably till the next NSM meeting next week, before we merge the new site, likely. And then you can definitely put your PR up against the new site. I'm super excited about the new site.
I wouldn't agonize too much over, like, "does this website define us as a community," just because it's still just code and people can still make PRs if tweaks need to happen. It is infinitely more usable than the current one, though, I will say that. All right, cool. Well, I don't think there was anything else on the agenda. Does anybody else have anything they want to add? Not from me. All right, cool. Well, like you said, you know, Ed, chat with Frederick and Nikolai, and get back to Daniel and me on what your thoughts are: how much or how little, and which white paper we might want to see NSM pop into, and what type of presence we want to have in said white papers. Talk to you later, friends. Talk to you later. Bye. Take care.