as you put it out. I think it's really an angular resolution, not a pixel resolution. So it depends on how far away you are. So I think we should go ahead and get started. I think today we're probably missing some of the Vulcan CNCF people because it is their bi-weekly call. So starting from the top: welcome to the next Network Service Mesh meeting. We have three meetings: this one, which occurs every Tuesday at 8 AM Pacific time; the NSM doc call, which occurs weekly on Wednesday at 8 AM Pacific; and the NSM use case call, which occurs every second, fourth, and fifth Monday at 8 AM Pacific. We do have an NSM use case meeting coming up this next Monday. We are also participating in the CNCF Telecom User Group, which occurs every first and third Monday at 8 AM Pacific. Notice the trend here. That one just occurred, so it'll be not next week, but the week after for that one. There is also a CNCF networking group, which occurs every two weeks on Tuesday at 9 AM Pacific time. So we can remove KubeCon EU. We've already gone over that. It was fun, but now it's done. We'll have fond memories of KubeCon EU. Indeed, it was a great time. We have KubeCon China coming up, and we have an intro and a maintainer talk. And there is also the second Telecom User Group working group kickoff that is occurring as well. We have DPDK Userspace, which occurs a week before ONS; I believe it's the 18th and 19th of September in Bordeaux. I've submitted a paper to that. Similar to the FD.io one, on the state of FD.io and NSM, I submitted one on how NSM is using DPDK to further the aims of its users. So if accepted, I will be giving a talk there. We have ONS Europe itself coming up; the call for papers ends in just under two weeks, on June 16th. We have MEF Los Angeles. Do we have anyone going to MEF to represent us there? So I had to cancel MEF because I have twins due right at that time, and I'd like to stay married.
And I've canceled all late fall conferences. Congratulations. Congratulations to you. OK. We'll be at KubeCon North America the same week, so we'll be able to congratulate you then. If one of them's a girl, we highly recommend Ariadne. Once again, I'm trying to stay married. So the question is, do we want to keep MEF 2019 on this? I think we should probably scratch it. There is still a small chance that they use NSM, but our product managers are kind of throwing everything and the kitchen sink into their demo. So I kind of feel like at this point I'm going to triage and move on to other projects. So we have KubeCon coming up in November. The call for papers ends July 12, and it runs from November 18 through 21. There is a to-be-announced Edge Computing World coming up as well. I need to talk with the organizer to see exactly where he wants to take it, but if there is a networking component or networking track, which it sounds like there is going to be, then we may want to work out whether this will be useful for us. This is started by the same guy who did IoT World; we did IoT World. So if you have any other events, please add them to the website, and please add them to this list. I assume Lucina is not on. Oh, hello. I'm here. Oh, you are on. Cool. So social media community team, you're on. Yeah, KubeCon Europe was a really good event for folks learning more about Network Service Mesh. There were call-outs for NSM in the opening keynotes, as well as in the Telecom User Group meeting and various other presentations during KubeCon EU. And I tried to search for anyone mentioning us by our Twitter handle, as well as NSM, network service mesh, all one word or three words, and tried to retweet as much as possible. So we gained 45 more followers over the course of the last three weeks. I slowed down with following folks, just over 50 new folks.
And we've got a total of 133 tweets. And fun news: I actually went to a park in Barcelona where there was a labyrinth and statues of Ariadne, so I'll try to do some fun posts about that as well. Oh, that's pretty cool. It was a beautiful space in Barcelona. And I think I've posted the recap video of the intro and deep dive. And, oh, actually, I can also post a recap of the FD.io presentation, because those videos have been released, as well as the Linux Foundation Networking video. So I'll get those out this week, too. Thank you. Oh, fantastic. You're welcome. You'll see those go by. So, OK, release notes. There haven't been any changes to the release notes. I was waiting for the branch to occur before rounding up Nikolai and Ed to work out any major call-outs that we want to have in the first set of release notes before finishing it off. I think there may be happy news on that front. Have we branched? I think the folks responsible should probably speak up. All I did was get up and look at the Slack traffic. Are the folks responsible online? Yeah, more or less. OK, Nikolai, have we branched? Yes, we did. So, yeah, after a little bit of a crazy weekend where it was failing, in the end we apparently merged two patches which were literally one line each, mostly deleting code, and then we had everything running. So we have the branch now. I still have to figure out how we are going to publish the images, because the credentials are somehow in the CI, and we probably need to figure out some automatic way of publishing the release images. But that's something to be figured out and to happen. I think that we're in a good situation, I mean, in a good position now. So, Nikolai, I understand you're looking at the CI as it's going.
We're still having, and I've created a new tag for this class of things, things that I'm calling CI bugs, which is: AWS just decided it's never giving us a cluster this time, which seems to happen remarkably frequently, or go mod is hitting a connection timeout for some reason. These are actually not problems with our code at all. It turns out cloud providers vary wildly in their reliability in terms of providing managed Kubernetes. Not only that, we have seen, for example, our Kubernetes deployment on Packet failing because the package repository was failing to provide the relevant packages for kubeadm to finish its work. So things like that. Also, don't forget that a lot of this infrastructure, not our infrastructure explicitly, but a lot of the Go infrastructure implicitly, and Docker, runs on Google Cloud. Actually, the Docker one doesn't. But so there may have been some issues that we hit intermittently because of that. Something that we should consider, though, is we can spin up an Athens server in some of these locations that can help store some of these packages, some of this content. So it may make sense to bring up a proxy and cache some of the packages, just to help with go mod if that becomes an issue over time. Although it won't help with network connectivity out from the CI, so that's the one problem we'd still run into. Unless we can work out what cloud they're running in, and in what region, but we probably won't get that visibility. One of the other things over the weekend, with Google having issues, is I saw a joke go by that's all too true, which is: it turns out when Google Cloud goes down, so does almost everything else. As one person on Hacker News who claimed to work at Google posted, Google's internal messaging service went down along with it.
They claim to have backups in place to deal with this problem, but yes, their backup system, rather, their messaging service, is reliant on their production system, which I guess works for most things as long as your networking isn't the thing that fails. Yeah, I wish the Google engineers, who doubtless worked furiously over the weekend, very well. So, since there are no changes to the release notes at the moment, we'll leave the rest of that for next week when we have more of it posted. Code of conduct. So I was looking over the requirements for things that we're going to have to do in the future, and one of them is we need to adopt a code of conduct in order to become graduated, and they explicitly ask us to accept the CNCF code of conduct. But instead of just saying blanket, let's adopt it, I'd like to pitch it this week and next. That'll give time for people to read it and understand it, and then next week we can decide. So there's two questions. Number one, is it appropriate? Number two, is it appropriate to apply it now, or should we wait until later? So, if you can open the code of conduct. Sure. Cool. Lots of languages. So this is basically a more verbose version of Tess's excellent code of conduct. The main things are banning sexualized language or imagery, personal attacks, trolling, insulting, harassment in both public and private, publishing people's private information without explicit permission, and a catch-all of unethical or unprofessional conduct. The one thing to call out at the bottom is the section on maintaining a mediator: if there is a problem between a community member and the maintainers, or a problem between maintainer and maintainer, they do offer the services of a mediator, which is Mishi Choudhary.
I have not met Mishi Choudhary, so I can't say anything for or against in that scenario, but I do have some confidence in the people that the CNCF has brought on for this kind of thing. Yeah, have we checked in with Mishi? As we adopt this, it strikes me as a kind thing to let somebody know that they've been signed up as mediator for the project. Yeah, I think we should, because even though it says all CNCF projects, and so she probably is part of a blanket catch-all in that scenario, I think we should reach out and introduce the community. Yeah, I think that's probably wise. I mean, we're a super friendly and courteous community, and I like that about us, and even though I have no doubt that as a CNCF project Mishi would stand forward and mediate for us, it's probably kind to let her know. Yeah, and as always, if you have any problems, you can also discuss them with us and we'll do our best to help. Yep. And it's part of the job of the project maintainers to enforce the code of conduct. It's actually the last line: project maintainers who don't enforce it may be permanently removed. So it's actually quite a serious clause. So I want to give people time to read this. Are there any comments or anything? Because we want to make sure that if there's anything we don't want applied in here, we call it out as well. I'm surprised it doesn't mention violence directly. Well, I have been told repeatedly by Mike Dolan, who is an attorney at the Linux Foundation, that weapons on stage are in fact covered by the code of conduct, or rather, that you shouldn't have weapons on stage. There's a CNCF events code of conduct as well at the bottom, which it also links to, so you should also make sure you read that, which covers, I think, only the events themselves.
But I think violence definitely falls under, well, I don't know if it falls under harassment or not. I think personal attacks. Personal attacks, yeah. I think I see the loophole: you can have an impersonal attack. Again, I have great confidence that Mike Dolan would consider physical violence to be covered under this code of conduct. But fortunately, I don't think this is going to be an issue for us. Well, unless the violence was voluntary on both sides. Anyway, cool. So I think it's probably good for people to go take a look and review this. I've taken some time to look through it in detail and was actually super pleased, but looking through it in detail did lead me to realize the degree to which it's probably productive to give folks time to look at it in detail too. Yeah, so let's discuss this again next week when people have had time to read it. If you don't like the way that something reads, there are a few things we can do. We could respond back with feedback saying, hey, we'd like to adopt the code of conduct, but we think this change should be made. We could choose to adopt it anyway. We could choose to adopt a modified version. Or we could choose to not adopt it and do something ourselves. So is there anything else on this, or should we jump to the Andromeda release and start going over the backlog? Yeah, let's move. Cool. Nikolai, do you want to take it from here? Well, the Andromeda release, as we said, the release branch is already in. We have to figure out the publishing of the images, the properly tagged images, but I think that we have everything set up there. In terms of the backlog, I think that most of the things are already done. And the same with IPv6 payloads: okay, not fully resolved, but we are testing things there. We can just talk about all of them. Okay, yeah. And many, many thanks for the IPv6 stuff.
It turns out that IPv6 turned out to be exactly the way IPv6 always was. It always was. Every freaking time, those of us who have been in networking a long time know this, every freaking time you go to test IPv6, you start out with the attitude, oh, this'll be fine. And then you discover lots of little things. Yeah, the last one was really interesting. But yeah, we kind of went through it, and we have a nice verification in our CI that we can safely pass IPv6 payloads, and that's being constantly checked. So that's good. What else do we have here? I don't know, maybe I need to do a pass here again and check what's still open. But from my personal tests and my experience with the examples, NSM behaves pretty stable for the type of project, I mean, the status of the project, that we are. I don't see any obvious, huge crashes or misbehavior. I think that this is, of course, due to the extensive integration testing that we're doing and all the work we did in the last couple of months enabling other clouds, which introduced other problems in terms of delays, being slow, being fast, whatever. But in the end, we obviously have tested a lot. So at least from my point of view, whatever we have in the release branch is something that we should be proud of. So thank you all for putting in your efforts with patches and whatever you were helping with. I mean, particularly to all the folks, I know that it's been sort of a steady march of little tiny issues, and then some bugs in the CI environment. Turns out the code is fine, but, you know, the AWS cluster never comes up. And one other quick question, because I know, Matthew, you have a real knack for running into things; you test things differently than everyone else. How is it feeling to you? I think it's fine for now for the 0.1. It runs pretty well. And no one is expecting it to run in production.
So it's fine for me. Yeah, when people ask me about that, I usually put it obliquely and say, we're about to do our 0.1 release, and it is exactly the way you would expect a 0.1 release to be. Yeah. Awesome. The good thing is that people are starting to appear in the issues with requirements or some observations. And Matthew, I guess you have seen there was a post in the group about Skydive. So yeah, obviously people are testing us, and I think that we are more or less ready for this. No, I'm feeling pretty good about it as a 0.1 release. And we've got lots of exciting stuff to do as we move forward past 0.1. And I'm super happy that master is open again. That's always exciting. Yeah. We're going to break it soon. Oh yeah, sure. I mean, hopefully not break the existing stuff, but yeah. I mean, adding more tests and stuff. So, I think that we should move on. Do we have Ramki or Prem on the call? I mean, that's it for me for the Andromeda release. I expect that before KubeCon China, we should have a tag and published images, something that can be shown around. That's my expectation and plan. Cool. Awesome. Do we want to talk roadmap going forward? Because that's always super fun. Yeah. I don't see Prem or Ramki on. I can ping Prem and see if he's willing to hop on while we talk about other things. So the roadmap, yeah. I think we said this last time: I would really like to keep this item here to remind us where we are, what we're doing, maybe add things, remove things, kind of quickly checking the status. Just kind of an open discussion here. So the release is clear, more or less. The examples, we said, are a little bit tied to the next topic, the CNCF testbed, and also Taylor and I were just having a good, nice preparatory conversation for the calls we have this week.
So we have scheduled a number of calls with people who are interested in enabling NSM for the CNCF testbed. We have a rough plan there. I would really like to see this enablement based on the examples, and I don't know, we'll have a discussion tomorrow and figure out whether we are going to push the code there, whether we are going to push the code into CNCF. Okay, we'll see. But the idea is that the examples should be, from my point of view, the starting place for everyone who wants to use NSM and wants to deploy. I also think you were talking at some point about this: we've got a bunch of the examples living both in the main repo and in the examples repo, and about possibly moving the examples into the examples repo, because most of the time those examples don't change, and so you should just be able to go there. Yes, that's the point, yeah. But that's a conversation that we probably should take offline. I mean, we can't have it now, because today we merged the PR where we renamed things; there's no examples folder in the main repo anymore, right? We moved everything into testing; the sidecar container belongs in the sidecar folder. So everything has moved to other places. So the examples exist by virtue of just being part of the history and also being part of our CI. Maybe we need to figure out some plan for completely removing the things there. But okay, this is something that we can discuss. Okay, cool. So for Mechery, I know that Lumina were supposed to have something there. I don't know what the status is there, unless Prem appears or at least gives us a name of someone who can join, yeah? Yeah, Prem sent me a message that he's stuck in a meeting at the moment, so we should probably punt that to next week. For SMI, we are going to keep an eye on it, on our side, on my side, but for the time being there's no active movement there; it's a bit of a convoluted topic.
I know that people have opinions. Yesterday, on the telco user group meeting, and actually today, someone, I don't remember the name, but there was an initiation of a white-paper-like preparation document, and SMI was mentioned there. I don't know how this merges, but I think that we should just keep an eye on it. That's enough for now. Kernel forwarding plane: Radoslav, who is here, has started doing something. As I said in the chat today, I hope that we learned some things along the way of enabling VPP with IPv6 and debugging things there. So I think we learned some things; maybe we still have a lot to learn, I don't know, we'll see, but we're definitely on the path to moving this forward. I hope to get some patches for that. Today, VPP is very deeply embedded in our testing, our integration testing. So we're planning to have some abstraction there so that you can switch back and forth between the different implementations, which I hope will make some people feel a little bit better, because I have heard some opinions about this. Everybody likes having flexibility on the data plane. Yeah. And I would really like to stop calling it the data plane, because that's not what our glossary says. Okay, forwarding plane. Yeah, maybe we need to rename things also, but... I'm always open to re-education. Yeah, we could use the term subnet provider. No, let's not use that. Let's not do that. So security has been a thing that we've talked about a lot. And I don't know. I had some discussions with some folks around here; I've got some advice and ideas, but I don't know. I'm ready to participate in discussions about it if they happen, but yeah. I scribbled something down for security that you may want to go take a look at, which sort of looks at how we handle identity, authentication, and proper authorization.
Not only for the current single-domain case that we have, but looking forward to a multi-domain case as well. So do have a look at that. See if it makes sense to you, and see kind of what makes sense. Part of it came out of conversations at KubeCon, where folks were like, look, the SPIFFE and SPIRE guy was just talking about SPIFFE federation, which is going to be super important. So I'm happy, here or someplace else, to talk through a little bit of my thoughts there. And I'm very much open to other thoughts and ideas. And please, please review, because security is the kind of thing where the more eyeballs you get on what you're doing, the better off you are. So this might be something worth eventually setting up some form of a technical work group for, specifically to go over security things. So I know that there's a security group, and actually I popped into one of their meetings at KubeCon, and they were saying that they are doing things like project evaluations or security evaluations of projects, et cetera. Like audits, but they can also do kind of pre-audits, just because there appear to be people there who are deeply into security, so they know the common mistakes that people make, et cetera. I don't know if it's worth involving them at this stage, but I'm just saying that we might want to think about it at some point. Yeah, so I actually bumped into Justin Cappos while waiting for my flight, and he's involved with the CNCF doing their security audits. So minimally, I would point him at the current scribbles so that he can take a look and see what his thoughts are. And I've talked to some of my internal security folks about the general gist of it. I even ran it by the Security Fellow here at Cisco. Yeah, and he was positive-ish about it.
Now, admittedly, he then went on to give a talk about how you continue to secure things in the post-quantum computational world, which was interesting. Okay. So, I mean, from no one vantage point can you necessarily know you've gotten it right, but the more eyeballs we get, the better. Which actually reminds me that if we have these scribbles, maybe I can pass them to someone on our side too. So that's good. Yeah. No, I mean, for the most part we all come from places that have some version of really deep security humans around. True, true, true. And so it behooves us to get them to weigh in. But I was particularly heartened by the Security Fellow being positive-ish, especially since he has a history of... So it means that we are at least not obviously moronic in the direction we're taking. Okay. And then I think you'd written up something on DNS, Frederick. I have. And there's the Google Doc; if you open it up, it shows the initial design. So let me give some background on what I would like to do. One of the things we were looking at was what happens when you hook up, let's say, multiple VPNs, or even just a single VPN. Now you have two or more DNS servers that you have to resolve against: your primary Kubernetes resolver, plus any additional secondary DNS resolvers that you attach. And so one of the solutions we're looking at, if you look at the second image, is adding in a CoreDNS sidecar. Well, we say sidecar; that doesn't necessarily mean it's a container in the same pod. It could be another pod that is running CoreDNS in that area. Actually, is that true? No, it has to be in the same pod because of the namespace. But effectively, we're looking at setting up a CoreDNS sidecar that is configured to use all of the VPN DNS servers that you care about.
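As a rough illustration of the sidecar described above, a minimal CoreDNS Corefile that fans a query out to the cluster resolver plus two VPN-provided resolvers might look like this. This is only a sketch; all three upstream addresses are made-up examples, not anything from the actual design doc:

```
.:53 {
    # Hypothetical upstreams: the Kubernetes cluster DNS plus
    # two resolvers reachable over the attached VPNs.
    forward . 10.96.0.10 172.16.1.53 192.168.5.53
    cache 30
    log
}
```

The `forward` plugin accepts multiple upstreams, which is what makes the multi-resolver fan-out possible in a single sidecar.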
So first, I'd like to talk about whether this is even a sane idea, and what problems we could run into, before we jump into the provisioning side. So does this look reasonable to people? Does this look like something that's workable, or does something break? So the one subtlety, which I think I've mentioned before when we discussed this, and which is quite fixable, is this: obviously, if you've got a CoreDNS sidecar that's fanning out the DNS requests to not only the normal Kubernetes DNS but also possible DNSes upstream of the various network services, then normally, if you get a positive response, you would like to pass through the first positive response you get. You do have to be a little bit more cautious with negative DNS responses, because you don't want to pass through the first negative DNS response you get; it may just be that somebody else has a positive response and they haven't gotten it to you yet. So when you say first response, is that literally the first response regardless of which one it is? Or do you mean first as in the Kubernetes one gets priority and is considered the first response, and then VPN one, regardless of the time ordering, would be the second response? I was thinking in terms of time ordering, because you get very long latencies if you run through them serially; it's much more efficient to run them in parallel. And the Kubernetes DNS is going to have a huge latency advantage, in that not only is it running in the cluster, but generally speaking, Kubernetes DNS is now moving to per-node caches for DNS. And as a result, you would expect that if Kubernetes DNS has something to say about it, it will have something to say about it really fast. Okay.
So something we should probably consider in this scenario as well is making sure that, for certain domains, we can say we want everything for example.com, all requests for example.com, to go only through VPN one or VPN two. I think it would make sense to add something like that, just so that you can say: for things on my private corporate network, you're not pulling the public addresses, you're only pulling the private addresses. Probably super wise, especially considering people run all kinds of crazy split-DNS schemes. I see them way too often. Yeah. Having beers with really deep DNS guys, they will just bitch endlessly about it. But yeah, everybody does it. So, because we're getting short on time, what I would encourage is for folks to please take a look at this and comment on it. This is going to be super important for us to support going forward. Yeah, I've added additional provisioning steps in there as well, so please go over the provisioning steps and let us know where it works and where it breaks. Yep. And just for consistency, make DNS a link there. Cool. There we go. Cool. So the inter-domain stuff: for next week, I'll put a link to that. We've got a spec for resiliency too. Right now we've got a good story where if any one network service manager or network service manager data plane dies, everything reconverges. And if you lose a network service endpoint, we auto-heal. Andre had some really smart ideas where we could literally lose all of the network service managers and all the network service manager data planes at the same time and still reconverge, which I think we're tentatively calling resiliency v2, unless folks have other ideas for naming. That's awesome. Dynamic rewiring.
So we have auto-healing already in the system, meaning that if you lose a network service endpoint, we can automatically reconnect a client to another network service endpoint to maintain good continuity of service. And we know we can use the same mechanism so that if you change the policy for the network service, we are capable of dynamically rewiring it mechanically. And I think there are even ways for us to do it safely, because obviously you don't want to do it to everything all the time; that could be destructive. And that should allow us to do a really cool example: imagine you have a pod consuming a network service, and something is weird, and after the pod has been running for a while you decide, wouldn't it be nice if you had a packet capture box between the pod and everything else that's going on? Potentially you could use dynamic rewiring to do that. And you could use something like Webshark to see what's really happening on the wire. And I don't know the developer who doesn't want that capability. So specifically, what I'm hearing is you can use a web server to display the captured packets from the listener you injected directly into the network service wiring itself. Yep. Yep. Cool. Yep. So, and then the hardware and accessories. Two things: number one, I've been starting to scribble together an attempt at a Network Service Mesh technology tree, because some of these things do depend on each other. So, for example, security has to come before we can do inter-domain. And I've been trying to collect the various things here together so that we can at least try to put some order on all of it. And in doing that, one thing that became immediately obvious is that we have been collecting specs on the spec board.
And it might be a good idea for us, maybe today, maybe next week, to go through the spec board and make sure we capture those things and get them all organized, probably into the technology tree and other places, so we at least know what the bogies are. Right. What are the things we could work on that people might want to work on in the community? Because I know, for example, there's a bunch of stuff that you had on the spec board, Matthew, around metrics, and I know we got some of that done, but I don't know if we've got all of it done. And likewise KubeVirt integration; I know you were interested in that, Jeffrey, and I'd want to make sure we got that represented correctly. We've got some new interesting things coming in around load balancing, where people have suggestions. And so I just wanted to make sure we walk through the spec board, make sure we're capturing all the specs and getting them filed correctly, so they can be worked on if people want to work on them. Make sure you ping Jeffrey remotely, because he had to dash. Okay. Cool. Awesome. Yeah, so that was what I did there. It sounds like, Nikolai, you thought the technology tree was potentially useful. Yeah, it was awesome. Also, you're using the palette, like the logo palette, I guess, the colors. Yep. Yeah, I'm using the new colors. And actually, I'm not sure why the capture NSE needs dynamic rewiring. I mean, I can see the use case where you can dynamically inject the packet capture, but the packet capture NSE can be part of the already deployed service, if you want to demo that. There's a cost to injecting a packet capture, so it's not a free thing. So you don't need it in order to do the basic use case, but it becomes a lot more useful if you can rewire it in.
I think part of it, because I know you've been working on this, Nikolai, in the sense of building a network service endpoint, and I think what we're suggesting is you could just build the NSE with support for a packet capture network service, something that Wireshark could connect to. And that's actually probably true, now that you mention it. I see. So having the NSE itself be able to inject it in at the appropriate moment? Yeah, or just have the NSE provide the network service for it. Yeah, fantastic idea. You can sort that out there. Obviously there are some interesting things about it, but I think they're relatively solvable within the framework that's emerging. Yeah, that may actually be true; we may not actually need this connection here. Okay, I have to jump to another call, I'm sorry, I'm a bit busy, so I have to leave you guys. And since we don't have Jeffrey to talk about data plane separation this time around... Yeah, unfortunately tomorrow we have an overlap with the CNCF call, so I won't be able to join the doc call. I hope that someone else will be able to. I will cover it. Yeah, I did have a brief talk with Jeffrey about the split data plane, and I think the concept was that you may have two data planes with different capabilities that would be complementary on the same node. And so the question would be: if one of them supported one mechanism and the other one supported a different one, how would you forward requests to the right one? So I think it's based around that, just as a teaser, but I'll let Jeffrey talk about it next week because he has a more concrete use case in hand. Yeah. And quite honestly, there's going to be a little bit of that going on as well when we get to the hardware stuff.
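The split data plane question above boils down to steering each request to whichever forwarder on the node supports its mechanism. A hypothetical sketch, where the forwarder names and the exact mechanism strings are assumptions for illustration, not the real NSM registry:

```python
# Hypothetical sketch of the "split data plane" question: two forwarders
# on one node with different capabilities, and a request steered to
# whichever one supports its mechanism. Names are illustrative.
FORWARDERS = {
    "forwarder-a": {"MEMIF", "VXLAN"},
    "forwarder-b": {"KERNEL"},
}


def pick_forwarder(mechanism: str) -> str:
    """Return a forwarder on this node that supports the given mechanism."""
    for name, supported in FORWARDERS.items():
        if mechanism in supported:
            return name
    raise LookupError(f"no forwarder on this node supports {mechanism!r}")
```

The real design question Jeffrey is expected to cover is what this lookup keys on and where it lives; the sketch just shows that the two forwarders' capability sets are complementary rather than overlapping.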
Because one of the things that's going to be super important is to enable people to provide the thing that handles the programming, so that we don't have to, and they don't have to. Okay. All right, anything else, guys, before we call it a meeting? Just about the roadmap: is it a roadmap for the 0.2 version? Is there some kind of priority? Yeah, I think it's just the roadmap in general. We just cut the 0.1 branch, so I'm not sure. Part of what lands in what release is going to depend on who's interested in working on what, and when. But the thing is, if we have a general notion of a roadmap, one of the things we can look at for 0.2 is: do we want to do a time-based or a feature-based release? If we do a time-based release, then we'll say, okay, we're going to release around here, and then we'll see what we can get in by then. And the things that will drive that will be the priority with which people are interested in working on things, plus the technology tree, like what has to happen before what else, because inter-domain without security is just not going to work. And then we get sort of there, I think. Regardless of which direction we take, we should probably do releases relatively often, just so that we can get into the habit of doing releases and get good at them. What you do routinely works; what you do occasionally often breaks. So if we only do one release at the end of the year, we're going to go through a lot of pain getting it out, but if we do lots of little releases, then the little releases will be mostly painless over time. Yep. But that's probably a good conversation topic for next week. Yeah. So, thank you guys, it's been great. Thank you very much. And with that, we'll close it up and see you all next week, same time.
Thank you. Bye, everyone. Bye.