Good morning, Jonathan. Good to have you on the call. Good morning. Thanks for having me. Yeah, just before this call, we have the issue/PR call. And we got to about 57 minutes, or three minutes until the top of the hour, and went, oh no, we have to go to the community call. Because after we had some problems with Zoom-bombing, we had to put various security things in place. One of which is that you have to log in as the host to start the meeting, and we haven't figured out how to get the host off yet very cleanly. So security is hard. Yes. Cool. So we usually take a few minutes to get going, usually about five. But in the meantime, let me go ahead and put the link to the meeting minutes in the chat window. Feel free to add anything to the agenda if you've got anything, and also please add yourselves to the meeting minutes as an attendee. We'll probably get going here in about five. One other thing to be aware of: these meetings are all recorded, and the recordings are all being posted semi-automatically to YouTube. By semi-automatically, unfortunately, I mean they're still working out actual automation, so there is a lovely human who is kind enough to do this for us every week. But I do hope they manage to automate it soon. Welcome, Frederick. It looks like we've got Jonathan Beri here today. Are you still up for doing your presentation today? Yeah, that would be great. Oh, fantastic. The agenda is usually not bold, so that looks bad. Yeah, if you have any links to any material that you want posted, can you also add them to the agenda? I've added it there in the document. And if someone can share the document link, that'd be fantastic. And for relevant links, so for example, for my presentation and the related document, should those be in the overall document? That seems to be like a standard header, or should it be in the individual weekly minutes somewhere? 
Ah, so yeah, in the agenda, we have a section. Yeah, I see your cursor now. Yeah, fix that up a little bit. And if you could update the name, because I just stuck in the word "presentation"; if you can accurately reflect the name of it. So with all of this, we keep a running tab, since the beginning of time, of everything that we've spoken about at a macro level. And so this will help people. So if someone says six months from now, oh, but Jonathan Beri presented a really fantastic thing, you can easily go off and find it, work out what day it was, and then correlate it to the right Network Service Mesh meeting on YouTube. Makes sense. Cool. And the second thing is that you're going to talk about VL3 today after the presentation. But don't feel pressured. We can always carry the conversation over to next week, or we could push VL3 back to next week, worst case scenario. So don't feel pressured, like you have to rush or anything, to get your point across. And simultaneously, if you've made your point, don't feel like you have to continue on if you feel it's not effective, because we can stop at any time. So in other words, take the time as you need. Perfect. One more minor thing that we should probably go over. Ed, if you can take a look at your Zoom real quick, and then that'll determine one of the spots on the agenda. Okay. Yeah. So I mean, effectively one of the questions that had come in is NSMCon EU, virtually. And we've been saying for a while now that we're going to follow KubeCon's lead for the 17th through the 20th, because we already have the speakers lined up and everything else. But it might be a good idea to try and formalize that a little bit more, and see if there's someone who's interested in going and chatting with the LF, with the CNCF, about what's involved mechanically, that kind of thing. 
And then we also had a question about when we're going to open up our CFP for NSMCon North America 2020, hopefully in Boston. You know, hopefully things have settled down by then. And we'll sort of see about that. So those are two questions that had come in on the mailing list, and I've added them both to the agenda. They should be short conversations. At least at this stage, yeah. Yeah. And then I want to make sure that we get some clarity on there, because I know a lot of people are asking questions about it. And I want to make sure that we're clear as to what actions we're taking, so that people feel comfortable with the direction and that they're all informed. Yep, absolutely. Cool. Communication is key. And okay, so let's get started. So welcome to the next Network Service Mesh meeting. We hold this particular meeting every Tuesday at 8 a.m. Pacific time. We are also involved in the CNCF Telecom User Group, which occurs every first Monday at 8 a.m. Pacific and every third Monday at 3 a.m. Pacific. We also participate in the CNCF SIG Network call, which occurs every first and third Thursday of every month at 11 a.m. We have links to each of these in the agenda. Also as a reminder, if you could add yourself to the attendees list in the group meeting notes, that would be fantastic. So in terms of major events, I had a new event come up. I don't have the exact date yet; that event is going to be sometime next week. I'm going to be giving a talk on Cloud Native Zero Trust for OpenShift Commons. They host a webinar a few times a week, usually at 9 a.m. Pacific. So I've asked them for Tuesday at 9 a.m. Pacific, and that'll be on their OpenShift Commons webinar. So I'll post the final link here in the agenda once I get it. And I'll also send a blast, both on Slack and on the mailing list, since they are asking people to register ahead of time. 
And we'll do one more blast out next week so people are aware of when that's on. So that should be next, I think, what day is that? I think it's on the 26th, yeah. So tentatively on the 26th of May at 9 a.m. Pacific time. We also have KubeCon + CloudNativeCon Europe, the virtual experience, which is going to be August 17th through 20th. The call for papers is already done, and the agenda is already out in terms of who's speaking. It will be hosted virtually, so please sign up for it if you have not done so already. Simultaneously, if you signed up for NSMCon, you should have received a reversal on your charge for the $50 registration fee. If you have not received that, the people to contact are the CNCF, the Cloud Native Computing Foundation. If you're having trouble finding the right person to talk with in that scenario, come ask me on Slack, and I will try to find who the right person is so we can work out what's going on there. We also will have NSMCon at KubeCon EU, so maybe we should just bump that particular part up here. So we are still looking to run NSMCon EU, except it'll be a virtual event. And so there's still a lot of logistics that we need to do from a mechanical perspective. And as far as you know, is there any reason why we would not have an NSMCon EU? Because my understanding is we're still on. We're still on. Cool. So there we go. We're still on. And so what we will do is get more information out on how we'll host it. We'll probably end up using Zoom for that, since Zoom has a good platform. And so if you're presenting there, please be ready the same way you would be in person. This will actually make recording a lot easier. And so if we have any community volunteers who would be willing to help with some of the logistics there, I mean, they shouldn't be complicated logistics. 
It's basically things like reaching out to the speakers to make sure that they understand how the process is going to work, that this is still happening, that kind of stuff. If there's anyone in the community who'd be willing to volunteer to help with that, it would be very much appreciated. And something I would like to explore, because for me, one of the big things with this is I want to make sure that we have the community aspect as well, because that's the real reason for it. We could always do these types of talks on the weekly meetings and have an ongoing set of sessions. But I think the real value in these types of things is the social interaction, people getting to meet each other. So what I'm going to propose is that we also set up a couple of slots during some of the breaks where we can set up multiple rooms. People can promenade from one to the other and just meet each other and talk in a free-form environment. So that way the community can get to know each other. Because for me, that's a huge... The other thing I think we probably want to strongly encourage folks to do is to hop on the Slack channel during the event. Because one of the things that I've actually experienced with various virtual events that I've been to is you get a ton of stuff that goes on in the background, at least that I've experienced personally with Slack channels. So you get a speaker who comes up, they give a talk. As soon as they're done giving the talk, you hit them up over Slack, and maybe you start a room with other people who want to continue the conversation about the talk. And that can be just profoundly useful. So definitely. And this also has the added benefit of not offending the speaker when you're talking on Slack, unlike physical conferences. And so please make use of that opportunity. We'll do the best that we can on our side and make sure that... 
I'll make sure that I'm present myself to help answer questions as the talks go through. So if there's something that's really not clear, I'm happy to discuss some of the more fine-grained details of things that I can answer. So NSMCon EU is still on, and we will release more details about it as time progresses. If you find that you're not able to give a talk for some reason, like you've come down sick or something's happened or so on, please get hold of us as well, because then we can work out how to adjust for that. But yeah, in this particular scenario, I'm super excited. We have a very strong set of talks, so I'm glad to see that we're still going to be able to get those talks out there. We also have ONES North America coming up, which should be September 28th and 29th. As far as I know, that has not been postponed. ONES Europe has been postponed, and I have not seen any literature come out on that just yet. So when I see any information come out on that, I will let you all know. But considering they've been pushing back events in the EU, I seriously doubt that we're going to see it as an actual physical event at this point. KubeCon North America is still on as normal. Please get your talks in; you are running out of time if you've not submitted already. The CFP closes on June 12th. There are no major announcements that have been posted at this point. So in terms of social community stats, this particular week there has been no change in follower status. We still have 761 followers, we are now following 2,297, and we have done 1,295 retweets. We have posted call reminders, last week's video recap, and the CNCF weekly webinars. We also have various save-the-date events, such as the virtual LFN Developer and Testing Forum and the registration for virtual KubeCon. 
We have also retweeted the Linux Foundation training, VMware open source work, more Telecom TV stuff, more things related to the cloud native survey, and the CNF Testbed. We've also posted links here, so if you're interested in new topics you can see them on the agenda. We linked the stats. We've also added two followers. We recently started our LinkedIn account, and we now have 150 followers there; we post the exact same content as on Twitter. So if you don't like the noise of Twitter, you can follow the stream on LinkedIn. We also plan to tweet about NSMCon EU, promoting registration, and share more information about NSMCon as things progress. And with that, we've already covered NSMCon EU in the main agenda. NSMCon US: is there anything that we want to say about that in the agenda? So effectively, I expect we probably will have an NSMCon US, but we are just starting to plan that because of all the disruption. So I think it is something we should probably work out. If we do have members of the community who would like to volunteer to be part of that planning process, that would be fantastic. Historically, the maintainers and committers have done a lot of that heavy lifting, and we're certainly willing to continue to do so, but broadening the base of people who are working on it can only be a good thing. Cool, and I'll see if I can find some potential volunteers as well, among people who are not present. But yeah, these types of things are primarily, just from a work scope perspective, they can be at different levels. Any help, we'll take it. I mean, even if it's just responding to questions or so on. But generally, the planning around this: the last NSMCon that we did was fantastic because of the planners. Definitely a special thanks to the people from both Cisco and VMware who joined forces to make it happen. A very rare joining of forces. 
They were absolutely fantastic with each other, great synergy. It's a good way to meet new friends as well. Okay, so our next item is with Jonathan Beri: Moving Beyond HTTP, surveying the state of L7 protocols in the cloud-native ecosystem. You should be able to share your screen; if you're having trouble, let us know. All right, let me get my audio on. We can hear you; you've got the floor. All right, how does that look? Looks good. Okay, so quick background on the presentation, and I'll go into the details in a moment. I presented this at SIG Network a few weeks ago, and Ed was there. The slides were about 10 minutes of content and 30 minutes of discussion, which was awesome. That's kind of the reason why I gave the presentation, and Ed suggested that I present similar content here, because there's also opportunity to look at things basically lower than L7. So anyway, if you've watched the recording of the video from SIG Network or saw my slides earlier, this is largely the same. Hopefully the discussion areas are going to be different. So feel free to jump in at any time, either over audio or in the Zoom chat. I won't have that open, but someone can definitely flag the notifications for me. So overall, this is around networking, and some work I've been doing that started out as work for my own startup, but really I hope to improve the overall ecosystem as a result of discussions like this. So quick about me: my background is product management on developer platforms. I've worked at companies ranging from Google to startups and everything in between. I'm currently working on my own startup, and I'm very active on Twitter. My Twitter handle is @beriberikix. If you're familiar with that cereal, that's really the whole story. Nonetheless, I've also put my email in the notes. So I mentioned I'm working on an IoT platform. This is a super high-level view of what an IoT platform is. 
It has a bunch of different device messaging capabilities, with protocols, security updates. But really, the thing that's interesting for this group is the communication between, let's say, a thing and the platform in between. And to zoom in, it's really the device messaging component that I'm most interested in. But as it relates to this overall survey, that's just one type of networking protocol that's really interesting for us. So when it comes to IoT, and I'm assuming there's a broad audience here, there are actually a ton of different protocols. Many of them are IP-bearing. And as a small startup, we are picking one or two protocols to begin with, but eventually want to support multiple protocols. And that requires us to have a networking infrastructure in place that enables us to do that. One of the challenges today is that a lot of platforms pick one protocol and only focus on that. So this is really the preamble of why I started to look into networking protocols in general and cloud-native protocol implementations. This is our initial architecture. We have a device on the left. It's communicating with our cloud infrastructure on the right. It's using one of these IoT protocols, which is a UDP-based protocol, to send telemetry data and control data back and forth. And most of the logic lives in the gateway. This gateway is an application we built ourselves. It speaks this protocol, and it's effectively communicating to back-end services that handle the response. This is pretty typical; I don't think it's special or fancy. The actual architecture itself, though, while it's on Kubernetes, is actually not Kubernetes-native at all. Effectively, it's a VM. It's actually a container, but it's the same application we'd also run on our desktop. We're not quite cloud-native here. Over time, we want to move to a more cloud-native architecture, leveraging things like proxies and network service meshes and things like that. 
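The gateway pattern described here can be sketched in a few lines. This is a hypothetical illustration, not the speaker's actual gateway: the wire format, field names, and port are invented for the example, and a real IoT protocol (CoAP, a vendor format, and so on) would look different.

```python
import socket
import struct

# Hypothetical wire format, invented for illustration:
#   2 bytes device id | 1 byte message type | 4-byte float32 reading (big-endian)
FRAME = struct.Struct(">HBf")

def decode_frame(data: bytes) -> dict:
    """Decode one telemetry datagram into a dict the back-end services consume."""
    device_id, msg_type, reading = FRAME.unpack(data[:FRAME.size])
    return {"device": device_id, "type": msg_type, "value": reading}

def run_gateway(host: str = "0.0.0.0", port: int = 5683) -> None:
    """Minimal gateway loop: receive UDP datagrams, decode, hand off."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(2048)
        print(addr, decode_frame(data))  # stand-in for the call to a back-end service
```

The point of the sketch is how much lives in the gateway itself: it is the only component that speaks the device protocol, which is exactly why it is hard to make cloud-native.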
And so this is the way we think about evolving into a more cloud-native way. I think it maps to a lot of other applications that are moving from a non-cloud-native architecture to a more cloud-native one. And eventually, we want to be able to leverage the full ecosystem and have things like serverless and even functions as a service. That's kind of the genesis of looking at multiple projects and how they implement networking and different protocols at different layers. And so for us, the first step was to take our gateway and implement it either as part of Envoy or wrap Envoy around it. About a year ago is when we started looking into it, and this is the issue I raised with the Envoy team. Well, looking at how L7 protocols are implemented in Envoy, actually, they're not super well supported. HTTP is very well supported, and that makes sense, because the primary workload of applications within a cluster is HTTP. And actually, even UDP wasn't implemented at the time. And I raised this question specifically in Envoy, but it became the question I started asking over and over again: how do you implement an L7 protocol as a first-class protocol? And first-class in this context really means at the same level of capabilities as HTTP. And what I came away with, after having really good conversations with the Envoy community, is that broadly speaking, the cloud-native landscape is optimized for HTTP. And all of that makes sense. But as a result, projects from Kubernetes all the way down to individual projects that are being spun up right now have a lot of assumptions in the data plane and even the control plane that the traffic is HTTP. So that creates challenges for people who want to implement non-HTTP protocols. And because I was looking at IoT, a lot of my use cases are really around different IoT protocols and the ways those are implemented. 
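For context, Envoy did eventually gain a UDP proxy listener filter. A minimal listener configuration looks roughly like the following sketch; the field names follow the v3 API as I understand it, and the listener/cluster names are made up, so check the Envoy docs for the version you actually run:

```yaml
static_resources:
  listeners:
  - name: iot_udp_listener            # hypothetical name
    address:
      socket_address: { protocol: UDP, address: 0.0.0.0, port_value: 5683 }
    listener_filters:
    - name: envoy.filters.udp_listener.udp_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.udp.udp_proxy.v3.UdpProxyConfig
        stat_prefix: iot_udp
        cluster: gateway_backend      # an upstream cluster defined elsewhere
```

Even with this in place, the proxy only forwards datagrams; it knows nothing about the application protocol, which is the first-class-protocol gap the survey is about.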
But through discovery and chatting with the community, there are actually other domains that have that same problem, the problem being that they're not running HTTP protocols and they need to optimize in different ways. So one big category is gaming. Agones is a project that came out of Google. It's game-serving infrastructure that's built on and extends Kubernetes. In the gaming industry, they use protocols for game state synchronization or real-time communication. That's obviously not HTTP. And today, they don't really have a good solution. Agones itself doesn't have it built in. They just manage the game servers on Kubernetes and say, hey, it's up to you to figure out how to do game state and other game infrastructure components, which they would love to support. I also mentioned real-time communication, and one protocol in particular is WebRTC. We're using it to some degree in this Zoom call, but many other companies use it as their full solution. Pion is a really cool project. It's a whole suite of packages, written in Go, a cloud-native project. And they're looking at how to actually run this on infrastructure like Kubernetes. The creator and project lead, Sean, is looking into how you do, for example, load balancing for WebRTC video traffic on Kubernetes. That's really uncharted territory, and it's usually solved in proprietary, out-of-band ways. So what I came away with is that this is useful not only for IoT and the project I'm working on, but rather for the community at large. And so I started this working doc, which got really blurry in the screenshot. But there's the bit.ly link. It's bit.ly slash alt-l7-nsm for those who can't click through it. And this is the outcome of these efforts. I put together this doc almost as a small presentation for the Envoy steering committee, which has now snowballed into surveying all the related projects. 
The survey covers the projects that different implementers might want to leverage in the cloud-native ecosystem, and how and where L7 protocols are implemented, or where it's hard to implement alternative protocols. And so this is the crux of this work. It's an ongoing living document. I've had the opportunity to get either core contributors or project leads to expand or explain or correct the mistakes in my survey for their particular project. And so it's quite useful at this point. And it's been either validating some of the ongoing enhancement proposals, if there are any, or opening issues for those maintainers who actually want to solve those issues, or just highlighting future initiatives that may be more interesting for the different project maintainers as they're evaluating their roadmaps. And my goal now, well beyond the startup I'm working on in IoT, is to make the easy things easy and the hard things possible when it comes to implementing custom protocols. And here's an example, if you haven't opened up the doc yet, of the type of analysis; it's maybe quite relevant to this group. It's very high level, because really we're trying to understand this as an application developer: how can I leverage this project? So SMI is the Service Mesh Interface, and this is a quick overview. SMI has this concept of a traffic spec, and that traffic spec allows you to define new protocols. Even in the definition, it says each resource in the specification is meant to match one-to-one with a specific protocol. This allows users to define the traffic in a protocol-specific fashion. So the takeaway for people who want to implement alternative protocols is that SMI should be able to support alternative protocols using their own traffic specs. And so the document is meant to go project by project and highlight the opportunities for improvement, or effectively say, hey, go ahead and use this to build your own protocol. 
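As a concrete illustration of the one-resource-per-protocol idea, an SMI traffic-spec resource looks roughly like this. The group/version shown is from the early SMI alphas and may have moved since, and the resource name is invented for the example:

```yaml
# An SMI traffic-spec resource: one kind per protocol.
apiVersion: specs.smi-spec.io/v1alpha4
kind: TCPRoute
metadata:
  name: iot-telemetry   # hypothetical name
# SMI ships kinds like HTTPRouteGroup and TCPRoute; an alternative
# protocol (CoAP, MQTT, a game protocol, ...) would define its own
# kind in the same style, with protocol-specific match fields.
```

The pattern matters more than the fields: because the spec is extensible by adding kinds, a new protocol does not have to squeeze its semantics into HTTP-shaped routes.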
And this is really the end of the presentation, because most of the SIG Network conversation was around these discussion points. And Ed really highlighted to me how there's opportunity here, and I've dug into NSM to maybe raise some more discussion points to get feedback from this group. The first one is, oh, make sure you check out the doc and please leave any comments; it's open for comments. As it relates to the network service mesh, where does that connect with L7 protocols? Just with the sort of current vision of enabling these application protocols to work. One example, and I can speak to it a little bit more, is a common architecture. This is IoT-specific: maybe you have your device services and your application services. An IoT device communicates directly with something like our gateway, but maybe that's a more robust and sophisticated set of services, where your application team is building the functionality to handle it. From a commercial perspective, we're seeing companies who actually want us to have a managed device cluster that we can run and operate within their own deployment, while their application team has their own application cluster. That's really interesting from a cluster-to-cluster communication perspective, but also from a managed-VPC-like business model. I think there might be some opportunity to align with NSM there. Another one, which is related but different, is IoT deployments that are maybe not clusters, maybe not Kubernetes. So imagine a local network or a gateway-deployed network; that's one network domain, trying to communicate with either another on-premise cluster or the cloud. I think there are these sort of cross-domain networking challenges that I know from experience are very hard to implement and very one-off as they stand today, especially across different protocols. 
The other area is, as implementers of these protocols, what are the concerns that maybe dip below the actual application protocol into the L4-to-L2 layers? Whether it's routing, load balancing, or congestion control, it might make sense to implement those at L4 or L3, at the very least, so that different protocols don't have to think about those types of concerns. The last thing, and this is one last thing that Ed and I discussed, is what about non-IP protocols? I mentioned that in IoT there are a bunch of IP-bearing protocols, but there are also non-IP protocols. I think some of this touches on the telco use case, but also non-telco ones. So in the world of IoT, we have what are called low-power wide-area networks, or LPWANs, and they use such small bit rates, to save on battery and data, that they can't even fit an IP header. And so they're running on these non-IP layers, and so how do those mesh into or connect into, let's say, an IP-based cluster? So a smattering of opportunities, I think, and I would love to get feedback on the overall presentation, but maybe dig into some of these specific ones that might be relevant to this group. And if there are any questions, now would be a great time to pop in. This is all very cool, because these are exactly the kinds of things we were hoping would show up. So when you're trying to build something that's meant to be so generalizable, you sort of imagine what the problems in the world look like. And we had a whole set of problems that we had imagined for the IoT world, and as you're well aware, real problems are real, not imaginary. And so while I don't think we did a perfect job of imagining the problems that IoT has, from your description, we weren't terribly far off. And so some of the architectural white space that we left in the hopes that we could help out the IoT world, it looks like we have some of that space here for you guys. 
One of the things that we've always pointed out is, if your problem is shaped like HTTP, please go talk to the Envoy or the Istio or the Kuma or the Linkerd people, because they've done a killer job of that. They've done really good work. But if your problem is not shaped like HTTP, then maybe, just maybe, you might need something that is not focused on it. And it sounds like you've got a giant pile of not-shaped-like-HTTP. Yeah, and the tendency for prior implementations is to do protocol translation and marshaling at the edge of the cloud. And you lose a lot of capabilities that way. But performance also becomes a real issue as you start to scale. So there's this desire, from a performance or complexity standpoint, whatever you want to call it, to basically keep those IP-bearing protocols native all the way through to the end application. Because you can, right? If you're writing Go or Python, you can take an IP header and do whatever you want with it. And at prior companies, there's a humongous benefit, in various forms of cost, when you can avoid having to do that marshaling and unmarshaling. And so yeah, looking at the NSM vision and current working direction, it's well aligned with IoT. And again, those non-IoT use cases still have these same problems or challenges that are not HTTP. And to the point you're making about practical deployments, it'll likely be a mix, because your application will have HTTP functionality, which is probably your public API or some other part of your serving infrastructure, and then your non-HTTP components. So let's take a WebRTC deployment. A lot of the traffic that's going in and out is serving the application and the UI and the fat client that talks to the back end. But then there's also a secondary channel, which is serving the real-time communication for chat and state sync and video streams. 
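The "take an IP header and do whatever you want with it" point is easy to show concretely. Here's a small illustrative sketch (plain stdlib, IPv4 only, no options or checksum handling) of pulling a few fields out of a raw header per the RFC 791 layout:

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Extract a few fields from a raw IPv4 header (RFC 791 layout)."""
    version_ihl, _tos, total_len = struct.unpack_from(">BBH", packet, 0)
    (protocol,) = struct.unpack_from(">B", packet, 9)
    return {
        "version": version_ihl >> 4,
        "header_words": version_ihl & 0x0F,  # header length in 32-bit words
        "total_length": total_len,
        "protocol": protocol,                # 17 == UDP
        "src": ".".join(str(b) for b in packet[12:16]),
        "dst": ".".join(str(b) for b in packet[16:20]),
    }
```

Nothing mesh-specific here, which is the point: an end application in Go or Python can work on the native packets directly, as long as the infrastructure delivers them without translating at the edge.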
So now you have to actually manage both HTTP and non-HTTP protocols, and the scaling and load balancing and failover, all that complexity. Oh yeah, no. It's super hard for all of us, as we sit here with our iPhones that literally have supercomputer-level powers, to imagine how resource-constrained IoT can get. I mean, the other thing that sort of strikes me is, if you can do a more intelligent job of slicing some of these things at the administrative level to reduce friction, there's a whole world of business possibilities that opens up in IoT. So for example, let's say that I'm the vendor of the XYZ industrial widget, right? And one of the things I will offer is a service contract where, so long as the XYZ industrial widget can be backhauled to my monitoring system, as the widget provider, I will actually monitor it for you. And more than that, when it starts sending me wonky information, I will send someone out to your site with a replacement who can install it for you. I mean, this is sort of like one of the things that really impressed me, and this was literally circa 2000: the NetApp guys really nailed this with storage, because if they sold you a filer and one of the disks started getting wonky, this was in 2000, you would get an email from them that said, disk 43 on shelf 12 is wonky. We could have someone there tomorrow morning to replace it. Is that okay with you? And some of the sensors are super sensitive. I had a friend who said, hey, come over here, let me show you something. And so we walked up to an array of hard drives, and I forget which company it was, it was one of the high-end storage solutions. And he says, watch, I can make all the lights on the storage server turn on at the same time. And he yelled at the box, and all the lights go on, the CPU pegs at 100%, the network pegs at 100% on that system, and the whole cluster lights up and then cools down again. 
And we worked out that the sensors on the hard drives were sensitive enough that when you yelled at it, it said, oh, something's really weird with this box. I'd better ship the data off of it before it goes bad. Yeah, but you can see where, if you can make these things low latency, low friction, all kinds of cool shit. Yeah. And so the first use case is, I'm sending everything to your back end. To our back end or your back end, it doesn't matter. Someone has to manage the network connectivity and the security, and that's basically what people are willing to pay for. So you imagine securing a device that maybe has 120K of RAM and runs at 64 megahertz, and securing the communication from that device, through some gateways, to the cloud. And then the data in transit and the data at rest, before it actually gets to the thing that's processing the data. Super complicated. And you want to make sure you get that right. And so the friction is, literally, people want to make that boring, and just say, hey, give me the API that will spit the data out into something I can operate on. And so this is the device cluster versus app cluster scenario. Thank you, by the way, for that comment about boring. The first thought that came to my mind is I need to give a talk at some point where the first slide is, I have come to make your networking boring. Yes, yes. I mean, I keep on saying that phrase over and over in a lot of my meetings, because I think Kelsey Hightower was talking about Kubernetes recently, and he said, I really hope it gets boring. I really hope we get to that point. And we'll still have jobs, and it'll be great, but no one will be so frustrated about using it, or making mistakes using it, or trying to find a vendor to figure out how to use it. And I actually hope networking gets to that point. If you haven't read the CNCF Technical Oversight Committee's cloud native definition, there's a beautiful term in there. And effectively, the term is minimal toil. 
And that is just such a perfect term for where we want to get. And minimal toil is not just how many buttons you have to push. It's also how much you have to think. Yeah. And I think, with the history of HTTP and network administration within virtual machine and container infrastructure, we have a lot of experience in that regard. We have very little as it goes beyond sort of HTTP traffic and workloads. And I certainly don't know how to bridge that gap, but there are a lot of parallels in the kinds of things we want to do with non-HTTP traffic, and again, like IoT protocols and other protocols. And I think looking at it from this network service layer, and how do we bridge these different domains and things like that, is a key piece to solving that and getting to the point where it's boring and less toil. And in summary, I don't know how to best leverage the work everyone's doing here, and how to participate and help the ecosystem, but I can see it's the same thing as, for example, at the L7 layer. So we at one point had a use case working group; the telco guys did a use case working group in Network Service Mesh at one point. I'm not suggesting you go start an IoT working group for Network Service Mesh, but if we could identify sort of one low-hanging-fruit use case, that might be a good place to start. Like, what's the single simplest use case we can think of with the biggest bang for the buck here? Because that gives us something to go and match against, right? Because your problem in IoT is that you have an abundance of riches of problems to solve. And I mean, you finally solved one of them, which is the marketing. You know, once you had the IoT term, suddenly it was a thing. I've had hysterical conversations with some IoT vendors who have occasionally grumbled that they've been doing this for 40 years now, and then suddenly there's a marketing term that works.
And now everyone's excited. Well, I think Network Service Mesh and service mesh are a great one to latch on to, at least for now. Yeah, I think that's a great idea. Even in our own product offering, we're focusing on one protocol with one very specific use case as it relates to the core infrastructure. And I think that can be extended to a network service mesh. Sounds great. And yeah, we can take that offline. Yeah, we'll definitely take that offline. And that conversation, I mean, we would love to have as much of the conversation in public as we can. But I do understand that that's not always possible when one is working for a startup. Fortunately, we can make this public. Perfect. Cool. I think that's really it for me. And like I said, I'm actually in the Slack now, so you can reach me there or in this chat. So thanks for the opportunity to share. Cool. Thank you so much for coming out. I appreciate it. You're welcome. Thanks for presenting. This is fantastic. So the next topic was the VL3 stuff, because we had some questions about VL3 last week. And so in terms of VL3, this morning there was a really beautiful graphic at the contributors meeting. Can we bring that graphic up and share it? I'm not sure where it was stored. Sorry, which graphic? The VL3 one that was shown off with the three clusters. Oh, Denise, could you bring that up and share it? Is that something you're able to do? It's your diagram. Oh, yep. Give me a second. Yeah. This was a diagram that was shared in the issues and PR meeting that happens a half hour before this one. And it was super happy-making. It's a really well done diagram. Yep. I have provided some diagrams for VL3. If you have some thoughts or ideas, you're welcome to comment. And mostly we have two diagrams. For example, here is an abstract diagram of the basic VL3 NSE case. And also we have a valid use case of deploying a VL3 NSE.
And also, mostly I'm working in two directions with this issue. As you know, VL3 depends on some stuff like interdomain and floating interdomain. And currently we are moving to a new SDK style. And my first direction is moving and adapting features from the monorepo to the SDK. And here is a diagram of dependencies and the current status of each dependency. And also you can find here some links to issues and PRs. And mostly that's it. If you have any ideas, thoughts, or questions, you're welcome. Yeah. If you can just show the image and leave it there for a moment, because I want people to get a sense as to what we're looking to build in the initial VL3 component, because we have some questions on it from people. And so what we're looking to, sorry, not this one, the previous one that you had shown, the one with the graph, yeah. So in this particular one, what we're showing off is three clusters. And each of the three clusters has a series of applications on them. And what we generally do is, every cluster historically gets its own registry. And this registry keeps track of where all the network services within that cluster are. And about that registry API: one of the first things we realized when we were creating Network Service Mesh was that that registry may actually live in other locations, and there are benefits to having it like that. And so that API is actually gRPC. So yes, it does get backed by CRDs in the reference implementation for a single cluster, but you could extract that out. And since it's a gRPC call, you can back it with something else. And so in this scenario, we make the floating registry, which is capable of tracking where the variety of services are amongst the clusters, and is then able to coordinate the connection between the network service clients and endpoints as they need to communicate across the variety of clusters.
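The floating-registry idea described above, the same lookup API served either per cluster or by one shared instance that tracks endpoints across domains, can be sketched roughly as follows. This is purely illustrative: the real registry is a gRPC API (backed by CRDs in the single-cluster reference implementation), and every class and field name below is invented for this sketch.

```python
# Illustrative stand-in for the registry concept; not the actual NSM API.
from dataclasses import dataclass

@dataclass
class NSERegistration:
    name: str     # network service endpoint name
    service: str  # network service it provides, e.g. "vl3"
    domain: str   # cluster/domain where it runs

class Registry:
    """Minimal in-memory stand-in for the registry API."""
    def __init__(self):
        self._endpoints = {}

    def register(self, reg):
        self._endpoints[reg.name] = reg

    def find(self, service):
        # Return every endpoint providing the requested network service,
        # regardless of which domain registered it.
        return [r for r in self._endpoints.values() if r.service == service]

# A "floating" registry is simply one Registry instance shared by all domains.
floating = Registry()
floating.register(NSERegistration("vl3-nse-2", "vl3", "domain-2"))
floating.register(NSERegistration("vl3-nse-3", "vl3", "domain-3"))

# A client coming up in domain-1 can now discover VL3 peers in other domains.
print([p.domain for p in floating.find("vl3")])  # → ['domain-2', 'domain-3']
```

Because the lookup sits behind an API rather than being tied to one cluster's CRDs, swapping the per-cluster store for a shared floating instance changes nothing for the client.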
Now, the second thing that is not really seen in this scenario is that the connections we're making can be driven in a few directions. And so it's not like it's a rigid shape. So if you want to plug in a service that is a subnet, that's certainly possible. Simultaneously, if you want to connect point to point, like maybe you have two databases, one in domain one, another in domain two, and a third database in domain three, and you want them to do synchronization and replication between them all, then you could also set up point-to-point links between them so that they all communicate directly with each other and perform that replication without having to worry about the subnetting of the rest of the system. And so we wrote what's called a point-to-point IPAM, which is capable of assigning IP addresses based upon what that specific set of services is currently using and what the remote end is using, and trying to find something it can correlate between the two of them that only takes into consideration the minimum set of networks that are touched. In this scenario, if you have two pods, it's: what subnet should I not interfere with on pod one, and what subnet should I not interfere with on pod two, to give you flexibility in how to drive that. Another thing that we're adding is the capability to journal what decisions were made on the IPAM side. And we separated that out from the actual IPAM itself, because the journaling gives you the capability to get observability on what decisions your IPAM makes. And if your IPAM ever fails, it then gives you the capability to replay the decisions made in order to recover the IPAM, if your IPAM does not retain that information for its own recovery. And of course, if you have an IPAM that solves those problems itself, you can just leave those pieces out.
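The point-to-point IPAM and the journaling idea above can be sketched roughly like this. This is not the actual implementation: the allocator below is a trivial stand-in and all names are invented. The point is the separation, in which a wrapper records every decision so the decisions can be observed, and replayed into a fresh allocator after a failure.

```python
# Illustrative sketch only; not the real NSM point-to-point IPAM.
import ipaddress

class SimpleIPAM:
    """Trivial stand-in allocator: hands out host addresses from a prefix,
    skipping any subnets already in use on either end of the link."""
    def __init__(self, prefix, excluded=()):
        self._excluded = [ipaddress.ip_network(e) for e in excluded]
        self._hosts = ipaddress.ip_network(prefix).hosts()

    def allocate(self):
        for ip in self._hosts:
            if not any(ip in ex for ex in self._excluded):
                return str(ip)
        raise RuntimeError("pool exhausted")

class JournaledIPAM:
    """Wrap an IPAM and journal every decision for observability and replay."""
    def __init__(self, ipam, journal):
        self._ipam, self._journal = ipam, journal

    def allocate(self, conn_id):
        ip = self._ipam.allocate()
        self._journal.append(("allocate", conn_id, ip))
        return ip

journal = []
ipam = JournaledIPAM(SimpleIPAM("10.60.1.0/29", excluded=["10.60.1.1/32"]), journal)
print(ipam.allocate("conn-1"))  # → 10.60.1.2  (10.60.1.1 is excluded)
print(ipam.allocate("conn-2"))  # → 10.60.1.3

# Recovery: replay the journal to reconstruct which address each connection got.
recovered = {conn: ip for op, conn, ip in journal if op == "allocate"}
print(recovered)  # → {'conn-1': '10.60.1.2', 'conn-2': '10.60.1.3'}
```

Keeping the journal outside the allocator is what makes it useful both as an observability feed and as a recovery log for an IPAM that does not persist its own state.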
The other thing that is not shown in this, but is something that we've done some work on, is DNS. And any time anyone shows you an L3 inter-domain solution, always ask about DNS, because that's one of the harder things to solve. And one of the things that we have is we have upstreamed to CoreDNS a fanout plugin that allows you to add and remove DNS entries. And what we'll do is pass DNS information through the context, so that you're able to establish your DNS connections and remove them when the services go away. So this is just a small piece of what we are currently building in order to establish the VL3 data path. And because we're trying to reduce global state to local state, then from a scalability perspective, this solution should work out a lot better as we add more and more clusters, specifically when you start looking at things like how you resolve subnet conflicts and routing conflicts between them. As you add more systems out here, if most of your connections are kept to the things they actually need to connect to, then we don't have to worry about that complexity to the same degree, because we only have to look at conflicts in relation to two workloads. Also, in relation to some of this work, we've been migrating a bunch of stuff off of the monorepo into the SDK, so this is a work in progress. Once the SDK is in a good state, we should have all of these services run through the SDK itself. So this means things like IPAM become pluggable. You can compose the IPAM using the SDK, which implements the Network Service Mesh API, in order to add in things like IPAM and DNS support. And this will give you the capability to take what we've written, compose things from the reference implementation, and compose them with things that are within your infrastructure.
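The composition model described above, where each piece implements the same API and delegates to the next, so any piece can be swapped out, can be sketched like this. It is only a shape sketch: the real SDK is written in Go and chains implementations of the Network Service Mesh API, and every name below is invented for illustration.

```python
# Shape sketch of composable chain elements; all names invented. Each element
# implements the same request interface and delegates to the next, so a
# reference IPAM or DNS element can be swapped for your own.
class Element:
    def __init__(self, nxt=None):
        self._next = nxt

    def request(self, conn):
        return self._next.request(conn) if self._next else conn

class IPAMElement(Element):
    def request(self, conn):
        conn.setdefault("ip", "10.60.1.2/32")   # pretend address allocation
        return super().request(conn)

class DNSElement(Element):
    def request(self, conn):
        conn.setdefault("dns", ["10.96.0.10"])  # pretend DNS context injection
        return super().request(conn)

def chain(*factories):
    # Build the chain back to front so each element points at its successor.
    nxt = None
    for f in reversed(factories):
        nxt = f(nxt)
    return nxt

endpoint = chain(DNSElement, IPAMElement)
print(endpoint.request({"id": "conn-1"}))
# → {'id': 'conn-1', 'dns': ['10.96.0.10'], 'ip': '10.60.1.2/32'}
```

Swapping the data plane, the IPAM, or the DNS handling then amounts to passing a different element into the chain, without touching the rest of the composition.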
So if you want to use a different data plane other than VPP, you can swap that out. If you want to use your own IPAM, or maybe you have your own DNS solution, you can swap all of these things out for the things that you need, wire in the components from the reference implementation that suit your needs, and build up a solution at a very fine-grained level, if for some reason the reference implementation does not meet your requirements, or if you're a vendor who wants to provide solutions to your customers while still remaining compatible with the rest of the Network Service Mesh ecosystem, including things like the zero-trust approach that we've been working towards and being able to work in a heterogeneous environment. As time progresses, we'll provide more information and detail about each of these components and how they work. Many of them are already built, but are currently in the process of being tested and exercised. So if this is something that you would also like to help out with, definitely get hold of me and Ed, and we will help you work out where the gaps are that you can help with. Are there any questions on this? I had a quick one. We don't have much time, so if you can just quickly answer. So back in the days when this was first presented, when the first NSM came, the problem was, I'm pointing here on my screen, but I will try to describe it. So the link between the VPPs in domain one and two, and also the same between one and three and two and three. Is this an NSM-managed connection? I would assume so, because there's a proxy NSM manager between them. And if so, which one is the client and which one is the endpoint? How do you solve that? For the VL3 NSEs? No, the red link between the VPPs in the two domains. So between one and two, which one is the client? So this is the interesting part. That's going to depend very much on which VL3 NSE came up first. Okay.
So imagine that domain two and domain three are already up and going concerns, and domain one's VL3 NSE comes up. It's going to basically look around and say, okay, what are the other VL3s? It'll go ask the floating registry: who all is providing the VL3 interconnecting network service that I need to go talk to? And it will find the VL3 NSEs for domain two and domain three, and it will go and send a request to them and say, hey, I'd like a connection. So in that case, domain one would be the client. But if a domain four came up after both of those, it would then be the client to the others in stringing up the interconnect. Does that make sense? Yes, it does. Sorry. No, no, no, it's a super good question. It's a really good question. Well, it was the big problem that was never solved before. I mean, I'm happy if this gets answered today, because before we were not able to, but I'm guessing that somehow, with the advancements in this multi-domain work that is actually happening, yeah, we're able to. Okay. Yep. Yeah, so I think there are definitely gaps in here that we're going to have to answer as we progress forward. And we'll address those as we build it out as well. But in a nutshell, from all the iterations that we've done, this is where we're currently building towards. And the key behind it is really that the floating registry in the middle ties it all together. And I did a demo for an internal customer that showed off some inter-domain stuff with NSM, and I'll probably reproduce that demo externally, so that you can all see how you can connect multiple clusters together, once I get a little bit further on with the SDK.
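The client/endpoint ordering just described, where whichever VL3 NSE comes up later acts as the client toward the NSEs already registered, can be sketched as follows. Purely illustrative, with invented names; the real flow goes through the floating registry's gRPC API.

```python
# Sketch of VL3 bootstrap ordering: a newly started NSE asks the (floating)
# registry for the existing VL3 providers and requests a connection to each,
# so the newcomer is the client on every new inter-domain link.
def bring_up(registry, domain):
    links = [(domain, peer) for peer in registry]  # (client, endpoint) pairs
    registry.append(domain)                        # now discoverable by later NSEs
    return links

registry = []
print(bring_up(registry, "domain-2"))  # → []  (nothing to connect to yet)
print(bring_up(registry, "domain-3"))  # → [('domain-3', 'domain-2')]
print(bring_up(registry, "domain-1"))
# → [('domain-1', 'domain-2'), ('domain-1', 'domain-3')]
```

Note that no domain is the client or the endpoint by design; the role falls out of startup order, which is why the answer above is "it depends on which VL3 NSE came up first."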
What ended up happening was, I manually copied over the descriptions of the network service, the network service endpoint, and the network service manager to the second cluster. And then when I performed the connection, the connection just worked. And so that registry is really the key behind it, because we can have that registry act as the anchor that binds all three of them together, or more, and provides enough context for the bootstrapping of the connections. The other thing that's kind of cool here to realize is that it looks like there are various components in the system, right? But look at the proxy network service managers: just the same way that the network service manager on a Kubernetes node is effectively managing the local environment for the clients and the endpoints running on that node, and allowing them to communicate with the outside world, the proxy network service manager is doing a similar sort of behavior, but for the entire domain, right? So maybe your domain is running on some industrial site that is behind some funky firewall. And in order to actually connect to network services that are outside that, you're going to have to go twiddle the firewall. That's the kind of thing the proxy network service manager would be doing in that world. Cool. Well, we are at the top of the hour. And so if people would like to see more on this, definitely ping us and we can go more in depth on some of the components next week or the week after. And with that, we'll go ahead and close it up. Thank you all for attending, and we will see you all next week.