So welcome, everybody. I'm going to put a link to the meeting minutes in the chat so you can see the agenda and so folks can add themselves to the attendee list. We usually start about five after, but please be aware that the entire meeting is being recorded and will be posted to YouTube. Thank you very much, Ivan, for sharing the agenda in the meeting minutes. Welcome. If you've got agenda items you'd like to discuss today, please feel free to add them to the agenda. We're quite intentionally pretty liberal about the agenda. Interestingly, some of our best meetings have started with no agenda, so don't feel like you shouldn't post anything just because there's already an agenda there. You're definitely welcome to add anything you feel should be spoken about.

That definitely reminds me of my background, which is in string theory. When I was a graduate student I was at Rutgers, which at the time was the number two place for string theory, and the way they ran their group meetings was in the same style as Quaker meetings, which is to say that everyone met at a given time in the common room, you sat there in silence, and you waited until someone was moved to speak. It was considered very rude to have prepared anything. And this led to some of the most interesting group meetings you could imagine, especially sitting out there on the edge of physics. It's like a network: nothing happens until some system speaks up.

Hi guys, should we add it at the bottom of the agenda, after the announcements and social media items? Please do add it. We've got a stock agenda, which is events, announcements, and social media, so feel free to add yourself after that. Those items typically go pretty quickly, which is why we do them up front.
I asked somebody to add something to the agenda because I don't have an easy way to add to it myself. Could someone add an item about next week, because next week's meeting falls on December 24th?

With that, let's get started. Welcome to the next Network Service Mesh meeting. We have a weekly meeting every Tuesday at 8am Pacific time. We also participate in the CNCF Telecom User Group, which occurs every first Monday at 8am Pacific and third Monday at 3am Pacific. The last call was yesterday at 3am; we'll have to check whether they're doing anything on the first Monday. We also have the CNCF Networking Working Group, which is currently being rebooted, and which is set to occur every two weeks at 9am Pacific time. Actually, I think that's incorrect; I need to double-check. Does anyone know if Tuesday, December 17th is the correct time for the CNCF Networking Working Group? I don't know. The last one was cancelled; it was just after KubeCon. It was supposed to be on Thursday, December 5th, so the next one should be the 19th, I believe. Yes, it's the 19th, at 11am Pacific time. Right, because for some reason I had Thursday in my mind. It is Thursday. So if someone could double-check and put the correct time on that, that would be good. Cool. I'm on the invite; I'm checking it right now. Okay, so the next call is on Thursday, December 19th, and we'll update the agenda accordingly.

We have a few major events coming up. We have DevConf; a talk was submitted there, but it was declined. We have FOSDEM 2020 coming up; there is an SDN room at FOSDEM, which is February 1st and 2nd.
There's KubeCon + CloudNativeCon Europe in Amsterdam coming up, March 30th through April 2nd. The call for proposals is now closed, and the notifications will go out approximately January 20th. There is an NSM Con at KubeCon EU that we're looking to set up, and we will post more information about it as time goes on. The wheels are starting to turn on that, but we still need to get up the event site, the CFP, the prospectus for sponsors, et cetera. If your company is interested in sponsoring NSM Con, it may be a good idea to start warming the idea now. Part of the reason the prospectus isn't up yet is that we're still working through the budget, and therefore the sponsorship levels and that kind of thing. I would strongly recommend getting that set up with your budget, and also, if you would like to speak, start thinking about what you would like to speak about. It's unfortunate, but the timing for KubeCon is a bit sooner than usual, at least for the European one, so definitely start thinking about potential topics as well. For those of you looking at international travel, it's also often good to start having those discussions with your management early.

The next Open Networking Summit, which was originally ONS North America, is now being rebranded as the Open Networking & Edge Summit (ONES). It's meant to be in Los Angeles, and the call for proposals closes on February 3rd.

Do we have the social media team on the call? Good morning, I've just joined. Okay, I think we have the social media team here, so are you able to give us a quick update? Absolutely. With the holidays and post-KubeCon, this week was a little slow: we got one new follower, followed seven more folks, and posted six times. This week we've got some things scheduled to go out.
I posted the keynote videos from NSM Con and the high-level session videos, as well as a New Stack article featuring Network Service Mesh; links to those are in the meeting notes. The plan for this week is to post the individual videos from NSM Con, and those are scheduled to go out incrementally, starting as soon as 30 minutes from now. We've got about ten things scheduled to go out between now and the 20th, so there'll be an increase in activity on that side. Aside from that, if the contributors podcast is available, we can post a link to that. And once we're ready to share the day-zero event at KubeCon Europe, we can post the save-the-date, sponsorship availability, and sign-up.

Cool, thank you very much. And to reiterate, the videos have been posted, including the slides; in the announcement section of the agenda you will find a link to the NSM Con videos. I don't know what's up with the YouTube link. I'm just curious what's in it, because there's a YouTube link listed in the announcement, but nothing posted about it. Yeah, there's a channel for NSM. If you go to the NSM Con page, all the videos are linked from the individual talks, and each of the talks is individually linkable. So, for example, if you've got a talk you want to point someone to, go to the NSM Con page on the networkservicemesh.io events page; you can take a link for that talk and give it to someone, and it will point them to the talk, who the speakers are, the slides, and the link to the video. And the links to the videos from the NSM Con page are always in the context of the playlist for the entire day. So it's really the one-stop shop for "I want to go find out about something" or "I want to point somebody at something."
What I'll point out is that the current winner in the who's-getting-the-most-views sweepstakes is apparently the SRv6 talk that Daniel Bernier did, which is kind of interesting. I enjoyed the hell out of that talk, but I wasn't expecting it to be the most viewed so far. So if you want a recap, or a review, or you didn't get to attend, definitely go watch the videos you're interested in.

We are now entering the holiday schedule, so we have back-to-back meetings on December 24th and December 31st. The question I pose to all of you is: what would you like to do over the next two weeks? We can definitely cancel the 24th and 31st. Is there any reason we would need to move the meeting instead of cancelling? Is there anything urgent coming up in the next couple of weeks? To my awareness it should be relatively calm. So I would be okay with cancelling the 24th and the 31st; is everyone else okay? Don't all speak at once. I am okay. We still have the channel, so maybe just post a message there, and in the minutes, and let folks know: if you need something, we are on the channel; if you need an urgent discussion, we are available there, and we can Zoom if needed. Yep. We're a pretty good community about asynchronous communication overall, so it's not like the whole world is going to shut down; it probably just makes sense to cancel the meetings. Well, not everything is where it should be yet, but next year, maybe. No, we've got plans. Cool. Okay, so in that scenario the next meeting will be on January 7th. Yep. And someone has actually already cancelled that for the Asia call. Yes, so it looks like the Asia call came to a similar conclusion. Awesome.

Okay, a couple of other minor announcements: the CI is having a little bit of trouble. Yeah, and it's being a drama queen about it.
Basically, what appears to be happening is there's some kind of hiccup going on with Packet. The Packet folks are hugely responsive to such things; we just need to go find out what's happening. But there was apparently a change in behavior. It used to be that if every element of a particular cluster failed, and you ended up skipping all the tests for that cluster because you had no place to run them, it wouldn't report the tests as failed; it would just report that the cluster elements failed. So you could see there was a problem and what the problem was, and yes, the overall job failed, but you could easily tell at a glance: oh, Packet is having an issue. Apparently that behavior has shifted to reporting the skipped tests as failed. So now it looks dramatically bad, because Packet went down and therefore 127 tests "failed", but they were never run; they didn't really fail, they just got skipped because we couldn't run them. Overall that should fail the job, but it shouldn't report to you that the whole world is on fire in the same way. So the CI is having issues, we are looking into it, and it's going to look very dramatic until we figure out what's going on with Packet.

Yeah, we really should start sticking some of these things into a database as well, so we can go back and annotate that it was Packet or Amazon or Google and so on. We're probably at least in the top 10% of cluster starters out there; I don't think many people start as many clusters as we do, because every time a job runs we're starting six clusters per cloud flavor. That's a lot of clusters: 24 clusters per job, and we do a lot of jobs, so we probably start hundreds of clusters a day.
Anyway, I wanted to let folks know, because particularly with the difference in reporting, people could get really freaked out that their PR has somehow done terrible things to the world. Maybe your PR has done terrible things to the world, that could also be true, but it's much more likely that you're hitting this. So keep that in mind if you are contributing: the world may be on fire, and it may not be your fault.

As a reminder from Ed, there are path changes. Do you want to speak about that? Yeah, I'll speak about this briefly. We had talked before about effectively making the NSM forwarders essentially just a cross-connect network service, and we had also talked before about the path changes. I wanted to reiterate that here, not so much to walk through them again, unless there's a strong desire from people, in which case I'd be delighted to, but mostly because there's a link there to the deck that I presented last meeting that talks about them. There's also a link to the activity diagram that details how the healing works for them, and it's a huge simplification of the healing process. I'm intending to do some of the work on getting there over this break, but I wanted to make sure that folks took a look: number one, so they can call out where this is going, and number two, just to expand awareness. It turns out to be a massive simplification. I sat down and wrote the SDK snippet for the piece that does healing, and I can't say for sure because I haven't put it in full context to integration-test it, but it looks like healing is now a single file of approximately 100 lines that is 100% of the healing process, which is an incredible simplification. The code for that is pretty simple, if that's what you're interested in. I think the whole thing comes in somewhere between 700 and 1,000 lines of code.
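[Editor's note] To make the "heal in ~100 lines" claim concrete, here is a hedged sketch of the core idea as described in the meeting: watch for broken connections and simply re-issue the original request. None of these types (`Connection`, `Requester`, `healOnce`) are from the real NSM SDK; they are hypothetical stand-ins for illustration only.

```go
package main

// Hypothetical, simplified model of event-driven healing. The real SDK
// operates on gRPC NetworkServiceRequests; here a Connection is just a
// value and a Requester is any function that can re-establish it.

type Connection struct {
	ID       string
	Endpoint string
	Healthy  bool
}

type Requester func(conn Connection) Connection

// healOnce scans the current connections and re-requests any unhealthy
// one, returning the repaired set. The point is that "healing" reduces
// to "replay the request", with no separate state machine.
func healOnce(conns []Connection, request Requester) []Connection {
	out := make([]Connection, 0, len(conns))
	for _, c := range conns {
		if !c.Healthy {
			c = request(c) // re-issue the original request
		}
		out = append(out, c)
	}
	return out
}
```

The simplification the speaker describes is exactly this shape: because the request chain is idempotent, the heal path can reuse it instead of maintaining parallel recovery logic.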
So there was a lot that we were able to cut with this. If you're interested, definitely read through the code and ask questions if any pop up. Cool.

Ivan, can you scroll down on your screen so we can see the rest of the agenda? Thank you. So we have virtual layer 3 (vL3) NSM manager integration open items. Can I just quickly... I put the K8s 1.17 item before that, just a quick update from me. I apologize, I missed it. Sorry, Tim, I know that you have much more important things to discuss. So, K8s 1.17: on the examples repo, which is running nightlies, it uses the latest kind, and kind already does 1.17 as a baseline image. It's okay, it works there. I have done a PR today which actually requires some changes in some of the components, but nothing really major. It's there; we'll probably try to upgrade before Christmas if everything goes fine. For the time being we cannot verify it on Packet, which makes full verification impossible. So we are more or less ready for it; we just need a couple more verifications, and we know that NSM runs fine on top of it. We just have to adapt to the 1.17 API if we want to go fully 1.17. That's it. Thank you for the update.

And with that, Tim, you would like to speak about the virtual layer 3 NSM manager integration open items, and it looks like you have a presentation. Could you stop the share for a moment so that Tim can present? Well, I did add the presentation; this is essentially the presentation from NSM Con. If you want to show it, please do; if you don't feel comfortable... Yeah, I'll show it, and we can just discuss. Sorry while I bring this up. Sorry for dropping it on you like that. No, no, that's fine. I'm not used to Zoom sharing, so I'm taking a second here. I think this is the right one. No, that's a different presentation.
So the point of this immediate agenda item is that I have started pushing changes. Ed, a while back, made some changes to the API that allow, essentially at the semantic level of the API definition, a lot of what I was doing with the vL3 work. Real quick, point of order: that particular change wasn't made for this; it was a side effect of the unification of remote and local. There are certain things that are now expressible that were not expressible before. Right. So basically, I was rebasing a lot of the changes I had onto that. What I presented at NSM Con was off an earlier point of NSM, and this has been in the works for a while, but it was more just a proof of concept; now I've started making changes and fixes based on rebasing on top of the latest API. And Nikolai, as expected, reviewed those changes and had some very real concerns about what we're doing with virtual layer 3, so we wanted to discuss that in this meeting.

What I think is that the difference from a normal NSE is that this type of NSE needs to have connections to other NSEs of the same type, within the same network service. We're not creating a new network service per NSE and somehow making connection requests within that new network service.
What I did was make a virtual layer 3 NSE do a find-network-service call to the network service registry to find all of the network service endpoints for that network service, then pick out the ones that were virtual layer 3 NSEs, and fire off connection requests to those specific NSEs. So essentially the network service endpoint is doing what another network service manager would be doing: selecting NSEs for a connection request and firing off those connection requests. And I don't think you can currently express that semantics in the CRD for a network service. There's no way to indicate that a specific NSE needs to connect to all the other NSEs of that same type, or maybe even multiple connections per NSE. So that was what we wanted to talk about. Nikolai, do you want to add anything there?

Yeah, I think that makes it slightly more clear to me now, so thanks. We can summarize the problem like this: today, if you want to consume a service via the SDK, you effectively construct a client and tell that client, okay, I would like to consume this-and-that service, named like this, and I'm sending these-and-these labels. Then we let the network service manager find all the endpoints matching those labels, and it round-robins across them. Now what you want is effectively a way for the client to say: please connect me to all the endpoints that match this criteria. Yeah, exactly.

So if I can make a suggestion: there are a couple of things I've been thinking about for this particular problem.
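[Editor's note] The selection flow Tim describes, query the registry, keep the peers of the same type, skip yourself, connect to the rest, can be sketched like this. These types and the `"type": "vl3"` label convention are hypothetical illustrations, not the real NSM registry API.

```go
package main

// Hypothetical sketch of the vL3 endpoint's peer discovery: the endpoint
// asks the registry for every endpoint of its own network service, keeps
// the ones advertising the vL3 type, and skips itself. The caller would
// then issue one connection request per returned peer.

type Endpoint struct {
	Name    string
	Service string
	Labels  map[string]string
}

func peersToConnect(registry []Endpoint, self Endpoint) []Endpoint {
	var peers []Endpoint
	for _, e := range registry {
		// Skip ourselves and endpoints of other network services.
		if e.Name == self.Name || e.Service != self.Service {
			continue
		}
		// Assumed label convention marking vL3-capable endpoints.
		if e.Labels["type"] == "vl3" {
			peers = append(peers, e)
		}
	}
	return peers
}
```

This is exactly the part that makes the NSE behave like a manager: the select-and-connect decision happens in the endpoint rather than in the network service manager, which is the concern raised in the discussion.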
It strikes me that one of the whole points of the way we structured the SDK is to simplify things: we break it up into snippets that you can chain together, because that allows you to compose simple pieces that each do one thematic thing and do it well. It also gives you the freedom to sub out pieces if you want to, so if the very, very simple IPAM we've provided doesn't float your boat, you can write your own IPAM, put it into the chain instead, and you're good to go. Would it make sense in your mind, Tim, to simply write a snippet that does the behavior you need, and then sub that in at the appropriate place? I mean, I guess the question is... okay, wait, I see where the problem is: we've got chaining on the endpoint side, but we don't have chaining on the client side. Is that the root of this?

Well, I think it's more high level. I think what we're talking about is that the role of the NSE in this proof of concept is doing endpoint selection and connection requests to exact endpoints, and that is not conforming to everybody's idea of what an NSE should do. So we wanted to up-level the conversation on whether this should be an NSE function, and then, if so, whether we want a way to describe that this is allowed, or that this is going to happen.

Yeah, but the thing that occurs to me is this: in my mind, effectively what you've got is a situation where you have a particular NSE of some kind, and it wants to go connect to other NSEs, for whatever reason, and however it wants to do it. Exactly. And so it strikes me that logically you want to be able to simply write a new snippet that does it the way you happen to want to do it for yourself.
And what I think I'm hearing is that we've got too much going on in one of the snippets. I wouldn't want to lard a single snippet with all the possible ways that somebody might want to connect their NSE to someone else; that strikes me as likely the wrong solution. Does that make sense? So when you're referring to snippets, you mean like chain endpoints? Yes. By the way, we need a good name for these, because "snippets" is not a great name; if somebody has a really good idea for what to call those pieces, that would be welcome. I've been calling them snippets. It's not great.

But effectively, here's what I mean. First, I don't want to bypass Nikolai's concern: when the manager is operating on a gRPC request that's a network service request, and it has fully filled-out endpoint information, what should it be checking or authorizing, essentially allowing, in that scenario? That's a second issue, and I think it's a valid one. The way I've been thinking about that issue, frankly, is that it's an issue of policy, because I can see not generally wanting to allow that. You could certainly want, as a matter of policy, to say: I actually want things to go where the network service manager is directing them; I can see that as a policy issue. But we're clearly going to have to have some policies that permit it, if for no other case than healing. Because if a network service manager goes down and comes back, it has no idea what's going on, and the client basically reconnects to it and says: hey man, I had this connection, here's the path, and you can see the authorization token you stuck in there.
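[Editor's note] The "snippets chained together" structure being discussed can be sketched as plain function composition. This is a deliberately simplified model, the real SDK chains gRPC `NetworkServiceServer` implementations, but it shows why swapping one element (say, a custom IPAM or a custom peer-selection snippet) is cheap: you only replace one link. All names here are hypothetical.

```go
package main

// Hypothetical chain-element model. Each Element handles a Request and
// decides whether/how to call the next element in the chain.

type Request struct {
	Labels map[string]string
}

type Element func(req Request, next func(Request) Request) Request

// chain composes elements so that elements[0] runs first and each one
// can delegate to the rest of the chain via next.
func chain(elements ...Element) func(Request) Request {
	handler := func(req Request) Request { return req } // end of chain
	for i := len(elements) - 1; i >= 0; i-- {
		el := elements[i]
		next := handler
		handler = func(req Request) Request { return el(req, next) }
	}
	return handler
}
```

In this model, Tim's vL3 behavior would just be one more `Element` substituted into the endpoint's chain, rather than a complication of an existing one.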
Obviously, in that case we ought to respect the specified network service endpoint. So I think we probably want to factor that out into something like OPA policy in general. Then you could say: okay, we're going to run with a default policy that allows certain things or not, and it's up to the user who they want to allow to behave in different ways. Does that make sense at all? Yeah, I know you and I have talked about that, and I would let Nikolai...

I'm a bit lost. Can you please show slide 17? Because there you have three points. Yeah, okay. So effectively, the way I see it is that node A wants to say: I want to connect to all the other endpoints of the same type, whatever that means for us, a certain type, and it doesn't know how many there are. It doesn't know their labels; it doesn't know anything about them. It only knows: okay, there are certain endpoints living out there, and I want connections to all of them. Today, what we can express in our SDK and our API is: I want to connect to any one that matches, and it's only one; you don't know how many are out there. That's the problem, for me. So if we have a notion in the SDK where we can tell the network service manager "please return me connections to all the endpoints that match", that would solve the problem in this particular case. Is there a bigger problem? I don't know. Maybe. Yeah, go ahead.

Something that would help me understand a little bit more is one of the use cases behind this. I'm not trying to invalidate the concept, quite the opposite, but it would make it a little more understandable what we're trying to solve. Sure. The original use case that we started with was this: we have the idea...
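[Editor's note] The policy idea raised here, deny client-side endpoint selection by default, but allow it for healing reconnects or an explicit opt-in, can be sketched as a single decision function. This is an illustration of the shape of the policy, not actual OPA/Rego and not the real NSM authorization code; all field names are hypothetical.

```go
package main

// Sketch of a default authorization decision for requests that name a
// specific endpoint instead of letting the manager choose one.

type PolicyInput struct {
	NamesSpecificEndpoint bool // request pins a concrete NSE
	HasValidPathToken     bool // token from a prior connection (the healing case)
	ClientSelectionOptIn  bool // deployer explicitly allowed client-side selection
}

// allowDirectEndpoint is the "default policy" discussed above: manager-
// directed requests always pass; pinned endpoints pass only with a prior
// path token or an explicit opt-in.
func allowDirectEndpoint(in PolicyInput) bool {
	if !in.NamesSpecificEndpoint {
		return true // the manager picks the endpoint; nothing to check
	}
	return in.HasValidPathToken || in.ClientSelectionOptIn
}
```

Factoring this out, as suggested, means the vL3 use case becomes a policy configuration rather than a special case baked into the manager.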
At Cisco, we are really interested in multi-cloud solutions: interconnecting application services across multiple clouds. We have some VPN-related solutions where we essentially set up inter-cluster pod networking across a VPN tunnel or DMVPN, using a transit VPC or something like that. But that approach has a lot of security problems. We liked NSM because we could operate at the workload level: we can have network services that bind to specific application pods in different clusters, and then we could create an NSE, or an NSE type, behind which we hide the semantics for inter-cluster communication, or at least inter-cluster routing. And NSM also has the data plane functionality to do the inter-cluster piece, where we could add inter-cluster connectivity value as well. So this is the use case. We're using database pods as an example, but we want specific workloads in different clusters to communicate only amongst each other via NSM; we don't want to expose the entire Kubernetes cluster network to another cloud. Does that make sense?

Yeah, I think so. Let me give one concrete example, and tell me if I'm correct. You might have a database connected to multiple other databases on multiple clouds. Say you set it up with five replicas, each in a different region or cloud or so on. For the database to work, each instance needs to be able to talk to all four other instances in order to operate properly. And what you're trying to do is say: please give me a connection to all of them, so that I may communicate with them directly, and not have to communicate with just one. Is that correct?
Exactly right. In the demo that I showed, say the Azure DB pod was the MySQL master, and all of the other DB pods were replica slaves, in the other apps, for whatever geo-redundancy or differentiated app experience you wanted. They all need to access a replica from this master, and you want to have, in effect, a private replica network. That was what we were thinking of for NSM virtual layer 3, as a concrete example.

Okay. And so, Ed, Nikolai, correct me if I'm wrong: without the NSM SDK, doing this manually, we can get a list of all the endpoints from the registry, and from there we do have a manual path to make this happen. Is that assumption correct? Yes, and that is what's implemented here. My objection to this approach is that this example specifically is pretty important; everybody is looking at it. Whatever we set as an example of how to use NSM is going to be replicated very quickly; that's my understanding, my belief. So we should be very careful about what message we send with this specific example.

Yes, effectively we need to make sure that we do it nicely and cleanly, and that we make a wise choice about what we do and how we do it. Right. And this sort of gets to my point: it's a really good example of how you could write a custom snippet to do something interesting and sub it in for an existing snippet from the SDK. That's part of why I was musing, and please note, musing, because I've not looked closely at it, that rather than making an existing piece of the SDK more complicated, it might be a good idea to look at the possibility of writing an alternate chunk.
An alternate snippet that we could make sure is nice and clean. Yeah, that follows along with my thinking as well: keep the simple use case as simple as possible, because that should cover 95 percent or more of most people's usage. But what you described is also important. I'll give you an example from the client side where this matters. When you want to connect to etcd as a client, you can specify multiple servers to connect to, so that if one server goes down, you can immediately ask another server to continue with your request; you don't just lose your connection and have your client break on you. So there are also use cases from the client side, where certain databases designed for HA have this sort of requirement. But you don't generally connect to all of them; you connect to maybe two others, usually no more than three. Whereas for etcd itself, you definitely want etcd nodes connecting to all the others, because when they do their votes, they need to make sure they can handle those votes properly, even once they switch back to talking to their single leader again. So it's definitely a very important use case, and we need to make sure that use case is easy for people to consume, but we have to make sure it's not done in a way that encourages people who don't need that particular use case to take it on.

Right, right, I agree. So going forward, the definition of the network service in this proof of concept is really simple: it's just the default match, where the default route goes to the single virtual layer 3 NSE type. And if we had any other network services, we would probably add them.
Essentially, if we had any other network service endpoint types, we would add them in the definition prior to the virtual layer 3 NSE. So this would be like the end of the chain, and then it would go and find all the other virtual layer 3 NSEs, what I call the cluster. I just want to make sure we've discussed maybe conveying that this is a special, terminal type of NSE, or that there's some other way of saying that this thing will create connections with all the other NSEs of this type that are in the registry. Do you know what I mean? Does that make sense?

Yeah, I totally get what you're suggesting there. So the thing that occurs to me is that we have this lovely thing with an incredibly powerful set of matches in the network service, but it's not infinitely powerful. It's not always going to express everything the way you would like it to; it's just going to give you a really, really good 90%, and you can take two paths from there. You can either make the matching structure infinitely complex so that it can express infinite possibility, though we've tried to keep it as simple as we can, or you can allow a safety valve where you simply say: look, at some point, if what you're doing is sufficiently bizarre, you're just going to have to sort this shit out yourself. Okay, right. So my current thinking, and I'm very much open to other thoughts here, is that what I really want to make sure we get right with the vL3 is not incorporating arbitrary complexity into the matching structure of the network service, but rather making sure that the "okay, you've gotten so complicated, you're going to do your own thing" pattern is expressed well and cleanly.
Does that sort of mesh with what I think you're saying, Nikolai?

Yes, and if I may, I would like to add a couple of thoughts myself.

Please.

Okay, so it looks to me that the vL3 case should be slightly more dynamic. Even if you're able to enumerate all the endpoints at a particular point in time, the set of endpoints is eventually going to change; you should be able to add more, say, at another cloud. So maybe this is the time to think a little bit in the direction of operators. Should we have, in our SDK, a way to react to changes in the set of registered endpoints within the service? Should we have something like this?

That's certainly one possibility. Another possibility we may want to consider: if you look at the structure of our other API, the network service API, it has a monitor connections call. Do we want something like a monitor for the registry, so that if you're playing sufficiently complicated games, you can ask to be updated when a network service changes?

You beat me to it. I was actually going to suggest that we take a look at how etcd handles some of these things from the client side, because it could help guide our direction here. One of the things you can do with etcd, and I forget the exact term for it, so I'm going to approximate: you have a server registry, and that registry monitors heartbeats from the various servers; when it stops receiving a heartbeat from one, it removes it from the list. So instead of having to find a bunch of servers and ask each one what it thinks the state of the world is,
You have a limited number of systems you can connect to, effectively backed by a Kubernetes service or something similar, which will feed you a list of endpoints to connect to. At the same time, it's useful to have this endpoint update its list of servers as well, so that as things join or quit, you can make use of that. So I think a monitor on that particular path would definitely make sense.

You could monitor the CRD, but the thing I don't like about monitoring the CRD directly is that it ties the endpoint, or whatever wants all the connections, to a Kubernetes-specific CRD. Whereas if we provide a gRPC endpoint for this, then if you decide to implement the registry using MySQL or some other system, your code doesn't break, as long as the registry properly reports change events.

So how are these monitoring mechanisms going to work across clouds, like inter-domain?

That's a good question. I would say the same way monitoring connections works across clouds right now: you make a monitor connections gRPC call. Somewhere there is a registry that you are addressing, and if you make a monitor call, you make it to that registry.

Okay, then that sounds like, let's say, the NSM way of doing things. I don't know if we have an agreement here, but it at least sounds like a good direction to explore and see where it goes.

I would actually characterize this a bit more as: we've had a positive, productive initial conversation where we all agree that there are promising solutions in various directions, and we should probably, as you said, explore more.

Yeah, so how do we want to proceed?
I mean, should I turn these slides into more of a document and post it for review, or just keep pushing PRs for review?

My recommendation would be to start off with what you said: stick it into a Google doc. You know where our specs for this live.

Yeah, Ed's pointed me at it; I can get all this info from Ed pretty easily.

That'd be fantastic, because instead of trying to chase down a bunch of places, it gives us a single point we can look at. I think that'd be a fantastic place to continue this. And feel free as well, if you need any help or have any questions; I know we're not going to be around for the next two weeks.

Yeah, I won't either.

But feel free to re-add this to the agenda if you feel things are stagnating or getting blocked, so we can talk more about it. Let's add this to a Google doc and definitely document and discuss there.

Yeah, I've probably been the limiting factor in making forward progress on this. And I know there's some interest from other people, so if anybody else wants to help out, just ping me; I should be on the Slack for the next four days or so.

And in the spirit of all open source projects, to put it on the friendlier side: we're happy with any contributions you're able to give, and we fully understand and respect it if you don't have the time. Don't feel compelled to push on this if you don't feel you have the time, but we're always appreciative of any contributions you make, including this one.

Yeah, thanks.

And the example and the demo you have are already very impressive. They also have a lot of people super excited.
Yeah, not specifically related to this, but on the NSM inter-domain stuff: I'm going to try to push some documentation updates as well. I pushed some PR fixes for the Helm charts, because I use Helm all the time for installing; I don't use the make system. I'll try to add essentially things that look kind of like this, with some more detail on how things are communicating, why, what the port mappings are, and things like that. I think that's needed in the docs to even understand what's going on. And then maybe some usage example information. That's my plan for the next few days, and then I'll try to get this virtual layer 3 content into a shared Google doc; if anybody wants to help out or review, I'll post it through the Slack.

I'm very much interested in adding proper inter-domain examples, for example in the examples repo. Now that we have two kind deployments, it should be easy.

Sorry, I didn't catch that. You have two kind deployments?

In our main repo, we have recently added the possibility to deploy two kind clusters in parallel with the make machinery. But I'm still not sure how to utilize them properly.

Okay. I've been trying to look at that as well, just manually. So yeah, I'll look at that; that would be really cool.

So, is there anything else we need to discuss on this? Because we're hitting the top of the hour.

Yeah, I just wanted to ask if the recording from December 3rd is available on YouTube, because I don't see it. Is there any reason for that?

So, the recordings are not posted automatically. They're posted by a very nice person from the CNCF who downloads and uploads them manually. My guess is she may have gone on vacation or something like that.
But we can ping and ask; perhaps it was just forgotten. So we can ask about that.

Okay. Yeah, that would be great.

And if that person happens to be listening at the tail end of this: thank you very much.

Yeah, because I found the next one from December 10th, but I wanted to see the SR-IOV demo that was shown there.

That's actually a good point. Mistakes occasionally happen, so maybe one got missed.

Yeah, that's almost certainly what it was, just an oversight. Thanks for bringing it up; we'll ask about that.

Okay, awesome. Because this is apparently our last call for the year, I would like to thank the community here. It has been an awesome year for the project; we started a lot smaller than we are today. So thank you all, and I hope that 2020 is going to be amazing.

Yep. Thank you. Thank you, everyone. Thank you very much. And everyone have a happy new year; we will see you all next year.

Yeah. January 7th.

January 7th, yeah. Bye. Bye. Take care. Bye, everyone.