each week, because I noticed people get confused and start writing in the soon-to-be-copied section instead of the today's-date section. Oh yeah, last week too. Either move this all the way to the end of the doc, which would be cumbersome because you'd have to scroll, or just stick it in its own doc and move it over each week. And if you look at attendees and so on, people kind of split where they sign in. It's a little confusing. Yeah, I know that can be confusing. Speaking of which, we should probably do the traditional share of the actual agenda and meeting minutes. And if you could add yourself to the attendees, please, that's super helpful. You just typed your name into what's really supposed to be next week's minutes. OK, well, it did say September 3rd. Right, next meeting. Oh, and now they got rid of that, because there's also a September 3rd directly below that, too. We've got two calls every other Tuesday, and the meeting notes get copied over prior to 10 o'clock. Oh, OK, 8 AM GMT. It's probably worth carefully marking. OK, you have actually carefully marked this: 8 AM GMT, 10 AM CT. OK, cool. So yeah, that explains it. It's the China meeting. Got it. So I'm actually going to add something here for a readout from the China meeting. We could color code it, maybe have the earlier time be blue. I'm a huge fan of appropriate color. I'm not terribly great at selecting color palettes myself, but if you give me a color palette, I will use the hell out of it, as many of you have seen in the slides that I produce.

OK, so we're six after; you want to get going, Frederick? Sure, let's get started. So welcome to the service mesh meeting. We have this particular meeting scheduled every week at 8 AM Pacific. We have two meetings which are currently on hiatus: the NSM docs meeting, which occurs every Wednesday at 8 AM when it's not on hiatus, and the same with the use case meeting, which occurs every second, fourth, and fifth Monday. And we also have a new meeting, which occurs every two weeks, right? Right, Nikolai? Nikolai is not actually on the call today; I think he's on vacation. Ah, well. I think Radislava or Ivana should be able to comment as well. Yeah, Nikolai is on PTO. And yeah, this Asia-friendly meeting is happening every two weeks, you're right. Great, and I believe we just had a meeting today, so the next meeting will be two weeks from now, which should be, let's see, today is about the third, so maybe the 16th? Yeah, possibly the 16th. And we also have the CNCF Telecom User Group, which occurs every first Monday at 8 AM Pacific and every third Monday at 4 AM Pacific. The next meeting will be Monday, September 16th at 4 AM. That's not one that we run, but it is one that we participate in. And I want to remove the CNCF networking group at the moment, because I don't think that's currently running.

So, major events coming up. We have ONS Europe in Antwerp with four accepted talks, a Telecom User Group meetup, and a CNCF testbed tutorial. We have Open Source Summit coming up in Lyon with one talk accepted by Ivana and Radislava. We have KubeCon and CloudNativeCon North America coming up November 18 through 21, with NSMCon announced with a call for proposals. And we'd like to remind you that we're aiming to close the call for proposals on September 13, so please get your proposals in before then. It should also be noted that we're open for sponsorship on that.
So with that, let's jump to: do we have any announcements that we need to do? Hey, Jeffrey, I'm going to push your thing right after the social media community update, and then we'll jump straight into the call reboot. So, Lucina, are you on? Good day. Yes, I've got a quick update for folks regarding the Network Service Mesh Twitter account. Since last week, 17 new followers, and we hit the 400-followers mark, so I posted a little congrats and thanks on Twitter. 15 new accounts following, and 15 tweets and retweets. I posted about our Open Networking Summit talks coming up in a couple of weeks, as well as a reminder for the two Tuesday working group meetings today. And I also posted today a call for CFPs with the close date of September 3rd. Oh, that's great. Has the OVS Orbit been announced? If so, I'll go ahead and retweet and promote that. Yeah, I forgot to announce it, and I was going to mention it after you, but yes, it was released maybe a day or two ago. One other thing I'll mention that I need to get the details on as well: I believe we're doing a CNCF webinar October 3rd. I need to get all the details on that, so I will try to get that pulled together for you, and once we have a link you can promote that as well. Sounds great, I'll add both of those to this week's plan. Yay. Excellent. So I think it's incredible that we're still getting almost 5% growth week on week. It's just insane in terms of number of followers. Yes, I'm excited. It'll really bump up once we're at Open Networking Summit. Go ahead. Sorry, a little bit of lag on the call. You first. Just thinking of the last KubeCon event, I believe we got 200 followers in that four-day stretch, so I'm excited to see the growth in a couple of weeks once we're at Open Networking Summit. For those of you who are giving talks in various places, it would probably be great if you could include the Twitter handle as part of your talk so that people can go and follow as well. Thank you so much. That concludes my update, and please let me know if there's anything else you'd like to see on the Network Service Mesh account this week. Thank you so very much. You've done an incredible, you continue to do an incredible job. Thank you. Yeah, I was going to say the same thing; Ed beat me to it. It's very hard to talk faster than I do. Yeah, maybe.

OK, we have the call reboot. Jeffrey, you have the floor. OK, so we're kind of past the summer travel events. I think the only one in the EU that's right around the corner just happened. But we kind of hit a break, just because there were a lot of conferences right around the June-July timeframe, and I think everybody's kind of back on a normal schedule. Weekly might be a little too often; it seemed like people were getting a little oversaturated, and we probably just need time in between to actually write documentation. But Ed added some suggestions for the guides. I think we should start a guides section. We've got some kind of pre-canned use cases, and I'd like to write idiot's guides for getting DNS or inter-domain working. I'm doing a lot of multi-cluster stuff, so I'll probably dive in pretty hard on the inter-domain one. But like you said, we've got the glossary done. We can boot back up the picture-book spreadsheet that goes along with the glossary, the one that gives visuals with the definitions. And then, like I said, we should do these guides.
Additionally, I'd like to maybe, at least once a month once we get closer to the different events, also have kind of a peer review for people who want to bring the presentations they're going to take to these different talks. We can help with technical accuracy and give suggestions if people get writer's block, like "I don't really know how to make this flow." I want to make it basically a collaborative space for people who are looking for help as they're getting their talks out there. And then, yeah, I'll open it up and get people's thoughts on maybe doing it bi-weekly as opposed to weekly, et cetera. And if there's anything else we want to add to the documentation section besides the guides and the overviews, feel free. Yeah, I think having a space for people to review talks might be very, very helpful. I know often just having another set of eyes looking at what you're doing for your talk helps you discover places where you're unclear, where you could be more clear.

So we have the readout for the bi-weekly Asia-friendly NSM meeting. Let's see, there is a bookmark. So who wants to do the readout? Should I do it, or do you want to? A quick comment, I was on mute, so sorry. I think, speaking personally, a lot of people feel this when they're doing presentations: you feel like you don't want to take other people's slides. I think it would be good if we kind of had a steal-with-pride set of slides, so that if there's some introductory text or something, we can, you know. No, I think that's brilliant, and we definitely want to encourage that, Brian. I know we do have the folder up on Google for Network Service Mesh that is chock-a-block full of things where the entire reason they're there is to be stolen. So please, steal with pride. Okay. Yeah, that goes for anything that's on the website as well. And I'd also like to point out that it's under a permissive license, so you can take things there and know that you're licensed to do so. That's actually a really good point, Brian. I typically have a page with a QR code on most of my slides that takes you to where the slides are. I think I'll definitely add a steal-with-pride statement to that page. Yeah, I think that's good, maybe. Cool.

Sorry, back to the bi-weekly Asia-friendly readout. Great, so let's see, let me jump up and find the right one. So they also added, let me see if I can find the right location. Okay, so the bi-weekly announcements included, it looks like, a set of Q and As that they added to the bottom, so we should probably get to them. The questions were: IPs are for clients and endpoints, and they don't have any for the rest of the chain, so people are asking how routing works. What do we want to do in these scenarios? Do we want to answer them here, and then we can link them back to the recording at a future time? Or another option, which I think would be a bit better, might be to copy these questions to a Google doc, and then we can do a readout in the Asia meeting; that way we get data flowing to and from. What do the folks think who are actually in the Asia meeting? I think these questions are very useful, maybe for project documentation. They received their answers during the meeting. So it's not specifically for that meeting, but we found it useful to write down the questions that people may have. Cool. No, this is super good, because people do have questions.
And it's almost impossible, when you're too deep in something, to fully understand the questions that would occur to people when they're just getting used to it. So getting these questions out and getting answers to them is incredibly useful. Yeah, and in fact, it's difficult to get people to ask questions as well, because people often feel like they don't want to look bad or show that they don't understand something. We're better off asking questions than not, but the human aspect prevents us from asking on occasion. Network Service Mesh is not super complicated, but it's very different from the stuff that we've spent the last 40 years wrapping our heads around. So it is a new space. Yeah, and more specifically, this is something that I think we're going to see in a community; I've had a couple of conversations with people within the community about this. People are sometimes afraid to come talk to either me or Ed directly, but when they talk with all of you, they may voice things that they don't fully understand and bring up concerns because of that. Or there might be concerns that are legitimate, but they won't bubble them up to us because they feel like they don't want to criticize, or so on. So my recommendation is: don't worry about it if you feel like it's criticism. If you hear something that sounds like it would be interesting to talk about, feel free to bubble it up. You don't have to tell us who it came from if you or they don't feel comfortable. Part of what we're trying to develop is this healthy flow of feedback, because it means one of two things: either we did not explain something well enough, in which case it's our fault that we didn't explain it well enough, or we genuinely missed something important and it needs to be brought up, in which case that's our fault as well. But we're not mind readers, so we need this feedback and dialogue in order to resolve things. It has to be a conversation. Cool.

So this question is on auto-heal. The question was on how auto-heal is triggered, and the answer appears to be based on updating the specification with connection details over gRPC. And there was a question on whether we can make the trigger more detailed than the namespace. Do we have a notion of what people had in mind with that question about making the trigger more detailed than the namespace? Actually, the questions below auto-heal were asked at the previous meeting that we had, and Andre answered those, so if he's on, he can go ahead. Awesome. No, Andre is a good source for such things. Yeah, yeah. Cool. Okay, awesome. All right then, anything else from the Asia-friendly meeting before we bounce back up to the main agenda? Yeah, there were a couple more quick questions, on intra-domain use cases and on how many people were getting involved. I think these are things I'd like us to come up with better answers for, and when I say better, I mean I don't know how they were answered in the Asia meeting, and I'm sure it was sufficient in that scenario. But I would like to actually come up with some good literature, a place where, as we get asked these kinds of questions over time, we can add them. Okay, sounds good. I mean, this might also be something, I don't know, Jeffrey, do you want to try and see if we can take some of these questions and maybe produce a FAQ of some sort in the docs call?
Yeah, I mean, we should probably just have a Q&A section on the website that we maintain with the really prevalent questions. I'm kind of curious about the second-to-last answer, too. I mean, is the plan now to start messing with the CNI? Yeah, there are some interesting discussions there. One of the things we've sort of fundamentally done is say, look, leave the CNI plugin alone, because people have radically different opinions about which CNI plugins they want to use. But at the same time, we do keep coming across people who want to be able to interpose things in front of the CNI, like, say, some complicated IPS or some other thing that provides value you want and that you're not going to get from a CNI. So you love your Calico, you don't want to switch out your Calico, but there's something extra you would like, that kind of thing. And we are getting a bunch of people raising that. So I think we want to maintain the position of: you don't have to change your CNI, you don't have to change your Kubernetes, but there might be something interesting to explore there. Okay. Yeah, because part of what I think I'm hearing in your question there, Jeffrey, is "don't make me change my CNI," is that fair? Yeah, one of the big draws of NSM for me was the fact that it's completely orthogonal to the CNI, but people will do what they'll do. To the original question, though, we absolutely need to start bringing these in. I should probably comb the mailing list too and find some of the relevant questions. Obviously the ones with giant, four-paragraph answers and such aren't the right types of things, but for a little Q&A thing we might even collectively, on the call, just recollect some of the previously commonly asked questions, add some answers, do a .md or something in the Git repo, and then put it on the website as well. Yep, yep, sounds like a good plan.

Yeah, and for clarity in terms of the CNI piece, part of what we're looking at is an interpose. So basically, think of it like this: I was talking with one security vendor recently, as an example, and one of the problems they're having is that they want an intrusion detection and security system that they can inject into the network in Kubernetes, but they don't have any way to position themselves between the pod and the network. And so this would give them the possibility of potentially injecting themselves in the middle, with the connection then going on to the CNI after that. But the main use cases are centered around being orthogonal, and I think that's where we would still prefer people go with this. Let me make a very unequivocal statement that I think most of the main contributors to Network Service Mesh will agree with: whatever else we do, the ability to use NSM as completely orthogonal to the CNI will remain. Exactly. Does that make you feel better, Jeffrey? A little bit. I get it, you guys are product managers; you're trying to expand your customer base. The purist in me gets a little queasy. Yeah, I mean, the trick there is to not spoil what actually is magic when you do that. And hopefully, as I said, the ability to simply come in as an app, as an orthogonal thing, is a big part of that in my mind.

Well, so we have state of the project. Ed, you have the floor. Sure. So, recently landed, and correct me if I'm wrong: I believe, Denis, you said that DNS has now fully landed in the repo, correct?
Yes, hello, that is correct. Awesome. And do we have documentation for how folks can use it yet, or is that still in the works? I've updated the documentation, but if you run into any problems, let me know and I'll update it further. Cool. For folks who have been paying attention to the DNS side, this is actually cool, because one of the things that is true is: if I have a network service and my pod connects to a network service, or even worse, my pod connects to multiple network services, I need to be able to get DNS from those network services, but I also need to continue to get the DNS from my Kubernetes. And so what this DNS feature does is allow those network services, when they come up, to return a DNS context that will cause us to fan out across any and all DNSes that are provided by the network services, as well as the Kubernetes DNS, and get responses from all of them; whoever comes across the finish line with a positive response first, that's the DNS answer we return to the client. And the result is that you can actually be very, very consumable from applications, because your application simply comes up and does its DNS thing just like it always does, and it can't see any of the details of how the magic works under the covers; but if you're example.com and you have a special magical DNS behind your VPN, that can actually be reachable by that pod. Cool.

And then the first pass at the inter-domain stuff I think has also landed, Artem, is that correct? Yes, you're right. Excellent. And how do we stand on the documentation for the inter-domain stuff, in terms of people who want to go kick the tires? We have some documentation for inter-domain, but I think we have things to improve. Okay, cool. Because I know there are a lot of people very interested, and for those of you who are newer: inter-domain is basically this. Historically speaking, when we've done networking, we've welded our connectivity domain for networking to a particular runtime domain. So you have Kubernetes networking for your cluster, you have a VPC in your AWS, you have the data center network for your data center. It's all essentially welding the networking to the places where things run. With inter-domain, we can finally, to the best of my knowledge for the first time, have connectivity domains that are attached specifically and only to the workloads that are relevant to the work you're doing. So you can essentially cross domain boundaries. You can have a single connectivity domain that is accessible to some, but not all, of the workloads in a runtime domain. So I can have some pods in one Kubernetes cluster, but not the whole cluster, and some pods in another, that can actually get on the same network service and be connected. So that's actually incredibly exciting. Cool.

So then, in progress: I know security was getting very close, and then Ilya went on a well-deserved vacation. Is there anything from here that you wanted to terribly update us on, Ilya? Oh, hi. Actually, today I rebased my PR on the new inter-domain stuff and disabled security for the inter-domain test, because it requires some extra work. So I hope in a few days, or maybe tomorrow, the third PR will be ready. Excellent. Now, this is super exciting. For the folks who are not familiar with SPIFFE and SPIRE, they're sort of the industry best practices for handling identity. And so we will then essentially have secure, authenticatable identity.
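As a rough illustration of the DNS fan-out behaviour described above, racing several resolvers and taking the first positive answer, here is a minimal, self-contained Go sketch. The server addresses and helper names are made up for illustration; NSM's actual implementation lives in its own DNS handling code and will differ.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// resolverFor returns a resolver pinned to one DNS server, e.g. the Kubernetes
// DNS or a server advertised in a network service's DNS context.
func resolverFor(server string) *net.Resolver {
	return &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, server)
		},
	}
}

// lookupFanOut queries all servers concurrently and returns the first positive
// answer, which is the behaviour described for the NSM DNS feature.
func lookupFanOut(ctx context.Context, host string, servers []string) ([]net.IP, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	results := make(chan []net.IP, len(servers))
	for _, s := range servers {
		go func(server string) {
			ips, err := resolverFor(server).LookupIP(ctx, "ip", host)
			if err == nil && len(ips) > 0 {
				results <- ips
			}
		}(s)
	}

	select {
	case ips := <-results:
		return ips, nil // first positive response wins
	case <-ctx.Done():
		return nil, ctx.Err() // nobody answered before the deadline
	}
}

func main() {
	// Hypothetical servers: the cluster DNS plus one provided by a network service.
	servers := []string{"10.96.0.10:53", "172.16.1.53:53"}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	ips, err := lookupFanOut(ctx, "example.com", servers)
	fmt.Println(ips, err)
}
```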
And there are some other cool things coming down the pike, including, I'm hoping, some work to be able to use Open Policy Agent. So rather than your network service endpoint having to understand who it does and doesn't want to admit, you can essentially use OPA for that and delegate; effectively, the mesh then becomes the authority on which clients you'll accept. Oh, are there any updates about service accounts and deploying pods that we discussed? Not so far. I would say we should proceed the way you've been doing service accounts at the moment, because we've been checking with the SPIFFE/SPIRE guys, and that's how they were doing it. I'm still a little suspicious of it, but I've yet to find the right person in Kubernetes land with regard to the service account stuff, so I'm still digging there. I'd say proceed with the stuff that you've got for service accounts. For those of you who were not part of this particular discussion on the PR, there's some question about the right way to set service accounts for Kubernetes, and the documentation is a little thin there. So we've taken one available choice; we just want to make sure it's a canonical available choice. Okay. And since Kubernetes service accounts are one of the things that figure into the selectors for SPIRE when issuing identity, it does matter. The good news is that even if we picked the wrong one, going back and fixing it to the right one is not super hard at the end of the day. Yeah, I think so. Cool. Awesome.

SRv6 support. So, Artem, I know you've been sort of plowing forward on this, including, if I recall, having to push a bug fix upstream to Ligato, and I think you're also working on a bug fix upstream to VPP itself. So you've definitely been busy. Yeah, we still have a few blocking issues on the VPP side. Okay. Awesome. I think we're missing our favorite proponent of SRv6 today, but I'm sure he will be happy to hear it.

Okay, cool. All right, and there's also a discussion I just wanted to bring people's attention to. Right now, the remote mechanism support does VNI selection in the network service manager, and there's talk about moving that from the NSM manager into the NSM forwarder, on issue 1411. That strikes me as probably being a good idea, but I did want to make sure that I gave it a little bit of promotion here as a discussion to look at.

Radislav, do you want to talk about what's going on in the kernel forwarding plane? Yeah, so lately I'm working on adding support for metrics. As you know, yesterday we had a little discussion about how to attribute each metric: for instance, should it be per interface per pod, per interface per namespace, or per connection? So yeah, there are ongoing discussions about that. I believe we have a solution that you suggested yesterday, so we'll look at that. And if you find any problem with the solution I suggested, do bubble it up. I have been known to practice drive-by architecture, which can be dangerous. But it did seem to make more sense to make them part of the labels, because I can think of lots of other useful places that wouldn't consume mechanisms but would consume labels, where it might be useful to know the pod name, or eventually things like the node name for the node you're running on. The nice thing about labels is they're just sort of a bag of strings, so you can use them in all kinds of flexible ways.
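To make the "bag of strings" point concrete, here is a tiny Go sketch of what attaching workload identity to a connection's label map might look like. The key names and environment variables are assumptions for illustration, not NSM's actual keys.

```go
package main

import (
	"fmt"
	"os"
)

// addWorkloadLabels enriches a connection's label map with workload identity so
// that metrics (or anything else that consumes labels) can report which pod,
// and eventually which node, the traffic belongs to.
// The key names here ("podName", "nodeName") are illustrative only.
func addWorkloadLabels(labels map[string]string) map[string]string {
	if labels == nil {
		labels = map[string]string{}
	}
	// In a pod these values are commonly injected via the Kubernetes downward API.
	labels["podName"] = os.Getenv("POD_NAME")
	labels["nodeName"] = os.Getenv("NODE_NAME")
	return labels
}

func main() {
	labels := addWorkloadLabels(map[string]string{"app": "vpn-gateway"})
	fmt.Println(labels)
}
```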
Yeah, actually this was brought up by the work that Ivana is doing with the service mesh interface. Yep, let me bubble that up before the refactoring-to-simplify item, since it's sort of thematically together here. Yeah. Cool. I know, Ivana, you've been working on things around metrics, which is really exciting. Do you want to say a few words? Yes. First, for the pod names, I'm now testing on the client side, and I think exposing them in the endpoint connection labels will be a bit more tricky, so I'll start with the client. You're a little bit faint; I'm having a little trouble hearing. Do you hear me? Now you're brilliant, yes. Okay, so I added the pod name label on the client side and I'm testing this. And yes, maybe next week I'll look at the endpoint, because I'll start preparing my ONS talk as a priority before that. And about the issue that was opened previously with the VPP forwarding plane: Denis took a look last week and found a bug in the Ligato VPP agent, and there was a comment from the VPP agent community members there saying it's deliberately designed not to send metrics, because there would be a lot of traffic and it would reduce performance. And they have, I forget the name, but they have a telemetry piece that is directly integrated with Prometheus. So I think Denis said that he'll look at integrating this with the VPP agent, so for the VPP agent we'll have metrics directly in Prometheus. And that's why I focused the work on the kernel forwarder, because there we won't have Prometheus integration, and I'm now on adding the pod names, because for the wider community we want to expose pod names for better observability from people's side. They want to see which pod communicates with which, and what those metrics are for; if they write a query in Prometheus, they will search for this information. So that's why we need to change that.

No, that's definitely true. But the one thing to keep in mind as you're doing the work, and I don't necessarily know what the result of keeping this in mind will be, is that because we do have inter-domain support, it may possibly be true that I have a pod name in one cluster and a different pod name in another cluster. So you may want to think a little bit about how you want to handle those situations as well. I don't know what the right answer is there; I'm just sort of suggesting it. What I think for inter-domain is that traffic will uniquely belong to one cluster or another, so if someone looks for metrics for a specific pod, it will be fine no matter what the cluster is, and there is only a very rare chance of collision. Oh, so that I think you're right about. It's just something to keep in the back of your mind; inter-domain may not even be an issue here, just something to hold onto as you work through this. Cool, so excellent. Thank you.

All right then. So I think, Andre, you're working on the refactor-to-simplify on the SDK, the SDK evolution stuff. Do you want to say a few words? Yeah, it's mostly ready for review. I just have one test failing in CI, and today we're working on fixing the Packet issue. So I have two pull requests at the moment: one is improving the OpenTracing stuff and one is for the SDK. The SDK one just needs one test to be fixed, so you can take a look at them.
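Circling back to the metrics discussion above, here is a minimal client_golang sketch of what per-pod connection metrics exported to Prometheus could look like. The metric name, label names, and values are invented for illustration and are not the forwarder's real ones.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical per-connection byte counter, labelled by source and destination
// pod so users can query "which pod talks to which" in Prometheus.
var rxBytes = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "nsm_connection_rx_bytes_total",
		Help: "Bytes received on a network service connection (illustrative metric).",
	},
	[]string{"src_pod", "dst_pod"},
)

func main() {
	prometheus.MustRegister(rxBytes)

	// Somewhere in the forwarder's stats loop, counters would be bumped like this.
	rxBytes.WithLabelValues("nsc-client-0", "vpn-gateway-nse-0").Add(1500)

	// Expose /metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```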
I've already asked you guys to take a look at those pull requests. Yeah. So, for folks who haven't been keeping up with the SDK evolution, this is essentially a simplification of how we chain together small bits of functionality in the SDK, because one of the things we realized is that when you're writing a network service endpoint, there is a small amount of stuff you actually care about that's interesting or unique to your network service endpoint, and the vast majority of things you would want to do are the same things that everyone wants to do. And so we've constructed the whole thing as a composable chain where you get little snippets of functionality. So if you, say, want to write a network service endpoint using the VPP agent, you just drop in the thing that will connect the incoming connection to the VPP agent. And then if you want an outgoing client for that incoming connection, there's literally just one line that you drop in to do that, and so on. So it ends up being a nice, simple way to evolve this stuff forward. The other thing Andre mentioned is that we've brought the tracing from Jaeger and OpenTracing all the way into the individual internal chain elements, so you can get a very granular picture of what's actually happening as you go through the chain, not only in terms of functionality and logging, but in terms of timing. So if you, say, sit down and write an NSE and it turns out to be really slow, you can see exactly where it's really slow and why. So it's kind of exciting.

And then I think next up after that is trying to refactor the network service manager in a similar way, so that it becomes easier to reason about, because it's gotten a little complicated in there, and also because we have various people wanting to write proxy network service managers and other things, or network service managers for other environments. This will hopefully make it enormously easier for them to do that by making it simpler, because you then only have to substitute in the piece of the chain that's different for your local environment. And then the last one, which is a little more involved, and you may want to go take a look at the issue, is the linearization of the local-to-remote calls in the network service manager. What this essentially means is that right now, if you get a request from a pod saying, hey, I want to talk to a network service, there's a lot of complicated logic around whether that network service is remote or local, and this branching of logic turns out to create a lot of complication. By simply saying, look, if you get a local request in, we always make a remote request, it vastly simplifies things, even if that remote request is just looping back to the same network service manager. And it turns out this also gives you the ability to create the proxy network service manager that we've been talking about for a while, which I know people have been very excited about. So, cool. Anything else, Andre? Before we move on to Ed, Valentin? No, no, no. Cool.

Ed, thank you so much for making the call. I do appreciate it. Hey, sure. Thanks, Ed. Two Eds are better than one. So, the issue over the Labor Day weekend had to do with a situation that I don't completely understand yet, but in practice was that the NSM project created a whole bunch of requests at once to Packet, I believe to destroy machines.
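Stepping back for a moment to the composable-chain idea described above, here is a toy Go sketch of the general shape: small elements that each handle their piece, delegate to the next, and get wrapped with per-element timing, roughly what the SDK's chaining plus tracing gives you. The types and names are illustrative, not the real SDK interfaces.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Request stands in for an NSM request; Element is one link of the chain.
// Both are illustrative stand-ins, not the real sdk types.
type Request struct{ Labels map[string]string }

type Element interface {
	Name() string
	Handle(ctx context.Context, req *Request, next func(context.Context, *Request) error) error
}

// logElement is a trivial chain element: it logs and delegates to the next one.
type logElement struct{}

func (logElement) Name() string { return "log" }
func (logElement) Handle(ctx context.Context, req *Request, next func(context.Context, *Request) error) error {
	fmt.Println("handling request with labels:", req.Labels)
	return next(ctx, req)
}

// chain composes elements so each does its small piece and delegates onward,
// and wraps every element with a timing measurement so slow links are visible.
func chain(elements ...Element) func(context.Context, *Request) error {
	handler := func(context.Context, *Request) error { return nil } // end of chain
	for i := len(elements) - 1; i >= 0; i-- {
		el, next := elements[i], handler
		handler = func(ctx context.Context, req *Request) error {
			start := time.Now()
			defer func() { fmt.Printf("%s took %s\n", el.Name(), time.Since(start)) }()
			return el.Handle(ctx, req, next)
		}
	}
	return handler
}

func main() {
	h := chain(logElement{}, logElement{})
	_ = h(context.Background(), &Request{Labels: map[string]string{"podName": "nsc-0"}})
}
```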
And due to current limitations in how we handle things, that caused undesired instability in Packet. We know that on our side we need to do better, and so there's work underway to re-architect the request queuing stuff a little bit and also speed up some portions of it. But it also looked like there were changes on the NSM side; there's PR 1546 from Andre, which looks like it will address those. So we'll be watching this closely. We've got internal monitoring where we can break out requests by API key, and if anything comes up, I know how to reach out to you. We're very sorry that we turned out to be a source of instability. We're very grateful to you guys for all you do for us, and so we wanted to make sure that we very rapidly fixed the problem, and I hope that's what we've actually done. Do you want to say a few words, Andre, about what you think the fix was here?

Yeah, the Packet problem was the following. The timeout was set to about 10 minutes for provisioning a cluster, and if the timeout was reached, our cloud testing tool decided that it could not continue with this one and needed to destroy all the resources it had created. And the Packet API returned that destroy is not possible while resources are queued or in a provisioning state, so we need to wait for a status change in any case. So in the pull request I've added the following changes: if this happens, that is, the start fails by timeout or any other error and deleting also causes an error, our testing tool will not try to create any more clusters with this identifier for this cluster provider. So at least it will not flood it with requests. Of course, the build will fail, but at least we'll be able to check what's happening without leaking a lot of resources and hammering the cloud infrastructure.

I mean, one thing I might suggest, if you guys are looking in this area to begin with, because I've actually run into this occasionally even manually: every now and again, for various reasons, servers get wedged coming up, and they can be wedged for a very long time. And if it's been going long enough that it's super obviously wedged, then I really can't do anything, because I can't delete it, and it's clearly never going to make it. So if it would be possible to queue up a deprovision request for that thing at any stage, even if it's not fully provisioned, that would potentially be very helpful. Yeah, I saw the error message that came back from our library, something to the effect of "you can't delete it, it hasn't been created yet." Which is normally a really smart thing, unless it's been wedged in the process of creation, which is usually the case when we hit a timeout. Right, and the timeout I believe on our side is set for 25 minutes, but you've been deploying these frequently enough that you know that if it's not up within whatever that time is, waiting an extra 10 minutes is not going to do anything. Generally, I mean, you guys continue to improve, so who knows, but yeah, generally. Yeah, okay. Well, I think I saw a snippet of that error message that came from packngo, but I don't have it in front of me. If it's in a PR or can somehow be brought to my attention, the specific condition of "you can't delete it, it hasn't been created yet." Yeah, I think it's in the other pull request, a different pull request; there was a second pull request. If you could maybe add it as a comment on 1546, since he's already watching that. Yeah.
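As a rough sketch of the wait-before-destroy behaviour Andre describes, that is, don't issue the delete while the provider still reports the resource as queued or provisioning, and give up cleanly rather than flooding the API, here is a generic Go example. The DeviceClient interface and state strings are hypothetical stand-ins, not the real cloud-testing tool or packngo code.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// DeviceClient is a stand-in for the cloud provider API used by the testing tool.
type DeviceClient interface {
	State(ctx context.Context, id string) (string, error)
	Delete(ctx context.Context, id string) error
}

// destroyWhenPossible polls until the device is out of the "queued"/"provisioning"
// states (where the API rejects deletes), then issues the delete. If the device
// never becomes deletable it returns an error instead of retrying forever, so a
// wedged machine does not turn into a flood of failed requests.
func destroyWhenPossible(ctx context.Context, c DeviceClient, id string) error {
	tick := time.NewTicker(15 * time.Second)
	defer tick.Stop()
	for {
		state, err := c.State(ctx, id)
		if err != nil {
			return err
		}
		if state != "queued" && state != "provisioning" {
			return c.Delete(ctx, id)
		}
		select {
		case <-ctx.Done():
			return errors.New("gave up waiting for device to become deletable: " + id)
		case <-tick.C:
			// still provisioning; poll again
		}
	}
}

func main() {
	fmt.Println("wire destroyWhenPossible up to a real client to use it")
}
```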
That would probably be a good place for him to find that information, because then it's one place for him to look. And by the way, I presume, Andre, that this means my sort of running-around-in-circles PR — there's a famous saying in English: "when in danger or in doubt, run in circles, scream and shout" — and I pushed a PR that was sort of like that, to reduce the retry count. I presume I should go ahead and close that, Andre, correct? Yeah, yeah, yeah. Also, I found one more thing — not an issue, but an option, at least for Packet. It's possible to set an automatic cluster destroy timestamp, for example, destroy in a few hours. So probably I will try to set this value as well. Oh, that would be fantastic, because I know... Potentially it will prevent leaking of clusters. That's a wonderful option, because the bane of our existence everywhere is leaking resources and... Right. And that's a cloud-wide problem, right? That's not specific to anyone. I've occasionally had to go delete tens of Kubernetes clusters from various public cloud providers. Yeah, at the moment the problem is with CircleCI: sometimes if we do a force push or re-trigger builds, it silently cancels all the stuff on CircleCI, and our code just doesn't clean everything up properly. So some timeout, an auto-termination timeout, I think will be helpful in this area. And are your tests predictably long enough, or short enough, that you know having a cluster destroyed in a couple of hours is fine? Yeah, yeah. Usually it's no more than an hour and a half. So... Kudos to whoever on your side put a sort of automatic timeout API thing in there, because I've not seen that for any of the public cloud stuff we're dealing with. Ah, okay. Good, yeah, I don't remember when that went in, but I know that more than one person had asked for it. It's epically useful; like I said, I wish I had it everywhere. Yeah, the one problem is it has no documentation, so I hope it works. Yeah, okay, so the absence of documentation, for me, is an opportunity to file a bug report in our documentation project. Okay. So make a note of that on 1546 as well, or on whatever PR you put in to set the auto-delete — like, "I'd love to do this, but I can't figure it out" — and I'll bounce that back and get it sorted. Yeah, sounds good, thanks. Cool.

So is there anyone else who's got stuff in progress, or that has recently landed in Network Service Mesh, that you want to go ahead and highlight while we're talking about this week's state of the project? Well, before we jump on, I've got a good question on that. So for the auto-delete, does that auto-delete timer start on the initial instantiation, or does it start when the system is actually available? I believe it's a timestamp that you can set as you desire; I don't think it's a duration from time t, it's just a specific time and date. There should be documentation, and if there isn't, that's a flaw which we will remedy. Oh, okay, okay, perfect. Yeah, we appreciate all you guys do. And, how to put it, it's better for the feature to be there and undocumented than not be there at all, but documented is even better. I believe the top of the NSM dev channel says documentation is the most important thing. This is true, this is true, but I've had more than one instance with various cloud providers where I've reached out to various contacts and said, how do we do this?
And the response was, oh yeah, that's not super well documented, and the documentation is literally an example line in a config file on GitHub somewhere. So you're not alone. All right, cool. Anything else that folks want to talk about in terms of state of the project, with stuff that's recently landed or that's in progress, or any of the rest of that? Cool. And I do apologize, I failed to pull forward the spec stuff this week. We should probably briefly visit some of the spec things again, because I know, for example, the hardware NIC conversation has sparked up again. We've got people who are expressing more interest on that side, which would be very, very helpful. Very cool. Great.

Let's see, one last thing we should definitely bring up as well: we have ONS coming up. Do we have enough people here who will still be around? ONS I believe is coming up the 23rd to the 25th. Yeah, around that time period. And I think at ONS we've got you there, Andre is there, and I think we've got Radislav and Ivana, you're also going to be there, correct? Yeah. So the question is, do you want to cancel the meeting the week of September 23rd through 25th because of ONS? Yeah, that's one question. We could still run the meeting; we'll just either be missing some people or be on spotty connections. Okay. Well, the good news is you brought it up sufficiently early that we can think about it and check in again next week and see what we think. Sound good? Yeah, that sounds good to me. Awesome. Anything else? Let's see, I can't see the agenda anymore. Was there anything else on it, or are we... Just to be sure, let me stop and re-share. We're at the bottom of the agenda. That's my connection; I was having some connection issues, so I dialed in. Okay. Cool. Well, with that, I see we're at the end of the agenda. Is there anything else that anyone would like to bring up? Okay, well, with that, thank you very much to everyone attending, and we will see you all again at the same time next week. Y'all have a good day. Thanks, cheers. Bye. Bye-bye. Cheers, bye.