All right, cool. So let's go ahead and get rolling. First up, as always, is agenda bashing. If we look at the agenda, we've got a lot of action item reviews — in fact, most of it is action item reviews — and then some time to review new items for the following week. Is there anything else folks want to add to the agenda that's not currently listed, or that you'd like to bubble up to a higher priority, since it's a fairly crowded agenda at the moment? Cool. All right.

So diving right in: we had an action item from last week to set up a project board, and I believe that has been completed. You can actually follow the link from the issue. There's a project board here, which I think we'll try to use more as we go, that lists a bunch of the items we have as issues that are either to do or in progress — things like publishing the images to Docker Hub, or checking out SR-IOV on packet.net so we can get going on the SR-IOV use case. As we go, I think we may start using this for the agenda rather than trying to describe everything by hand, but we'll have to see. Do folks have thoughts or opinions on this?

I like using the project boards; I have no issues with it. It seems like it duplicates work if you have to manually update the agenda every week, so I'm thumbs up on that. Yeah, I tend to be as well, but I was curious whether other people had thoughts. So maybe for next week we can start using the project board for the agenda. Does that make sense to folks? All right, awesome.

So drilling further down: Kyle, it looks like there was an issue you had with GOPATH and the makefiles, and that appears to be fixed. Yep, Frederick opened that and actually merged it earlier in the week, I think. Cool.

The next item was an action Frederick had taken. The CNCF folks have a lot of cluster resources from packet.net — real hardware — and they have been kind enough to allow us to apply for access to it, and I think we've actually been granted access at this point. A lot of what we're looking at there is that there are certain kinds of NSM use cases for which you need physical servers, particularly the ones that involve getting your network service from a physical NIC, or via SR-IOV from a physical NIC. I believe that's moving along nicely — I believe that's done. So if folks would like to start chipping in in that direction, reach out to Frederick; he can add you to the access list and we can start drilling into some of the setup, because there is a bit more to servers than just dropping code on them.

We also had a big move in terms of describing what NSM is. Frederick put up a doc talking about what NSM is — do you want to drill into that here? There we go, and we can actually follow the link to the doc, not just the issue. Basically it adds a bunch to the existing documentation to walk through a high-level overview and a prose description of what Network Service Mesh is. This is incredibly helpful; it turns out Frederick is really good at prose and I am really not, so I think this will be really, really good for us. Do folks have any opinions, thoughts, or comments?

Hey, the doc is really good — it was really useful. I'm still a little lost, though. I'm hoping to get started building a new NSM plugin. I looked at Sergey's doc, which is really good as well, and the examples.
But I'm a little lost — I can't see the forest for the trees. Okay, no, that's completely fair, and it points to something we should probably take an action item on, which is to document a little more how plugins work and how the plugins we have fit together. Is that roughly what you're confused about, or is it something different? Yeah, exactly — how would I do an end-to-end example? Sergey's simple data plane is really good; I have to figure that out. But if I wanted to build my own, what would I have to do, and what YAML files would I have to write if I had two endpoints I wanted to connect — say a simple REST API between two nodes? How would I do that, and use the simple data plane to do it? Something on how to stand up a pod that connects to a network service endpoint, so you could take it and go.

Can I inject just a quick comment? There is an excellent script in the integration test, and I would use it as a baseline for basically mimicking those steps on a local cluster. Everything is very generic there — at least I don't recall any big dependencies on Travis or on the CI. So if you follow it step by step, bringing those pieces up, at the end you should be able to ping between those two pods. If you hit any issues, please make sure you let me know and we can look into it together and debug it a little. I'm totally open to that.

I said in the chat, on IRC, that it would be useful to put all the new extensions in separate directories, with a README and all the related files in the same place. You guys are working on it day in and day out and know where all the files are; I look at it and go, which file is a core piece of the platform, and which piece is an extension? I can go look at each individual file and figure it out, but it's not that productive. And if you want other people to use it, it should be: here's the way to extend NSM, here's where you plug in your pieces — basically cut and paste.

Yeah. So would you be willing to do the following for us, John? Would you be willing to open an issue and just lay out the simple use case — or a couple of the simple use cases — you want documentation on, just so we can get a crisp statement of what it is you're looking to have documented? No problem, that's a good idea. John, my next step after the initial getting started guide — just how to get the cluster going and NSM started — was to try to do exactly that, maybe in the same document or a second one, so I'd be happy to work with you on that. Great, I'd like to get together and work on it. And this is actually why I was asking for a crisp point on it, John: when you say "I need more documentation," that's a completely credible statement to me independent of context at this stage of the project. So the question becomes, of all the basic documentation we could write, what would be most helpful?

Cool, awesome. Shall we go back to the agenda, then? Thank you so much for driving, by the way, Lucina — it's very helpful. You're welcome. So, next up on the agenda — we talked about adding documentation about what NSM is — is, Tom, your getting started guide and some requests for additional review on that. Do you want to talk to that a little, Tom? Yeah, I just got comments from two more reviewers.
I was in the process of responding to Fred's comments, and I have some responses at the bottom of the review to comments by yourself, Ed, and Sergey, I think — oh, and Pratik. Please place a comment in there if you agree with it. I'll keep this going; hopefully I can get it wrapped up.

I know I definitely agree, and I think Pratik would agree, that having documentation that allows people who are not as familiar with Kubernetes to get going with NSM is really important. The thing that was rolling around in my mind, and I suspect also in Pratik's, is that the thing you want front and center is the kubectl apply -f kind of thing that lets people who do know Kubernetes go from zero to running. Yeah — so, as I stated here, I'll put a comment at the top saying that if you already have a cluster up, or you're really familiar with clusters, go straight to step X, and that will have the basics: go get the repo, make, and then kubectl apply. Yep. And hopefully shortly we will have Docker images published, so if you just want to run it, you can just run the kubectl apply.

Yeah, that's the big thing I'm dealing with in the earlier part: figuring out what to do if the Docker images are not published, because getting the Docker images into the cluster is a little more complicated. I made sure I have that process in here just in case. Well, I would say — and by the way, I have a patch that publishes those; I haven't pushed it out because I've been traveling this week, but I plan to do that Monday — even with that, I think it's still important to document the whole process of how you build it and everything. So the work you're doing, Tom, is super awesome. Well, I figure that even when these images are published, more images will appear for other elements and other daemons, and it would be nice to know exactly how to get an image into the cluster without publishing it first, which isn't tremendously difficult.

I also got a comment saying, well, just start the cluster without a VM — it really doesn't make any difference, except that I think people are a little intimidated by nested virtualization: you need to start a daemon if you're already working in a VM. And I think container and Kubernetes people just don't like VMs, or VMs are unfamiliar to them — they say, look, we've got Kubernetes, we don't have to deal with that stuff anymore. But what I was thinking was: if we're going to do a real service — I think it was Ed who said this when I logged in — a service that includes a base data plane hooked to an underlying software data plane on the host, we're still going to need some of that old-fashioned VM stuff like vhost-user, at least for the bottom-level layer 2 service, or for presenting the fast networking interfaces. So that's why I thought it would be good to have this documented, but I'll also add that you don't need it just to run the code.

I think it's actually incredibly useful to have it documented, because it's all fun and games when you give people the kubectl apply so they can kick the tires — that's awesome — and then somebody actually needs to deploy it for real.
And that's never quite as simple, right? They wind up with a bunch of details that are not a big deal if you just want to see it working, but if you want to do things that involve getting fairly optimized performance — particularly when you get to the space where you're talking about physical NICs or SR-IOV — things get interesting. Yes, exactly. Either with physical NICs, or making sure that whatever is underneath — the host, or the VM containing whatever is running our cluster — gives us access to a fast data path at some point, whether virtual or physical. So that's my thinking, in a generic way, and for that we're going to need a little bit of virtualization, maybe. I don't know.

I think we're all on the same page about the documentation. The comment I added was only about thinking from two different perspectives: one is the end user's, and one is a developer's who is joining a new project. If an end user comes in, they don't want to build everything — they just want the images every project already publishes, so they can install them into their own cluster. If a new developer joins us, then they need all these commands for sure. So this documentation really helps; I was thinking in those two terms so that it will be useful for both an end user and a developer. But yes, the documentation we absolutely need, and we're all on the same page. Is it okay if it's in the same document? I think that was Pratik talking — or do you really think it has to be two documents? That's my only pushback on your comment, because I can edit the document to say "go to step 17," or whatever it is, within the same document. Yeah, okay. One thing I will throw out there — and this may just be an artifact of my own psychology — is that the wall-of-text effect is real, at least for me. A long multi-step set of directions looks like a lot more work than a short one, at least in my own psychology. Cool, awesome.

So, let's see, next up on the agenda we've got becoming a Kubernetes working group. This is still somewhat in progress. I've been having some conversations; there were some comments on the PR that was pushed, essentially asking what Network Service Mesh is, and I've talked to some of the people who asked. There's still a bit of bouncing around among various opinions as to whether we should be a Kubernetes working group, a SIG Network sub-project, a CNCF project, or a CNCF working group. I'm trying to get some of that resolved. It's taking a little time, and I'm not hugely upset about that, because a bunch of folks who I think are critical to the conversation are out on PTO for the next week or two. So "stalled" is a little too strong a word, but it's still in motion.

I don't remember the "check for deprecated Kubernetes API calls" comment that I apparently made. I do generally advocate for clear errors that tell you how to fix things, but I literally don't remember this — I apologize. Does anyone else remember that item? Okay. Is it something related to the Go client, which was moved from client-go version 7 to version 8? That might have been it — that very well may have been. It fundamentally comes down to — I think I've made this comment several times —
I'm generally of the opinion that you should fail as early as possible and as clearly as possible, with as good instructions as you can give on how to fix whatever it is that can't be resolved automatically. But I don't remember specifically — there was actually an issue that was opened. It looks like Frederick may know more, but he's not here right now. Okay, cool.

So next up on the list: Frederick was looking at SR-IOV on packet.net. I think we have several people who've been poking around at SR-IOV on and off packet.net. Do any of the people who've been poking in that area want to comment? It looks like Ian Wells went out and checked to make sure we could get SR-IOV NICs — the Mellanox NICs that are present in packet.net — and there were some outstanding questions about whether BIOS settings might have to be changed and what that would take. Does anyone happen to have more visibility or knowledge on this?

So I did speak to Ian this week. I believe he thought we were good to go. It's my understanding that the packet machines might have those Intel 510 or 710 cards as well, which he was pretty pleased with, and I believe he was able to confirm SR-IOV support for whatever Mellanox cards were there. His only concern was the number of VFs that the Mellanox cards expose versus the Intel cards — I believe the Mellanox numbers are lower, if I recall what he told me. Yeah, I recall him saying they were exposing something like 8.

Now, it's important to realize — I've chatted with the Packet folks a fair bit — that apparently they generally standardize on Mellanox NICs. But right now, for some of their smaller, older machines they're running MLX3; for the newer, larger machines they're running MLX4; and they would like to get to MLX5, but apparently that's really hot and fresh right now. And I think — Taylor, keep me honest here — some folks were finding that the DPDK driver support for MLX4 and MLX5 is enormously better, and thus your ability to actually do meaningful work with them is much better. Does that match your understanding of the world, Taylor? Yeah — it's very, very difficult with the threes, and it's already hard enough with the fours, the ConnectX-4 cards. And as far as I know there are only a few instance types right now at packet.net that have them — I think a couple of the x-larges and one medium. I don't think anything else supports version four. Okay, cool. So if you could add a comment about those flavors to the issue, that would probably be really helpful.

Does anybody else know a lot more about SR-IOV on the Mellanox cards who might be able to suggest useful information? Yeah, well, I can send out info. We've played with them for a long time, as you know, though we're mostly using MLX5 — we completely skipped the threes and fours — so I will send info on the fives. Okay. I do have a question, which is: as we get a little further along, would you have any interest, since you've got MLX5s in your lab, in helping out and testing how these things behave against MLX5? Oh, yeah, I'd be pleased to do so. Oh, that's marvelous. Thank you so much, I appreciate it.
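For reference, the VF counts being compared here are exactly what the kernel exposes through the standard Linux SR-IOV sysfs files, so they are easy to check on any given packet.net flavor. Below is a minimal Go sketch under that assumption; the interface name enp1s0f0 is purely illustrative, not anything from the project.

```go
// Minimal sketch: query and enable SR-IOV VFs via sysfs on a Linux host.
// The PF name below is an example only; it varies per machine and NIC.
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	iface := "enp1s0f0" // example physical function name
	dev := filepath.Join("/sys/class/net", iface, "device")

	// sriov_totalvfs is the upper bound the NIC/firmware allows -- the number
	// that differs between the Mellanox and Intel cards discussed above.
	raw, err := ioutil.ReadFile(filepath.Join(dev, "sriov_totalvfs"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "no SR-IOV support on", iface, ":", err)
		os.Exit(1)
	}
	total, err := strconv.Atoi(strings.TrimSpace(string(raw)))
	if err != nil {
		fmt.Fprintln(os.Stderr, "unexpected sriov_totalvfs contents:", err)
		os.Exit(1)
	}
	fmt.Printf("%s supports up to %d VFs\n", iface, total)

	// Enabling VFs is a single (root-only) write to sriov_numvfs.
	if err := ioutil.WriteFile(filepath.Join(dev, "sriov_numvfs"), []byte("4"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "could not enable VFs:", err)
	}
}
```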
Yeah, because the next thing, at least for me, that I want to get working is inserting a hardware NIC — or an SR-IOV channel — that provides a network service; that's the next example in my head in terms of use cases. Please note, if you have other use cases in your head that you want to work on, please do that too — it's open source, we all work in parallel. Yeah. I'd like to work on vhost-user as a first use case, which is a sort of virtual analog to virtual functions, if you will. Okay, so you're thinking about vhost-user as an alternate mechanism to a kernel interface — is that right? Exactly, yes. Awesome. We can definitely talk more about that, and if you want to open an issue to track it, that way hopefully it will be picked up on the project board as we go forward. Would you be willing to do that? Yes, absolutely. Cool, awesome.

So then the L2 forwarding with VPP example — I think this is probably what you were doing, Sergey. Do you want to update us on what's going on? I think some things have merged. Right, yeah. I started looking at VPP and at how to interact between the NSM and VPP. I hit a couple of roadblocks; they were resolved, but while waiting on the answers to the VPP-related questions I moved over a bit and implemented the simple data plane, just to be able to run end to end in the CI. I have a couple of things to finish with the simple data plane in terms of cleanup, and then I can start looking at VPP again — because from the documentation it's a bit hard to see how to interact with VPP from the NSM code. Okay, no, that's totally fair. Cool.

We do have this other issue out where Ed from packet cable was kind enough to send us a typo fix, but we still need the Signed-off-by. So if you're out there, Ed, we would love your fix — please add the sign-off so we can take it. Cool.

So, Pratik, are you here? Do you want to talk about the sidecar containers work you're doing? Yeah, hi, I'm here. So the code is mostly done. The only challenge I'm facing right now is hooking it up to the CI with minikube. There is a step in our process where we need to get a certificate approved and issued by the Kubernetes API server. Minikube has these two modes: you can run with localkube, where everything is one binary, which is the only mode that works on Travis, and the other mode, which is powered by kubeadm, doesn't work on Travis. With the localkube mode, the certificate does not get issued — that's where I'm blocked right now. I was talking to Kyle on IRC: once we move to Kubernetes on Packet, maybe that will be the right approach, and there we can get the certificate approved and issued by the API server, which will unblock us. I added all the comments in my PR, but it's still failing in the CI and I need to address those issues. I tried a lot of things. I tried using Ubuntu 16.04 in Travis, but that's not officially supported, so we can't move there yet. If we moved to Ubuntu 16.04, we could run minikube in kubeadm mode, which solves the problem. But for now we'll have to use the Ubuntu that Travis supports and run minikube with localkube. That's where we are. Okay, cool, I appreciate all the effort.
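To make the blocking step concrete: the sidecar asks the API server to sign a certificate through the certificates.k8s.io CSR flow, and under the localkube bootstrapper that certificate never gets issued. The sketch below shows roughly what such a flow looks like with client-go, assuming the v1beta1 certificates API and in-cluster configuration; the CSR name and usages are illustrative, not the project's actual code.

```go
// Rough sketch of the CSR flow the sidecar depends on: create a
// CertificateSigningRequest, then wait for the signer to populate
// status.Certificate. Under localkube this last step never happens.
package sidecar

import (
	"fmt"
	"time"

	certsv1beta1 "k8s.io/api/certificates/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func requestCertificate(csrPEM []byte) ([]byte, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}

	csr := &certsv1beta1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "nsm-sidecar-example"}, // illustrative name
		Spec: certsv1beta1.CertificateSigningRequestSpec{
			Request: csrPEM,
			Usages:  []certsv1beta1.KeyUsage{certsv1beta1.UsageClientAuth},
		},
	}
	created, err := cs.CertificatesV1beta1().CertificateSigningRequests().Create(csr)
	if err != nil {
		return nil, err
	}

	// Poll until the CSR is approved and the API server has signed it.
	for i := 0; i < 30; i++ {
		got, err := cs.CertificatesV1beta1().CertificateSigningRequests().Get(created.Name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if len(got.Status.Certificate) > 0 {
			return got.Status.Certificate, nil
		}
		time.Sleep(2 * time.Second)
	}
	return nil, fmt.Errorf("certificate for %s was never issued", created.Name)
}
```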
It sounds like one of those things where you hit a niggling detail and it takes a little working out. Yeah — I mean, I tested it out on my Kubernetes cluster and it worked fine. Then I moved everything to Travis with minikube: I thought it would just work, but it didn't. Then I installed minikube on my Mac — it worked, but it still wasn't working on Travis. Then I did it on Linux and it didn't work. So I narrowed it down to this setting: running with localkube versus without localkube. Yeah. So that's it from my side.

Yeah, document the settings both ways, because I'll need to make sure I'm consistent with this in what I'm documenting in the getting started guide too. I thought it was just a question of which driver you start minikube with, but maybe I'm oversimplifying it. It's the driver, but it's also this mode for who starts all the Kubernetes components — whether all the components run as part of one single binary, localkube, or kubeadm bootstraps the whole cluster. That's the difference here. The driver part is a step before that — how the VMs or the infrastructure are run — whereas this is about how Kubernetes is bootstrapped on top. I have the command line here where I detailed everything for minikube; let me know where you want me to add all those details and I'll add them there too.

I just wanted to add a couple of things. Number one, localkube is actually going away in a future release of minikube, and this makes many people sad if you go look at all the issues opened about it, because of the effect that will have in Travis. But it's also worth noting that localkube works outside of Travis; in fact, minikube stopped working for me in my setup and it only works now if I use localkube — there are a bunch of bug reports on this as well. So it's all kind of chaotic. Localkube works for me — everything comes up. The only issue is a bug with localkube: it doesn't issue a certificate, for which there is already an issue filed against minikube — in localkube mode it doesn't issue you a certificate. That's the only challenge; if I don't run with localkube, the issue is resolved and I get the certificate issued, so there's no problem. But yeah, I don't have any preference between localkube and the other mode — I just need the certificate issued, and that's not happening in localkube mode. Okay, that's the one. Okay, cool.

So I think next up, amazingly, we've got our perennial agenda item about a mascot. I've kind of been using the Ariandre spider that I used in the narrative deck — we can bring that up and see how people feel about it in general. We would need to eventually get our own version of it made; this one was purchased from a stock graphics company. But do folks in general like the friendly spider as a mascot? That works for me. Yeah — the only additional suggestion that came up was that if we go get our own version drawn, perhaps have the spider knitting. A spider with knitting needles, yes. Exactly — knitting everything together. Cool. So it sounds like folks are feeling fairly good about that at this point. All right.

So then next up: Kyle, publishing images to Docker Hub. It sounds like you have a patch almost ready to go. Yeah, exactly — I worked on that earlier in the week, but I've been traveling since Wednesday.
So I'm hoping — well, I should be able to get it out Monday. I just need to rebase it after everything that went in this week and make sure everything is still good, and then I'll push it out Monday. Awesome. Do you have a plan for which images get pushed — do we push for every build, or only specific images? No — my plan is to only push on merges to master. We're not going to push Docker images when people push PRs. Okay, yeah — just when we merge to master. Exactly: once something gets merged to master and the build is successful, then we'll publish a Docker image at that point. Sounds good. And this hopefully will also help as we build up more system-level tests on Packet, and hopefully with the Cross-Cloud CI stuff — having the binary artifacts for downstream consumption of all that will be really good.

So, Taylor, do you want to talk about the support things that you folks need for the CNCF CNF project? Either way, there are too many C's, too many N's, and too many F's in that name. There are. I think we want to hold until we get some of the testing that we're doing right now on the CNF on Packet, so when we figure out what we can do with this first network function, we'll be able to describe those parts. We are working on the use case write-up for that and we'll add that in; beyond that, we'll want the rest of the details from the current testing, which will probably run through this coming week — so potentially at the end of that week we'll have some results we can share. That would be awesome. Cool — and you're still figuring things out; so are we all.

So, awesome. I just want to make sure, Taylor — are you trying to implement this in NSM, or is this separate? Right now we're doing some comparisons in a much simpler way: we're using Docker containers — Docker running the containers — and KVM, either directly or through libvirt talking to KVM, compared against VMs, and we're doing all of this on Packet. So we'll try to share any of the information on the network cards and such, especially on that other ticket for SR-IOV. Yeah, I'd be really interested, especially in how you're doing the traffic steering part of it. Which part? How do you do traffic steering into the CNFs or the VNFs? Yeah — so we'll definitely have to make some adjustments when we jump from what we're calling box-by-box, where we're doing just the minimum possible down to the container and the minimum possible down to the VM, to what we're calling orchestrated — Kubernetes — and we're looking at comparing that to OpenStack, so that's the goal that's going to make it more complicated. At the moment we're on a single system — a single Packet node for each of the tests. We'll be doing multi-node, multiple physical machines, and we're keeping that in mind for how we send the traffic between the container running the network function and the test containers. Yeah, I can share a link if you want — go ahead. Yeah, I'd appreciate it. It would be good, because we're working on the same kind of thing.
Yeah, absolutely. I think a lot of the stuff we're doing is a bit preliminary where we are right now, and it's going to lead directly into what you all are doing, so there's going to be more and more overlap — which is why we have this ticket. I actually published a sample VNF/CNF on GitHub; it just takes a packet in, maps it, and sends it out, and it might be a useful test vehicle if you're interested. Sure, yeah. We have this specific ticket talking about the cdcp use case — there is a set of network functions that we need to implement, and it was requested as: here's the goal, here's the use case. If you have additional example use cases, I think a goal we want is to cover more than just this one, so it would be great to say, here are other use cases as well. But yeah, please share the link to that. Great.

All right then. I think the next item was working out documentation infrastructure. This was Frederick saying, okay, we're starting to document things in Markdown, which is awesome — migrating all of that into docs, migrating to Hugo, adding godoc support, that kind of stuff. I don't know how much progress has been made on this just yet; I generally like the direction. Does anyone have anything to add to this particular item at this point, or any interest in getting more involved with it? Well, some stuff is working — if you put your doc under docs it renders; you just have to go back to the README and make sure the link is correct, but everything renders top-down and it builds the Markdown files just fine. I think he had far more in mind with that, but that part seems to work. Yep, there's definitely progress going on. Okay, cool.

So I think the next item on the agenda — and I suspect this is something you know something about, Sergey — is this item about no longer relying on $HOSTNAME to identify the pod. Yeah, that's correct. At one point, from the NSC perspective, I needed to pass the hostname to the NSM to be able to register the relation between the channels and the hostname, so that when the NSC terminates I can identify which channels were advertised by that specific hostname — the one belonging to the exited or old NSC — and clean those channels from the references. Well, Frederick mentioned that it's not a very reliable approach, and I think he was going to investigate a more reliable way. Frankly, I don't see why it's not reliable, because name and namespace guarantee uniqueness in Kubernetes, so from my point of view it's good enough — but I guess he probably came up with some corner cases where it's not sufficient.

Yeah, I think so — hi, this is Pratik. So the pod's hostname can be set from the spec side, and I think the most reliable way — also the one promoted by the Kubernetes folks — is to use the Downward API. You can add this information in the pod spec itself, so when Kubernetes starts a pod it either writes this information into a file you can read, or it sets an environment variable. That's the recommended way to do it. I'm a little nervous about ongoing creep in how much modification has to be made to pods in order to use Network Service Mesh. I totally agree that you can bring this in via the Downward API, but we would like Network Service Mesh to be as easy to add to pods as we can, and having to add more and more things to the pod spec past a certain point starts making it difficult. Ed, what if we do it conditionally —
I mean, if the Downward API is there, we'll use it; if not, we'll just use the hostname that the pod provides. Yeah, I'm a huge fan of doing the cooler thing if it's available and not requiring it — if you can get more information, I think that's awesome. The other thing I want to mention, just to keep in mind, is that we have a really strong tendency — which I think is completely healthy at this stage — to think about Network Service Mesh entirely within the context of a single cluster. But as I look forward to the places I expect this to be used, we will have instances of people wanting to connect to network service endpoints that are outside the cluster. For example, I know that in the NFV cases there are physical network functions that are always going to be with us, and we've talked a little bit about having external NSMs that would make it look just the same to everyone inside the cluster — but the entity living on the other end of that connection is not necessarily running in the cluster. I don't think that actually has an impact on this particular case, because when you're talking about the NSC to the node-local NSM API, that intrinsically means you are running in the same cluster — but it's worth noting, so that people keep it in mind and we don't accidentally preclude some of the really good use cases with external network services.

So it looks like I hopped on just in time, so I can explain why hostname is not a good thing to always rely on. You can hear me okay, right? Because I'm on an airplane waiting to get out. Yeah, we can hear you just fine. Okay, so here's the problem: the hostname is a settable variable — we lost you now. I think we just lost you. Yeah, I got a phone call and then canceled it. So the hostname is a settable variable in the pod spec: you can say hostname equals web, and then all the pods that spin up will have the hostname web. So when we do the Kubernetes client lookup by hostname, we'd be asking for the "web" hostname, and there will be no such name, because your pod is not named web — it's named whatever its internal name is, which is no longer what would have been set as the hostname had that variable not been set. So the problem is: typically, if they don't set it, then it's reliable — you can get the hostname and it matches the name — but if they do set it, then we can no longer rely on that technique.

And so, just so I understand, the thing we're thinking through here is that in order to be able to do cleanup, Sergey needs some kind of identifier he can go back and use to figure out what went on. Is that more or less it, Sergey, or have I botched it? Have we lost Sergey too? I think we lost Sergey now. Sorry about that, guys. No, basically, yeah — when the NSC gets deleted, I get the name and the namespace of the pod being deleted, and I have to use that information to match what we have registered on the NSM side for that NSC. That's why I need some sort of reference.

Something we may be able to do to work this out: we'll have an NSM agent that is running on each node.
And so one option we have is perhaps to leverage additional privileged information that the agent has access to — for example, looking at Docker, looking at the name of the pod. If you do docker ps, you can potentially correlate that with the namespace: because we have the namespace ID, we can potentially correlate it to the exact pod we need to gain access to, or whose name we need. There are some other things we could look at. For example, because we have the namespace ID, we could run fsnotify or something similar on the /var/run/netns directory and watch for the disappearance of the files for that namespace. That's another kind of thing we could do. The problem with that, I realized — as Sergey pointed out to me — is that Sergey is right now striving to minimize the amount of state we keep in the NSM, because that way we don't have to store it away someplace in case the NSM gets restarted. It's one thing to get the information about the namespace and use it; it's another thing to keep it as state so you can clean up later. Is that more or less capturing your position, Sergey? Yeah, exactly. I have this vague gut feeling that we are going to wind up having to keep state, but there are so many advantages if we can avoid it that I'm actually quite encouraging of trying to avoid it at this stage.

And I mean, I'm a bit surprised — for Kubernetes, having a name and a namespace is sufficient proof of uniqueness, so why can't we follow the same model? Well, it's not an issue of a lack of uniqueness; it's an issue of what value the hostname variable holds. The hostname is not guaranteed to be the name, so we cannot rely on it to capture the name. Okay, okay — in this case I will trace where I use the hostname, if I still use it, and make sure I use the pod name and the namespace instead. Yeah, and the reason I created an issue as well is that it turns out that, even before you put in that patch, there are other instances where the name is being taken from the hostname — that's why I created the issue. So it's not only about the patch you pushed or the one we're looking at; there are also other instances we need to fix, because once we get a pod with a hostname that's been overridden, we're going to see failures.

Isn't this like a generic Kubernetes problem? I mean, if they're pushing out broken data, I don't understand why everyone using the API wouldn't have this problem. It's not broken — that's the trick. So wait, hold on — you're telling me that if someone names something, puts it in the database, then changes it later, they're still pushing out the old name, or they're giving us the new name? No, no — what he's telling you, Kyle, is that hostname is one of two things: it is either whatever was configured in the pod spec for hostname, which may have nothing to do with the name or namespace of the pod, or — if you don't configure it — it falls back to something that reflects the pod name and namespace. But because that is only a fallback from the user actually configuring it, you can't rely on the hostname being unique or having any relationship with the pod name and namespace ID.
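To illustrate the conditional approach suggested above — use the Downward API when it is present, fall back to the hostname otherwise — here is a small Go sketch. It assumes the pod spec injects metadata.name and metadata.namespace as environment variables via Downward API fieldRef entries; the variable names POD_NAME and POD_NAMESPACE are a common convention, not something NSM currently requires.

```go
// Sketch of conditional pod identification: prefer Downward API values,
// fall back to the (possibly overridden, hence unreliable) hostname.
package main

import (
	"fmt"
	"os"
)

// podIdentity returns the pod name and namespace plus a flag saying whether
// the values came from the Downward API and can be trusted.
func podIdentity() (name, namespace string, reliable bool) {
	name = os.Getenv("POD_NAME")           // fieldRef: metadata.name
	namespace = os.Getenv("POD_NAMESPACE") // fieldRef: metadata.namespace
	if name != "" && namespace != "" {
		return name, namespace, true
	}
	// Fallback: spec.hostname may have been set to something arbitrary,
	// so this only works when the user did not override it.
	host, err := os.Hostname()
	if err != nil {
		return "", "", false
	}
	return host, "", false
}

func main() {
	name, ns, reliable := podIdentity()
	fmt.Printf("pod=%s namespace=%s reliable=%v\n", name, ns, reliable)
}
```

The cost of the reliable path is the extra env entries in every NSC pod spec, which is exactly the pod-spec creep concern raised earlier in the discussion.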
Okay, that makes sense. So I guess it is a problem if we are using the hostname somewhere and trying to map it to some value in Kubernetes — am I understanding the problem correctly? I think so. Okay, that makes sense; then I agree it's wrong if we're doing it that way. Yep. Okay, cool. Thank you so much, Lucina, for capturing such good notes — that's hugely helpful.

All right, so let's see — for upcoming items for next week, we've got using the project board for the agenda, so please do make sure to get issues in, because they automatically show up there. And because John has this knack for catching the action items, we now have an issue created requesting a document on how to stand up a pod and connect it to a network service endpoint, so we can get a good idea of the documentation John most urgently needs. Does that sound about right to folks? Do we have other kinds of action planning for next week we want to do? Although, in fact, I think we have a lot of things already in motion that we're going to continue. Okay. Anything else before we conclude for today? Awesome. Thank you, talk to you later. Thanks. Bye-bye.