Okay, let's go ahead and get started with agenda bashing. Is there anything people would like to talk about that is not listed on the agenda? What about managing the issues? There seem to be a lot of stale issues, or some that just stay there forever. I understand that in the beginning this was where that was happening, on this call, but maybe we need some other forum or format to do that. That sounds good. Do you wanna go ahead and add it further down on the agenda? Then we can talk about it when we get there. Yes. And I presume we would also be talking about stale PRs at the same time; we have a few of those hanging around as well, I think. Yeah, the same, yeah. Cool. Okay. And let's see, who was it that was speaking? I think that was Nikolai? Yeah, that's Nikolai, yeah. I apologize if I got your name incorrect. Actually, I see your name further up, so. There we go. Okay, is there anything else that we wanna add to the agenda? Okay, in that case, let's go ahead and get started.

So we have a series of events coming up. The first event is part of the KubeCon co-located events: the FD.io Mini Summit on December 10th. There is gonna be a Network Service Mesh session with two talks that are being set up. So if you have a ticket to KubeCon and you would like to attend this, make sure that you add it onto your registration, so you can get in. Then we have KubeCon Seattle itself, which runs from December 10th through the 13th. We have two sessions that are gonna talk about Network Service Mesh, where Ed and I will present. And we're asking for anyone to write blog posts or do podcasts or talk about Network Service Mesh in any public medium; that would be great. We're also gonna have Network Service Mesh presented in a number of booths. I will work on getting a list of those booths at the very minimum, I believe. Is there gonna be a Cisco booth, Ed? Yeah, so we've got a demo in the Cisco Theater. We'll see where that goes; at the level of sponsorship we have there, we should be able to do it. Very cool. Yep. All right, so we also have the Cisco Theater. And if you know of any booths or demos where it's gonna be shown, please also add it to the events list; I've started one there. So please add your company onto that if you're presenting it.

Yeah, I also know there was a little bit of talk of stickers. I don't know if any of that came together. The stickers are on their way. Oh, there'll be stickers? That's awesome. Yes, yes, yes, yes. Awesome. Awesome. I need to share with you again the edited logo, because the folks from the graphics team needed to do some additional work. I find that extraordinarily plausible, given that I just sort of packed it together with sticks. Yeah. It was entirely a duct-tape logo. It worked for me; I mean, for me it looked fine, but they were like, no. I love art department people; they're very pragmatic.

Regarding KubeCon, I just want to remind everyone that on the site we still have a couple of TBDs, like the happy hour and things like that. So maybe it's too early, but I don't know. No, I think that actually is something we're going to need to sort out really quickly. One of the things that I was planning on looking at this week is some of that stuff, because I've been typing as fast as I can on code, and we're starting to get everything landing in place. I don't know when it will happen.
I think we definitely want to update the website, not just for the new page, but for example the getting-started page, updated with all of our new work and that kind of stuff. Great. Are there any other details about KubeCon Seattle that we want to add that I may be forgetting? There is also an opportunity... we don't have anything set up just yet. Let's see, FOSDEM has a date, and that date has already passed for submissions. Do you know if anyone set up a submission to FOSDEM? I don't know of anyone. Anyone who submitted something to FOSDEM? I have submitted some kind of related demo talk. It is, if you remember, the Istio thing that I showed you. So if they accept it, I might try to do a joint Istio-plus-NSM demo talk or something. Oh, that would be super, super cool. But that would be a bit, I mean, that's all. That would be super, super cool. All right, so announcements. Does anyone have any announcements? Okay, in that case, let's head into the main agenda.

So the first thing on the list is the Skydive integration, which I believe is a demo. So how do we want to proceed? Do you want to do some demoing? I could stop sharing if Mathieu wants to show us that. I can show something; I have something up and running. Running code is good. I'm not used to using Zoom, so I don't know exactly how to share. Do you see the... I see a black screen. I see a black screen currently. Are you using Fedora? No, I'm using Debian with i3. And it misbehaves. Just let me try to share differently. Okay. Okay, I have a black screen. I'm not sure whether sharing the whole desktop works when sharing a particular app doesn't. Yeah, I had to change mine; I think it was Wayland, and I ended up moving it to use the X11 driver instead, because all my screen shares were black screens as well. Yeah, Wayland doesn't work, yeah. Okay, I can't even stop sharing. Let me see if I can take the share back... no, actually, I can't take the share back. There should be a little bar at the top of your screen where you can click Stop Share. I don't have access to this screen anymore. Alt-S. Alt-S? No, that doesn't work either. Okay, Pause Share. As a last-ditch effort, you could probably switch to a terminal, kill the PID, and then rejoin. Okay, you're no longer sharing. Do you want to try sharing again? So let me try to share; maybe Firefox will be better. Okay. So is this a website, or is it a... Yes. Oh, I see something. Is this it? Because I have a black screen on my side. Okay. Okay. So now you should be able to see it, even if I don't. Yeah, do you have a multiple-monitor setup, perhaps? I don't know. I saw something and then it went away, and now I've got black with two green bars. Yeah, black. I've got a private-message notification coming in from an IRC panel, so something is now available. So Nicolas is trying to help me on IRC, but I'm having my own IRC challenges. Cool. Okay.

So basically, let me explain what the demo is doing. One last question: is there a publicly available IP address that one of us can point at? No, it only works on my laptop, so it doesn't work on a public IP address. Yeah. But it's quite easy to make it work on your side, because I have uploaded the probe. And basically what it does... I don't know if you're familiar with how Skydive displays namespaces and things like this. Are you familiar with the Skydive UI? A little bit. A little bit? Okay. So let me try to... I will, I can't... Okay.
So basically, in Skydive you will have two namespaces, and what I'm doing is developing a probe, an NSM probe, which connects to NSMD. And once it receives a cross-connect that involves two namespaces under the monitoring of Skydive, it creates a link between those namespaces. That's really simple, but that's how it works for now. It only works for local connections, not for remote ones; I'd like to make it work for remote ones, of course. And basically, for now, it only creates a connection between two namespaces displayed by Skydive. Does that make sense? Okay. For Skydive, a namespace is a pod in this case? Yes. Okay. Oh, cool. And the good news is, I think once the remote cross-connect stuff lands, which hopefully will be today or possibly tomorrow, then you could start doing that as well. So this is very cool. If you could put somewhere in the notes a link to the code you're working with and any instructions for people to try it out, that would be super cool. And if you could stop sharing so other people can share. Okay. Cool. Sorry about that. I will try to have a better demo for next time. I'll leave you with the people helping you, so you should be able to get it sorted out. Cool.

So one other thing as well: we also have a Network Service Mesh website, so we could do one of a few things. Number one, if it is safe for people to visit a website, we could see about spinning up a cluster somewhere that has the connections on there, so people can browse through the information themselves, just like a demo. A second option is we could stick a video up there. And a third option is we could stick a couple of pictures up on the website. I think for now I will just share some captures on the related issue. Okay. So I will be able to share the way I'm displaying things, and we can discuss based on those pictures. Okay, that sounds good. So is there anything else that we want to talk about on the Skydive integration, or should we move to the next section? I think people from Skydive, Skydive developers, are on the call too. And I think to have better display and better rendering of the NSM components, we will need some kind of help from the Skydive team. What I would like to have is the ability to highlight the NSM components: endpoints, clients, and cross-connect links. I think that should be the main goal. Do we share this point of view? That sounds good to me. Okay. And this needs some kind of hack on the JavaScript managed by Skydive, so hopefully the Skydive team will help me on that side. I know we have some folks on the call; does that sound reasonable to you guys? So definitely, Mathieu, I'm going to help you on this, if you're able to do that. Just ping me tomorrow and we'll work on it. Great, thank you. Great. So for an action item then, I'll put Skydive, to discuss the features you want to highlight.

Okay, so the next item we have on the agenda is match selection for network services. So Nikolai, you have the floor. Yeah, I can share; there's not much for a demo, but I can just share something. Yep, I'll stop sharing so you should be able to share. Yeah, yeah, I will. I guess you see it. So what essentially I'm showing here is the YAML file of the API extensions that we are trying to do. So we're working closely on it, and this is what we came up with. Actually, there was a discussion with Mathieu also. And this is already merged.
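For reference, here is a minimal sketch of the kind of NetworkService definition being described — a client's labels are matched against a source selector, and an NSE's declared labels against a destination selector. The field names, service name, and label values here are illustrative approximations, not the exact merged schema:

```yaml
apiVersion: networkservicemesh.io/v1
kind: NetworkService
metadata:
  name: secure-intranet-connectivity   # hypothetical service name
spec:
  payload: IP
  matches:
    # The requesting client's labels are tested against sourceSelector...
    - sourceSelector:
        app: firewall
      routes:
        # ...and the NSE's declared labels are tested against
        # destinationSelector to pick which endpoint to wire to.
        - destinationSelector:
            app: vpn-gateway
```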
It's in master, and now I'm working on the implementation of the selector. But this, for now, is strongly dependent on 522 — as so many things are. Yeah, so I hope that tomorrow this will already be in place. I started looking at the code and it looks great. So what is the idea here? We define two levels of matching: we have a source selector and a destination selector. The idea is that when the application, actually the client, requests a new connection, it will have its labels matched against the source selector, and the NSE will have its declared labels matched against the destination selector. So this is a very powerful concept. And if I scroll a little bit up, this will essentially allow us to do things like this. So this is kind of service chaining, if you wish to call it that. Yes, it sounds a little bit blurry still, but that's more or less what we have today. I guess that once we get further, we'll be able to show a more meaningful demo. Matt, do you think that this kind of describes the status today? Yeah, I think this is great. Do you wanna talk about some of the other, sort of further-out things we've been mulling over? Like the create stuff? I'm sorry, like what, the create stuff? Yeah, okay, so we also came up with an idea. I don't know what the number was, was it the next one? Yeah, I think it's probably the next one; there should be some links in the conversation because I referenced them there. I think it's 532 and 533. But we talked about create. Yeah, there was this create idea, yeah. So there was also this idea of a create action. So once the new connection establishment is matched against the source selector, then you can enable a route action, which will essentially link it to an NSE that provides the requested service. And we also played a little bit with the idea of what it would be like if we could spawn the service on demand. So that's what create would do for us here. In the description of the network service, we can just declare the intention that, if someone requests a service, it will be automatically spawned, and depending on the configuration, it could be spawned within the same cluster, on the same node — there are different options here. This is a very, very powerful concept, and also probably dangerous, as with everything that is powerful. But yeah, I think that if it's used properly and configured properly, this will allow you to have very dynamic configuration of the needed services. Essentially, you will just declare that there will be services, but they won't be spawned until they are needed, which is really interesting, I think. Okay. So with that, I think I will stop sharing here and give the floor to the next guy. Sorry, I have a question about the first pull request you showed. Are we already able to, or do we think we'll be able to, manage an update on such a network policy? I mean, if we remove one of the components of the destination, how do you manage rewiring, things like this? No, actually it's just the API that is implemented; that's what is merged. So it's not really... You actually have a really interesting question, Mathieu, which is: at what point do we rewire on policy changes? Because I can think of at least two answers to this question. The first answer is you process the selection and the wiring at the time the connection is requested by the client, and it stays the way it was, right?
So you don't update it when the policy is updated. And there are definitely gonna be circumstances where that's gonna have to be true, because some network service endpoint in the chain has some kind of state related to that connection that would be painful to recreate. And then the second one would be to say — this is definitely something we could do, the question is how to toggle the flag — if the policy is updated, you update the wiring for the connections. Because the beautiful part about the cross-connects is I can leave the same kernel interface in your pod, or the same memif interface in your pod, and I can change the destination connection in that cross-connect in the data plane and simply cross-connect you to something new. So this should allow us to do things like auto-healing: if a network service endpoint identifiably dies, we could connect you to a new one. And it should also allow us to do, if we so choose, automatic rewiring on policy change. It's just a question of under what circumstances that's a good idea. But super fun. Yeah. So at first, the first option will be implemented; I think it's the simplest way. Yeah, of course. Yes, you start with the simplest thing and then build on top of that. Cool. So I'm hoping that 522 lands today, and then it sounds like you're close on its heels there with actually getting the basic selection stuff working. Yeah, yeah. Cool. All right. So are there any questions before we move on to the next topic?

So we also have some changes with the VPP agent NSC (C as in Charlie) and NSE (E as in Echo), and direct memif connections. So Ilya, you have the floor. Hi, hi everyone. Yes, it looks like yesterday I finished my work with the VPP agent NSC and NS endpoint. As far as I know, it's already part of the integration tests, and the create-or-commit concept changed a little bit. Now we have one side that requests the connection — it's always something like a slave — and the other side is always kind of a master, but they don't know that they're slave or master; they just have a destination or a source. So those are some changes. And I fixed the direct memif connection to be consistent with that logic, and also created pull requests with some unit tests for the converters. And I think that's all about memif.

One other thing I will mention is that there's now a new make rule called make k8s-check. It's super, super helpful, because what it basically does is it will go through — right now, when we deploy with the K8s deploy or when we're running in CI, it will deploy one each (because we're on a single node) of the ICMP responder and the VPP-agent ICMP responder. And then it will deploy two network service clients, and those network service clients will round-robin. So you're guaranteed that one of them will connect to one type and one of them will connect to the other, because both of those are providing the same network service, just implemented differently in terms of memif or kernel interface. And so make k8s-check will go through and make sure that you get the right ping behavior from the clients to somebody providing the service. So it makes it super, super easy to check and see if you've broken things. Well, and yeah, we need to verify that all the labels are set up properly and so on, because the CI is now running two nodes, which means it's possible that your NSC and NSE can end up on two different systems. Right now I haven't seen any breaks related to it, but we wanna make sure that we're testing what we think we're testing.
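Going back to the cross-connect rewiring idea Ed described above, here is a minimal Go sketch of the concept. All the types here are hypothetical stand-ins, not the actual NSM API:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real cross-connect types.
type Connection struct{ Endpoint string }
type CrossConnect struct {
	Source      Connection // the client's kernel/memif interface: untouched on rewire
	Destination Connection // the NSE side: swappable in the data plane
}

// Rewire swaps only the destination, leaving the client's interface
// exactly as it was - this is what makes auto-healing and
// policy-driven rewiring possible.
func (xc *CrossConnect) Rewire(newEndpoint string) {
	xc.Destination = Connection{Endpoint: newEndpoint}
}

func main() {
	xc := CrossConnect{
		Source:      Connection{Endpoint: "client-pod"},
		Destination: Connection{Endpoint: "nse-1"},
	}
	xc.Rewire("nse-2") // e.g. nse-1 died, heal the connection to nse-2
	fmt.Printf("%+v\n", xc)
}
```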
And in some of the remote work, I think we're playing with pod affinity and anti-affinity for some of this. So if it does become a problem before 522 lands, we can do that. And once 522 lands, then we should have multi-node testing: the pod affinity will put all the network service clients on the same node, and the pod anti-affinity will put one each of the different flavors of NSEs that we have onto the two nodes that we have. All right, that actually segues us well into 522. So Andre, can you tell us about 522? Yeah, yeah, we almost have NSM-to-NSM remote connectivity. We have it in a branch; it works locally on Ed's and on my environment, and we just need to figure out what's happening in continuous integration. So in general, we have it almost landed. So if an NSM gets a request from the client and the endpoint is not a local one, but is found using the Kubernetes registry, it makes a direct connection to the remote NSM and does a cross-connect on both sides, with the VXLAN protocol as the remote mechanism at the moment. So after 522 lands, it will be possible to cross-connect between different nodes via the VXLAN mechanism. Super cool. Super, super cool. So by the way, Nikolai, this also means that your VXLAN code works. Yeah, I realized that. Yeah, I think there was still a minor issue that we were running into with 522: it works on a local Vagrant setup, but we're having a little bit of trouble with it in the CircleCI/Packet environment. So Ed and I are working to finish off those last details so we can get this merged in as soon as possible. Yeah, and I think we learned a ton in the process; it's been really interesting. A lot of the process of writing Network Service Mesh code has been: you get something working, then you look at what you've written and say, okay, this has to be refactored, and you go back and refactor it into something that's a little bit saner. So if you go looking through the code for 522 and you see some slightly rough spots, a lot of those slightly rough spots are actually labeled with a TODO saying this is slightly rough. So it's probably an interesting place to go fishing for small things you could do that might be very productive: just go looking for the TODO comments in the code.

Yeah, and just so that people know, part of what we intend to do in time with these particular setups is, we don't want people to have to understand or know these little details in order to get your client or your endpoint running. So one thing that we want to do in the future is to provide clients and libraries and so on, so that people get something that abstracts most of this away. That way you can just focus on your logic and not have to worry about whether you put the right parameters in to get memif working, or VXLAN, and so on. Right now we're running into a couple of those little things, and in the future you shouldn't have to worry about that; you should have something that you can just plug in and have work. So that's the goal. One of the really cool things about 522 is we spawned four network service clients, and of course, because we're round-robining, and they're all spawned on the same node, two of those ended up connected to providers of the network service that are local on the same node, and two ended up connected remotely to something on a different node. And guess what? They literally don't know the difference. Cool.
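For reference, here is a minimal sketch of the kind of pod anti-affinity rule mentioned above — spreading the NSE flavors across nodes. The pod name, image, and label key/values are hypothetical; the affinity mechanism itself is standard Kubernetes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vppagent-icmp-responder   # hypothetical NSE pod
  labels:
    role: nse
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Refuse to share a node with any other pod labeled role=nse,
        # so each NSE flavor lands on a different node.
        - labelSelector:
            matchLabels:
              role: nse
          topologyKey: kubernetes.io/hostname
  containers:
    - name: nse
      image: example/vppagent-icmp-responder:latest   # placeholder image
```

The mirror image — a plain podAffinity rule on the clients — would keep all the NSCs together on one node, which is the setup described above.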
That's cool. All right, so before we continue on, are there any questions on PR number 522? All right, in that case, we will move on to the KubeCon demo project board. A lot of this stuff we've actually spoken about already: we've spoken about creating the VPP agent VXLAN support; let's see... I think we spoke about huge pages quite a while ago. Yeah. So for the in-progress stuff, the first one is 507, which relates to 522. We already spoke about Skydive. Dynamically expand the NSMdp pool of device IDs — Ed, do you want to talk about that? So I think Andrea Plotov is actually working on that. Do we have him on the call? I don't know if we have him on the call. Yeah, so basically the trick here is, we're using the device plugin API in order to allow us to inject environment variables and mounts into your containers, and this is sort of a necessary thing. And so right now the component we have, the NSMdp, which is doing that advertisement, is advertising a sort of fixed number of device IDs — I think it's currently 10. And effectively we need to make sure that it will scale that pool, so it always has at least 10 in reserve in case lots of pods get scheduled all of a sudden. So as things get allocated, it will note that and expand the size of the pool: if we start with a pool of 10 and one of them gets allocated, then we expand the size of the pool to 11, because that way we've still got 10 free in the pool, and so forth. It's just part of making the whole system more robust.

So there are a few NSM data plane — or sorry, device plugin — things listed in there as well; I think all of that is still part of the same topic. And the last part on here was eliminating race conditions between the device plugin, the NSM daemon, and the data plane itself. So do you know of any status changes on that, or is that still open? There have been a couple of patches that have landed in that area, and I think that's probably been fixed. Essentially the problem comes down to: the NSMdp should not be advertising NSM resources until the NSMD is up and functional, and the NSMD should not be indicating that it is up and functional until the data plane is up and functional. And you get all kinds of weird behaviors if you get race conditions in that ordering. I tend to be of the view that a race condition is simply a failure to enforce a mandatory ordering of events, and so this is just putting together the order of events for that. Yeah, and this is not listed on the board, but we also added in some code for race conditions between the NSMD and the Kubernetes registry provider as well. So one difference with the registry is that the device plugin and data plane typically connect over a Unix socket, but we want the registry to be a little bit more generic, so that NSMs — or rather eNSMs, those are network service managers that are not part of Kubernetes — can eventually publish network service endpoints to the registry. So we chose to expose that over a TCP/IP port. And we've also eliminated a race condition there: gRPC, when you ask it to connect, will keep on trying to connect — it's actually an asynchronous connect, and it'll keep trying until it works, with an exponential backoff. And so effectively the registry client was coming up very fast, triggering the exponential backoff, but still reporting that the registry was up and running.
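For reference, a minimal Go sketch of the gRPC behavior described above: by default, dialing is asynchronous and retries with exponential backoff in the background, so "the dial returned" does not mean "connected". Blocking on the dial with a timeout is one way to avoid reporting readiness too early. The helper name and address here are hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
)

// dialRegistry waits for an actual connection instead of returning
// immediately. Without WithBlock, grpc.Dial returns at once and keeps
// retrying in the background with exponential backoff - exactly the
// race described above.
func dialRegistry(addr string) (*grpc.ClientConn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	// WithBlock makes DialContext wait until the connection is READY
	// (or the context expires), so "connected" actually means connected.
	return grpc.DialContext(ctx, addr, grpc.WithInsecure(), grpc.WithBlock())
}

func main() {
	conn, err := dialRegistry("registry.example:5000") // hypothetical address
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("registry connection is actually ready")
}
```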
So that should be resolved as well. I think that pretty much covers it for the KubeCon board. Do we want to add anything to the to-do list that we haven't discussed in the meeting already? We'll add the stuff we discussed previously into this board after the meeting. So, I mean, there are a couple of things for the demo that are still sort of floating around. One of them is, we've been talking about doing something for the demo that was basically like Sarah's story, where we stand up the world's simplest firewall and then configure something to act as a VPN gateway. And VPP can be pretty easily configured with some ACLs to do stateful firewall behavior, and it also supports IPsec. And so there are still some outstanding items there for writing some super, super simple network service endpoints for those two operations, so that we could actually deploy them. And in this case, a network service endpoint is literally just a tiny bit of code that configures whatever the additional rules are — the ACL rules, or setting up your IPsec connection to your IPsec concentrator, here the VPN gateway — and then just calls the existing code that we have when a connection comes in, to connect the incoming connection into the VPN. So it shouldn't be super complex. It might be a nice starter project if someone wants to pick up a code editor, but we are facing somewhat tight timelines, because we're hoping to be in a position next week to have the demo more or less in decent shape, so that we can roll into KubeCon with it.

Okay, so effectively there's an IPsec network service endpoint and a firewall network service endpoint. And for someone who may want to take this on, would the idea be to take the VPP-agent ICMP responder and then modify the configuration to add in the firewall and make a connection out to the next hop? Yeah, roughly; that's roughly what you're looking at. You would start with that, and you would need to — so you're acting as an NSE, but you also need to act as a network service client and ask for a connection out. This is part of why having the VPP-agent network service client example is very helpful, because you would be both a client and a server in that case. And then there'd be a little bit of interesting learning, because you have to learn to program a couple of ACLs with the VPP agent over gRPC. And then there's one other interesting thing we would have to do: we've talked about, when doing connection parameters, wanting to be able to pass back essentially routes — so you pass back prefixes and where they should go. And you'd want to be able to pass that back to the network service client so that it actually gets configured. So there's a little bit of work around that as well. Is there anyone who would like to take one of these on? This is something I think would be relatively low complexity but high impact. So is there anyone who would like to volunteer? Is my mic on? Okay, so yeah, I think that my simple matcher, the simple service selector, should be very easy to get done. So after that, if there's no one else, I think I can take on the firewall thing. Cool, cool. Okay. It's great to have people working in parallel on these things; everything is easier when you've got a friend. Yeah, cool. Yeah.
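To make the shape of that starter project concrete, here is a minimal Go sketch of a firewall endpoint that is both an NSE (it serves incoming requests) and an NSC (it asks for an onward connection to the next hop). Every type and function name here is a hypothetical stand-in, not the actual NSM SDK:

```go
package main

import (
	"context"
	"log"
)

// Hypothetical stand-ins for the real NSM request/connection types.
type Request struct{ ServiceName string }
type Conn struct{ ID string }

// Hypothetical client interface for asking NSM for an onward connection.
type NSMClient interface {
	Request(ctx context.Context, svc string) (*Conn, error)
}

// FirewallEndpoint serves incoming NSE requests and chains onward as a
// client to the next network service (e.g. the VPN gateway).
type FirewallEndpoint struct {
	nsm     NSMClient
	nextSvc string // the network service to chain to
}

func (f *FirewallEndpoint) HandleRequest(ctx context.Context, req *Request) (*Conn, error) {
	// 1. Program the "world's simplest firewall" - in a real
	//    implementation this would push a couple of ACLs to VPP via
	//    the vpp-agent over gRPC.
	log.Printf("programming ACLs for incoming request to %s", req.ServiceName)

	// 2. Act as a client: ask NSM to wire us onward to the next hop.
	next, err := f.nsm.Request(ctx, f.nextSvc)
	if err != nil {
		return nil, err
	}

	// 3. Cross-connect the incoming connection to the onward one
	//    (elided; this is the existing connect code mentioned above).
	return &Conn{ID: "fw-" + next.ID}, nil
}

func main() {}
```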
And just so you know, from a community perspective, part of the way we've operated in the past is that if someone takes on a particular action item and someone else gets to it first, then there are no hard feelings or anything; we move forward. So if someone gets to it first or decides to hop onto it, the only thing we recommend is that if you're working on something that's publicly on one of these boards, you post on there that you're working on it, so we can try to de-duplicate work. But other than that, no one should feel bad for opting to take something early if no one's gotten to it yet. Yeah. The other thing I'll go ahead and do is — all right, I have some issues for some of these. I tend to write fairly detailed issues that people seem to find relatively easy to follow. Too easy, I would say. He just writes all the pseudocode and you just have to translate; he turns you into a compiler. Yeah. Well, I mean, there's a ton of stuff here where most things are actually super, super easy if you know the right place to look, and I don't personally find the frustration of not knowing where to look to be productive. So I will often just say, okay, here is the place where you would want to go look, and here are the things that I would suggest you think about. And that ends up making it fairly easy for people. That said, please don't ever read an issue that I write as something you should robotically follow, right? I don't need people robotically following these things. There are always going to be things that I don't think of; there are always going to be better ideas. So I'm trying to help, not to direct. Yeah.

So we don't have that much time, so I'm going to move forward to Taylor's CNCF KubeCon CNF/VNF comparison. Is Taylor on the call? I'm here. Apparently it's still a mouthful; how about just CNF comparison? Your audio is really bad on my end; you sound like a robot. Yeah, it's also bad here. It's like a robot in a tunnel. How's this, better? Way better. Okay, great. I just dropped from a phone call. Okay. So we have the CNF that we're testing with, based on VPP, deploying to Kubernetes with the Helm charts and Helm. We had to disable the second set of cores on the second CPU — we're on a dual-socket system on Packet — because DPDK is having NUMA issues crossing over. We don't have these same problems on Docker, where we're able to be very selective about pinning which cores are used; we don't have that type of fine-grained control in Kubernetes. After we disabled the second CPU socket, we stopped having errors; it was actually causing VPP to crash with a DPDK memory error. So we have that in place right now, until we can figure out anything else on the Kubernetes side. This does allow us to use 28 cores with hyperthreading, or 14 cores without hyperthreading, on the systems that we're using. And then on the host side, on the worker nodes, we have VPP set up as a vSwitch, and that's working as well. So all of this is preliminary to the NSM availability for us to plug in, which sounds like it's pretty close as far as some of those needs; we probably won't have it at least for KubeCon, but we'll be looking at how to plug it in after. So the next step is setting up the test case that we're gonna be using, and deploying with Helm for that. This is for the Kubernetes side.
On the OpenStack side, we're working with the VPP Neutron plug-in — this is the OpenStack VPP plug-in. It was not starting up and showing the devices or anything as of last week with the Chef cluster; we had it working in DevStack. We now have it actually showing up and talking to the Neutron agent and updating the database in OpenStack, so that looks good. There's some specific setup that we're working through for the test cases — how the bridging and everything works with the networks between the nodes — and we're working through those issues. And then we should have a good, reproducible OpenStack cluster that has VPP as a vSwitch for OpenStack as well. And let's see, that's probably it on that side. We have been working through multiple service chains. I'm not gonna change any naming right now, but the testing that we've been doing uses a snake topology, where traffic goes in and out of the VPP vSwitch, as well as pipelining, as we're calling it, connecting the CNFs directly with memif interfaces. Those were working on CSIT. And we've been porting over all of that code that was functioning on the Packet side. This is in Docker/KVM, and then replicating what works on Docker/KVM to Kubernetes and OpenStack is the way we've been going through it. So those test results are pushed up for all of those services, including running multiple service chains on the same node. So that's where we are. Cool.

So, as always, is there any way that we can help you out, or anything that we can help unblock you on? Well, if people have experience with the OpenStack VPP side, that Neutron plug-in is definitely an area where help would be useful. You can ping me on Slack — the Cloud Native one — or shoot me an email, either way. And the other one is, if you have experience with Kubernetes and the CPU core management side — there's some new stuff that's rolled in, like in 1.10 with the policies that you can set, but we need even more fine-grained control. That's an area that I've been following. The current state of the art in Kubernetes, to my understanding, is that you can configure a node such that if you request some integral number of cores, it will give you pins to those cores. But there is nothing that I'm aware of, even in the pipeline, for allowing you to pick which cores Kubernetes deploys you to. And I'm not necessarily the closest guy to that problem, but it is a problem I've been following a lot, because I realized this kind of stuff was going to be important. Yeah. The only workaround at this point is probably shutting down cores to 100% ensure that you land in a certain NUMA zone. But beyond that, there's work that needs to be done in upstream Kubernetes in order to guarantee that kind of alignment. Okay, that's where we're going right now. So if there's anything that's pre-release or whatever to test, or if there are workarounds on the host that we can do, that would help; yeah, anything on that side would be great. Well, I'll also ping an internal Red Hat team who I think has been looking at some of this stuff and see if they have anything they can point me at; I'll get back to you on that. Yeah, the one thing to be careful of is, the last time I poked my head into this problem space, in the device plugin management working group, there were a lot of things being discussed, but nothing had actually reached the status of accepted.
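For reference, the current mechanism Ed describes works roughly like this: with the kubelet's static CPU manager policy (available since Kubernetes 1.10), a Guaranteed-QoS pod that requests an integral number of CPUs gets exclusive pinned cores, though you cannot choose which ones. A minimal sketch, with placeholder names and images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vpp-cnf                      # placeholder name
spec:
  containers:
    - name: vpp
      image: example/vpp:latest      # placeholder image
      resources:
        # requests == limits with integer CPUs => Guaranteed QoS.
        # With kubelet --cpu-manager-policy=static, this container is
        # pinned to 4 exclusive cores - but the scheduler picks which
        # cores, which is exactly the gap discussed above for NUMA.
        requests:
          cpu: "4"
          memory: 4Gi
        limits:
          cpu: "4"
          memory: 4Gi
```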
So lots of people have lots of hacks, but it's not clear whether any of those hacks are actually going to make it into real Kubernetes yet. Yeah, that's my main concern: if you end up depending on something that's not going to get into Kubernetes, that can be problematic. Yeah, cool. So someone pinged me and said that Maciek was on the call, and he had asked if we could talk about the time the call occurs. I don't see him on the participants list; is he still on the call? Maybe you don't see me, but I'm here. The list is very long now. Okay, cool, cool. Oh, you're listed under a different name, not as Maciek. Oh, apologies. Yeah, we only have five more minutes, so let me just toss out a real quick call for help first, and then you have the rest of the time. Anyone who wants to help with documentation or help with the website — any help in that area would be greatly appreciated, because it all needs to be up and ready to go for KubeCon. So it's an easy way to join in and help. And with that, Maciek, you have the rest of the time.

Well, I don't know what Ed wants me to talk about; I have an issue with the call time. I admit I overlooked the consensus call, or rough-consensus call, over email; I expressed my view over email, and I don't have anything else to add. The call conflicts with the FD.io VPP call that is held biweekly, which means that my coverage here will be not spotless, but rather spotty. And I wonder if there is any way to avoid the conflict, as I believe there is huge potential for collaboration between the two projects. There are a number of people that may want to attend both, like me, and there are some folks from FD.io CSIT who would like to do the same. That's all. Cool. So I think this is probably something we need to discuss a little bit more in depth, then; a couple of minutes isn't going to be enough. I fully agree. But there's an email thread that I think we should reawaken; so why don't we do it on the email thread? I do recall that Lucina has a conflict with the CNCF TOC meetings — is that right, Lucina? Two times a month, on Tuesdays. Okay. So basically every other week you have a conflict for the call as well; it's not just Maciek who has the conflict. So I think we may want to take it to email. I know that we have had to move this meeting before, and it's probably better to have this conversation again now, because the bigger the community grows, the harder it becomes to move meeting times. So we'll see if there is another time that would work better for the conglomeration of folks. If folks could please speak up about their needs and so forth on the thread. I'm trying to recall, how did we sort this out last time? I feel like maybe we did a Doodle poll or something. That sounds good, actually. Well, I'm trying to see if anyone remembers what happened there, because it seemed to have worked out, but I don't remember what the mechanism was. I recall the Doodle poll, but I don't recall the consensus over the list; maybe I missed the emails. But I think there were multiple polls in the past. Well, we had two rounds of this: the first time was when we decided to stay on Fridays, and there were issues with the Doodle poll that we ran into; the second time, I believe, was just an email thread, and the email thread actually landed us a consensus. Okay. So I would strongly encourage everybody who attends the meeting to please monitor and pipe up on that email thread.
We may very well keep this time — obviously, finding a time that works worse for more people is not going to be a thing — but maybe we can find a time that works better for everybody. So one thing I do want to point out is that this time seems to be friendly for the folks in Europe; correct me if I'm wrong, because we do have a bunch of folks in Europe who are turning up now. It's fine for me. Okay. Are there any last-minute announcements, or should we close it up? I posted it in here: the CNCF CI working group meeting is at 2 p.m. Eastern. Anyone who would like to join who's interested in CI topics is welcome. Is that today? That's today. Cool. It's monthly, and it is today. Okay. Well, in that case, thank you everyone for attending, and we will see you again next week at the same time. All right, talk to you guys then. Take care. Cheers. Bye-bye. Okay, bye.