Okay, before we get started, is someone able to share the agenda? Good morning. Yes, I can help with that. Thank you so very much. Okay, so to get started, we definitely have some agenda bashing to do. So let's start with agenda bashing and then we'll go to the events. Is there anything that anyone would like to discuss? I think we're good at this point. I think we're sort of good, but it might be useful to add an item for a little bit of cross discussion around getting some of the hardware CI working at Packet, because I know that's something I've been looking at, and I know we've got other folks on the call who've been looking at it. So that might be something worth talking about, if for nothing else than to commiserate. Okay, I think that's a good idea, and I'll stick that before any discussion on the Mellanox item, since that's related. And if it turns out that we cover some of the Mellanox stuff in the process, then we can go ahead and skip that. Yeah, that would be really good. Okay, and I think we need to do an Open Source Summit recap. Sounds like fun. On upcoming events — ONS, definitely, and we probably also want to do KubeCon, because one way or the other, even if it means we have to commandeer a bar somewhere, we're definitely going to do something at KubeCon. So we should grab the dates as well and stick that on. Great. In that case, let's go ahead and get started. The next upcoming event is ONS, the Open Networking Summit, in Amsterdam. So if you're going to Amsterdam, or you happen to live nearby, come talk to either me or Kyle — I don't believe Ed will be there, but Kyle and I definitely will. Come along: we have a presentation that we're giving, and hopefully we'll drum up a lot more visibility and support for what we're working on.
Let's see. KubeCon is, I believe, at the end of November or the beginning of December, so we have a few months until then. Yeah, the CFP for that has closed, but I think the FD.io one is still open. Yeah, that's a good point. And we've added a couple of submissions to KubeCon, so we'll have to see what happens with those. One of the recommendations I had — I don't see him on the meeting right now. He's here. Okay, so one of my suggestions: Thomas has been working on a cross-connect with VPP, and I think that would be an excellent submission. So if you haven't added that already, I would definitely recommend that you do so. Are you saying Tom Herbert? Yeah, that's right. He's not on the call — he already pinged me. He's at the DPDK summit in Ireland and he's having trouble connecting from a cafe or something there, so he's trying to figure out the right number to call in from Ireland. He's not actually on the call but is trying to get in at this moment. I missed that. Well, he signed in to the document but then he couldn't get into the Zoom window. Cool. So there is also a KubeCon China going on in Shanghai, and we're trying to work out if that's something we can fit into our budgets, with regard to speaking about Network Service Mesh. So we'll know more about that later. Okay, in terms of action items, let's go ahead and hop on to that. So there's a couple of things that have been going on. First, there's continued work on SR-IOV, and I know that Sergey has been spending a considerable amount of time on SR-IOV to get that all working. So I'll let you talk a little bit about where you're at with that. So basically, most of the work is done. Right now, there are two components. One component is a binary.
So far it's a standalone binary, which basically scans the host and prepares the config map, which is then used by a controller running on the host providing SR-IOV services. The bits are running on one of the Packet.net servers, and it seems to be okay. There was some minor testing done talking to the VF devices, which seems to be okay, but there's a part missing, which is the actual data plane. Right now there's an effort to bring up either VPP or DPDK to be able to actually test the data plane part of this solution. So that's it for me. So basically, we think we have working SR-IOV, but until packets start flowing, one is always a little bit wary. Exactly. And I know that this is part of what I was wanting to talk about with getting CI working on Packet, because there are a bunch of different efforts going on between Network Service Mesh and some of the VNF/CNF comparison stuff at CNCF, where I think we both have a really strong interest in getting, minimally, TRex — which is the packet generator — and VPP working in the Packet environment, because if we get those two things going then we can really do all kinds of cool things. So is there anything that you're blocked on that we're able to help you with? I can discuss some of the issues that are going on, and I know we've got Michael on the call who can say even more than I can, because he's been deep in that a lot. But the challenge that I've seen is that most of the Packet NICs are Mellanox NICs, which are wonderful NICs. Unfortunately, their drivers are problematic — in particular, the DPDK 18.05 drivers were broken. There's a patch that fixes them in VPP, so if you get the latest VPP 18.07 from the stable/1807 branch or from master, you can build with Mellanox drivers, although that's challenging. I know Michael spent a lot of effort trying to get that going.
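The scanner binary described above is its own project; purely as an illustration of the scan-then-publish flow (read `sriov_numvfs` from sysfs — the standard kernel interface — then wrap the result in a ConfigMap manifest), here is a Python sketch. The ConfigMap layout is invented for the example.

```python
import json
from pathlib import Path

def scan_sriov(sysfs_root="/sys/class/net"):
    """Walk the sysfs network-device tree and record, per physical
    function, how many VFs are currently configured. Devices without
    an sriov_numvfs attribute (no SR-IOV support) are skipped."""
    devices = {}
    for dev in sorted(Path(sysfs_root).iterdir()):
        numvfs = dev / "device" / "sriov_numvfs"
        if numvfs.is_file():
            devices[dev.name] = int(numvfs.read_text().strip())
    return devices

def to_configmap(devices, name="sriov-devices"):
    """Wrap the scan result in a ConfigMap manifest that a controller
    on the node could consume (the data layout here is illustrative)."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name},
        "data": {"devices": json.dumps(devices)},
    }
```

On a real host you would call `scan_sriov()` with the default sysfs root and hand the manifest to the Kubernetes API.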
So there are just challenges with using Mellanox in general that we're working through. And I think it's not so much that the drivers don't eventually work well — it's just that the consumability of them is tricky. And just for clarification, those patches are on the stable branch but not in the latest release. So the stable 18.07 release for VPP should be getting a dot release at some point, hopefully next week, and the patches will be in that dot release; they're not in the .0 release. We actually found out because we rolled the release out and a whole bunch of people started saying, hey, what's going on, Mellanox isn't working. And what it turned out was that Mellanox had done things in DPDK 18.05 that broke their drivers around some of the IOMMU stuff. So everybody got their heads together, there was a big collaboration, everyone moved fast, and we got patches upstream where VPP will patch its version of DPDK when it builds. But I know that you've got some connection, Billy, with the packaging of DPDK for CentOS and RHEL. It would be really good to get the patch that fixes this stuff into distro packaging as soon as humanly possible. I've been talking to the Debian packagers for DPDK about that as well. Okay, I'll talk to Tom Herbert. I don't think 18.07 has moved into the CentOS NFV SIG yet, so it may be worthwhile, instead of pushing the .0 release, for the first release to go in to be the .1 release — the timing might work out okay for that. Yeah, I tend to think of this problem systemically: the first step was to stop the bleeding in VPP by patching DPDK when we build it, but the second is to make sure the patches go upstream to DPDK and that the backports go into the packages, because DPDK 18.08 has already come out. But we can fix the 18.08 that goes into distros, and we should. Right. Do you have things you would like to add to this whole discussion?
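As a small illustration of the version reasoning above — the Mellanox fix lands in the 18.07.1 dot release, not 18.07.0 — here is a hypothetical helper for deciding whether a given VPP release string already carries the fix. The exact cut-off version is this sketch's assumption, taken from the discussion.

```python
def parse_version(v):
    """'18.07.1' -> (18, 7, 1); a missing dot-release counts as .0."""
    parts = [int(p) for p in v.split(".")]
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def has_mlx_fix(vpp_version):
    """True if this VPP release is at or after the 18.07.1 dot release,
    which (per the discussion) is the first release shipping the patch
    for the DPDK 18.05 Mellanox/IOMMU breakage."""
    return parse_version(vpp_version) >= (18, 7, 1)
```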
No, I actually don't use Mellanox. I just wanted a clarification so I understood, but I don't actually use the Mellanox NICs myself — it was really just an informational point. No, I think that's good. And Michael — Michael Patterson — I know you've been working on a lot of this stuff. Do you have anything you'd like to add? Well, I can add a bit. I see you have my notes open right now. Thank you so much for those notes, by the way. Yeah, I think those need a bit of an update; I have some more notes, I just need to make them look a little better, and then I'll probably update them at some point. One thing about those notes that I will note: in poking around with CoreOS, which is what Cross Cloud CI is using by default, it does appear that CoreOS has the IOMMU on by default and has hugepages set to 2 MB by default. So it looks like, at least with CoreOS, you don't actually have to tweak kernel parameters to get the right things available at the kernel level — but I'm not entirely sure about that. Oh, that would at least be good, but I guess you can't really save any reboots regardless, since I think after changing the settings in the Mellanox firmware it asks you to reboot as well. Okay, what we did initially with them — because it's still an ongoing process with Mellanox to support DPDK by default, because of their OFED dependency; it's always the OFED dependency — one thing we could look at is working with Mellanox like they did for us before: they pre-build their images, they pre-build the packages for Debian or Ubuntu, and we used those. They publish the repo of pre-built packages daily, so we could leverage that. Well, that would be good, but there are two things that I want to really strongly press them about.
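The CoreOS observation above can be verified mechanically: the IOMMU setting shows up on the kernel command line and the hugepage state in `/proc/meminfo`. Here is an illustrative checker; the two files it reads are standard kernel interfaces, and the function takes their text as arguments so it can be exercised without root.

```python
def kernel_ready_for_dpdk(cmdline, meminfo):
    """Check the two kernel-level prerequisites discussed above:
    IOMMU enabled on the kernel command line, and 2 MB hugepages
    allocated. `cmdline` is the text of /proc/cmdline, `meminfo`
    the text of /proc/meminfo."""
    iommu = "intel_iommu=on" in cmdline or "amd_iommu=on" in cmdline
    total, size_kb = 0, 0
    for line in meminfo.splitlines():
        fields = line.split()
        if line.startswith("HugePages_Total:"):
            total = int(fields[1])
        elif line.startswith("Hugepagesize:"):
            size_kb = int(fields[1])
    # 2048 kB == the 2 MB hugepage size mentioned for CoreOS defaults
    return iommu and total > 0 and size_kb == 2048
```

On a live node: `kernel_ready_for_dpdk(open("/proc/cmdline").read(), open("/proc/meminfo").read())`.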
One is to include the flipping fix so that the drivers are usable. The second is that they actually do publish Debian packages, but in a very unhelpful way right now: you can go click through a bunch of stuff to download them, but I would really love to see them made available via something like packagecloud as an apt or yum repo, so that you could just point at an apt or yum repo and install them. That would be unbelievably helpful. I guess I can add a few more details about my packet generator then. What I ended up doing there — since I'm running everything in a container — is I found that the version of TRex that I'm using, I think it's 2.32, only supports an older version of OFED, so I went and installed that one. And then for the container I had to share all the host libraries with the container, and then it looks like it works. Yeah, is it because you're using an older version of TRex, or is there something else? Okay, so it's been fixed in more recent TRex releases. Yeah, I'm using the old one that's packaged with NFVbench, the Cisco framework for NFVI benchmarking. Okay, the reason I was asking is that Hanoch, who's the PTL for TRex, is a good friend. So if there's something that's actively broken in TRex that hasn't been fixed, I can go talk to him about it. But if it's something he has fixed in more recent versions, I'm going to get a very interesting response from him if I complain about problems that are already fixed. Yeah, I know how most of them feel about doing things for older versions — I guess they have quite a short support window. Why not just use TRex directly? I know that's typically what FD.io does. The problem with doing that is, if you want to make any actual measurements — any PDR or NDR tests — then we'd still need to write the cases for setting up the traffic and setting up the flows.
You may want to reach out and talk to some of the FD.io CSIT guys, because they do that — they literally run thousands and thousands of performance tests. Yeah, I think that might be good at a later point, but for now NFVbench is so easy when it works: it takes 10 minutes to set up and then you have measurements running. No, I understand. With all the issues we're having, we may be blowing more time by working with antiquated things. That's the thing — I have a solution for it now. Awesome, that's good. When you get it a little more settled, if you could add more comments to this — because this is a lot of the breadcrumbs; I'm basically following the breadcrumbs from this and from the instructions that Sergey posted. Yeah, I'll definitely add whatever I have once it's in a state where it's shareable — while making sure I'm not saying things that are less helpful rather than more. Cool. So this is goodness. Anyone have anything else they want to add on this? Then I think we can continue on to the next part. So another thing that I've been working on: I want to start moving more components toward the plugin model and start refactoring the system so that it becomes more testable. I think a major component of our success is going to be continuing to increase testability, so that as things change — even though it's still a simple project — we still have confidence in our changes over time. So there's going to be quite a bit of work on my part to do that. Kyle, did you want to add anything on your side about some of the stuff that you're doing? Because I know you're looking at some of the CI stuff, and I'm not seeing anything else on the agenda that you've been working on. It helps if I unmute as well, I realize now. Can you all hear me now? Yes. Okay.
So the stuff that I worked on this week: Ed was looking at the CRD code a bit last week, and of course everyone was traveling, so I never had a chance to circle back with him until Tuesday. But essentially I pushed a patch, and Ed and Sergey reviewed and merged it, which means we now automatically generate OpenAPI v3 validations for all our CRDs as well. I think it's the third one there — the fix to the CRD code to auto-generate. So that one was pushed and merged, and that's actually pretty slick, because it means that all of that validation code, which was written by hand before and required syncing whenever any of that changed in our CRDs, is now auto-generated and should automatically validate the correct way. But yeah, that's basically what I worked on this week. And then after that I made our CRD creation a little bit more robust. I talked to Sergey; I think he's going to push out a patch to modify the CRD creation a little more yet again. But basically this week I spent a bunch of time on these CRDs, making them a little more robust, less error-prone, and more automated. Yep. And if we could also make sure that the documentation is clear and up to date — because when I was trying to follow the documentation, it looked like there were missing steps. That doesn't mean there are missing steps; it just means there was confusion on my part. But it's worth revisiting, because folks will — you know, we've got a couple of ambitions there. In fact, it's probably worth talking about the channels-versus-no-channels stuff on this call. Yeah, I think we can do that next.
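To illustrate what the auto-generated OpenAPI v3 schemas buy us: the API server rejects malformed custom resources before any controller sees them. Here is a toy, hand-rolled checker for a schema fragment — in reality the generated `openAPIV3Schema` is enforced server-side, and the `payload` field used below is just a plausible example field, not the project's actual CRD.

```python
def validate(obj, schema, path="spec"):
    """Minimal check of an object against an OpenAPI-v3-style schema
    fragment: required fields present, primitive types match.
    Returns a list of human-readable error strings (empty == valid)."""
    errors = []
    if schema.get("type") == "object":
        for field in schema.get("required", []):
            if field not in obj:
                errors.append(f"{path}.{field}: required field missing")
        for field, sub in schema.get("properties", {}).items():
            if field in obj:
                errors.extend(validate(obj[field], sub, f"{path}.{field}"))
    elif schema.get("type") == "string" and not isinstance(obj, str):
        errors.append(f"{path}: expected string, got {type(obj).__name__}")
    elif schema.get("type") == "integer" and not isinstance(obj, int):
        errors.append(f"{path}: expected integer, got {type(obj).__name__}")
    return errors
```

The win of generating this from the Go types is exactly what Kyle describes: the schema can never drift out of sync with the code.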
The other thing that helps: Frederick, I know you pushed — and I merged — the patch which fixed the default target for make, which was an obvious oversight on our part. That's super slick as well, because now if you just type make you get the desired result. Yeah, I ran into a build break where Travis and my local environment disagreed, and I was like, okay, what's going on here? But now that's really cool. Yeah, maybe now's a good time — do you want to lead that discussion on the channel stuff? Yeah. So one of the things that I've been noticing: when we originally started talking about network services, we were staying very close to the patterns that exist in Kubernetes services. When you look at a Kubernetes service, you've got a service that has a name and then it has ports, and a Kubernetes service can have multiple ports. At the time we said that calling something in networking a 'port' is probably not doing the world a service, given how many other things are named ports, so we called them channels. And in thinking the whole thing through, I'm coming to be of the opinion — and I think Kyle is as well — that we should just do away with the channel concept and have a network service support a single kind of payload, so that if you need multiple of them, you just have multiple network services. It simplifies a bunch of things in the architecture to go that route; channels end up introducing a lot of complexity and a lot of weird questions about how you do them, and I'm not sure how much value they're actually bringing us. But I did want to raise that here and see what everyone else's opinions were before we started hacking through code. Did any of that make sense to anyone?
Yeah, when I was playing with the NSC and NSM, I couldn't find a way to fit the channel concept in, and it seemed a bit redundant — at least in that specific scenario, not useful. So that's pretty much what you're saying, and what we're seeing as well. Makes sense to me. Some of this, quite honestly, was watching you working on this with the NSC and the NSM and going: why am I making Sergey's life so hard? I shouldn't be making Sergey's life so hard. Yeah, thanks. You know, some of this would just involve nomenclature changes. Right now there's a gRPC call that a network service endpoint makes where it says 'expose channel', and that would just become something like 'expose network service'. Yeah, and when I was thinking through these scenarios, I wasn't able to think of any scenario where not having multiple channels made things unexpressible. And I know we've gone through this exercise several times as well. In fact, in most of the exercises that we do, when we discuss request-to-service and accept and so on, one of the things I noticed was that we're not making any mention of channels in there whatsoever — and all the examples still make sense. So in that scenario, I think it would be a good idea to further simplify and do away with it. Okay. No, I think this is probably for the good. I think it's also important for us to have these conversations as a community, because I can't tell you the number of times I've been in situations where people run off and do things they think are smart and come back, and someone says: wait, wait, that is actually really important to me, let me explain why this is not where we want to go. So it's better to talk these things through ahead of time. Cool. Awesome. So it sounds like the consensus is to do away with channels.
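A tiny sketch of the simplified model being agreed to here — one payload type per network service, no channel layer. Field and payload names are illustrative, not the project's actual types.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkService:
    """Post-channels model: a network service carries exactly one
    payload type. Need another payload? Define another service."""
    name: str
    payload: str  # e.g. "IP" or "Ethernet" — names are illustrative

def find_service(services, name, payload):
    """A client asks for a service by name and payload. With one
    payload per service, matching is a plain equality check — no
    channel-selection logic needed."""
    for svc in services:
        if svc.name == name and svc.payload == payload:
            return svc
    return None
```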
And so we'll just have network services, and a network service will have a payload type. Cool. Okay, next item on the agenda: let's talk a little bit about the Open Source Summit. Ed gave a fantastic talk to the CNCF working group on Tuesday, the day before the conference. 93 slides in 20 or 25 minutes — but simultaneously, it didn't feel like 93 slides. Which is good, because presentations with 93 slides in that short an amount of time usually feel rushed, but no, it went really well and got the point across, I think. And it's actually really uncanny, because the person from TELUS — I think it was Sana — gave a talk and listed all of her problems, and then Ed's presentation was like, here are all the solutions to your problems, one for one. It was really uncanny, and they did not coordinate. Yeah, it was even the same order. I swear there was no coordination involved. So all the conversations I had over there with various people were very positive. Ed, I'll let you talk about some of the interactions that you had and some of the outcomes you took away from it. Yeah, effectively, people were incredibly happy. They identified really closely with the kinds of problems that we had; there were folks who commented that they identified very closely. I was telling the Sarah and the Secure Internet Connectivity story, and lots of people identified with the character — in particular with the sorts of problems Sarah runs into, that everyone runs into. So there was a very strong sense from folks in the audience that this is really where they wanted to go, which is always a positive thing. Of course, now we have to keep typing faster so we can actually deliver it to them. But yeah, it ended up being a very positive experience overall.
All right. One thing that we need to do: I still have that server running, so I need to copy over those few questions and move them into the frequently asked questions, so that they're part of the markdown and not forwarding to another server. I think that's probably a good idea. It was nice to have the live questions and answers available on the website. And the QR code stuff still makes me so happy, because there were a couple of places where people pulled me aside asking for pointers, and I could just bring up the mobile phone and give them the QR code pointing directly at the website — that turns out to be massively handy. All right, a short to-do as well: Kyle and I need to prepare our slides for ONS. I've got the template; I'll also spend time on that this afternoon, and you and I can sync up one-on-one. We have a ton of material to pull from, which is why I haven't been stressing about it too much — I think with a day or two of focused time on it we'll be able to nail it pretty quickly. Yeah, I completely agree. Two things I would suggest on prepping the slides. One: remember we just killed channels, so lots of that collateral will require a little bit of work. Yeah, exactly — we need to get that pull request out and merged. The other thing — what we don't have a lot of collateral on right now, but that people seem super interested in every time I talk to them — is ESMs and PNSMs. ESMs are external network service managers and PNSMs are proxy network service managers. An ESM lets you have a participant in the network service mesh that is external to your cluster, whether that's a different cluster or something that's managing physical network stuff. And a PNSM lets you basically insert a control-plane helper into the service function chain.
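As a toy illustration of a PNSM acting as a control-plane helper: one hypothetical use is splicing extra segment IDs (SIDs) into an SRv6 segment list between two in-cluster endpoints that only know their own addresses. This is a sketch of the idea, not any real NSM API; the splice-after-head ordering is an assumption for the example.

```python
def proxy_nsm_augment(sid_list, transport_sids):
    """Sketch of a proxy NSM's contribution to a service chain: the
    in-cluster endpoints provide head and tail SIDs; the proxy, which
    knows the physical network, splices transport SIDs in between.
    Returns a new list ordered head..tail along the SRv6 path."""
    if len(sid_list) < 2:
        # Nothing to splice into; pass the list through unchanged.
        return list(sid_list)
    return [sid_list[0], *transport_sids, *sid_list[1:]]
```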
So if you're doing Segment Routing v6, and SRv6 is your underlying carriage, the two ends of the network service mesh that live in the cluster only really know about the IPv6 addresses that they know about; you might want to stick a proxy NSM in there to add additional SIDs to the SID list. That allows you to insert some wisdom about the physical network. So those are two exciting things we don't have a lot of good collateral on yet. Although I'm not quite sure how to represent PNSM: PNSMs are sort of the lightsaber of the network service mesh world — they can cut through anything, but unless you're strong in the Force you're going to cut off your own arm. Yeah, I've found PNSM is really useful when someone looks at the general pattern of the CRDs and the standard NSM, and one of the questions that will sometimes be brought up is: but it doesn't really handle use cases requiring some form of omniscience, or something really advanced where very tight coordination across all parts of the chain has to be taken into consideration before any decision can be made. And introducing the PNSM at that moment is very handy. Okay — in terms of time and priority, I think we should discuss packet.net, and what we want to do to get continuous integration running on there. First, to note: we have packet.net accounts, graciously provided by both CNCF and packet.net. So one of the things that we need is, first, to make sure that everyone who is going to work on it has access to Packet — that they have a username and access to the group. I know that I've sent requests off to Kyle, Ed, and Sergey, and I think Ian Wells also has access.
So one of the questions is: is there anyone who wants to help, or is helping, who doesn't have access? If anyone decides to jump in on any of this stuff, get hold of me and we'll work out access. Okay, so for this particular area I'll give a little bit of a backdrop on what I'm thinking. Right now we run in two modes: a VM mode or a container mode. Currently we're running in VM mode because we have requirements that need root and capabilities outside of the container. Of course, this slows down our initial start time, because we have to spin up the VM, and simultaneously we're running Kubernetes within that VM, and that is very slow. The initial setup, from my view, is that we keep Travis but switch Travis to container mode so we get a very fast start. Travis can then send the appropriate commands to Packet to spin up or spin down a cluster. We're going to have to think a little bit about how we want to deal with this in terms of authentication and so on — make sure that we don't expose credentials — and think about how we want to approach this. The other thing we need to work out: I think it'd be a good idea to keep the unit tests on Travis, but push the Kubernetes integration tests to Packet specifically. My understanding is that no work has been done on this just yet, so that's going to be one of the big tasks. Does anyone else want to add anything to that, in terms of the high-level overview? Yeah, I think that's about right. All right. So, in terms of details.
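The "Travis sends commands to Packet" step boils down to a call against Packet's REST API. Here is a sketch that builds (but does not send) the device-creation request; the endpoint path and `X-Auth-Token` header follow Packet's public API docs, while the plan/facility/OS values are just plausible examples. The token would come from the CI system's secret store, never from the repo.

```python
import json

PACKET_API = "https://api.packet.net"

def device_request(project_id, token, hostname,
                   plan="baremetal_0", facility="ewr1",
                   operating_system="ubuntu_18_04"):
    """Build the HTTP request a CI job would use to provision a test
    machine via Packet's devices endpoint. Returning a plain dict keeps
    the credential handling (and the actual HTTP send) in one place."""
    return {
        "method": "POST",
        "url": f"{PACKET_API}/projects/{project_id}/devices",
        "headers": {"X-Auth-Token": token,
                    "Content-Type": "application/json"},
        "body": json.dumps({"hostname": hostname,
                            "plan": plan,
                            "facility": facility,
                            "operating_system": operating_system}),
    }
```

A teardown call would mirror this with `DELETE /devices/{id}`, which is what makes the spin-up/spin-down-per-run model affordable.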
How do we want to approach this? Do we want to split the work up into a series of smaller bites? Because one of the things I want to be careful of is that we don't end up without a CI system while we're making this transition — hey, there's no CI until we get it all up. Yeah, I think we have to keep the CI we have going until we actually have something working that's different. I was actually toying with the notion — because as we make this transition, the real work of the CI is going to be happening in things like Packet, but you still want a control plane with nice webhooks — I was literally looking at maybe playing with this in CircleCI while keeping the stuff we have going in Travis, and just not making the CircleCI side voting until we get it working. It doesn't have to be that way, but I've worked with both CircleCI and Travis, and there are pluses and minuses to both. Does GitHub have the ability to say that this isn't a voting check, and just give stats on it? Sorry — does GitHub have that ability? Oh, that's a good question, I don't know. If it doesn't, one option we have is to fork the repo and focus on getting a CircleCI setup that works against that. So that might be an option, but my preference, of course, would be non-voting, and then we switch the voting from one to the other. So that's something we need to look into: can we get non-voting checks? Okay, another approach we can look at: see if there are any webhooks that we can use to ship some of this stuff off to packet.net. If it turns out the credentials are an issue, setting up a webhook where we have something that listens and acts on our behalf to set up such systems may also work. So we'll have an alternative if we don't have a way of getting those credentials in place.
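The webhook-listener alternative hinges on one check: proving a delivery really came from GitHub before acting on it, so the Packet credentials can live only on the listener. Here is a sketch of that verification using GitHub's `X-Hub-Signature-256` scheme (an HMAC-SHA256 hex digest over the raw request body, prefixed with `sha256=`), built entirely from the standard library.

```python
import hashlib
import hmac

def verify_github_signature(secret, payload, signature_header):
    """Verify a GitHub webhook delivery. `secret` is the shared webhook
    secret (bytes), `payload` the raw request body (bytes), and
    `signature_header` the X-Hub-Signature-256 header value. Uses
    compare_digest to avoid leaking timing information."""
    expected = "sha256=" + hmac.new(secret, payload,
                                    hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

A small service doing this check can then spin machines up and down on Packet on our behalf, so the tokens never appear in the public CI config or logs.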
My biggest concern is primarily someone typing 'echo credentials' into the logs and getting an output of usernames, passwords, tokens, etc. We're protected against that in master for Docker at the moment, because Travis will only unlock those secrets for merged code — but this is stuff that we want to run on every commit, and so that becomes a little more problematic. Yeah, I know Travis and CircleCI have really good ways of managing secrets — for example, we use them right now for pushing to Docker Hub — so I'm not that concerned about being able to manage secrets in those systems. Okay. So what are the tasks? We need to spin up a Kubernetes cluster on Packet itself, as a task. There are also questions around SR-IOV: do we want these systems to have SR-IOV? My intuition is yes. Yeah, I would say we definitely do, because it's one of the cases we want to test. Yeah, and you cannot test it without the Kubernetes cluster, so these two go together. Absolutely. It makes me intensely nervous that we don't have good CI around the SR-IOV stuff. I know that the folks who are working on it are being assiduous about testing the changes they make, but I'd feel so much better knowing that good testing is running, and I know everyone wants to get there. Yeah, 100% agree. So in terms of SR-IOV, are there any issues around resetting — when we spin up, can we reuse the same cluster if another test comes in, or do we need to tear the whole thing down and bring it back up again? One of the major issues I discovered while playing with SR-IOV is that some Packet servers have SR-IOV disabled in BIOS. To overcome that, in my specific case, I had to manually get into the BIOS, change the setting, and after that it was working. I know that Ian is working on a more automated way to deal with that.
Once that's done, then basically there is no dependency: you can recycle the server, change the configuration, and go through the steps — the steps are known already. What was the technique used to update the BIOS? Not update — change. Basically, I used the serial connection to the server console, rebooted the server, got into the BIOS, changed the settings, saved, and then restarted the server. I see. So in effect, it requires a connection to the out-of-band console or a rescue console. Absolutely. Yeah. If we split the testing of stuff like SR-IOV from the full end-user integration tests — where you may want multiple systems and you're showing how pods and containers and everything come up — if you move that to a separate thing and just have SR-IOV support, I know Packet would be okay with dedicating hardware. Number one, you could keep something up; it'll also be cheaper to have it up all the time, and it's one small set of systems, or a single system, for testing those parts. Yeah. We also have a lot of interesting things floating around — for example, it turns out that on some server flavors you have to change the BIOS, and on some flavors apparently you don't, so some of it may just be picking good flavors. But I think we would generally like to keep all this stuff as reproducible as humanly possible, so that other people can come and run our tests. So let's set up a set of next tasks then. The first one: let's work out if we can run the CircleCI stuff, and see if that works. Simultaneously, let's also focus on the very beginning — we're going to need this regardless of what flavors we pick — which is: how do we connect in and how do we run arbitrary commands on packet.net? Eventually those arbitrary commands can be 'spin up a Kubernetes cluster'.
Well, so I actually strongly encourage folks to go take a look at the cross-cloud CI stuff, because it is utterly trivial to spin up a cross-cloud CI cluster on Packet. There are really good directions there. The only thing I would strongly, strongly counsel: when you give the name of the cluster you're spinning up, the instructions will suggest something like "cross-cloud CI"; just make sure we give it the name NSM, so we don't collide in DNS with the stuff the actual cross-cloud guys are doing. But this is a super straightforward, super fast way to bring up a Kubernetes cluster on Packet, and, though I've not personally tried it, it should also work at all the major cloud providers. So rather than having to reinvent that wheel, I think we're probably best served by just reusing it. Yeah, I think that's a great idea, because eventually we're probably going to be asked whether this thing works well on Google or Azure and so on, and better than saying "well, it works on Packet" is "we've tested it on those platforms." Well, what I'd actually like to do, once we get things working on Packet (because we do have some hardware needs), is start getting the CI in general working across all the major cloud providers. People will ask all the time, "does this work on the public cloud?", and right now the answer is "that is our intention." I would love for the answer to be "it passed CI earlier today on all the public clouds." That's just a much stronger answer.
Okay, so let's take a look at the cross-cloud stuff then. I think if we focus on those two things to start off with, we'll learn more about the problem, and then we can circle around and work out a more detailed set of next steps on how we inject Network Service Mesh into the cluster, make sure the DaemonSets are all set up properly, and so on. So I think starting off with spinning up the cluster using cross-cloud, and specifically using CircleCI, would be a good approach. Another option might be to add our stuff into CircleCI behind some flag that disables it and automatically passes everything except the patch we want; then, once we're done with the patch and it's where we want it to be, we remove the portion that causes it to automatically pass. So that might be another option as well. I mean, there are lots of ways to skin the cat. Yeah, there's just a little bit of slogging through getting past some of the Mellanox NIC issues so that we can literally just stand up containers. Once we can stand up containers for VPP and TRex, and optionally get NICs or SR-IOV NICs into those things, as part of a cross-cloud CI setup on Packet, at that point the world gets so flipping easy, because so much stuff just becomes easy to do. The trick is getting over the initial usability problems that we're having. We've all been doing a fantastic job of documenting that, so definitely keep it up, because six months from now, a year from now, if we need to go through some of this again, we want to have as much of that information as possible written down.
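The "flag that auto-passes" idea above can be sketched as a simple environment-variable gate around the hardware-dependent tests. This is a hypothetical illustration, not the project's actual CI config; the variable name `NSM_RUN_HW_TESTS` is invented for the example.

```python
import os

def gated(test_func):
    """Decorator: auto-pass (skip) a test unless NSM_RUN_HW_TESTS=1.
    Lets unstable hardware tests live in every-commit CI without blocking
    merges; remove the gate once the tests are trusted."""
    def wrapper(*args, **kwargs):
        if os.environ.get("NSM_RUN_HW_TESTS") != "1":
            return "skipped"  # auto-pass placeholder
        return test_func(*args, **kwargs)
    return wrapper

@gated
def test_sriov_cross_connect():
    # A real SR-IOV cross-connect test would run here.
    return "ran"
```

The branch under active development sets the flag in its CI job; everyone else's builds pass automatically until the gate is removed.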
Thanks to our friends like Michael and Taylor and Lucina and Watson on the VFC comparisons. You guys have actually wandered ahead in some of these directions already and figured out a bunch of stuff, and that's really helpful. Okay, so is there anything else around this topic that anyone wants to add? Hello, can you hear me? Hey, you joined us! Yeah, I kind of came in; I went through audio hell in the beginning here, and I won't give you all the details. My laptop wasn't seeing the audio devices, so I went to call in. I'm in Ireland still, so I had to find the access number. I found the access number for Zoom, but the conference ID wouldn't get me in; the conference ID is apparently wrong in my meeting invite, and that's something we have to check for getting into this Zoom channel. Anyway, I was at DPDK Userspace here in Dublin yesterday and the day before, and I was talking to some people, largely Intel people I think, who are working on configuration APIs for configuring cores for containers. There's a lot of growing interest in containers in that space. I started talking a little bit about NSM; people weren't aware of it. There's interest in talking about how their APIs could be incorporated into, or extend, our endpoint API for configuring layer two and stuff that needs bare-metal resources like cores and core tuning, pinning, and so forth. But there are also some usability things from DPDK that would be ultra helpful. I know there were some discussions about hot plug for DPDK, but it is an unbelievable pain that, apparently, if I start a system up and decide two hours later that I want to insert an SR-IOV NIC or some other DPDK NIC, I've got to go bounce the whole system to make it work.
That turns out to be a real bummer in a more dynamic environment. Yes, there was a paper presented on the hot plug API too, so there are several of these things coming together. What I'm thinking is, I've got to go back to my notes and figure out which people I talked to, and see if we could have some kind of joint invite or joint event, or maybe have them come do a presentation about what they're doing in one of our meetings. I think that would be a really good idea. I know I've been following a bunch of the discussions about dealing with NUMA zones, which we're eventually going to care about. Yes, exactly. I've seen sort of two proposals there. One is the NUMA manager proposal, and the other is the CPU group proposal. The CPU group proposal is kind of rough, because it asks Kubernetes to change how it thinks about everything. The NUMA manager is good as far as it goes, but I'm a little bit concerned about more complicated use cases. Let me give you an example of what I mean. Imagine I have a server, a node, with two NICs, a 10-gig NIC and a 40-gig NIC, and they happen to have their PCI lanes coming into two different sockets: the 10-gig goes to socket zero, the 40-gig goes to socket one. And I want to deploy a container that's going to grab both of those NICs, through Network Service Mesh or whatever mechanism. I would like to be able to pin some number of cores on socket zero to serve the 10-gig NIC, and some number of cores on socket one to serve the 40-gig NIC, but it's not clear how to do that in the NUMA manager proposal, because the NUMA manager would simply say, okay, for the 10-gig NIC, this is my suggestion as to the cores you give this thing, and you'd get conflicting advice for the 40-gig NIC, with no really clear way for that to get resolved.
So I've also been ruminating that something similar to the pattern we have for network service manager might help there. Yes, this comes down to a cpuset problem, if I understand correctly, where you need to create a cpuset for the pod covering the cores it's supposed to pin to. And quite honestly, there are a lot of these situations that you have to deal with in ways that are similar to NSM; not NSM itself, but the same pattern. Yes, exactly. And I think there are some related issues with tuning cores, being able to step up the frequency, and doing power management on the cores, which can have interesting effects they're working on too. So, in any case, there's an opportunity for us to have some collaboration on this stuff as well, because there are some fairly significant frustrations on the part of these individuals with regard to Kubernetes: they have been focused to this point on somehow trying to plug this stuff into standard Kubernetes networking. Part of the frustration may be that the strategy taken with OpenStack was to take all the nitty-gritty details and make them part of the global API, and there's absolutely no way that's happening with Kubernetes. Right. And I think this is what the CPU group guys have run into: "no, I'm sorry, we're not going to change everything; this works just for your very niche use case." I've had a couple of conversations exactly like that in the past week. It's "it doesn't support this," then "oh, we'll go talk to the Kubernetes people, they'll see the value of it and let it in," and no, this is more complex by far than stuff they've already rejected. And I've been to a bunch of the working group meetings to follow these proposals.
And the really interesting thing is that what I hear most often is an acknowledgement of the importance of the use case, and a very polite pushing away of the solution that's been proposed. So, yeah, these are ever more reasons why what we're doing is really necessary to make reasonable NFV-type workloads work in a Kubernetes environment. And it's all the more reason why we have to have these conversations, I think, particularly as we talk about how we're going to provision these core-dependent types of things, like the use case you're describing. Quick question before we close up, since we're at the top of the hour: you've been working on the VPP stuff as well for the cross connect. Are you blocked on anything? Do you need help with anything? Well, I will contact you either outside, directly, or we'll connect on the channel. I'm not really quite blocked yet, because I don't even know exactly where I'm blocked. But I will be working on this quite a bit next week; this week I had limited time because of being here at DPDK, with traveling on Tuesday and so on. Okay, I need to close up this particular meeting, but we can discuss afterwards; the chat room is probably best, because I know a couple of people who are interested in that have to drop off. Thank you everyone for attending, and we will see you all next week. Yes. Okay, thanks. Right. Goodbye. Thank you. Bye guys. Thank you. Bye-bye.