Welcome, folks. Hi, Jeffrey. Hey, good morning, everyone. Good morning, Kayleigh. We'll give it a few minutes for people to trickle in and probably get started at five after. While we're waiting, if anybody would like to add anything to the agenda, please go ahead and do so.

Okay, if everybody's cool, let's go ahead and get started. Just a reminder, these meetings are recorded, so if you ever want to go back and re-watch something, you can. And additionally, try to keep our conversation civil and clean.

So, KubeCon just passed. I apologize, Taylor and I took last Monday off, so we missed some stuff. I know there were some security discussions. For anybody who was here last week, did you do a recap? Any solid sessions the group should go back and watch, maybe nothing too specific, but things that might relate to our field around the security space, or the networking and plumbing space, or anything like that?

We can put together a list if that helps. I don't think anyone's gathered them yet, because the YouTube videos, the recordings, haven't been released.

Sure. This was really just a chance for someone to speak up and say, oh, this was really cool, I think the group should look at this. And Frederick, to your suggestion, I think it'd be a great idea that once all the videos are uploaded and available, we put together a little miniature library of things we should look into. It looks like Taylor's got a good list here, so we'll go back and look at those, and maybe we'd like to discuss some of the things that came out in them.

I know there was the maintainer track, and we had a couple of people there. One thing we might want to consider, too: the maintainer track was, of course, on the last day around dinner time, so we did get a lot of people from the Asia-Pacific region showing up to the call, or the presentation. So obviously we've had a very Western Europe and North America friendly time, and we might want to consider in the future how we get some of our friends in Australia, New Zealand, China, Korea, India, et cetera, more involved.

So Taylor's put that list up. We can add links later once they become available, and if anybody wants to go back to the notes and add some sessions that they think were awesome, please do.

I don't know if anybody's got topics they want to dive into. One thing I kind of want to look at: Frederick, is there anything from last week that we need to continue to double-tap on? And I see that Ranny's on here. I was actually thinking that now that we're mostly past the administrative stuff, we've been talking around best practices, and I do think working asynchronously will help us start fleshing some of that out. But another topic that would be really good to look into is the discussion Ranny started around dealing with NAT in BGP. BGP is a fact of life, obviously, in the CNF space, and NAT is everywhere. So I thought it'd be a good discussion post-KubeCon to start digging into, alongside some of the least-privilege security best practices, et cetera. If anybody else wants to add new topics, or say let's look somewhere else, then by all means, please speak up.
Yeah, on the security part, I'm going to do a little bit of writing on the software bill of materials, some of the stuff that's going on there, and how we can start to put recommendations together for people building towards it. It's still a very early space, so the tools are very immature, but there's a lot of work that's going to be put into that particular space, because one of the requirements the US government is putting in place is that critical infrastructure also follow the guideline, or follow the requirement, I should say. The question then becomes: how long until telecom, or service providers, are considered critical infrastructure? And if so, when? They may not define it that way initially, but they may expand it over time. So in short, defining for people, or describing for people, what it is and where things are heading, even if there's nothing concrete they can do right now because of the maturity level, is still useful so they know what's coming down the pipeline. So I have some stuff I'm going to write on that, and I'll put it inside of a pull request later on.

Cool. I was looking at last week's notes, so yeah, the signing and verification of signatures: is this eventually going to lead into, I know I've talked to Taylor about this, we had some discussions about air-gapped installations, private repositories, et cetera. This is actually a topic I've been looking at internally myself recently, this notion of signed images and creating trusted repos, only allowing things to deploy into production that are signed, et cetera. I feel like one of the things about cloud native and faster, more agile software releases is that sometimes we play things a little fast and loose, and the need to go fast sometimes outweighs our need to be safe. So I'm just curious, Frederick, if that's going to be involved in this. It's something near and dear to my heart: if I'm working with vendors one, two, and three, how are we creating that secure supply chain between them in the microservices and CI space?

Yeah, it's part of it, but, excuse me, it goes a bit deeper than that. It's looking not just at what the signed image is, but at the contents; it tries to work out what's within those signed images, and does so recursively throughout the vendors. Part of the idea is that, let's say someone pulls in a library that has a vulnerability in it and compiles it in statically, so your image scanners don't pick it up. Because it's in the software bill of materials, you still have visibility into the fact that this thing was pulled into your system, and it ends up giving you the ability to rely on tooling that has information from the source, as opposed to relying on image scanners to try to pick it up. So part of it is trying to move the bar further to the left, into the build system. But the other part, which is where I think the real benefit is if we can pull this off properly as an industry, is going to be the additional tooling we get, which allows us to determine what's in our infrastructure. A large problem that I've seen is that many major enterprises have no idea where things come from or what's running. Somebody leaves the company, their servers are still there, and nobody's sure if they can shut them down or not, because they might be doing something important.
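To make the visibility point a bit more concrete, here is a minimal sketch in Python of the kind of tooling Frederick is describing: walk an SPDX-format SBOM and cross-reference every declared component, including a statically linked library that an image scanner would never see on the filesystem. The file name and the advisory map are made up for illustration; it assumes an SPDX 2.x JSON layout and is not any particular tool.

```python
import json

def list_components(sbom_path):
    """Walk an SPDX 2.x JSON SBOM and yield (name, version) for every
    package it declares, whether or not the bits are visible in the image."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    # SPDX JSON keeps components in a top-level "packages" array;
    # "versionInfo" is optional, so default it.
    for pkg in sbom.get("packages", []):
        yield pkg.get("name", "<unnamed>"), pkg.get("versionInfo", "<unknown>")

def find_vulnerable(sbom_path, advisories):
    """Cross-reference the SBOM against a simple {name: bad_version} map.
    A statically linked library still shows up here, even though an image
    scanner would never find it on the filesystem."""
    hits = []
    for name, version in list_components(sbom_path):
        if advisories.get(name) == version:
            hits.append((name, version))
    return hits

if __name__ == "__main__":
    # Hypothetical inputs, for illustration only.
    advisories = {"libexample": "1.2.3"}
    for name, version in find_vulnerable("vendor-image.spdx.json", advisories):
        print(f"vulnerable component baked into build: {name} {version}")
```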
And this will help give them some of that visibility as well, by maturing the entire tool chain. A lot of people hyper-focus on the SBOM itself, which does provide some utility, but I think the real utility is going to be the tooling that grows up around it.

Anything else we want to touch on in this space? I put this up here in case there was anything that didn't get finished last week. I wasn't here, so I'm not sure where you guys ended up.

Ranny, I don't know if you'd let me put you on the spot or not, but I was wondering if we could talk about this discussion you started, if you wouldn't mind giving us a little overview so we can talk through it. This is something I know, specifically as a service provider, is near and dear to my heart, and I feel like there are probably multiple ways to address it.

Yeah, it's been a while, as you can see: February, so about nine months ago. I'm not sure I remember what triggered it, but there was some discussion that reminded me of similar issues back in the SIP and voice-over-IP days, and I think some of those concepts might be applicable here. Basically, the idea is that your network function is running in a cluster, and the local IP address that the container sees, or its end of the IP socket, may not be the actual public IP address. There needs to be a way for the network function to know what its externally routable IP address is, so it can advertise it in something like BGP. There are several techniques out there; I mentioned a few of them. I don't know how prevalent they are these days, but STUN and TURN used to be popular ten years ago. Something like that may be usable as a best practice for a network function, like a router, to discover its externally routable IP address and use it in the application protocol, in this case, BGP.

Feel free to chime in, if anybody else wants to. I know Ian and I have talked about this one a lot. I think this one is interesting, just the concept of putting a route reflector in containers inside of Kubernetes. It's not as straightforward as you would think it would be, specifically for the issues that Ranny just mentioned. Everyone's quiet this morning; I always talk a lot.

I mean, some of the things that we can look at: are there any other ideas? There could be, and this is always the dangerous one, CNI-based approaches, right? You have certain CNIs that have BGP speakers bundled in with them, and you can directly advertise pod space. Theoretically, you can then host your services in a pod that has an IP address that is advertised and is therefore the source address, versus NATing through the host address.

I don't know if anybody else has any thoughts on this. I'm also curious, and I've done limited stuff with this, but if anybody's got experience: the notion of putting lots and lots of these endpoints out there, and BGP best practices around where you set your aggregates, et cetera, so that you don't constantly have a bunch of entries being added and withdrawn as pods spin themselves up and spin themselves down. I think that's relevant in a pure engineering context, too, as more and more people look at peering their underlay with either Calico or Cilium.
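On the aggregates point, here is a rough sketch of the idea using only the Python standard library's ipaddress module: advertise the stable covering prefix upstream and keep the per-pod /32 churn local, so pods spinning up and down never translate into BGP announcements and withdrawals. The pod addresses and the CIDR are hypothetical; a real deployment would hang this logic off whatever BGP speaker the CNI bundles.

```python
import ipaddress

def aggregate_routes(pod_ips, pod_cidr):
    """Instead of advertising (and withdrawing) a /32 per pod as they churn,
    advertise the stable covering prefix and keep the flapping local."""
    pod_cidr = ipaddress.ip_network(pod_cidr)
    host_routes = [ipaddress.ip_network(f"{ip}/32") for ip in pod_ips]
    # Sanity check: every pod route must fall inside the aggregate we advertise.
    strays = [r for r in host_routes if not r.subnet_of(pod_cidr)]
    if strays:
        raise ValueError(f"pods outside the advertised aggregate: {strays}")
    # collapse_addresses() merges adjacent/contained prefixes, which is what
    # you'd feed the BGP speaker if you did want something finer than one CIDR.
    collapsed = list(ipaddress.collapse_addresses(host_routes))
    return pod_cidr, collapsed

if __name__ == "__main__":
    # Hypothetical pod IPs; the /26 stands in for a per-node pod range.
    cidr, collapsed = aggregate_routes(
        ["10.244.1.5", "10.244.1.6", "10.244.1.7"], "10.244.1.0/26")
    print(f"advertise upstream: {cidr}")        # one stable route
    print(f"instead of churning: {collapsed}")  # per-pod host routes
```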
We could put out some best practices, or work with the network plumbing working group, because I think this would probably be within their space as well: how do you properly look at this from a large-scale network perspective with BGP, as far as how and where to advertise? I don't know if anybody's got any thoughts on that. It's something where we've done limited stuff, but the way we handle our route tables, in some cases they're quite massive, and having tons and tons of pods come up and go down is not something we want. So we basically use eBGP a lot more than iBGP for instances like this, to make sure that we're protecting the bigger route tables.

This is a bit of a connection storm issue, basically. You either do it in the Kubernetes control plane or you try to do it in the networking control plane, like you would if you were handling lots of equipment out there. I'm guessing, since these are services, they're going to be spun up together with the containers. This is the tricky stuff from hell. Finally, Cilium has support for BGP; I remember having read an article mentioning the BGP support there. So are the options then to either have a separate service for setting it up, or you try to do it in, let's call it the Kubernetes control plane, or you try to do it in the networking control plane? Are those the three options, or is there a fourth one that we just haven't considered?

The third option, and the one that I've seen used the most, is basically not advertising the pod space, but rather bringing a specific interface to the pod that is the route reflector, leveraging, like was mentioned, a CNI-based approach. So you could have multiple interfaces and attach a specific interface to a specific pod. You know, the service with the specific interface does that, and then it sets up the rest of the network from there. Exactly. So you wouldn't necessarily touch or do anything regarding the pod service itself or the Kubernetes layer; you would bypass it completely by adding an external interface to it. Yeah.

One thing I just wanted to mention on that topic, since we run into some of these issues when we're talking about Kubernetes interfaces: everyone now seems to be talking SmartNICs of one kind or another. So even if you're using a CNI and Cilium and whatever, the offload of larger pieces to these SmartNICs will probably be the next step for a lot of the telecommunications space, mainly for cost reasons.

I think I misunderstood something. Is the goal to have the CNI handling BGP, or is the goal to have, within the cluster, an application or a pod that is, say, a route reflector, where the application or the pod itself is handling the BGP traffic? Exactly. Is that where we're getting to the point that we're doing packet processing on a CPU again?

So we're trying to catch up with all these notes. For one, I think we're just spitballing here and coming up with discussions, right? The SmartNIC thing is a completely tangential thing; I would say set that aside. I was just mentioning CNIs because this specific topic right here is around NAT, right?
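Since the thread keeps circling back to NAT, here is roughly what the STUN technique Ranny mentioned looks like in practice: a minimal, hedged sketch of an RFC 5389 Binding request, where the XOR-MAPPED-ADDRESS in the reply tells the workload the address the outside world actually sees, which it could then advertise in BGP. The server shown is just a commonly used public one, and real code would want retries, error handling, and probably a maintained STUN library.

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442

def discover_external_ipv4(stun_host="stun.l.google.com", stun_port=19302):
    """Send a bare RFC 5389 STUN Binding request and parse the
    XOR-MAPPED-ADDRESS attribute to learn the address the outside
    world sees, which behind NAT differs from the pod's local address."""
    txn_id = os.urandom(12)
    # 20-byte header: type=Binding request (0x0001), length=0, cookie, txn id.
    request = struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, txn_id)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(3)
        sock.sendto(request, (stun_host, stun_port))
        data, _ = sock.recvfrom(2048)
    # Walk the attributes that follow the 20-byte response header.
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            family, xport = struct.unpack_from("!xBH", data, pos + 4)
            if family == 0x01:  # IPv4
                port = xport ^ (MAGIC_COOKIE >> 16)
                xaddr = struct.unpack_from("!I", data, pos + 8)[0]
                ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
                return ip, port
        # Attributes are padded to 4-byte boundaries.
        pos += 4 + (attr_len + 3) // 4 * 4
    raise RuntimeError("no XOR-MAPPED-ADDRESS in STUN response")

if __name__ == "__main__":
    ip, port = discover_external_ipv4()
    print(f"externally routable address to advertise in BGP: {ip}:{port}")
```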
And so, really, just on the BGP side, we should probably start a discussion around BGP best practices, which I think would fall under a top-level domain like security or least privilege, with lots and lots of subdomains that we could tease out. A CNI is just one way to do it, right? You can basically get away from NATing by directly advertising pod space, but there are definitely some pitfalls to that, especially around some of the things that were just mentioned: who is managing the control plane at that point? Are we having the CNI talk to something? Are we going to have some kind of plugin? I mean, the SDN battle has been waging for quite a long time between clouds and third-party controllers.

Once we start mapping this into our underlay, well, like I said, we've done limited stuff. We'll peer specific clusters with a limited set of pod space so we can advertise cluster IPs, et cetera. But doing this at scale, really populating this into your global routing tables, to where I want service X to just be findable within my network, there are much, much bigger implications. Pardon me, I can't talk this morning; haven't had enough coffee. The whole concept of BGP, and why it's popular, is that it's stable and has this notion of convergence, right? So what does that look like if something is constantly churning? I mean, it's doable; we see this with VXLAN and EVPN implementations and routing by MAC address, et cetera. But I feel like there are definitely best practices that are super relevant to the CNF space, especially when we've got vendors coming to us with, quote unquote, cloud native BNGs, broadband network gateways, and obviously the packet core is the 800-pound gorilla that everyone wants to solve, especially the user-plane side of it.

So I would say SmartNICs would be one path, where we let the SmartNIC handle this and pass it through into a pod. Then there's the concept of putting something that needs to understand and speak BGP within the pod, and then how do I handle the NAT issue, which is what Ranny has up here. I think there's a lot of stuff that we could tease out in this space.

I would like to capture what we'd like to have in the future, even if it doesn't look possible now, versus what we can do today. In the original discussion post, Ranny had listed the application querying the orchestrator about the assigned IPs. That one seems more like dynamic services, and treating this whole thing that way. And there are probably other areas that could expand on, like being able to use the SmartNICs, and anything else that comes up. So in an ideal situation, what would that look like? Saying: here is how we would love to be able to describe our services, and here's how the BGP would work.
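To make the "query the orchestrator" option from Ranny's original post concrete, here is a minimal sketch with the official Kubernetes Python client: instead of asking an outside STUN server, the CNF asks the API server which externally routable address its Service was assigned, and advertises that. The Service name "bgp-speaker" is hypothetical, and it assumes an address has already been assigned and that the pod's service account has RBAC permission to read Services.

```python
from kubernetes import client, config

def external_ip_for_service(name, namespace="default"):
    """Ask the Kubernetes API (rather than a STUN server) which externally
    routable address was assigned to our Service, so the CNF can advertise
    that address in BGP instead of whatever it sees on its own socket."""
    config.load_incluster_config()  # assumes we're running inside the cluster
    v1 = client.CoreV1Api()
    svc = v1.read_namespaced_service(name, namespace)
    # LoadBalancer-type Services report assigned addresses in status.
    ingress = svc.status.load_balancer.ingress or []
    ips = [entry.ip for entry in ingress if entry.ip]
    # external_i_ps is the Python client's rendering of spec.externalIPs.
    ips += svc.spec.external_i_ps or []
    if not ips:
        raise RuntimeError(f"no external address assigned to {namespace}/{name} yet")
    return ips[0]

if __name__ == "__main__":
    print("advertise in BGP:", external_ip_for_service("bgp-speaker"))
```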
And some of it may not be there, but if we could say eight out of ten or six out of ten pieces are ready, but these four critical ones aren't, then we can go and share that directly with the Kubernetes network plumbing working group and SIG Network and everyone else, and start pushing that forward while we also work on what-can-we-do-in-the-meantime solutions.

It seems like, if you're not going to directly attach interfaces to a pod that's handling the BGP, and maybe you're even using some way of directly attaching that pod to a SmartNIC where you're offloading stuff, then you're going to end up with some type of BGP that's broken down further into components, where you need some service level. I don't know if that means an operator, or something else more generic that expands on the current capabilities for having IPs assigned to services, which you can do, but it doesn't seem like that handles everything needed right now. So potentially we're getting into a service chain, where the service connects to an interface that then does the BGP, and then whatever that runs on, whether it's a CNI or the SmartNIC or whatever.

Well, I want us to keep the, sorry, I was talking on mute, trying to answer you, Taylor. There are definitely some lines of responsibility when we talk about attaching an interface, right? If we pass a SmartNIC into a pod, then we're giving it the physical interface versus a TAP interface, et cetera. But I think what we're really discussing here, and please correct me if I'm not thinking about this right, is really the provisioning of said interfaces, the provisioning of said services. What are the lines of demarcation between, potentially, K8s and a generic CNI type of exposure, something where we're actually passing through some type of physical interface to a pod that's offloading some of this capability, so we're not actually doing that packet processing in the pod itself? And then what does the relationship between the Kubernetes control plane and the rest of the network control plane look like? But I think we've got to be careful saying that if we use a SmartNIC we're not attaching an interface, because how we attach that SmartNIC to the pod is either going to be some type of direct pass-through, or it's going to be through a CNI, right? Like using, I don't know, Multus and the SR-IOV CNI, et cetera.

So it sounds like we're really talking about this. Say that again? Yeah, it needs a similar setup. Even if you're using different components to reach the same end result, what you want is to make it easy to set up: yeah, this is how you can connect it. Right, because at the end of the day, we have an interface to a pod that exposes it to the rest of the network. That doesn't go away; there are just a lot of ugly ways to do that right now, and we're exploring the best way. And BGP, because it's very finicky, happens to have some strong opinions on how you should do that.

So then that's when we have to start figuring out: do we just put SmartNICs everywhere and use node labels, and say, this is a NIC that's going to make this a lot easier, it's going to handle all of the BGP negotiation on this discrete chip on the NIC, and then it's going to have an interface that passes through directly to the pod, and all the pod does is offload traffic to it?
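For the dedicated-interface-into-the-route-reflector-pod option, the usual pattern with Multus-style secondary attachments is to read the network-status annotation written onto the pod (for example, mounted in via the Downward API) to learn which IP the BGP speaker should bind to and advertise from. A rough sketch follows; the attachment name "bgp-net" and the sample annotation value are made up for illustration, and older Multus versions used the plural "networks-status" key.

```python
import json

NETWORK_STATUS = "k8s.v1.cni.cncf.io/network-status"

def secondary_interface_ips(annotations, attachment_name):
    """Given a pod's annotations, find the IPs assigned to a named secondary
    attachment, i.e. the dedicated interface the route reflector would
    speak BGP over, skipping the cluster-default network."""
    status = json.loads(annotations.get(NETWORK_STATUS, "[]"))
    for net in status:
        # Each entry describes one attachment: name, interface, ips, default.
        if net.get("name", "").endswith(attachment_name) and not net.get("default"):
            return net.get("ips", [])
    return []

if __name__ == "__main__":
    # A hypothetical annotation value, shaped like what Multus writes.
    annotations = {
        NETWORK_STATUS: json.dumps([
            {"name": "cbr0", "interface": "eth0",
             "ips": ["10.244.1.5"], "default": True},
            {"name": "default/bgp-net", "interface": "net1",
             "ips": ["192.0.2.10"], "default": False},
        ])
    }
    print("speak BGP from:", secondary_interface_ips(annotations, "bgp-net"))
```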
Or, I mean, I feel like there's not going to be one winner-takes-all here, right? Because there are going to be CNF developers who develop software that does packet processing in pods, and that is ultimately going to need to be able to talk to the rest of the network. I think that's probably one of the things that's been hotly debated since Multus and everybody first showed up on the scene, right? And that stuff was really good for quite a lot of traffic. We've been playing around with it, and it seems to be working, but the key is how much traffic you're actually pushing through it.

I think one convenient thing about this use case, versus some of the other ones, is that since this is a control plane exercise, we don't have to get super far into the weeds around data plane acceleration, et cetera, right? This is specifically about understanding how to advertise routes, withdraw routes, and do this within a potentially NATed and containerized world. So I would like to call that out: while I know at some point this BGP speaker would probably be attached to some type of data plane as well, we don't necessarily need to solve SR-IOV and fast packet performance in a pod for just pure BGP.

If no one else has any topics they specifically want to discuss right now, I'm going to start putting in some PRs. I've been kind of slammed with work and getting past the conferences, but I have some things to discuss, and I know Frederick said he's going to do a write-up. Is there any other topic anybody would like to add to today's discussion? Okay, then we'll give everybody 25 minutes back. Oops, there's stuff in chat; it's just the meeting notes. Excellent. Okay, I'll chat with everybody next Monday then. Thanks, everyone. Thanks, Chef. Thanks, Chef. See you. Bye. Thanks, all. Good day.