Hi, we'll get started in about two minutes. As you come in, please add your name and any agenda items to the meeting notes. All right. Are there any walk-in items that people haven't added to the meeting notes?

Yeah, I added two of them, just as follow-ups to the discussion we had last week around external network orchestration, and about the glossary terms to be added. I think you created a discussion for that, so I added a comment with a couple of terms and their definitions.

Great, thank you. Does anyone have anything else to add? The meeting notes are in the Zoom chat as well as the Slack chat. I'll just add a review of PRs that were merged during the week, and any open PRs, in addition to these.

All right, that's at the end though. If you'd like to go over yours on the glossary, the lexicon is maybe the place to start. I can share my screen, or you could share yours if you'd like.

Yeah, go ahead with the share. So, the first item is the glossary, or lexicon. I think you created this discussion for the terms that need to be added to the glossary, so we can agree on them here and then create a PR, as it says. I added a couple of them: network attachment, which relates to primary as well as secondary networks, and then different types of networks, like overlay networks, external networks, and tenant networks. The intention is to bring everyone onto the same page about what we mean when we use these terms in the external network orchestration discussion, and to ease such discussions: are we actually talking about the same thing? So maybe go through the definitions, add comments, and we can review it offline in the comments section.

Thank you for this. I think we need to spend some time reading it carefully.
At least I do. Does anyone have any comments or questions right now?

Yeah, I'll chime in. Something like "pod": I think we should avoid overloaded terms. I actually use "pod" internally in roughly the way it's laid out here, but obviously "Pod" means something very specific in the Kubernetes world as well, which is important, right?

That is true. I spent some time on it. Like you said, we use it heavily when talking about optimized data centers, which involve compute, switches, storage, and the physical infrastructure belonging to them. But you're totally right, it's an overloaded term, at least in the Kubernetes ecosystem, so I'll try to replace it with something else.

I mean, it's fine; we all need to help work through this. Like I said, I've used "pods" myself, and the same thing goes for the term "availability zone": that's a common term in the networking world, it's also a product feature in AWS, and it's a metadata construct in OpenStack. So the big thing is, even if we do keep a term like "pod", and I'm not telling you to get rid of it, we have to be careful and explicit in the glossary, because of these duplications.

I tried to make it as explicit as possible, but yeah, maybe we can go through it.

But do we need "Pod" with a capital P in this glossary at all? What is our dependency on that construct? We might not need it.

That's right.
Yeah, so we can remove it. If there's no need, we shouldn't introduce it. I think we're really talking about the data center, so we can replace it with "data center".

Another quick comment: the network attachment entry, talking about primary and secondary, is very Multus-specific, and maybe it's going into too many details, specifically its relationship to pods. I think we should talk about it in a more abstract way, perhaps.

I'm inclined to agree with Tal. I would say if we're going to bring in anything that's product- or solution-specific, maybe we just add a prefix to it. If we're talking about network attachment and we call it just that, it should be abstract, to Tal's point. If we're going to talk about something that's specific to Multus, maybe you just call it "Multus secondary network attachment" or something. Really, we don't want anything in the glossary that already starts to prescribe things, or builds from a specific solution's standpoint. We're trying to reach a common understanding of concepts, so that we can evaluate different solutions fairly and equally across the board.

So my suggestion, and I know it's not a great one, is that I've been using the term "networking" rather than "network", right, network attachment. What we're really missing here is even a definition of what a network is. But part of the problem is that the Kubernetes Network Plumbing Working Group already took over the terms "network attachment" and "network attachment definition", so those are already defined, and they're referenced here. And it's true.
They are kind of defined in relation to Multus specifically. So those terms do exist. But in some of the discussions we've been having, Ian and I were thinking of a higher-level kind of abstraction, and I was using the word "networking" rather than "network". It's not great; I don't love the term, but it's one I've been using to try to differentiate from the lower-level plumbing that Multus references specifically.

I understand what you're after, but I would still argue that the term "network attachment", as coined by the Kubernetes Network Plumbing Working Group, is more generic than Multus. Multus is just a reference implementation of that concept. So I think it deserves a place; we can qualify it here to mean exactly that. And then we could have a more generic abstraction of a secondary network attachment that encompasses the Multus secondary network attachment, NSM secondary network attachments, and any other type of secondary network attachment.

Yeah, that's a good idea. To get a little technical here for people who aren't totally versed in this: there is something called a NetworkAttachmentDefinition, which is standardized as a CRD within the standard Kubernetes, sorry, "namespace" is the wrong word, the standard naming convention. Multus specifically adds an annotation to connect a pod to that NetworkAttachmentDefinition. And I won't comment on how awkward those annotations are; I'm not a big fan of them. But you're right: the Multus way of using them is specific to Multus, whereas a NetworkAttachmentDefinition by itself could live on its own. The strange thing is that if it lives by itself, there's no definition of how it will be used.
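[Editor's note: a minimal sketch of the CRD and annotation described here. The network name, interface, and CNI configuration are illustrative, not from the discussion.]

```yaml
# A NetworkAttachmentDefinition: the CRD simply wraps a CNI
# configuration expressed as JSON.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net            # illustrative name
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }
---
# The Multus-specific part: an annotation on the Pod that references
# the NetworkAttachmentDefinition above to request a secondary attachment.
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

Without the annotation (or some other consumer), the NetworkAttachmentDefinition is inert data, which is the point being made here: the CRD alone carries no semantics about how it is used.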
Yeah, it doesn't have a lot of specific meaning. I'll also point out that it's a very minor definition; the CRD is extremely simple. It simply encapsulates a CNI configuration in JSON, so there's not a lot there. It's very, very generic.

On this note, though, when we talk about being generic versus solution-specific, the way that "primary network" is laid out, I think we should be careful about being generic in some areas and specific in others. Because if it's all just Kubernetes-centric networking, then it should be called "the Kubernetes primary network" or something. This is the awkward place we arrive at when we come to CNFs. If you talk to a network operator about primary networks, they're not thinking about it from a Kubernetes perspective, at least in most cases. So I'm hesitant to use terms like "primary network" that then carry very specific Kubernetes connotations, just because we're trying to bridge two worlds here. And if you talk to a Kubernetes person and say "primary network", they're probably going to bring their own biases. So I think we should be explicit with our terminology when it's important. Or, if we do use something vague like "network attachment", then, like Tal is saying, it should be an abstraction: it should cover all potential implementations, or at least accommodate the different ways you might attach a network that aren't 100% Kubernetes-specific.

Right.
I think that's a very good point. I'll add that we usually talk about planes in the networking world. So we might call this the Kubernetes control plane, but then sometimes the data plane piggybacks on the control plane, you know, on the primary network. Other terminology we use includes planes, and we also have fabrics. I think this is a very good start to help us think, but there's a lot more we need to add to the glossary and think about. Thank you for this; it's a good opening shot.

Perhaps a better term might be "default Kubernetes network". It makes it very explicit.

Yeah, during the discussion I was thinking the same; that could be a way to say it. And "default" also implies that there may be other networks attached as well. As opposed to "primary" or "secondary": "secondary" even gives the connotation that there are only two networks, when there may be more.

I have a preference for calling it a plane, because "network" is so overloaded, and it's more than just a network. Technically, yes, it's a Layer 3 IP network, and dual-stack is now supported in the latest version of Kubernetes, so we can talk about it that way. But it's often implemented using some sort of fabric, some sort of SDN controller. So I'm more inclined to call it a plane, and then that plane itself is implemented through various networking solutions, right?

I'm with you, because if you scroll down we have entries like "control network". In my mind, that's a control plane, right?
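[Editor's note: the dual-stack support mentioned here is configured per Service in recent Kubernetes releases. A minimal sketch; the Service name and selector are illustrative, and the cluster itself must have dual-stack networking enabled.]

```yaml
# Requesting dual-stack (IPv4 + IPv6) addresses for a Service.
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack   # fall back to single-stack if the cluster can't provide both
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: example
  ports:
  - port: 80
```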
Yeah. And we have data planes, and data planes can themselves be subdivided.

The thing that's going to be tough is figuring out how "networking", I'm going to make that a word, fits together. From dealing with application developers in the past, they call us the sneaky network people; they tend to get a little queasy when we start talking about overlays and so on. But when we talk about a default Kubernetes network, or a control-plane Kubernetes network, there's still an overlay involved, right? You're riding on top of the underlay; you're doing IP-in-IP or some other encapsulation method that the CNI is brokering for you. So it needs to be explicit enough that real networking people, the ones who are going to have to plumb these CNFs into their networks (there's that overloaded term again), can make sense of it, while at the same time being accessible to the more cloud-native side of the house, where everything has just been abstracted into a YAML file up to this point.

Right. We care very much about the implementation details. That's the difference, I think, between us and some other parts of the Kubernetes world.

I'll just add that Cilium might not be using overlay networks; there are other solutions for implementing that Kubernetes control plane.

Sure, even any of the ones that have direct BGP attachment, right? There are ways to peer directly with the underlay. But that's the thing: that's why it's important to acknowledge planes, like you said, and overlays and so on, because that drastically changes things even at the CNI layer, where we do have some of these constructs. Yes, that's a good one.
I've been through this before, because we did the same thing of bridging these two worlds in the NSM space. Nobody agreed on what a network was, what an attachment was, what an interface was; even the term "interface" was this super complicated thing for all of us to agree on. But I do like the idea of us collectively centering on the concepts of networking, networks, planes, and overlays, because it helps clarify the implementation details you were describing.

And the last thing I'll say on this is that it matters, right? Because if we go with something like Cilium, then the NAT assumptions we typically make when dealing with Kubernetes, once we start talking about that primary network, some of those assumptions may be false in certain contexts. So we don't want our terms to lead us astray.

Yeah. Another really good example, and this was the early writing on the wall that it was much more complex, was Calico.
In fact, when the Network Plumbing Working Group was being created, we all met in person in Austin, and at the time it was called the multi-interface group. One of the things we agreed on was to get rid of that specific name, because with Calico, if you want to add or change something, you aren't going to add a secondary network or a secondary interface. Instead, you render your intent into Calico, and Calico makes the right changes, or whatever else you want that's within its capabilities. So we do want to be careful: it may not be a secondary network, it may just be a configuration flipped in a control plane that produces the functionality you want within a single interface, a single network. But from a production or operational perspective, that still ends up with the separation you want.

I'll point out another thing. One of the deliverables I hope will come from this group is suggestions and recommendations for the Network Plumbing Working Group. As I said, the current NetworkAttachmentDefinition accepted by the group is extremely generic and simple, and it's obvious why: there were a lot of problems in reaching an agreement and alignment that would please everybody. But we're a group that is versed in these things, so, and I don't know how much the NetworkAttachmentDefinition is set in stone already. Also, of course, as you know, CRDs can be versioned. So the version of the CRD that exists now, maybe it's v1, or maybe it's v1alpha1, may be set in stone, but we could potentially think about a version two of the NetworkAttachmentDefinition that would eventually encapsulate a lot of the new thinking we might introduce here.
So anyway, my hope is to eventually get to that point; it might take a while.

Yeah, and we should not try to de-conflict the entire world here. We should just de-conflict locally, and explicitly say what we mean ourselves. Because even with something like "data plane", we have conversations where it's: oh, well, this is a data plane. No, no, that's actually a control plane; the real data plane is here. And then you look at the hardware: no, that's the control plane on the hardware; the real data plane is here. It's turtles all the way down. So we should draw a line somewhere and say: here's explicitly what we mean. We're okay that it doesn't cover 100% of the edge cases. It should be clear what we mean, and if it's not clear, let's make sure we get that clarity, but without having to de-conflict across the whole industry.

Right. I'll just point out that there can be many control planes and many data planes. It's a control plane, not necessarily the control plane. Exactly. It's my data plane, or someone else's control plane.

So, a definition that I didn't find in this list is "underlay", and there's no definition for it. Do you think it's important to include in the list?
I think long-term it is, personally, because when we start talking about overlays, you need context for what you're riding on top of. At some point you need to understand that if you're pulling, say, an SR-IOV VF into a pod, then you're starting to get down into the weeds. Who knows, maybe the best practices eventually say SR-IOV is a bad idea. But when you get into those low-level things, and you start doing direct peering into the underlay, or even something like Calico, where you peer with the underlay instead of building an overlay on top, you need to have the concepts of an underlay and an overlay in place. And it's exactly like the planes: it's not the overlay, it's an overlay, right? So I would say it's important.

Yeah, I tend to think of it this way: if it's something you have to build before you can establish connectivity, then it may be, not guaranteed, but it may be an underlay. For example, if I have two Kubernetes clusters and I want to hook two Istio-based overlays, or two Istio-based systems, to each other, I can't just say, hey, here's a connection. I have to go build something else before I can start establishing those Istio connections. In that scenario, the thing I have to build underneath is a candidate for being called an underlay. That tends to be how I position or think of it, but I know there are rough edges to that definition as well.

Well, another term that might need some definition is "mesh", right? I think we keep inventing new terms because "network" is already taken, so there's fabric, plane, mesh. I was always curious why Network Service Mesh took that term. I don't know if "mesh" even has a common definition.

Well, it is a mesh of network services, and that's why we chose the term. It does fit, in that we negotiate connections between each other.
We establish those connections. The other phrase would be to call it a bag, or not even a bag; it's also a graph. But yeah, it is a hard problem, picking names that don't collide.

All of these things are graphs. All of them.

Exactly. Tal, one of the very important things you pointed out was that, ideally, we can get recommendations accepted, or at least seriously considered, upstream in Kubernetes. The important thing there is making sure that, whatever terms we use, we can communicate clearly how they relate to existing terms. So, as in the conversation earlier around "pod" and other terms: if we feel we need to use a term, and we've shown where there's a conflict in meaning, then we need to be very clear, wherever we use it, about what we mean. If we can do that, then when we present use cases, they'll be a lot easier to consume, because what we're asking is for people to take their time to read through and understand what we want and what we need, and then try to find solutions. If we're going to ask that, we want to make the barrier to entry as low as possible. And I would suggest that, whenever possible, we try to use the existing terms.

Yeah, and by the way, one of the things we can contribute upstream doesn't have to be something technical in the form of a new definition. It could be updating the documentation. Right now, the documentation for Kubernetes networking is problematic, I think, for some of us. Some of the language there won't fit some of the concepts we have here.
It's not generic enough. So that could be something we do upstream: help Kubernetes find better language. I mean, it's no accident that it took so long for Kubernetes to finally get dual-stack IP support; some of the initial thinking just wasn't looking far enough ahead. So one thing we can help with is to better conceptualize how networking is described in Kubernetes upstream. But we'll see; maybe that's putting the cart before the horse. I think we have a lot of work to get there.

Yeah, I would even go so far as to suggest that early Kubernetes wasn't even concerned with things like IP or similar. It was primarily concerned with connectivity: I have a name, it resolves to something, can I connect to it? There were basically three properties, and if you met them, it was happy: can nodes talk to nodes, can nodes talk to pods, and can pods talk to pods? How that happened, it didn't care, whether it was one IP, multiple IPs, or something else. It was trying to detach from, and be as agnostic of, the network as possible, and it turns out there was more complexity there than expected, because of that.

Well, there was a basic assumption that it would be TCP/IP version 4 with a specific subnet, so it was making certain assumptions, and that's part of the problem, right?

Yeah, but IP is probably the one assumption it made along that path.

Right. And I should say, not TCP specifically: IP is the assumption.

All right, is there anything else before we move on?

Oh, I was just going to say, can you go back to the discussion, Taylor? So, over here in the Git discussion, if you scroll up: I don't know if you remember the very first or second call, when I said we needed to define "CNF".
I said we needed to define cnf People came at me with pitchforks and then Sure enough, none of us agreed what it was The first pr was put in so um, I've made another attempt to pull this one from the tug I think that this is a definition we should probably get done sooner than later because it's kind of specific to Our domain here and kind of what we're trying to contribute versus modify like you know from an originality standpoint And then if there's any agreement on this I'll put a pr in for it And I'd also like us to start the argument on what does cube native mean I mean, and for the record for cube native I'm okay with just saying that it's designed Intrinsically to run in kubernetes, but I'm sure people will want more than that I I saw sorry. I'm late by the way I got another meeting and I only just got out of it But I saw gergay made a perfectly reasonable point that um kubernetes varies from version to version But you know applications still run on kubernetes regardless of the version So I think there's something we can do with kubernetes here Yeah, so I mean first he's got the cloud native like Victor kind of talked about maybe just Rephrasing it a little bit. I mean, I'm fine with whatever But um, I would like us to just have a starting point for when we say we're working on cnf stuff. What does a cnf mean? I find it horrifying we have to define it, but I think it's the other it's the emperor with no clothes in many regards We can't work without having a definition. 
We can't just assume everybody has the same definition in their heads.

Well, not only that, but we should assume this definition is really just a placeholder, because we just spent 30 minutes discussing the fact that basically everything we're using to build the definition of a CNF is itself poorly defined. So you get into this weird chicken-and-egg scenario. But as people come and start checking us out, they can use what we have here: if people just want to go back to their boss and say "this is what a CNF is, based on what I heard from the CNF working group", or the TUG, they at least have something.

Yeah. I'll add that some definitions can be very specific and some can be very generic, so we could potentially word something as a big-tent definition that allows for a lot of specificities.

We're going to have to accept that someone will disagree with our definition, because as things stand it's a bit vague as to what precisely is and is not a CNF. So I think even with a fairly light-handed definition we'll catch somebody out. But other than that, we don't have to go too far in depth. It doesn't need to be "it runs with a certain kind of networking" or "it requires CPU pinning", that kind of thing.
In fact, that isn't part of the CNF definition, I think we all accept, but somebody will say it is. So we're going to have to find some middle ground there.

Jeffrey, there's already a dedicated discussion for this one, and I would suggest we keep it over there, because the CNF comments are pretty short here, but in the original discussion that you started there's a lot more back and forth.

I think I'll just put the PR in later today, and I'll probably incorporate some of Victor's suggestions. And, to Tal's point earlier, this is the one I pulled from the TUG white paper; the first one, the one that caused all the conflict, was the one I pulled from the CNF principles. At some point, too, the thing is that theoretically all of this is agile, all of this is open, right? So if we do something here, and we've modified something we've borrowed from another place, we can always attempt to put PRs into those adjoining repos, to try to make sure there isn't definition sprawl going around. I don't think it's any secret that it's probably going to be the same 12 people looking at the TUG repo as are on this call right now, so I doubt it'll be that big of a challenge.

Frederick, I think you put forward the term "kube-native". Do you have any feedback on that, or is there something written out that you've seen or that you have?

I didn't write anything down on that specifically. The thought process was that one of the traps we ran into was trying to keep the definition too generic.
So with "cloud native", we don't want to assume that it runs in any specific place in the cloud, but instead to narrow down what we mean to something smaller. I would argue that if you create a buildpack that runs in Lambda, or runs in Cloud Run, that would also be cloud native, but it's likely not what we're looking for in these conversations. So the term "kube-native" was thrown out specifically to provide some initial direction: what are the things we need to do in order to get these things not only running in the Kubernetes environment, but running well? But there's also a trap here, because it's possible to weld ourselves too tightly to Kubernetes as it currently works, so that as Kubernetes evolves, or maybe another platform comes around in the future, we're stuck in a similar position as before, unable to run well. So it's a balance; we don't want to turn the crank too far. But I still think the term is useful, though I'm okay with dropping it in favor of another term if that's what the group prefers.

The purpose, I think, was originally purely to say that calling something cloud native does not mean it runs in containers, and saying it runs in containers does not mean it runs on Kubernetes. So Kubernetes was a shortcut through that whole discussion, so that we didn't use "cloud native" to mean something it doesn't mean. If we keep it that light and airy, so it's really just a shortcut for what about 70 percent of people actually mean when they say "cloud native", because they're not being careful with their words, then we might get somewhere with it.

Yeah, because we're not trying to build network functions that run on anything but Kubernetes here.
We're trying to say: not "cloud-native network functions" or "containerized network functions", but network functions that run on Kubernetes. We're not pretending anything else is our goal, I don't think.

Yeah. Go ahead.

If I were to create an L2 network and expose it behind an API, which you could then use to add into a cloud in order to connect to that L2 network, is that cloud native? Some of the definitions would say yes, because it's nicely accessible through an API, and you can declaratively state what it is, and so on. Other people would argue no, because it uses primitives that don't lend themselves well to cloud environments, or to things that tend to work well across cloud environments. So part of the intent was to be careful not to have to argue those particular kinds of things. We could say: here's the best way to run within Kubernetes and how to interact with it, and separate out the question of whether it's a good idea to expose these types of things at all. What are our best practices, and what runs well in those environments? Separate the two, and isolate the conversation specifically to things within Kubernetes, rather than dragging in a whole range of other things. We may eventually have to jump into some of those, but we don't have to do it now.

I don't know if this is helpful or will just complicate things, but for me, I never liked the term "cloud-native network function". I don't refer to workloads as cloud native; "cloud native" to me implies a set of practices. So you could potentially take a network function that was not designed with cloud-native practices, but wrap it in some sort of orchestration system, a connectivity layer, a set of operators, that cloud-native-izes it, right, that makes it work much
better within the Kubernetes environment, in a way that can make it seem cloud native. Whether the workload itself was designed that way is almost beside the point.

A little historical context: what you're describing is exactly what we meant, in the sense that it's not enough to do a lift and shift; you really should redesign it to work in a cloud-native environment. If you did a lift and shift and then containerized, that is not the intention of "cloud-native network functions". It's literally: how do you design following 12-factor-app principles, creating good metadata that you can then consume and reason about? So your scheduler can make decisions about your workload: oh, you're a workload that speaks IP, so I'm not going to attach you to something that only speaks Ethernet frames; instead, I'll make sure I connect you to something that speaks IP. Or you use SR-IOV, so I'll connect you to an SR-IOV-capable thing. And all of that happens in the scheduler.

And we're going to go around in circles on this. I want to contradict Tal, but I'm going to bite my tongue, because I don't think this is a productive use of our time on this call: there are probably as many perspectives on what cloud native means, or could mean, as there are people on this call. So let's take this to the discussion if we really want to have it. Where this started was: is "kube-native" a useful sub-definition of what cloud native means? And I think it could be, because it's more specifically what we're trying to accomplish.

Yeah, and to add to that, we spent well over a year trying to get people to agree on what it means to be a cloud-native network function, and there's still no agreement. That's the other reason for driving towards "kube-native".
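[Editor's note: the kind of scheduler-consumable metadata described here can be sketched as a pod that declares its network needs declaratively. The device-plugin resource name and network name below are hypothetical, chosen only to illustrate the idea.]

```yaml
# Hypothetical pod declaring its SR-IOV requirements as metadata that
# the platform can reason about, rather than hiding them in the workload.
apiVersion: v1
kind: Pod
metadata:
  name: vnf-workload
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net   # secondary attachment request (illustrative name)
spec:
  containers:
  - name: vnf
    image: example/vnf:latest                 # illustrative image
    resources:
      requests:
        intel.com/sriov_vf: "1"               # illustrative device-plugin resource name;
      limits:                                 # the scheduler only places the pod on nodes
        intel.com/sriov_vf: "1"               # that advertise a free VF
```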
It avoids all of the discussion around that, because that's a trap that will lead us down a dark hole that we may never emerge from.

I mean, it seems to me that what we're trying to do here is find best practices for building applications that usefully serve end-user needs and run on Kubernetes. Because it isn't the cloud-nativeness of the application, it isn't the kube-nativeness of the application, so much as: does it help?

Yeah, that was pretty much my point as well. It's kind of nice to idealize and think of these pure, excellent cloud native functions that are out there, but for example our whole conversation about networking orchestration is not CNF-specific — you could work with PNFs as well. We're thinking about the environment in which these network functions are going to be running, which is Kubernetes-based.

But of those pure cloud native functions, name three. Right. Anyway, I don't think I'm helping here; I'm just making it more muddled.

We're all saying that there is a difficulty here — we all see it, and we're all using different ways to try and solve it. But this isn't a meeting for solving things; if it was, it would be a lot longer than an hour — it would be all week, eight hours a day. So if you have thoughts on kube native, please add them to the discussion thread, and if you feel we need a dedicated thread just for kube native, then create one. We do already have a dedicated thread for the CNF definition, so feel free to add there. And — sorry — Jeffrey, I think, has dropped; Jeffrey is going to create a PR, so we'll see what that looks like for the CNF. Let's move on.

Hello, are you still with us?

Do you want to talk about discussion 118?

Yeah, I think there were some comments that we collected last time at the end of this discussion — if you scroll down, Jeffrey had some points; unfortunately, I think he dropped already. Are you still around, Jeffrey? So I think there was a discussion about: should we consider external network orchestration to be part of the Kubernetes ecosystem, or should an operator sitting inside the Kubernetes cluster be accountable for that kind of orchestration role?

So I wanted to ask — I mean, it seems to me that we've got DANM, Multus, NSM, a bunch of theoretical things that could exist but don't. They're solutions to a problem. They could potentially be a best practice if you can make a strong argument that one of them does everything that could possibly be conceived of and could never be bettered — there is a perfection here and you've reached it. And I presume you're not arguing that; you're saying ENO is better, not the best ever.

Yeah, I think that's not our point, Ian. ENO is not at all substituting or replacing any of those that you mentioned; it is complementing them, because it's filling a gap for which there is nothing today. And that is to orchestrate networks that Multus and the CNIs can then use in order to attach pods to them.

Right, so you're thinking in terms more of the connectivity that Multus doesn't address, as opposed to the presentation that Multus does address?

All right, fine — Multus has a very small task, and that is to plumb pods to networks that already exist and are configured up to a certain level inside the cluster, on the worker node. That's what the CNI does. ENO is addressing all the rest: setting up those networks in the fabric and inside the cluster, and maybe on the DC gateway, in order to prepare the infrastructure for Multus to do its job, or for DANM to do its job.

Yeah, or anything else. That's right.

What I was trying to — I may not have used the most elegant words — but the point I was getting to is: you can take this two ways. Either you can say that ENO itself is the best practice, or should be a best practice, because it solves this problem as well as anything does right now, or as well as it ever will be solved. Or, the second way: rather than take ENO the implementation, you take the problem space it's trying to address — which I think is what you were just talking about; you were saying that connectivity is an issue — and you ask: what have we just learned about the problem space? What's your aim? If I left you to your own devices, if I got you to write the best practices, what best practices would you write, based on what you know about ENO? Hypothetically — anything will do.

I mean, with ENO we are basically bringing, as was said, the automation for the external networks, which will then eventually be consumed by the network managers like Multus or DANM and NSM. So we are bringing a sense of automation for such networks, which will then later be consumed by the network functions — and that doesn't exist today in the ecosystem.

Yes, but I'm asking about best practices. What best practice would you write that either declares that ENO is the best practice, or points strongly towards ENO as a good solution that satisfies the best practice? How would you phrase that?

I'm not sure, Ian, that I understand what you're after at all. I must say I'm completely puzzled. What do you mean by a best practice?

Well, we have a challenge today: if an operator deploys a Kubernetes cluster, they have to manually set up all the networking underneath and inside the cluster in order to prepare for the secondary network attachment managers — Multus, and the CNIs that it controls — to do their work. So we don't have a best practice today. What we are trying to do is create something that provides an API — a Kubernetes-style API, so it is meaningful to actually host it on the cluster itself, as a CRD — to provide an interface, a declarative way, for an orchestrator to create those networks automatically. That's the idea; that's what we're after.

Yeah, right. And the reason I bring it back to best practices is because that's what this group writes. I'm trying to work out how we use those best practices to argue that ENO is great, or bad, or as good as you're getting right now. And I think what you're saying is that a best practice here — well, an initial best practice — is that you have a set of APIs that allow you to reconfigure the network, so you can attach to where you want to attach. And the long-term best practice would be: use precisely this API, because this API is standard, and if you use it you'll work on any Kubernetes deployment you find. Neither of those actually says ENO in it — and I'm not trying to say ENO is good or bad. You've heard that I've thought about this, and I think there are other things we can do here, but that's not to say that I'm right in my choice of implementation. I'm just trying to work out what it tells us that we can use from a best-practice perspective. And I do absolutely accept that ENO lets you do something that you need to do — and, interestingly, something that today you can't, practically speaking, do.

So if we were to write user stories and use cases — they are not altruistic, in my experience; you write them with a fairly pointed aim of saying there is a hole here that we need to fix. So what you're saying here is: I would like to connect to the network that sits next to my cloud, and I currently can't do that. Well, you don't have to say "I can't do that" — you simply have to say, "in order to do that, I am going to need these things." And that's where I would take what you have and phrase it as: this is going to be a necessary component of whatever solution we build, because you aren't going to have network functions if you can't actually attach them to the right bit of the network, the way you want them to be attached. Am I —

Yes, I think that's our proposal here; that's the thing we want to say. The aim is that we define a northbound API — a Kubernetes-style API that can be hosted on the cluster itself — to provide an interface that can be consumed by orchestrators running on top of the clusters, to actually do all the necessary network plumbing inside the data center, to prepare these networks to be consumed by CNFs.

Yeah, and I think that is very open. The example that we have shown, and that we are coding — the PoC — is focusing on the very simple first use case: to provide bridge domains across the fabric, to connect secondary network interfaces to them, right.
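A declarative, cluster-hosted resource for that first use case — an L2 bridge domain stretched across the fabric — might look roughly like the manifest built below. This is a sketch only: the API group, kind, and field names are invented for illustration and are not the actual ENO CRD.

```python
# Hypothetical sketch of a Kubernetes-style custom resource describing an L2
# bridge domain. The API group, kind, and spec fields are invented; they are
# not the actual ENO API.

def l2_network_manifest(name, vlan_id, node_selector):
    """Build a custom resource that an external-network operator could
    reconcile into fabric, gateway, and worker-node configuration."""
    return {
        "apiVersion": "eno.example.org/v1alpha1",  # hypothetical API group
        "kind": "L2Network",                       # hypothetical kind
        "metadata": {"name": name},
        "spec": {
            "vlanId": vlan_id,              # identifies the bridge domain in the fabric
            "nodeSelector": node_selector,  # worker nodes that need the attachment
        },
    }

manifest = l2_network_manifest("tenant-blue", 100, {"node-role": "worker"})
```

An orchestrator running on top of the cluster would create such an object; the operator watching it would then program the switches (and possibly the DC gateway) so that Multus or DANM can later attach pods to the prepared network.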
Yeah, but that's just a starting point, and there may be other, more interesting use cases that will require additional API constructs to model them successfully. And we are one hundred percent open to that. The main thing we're after is this: we believe there should be such an API, and an operator underneath it that automates this.

Yeah. It sounds like there are some best practices that are at least part of the design — you're trying to get a practical solution that could be used, and at a minimum you're saying declarative APIs for configuring the network. But I was hearing other words mixed into the description of what ENO is doing and what you're trying to accomplish that I would say are at least ideas of best practices that should be used, and then some things that sounded more like the implementation side underneath — maybe a best practice, maybe not, when you're talking about the plumbing and the northbound side and everything else. There are some concepts in there that are maybe not best practices but something else. So it would help to take some time to go through what ENO is and identify: here is an area where we actually are trying to follow a best practice, and here is an area where there is no best practice — or we don't know of one — and we're trying to solve this. If those could be labeled or identified, then we could look at the items that maybe don't have best practices and think more about those.

Yeah. I think also there are some things about its current implementation — the fact that it does layer 2 networking. You've heard my opinion on this before, but that isn't, to my mind, necessarily a best practice, because there are other ways of doing networking; they may or may not be more valuable. So that one might be more an implementation choice. But again, it sounds like we've just said that's not really the focus. The focus is that we have absolutely a blank wall here — we can't do anything with networking — and that itself is the problem we need to address.

And regarding the L2 network, I think that's just one of the network scenarios — the initial implementation of ENO, with the fairly straightforward use case we have chosen today.

So that's fine, and I would do the same, because it's a simple thing to do. It's logical, it's basic, it has history, so everybody understands it — totally fine. And it may well have its uses, and that's completely good as well. But I think if we divorce the two, then you don't lose one argument because you're trying to win the other. You've got: we need a decent networking API; we need to figure out what that networking API would do; and we can work through some use cases or user stories specifically to work out how it could be used, and when something is a network-admin problem versus a CNF-owner problem. Those are all valuable things to address.

Yeah, totally. But whatever we do, we must not lose this simple, basic use case, because — like it or not — it plays a very prominent part today, and we will have to continue to support it for a long time, for many of the containerized, cloud native network functions that have been built on existing technology, including SR-IOV and all these things. We may not like them — they are not cloud native — but they are out there, and we need to support them and automate them as well.

Can you work on adding those use cases? You just mentioned several — I heard at least three, I would say, that could come out of that quick comment, and I think those are important to keep. Could you work on PRs to create those use cases, or at least create discussions for them?

So, we can do that.
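For context on the consumption side discussed above: once the fabric has been prepared, the in-cluster half is the existing Multus mechanism — a `NetworkAttachmentDefinition` (API group `k8s.cni.cncf.io/v1`) whose embedded CNI config a pod references by annotation. The `NetworkAttachmentDefinition` kind and the pod annotation are real Multus constructs; the bridge name and VLAN below are illustrative values, assumed to match whatever the external orchestration set up on the worker nodes.

```python
import json

# The NetworkAttachmentDefinition kind and the pod annotation are real Multus
# constructs; the bridge name and VLAN are illustrative values assumed to
# match what the external orchestration prepared on the worker nodes.

cni_config = {
    "cniVersion": "0.3.1",
    "type": "bridge",       # standard CNI bridge plugin
    "bridge": "br-tenant",  # assumed to already exist on the worker node
    "vlan": 100,
}

nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "tenant-blue"},
    "spec": {"config": json.dumps(cni_config)},  # Multus stores the CNI config as a JSON string
}

# A pod then requests the secondary interface with the annotation:
#   k8s.v1.cni.cncf.io/networks: tenant-blue
```

This split is the gap being described: Multus and the CNI handle only this last step, on the assumption that the bridge and fabric underneath were prepared by someone — today, manually.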
Yeah, all right. I want to quickly go through — we've got about a minute — what's been merged. First, we switched "individuals" to "interested parties". I think I clicked the wrong link there. So, interested parties: if anyone would like to add themselves, it's now just a long list of anyone interested, and we tried to attach company names to everyone, so we can see that. This is backwards compatible with what we have already, but please, if you're not on here and you'd like to be, then open a PR to add yourself.

Next, we removed the tech leads from the governance items until we need them; we can add them back later if we decide it's necessary, but this simplifies things. What we were trying to do there is — they had kind of grown up as a concept without really having a purpose, so we thought it was better to remove the wording until we found what we want people to do, and then we will fold it back in. So it's not like they've gone forever. We're not trying to change the way things work; we're just trying to make sure that need drives change, versus change for change's sake.

Yep. All right.

And let's see — the acceptance process for delegation: this has been merged. This is about the simpler items; it will be based on the contributing guide and the pull-request information. All of this is now merged, and you can see it there. I think those are the top ones. There are a few open pull requests — some pretty minor ones — but if you want to review and give any feedback, that's there. We still want to get the books use case through, so please do some reviews on that, and let's hopefully get it in by end of week.

Thanks, everyone, for your time. We'll see you next week.

Thank you. Thanks. Bye.