So welcome everybody to the inaugural meeting of the Universal Data Plane API Working Group. I think hopefully everyone's here because they've read the charter that was put together a few weeks back by Matt and myself, and perhaps may have even read Matt's blog post on Medium, which roughly described what the goals of the Universal Data Plane API are. So hopefully we don't need to do too much to set levels and expectations there. But I think what would be great is if we can just go around and introduce ourselves: say who we are, which company we represent, and what we're hoping to get out of this series of meetings. So I can start things off. I'm Harvey Tuch. I work at Google. I'm also an Envoy contributor and senior maintainer. And my real hope is that we can make concrete progress towards realizing the vision that was set forth in the charter and Matt's blog post of making Envoy's APIs universal, in the sense that they can be applied well beyond Envoy itself: a set of APIs which generally describes the behavior that you expect from data plane load balancers and proxies. Can we come up with a common configuration language, starting with the existing Envoy APIs, make this work for a whole variety of other products, and build an ecosystem which can interoperate and provide a lot of value as a result? So that's me. Maybe we can just go across left to right in the grid as I see it. So, Matt. Hey, I'm Matt. I work at Lyft; I'm the founder of the Envoy project. I'm excited to be here for all the same reasons that Harvey laid out. Vaclav? Oh, we can't hear you. There we go. Yeah. Is that better? Yeah. Okay, good. I do that a lot: you can see my lips moving, but you can't hear anything, right? So I'm Vaclav, I'm at Microsoft. And again, I guess a lot of the same reasons that Harvey mentioned already, but we tackled the control plane universal API pretty recently as a joint effort, which seemed to have some pretty interesting uptake. So it's interesting to see how this will play out for the data plane as well. Okay, very cool. Mark, next? Sorry, are you talking to me? Yeah. Okay. I'm Mark Roth, I work on gRPC and we're looking at adopting this API for use in gRPC. Hi, I'm Emma Berenberg. I'm here at Google working on gRPC, whatever's interesting. Everything interesting, yes. And my interest in this group is actually interoperability of xDS servers, given that there are now so many of them at different companies. Hi, I'm Vishal. I work on gRPC load balancing and load balancing in general at Google. And obviously my interest in xDS stems primarily from gRPC and is now expanding. Hi, I'm Hone, and I'm the area tech lead for API design and infrastructure. I don't work on a specific API, but I'd like to help on the API design and the governance, and I hope to contribute to this project. Thanks. I don't know how to say that. Krusee? Jay? Hi, I'm Ruy Sibisadis, I'm representing NS1 today. We're a managed DNS provider. I think our main interest here is just figuring out how to have some tighter integrations with some of this stuff as it moves forward.
We have a lot of this kind of data generally, and reading through that blog post, it seems like DNS was one way to do it, so we're seeing if there's a way to advance that with you as you develop this API. Okay, awesome. Thanks. Willy. Yes. Can you hear me? Yeah. Okay. So I'm Willy Tarreau. I'm the founder of HAProxy. I've been maintaining it for something like 18 years or so. I'm not using an API for it myself, but some of our users do, or have asked us to improve in this area. However, I'm extremely careful about whatever evolution we make to maintain the maximum of compatibility with existing configurations. Our users have big investments in their configurations; that's something very important for us. So that's why I'm interested in participating here. Great. Thanks a lot. Ethan. I think you're muted. Hello. Do you hear me? Yep. This is Ethan from Ant Financial. I'm working on a front-end load balancer and service mesh things at Ant Financial. We are using Envoy's xDS API to control the data plane, and I'm very interested in the working group. I'll join the working group myself to follow the discussion further. That's it. And Yelko? Hi, yeah, Yelko, director of engineering at HAProxy. I work with Willy and with APIs. So from my standpoint, if we can integrate HAProxy with emerging trends and help improve the general state of APIs being used for proxies, that's my primary goal. Awesome. Eric? Oh, yeah, you need to unmute on your laptop. All right, so you don't hear me twice, because we're dialed in as well. Okay. I'm Eric, I'm part of the gRPC team, and we're interested in all the control plane and data plane pieces to integrate; sort of the same things that others are interested in. Okay, I'm Doug Fawley. I work on the Go implementation of gRPC. The things I'm interested in are making sure that the API is going to work well for upgrading across various versions of gRPC, and that everything works smoothly. Hi, I'm Louis Ryan. I work on Istio here at Google. So, all the same things as everybody else. Brian? Hey, Brian Salenza, software engineer at AWS. My main focus right now is on AWS App Mesh, but I'm also here sort of representing AWS in general, because there's a lot of interest in both Envoy and xDS within AWS and within Amazon as a whole. My real hopes right now are, one, to learn about what's coming and how you're thinking about the API design, and hopefully I can contribute there as well, but also to perhaps represent the needs of some of the other teams within AWS or Amazon who are interested in adopting Envoy or xDS for their own needs. Awesome. Finally, Lizan. Hey, I'm Lizan. I'm working at Tetrate and I'm an Envoy maintainer as well. I'm interested in this working group for all the same reasons that Harvey and Louis and everyone talked about. We have some integration points with cloud providers as well, so I'm interested in those too. Okay. That's me. Awesome. Thanks. Okay, so that covers the introductions. I linked to the charter here; again, I assume most folks have read it. The idea here is that we're trying to evolve the APIs in a way which takes into account the needs of many projects and organizations. There are actually a lot of people on this call from Google, but that shouldn't be taken as an indication of any particular weighting in that direction.
We're also here to learn what makes sense to other proxies and projects when we're thinking about these APIs. So, the next thing I thought it would be good to mention to kick things off, in terms of upcoming activity around the APIs, is probably one of the biggest changes we're going to see in the next few months, and the opportunity for this working group to be effective: the stable API versioning proposal, which I linked to as a GitHub issue there, along with a proposed plan of record for actually making it manifest in the Envoy code base. A brief history: we started off with the Envoy v1 APIs, which provided Envoy with things like service discovery and then incrementally added features like route configuration lookup and listener discovery. About two years ago the project put together a set of relatively coherent gRPC-based APIs, which also have REST variants. These are what we call the v2 xDS APIs, and they've essentially become the standard across the Envoy ecosystem, built on top of bidirectional streaming gRPC and protobuf as their defining characteristics. There's been a huge amount of work on these APIs over time. Some aspects of them are truly universal; some are very specific to Envoy. And we're at the point now where we have at least our first real additional client. There are many xDS servers out there, but for the longest time there was really only one xDS client, which was Envoy, and now gRPC, for its load balancing, is planning on onboarding these protocols. While working with that team we realized there are a number of things we need to do to make these APIs usable by folks beyond Envoy. These include, for example, thinking at the design level about which parts of the APIs are Envoy-specific and which are universal. There are also some very concrete things we can do, like not removing fields or breaking the APIs in ways that a consumer of protobuf would consider non-standard. Various conventions have arisen in the protobuf world around how you treat protobufs in APIs which Envoy was not really respecting, because we had our own communal understanding of how compatibility was supposed to work. So we're trying to adjust these things and improve the situation here. We have a detailed design document describing how we're planning on stabilizing the Envoy APIs and actually moving beyond just a monolithic notion of v2 to a family of APIs which would be versioned independently along major versions, where we will be working on a v3, and where things like v2 and v3 will be stable APIs which only grow in non-breaking ways. This is essentially what we're working on right now; it's a combination of actual API design, a lot of tooling to make this feasible on the Envoy side, and features in Envoy to make it reasonable to roll out these APIs and new minor versions without breaking existing deployments.
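For reference, the defining characteristic mentioned above, resources carried as protobufs over bidirectional streaming gRPC, looks roughly like the sketch below. This is a trimmed illustration only: the package name is a placeholder and several fields of the real v2 protos are omitted.

```proto
syntax = "proto3";

package udpa.example.sketch;

import "google/protobuf/any.proto";

// Trimmed sketch of a discovery request/response pair; the real v2 protos carry
// more fields (node identity, error details, and so on).
message DiscoveryRequest {
  string version_info = 1;            // last version the client applied (ACK/NACK)
  repeated string resource_names = 2; // which resources the client is subscribing to
  string type_url = 3;                // e.g. "type.googleapis.com/envoy.api.v2.Cluster"
  string response_nonce = 4;          // nonce of the response being acknowledged
}

message DiscoveryResponse {
  string version_info = 1;
  repeated google.protobuf.Any resources = 2; // opaque resources of the requested type
  string type_url = 3;
  string nonce = 4;
}

// Each resource type (listeners, clusters, routes, endpoints) is served over a
// bidirectional stream like this; ADS aggregates them all onto a single stream.
service AggregatedDiscoveryService {
  rpc StreamAggregatedResources(stream DiscoveryRequest)
      returns (stream DiscoveryResponse);
}
```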
So the plan of record right now is, at the end of Q2, which is actually coming up, to cut a v3 alpha for any API which we want to start reorganizing, and then by the end of Q3 to finalize v3 and cut that as the next version for whichever APIs need to be bumped. Probably a lot of APIs will be bumped, because there's quite a bit of technical debt in the APIs, since historically we didn't want to break folks, and this is a really great opportunity. Alongside that, there has now been a bifurcation at the top level of the API package namespace, from just envoy.* to envoy.* plus udpa.*, and we're hoping to use package namespaces to start drawing a clear distinction between parts of the API which can be truly reusable by anyone and parts which are very Envoy-specific. This will be a process; it won't happen just over the course of v3 or even v4, but over time we intend to move towards this package organization, and the end state is ideally that the envoy namespace only has things which are specific to Envoy, and potentially there will be other proxy trees there. For example, if HAProxy needed some very HAProxy-specific things, they could live in such a tree. Hey Harvey, one thing that we should talk about, it doesn't have to be now: we obviously used to have all the APIs in the data-plane-api repo. Yes. I do wonder, especially for the UDPA portion of the tree, if we should plan on actually moving that back out into a separate repo; it would be more in the spirit of sharing. It's something that we don't have to decide today, but I think we should talk about it, and given that the rate of change in the UDPA APIs should be slower, we're also going to have to figure out a bunch of stuff around Envoy documentation, because all of the Envoy documentation effectively has to be stripped from those APIs and put somewhere else. So it's probably worth putting down an action item on the actual storage and structure of which GitHub repos we want things to live in. Sure, I'll add that to the technical topics. The CNCF actually created a GitHub repository for us, so that could be a starting point, but it doesn't have to be the one we use. Cool. Yeah, I just want to echo what Matt said, I think that's a good idea. That's what we do with the API repos at Google: there's a googleapis repo that holds all the protos, and some teams submit the proto change before any code is written, so that's where you define the interface before you have code. Because the protos are relatively lightweight, you can have hundreds of repos depending on a relatively stable set of protos and the dependency cost is very low. Yeah, I mean, that's actually what we used to do, but it made our development velocity slower, at least on the Envoy side of things, because we had people doing PRs back and forth. I think that now we're at a different level of maturity, so now is probably the time to think about moving back. Yeah, okay, noted for sure; that's something we should definitely cover. That would definitely be welcome, and I think it would increase the visibility and discoverability of both the API and any documentation accompanying it.
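To make the namespace and repo discussion above a bit more concrete, here is a hedged sketch of what the split could look like; the file path, package names, and message below are hypothetical illustrations, not the actual layout.

```proto
// Hypothetical sketch of the package organization discussed above: the major version
// is encoded in the package name, and genuinely reusable types move from the envoy.*
// tree into a udpa.* tree. Names here are invented for illustration.
syntax = "proto3";

// e.g. udpa/type/v3alpha/percent.proto
package udpa.type.v3alpha;

// A generic type that any proxy or management server could reuse; nothing in it is
// Envoy-specific.
message Percent {
  double value = 1; // 0.0 to 100.0
}

// By contrast, an Envoy-only message would stay under the envoy.* namespace, e.g.
//   package envoy.config.filter.http.some_filter.v3alpha;
// so that, in the end state, the envoy tree holds only Envoy-specific pieces and other
// proxies could have trees of their own (haproxy.*, and so on).
```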
Yeah, we'll actually have to think about documentation from the Envoy and the API perspectives a little differently. I mean, we can strip the Envoy-specific documentation, but we probably want to have some union of the actual API documentation, and that has to fit the way we build our docs. Yeah, and I think many projects will probably need that, so that might be something where, again, these are just things to think about, but I could imagine the existing protodoc tool having some way of taking a set of generic protos and then letting projects specify additional documentation that effectively gets merged in somehow. Yeah, some sort of templating thing. Yeah, maybe. Cool, okay, excellent. So yeah, I linked to the plan of record. There are a number of technical items in there which I think are interesting to this group, and we want to, real soon now, like within the next week, start to lock down those APIs a bit better. Currently any Envoy maintainer can merge a PR which touches an API and there's no coherent review of the APIs. We've created a team within the envoyproxy GitHub organization called API shepherds, so please let us know if you're interested in being one of these API shepherds; ideally you're very familiar with the Envoy APIs as they are right now. Mandatory sign-off from API shepherds will be required before we can merge any PRs to the API tree. This is our first step to avoid breaking things, because this group will be responsible, first of all, for not breaking the v2 APIs, and second of all, for providing tasteful API design and making sure we act consistently across different parts of the API, so that if someone uses a header map here, they use the same data type as a header map there. There's another group that I've created called envoyproxy/udpa-wg, which hopefully anyone here who wants to can be part of; just submit your GitHub ID to me, and we'll try to ensure that this group is tagged on all API-related reviews. So there will be mandatory sign-off from API shepherds, but everyone will get to see all of these reviews and have an opportunity to comment. We usually don't merge things instantly; there's usually at least a day or so before anything really gets done with reviews in Envoy. So that's probably the most interesting item that is about to happen immediately, along with the v3 alpha being cut. That's the opportunity for anyone who wants to suggest, well, why don't we make this change to the API to make it more universal, and it can be a hugely structural change if you would like. We can actually do that. I mean, structural within the bounds of still somewhat resembling xDS. We're not interested in completely changing the way the xDS discovery protocols work overnight, or completely changing the notion of listeners and clusters. But there's some amount of flexibility we have there; for example, someone might point out that something is a routing-level concept in Envoy but in all these other systems it's a service-level concept, or vice versa.
We could potentially think a bit about how we could accommodate those as we move towards v3, and I think that's a good time to be doing it and getting these notions on the table. This working group is ideally where we can have that discussion and raise the things that might be interesting there. So, that's all I had to say about the versioning proposal. Before we dive into some of the technical topics that we'd like to address in the working group, let me just quickly cover meeting cadence. Does monthly work for most folks, or do you think meeting more frequently would make sense? I would vote for monthly right now. I feel like we can do a lot of things over email, and then if we feel like we need to meet more often we can; that would be my vote. Okay, that sounds great. Is that good with everyone else? I agree. Okay, cool. So we had a request from, I think it's Dave Cheney, to not make these meetings on Friday, because he's in Sydney, Australia, and this is not a good time there as it's Saturday morning. So ideally we'll push this to somewhere mid-week. Is it too difficult to coordinate a common time that's okay with everyone? What is a time that generally works with the time zones that we have? I don't even know what time zones those are; we have someone in Australia. Yeah. China. China, Europe also potentially? Yeah, France, yeah. Okay, I guess Lizan's probably the time zone expert. What time do you do meetings, Lizan? Well, I sacrifice my time zone, so it will be around midnight Pacific time. I don't think that works for the Eastern time zone. So probably some time around now is good for me, but I don't think it's really good for those in Asia; I think it's like four or five a.m. in China now. Well, sorry, with your experience of doing worldwide meetings, is there any good time? Well, it's hard to accommodate everyone if people are spread across the West Coast, Europe, and Asia, which are basically eight hours apart each, so there's never a good time for everyone. Well, we could rotate the time, that's one option. If it's East Coast US, I'm willing to go to like nine p.m. or whatever, but I know that's probably not great for Europe. The HAProxy folks, are you in Europe or US-based? We are in Europe. We are in Europe. Yeah, there's never a good time for all three regions, since they're about eight hours apart each. Why don't we, I mean, it's something that we can figure out offline, but it feels like, to be fair to people, we should probably rotate, so that one month it's within working hours for some people and then the next month they have to either be up really late or something. Yeah, you can only accommodate two of the three at a time. Okay, so maybe we just take an offline action: every other month we rotate so that it works for two regions, and for the third it's going to be very late or very early. Yeah, or should we just pick a fixed time which makes sense for the time zones? I mean, there are quite a large number of people to coordinate, so I don't know. I think whatever seems like normal working hours for one of the three regions should be okay.
So whichever time zone ends up with the short end of the stick will have to adapt; they'll just have to take the hit. Okay, so let's go ahead and take a look at the topics, and please add any agenda items if you would like to. Okay, let me share this doc with those who don't have it. This is an artifact of me using Google's corporate G Suite, which means I need to share it with everyone who needs access. So I think Mark had the first technical topic that he wanted to discuss in the context of this working group. Yeah, and I'm not sure, just procedurally, how much detail we want to get into here, but I just wanted to float the idea. If it seems reasonable to people, I can put together a PR and then we can iterate there. But basically the idea is that right now, I think it's in CDS, the CDS response that comes back basically tells the client which intra-cluster load balancing policy to use, and right now it's an enum. We've got a lot of custom load balancing policies that we want to use in different cases, and an enum is not really going to cut it, because we can't expect to add a new enum value for every random custom case we might want in a particular deployment. In particular, in gRPC we have a pluggable LB policy API, so any third party can write their own and plug it in. So what I'm proposing is that we replace the enum, with appropriate backward compatibility obviously, with a more flexible system where the policy name is essentially a field and we can attach a proto with arbitrary configuration options for that LB policy. Does this seem reasonable? Yeah, that's a huge plus one for me. We actually just did that with the cluster objects, because we now allow cluster extensibility, so it would be great to move that way. And from the Envoy perspective, we want to allow pluggable load balancers also, so that sounds great. Super. Yeah, and if you look at the changes that were done in the last couple of months to add this capability to clusters, I think you can mostly replicate what was done there. Okay, sounds good. I'll try to get a PR together then. And the question is, which things belong in the v3 alpha and which things belong in v2? Do you need this sooner or later? It would be nice to have it sooner; it would help a lot of things on our end. And also, since this is presumably going to be done as an addition, it shouldn't break backward compatibility; the old field will just have to continue to be populated for older clients. Cool. I think we should take the opportunity to put that into the Envoy design guidance. This kind of debate has happened many, many times. Our rule of thumb is that if you define an enum, you need to be prepared for it to change no more than about once a year, like HTTP/1, HTTP/2, or SPDY; that rate of change an enum can handle pretty well. Anything more than that, you start with a string with a spec that explains what the string means, like an ISO country code.
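As a concrete illustration of Mark's proposal above, here is a rough sketch of how the fixed enum could be supplemented by a named policy plus an opaque typed config, similar in spirit to the cluster extensibility change Matt mentioned. The message and field names are hypothetical, not the actual Envoy protos.

```proto
syntax = "proto3";

package udpa.example.sketch;

import "google/protobuf/any.proto";

// A named, pluggable load balancing policy with arbitrary typed configuration.
message LoadBalancingPolicy {
  // Built-in or third-party policy name, e.g. "round_robin" or
  // "example.com/my_custom_policy".
  string name = 1;

  // Policy-specific options; the client resolves the type carried in the Any,
  // so new policies require no change to this API.
  google.protobuf.Any typed_config = 2;
}

message Cluster {
  enum LbPolicy {
    ROUND_ROBIN = 0;
    LEAST_REQUEST = 1;
    RING_HASH = 2;
  }
  // Existing enum field, still populated for older clients (backward compatibility).
  LbPolicy lb_policy = 1;

  // New extensible field; a client that understands it would prefer it over the enum.
  LoadBalancingPolicy load_balancing_policy = 2;
}
```

With a shape like this, a management server could ship, say, a custom weighted policy with its own tuning parameters without any further change to the API surface.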
And even more aggressively, you go to the Kubernetes CRD model, where a string refers to an arbitrary CRD in the cluster; you don't have a spec for that string, you're just referencing a CRD. So that is how the granularity goes, and that rule of thumb seems to be working well at the size of Google. Yeah, I was going to say that makes sense to me, because thinking about it now, I can think of many cases with the Envoy APIs where the use of an enum has caused issues. So I would be fine with basically saying that unless we all agree that this is one of those enums that's just never going to change, we just block them. I agree, this is good wisdom. Is there a link to where this is all collected, or is this just the collective wisdom of API folks? In the Google API design guide, we have a section called Common Design Patterns, and we collect many of them there, though we also miss many. Can we put a link to the doc in the meeting notes? Yeah, that would be very helpful. I will. Super, thank you. Yeah, I mean, we based our versioning proposal roughly on those best practices, but it seems like we should probably become familiar with all of them and try to incorporate as many as possible that are missing from our doc. Cool. So yeah, I think that one's not particularly controversial. The next item was federating xDS. We at Google have been interested in being able to support xDS in a world of multiple management servers, each responsible and authoritative for some set of resources, such as services, but peering together and being able to share configuration and distribute it. This can address a number of interesting use cases, such as multi-cloud, on-premise hybrid, and so on. And it's actually going to require a lot of thinking to make this work with xDS, if that is indeed the level we want to federate at: we'll need, for example, to think carefully about how resource naming works, how resources are composed together, and what our xDS discovery protocols look like. This is certainly not the meeting to brainstorm that, but it's probably a good opportunity to raise the point that this is what we're thinking about, and that hopefully we can address, in the context of UDPA going forward, what federation would actually look like. That's probably going to require a separate design doc and a deeper dive in later meetings. Is there anything I missed on the topic, Anna or Louis, that you happen to think of? There are different use cases, like authorization, or failover, or whatever, all sorts of scenarios, that we should look at. And I would love to hear other people's input, because we've been talking about it internally, but I'd like to hear how other companies, especially cloud providers, if anybody here is from a cloud provider, think about it. So I have kind of an initial question for people, which is that the current Envoy APIs, and the proposal here, are discussed in terms of configuring a proxy, right? For the federation use case, you can either think of it as having two endpoints configuring one proxy, or you can think of one control plane using this API as input into another control plane.
There's actually a third case: there's a proposal in the Envoy repo currently where people want to allow Envoy to consume config from multiple control planes, and then on the Envoy side they actually want to merge it. I think there are a bunch of interesting reasons why people would want to do that, but it's worth thinking about it from all of these different angles. Matt, can you post that proposal you just mentioned? Yeah, I'll grab the issue; there's a design doc, I can put it in the chat, one sec. Yeah, just link the doc there, thanks. So the fundamental difference between the two use cases is that when one control plane is using the information from another control plane, what you would expect it to do is recompose that information into other forms: filter things out, augment, join, generally perform compositional operations on that data, in addition to other sources of data it may have locally or elsewhere. What that would do is put pressure on the API to provide the right kinds of information and structure to allow composition to occur. Yep. So I think it's important for people to understand that if they want to take on that use case, it's going to put pressure on the API to facilitate those compositional patterns, and we're going to have to look at which compositional patterns make sense. My concern is that you're increasing the risk to stability on the proxy; this is a data plane, and you'd be doing complex business operations there. Wouldn't it be better to contain the more complex operations at the control plane level, and if we need to pass information from one control plane to another, have a separate control plane API and keep the data plane API in a simpler form? That's a good point, but one of the things we've seen in the Envoy community is that there are two aspects of xDS. There are the actual resources which exist today, like clusters and listeners, and these are very much data-plane-specific objects. And there's also the transport side of xDS, which can be used to move essentially any proto from point A to point B and provide a common addressing scheme. This has actually been used on the control plane, between control plane components, in things like Istio, I believe, for things like MCP, and I've also heard of at least one other large company, who I think is on this meeting today, who might be using xDS in a similar kind of capacity. Yeah, I mean, there's probably not going to be a way to mandate that people funnel everything through a single control plane. And for the people saying it would be better if everything funneled through a single control plane, with that logic living there: I actually agree, I think largely that's true, but I'll give you a concrete example from Lyft. Lyft has a control plane for Envoy where we do most of our control plane management. But let's say, for example, that we want to build a separate service that does redline testing or fault injection or something like that. Fundamentally, we have two options. We can build APIs from our primary control plane to talk to another service that would then do the merging there, and that might be the right decision.
Or we can potentially allow the data plane to talk to multiple control planes in a very controlled situation and then do the merging in Envoy. There are pros and cons to both approaches, and I don't know that, from an API perspective, we have to preclude people from doing one or the other. To Louis's point, I think we need to be cognizant of the fact that these APIs might end up getting merged, and I think that's worth thinking through. Okay, cool. So let's raise the other topic: splitting the APIs into their own repository. That's something we need to address. I think we're probably all in agreement that we need to split the APIs out from Envoy; it's just a question of how to make that work well with the Envoy flow. I don't know if we can avoid the three-way commit dance, but it's entirely possible we might. I mean, to me, that's something that we have to go off and figure out, and it shouldn't concern this group, right? If we're serious about making these universal APIs, we can't favor the Envoy flow. So I think the right thing to do is to have at least the universal data plane API portion of it be its own repo, and I think that would be nice. Yep, great. So, going forward, please send me your GitHub ID if you haven't already and I'll add you to the udpa-wg team, and you should start seeing a stream of reviews tagged with that for potential API changes. Just in the scope of Q2, we'll be taking one API package within Envoy and thinking about what v3 alpha will look like for it, and which bits are universal versus very Envoy-specific. Beyond that, we'll be doing most of the work in Q3, so probably after we next meet. So ideally, between now and then, do follow along with the progress there if you're interested. The mailing list is a great place to raise any issues you have, and I think this doc is also a good place to track future agenda items. Was there anything else folks wanted to discuss today? Yeah, just one thing for folks on the call who are not using Envoy, whether it's a cloud product or HAProxy, and you don't have to answer this now: I think the sooner we can get a concrete case of someone that wants to use these APIs that's not Envoy, and that already includes gRPC, but if we could make that include additional products, I think that would help us do this right. So if you're looking at this and you're trying to figure out your product plans and you would like to use these APIs for some future product, and we can make that concrete, I think that would be super helpful. I would just throw that out there; you don't need to answer right now, but if, as part of your plans, you feel that in the Q3/Q4 timeframe you're going to use these APIs, let's just work together to get things split out in a way that actually makes sense. I was actually going to ask something similar along those lines. What would you say for this working group: how much of the work we're doing is to move forward the Envoy APIs, versus actually trying to come up with a real standard for the industry for a data plane API? Because the notion of a universal data plane API, to me, says we're defining the common set of functionality that we think would be most useful for other proxies and load balancers to standardize around. And there's also a bit where we want to advance the Envoy API specifically.
Yeah, I guess I'll briefly give my thoughts on that, and I'm sure people will have different ones. What's interesting about this is that, at least from the Envoy project side, we need to strike a balance: we want to continue to iterate on Envoy and keep moving things forward, so this is not a case of stopping everything to run a standardization effort. At the same time, we've seen interest from different companies and different proxies, where we think there's a need for this type of API between control planes and proxy systems. So my view is that, to the extent that we have interest from people like the gRPC folks, and hopefully others, in consuming the APIs, we want to be pragmatic and actually make that work. It's not a matter of doing a standards project and hoping that someone uses it. I think we should be more pragmatic, but if we have people that want to build products on it, I'm very excited about making this work. Yeah, my sense is basically what Matt said. I think incrementally evolving the APIs towards the point where they meet that second criterion, of being a general standard which is not Envoy-specific, is essentially where we want to go. I think nothing's off the table at this point in terms of that long-term direction. If someone decides, for example, that the way the listener object works today makes absolutely no sense, we can definitely discuss it. Maybe a few things are off the table: we want this to be proto-based, and we want there to be good gRPC support, so there are a few fundamental structural elements of these APIs which are probably going to stay. But within that, I think there's a lot of scope to completely reshape large parts of it. This will have to be a gradual process, and we have to be cognizant of the fact that we need to have, at any point in time, real clients running this, and we also have real customers today, in terms of management servers for those APIs, who aren't going to be in a position to make radical changes overnight if that's what we cook up. So anyway, that's my two cents. On the pragmatic side, you mentioned that we would probably want to split out a separate repo to host the universal data plane API. I think that's necessary; you kind of have to start there. But we could potentially take it further: someone posted general API guidelines to follow, so if this ends up being a common set of universal APIs, perhaps another thing we could do is provide a common set of guidelines for people who are writing extensions onto the APIs that are specific to other proxies. That's something we'd probably take on further down the line, but those are things to think about. Yeah, so the question I want to ask you: what would a universal data plane API look like to you, that the Envoy ones don't look like today? So I don't know if it's just what they would look like. I think there's a common set of functionality that maybe 90 to 95 percent of proxies have to provide. So, does the API cover that? Or does it cover too much, where those 90 percent of proxies look at it and go, well, I don't have most of that functionality.
I can't implement this entire thing as a specification. Or is it, yes, these are the common things, I agree with all of them, I provide all this functionality, and I'm going to use the same ideas and the same intent to express additional functionality that I want to add, but it's going to be in the same vein. So if we say xDS is the way you do a data plane API, that to me makes sense, and everyone follows a common pattern. Then there's the question of which actual xDS APIs specifically we would expect everyone to implement if they want to be, I don't know if you want to say compliant, but if you want to say, yes, I implement the universal data plane API, these are the APIs I have to implement. And then there's also the strategy of, this is the way that other APIs should be implemented as well. So usually a spec will have: here are the things that you have to implement, and here's a way for you to do extensions for your own proxy or your own product that are unique to you, but that look like the other APIs. Does that make sense? I do think there's a missing piece here, in the sense of needing some sort of general-purpose capability negotiation, so that you can start from a basic core and negotiate out from that. You can always add fields, and in the strict backward compatibility sense you haven't broken any clients, but if the management server and the clients are expecting different things in terms of which fields they know about, then the net result isn't going to be what the people configuring the system intended. As you mentioned, Mark, that is addressed in the stable API versioning proposal with client feature discovery. Oh, that made it in there? Okay, good. I mean, it's not exactly that; it's very high level right now, it's just a bunch of opaque strings, and we have to think about what they will really mean and whether there's a better way of structuring the APIs to actually structurally separate out some pieces. Okay. I would like to make a point, if possible, regarding whether we start from Envoy or start from scratch. Having participated in a number of working groups in the past, I would say that every time we want to restart from scratch to define something universal, it takes a decade, because it's impossible to cover everything and time doesn't stop. And when we start something, it's tied to the initial goals of the first design, so it's difficult to steer it towards something more universal later. I think it is absolutely mandatory to be able to cover the current Envoy API, just because apparently it works; people like it, so we need to cover it. It's a good start, to have a feeling of what needs to be done, but we need to be reasonable at some point, and probably decide to fork when it has to fork, even if we break compatibility at some point. Maybe we will figure out that certain parts need to be significantly modified because the way they are implemented only matches a certain set of usages; we may detect a number of things like this. So I'm in favor of starting from the Envoy API, with no reservations, and refitting it over time.
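Returning to the capability negotiation point raised a little earlier: the client feature discovery mentioned in the versioning proposal is, at this stage, just opaque strings advertised by the client. A minimal sketch of that idea, with illustrative field names only, might look like this:

```proto
syntax = "proto3";

package udpa.example.sketch;

// Identity the client presents to the management server on each stream.
message Node {
  string id = 1;
  string cluster = 2;

  // Opaque, well-known strings naming optional features this client implements,
  // e.g. "example.load_balancing_policy". The management server can then avoid
  // sending configuration that the client would silently ignore.
  repeated string client_features = 3;
}
```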
Okay, yeah, I think the intent is that we're going into this exercise with the idea that what we have works, but probably doesn't work for everyone, and we're very open to the possibility of changing just about everything, as long as we have real, concrete use cases to motivate it; that's essentially the requirement. That's just generally the way we work in the Envoy community: we prefer to work against a concrete, tangible thing. So if, for example, we learn that to support HAProxy we would need to make certain changes, those will be things that we're very willing to discuss and open to, as long as we don't lose the fundamental capability to express what we need for Envoy. Well, I think there's not a big danger of that. If we combine Vaclav's and Willy's thoughts, I think we have the right track. Obviously the current APIs work, and that's their big strength. What we could do is find, within those existing APIs, what we consider to be the core or basic features that should be common amongst all the proxies, and that should be implemented by all of them in order to effectively communicate with control planes and provide the data plane API functionality. And then, out from that, we can build up maybe one or two additional or optional feature sets, which would cover a larger scope of features and maybe be either more specific to a particular product or just enable more advanced or extensive configuration options. I think that's fine. This just comes back to what I was saying before, which is that I would not want to do that work if in practice no one's going to end up using it. So if, for example, and I understand that there are no commitments within this industry, but if the HAProxy folks were to say that with high likelihood, if we do this work, you will move to this API, I think we would definitely do it, right? But it would be a shame to do a bunch of work and have it end up that no one uses it. Yeah, for sure. We're not fans of doing useless work, definitely. From our standpoint, I think by the end of this month or maybe the start of July we'll have a clearer idea of what kind of products, or what kinds of ways, we might be using the data plane APIs in, and that will then allow us to give some actual concrete feedback and suggestions based on the current state of the APIs. Super, yeah, that's great. Yeah, in terms of moving towards this common core, that is the goal of the current migration from the envoy to the udpa package namespace. So hopefully we'll start moving out some basic things really soon, even things which are very uncontroversial, for example data types and that kind of thing, and that can be done sooner rather than later. I think the really difficult questions will be on things like what belongs in a route versus a cluster definition across different proxies, or what form load balancing assignments should take if they're to be generally useful for everybody. There are a lot of specifics in there which were crafted based on the feature support that we've been adding to Envoy over time. So yeah, I look forward to those conversations. Cool, so I think we're out of time. Thank you everyone for joining. I hope this has been informative and useful.
And yeah, see you in about a month's time. Glad to see everybody, and thank you all for your time. Thank you. Awesome, thank you. Thank you very much. Yep, thanks. Bye bye.