Hello. Hey, everyone. Hey, Daniel. Hey, dude. Good, doing rather well, though I'm going to try to keep the wisecracks to myself. It's weird times, right? All around. I'm bound to get myself in trouble with a poorly done joke. I think the last time I bumped into you, Lee, was San Diego. Sunny at times, right? Yeah, it was. Boy, I can say that the weeks of KubeCon are a blur. They are. They're amazing, aren't they? But they are a blur. By the way, this is a CNCF-hosted call, and as such we record the calls and post them for the community, et cetera. I say that to say that, since we're on the record: I understand why, but I can't help still being disappointed that we're not meeting in person. I'm really jonesing for some face-to-face interaction, some sharing with others, so I'm really hopeful for at least a physical conference or two in the next six months to a year. Yeah, it's going to be difficult. I'm looking forward to KubeCon EU; we've got a bunch of presentations lined up for that one. Virtual will be it, but that should be good fun. Trying to recreate some of that community vibe in the Slacks and on Zooms and things. The hallway track just hasn't been the same virtually; I haven't seen it done well. It's hard. That actual connection is very hard to recreate. Well, let me put a link to the meeting minutes into chat. Fair enough. Who else do we have here? We've got Luke, Nikolai, Watson, Amy, Simon, Jonathan, Matt, David. Very good. We're about three minutes after now. The meeting minutes are a community effort, so please don't be shy.
If you're on the call today and your fingers still work, go ahead and put your name into the attendees list there. We've really just got one agenda item today, which will give today's maintainers and presenters some comfort and some room to tell us about Ambassador, to tell us about their project, and hopefully won't overflow this SIG's plate. So I'm pleased that we just have the one item on the agenda today. With that, we're a few minutes after, so let's go ahead and get going. Daniel is here with us, and so is Richard, and I think probably so are some other folks representing those that have put blood, sweat, and tears, I suspect, into Ambassador. Quite right, got a t-shirt to prove it. With that, Daniel, are you presenting today? Yeah, I'll be presenting. Can I grab the screen? Please. Can everyone see my screen okay? Okay, there it is.

So, for folks on the call who haven't bumped into me: my name is Daniel Bryant, product architect at Datawire. Richard Li, co-founder and CEO of Datawire, is also on the call. I'll be presenting today, and then Richard and I can take questions at the end. Datawire is the founder and the steward, if you like, of Ambassador at the moment, and this is our discussion around a proposed donation to the CNCF. The TL;DR: Ambassador is an open source API gateway for Kubernetes, powered by Envoy, very much focusing on the north-south use case. We've tried to make it as developer-focused and developer-friendly in the cloud native space as possible: custom resources for configuring your endpoints, your routing, and so forth. We support Kubernetes Ingress too; we're very much into open standards.
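To make the "custom resources for routing" point concrete, here is a minimal sketch of the kind of Mapping resource Ambassador uses to route a URL prefix to a Kubernetes Service. The field names follow the Ambassador 1.x `getambassador.io/v2` CRDs; the resource and service names are illustrative, so adjust them to your own cluster:

```yaml
# A minimal Ambassador Mapping: requests hitting the gateway under
# /backend/ are routed to the "quote" Service in the default namespace.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote.default
```

Applying this with `kubectl apply -f` is all that's needed; Ambassador watches for Mapping resources and regenerates the Envoy config under the hood.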
A hat tip to Alex Gervais on the Datawire team, who's been working hard on the Kubernetes 1.18 IngressClass support and the pathType field as well. And there's wide adoption in the industry: we're used by thousands of orgs, lots of Docker pulls, 3,000-plus folks in our community Slack, and over 130 contributors making non-trivial contributions. I'll run through some of those later on, but there are some fantastic people in the community contributing interesting discussions and interesting bits of code that extend Ambassador all the time. So, we are effectively a control plane onto Envoy. We are focused on the north-south use case, and we convert Kubernetes config, your Mappings and so forth, into Envoy config under the hood; Envoy is the data plane the traffic is routed through. I'm sure folks on this call have seen the Contour stuff and probably other gateways, so this is probably nothing new to you; I'll skip over that and move on to the perhaps more interesting content.

We've come a long way in three years. I think it was early March 2017 when we announced the project, and we had the 1.0 release early this year, which was super exciting, really pumped to see that. GitHub stars: 2.8K, because GitHub stars are the way to measure the success of any project, right? That's the standard cloud native metric. But it shows interest; it shows the appreciation of the supporters for the actual project. We've had 130 contributors and, I counted earlier today, 1,700 pull requests. If you want to read more about the journey to our 1.0 release and so forth, I'll put the blog post link there. It's been a fantastic ride. I've been involved for a whole bunch of this time; Rich has obviously been there from day zero. It's been just amazing working with the community.
We're looking forward to driving this forward in the future too. Core features: I probably won't labor this too much, because I'm assuming this crowd is pretty familiar with the Envoy features we layer on top of. We provide that ease of use, we provide that north-south use case, and we provide that ease of use for developers. Interesting things I will point out from the resilience side, if you look to the left: we have all your standard resilience features, but we've also had auth from very early in Ambassador's development. Initially it was a custom extension to Envoy; we then worked with the upstream Envoy community and helped form the ext_authz interface, and once that was upstreamed into Envoy, we changed Ambassador to use that well-established, well-agreed API. So we love working with the upstream Envoy community, a fantastic community all around. Rate limiting, for example, is done via the RLS proto as well. If you look at my blog, or our blog I should say, I've created Java rate limiting services and plugged them into Ambassador using the Lyft rate limiter example, so hat tip to the Lyft team there; I'm sure Matt's on the call somewhere. Observability: all the good stuff from Envoy. We have distributed tracing with Zipkin and Jaeger support, metrics with StatsD and Prometheus, and all the goodness with the logs. From the cloud native perspective, we have service discovery with Kubernetes Services, Kubernetes Endpoints, and also Consul. Consul has been really interesting from a hybrid use case standpoint: if the pod Ambassador is deployed in can route to somewhere on the network, you can route out of the Kubernetes cluster.
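The external auth and rate limiting wiring described above can be sketched with two Ambassador resources. These follow the `getambassador.io/v2` CRD shapes; the backing service names are placeholders for whatever ext_authz or RLS-speaking services you deploy yourself:

```yaml
# Sketch: point Ambassador at an external auth service (Envoy's ext_authz
# interface) and an external rate limit service (Envoy's RLS proto).
# "example-auth" and "example-ratelimit" are hypothetical deployments.
apiVersion: getambassador.io/v2
kind: AuthService
metadata:
  name: authentication
spec:
  auth_service: example-auth.default:3000  # your ext_authz service
  proto: grpc                              # or "http"
---
apiVersion: getambassador.io/v2
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: example-ratelimit.default:5000  # your RLS implementation
```

The point of this design is that the gateway only knows the Envoy-standard protocols; the auth and rate limiting logic itself lives in services you control and can write in any language.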
So, people on Google GKE and GCP use IP aliasing in their network; they have VMs running, they have GKE running with Ambassador and Consul in there, and we can use Consul for service discovery of IPs and ports, so we can route not only to Kubernetes services but also to VMs outside the cluster. That supports the lift-and-shift, hybrid model, which has been a super interesting journey, and a hat tip to the HashiCorp folks we've worked with quite closely there, a fantastic community as we all know. It's been a really interesting enabling use case for folks who are perhaps stuck with VMs, or not even stuck, they like the VMs, but they want to dabble with Kubernetes too. All the other stuff: we have zero-downtime config, and we use the primitives in Kubernetes to manage our state, so Ambassador pods can come up and down and the state is stored within Kubernetes itself. L7 support: again, hat tip to Envoy, this is basically building on all the great stuff in Envoy, nothing too exciting from our perspective, but obviously super useful. L7 routing is becoming a really big thing as folks move more towards microservices and APIs; there are more things at the edge, so being able to do clever L7 routing based on headers, based on JWTs and all manner of things, is really powerful, and we're just providing a nice API, a nice experience, onto all the power that Envoy provides under the hood. Our primary use case is in the API gateway space: traffic management, app security, app development, allowing folks to move at their own pace and do different releases. They can contribute their own Kubernetes files with the Mappings in, so different teams can go at different paces: all the good patterns we see with cloud native, all the good patterns we see with microservices, decoupling and independent release.
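As a sketch of the header-based L7 routing just described, a Mapping can also match on request headers, for example to steer canary traffic to a separate backend. Again, the `getambassador.io/v2` shape is from the Ambassador 1.x CRDs, and all names here are illustrative:

```yaml
# Sketch: only requests carrying the header x-canary: "true" are routed
# to the canary Service; everything else falls through to other Mappings.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote-canary
spec:
  prefix: /backend/
  service: quote-canary.default
  headers:
    x-canary: "true"
```

Because each Mapping is its own Kubernetes resource, a team can keep this file in its own repo and release independently, which is the decoupling pattern mentioned above.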
We do see some folks running multiple Ambassadors; that's a totally valid use case, often with an internal and an external Ambassador, like internal devs and then an external API offering on top, so they can change at different paces and so forth. We don't really encourage it, but we do see a hub-and-spoke model too, a bit of a service-mesh-lite type thing: if you're running a very shallow graph of services, maybe you've got the monolith and you're breaking out services, and as long as you're comfortable routing traffic around the outside, you can use Ambassador as that service discovery mechanism, if you like, or that routing mechanism. Config-wise, we have custom resources and Ingress config. We have Mappings and Hosts; for folks used to traditional routing with NGINX or HAProxy, there's probably nothing new there: you have your endpoints, your backend services, your hosts; you can configure your hosts, your TLS config, that kind of good stuff. And then from, I think, September last year, we added support for Ingress as well, so if you're comfortable with and like using Kubernetes Ingress, you're good to go there. As I mentioned earlier in the TL;DR, we're continuing support for open standards, so hat tip to Alex Gervais, doing all this fantastic work here with the rest of the team. We've released the latest version of Ambassador with support for IngressClass and the pathType field, and we've blogged about it. We're also, I think, in the final stages of getting a nice Kubernetes blog update on all the fantastic work Alex and the team have done, working with the wider community to implement this and to add our experience to it. It's been a fantastic journey, and Alex has thoroughly enjoyed working with the wider community to test out these ideas, give his opinions, and so forth.
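For folks who prefer the standard Ingress route mentioned above, the same kind of rule can be expressed with the stock Kubernetes API. This sketch uses the `networking.k8s.io/v1beta1` shape current as of Kubernetes 1.18, including the then-new pathType field, with the ingress class annotation pointing at Ambassador; the service and resource names are illustrative:

```yaml
# Sketch: a standard Kubernetes Ingress handled by Ambassador, routing
# the /backend/ prefix to the "quote" Service on port 80.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quote-ingress
  annotations:
    kubernetes.io/ingress.class: ambassador
spec:
  rules:
  - http:
      paths:
      - path: /backend/
        pathType: Prefix
        backend:
          serviceName: quote
          servicePort: 80
```

pathType makes the prefix-versus-exact matching semantics explicit in the spec rather than controller-defined, which is part of what the 1.18 work standardized.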
In terms of being proven and growing rapidly: we have many production deployments, just a handful picked out here, mainly because they've done great blogs and KubeCon presentations. As Lee and I were saying off-mic earlier, we've always loved the KubeCons, and a highlight for myself, Richard, and the whole team is people coming up to us at the booth, or coming up when we're doing talks, and saying, hey, we're using Ambassador, we're doing this. AppDirect, fantastic story; Ticketmaster surely needs no introduction. Chick-fil-A blew my mind, because Chick-fil-A runs a Kubernetes cluster in each of their restaurants around the US. I was chatting to this person at the booth, and he said, I'm at Chick-fil-A, and we run hundreds of Ambassadors, because we run an Ambassador in every Kubernetes cluster in each of our restaurants, kind of an edge use case. Just hearing his story of how they managed this stuff was fascinating. I love learning from production use cases; it feeds back into our design goals and into things we put back into the community too. In terms of community contributions, just highlighting our latest release, one release, I wanted to pull out a few interesting things. A hat tip to Procaro Joshi, who contributed preserving x-request-id headers when there are multiple Envoys in the stack. Great work; chat to him on our Slack. He worked with Flynn on the dev side. He works at Hotstar, which is the Indian, say, Netflix, and it's where Disney Plus is available over in India. He was loving the community, loving getting involved, and we really appreciate that kind of tweet shouting us out. Yes, we recognize the community; the contributing experience is super important to an open source project.
A couple of other hat tips: to Phil Pebble there, contributing the setting for the Envoy shared-memory base ID, allowing multiple Envoy proxies in a pod. So if you've got Istio there, which was his use case, and you're running Ambassador too, that works. That was a great bit of work, lots of fun chatting to Phil on the Slack, a really big and interesting piece of work. A shout-out to Noah Fontes from Puppet. Puppet have Relay.sh, which is their take on GitHub Actions or Argo, that kind of thing, and Noah has been doing fantastic work on the Knative support in Ambassador: some performance improvements and support for path and timeout options in the Knative gateway, and he's working on some blog posts. That's just a fantastic story all around, and we really enjoy seeing more and more advanced contributions to Ambassador from the community.

Roadmap-wise, Rich and I thought about breaking it down into two strands. From an experience point of view, we're all about making it easier to use. A common complaint, I think, that we hear across the cloud native stack is that there are just too many things, and once you've made your decisions, they're often quite complicated to weave together, so we've worked really hard to make it as easy as possible to get up and running with your Mappings and your Hosts. We're looking to improve documentation. That's again a common theme throughout the CNCF space; good docs are worth their weight in gold, and we're always looking to make them better, and community contributions are fantastic here.
More tutorials, more integrations, improving the contributor experience. Although we have plenty of contributors, we'd love to get some insight and guidance from the CNCF on how to make this even better, because if we're all trying to drive forward the innovation of the north-south use case for Envoy, we think we're in a great position to do that, or to help do that, but we'd love input from the CNCF to make that happen. Features-wise: Wasm support. Obviously in the Envoy space, and in general, Wasm is super hot, and we'd love to look at that on our future roadmap. A caching API is coming on the roadmap. IP allow and deny is a very popular request on our GitHub and a very popular discussion point on Stack Overflow and in our Slack and so forth. And we're continuing our support for emerging standards, like the Service APIs mentioned earlier on, and the Ingress features.

So, for the CNCF donation, we're going for an incubation proposal. Ambassador is a mature project, and we'd love to advance the north-south use case for Envoy. I don't need to preach too hard here; we all know and love Envoy, and we think the north-south use case, the edge proxy, the ingress part of your cluster, is a really important part of that. We're proven in production: thousands of deployments, lots of interesting use cases we've linked to in the deck, and we can share more. We're all about driving cloud native best practices for Kubernetes ingress, and I think that gels nicely with the CNCF goal of promoting the cloud native experience: the architectures, the operational models, and how we configure things is a big part of that. We're really invested in cloud native config; we love GitOps, and we frequently chat to the Weaveworks team about these things. Self-service, comprehensive integrations into the CNCF ecosystem, metrics, all these kinds of good things too.
And I should mention: we are focusing on the north-south traffic management use case, but we're making it as easy as possible to integrate with east-west too. We have integrations with Linkerd, with Consul, and with Istio. We often find that folks get on board with, say, the north-south use case: they spin up Kubernetes, they get some CI and some CD in there to deploy their containers, and the next thing they need is to get traffic into that cluster. That's the ingress. When they start doing more microservices, they often want to move towards a service mesh, so we're looking to make that on-ramp as easy as possible for them. The community has been fantastic; it's been humbling being part of this. We'd like to make it better, and again, that's where we'd love the CNCF's guidance, the CNCF helping in this space. But we have plenty of Slack users, lots of contributions, and multiple KubeCon talks, some we don't even know about sometimes. A few years ago we heard the Knative folks talking about Ambassador, and we were like, hang on, you're talking about Ambassador? This is fantastic. That's what then triggered even more work around Knative too. So it's just been humbling to see folks building on our technologies, and we'd like to help them do that even more.

The asks of the CNCF: we're looking for a vendor-neutral home to grow the Ambassador community and to grow the north-south use cases for Envoy and Kubernetes. We'd love help with CI and CD, probably more on the CD side, continuous delivery infrastructure, and assistance improving the docs, the how-tos, the general onboarding experience, the general contribution experience. Because we recognize that if we're trying to achieve all these goals, driving forward the north-south use case for Envoy and Kubernetes and all the cloud native tech, it's all about making it easy for developers, particularly the late majority, the late adopters. This is a big hurdle for them.
So we'd really love some guidance and some help from the CNCF on how to improve this experience for folks looking to get traffic into their Kubernetes clusters. At that point, I should say thanks for your time. Happy to take any questions; Richard and I can jump in. And we'd love to have a chat about sponsorship too.

Thank you, Daniel. This is great. I've got a couple of questions, but I'll bite my tongue for a little bit to solicit questions and comments from others on the call. If you're, like me, on UK time, and I know it is late, I've had a couple of extra coffees, so I'm good.

This is Watson. I have a question. Are you guys building on ARM?

We are not currently building on ARM, although we're starting to see a couple of requests for that. It's something we'd like to do, but we haven't actually looked at it beyond a theoretical perspective. It doesn't seem like it's super hard to do, but we just haven't done it. But we love community contributions.

That's a good point. Thanks, Watson. Go on, Lee, you're reaching for a question, go on.

I'm leaving space for others. Very hard practice for me. There was a lot of strength in that silence just then. So, I'm pleased to see, and to confirm, that the proposal here is at the incubation level, which is appropriate. The thousands of deployments, the numbers around that: how are those measured? How are those confirmed?

I think it's a little bit of triangulation. Based on the number of folks we have in the Slack, and we get some data from Docker pulls; people pull all the time, so it's a little hard to say. We actually shifted from quay.io to Docker Hub because of the persistent outages, and per week we had half a million pulls.
So, based on just general triangulation and based on what we hear, that's our best estimate, but we don't have anything super precise that we can share.

Fair enough. Do you have a perspective on other projects or alternative tools that people use in the space? Contour is an example of one of those. How and when do you find that someone is drawn to Ambassador versus other alternatives?

I think when people want to extend their ingress for broader use cases, that's a big one. We've actually had a lot of users migrate from Contour to Ambassador because Contour hasn't supported authentication, and authentication is a pretty common requirement at the edge, particularly in the enterprise context. You can argue whether or not it belongs in an ingress controller, but the reality is that if you're exposing something over the internet, you probably want authentication. So we take probably a little bit more of an extensible approach in terms of exposing things like the rate limiting and authentication APIs. I think the other area where we've historically tried to do a bit more work is integrating with other projects, and we're not super opinionated about that. If you're a cloud SaaS thing, fine: some folks from Datadog contributed an integration with Datadog, and they're obviously not open source, but we also had folks who contributed Prometheus support and a Grafana dashboard. Because there are so many things you have to install to get Kubernetes working the way you want, we've worked pretty hard to make it easy to integrate with all the other stuff you need.
So I'd say that's probably it; there are a ton of how-tos on our website around Istio, Linkerd, Consul.

Yeah, I'm interested in that.

So I'd say that's the other area. I know the Consul folks linked off to a couple of community-driven comparison matrices as well. We keep an eye on those. They're not 100% accurate sometimes because, with the pace of the community, stuff does change a lot. But we're actually working with some folks now, I've forgotten the names temporarily, who reached out to us and said, hey, can you give us an update on what Ambassador supports? We can ping you the links to those as they get published. It is interesting: folks outside our community are keeping an eye on all these things, the checkbox stuff in some ways. L7 support, L4 support, SNI: check, check, check. And then we always add extra comments on top in terms of, as Richard mentioned, which integrations are super important. There's additional value over the checkbox stuff, but the checkbox stuff is covered quite nicely by external folks.

Yeah, thanks for that. I'd be interested in those links if there are some now. It's even more helpful when it's third parties putting together perspectives.

Yeah, making sure they're correct is the key thing.

I'm going to give some quiet time for others that might have questions.

Hi, Janyal. Yes, can you hear me? We can. I have a question. I have in the past seen Contour demoing the things they have done, and they have a very good use case: you have an ingress controller and fully qualified domain names. In native Kubernetes Ingress, you can't use the same fully qualified domain name for two separate Ingress services, but using Contour you can achieve that.
It's a fairly unique use case, but people often ask about it: say you have an ingress controller, I have www.test.com, and you have another Ingress service with the same www.test.com but a different path, like slash products. Native Kubernetes Ingress has a limitation there unless the domain names are unique, but using Contour you can do that. So is that a thing in the Ambassador API as well?

So, Sam, this is Richard. To make sure I understand, you're basically asking: can you have multiple TLS hosts on the same ingress controller? Is that correct?

Yes, that's the case I meant.

So the answer is yes. You can have multiple different fully qualified domain names. The reason a lot of ingress controllers don't necessarily support this, I think, is that it's not really part of the Ingress spec. A common use case would be: you have two different domains, www.a.com and www.b.com, that you want to host on the same Kubernetes cluster with the same ingress controller, and you also want TLS for both. So you need to support SNI, and then based on the SNI headers you match the correct host and return the right certificate, the right TLS config, for each host. That's a use case that we do have people using Ambassador in production for.

I'll bring up the topic of Telepresence, and acknowledge that this isn't the maintainers' first project. Richard and Daniel, and by the way, are there other maintainers of Ambassador on the call?

Don't think so. No, not tonight.

Okay, very good. I wanted to make sure I was being inclusive with my statements. But I think it's probably important to note, as we're reviewing and considering Ambassador, that Richard and Daniel and the other folks that steward Ambassador have had some experience with already having a project in the CNCF. Clearly, since you're looking to donate another one, that experience has gone well.
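Coming back to the multiple-TLS-hosts question above, the per-domain certificate selection Richard describes is expressed in Ambassador with Host resources, one per fully qualified domain name; SNI then picks the matching certificate at the TLS handshake. This sketch follows the `getambassador.io/v2` Host shape, with ACME disabled and certificates supplied via Secrets; hostnames and secret names are illustrative:

```yaml
# Sketch: two Host resources serving different domains with different
# TLS certificates behind a single Ambassador deployment.
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: a-com
spec:
  hostname: www.a.com
  acmeProvider:
    authority: none      # certs provided manually via the Secret below
  tlsSecret:
    name: a-com-cert
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: b-com
spec:
  hostname: www.b.com
  acmeProvider:
    authority: none
  tlsSecret:
    name: b-com-cert
```

Mappings can then be scoped to a hostname, so www.a.com and www.b.com get independent routing tables on the same cluster and ingress controller.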
Any reflections there? Anything you're looking to do differently?

I think we've had a positive experience with the CNCF. Telepresence is a sandbox project, so I would say the engagement level with the CNCF for Telepresence has been, I'd characterize it as, a little more arm's length, which is perfectly appropriate given its sandbox nature. And I think we've also been caught up in the vortex of "what is sandbox," which seems to be a thread that re-emerges on the TOC every few months. So I would say we have higher expectations around incubation. Ambassador, I would say, just from a community and breadth-of-adoption standpoint, is more mature than Telepresence, so we think incubation is appropriate, and we would expect just more engagement with the CNCF community around Ambassador. And I would hope that the community pushes us in directions we may not have considered; we consider that to be a good thing.

One thing I'd definitely add is the facilitation aspect, Lee: going to the KubeCons, doing the presentations in the maintainer track, just meeting folks has been fantastic. I did the 101 talk at San Diego and I've done the other ones as well, and that visibility even at the sandbox level was great. To Richard's point, I definitely think with incubation there'd be even more of that. But that alone has been fantastic; it gives us a platform to talk about what we're doing, to get input, to have those hallway and breakout sessions where we can share and learn. Just seeing that part of the CNCF has been fantastic for me.

Yeah, very good. On another topic: Datawire has enterprise offerings around Ambassador. Is it fair to characterize Ambassador as an open core project?

We basically have a non-open-source sidecar that you can deploy in the same pod as Ambassador that provides additional enterprise features.
Our general belief is that we want to make sure the open source component has public APIs that are all documented, and our sidecar uses the exact same APIs. So folks can choose to replicate all the functionality of our sidecar, which includes OpenID Connect and OAuth and all this other stuff, or, and we have lots of users who do this, they can just use the APIs in the open source and build their own authentication service. That's how our business operates.

Very good. With that proprietary sidecar, you know what, I've been in service mesh land so much. Let me ask it this way: does the way the sidecar works add a network hop, or is it in-memory?

The sidecar is deployed in the same pod, and we usually actually package it in the same container because it's just easier. So it is over gRPC, but it's over localhost. There's no network hop per se, but it is a separate process.

Okay, I think that's a fair response, considering I used the word hop there. Very good. Other questions?

Yeah. I noticed on the slide you said that you wanted maybe some assistance with CI/CD from the CNCF; that was one of the asks. What were you all looking for there?

A few things. One, I'm not sure what the CNCF is planning on doing in terms of supporting ephemeral Kubernetes clusters. We have a pretty comprehensive regression suite, and we run the full regression against Kubernetes clusters. So it would help to support that particular use case, where you spin up an ephemeral Kubernetes cluster and run our full regression; we have thousands of tests at this point, and we also run performance regressions.
There's a lot there, so supporting those use cases in cleaner ways would be very helpful. And then multiple Kubernetes versions as well. Right, multiple Kubernetes versions. And then, to the earlier question, it'd be great to support ARM. I know there are different CI providers that support ARM, but we would need CI that runs on ARM.

Okay, thanks. I guess there are some provisions in the CNCF to probably get that. You would probably need to do the work yourselves, but as far as Packet clusters, Packet resources: you could have ARM machines as well as regular AMD machines, and you can install your own Kubernetes clusters there. There are plenty of resources there, and you should be able to get some type of permission for that. As far as assistance with porting things to ARM, I don't know how much assistance you can get for that.

Yeah, of course, help porting to ARM would be great, but it's more that we can't be the only project that runs regression tests of software that gets deployed in Kubernetes clusters. Insomuch as there's infrastructure or projects in that vein, those are things we would definitely want to take advantage of.

Good. Given that we have a little time, maybe some additional details around the number of contributors, the diversity of the maintainership, the project governance. What else can you say there? Is the roadmap public-facing? How does the community run?

I'd say it's very much a work in progress. A lot of the things we've added come from people filing GitHub issues; people tend to vote up popular issues, and that tends to be a big source of prioritization for us.
And then people sometimes just show up with a pull request, and that's great. That's how we learned about Noah working on Knative: he opened a pull request, and it was a pretty sophisticated pull request, and we had to get on a couple of calls with him and go back and forth before we could get to the point where we could land those changes.

We have a developers channel, so all the chatter from both our internal and external developers is on a public channel. So we have that. And then, and we'll probably need to formalize this, but essentially the folks who have become maintainers from outside Datawire basically just jump in, and it turns out that they know more about some part of the system than we do. So we're just like, okay, well, we're not the best people to actually review this code, so do you mind overseeing this part of the code? And that's what happens.

Got it. Understood. The frequency with which the community meets?

We don't have formal community calls, right? So it's a little more ad hoc, and most of it just happens through our developer Slack channel. And then for complicated things, we just hop on the phone with a contributor to work through issues that would take too long to work through on GitHub. And we do get together at KubeCon. That's fantastic, like the hallway track again. We had a dinner last year in San Diego with a bunch of folks; the GoSpotCheck folks came along and we were chatting with them. So that's kind of a touch point in real life, which obviously we're missing at the moment, but we're more than happy to extend it into the virtual world as well via Zoom. Nice.
And then to clarify, of all of that, there's an Ambassador operator and maybe a few other things as well that I'm not familiar with. To clarify what we're looking at donating here: which repositories, what all is this?

I mean, logically it's the ambassador repository, which is sort of the main repository under Datawire. I haven't talked to some of the folks who know better, but I'm guessing there are probably some assorted dependent repositories that would need to move. The primary repository would be the ambassador repository, and that's where all of our documentation sits, and the code, and all that sort of stuff. There's also an Envoy repository that I think we would probably move. So there are a couple of things there, but yeah, the main thing is the ambassador repository.

Got it. Out of curiosity, the Envoy repository, what does that have?

We maintain a couple of patches to Envoy at any moment in time that we're still in the process of upstreaming. It's also where we, well, it's actually not a public repository, or some parts of it are not public, because when there are embargoed Envoy security patches, that's where we do our work. So it would not necessarily be public. We test with embargoed Envoy, and then we'll generally release more or less the day the embargo is lifted.

That makes sense. Matt, if you're on, comments, thoughts?

Nothing specific. I think it would be a great addition to the ecosystem, for sure.

Jason? Fair enough. I think I might have run dry. Anyone else have questions for Richard and Daniel?

No, but I have a different rule: if we hear the cat, we must see the cat. I thought I heard a cat too. I wasn't going to say. That's fine. No, this is good. Thank you. Sure, sure. Okay, one second. You said it's super cute, Matt.
It just meowed on its own. I did say it's super cute. Hold on, am I here? Okay, here's my cat.

Oh, awesome. Awesome. Yeah, that's exactly what I'm saying. That's totally worth it. That's worth recording for the CNCF, isn't it? Yeah. Oh, fair enough. Very good.

Gents, thanks so much for spending the time. The presentation was fantastic. Thanks for having your house in order, so to speak. That makes these easier.

Thanks for the guidance from yourself and Amy there early on. As in, we definitely cribbed from some of the other examples shared, so I appreciate the input there. Super useful. We wanted to deliver something as useful as possible to yourselves, so it was really useful getting that insight from yourself and Amy.

Good. Thank you. With that, Matt, myself, and others in the community will be in touch to begin the due diligence.

Nice. Great. Okay. Yeah, I was confused at first. Just before the call, I thought you guys were filing for Sandbox, and I thought how inappropriate that would be. Definitely incubation. Fair enough.

Well, that was our one agenda item for this meeting, and so that's it, folks. We'll see you next time. Thank you. Thanks, everyone. Good to see all of you. Be well. Bye, guys. Yeah.