That fire behind you looks lovely. I'm actually at a University of Minnesota facility. This is the alumni center, and there's a fire, and there are also all sorts of skylights over in the presentation area, which is sort of fascinating. Very cool. I was gonna ask if that was a virtual fire or a real one. I think it's a gas fireplace, but it is producing a little bit of heat. I found one of the only quiet places to sit because lunch is still going on. Central time. Hey, Bridget, long time no see. All right, hello all. Good, Bridget, it's a comfort to hear that someone else is in the central time zone. I was beginning to think the world just ran on Pacific. I mean, the fact that it's 1 p.m. and everyone thinks it's morning is evidence of that, I think. Yeah, that's right. Nice. Oh, Nicolay beat me to it, very good. So a link to the meeting minutes is in the chat. I might repeat this a time or two, and Matt may as well, but this SIG is just forming, just getting underway. As we end the call today, it will be launched. We have done some earnest consideration, so I'm excited for today's agenda. This is great. By the way, my name's Lee if I haven't spoken to you before. Matt Klein is on the call as well. Matt is the TOC liaison for this SIG, so in some respects the sponsor for the SIG, if you will. Ken Owens is also co-chairing this SIG Network along with me. As we've begun to establish practices for the meetings, not dissimilar from many other open source calls that you're on, the meeting minutes are a collaboration, much like the work that you do out in open source land. So if you don't mind, record your attendance today. Just two items, if we can; I'm hopeful and assuming that we can get them both in. And those two items are trying to address one of Matt's top concerns, actually, and concerns of others.
And that is the fact that there's a mounting backlog of projects proposed for incorporation or donation into the CNCF that's been building for some time. The two that are up for initial presentation and feedback review today are Project Contour and Service Mesh Interface, or SMI. I'll try to speak and type at the same time a little bit if I can. I know we've got representation today from Project Contour; I was just having an interaction with folks representing SMI, and I see some of them on the call now, very good. Some of those folks may even be representing both of the projects, I'm not sure, but we're covered either way. So very good. It's five after; hopefully that sets some pretext. Many of you are familiar with these types of presentations and reviews. Any additional context, Matt, that you might wanna set before we invite the Contour team to present? I don't think so, I think you did a great overview. Sounds great. Okay, very good. So, Mr. Michael. Yeah, hi everybody. So this is kind of the first time we're looking at this new way of presenting projects to the different SIGs, so I'm not sure I know what to expect from your team. Has anybody had a chance to look at the PR that we filed for Contour that has some of the relevant details? There's a link in the meeting minutes as well for that. Yes. If you haven't, that's okay; we can go over it very quickly. Essentially Contour, and we're gonna show you some slides and talk a little bit about the architecture, is an Ingress Controller built on top of Envoy. We're very active in the Kubernetes community, and we have a mission to make this one of the best Ingress Controllers for Kubernetes: scalable, available, secure. Our goal is to get it adopted by the CNCF and donate it at the incubation level. We have a couple of CNCF TOC sponsors already: Alexis, Matt, and Joe Beda.
And we were redirected to SIG Network to make sure that you do some due diligence on Contour. I know that for graduated projects there's this big technical due diligence document that needs to be created. I'm assuming we don't need to go to that length for incubation, but we wanna find out from you what it is that you wanna see from us beyond the presentation. And actually, let me apologize up front. I think, Michael, you'd asked this question of me a few days ago about items to bring and things to prepare, and I didn't get back to you. We actually have something of a template for the things that we're looking for, and I expect that what you've sent covers those things in one review. I'll send out that template right after the call because it'll be really helpful. Yeah, that'd be great. If you can email me that template, I'll make sure that we provide all the information that's being asked of us. Nice. So I'm gonna go through this presentation. We have a few team members here as well: we have Steve Sloka, we have David Cheney and Tim Hinderleider. Different folks will talk in different areas where they have expertise; it's better to hear it from the actual folks that built certain components of the architecture. But I'm assuming you can all see my screen. We can. And actually, Michael, two quick things to set context for and just help provide clarity on. One is that, and I'll caveat a lot of things that I say here with "unless I'm mistaken" or rather "it's my understanding that": projects that go into the incubation stage at that point do go under the full due diligence for projects. So beyond this presentation there are a number of other items for due diligence. I think that clearly there's a distinction between incubation level and graduation, but I think that the diligence is the same. No problem, and we'll execute.
When you send me the requirements in that template that you have, we'll fill them in, and I think most of those map directly to the diligence as well. Nice. So with that, I'm going to pop over a link to that due diligence note about the incubation stage; I think it applies at that time. The second note: other than Ken, who might be on the call now or at some point, he might be one of the only others that recollects this, since we were around at the time. Prior to CNCF SIGs, many of you were in some other working groups, and one of them was the networking working group, and CNI, the Container Network Interface, actually presented in that working group in advance of presenting to the TOC. I think it was the 10th project to come through, if memory serves. So at least in my experience, there's some precedent for funneling in this way, but you guys are certainly one of the first to be presenting in the context of a SIG. And I'll say that even the template that I'll send after this is just being formed and beginning to be standardized across the rest of the SIGs. So. And we meet all of these incubation stage requirements; I'll talk a little bit about the customers later on, but we meet all of the requirements. I've seen this before many times, since I'm also trying to get Harbor to the graduation stage. So we'll create a document and work with you to produce the due diligence doc and answer your template questions. We'll do that within the next week or so. Okay, so let's talk a little bit about Contour. Our website is projectcontour.io. Like I mentioned earlier, we're an open source Kubernetes Ingress Controller, built on top of the Envoy edge and service proxy.
And we are built to support dynamic configuration updates, which works really well for multiple teams that want increased delegation, as well as making sure that delegation is done in a very secure way. Our mission is to be the most secure, performant, scalable and available Ingress Controller for Kubernetes. And we're also looking to expand on that as we're having more and more scenarios that are working within the Envoy community. From an architecture standpoint, I want Steve, who created this diagram, to walk you through it really quickly. Steve? Sure. So, like Michael said, Contour is an Ingress Controller for Kubernetes. Its job is to look at the cluster and watch for resources: Services, Endpoints, Secrets, Ingress objects, and then some custom CRDs that we've built in the Contour project. When it sees those changes, it passes that configuration down to Envoy. Envoy is the data path component, so all traffic routes through Envoy; it handles all of the actual data proxying and heavy lifting of network traffic. Contour is designed so that you need to attract requests to Envoy, and that can be done in a number of different ways. Typically you have a load balancer in front, as in some sort of cloud environment, but in a different type of environment, as long as you can attract traffic, this will work functionally well. In any case, traffic hits some sort of load balancer and then gets sent to Envoy, and the routing decision of where traffic can go happens there at that Envoy layer. And again, when changes happen, Contour processes the configuration and passes it down to Envoy. That connection is over gRPC, so it's a rich data connection; we can update Envoy without losing connections from requests coming in. Is that enough? Yeah, thank you, Steve.
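To make the watch-and-push flow concrete, here is a minimal sketch of the kind of Ingress object Contour watches. The names and hostname are illustrative, not from the discussion; the shape follows the networking.k8s.io/v1beta1 Ingress API that was current at the time.

```yaml
# Illustrative only: a minimal Ingress of the era (networking.k8s.io/v1beta1).
# Contour notices this object, along with the referenced Service, Endpoints,
# and any TLS Secrets, and streams the resulting route configuration to
# Envoy over its gRPC xDS connection.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
  namespace: default
spec:
  rules:
  - host: example.local            # hypothetical hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc # hypothetical backend Service
          servicePort: 80
```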
Does anybody have any questions on this at the high level? Nope, it's clear to me. All right, moving on. And just to give you a little bit more visibility here, if you were to view a Contour deployment in a Kubernetes environment, you'd see that we have an Envoy DaemonSet that deploys the Envoy pods, and then we also have a Contour deployment that handles everything from the cert generation jobs to how the secrets are mounted into the pods to secure the communication. And then we watch objects like Ingress, Services, and Endpoints, and we have HTTPProxy, which is our latest API, but also IngressRoute from the past, which we're deprecating. I don't think we need to go into more details here unless anybody has any questions. All right, so from a project overview: Contour started at Heptio near November of 2017, over two years ago, and we released our 1.0 release in November 2019, essentially signaling to the community that we're entering a stable, backwards compatible API release of Contour. From an implementation standpoint, we have five plus product implementations. Some are commercial products, like the ones that VMware is publishing, like Essential PKS, and a couple of others that are coming down the line. Right now we've just completed an integration so that Knative can offer Contour as an Ingress Controller for Knative, and Flagger has also done an implementation with Contour to provide it as an Ingress Controller within their portfolio. We don't have entirely complete statistics in terms of the contributing organizations, since we're not in the CNCF yet and don't have access to their stats. So some of this data needs to be gathered manually, and we're working on gathering all of that, but we have over 100 plus community members.
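For reference, a minimal HTTPProxy, the newer API mentioned above, might look like the following sketch. The names are made up for illustration; the shape follows Contour's projectcontour.io/v1 API.

```yaml
# Illustrative HTTPProxy, the successor to the deprecated IngressRoute.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example
  namespace: default
spec:
  virtualhost:
    fqdn: example.local      # hypothetical hostname
  routes:
  - conditions:
    - prefix: /
    services:
    - name: example-svc      # hypothetical backend Service
      port: 80
```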
Looking at some of our high-level project statistics: we have over 2,100 GitHub stars, 80 plus contributors on GitHub, 319 forks, and close to 2,000 clones, and those clones are done by almost 200 individual GitHub users. We have four maintainers, two of them here on the call, David and Steve, and we've had 42 releases. There's no clear way for us to track downloads yet because we also have some testing engines that are pulling images, but we're gonna work on getting a number for everyone as well. We have 480 Slack members and a ton of Slack messages, but some of them are archived and it's very hard to actually get the full number. Over 500 Twitter followers, 2,000 commits; we've had 10 blogs, and I'm trying to get a list of all our KubeCon talks and EnvoyCon talks, but it's about seven in the last couple of years. We've had 1,200 PRs, 8,000 GitHub views and close to 2,000 GitHub unique visitors. So as you see, we have a community, and we're gonna produce some charts as part of the due diligence document that will show there's a stable number of commits happening over the last two years on Contour. So the momentum of the project has been stable and increasing over time, and we're also gonna show that the number of contributors has increased over that same time span. That's to show you that the project is in good health; it's alive, well-funded by different organizations, mainly VMware, and it's here to stay. Any questions on this? This is fantastic, by the way. This is no small work, not only doing all this but then summarizing it. Let me poke around a couple of these things, not because the answers to them are necessarily gonna matter one way or the next, but it's just a good point of note: of the forks and the clones, are those indicative of the number of, like in the case of clones, is that sort of the number of downloads?
If you will, is that how the project installs? The forks could be an indication of folks that just wanna clone the project, look at the source code, view it in their own repo, and maybe make some changes. Specifically on the forks, we know of at least one or two organizations that have forked Contour just because they wanted to add some of their own things, and they wanna push them upstream later on. Some of those organizations don't wanna be named publicly as either using or contributing to Contour; you can think of financial institutions falling into that category. So the best way for them was to fork it and start working on their own fork for now. Understood. And so then, I guess the question mark on downloads is: what's the most common deployment model, or the metric, if there ultimately is one, that you would use to indicate the number of downloads or number of deployments? Yeah, so for other projects, the way it's done is that if they have a Docker image on Docker Hub, they grab the download number from there and post that. I don't find that to be reliable, which is why I'm not doing something like that. Other folks have a Google or AWS bucket where they post their binaries, and that's how people download them, and they post that number. In general, downloads, if you can find the true number, which is very hard, is an indication of deployments, but if someone tells you that they've figured out the golden formula for this, they're likely lying. There is no easy way to get that. No, we don't have telemetry that reports back to us how many folks have installed or something like that. Since that doesn't exist, it's very hard to actually pinpoint the number. We'll give you a rough estimate on the number of downloads of our binaries, but I don't know how realistic it is or how much stock you can put into that.
I mean, to be clear, we use Docker Hub to publish the images, and we could put the number up there, but it's a lie. Yeah, that's why I didn't want to put it. I was trying to figure out a better way to provide that number, but in the absence of that, we'll eventually go with Docker Hub and I'll put a caveat there. We just want to be honest here. I'll give you an example from Harbor: one of our images has one million downloads. I know for sure I don't have one million deployments. So those numbers lie. Absolutely; telemetry is a pervasive problem for usage analytics for most open source projects. It's a shame that it's not just more commonplace. Of the four maintainers, can you speak to their organizational affiliation, and then their focus or responsibility around particular components of Contour? Yeah, so on organizational affiliation, they're all VMware employees today. And I'll let David, who's our technical lead for Contour, talk a little bit about their areas of expertise and areas of contribution. David. Hi, my name's David. I was the first engineer who worked on the project; I started it in about September 2017. My background: obviously I've spent some time in the Go world, but before that, in a previous life, I was a systems administrator, SRE, DevOps, just admin, all those things. I spent a lot of time working with a bunch of load balancers, because most of my previous roles were B2B or B2C type e-commerce sites. So lots of HTTP stuff, and I've worked with almost all the popular open source web servers: Apache, NGINX, Lighttpd, Cherokee, all of them. Yeah, Michael, should I speak for the other three? Yeah, just talk a little bit about the areas that they're working on in Contour versus their background, which will be harder to produce. Oh, of course, of course, of course.
Gee, to try and put people in boxes; the kind of facetious way to say it is that we are the four experts in this project in the world because we've worked on it the most, but that's not particularly useful. We generally work on individual feature streams; this remote team tends to break down better that way. We have, at the moment, James, who's not on the call, focusing mainly on our integration tests. Testing is always a challenge, but James is very passionate about improving our integration tests so that when we get a new release of Envoy, rather than just "it seems to work", we can be very, very sure that it works. Steve is focused a lot on individual features, the features focused towards the customer. He's been working on things like upstream features, one that he's really wanted to work on for a long time, and we'll land it soon. And Nick is the kind of point person on Ingress, working closely with the Kubernetes SIG folks, because one of our goals, part of the thing that came out of adding our HTTPProxy support, is that Contour has the ability to consume multiple ways of describing Ingress and CRDs. We're not closely tied to any one Ingress document, and that gives us the ability to integrate new ones. So as we go from Ingress v1beta1 to Ingress v1 and beyond, we have the ability to integrate those at a reasonably low cost. And part of the commitment we've made to those folks is: we want to be your first user; forget beta, we want to be your alpha user. As soon as those types are available, we will integrate them in Contour, and we can scratch at it and see if this actually makes sense.
Very good. And actually the initial question was just aimed at two things: assessing the bus factor, if you will, for the four maintainers, and then also assessing the affiliation of those maintainers and the governance of the project, the decision makers for that maintainership, and the healthiness of the diversity of the maintainers, I guess is what I was after. Yeah, absolutely. So today the diversity is mostly VMware, but we're a very open and welcoming community. We have community meetings multiple times a month, which are open and at flexible time zones, so other folks can come in and contribute. For example, Matt Moore from the Knative community just came in and added Knative support into Contour in a matter of about three, four weeks of work, just engaging with our community and getting that up and running. The Flagger folks did their integration with very minimal interaction with us. We welcome more maintainers if other folks wanna come in and contribute to the project, and we have a flexible governance to enable them to come in, contribute, and have a seat at the table. And I wanted to mention one more thing before we leave this slide, on the KubeCon talks: Steve Sloka and one other person from our team had a presentation at KubeCon that had almost 10% of KubeCon attendees signed up for it. That was huge; we almost had a thousand people signed up for that presentation. The room could not accommodate that many, but there were a lot of folks, more than 500 people in the room. I don't know how many, but it was a lot. I just wanna chime in on contributions; this is something that I'm personally very, very passionate about. Part of the way to do that is that I have a very strong policy: we do everything in GitHub. We try and keep as much as possible in the open. We're trying.
Sometimes it's very easy to fall back into old habits, but we're actually trying. We have a Contour channel inside VMware, but we try and use the public one on the Kubernetes Slack, even for developer chitchat, just talking back and forth about: what's this bug? Did you break this? Did you see these things? We try and do as much as possible in the open. Recruiting Contour contributors is hard; keeping them is about 10 times harder than that. So making everybody we can feel welcome is crucial. So let's move on to the customer profiles really quickly. Contour is being used in production and pre-production staging at many different customers. We have a GitHub customer testimonials link up on top; there's only one testimonial there. Our problem is that a lot of customers using ingress controllers don't wanna talk publicly about what they're using for the front door of their Kubernetes clusters. We have a major financial institution that has made Contour the default ingress controller for all their Kubernetes clusters, and they have a lot of them. We have one of the leading online marketplaces that uses Contour in production today. Knative, as I mentioned earlier, is gonna provide an option for Contour to be an ingress controller in their product portfolio. We also have Cilium, and I included a link there where they talk about Contour. And then Flagger also has an implementation with Contour as the ingress controller. Furthermore, Adobe had a presentation at KubeCon 2019; it was a lightning talk. I'm referencing that because they're big users of Contour, and they talked about the architecture and how they've implemented it within their infrastructure. So they're also big users of ours, and we work with them directly. We're gonna open up for questions now, since I know we have two minutes until the half hour and there's one more presentation coming up, but please ask questions.
We'll finish the template, get back to you very soon with that, and we'll also publish the due diligence document for your viewing. This is fantastic. This is the last question for me, and maybe Matt or others have some. And that is, maybe between Matt and yourselves: what is the consideration and thinking around Contour as a sub-project of Envoy versus a separate project? So we've had that conversation. I'm gonna let Matt explain it in his words, because he did talk about Contour being a sub-project of Envoy before. Yeah, it's something that we talked about. I think it's certainly an option. From my perspective, speaking more from the Envoy side of things, it would be complicated, in the sense that we don't have any process yet from the project perspective to take a project like this in. We've done some project adoptions, but this would be of a different type and scale than before, so we'd have to develop all those procedures. And to be totally frank and honest, the larger problem is that it would be politically quite complicated, just because Contour has quite a few competitors that all use Envoy, and then there's some question of what that would look like within the Envoy org in terms of picking, say, a default Ingress controller. So my advice to the Contour team was that a direct CNCF donation makes more sense right now; I think it's just simpler. If in the future we want to eventually move it under the Envoy org, I don't think this precludes that, but I think this is probably the easier thing to do right now. Thank you for that, Matt. I actually can't really imagine that you really have to deal much with politics, in terms of the pervasive use of Envoy and all that comes with that. So, is that a joke? Yes, that was my horrible attempt at one. Oh, okay. I couldn't tell if you were joking because, wow. Right. That was so deadpan perfect. Yeah, no, it was a very serious statement.
I couldn't tell if you were joking or not. Michael, Steve, Dave, thanks so much, guys. That was beautiful. I won't be tardy with following up with you, Michael. So this is great. Excellent. Thank you, Christian. And Lee, I did want to let you guys know, I did post it; it has been updated, the last update was three months ago. There is a due diligence review template out in the TOC GitHub. It gives you a rough idea of what the TOC is looking for to go into the different levels. I'm sure it's going to change as the SIGs get looked at, but at least it gives you a rough idea of what was being thought of at the time for the diligence. Yeah, and that's the one we followed for Harbor, so if you want us to follow that one, we'll go with that. Correct. Yep. It's at least a starting point, and it may shift some, but there's a lot of good information, and that's why I don't think it's going to completely change. Thank you very much. Any other questions for the Contour team? I had a quick one, probably for David, but whoever can answer it, please answer. Of course, shortly, no need to go deep. When you started the project a little bit more than two years ago, what were the gaps that you identified out there in the open source landscape in this domain? And how would you compare that to today's state of the landscape? Are there any new projects coming up, something that overlaps with the goals that you had back then? How would you position yourselves today against the landscape? Okay. There are probably two parts to this question. The first part is, with all due respect to Matt, two years ago the standout hit that Envoy is today wasn't as clear.
So this was a little bit experimental, but almost immediately as I started the project, I realized how good a fit the Envoy xDS API is for the declarative nature of the Kubernetes API. It was a very neat fit. The second part of the question, which I'm struggling to remember; I was going to approach it as: how would we change it today? Well, really, what changed as we got into the project was we realized that, let's be honest, the Kubernetes Ingress API that exists today is extremely limited. Now, two years down the track, there's a lot of work to fix it, but two years before that, there certainly wasn't. So that's why we went our own way with IngressRoute, which then evolved into HTTPProxy, to solve problems that we were seeing with our customers, who were struggling to make multi-tenancy work using the traditional networking v1beta1 Ingress API. Is that helpful? Yeah, yeah, okay. From my point of view, I'm Tim, I'm the people manager around here. The purpose of Contour, then and now, is really two-fold. Multi-tenancy is really the key bullet point in features. And compared to some of the other options, the surface area of Contour's API, as it were, the document description, is much smaller, partly because the de facto Ingress specification has evolved over many years, and you can see it. The Contour project has stayed really true to keeping things simple: very good defaults, and not even exposing knobs. The instance I always use is: we don't support TLS 1.0, because you shouldn't use that. So you can get going pretty quickly, in either the simple kick-the-tires case or a complex enterprise configuration with all of the benefits of multi-tenancy. And I think those goals still stand. Okay, thanks. All right, very good. And next up is Service Mesh Interface. We've got a couple of folks representing that project today. And Lachie, is this you, or are you up? Actually, thanks, Lee.
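The multi-tenancy point above can be sketched with HTTPProxy inclusion: a platform team owns the root proxy and the virtual host, and delegates a path prefix to a tenant namespace. The resource names and namespaces below are hypothetical; the shape follows Contour's projectcontour.io/v1 `includes` mechanism.

```yaml
# Root proxy, owned by the platform team: holds the virtual host and
# delegates the /team-a prefix to a proxy in the team-a namespace.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root
  namespace: platform
spec:
  virtualhost:
    fqdn: example.local
  includes:
  - name: team-a-routes
    namespace: team-a
    conditions:
    - prefix: /team-a
---
# Tenant proxy: no virtualhost, only routes. The team can manage this
# object freely without being able to claim other hostnames or prefixes,
# which is the secure-delegation property discussed above.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: team-a-routes
  namespace: team-a
spec:
  routes:
  - services:
    - name: team-a-svc
      port: 80
```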
Thomas is going to be taking it. Thomas, I'll pass you the reins. Perfect. Let me get some slides up and we can get started here. Okay, I'm going to assume everybody can see this. So let me give the high level of what SMI is all about first. The idea is that service meshes are awesome and starting to really proliferate. Especially at KubeCon in San Diego, we saw AWS adopting App Mesh, Consul Connect coming out, Istio, Linkerd; there's really great innovation in this space. And it's introducing some interesting constraints on users and implementers. To be honest, I think that at this point in time we've got a really standard feature set. So the idea was to come at this from a definition perspective and talk a little bit about how we can produce an interface that all of the service meshes can interact with, and then all the integrations can go on top of. So let's talk a little bit about what SMI covers. These are the three major pieces that we saw users wanting from a service mesh: policy, a.k.a. access control and identity; telemetry, the golden metrics that every SRE wants; and traffic management, which would be Flagger-style canary rollouts and more complicated solutions there. We've really focused as a project on those three things more than anything else. So why does that work? Well, number one, we're striving really hard to be provider agnostic. There are benefits on both sides here. The users get to have choice. The integrators get to integrate against a single API and work across all of the back ends. And the service mesh implementers don't need to go and dream up new APIs; they can go and use what's kind of been suggested as best practices for the ecosystem. So that's where we're going there. This is a better picture.
I think, of what I'm talking about there: you've got apps and an ecosystem on top of the service mesh interface, and then on the bottom you've got all of the really great service meshes that provide fantastic functionality. So going back to those three use cases, we actually have technically four APIs today. Traffic metrics is the most obvious one; it builds off of the patterns put together by the metrics and custom metrics APIs in Kubernetes, and then adds the golden metrics, which would be success rate, throughput and latencies. Traffic split is the ability to do canaries, with an orchestrator like Flagger doing the canary rollout itself. Then we have traffic specs that let you explain how traffic looks. The idea here is that these are a requirement for access control, so that you can do per-route access control instead of doing it on a service-by-service basis. So that's where we are today. Let's talk a little bit about who we've done this with. I've been having a really fantastic time working on SMI myself because of how many great partners we have; it's really a cross-industry thing. We've been getting a lot of fantastic feedback from pretty much everybody, both service mesh providers and folks like Kupost who are building solutions on top of service meshes for their users. In fact, that leads into this slide, which is all of the ecosystem implementations that we have today. My favorite one is Maesh, which just decided to use SMI without talking to any of us, launched their product, and then joined the conversation; they're now a big part of the community there. So all of this leads into goals and non-goals. The primary goals are being agnostic, making sure that it's vendor neutral, and solving real-world problems for users, where "users" means both end users and implementers on top from an ecosystem perspective, as well as the service meshes themselves. A non-goal is to implement a service mesh.
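As a sketch of the traffic split API described above: clients address one apex service, and the mesh shifts weighted traffic between backends. All names are hypothetical, and the exact API group version shifted across early SMI releases.

```yaml
# Illustrative SMI TrafficSplit: clients talk to the apex service
# "checkout", and the mesh splits traffic between two backend
# services, e.g. during a Flagger-driven canary rollout.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout
  namespace: shop
spec:
  service: checkout        # apex service clients address
  backends:
  - service: checkout-v1   # stable version keeps most traffic
    weight: 90
  - service: checkout-v2   # canary receives a small share
    weight: 10
```

An orchestrator like Flagger would then adjust the weights over time as the canary's golden metrics stay healthy.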
I don't think any of us want to do that; we're already building those as projects. Obviously I spend my time on Linkerd, and that's already a CNCF project. And I think that users should have the choice of what is the best solution for their problems. We don't require implementation of specific SMI APIs. The important part here is that we're not being prescriptive: if a service mesh wants to just support traffic splits, that's all they need to do. In fact, we have a long-running conversation in SMI about compliance and providing users visibility into what is and isn't supported, so that it's an incremental thing instead of a big-bang requirement that everybody needs to implement. But also, we don't want to restrict what it is to be a service mesh. A great example there is a lot of the functionality inside of Istio that's absolutely fantastic; it's not something that Linkerd is likely to ever adopt, and that's okay, and it should still be a big part of the ecosystem there. Cool. So, a quick technical overview, in kind of three parts. We've got some Kubernetes CRDs; this goes back to SMI being Kubernetes-centric. While we're having conversations about how to bring in the rest of the real world, we're sticking really hard to our guns on Kubernetes being the one true way moving forward. We have an SMI provider to act on the APIs. There's a Go SDK for folks to use. There are extension APIs to build on top of things, and the resources are obviously configurable. The thing that I like to bring up the most, however, is that SMI isn't actually just a spec project; we actually have quite a bit of software and components in it. We've got the SMI metrics extension API server that actually works for Istio and Linkerd today. We're moving init container functionality into SMI; it's a common pattern that all of the service meshes are adopting at this point, and they really should just share the implementation.
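The per-route access control mentioned earlier pairs the traffic specs API with the traffic access API. As a hedged sketch, assuming the early v1alpha1-style schemas (the field layout changed across SMI versions) and entirely hypothetical names, a policy allowing a Prometheus scraper to hit only a /metrics route might look like:

```yaml
# Hypothetical names; schema follows the early v1alpha1 layout,
# which changed in later SMI versions.
apiVersion: specs.smi-spec.io/v1alpha1
kind: HTTPRouteGroup
metadata:
  name: metrics-routes
  namespace: default
# Describe what the traffic looks like (traffic specs API).
matches:
- name: metrics
  pathRegex: "/metrics"
  methods:
  - GET
---
apiVersion: access.smi-spec.io/v1alpha1
kind: TrafficTarget
metadata:
  name: allow-prometheus-scrape
  namespace: default
# Who may receive the traffic.
destination:
  kind: ServiceAccount
  name: service-a
  namespace: default
# Which routes from the spec above are permitted.
specs:
- kind: HTTPRouteGroup
  name: metrics-routes
  matches:
  - metrics
# Who may send the traffic.
sources:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
```

This is what "per-route access control as well as service-by-service" means in practice: the TrafficTarget scopes access both to a source/destination pair and to the specific routes named in the route group.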
And then the whole point behind that is that we can go innovate on functionality that's unique and interesting to our implementations, instead of just doing the same patterns over and over again. My selfish goal here is that once we start to have these common patterns of software, we'll be able to go and get those patterns to be smoother; for example, sidecars being a first-class citizen inside of Kubernetes. So here are links to the community and related repositories: websites, GitHub, and we're doing meetings, all public. They've been fantastic for figuring out where everybody's going. And I think that's it there. Let's see. So, benefits of CNCF inclusion. The biggest one for me is the association with Kubernetes and other CNCF projects. I continue to reiterate: the whole point of this is a community and an ecosystem, and the CNCF is the right place for that, as a vendor-neutral home for everyone. We like to work with Jaeger. We like to work with Kiali. We like to work with Istio and Linkerd. The Gloo folks, pretty much every project inside of the CNCF and out of it, are fantastic as an ecosystem and a community. The other big part of this is that by being vendor neutral, we're able to go and get more community contributors and speed up the adoption of both the API spec itself and the software components inside of it. And then finally, the CNCF is the elephant in the room when it comes to cloud native, and Kubernetes in particular, since we've hitched our horse to that cart, and that's all a big part of it for us. Just a quick look into what our project priorities are today. We want to get all of the SMI APIs up to a stable state, though obviously we're going to go through beta to get there. We want our SDK to have a stable release; in fact, we just landed the generated Go client for the SMI metrics stuff earlier this week, so lots of great work there. We've been chatting with all of the service mesh integrations.
Kuma is at the top of our list to get them building on top of the SMI APIs, and then going and getting additional ecosystem tooling. Flagger is already there; we're having active conversations with Kiali, the Tilt folks, and others, working towards that. And then finally there's the conformance test suite, where we want to make it so that it's not confusing to users and integrators what's going on and how they can actually integrate. So, to wrap everything up: we want folks to get started quickly; we want it to be simple, so that users can understand it and we can provide benefits to end users as well as implementers; and we want to be as ecosystem-friendly as possible. The call to action is that at the end of this meeting we'll have the PR up, and we'd love all of the discussion, and any questions you have that we don't answer here, to go into that PR. Wonderful. This is good. Okay, just a quick time check. We do have a fair bit of time for some questions. Maybe the first one, which will help drive some additional questions, or frame how much homework or how deep we go today, if you will: as a question to the SMI team, is there a consideration around entry into sandbox versus entry as an incubation project? Lachie, do you want to take that one? Sure, I'd love to take it. Thank you, I'll catch the ball. Yeah, we had originally reviewed the graduation criteria as posted on the CNCF GitHub, and we felt that we could be considered for incubation. But obviously we're coming in under the auspices of whatever the CNCF, SIG Network, and TOC decide; either sandbox or incubation, we're happy to go either way. So I think we'll just post it up, and if people have comments, we're happy to hash it out on the PR. My personal feeling, just from a public perspective, is that I think sandbox is a no-brainer. I'm personally less sure about incubation. I think we're very early days within this ecosystem.
I also frankly have concerns, and maybe this is something that we could talk about on this call, that SMI can really devolve into the current Kubernetes Ingress situation, which is a lowest common denominator that doesn't end up actually working for anyone. So I'd actually like to talk about that point. But just because of those general concerns, and just how early it is, I guess my advice would be: I think sandbox will be completely non-controversial; I think incubation may be more controversial. It might happen; I'm not sure how people would vote or what people would think. But I do think that, given what has happened, particularly with Kubernetes Ingress over the last several years, there are probably going to be some reservations about this type of spec. So, I'm here too. Can you hear me? Okay, I've had some audio issues. Great. So, you know, I originally recommended incubation, looking at the adoption in the last several months and how several meshes have just adopted SMI in production. But I totally agree with you, Matt, in that sandbox is non-controversial, and I don't think that the team really cares one way or another. I think that what we're really after is that vendor-neutral home. We want to have a common touch point to talk to all of the meshes, the Ingress v2 people, and any other tooling in this space. So really, either way is fine, and I'm for sandbox. And one note, I'll step in here: we are starting to be able to do the actual annual review process with everyone, so being able to come back towards the end of this year or early next year to be reviewed for incubation probably wouldn't be the worst thing either. Right. You know, in addition, maybe just, yeah.
My counsel would be the same, in part adding to what each of you said, and in part based on, well, maybe before I say a couple of other things, I would say that there's another question to be asked, one that would also bear weight on some of the particulars of the requirements around the different levels. So the question is, and in most respects I'm asking a number of these questions for the public record here, being very familiar with SMI myself, but for the sake of going through the process: well, let me be long-winded about this and say there is some prior precedent. It has long been said that the CNCF is not a standards body; it doesn't tend to produce internet standards per se. That said, there's the difference, and the value, between a standard and a specification, and really making sense of the requirements for adoption, dotting the i's and crossing the t's. We could have a long conversation about all of these things, but there is some precedent: I spent a lot of time inside the Serverless Working Group, as did Ken, who's on the call as well, helping navigate that fine line.
CloudEvents did come forth even at a time when the specification-versus-standard question was pretty confused, and it is being, I think, quite successful, being adopted at 1.0. And so the question, to be short-winded about it now, is: is SMI's future pointed toward a standard, or is its future a specification? I mean, I think that's very difficult to answer. If it appears to go towards a standard, sure, we can treat it that way; but as a specification providing value in this ecosystem, the CNCF ecosystem specifically, I think we want to at least focus on that. And, you know, I've heard similar things about specifications: we've got CloudEvents, we've got TUF, which has recently even graduated. I think having more specifications in the CNCF, well, if the TOC is favorable to that, will only make it better for subsequent specifications to find a good home in the CNCF. I do want to quickly address one thing that Matt brought up, which was the Ingress lowest-common-denominator value proposition. I think it's interesting, because Ingress specifically, and even off the back of Contour, those kinds of abstractions were very early. And if you go and take a look at Ingress v2, it actually takes a lot of what Contour did, and some of what SMI did, in terms of extensibility that is not driven by annotations, which was one of the big gripes with Ingress v1. With that extensibility framework you can actually have deep integration and provide more value than the lowest common denominator. The other areas I would draw on as examples are CSI, CNI, CRI; I'll just say the C-star-I interfaces across the CNCF ecosystem have allowed the same behavior to exist and have a set of extensibility points that prove useful. I think storage is infinitely more complex than service mesh, and CSI is still providing value as an abstraction to both users and storage implementers in the
ecosystem. So, you know, Ingress I think is one where people have had a rough shot, maybe a bad taste in their mouth, but I think Ingress v2 is bringing non-least-common-denominator extensibility points which allow value to exist. The other thing for SMI specifically: we're seeing ecosystem tools like Flagger, Kiali, and other tools in the ecosystem look to SMI rather than having to implement every service mesh under the hood, so that they can provide value to all the service meshes by implementing a single API. So that's what I think about starting with abstractions and standardization across these things. From the spec perspective itself, I've definitely spent a lot of time adding the extensibility points; my perspective, as someone who works on Linkerd on a regular basis, is that I don't want to have all of my flexibility taken away to go and innovate and do new and interesting things. And so taking a look at the Ingress spec and what's happened, and some of the other specs like CNI, I think is a great example of something that came along and really just blew up all kinds of really fantastic innovation early on in the community space. And that's where I see the service mesh space going; the Calicos and Weaves and Flannels of the world are there, fantastic, and really doing a great job, and I think that's the same place that SMI fits into. Good. So, and I'm trying not to make it too obvious how excited I am that we're having this SMI discussion, this is fantastic. So, good. I think we characterized kind of standard versus spec. There had been a question earlier about the common use of Apache v2 as a license, the most common license used for projects that enter into the CNCF, and the sort of stated preference for that license. And I thought I would say on the call here that, for the Open Web Foundation agreement that's used as a license for SMI, to my knowledge
there won't be contention as a consideration for a project that comes into the CNCF; the OCI uses that license as well. I understand that there's a choice around that license, as it has different implications around patents, but it's a license friendly to the CNCF, or the CNCF is friendly to that license. So good, good to have that confirmed. And Michelle, I think you were following up on that, but I don't anticipate an issue there. Yeah, and just to be transparent about how this stuff works: the CNCF has a legal team that is very well versed in this area, and they've come out with, hey, the best practice for CNCF projects is to use Apache v2 along with the DCO, and that's just cut and dry. We do need to go back and talk to the CNCF legal team and staff about potentially using the, is it the Open Web Foundation's license, for the specs, whereas all the other projects that are actual code can remain Apache v2. I don't think there's any issue with that. Then there is also a legal committee within the governing board that we may also want to run this past, so I'll work with Lachie to go through the motions there and make sure we're doing everything the right way and nobody has any legal issues with it. Awesome. One item, if it's easy enough for you guys to glean: we talked about, I think you presented on, community stats. If you have contributor and maintainer stats, and whether those are specific to the four APIs or just a collective for the project, those would be good to see as you move forward into additional presentations. And by that, I don't know, yeah. Well, I mean, we're happy to show that, and I could probably do a quick dive on it, but specifications are by no means measurable against actual code in terms of the number of contributors and the forks and stars, and that's by design; the
implementations should have all that. So in essence, I would go pull the stars on all the implementations and say that that, in combination, is what's powered by the spec; the spec itself, like anything, you know, even OCI distribution, which underpins the whole container ecosystem, has probably got five stars on it. So I'm happy to pull it, but specs are never as interesting as code. I don't disagree. Yep. So, I'm somewhat torn between wearing my SIG Network hat and my other hat. So, just to clarify, Michael, if he's still on the call, for the Contour team: the template that I was referring to earlier is really a template that matches up with the project proposal process. There's a v1.2, and in there it calls for stats around community size and existing sponsorship, social media accounts, release methodology and mechanics, website, versions, issue tracking, and it just kind of goes on. Absolutely, and I've done part of that in the PR, and I'll do the rest in the technical document as well. That's clear to me already in the PR; actually, I was just calling it back out as a reference for the SMI team as well. Ken can send out a link to the due diligence criteria, which apply to incubation and above; but for sandbox and below, there's also just a standard set of statistics, which the Contour team covered well. The feedback for the SMI team is to try to incorporate those as well; we'll pop over a link, actually, I think I've got it here. It's just helpful; it is not a pushback on the project, rather it's just part of the diligence, part of putting best foot forward. Look at that, we're at the top of the hour. Anyone else have feedback on these wonderful presentations? I'm thrilled to see these going; these are awesome. Yeah, really, I'm excited, on behalf of everyone else on the call; this has been great. I don't want to hold folks and make them tardy for their next call, but
thanks so much for all of the work you guys have put into these. I can't wait to see things go forward here, and I'm assuming I'm speaking on behalf of Ken and Matt as well. Thank you everyone, really appreciate it. Good to see you all. Bye-bye.