Hello. Hey, Alexis. Hi, Chris. I think we have Bryan Cantrill on. Who else? I know Ben said he can't make it. Quinton is here. Give it another minute. All right, we're five minutes past. Are Brian Grant, Camille, Jonathan, Ken, or Sam on the line? Brian Grant said he wasn't going to be able to be here today. Okay, scratch him out. Cool, well, that's five minutes past; that's enough time. So it's in your camp, Alexis, go for it. Thank you. Hi, everybody. Welcome back. Apologies for missing the call two weeks ago; we had to do some hygiene. Let's press on with today's meeting, please. Next slide. Can you do the conference bit, Chris? I think you're the best person to do that. Yeah, sure. We just held our conference, KubeCon China, for the first time last week. So that was super exciting. Thank you to Quinton for taking the time to show up and represent the TOC there. We posted the videos online, so you should be able to see most of the talks on YouTube. Seattle is coming up in a few weeks. Our flagship event is currently sold out, so thanks, everyone, for signing up. It's a little bit difficult to predict the demand for these things, but I'm excited for us to have a great event there. The schedule's linked, and we're also doing a BoF at KubeCon Seattle to provide an avenue for feedback to the staff about the event. Let me know if there are any questions. Okay. Cool, moving on. Next slide. Is it just me? Can I? No, no, next slide. I think Taylor should be steering. Can you not see the TOC elections? No, there we go. You're there. Yeah, so this is just a reminder that for 2019, all the events are confirmed and scheduled. We just opened up the CFP for Barcelona. So if you go to the KubeCon + CloudNativeCon Europe site for Barcelona, you'll see the CFP link. Next slide. Yeah, so here's a reminder on the TOC election schedule. Give it a next slide, Taylor.
We are currently in the nomination period, which ends at the end of this month. Essentially, every CNCF member has the ability to nominate up to two folks during this process. Then there's a qualification period where the governing board vets these candidates, and then a formal election is run Condorcet style. And then we have a new TOC on January 29th, filling seven of the nine slots. Any questions here? Yeah, I had a question. I just want to clarify something, Chris. In the emails, can you clarify that it doesn't have to be a governing board member who nominates someone? It just needs to be a CNCF member. Correct. Yeah, any CNCF member; I sent an email this morning, I believe. So if you have any questions, feel free to reach out to me, and I'll send another reminder next week. One last question: by member, do we mean member company? Member organization. Yeah, they have to be a CNCF member. Okay. Cool. This is just a reminder about the backlog. It's always a good idea to take a look at that and offer any assistance to the TOC on reviewing proposals, reviews, and so on as they come in. The next slide is the core focus of today's meeting: the categories and the refresh of working groups. So Alexis, I'm going to leave it to you to bring this up. Right, thank you. So this is something that we have touched on before. It comes about because we wanted to figure out a way to make the CNCF TOC scale better in the presence of more projects, make it easier for people to focus on specific areas and understand how they fit together, make it easier for the community to engage and add value, and many other objectives besides.
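As a side note on the mechanics of a Condorcet-style election: a candidate wins if they beat every other candidate in head-to-head comparisons over ranked ballots. A minimal sketch follows; it is illustrative only, assumes every ballot ranks all candidates, and the candidate names are invented (the real election uses proper election tooling, not anything like this).

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every rival head-to-head,
    or None if no Condorcet winner exists (a preference cycle).
    Each ballot is a full ranking: a list of candidates, most
    preferred first."""
    candidates = {c for ballot in ballots for c in ballot}

    def prefers(a, b):
        # Number of ballots ranking candidate a above candidate b.
        return sum(1 for rank in ballots if rank.index(a) < rank.index(b))

    for c in candidates:
        if all(prefers(c, o) > prefers(o, c) for o in candidates if o != c):
            return c
    return None
```

For example, with ballots `[["alice", "bob", "carol"], ["alice", "carol", "bob"], ["bob", "alice", "carol"]]`, alice beats both bob and carol pairwise and is the Condorcet winner; a rock-paper-scissors cycle of preferences yields `None`.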
The core idea is to identify a set of categories, such as observability or storage or security, and then try to bootstrap dedicated efforts driven by the community, shepherded by the TOC and other volunteers, and assisted by the CNCF team, folks like Chris and his colleagues, in order to get more work done, add more value to users, make projects more successful, and make CNCF more attractive to those projects. So to that end, a proposal has been drafted around categories and working groups. If you click on the link, you'll see the proposal. If you can't see it, shout, because the settings might not be right; I think I've set it up to be editable by anybody. So please don't go spamming the text unless you have a very good reason for doing that. It's had a quick pass from the voting members of the TOC, and I think the basic ideas are there, now ready for folks to jump in and start to contribute. We wanted to have a working session on this during today's call. Chris, can you remind me how many TOC members are actually on the call? Right now, one second, I bolded them. One, two, three. Three, plus myself... You have Bryan Cantrill and Quinton Hoole. Okay, right. I'm also here. All right, make that four. So this is our first interaction between those four folks and the wider TOC community. Have a quick look at the document and start to digest it, and then I think we'll just kick off with some questions and comments from people. So I'm just going to give everybody a moment to read what's there. So, on the absolutely key idea here: Lee is asking if SIGs supplant working groups. We originally called these category working groups, but felt that they actually had a very special purpose which was quite similar to the SIGs in Kubernetes. So it was suggested to use the word SIG to describe the working group associated with each category.
So they're not getting rid of all the working groups, but some of the working groups, like security and SAFE, would morph into a SIG associated with that category. There could be other ad hoc working groups that might be short-lived, for other purposes; that's out of scope for today's discussion. Alexis, I just want to make it clear that the current working groups were put in place as short-term groupings to produce essentially white papers or other defined deliverables, which is quite different from the SIGs. So I don't think they necessarily turn into SIGs, or certainly the current formation of a working group doesn't magically become a SIG. Right, we're going to basically reboot things like serverless and security and see what we get. So yeah, it's not guaranteed. There's another concept, the sub-project, which is owned by a SIG; things like white papers could be owned by a SIG as a sub-project, or SAFE could be a sub-project of a SIG, that kind of thing. More on that as we evolve the actual definitions from Kubernetes, and then of course the CNCF TOC can adapt them to its own needs. Thank you. Yeah, if you can have a look at the document when you get a chance, Sarah, it'd be great to get your input. Happy to. Thank you. Okay, so let's see. Alexis, this is Brian. I think this looks really good. My only comment, such as it is, would be that experts in any given field often have the bias of their particular experience and expertise. I would change that wording just slightly to clarify that their responsibility is not to their particular technology or to their company but to the broader special interest group, because I wouldn't want us to exclude people from running SIGs because they happen to have developed their expertise in a particular technology. I see, that's a good point. Thank you for bringing that up. So, essentially, why are we doing this?
I mean, the objectives are set out at the top of the document, but it's just become impossible for the core TOC membership to properly keep on top of everything as the CNCF has grown, and whilst we've made some headway with TOC contributors and the community, we're still struggling to really scale and get organized. So this is an attempt to do that, and it represents, in a way, the biggest change in how we work for some time. What we want to do is retain the ability of the nine-seat TOC to do things like vote in projects and to really focus on making sure that CNCF is doing a great job for the community and the users, but also to invite people to help. And, as we've been talking about, the most important thing to us is that the folks who are putting in the most time really understand what the CNCF is trying to do, what its mission is, and demonstrate a lot of integrity in terms of that mission. We're worried that a SIG or any kind of working group could become its own sort of political structure. We think that would be a bad outcome. So we've tried, in this initial structure, to come up with a way to balance between sharing and retaining core control in the TOC, if that makes sense, which means that we're actively seeking people to show leadership in these SIGs who would be people we'd welcome into future TOCs at another time. So you don't need to be a category expert. You need to be somebody who deeply cares about the mission, deeply cares about the community, and is able to demonstrate balance and integrity in that process. So, Alexis... go ahead, Charles. Alexis, that's a perfect way of phrasing it. I would just replace "unbiased" with exactly what you just said. Right, what did I say? Yeah, that was my wording, "unbiased", and I knew as I was writing it that it was going to be controversial. I totally agree with your sentiment, Brian.
I just couldn't think of a short way of describing what we just said in under a hundred words, but I think we can wordsmith it to be better. On the plus side, this was recorded, so we can even tune up Alexis's words to be shorter. Oh, thanks; not for the first time, I might add. Okay, did someone else speak up at the same time as Brian before? Yeah, this is Matt Farina. You know, it might be interesting to put a purpose at the beginning of this, not just the technical introduction, because I think what you're trying to say is that you want to scale contributions to the CNCF around expertise. As we see the CNCF growing, at least this is my two cents on it, growing in the number of projects and things going on, there's more expertise and knowledge involved, and we want a place to scale that. This is the opportunity to do it, and it's sort of reflective of how Kubernetes has been able to do it successfully, but with our own slant on it. Does that sound about right? Well, I'm typing; it sounds kind of okay. What do other people think? I think that's mostly items five and six in the objectives there. Yeah; sorry, I can't even count. I think also what we have found is that the surface area is now so broad that we require depth in so many different domains that we need the ability to delegate that depth. At least what I'd say is that there are so many projects that come up for incubation or other kinds of feedback to the CNCF TOC, and I'm not able to provide feedback on them because they're simply deep technical projects that are not in my area of expertise, things I don't personally use.
So the SIGs, I think, allow us to delegate some of that, to a place where people can get really good technical feedback and guidance from people who have effectively been delegated by the TOC, and then when those projects are deemed ready or appropriate by the SIGs, we as a TOC can have more confidence and more background in terms of what they actually are. That's my perspective, anyway. Hey, Alexis, this is Pratik Wadher from Intuit here. Hello. Hey there. I had just a quick rough question, which is: would there be a disconnect between this SIG and the corresponding Kubernetes SIG, and how do we make sure that we don't create a parallel working structure or parallel ideas? Very interesting question. So let me try to answer that; I'm not sure if I'll get it right. I think a Kubernetes SIG is focused on extending Kubernetes with the functionality reflected in the SIG. So, for example, I don't remember the names of the pieces of Kubernetes that deal with monitoring, but I'm aware that there are recommendations about how to monitor Kubernetes and work to make sure that's possible. In the CNCF, we might have an observability SIG, which included projects and was there to pay attention to the area in which Prometheus, Fluentd, and other monitoring, logging, visualization, and debugging pieces live. So I think that white papers on the structure of the space, useful pictures for users to understand what is going on, could be deliverables, as opposed to necessarily getting into the individual projects and how they run. Does that make sense? Yeah, it makes sense. And I'm just looking at Chris's comments here, where he says that the scope of the CNCF working groups is broader than the Kubernetes ones. I agree with that, and I agree with the overall philosophy. I'm just wondering how we prevent confusion. Well... we can take it offline, that's fine. No, no, it's a very good question.
I think one of the key jobs of the SIGs is to help us educate and reduce confusion. This is Matt again; I can bring up an example that really touches on this. We have Kubernetes SIG Apps, and we have talked about things that are very specific to extending Kubernetes, but also lots of related tools that help people, because SIG Apps just wanted to encourage the app developer and app operator space. But over here, I see we have a category for something like app definition and development. And so, since there is that broader space and SIG Apps has definitely touched on it in Kubernetes, there's definitely going to be overlapping effort at the CNCF. So I figure we need to resolve that, and it may just be a case where the Kubernetes SIG and the CNCF SIG need to talk and figure it out. I don't know. But there are spaces where we are going to have that, because Kubernetes SIGs have gone beyond just extending Kubernetes to try to enable the space as a whole, which I think is what the CNCF wants to do. Yeah, that was actually going to be my question, because there are some SIGs which have gone beyond Kubernetes, right? If you start thinking about, say, service mesh, these areas are going to be broader than just Kubernetes. I guess I don't have an answer; I'm just asking a question. Totally. Go on. Yeah, this is Ansel from the SAFE Working Group. We have an example of this in our group, which is the Kubernetes Policy Working Group: as they were looking to extend beyond what they were doing, they first drafted a proposal to go to the CNCF, then looked at it and said, this looks awfully like SAFE; maybe we should just join forces with that working group. And we became a larger body because of that and incorporated their interests. Could I expand on this a little as well? Sure. Can you hear me? We can hear you. Yep, okay.
No, I think that any SIG in Kubernetes that's working on anything that looks like a pluggable or extensible interface is going to, almost by definition, extend outside the Kubernetes project, whether it's the scheduler, CSI, CRI, CNI, take your pick. So I don't see this as conflicted; I just see that there are already these great touchpoints for the Kubernetes SIGs to work hand in hand with the CNCF SIGs. I think it's positively not a conflicted thing. I agree with Bob. I think the only real risk is if they're named exactly the same thing; then we are going to come up with crazy, crazy confusion on naming. If there are two SIG Apps, one of which is CNCF and one of which is Kubernetes, then we are going to add further madness to a space that is already a bit wonky. Yeah. I also gave some thought to that, and I think it applies generally to the term SIG, not necessarily to the individual SIGs where we can have this confusion. We did consider calling them something completely different, but they do fulfill a very similar function to the SIGs in Kubernetes, so that would create a different kind of confusion. One option is to just call them CNCF SIGs, and never use the word SIG without the word CNCF in front of it in this context, and potentially encourage Kubernetes to call theirs Kubernetes SIGs. Just a thought. Yeah, SIGs would work, and then we just articulate each time which type of thing we are speaking about; that would make sense to me. Okay. Do we need to update the document to reflect this insight? Yes. I think it's a good idea to make clear that we know we need to make a distinction here, and that there will be some groups that overlap. I also think there'll be some SIGs that don't overlap at all, or some Kubernetes SIGs that are solely focused on their attribute as it pertains to Kubernetes.
I'm obviously thinking of observability in particular, where the observability technologies are really orthogonal to Kubernetes itself. Okay. I'm typing. Anyone else want to keep pressing on with questions and comments? On the use of the term SIG here to help reboot the working groups: is that really critical and needed, or could the working groups themselves be revised but continue to use that label, so there is no confusion, to the extent that the labels are distinct? Sorry, was your proposal that we call these things working groups instead of SIGs? Yeah, more or less. No special affinity with "working groups", just, maybe, to the extent that we're able to find a different label that doesn't collide. Yeah, we did actually consider that. So Kubernetes has these two distinct concepts. One is a SIG, which is a long-lived thing that basically lives for the lifetime of Kubernetes unless it fails in some way; it has long-term responsibility for code and projects in a fairly broad area. That is distinct from a Kubernetes working group, which is a very specific group of people put together for a finite period of time to solve a predetermined problem: produce a white paper, figure out how to do Windows conformance, whatever the case may be. So those are two quite distinct kinds of entities, and these things we are talking about are long-lived. They live forever, basically, and they're responsible for all of the projects that fall within their area, whether it's observability or storage or networking or whatever. We envisage that within each of those SIGs there will also be working groups that are spawned to solve specific problems within the ambit of the SIG, or perhaps in some cases crossing SIGs. Does that make sense? Yeah, it sure does.
Yeah, I think at first blush it particularly makes sense if that's the way in which Kubernetes SIGs have been run and are understood, and, through that understanding, the use of the term CNCF SIG just helps people have the right frame of reference to begin with. That reinforces the notion that you'd use SIG as the term, as opposed to committee or whatever else you'd call it. Yeah. Also, SIG long predates Kubernetes. It's an ACMism, and I actually like us tapping into that, because I think the ACM SIGs actually are an analog for what we're trying to do. Okay, anybody else? Good, okay. So I think we've exhausted the initial discussion on this. The next step is for folks on the community call (and if you have friends who care about this, let them know) to have a look at this document and work on it together over the next two weeks, and we'll see if we can get to a revised version in time for the next TOC call. It's not a promise, but it'll evolve along the lines of the sandbox document, which took a few goes to get right. In the case of the sandbox, it wasn't until quite late in the process that we realized some quite crucial stuff had been missed out. So please do keep making an effort to make this document better. We really, really do want this to work well for everybody. Good, okay. Can we move on to the next slide now, please, Taylor? There we go. Chris, this is your section. Yeah, I'm happy to go over this fairly quickly. I sent it out to the mailing list, I think a few weeks ago, but I'm happy to go over it quickly, and then we have some graduation reviews that I want to give the option to present. So I'll go over this fairly quickly. We survey our maintainer community twice a year, and we did this recently and have collated the results for this half of the year.
Overall, our maintainer satisfaction is 4.2 out of 5 on a five-point scale, which is a slight increase from the first half of 2018. This time around, we also got a hundred percent response rate in terms of project representation, so each project had at least one person respond. And then there was a new question, brought up by Alexis and the TOC: a large majority of maintainers would recommend CNCF as a home for other projects. So that's the quick executive summary. The next few slides go over each question that we asked in detail, but the main takeaway, at least for me, was that CNCF projects are mostly asking for support in three major areas. The first one is technical documentation and website help. Another is marketing: help us write a blog post or a technical article. And the other one is events: hey, help us host an EnvoyCon or a gRPC conference, et cetera. So those are my overall takeaways; we've staffed up recently on the technical-writer side of the house, and we continue to serve our project maintainers with events like EnvoyCon, gRPC conferences, and so on. I don't want to dive into each specific question on the next few slides, which cover things like whether CNCF has delivered for projects, whether maintainers understand the help available, response time, and so on, but slide 20 shows some quotes from maintainers in terms of their thoughts. So before we go on to the graduation reviews, does anyone have any questions for me on this? We plan to do this at least twice a year, and we'll be kicking off the next survey in late January. Any other questions? Otherwise, look forward to the next survey being launched in January. Cool. Is containerd here? Phil? Yes, I'm here, as well as Michael. Lovely. Okay, feel free to fire away. Go for it. All right. Yeah.
So Chris said we have about five minutes; I may even beat that if you're lucky. No worries. So, yeah, I'm not going to walk through the exact checklist of TOC graduation criteria. The PR is linked, and I assume many people have seen it; at the end, obviously, if there are specific questions about checking off items on the list, we can discuss that, but I'll mainly stay somewhat high level. So, we've all heard of containerd. We joined the CNCF at the Berlin KubeCon, just last spring, the goal being to have this core container runtime for both Docker and Kubernetes: this sort of boring, stable infrastructure runtime under which both could then innovate. Our key tenets, again thinking of this as boring infrastructure, have really been a strong focus on reliability and stability of that core runtime. We're built on top of OCI's runc, so our goal is not to add a bunch of functionality around that but simply to have a strong guarantee of lifecycle control over runc-launched containers. Beyond that, we've built a really nice client API, which means that, from Golang or gRPC, people can build other interesting things, not just Docker and Kubernetes. And so you can see a few tweets here from people who have found that very interesting, and in our project use list, which we'll look at on the last slide, you can see that, outside of the Docker and Kubernetes use cases, there are others finding this API very valuable and interesting. So that's a nice bonus as well. We've also added strong compatibility guarantees, back-porting fixes, so we have long-term releases that are supported in a very stable and reliable way. And then performance, obviously, is another key tenet. Next slide. Our community, I believe, has been very healthy and has grown, especially this year. The graph is a little bit small, but obviously you can go to the CNCF data.
And I think one of the nice things about that graph is that, over time, the set of actual committers has expanded quite a bit: instead of a few people doing a significant amount of the work, we have a lot more activity from new contributors. We have 12 maintainers across eight organizations; that's listed in more detail in the PR. We have a reviewer category; they're allowed to LGTM, but not merge, and that list has been growing. And if you're interested in stars and Twitter followers and all that, there again seems to be a healthy interest and community that's come up around containerd. Last slide. So, again, obviously the goal would be for containerd to grow in usage, and today we have two public clouds offering containerd as the Kubernetes runtime: IBM Cloud and Google, so GKE and IBM are both offering recent versions of Kubernetes with containerd as the runtime. Alibaba Cloud, several of us just met with some of their teams last week in Shanghai, has its PouchContainer project built around containerd, and you can see other uses as well. But again, we see significant growth in interest in and usage of containerd, and the graduation proposal has a more extensive list of projects that are using containerd. So with that, I'm a little bit under five minutes, and I'll stop there and see if there are any specific questions we can answer. This may be more of a Kubernetes question than a containerd question, but do you have a view as to when containerd might or will become a default runtime for Kubernetes? I don't believe the Kubernetes project tests it as release-blocking; it's not release-blocking compatibility, it's tested after the fact. Yeah, I mean, that's an interesting question that relates more to what we would see as the default, because there are obviously multiple ways to install a Kubernetes cluster.
And so it depends on which path people are taking whether you could claim that containerd is the default. I really don't mean the question to be about installers; the question is about upstream testing. What is required to actually qualify a Kubernetes release? Okay. Yeah, I don't have a good view of that, so I don't know if there's anyone else on who could respond. I don't know how it works on the Kubernetes side, but at least on the containerd side, we have the node e2e tests running on all the CRI PRs, so we can catch issues early in our own testing cycle. Containerd is also tested in the upstream Kubernetes tests, as far as I know. I don't think there's anyone on the call who's worked on that, but as far as I know, it's a first-class test target for Kubernetes at the moment as well. I don't believe that it is; I believe it is tested after the fact. In other words, a regression would not cause a Kubernetes release not to ship. It would obviously be something the containerd community would react to. I'm actually very supportive of getting containerd to that state. So, anyway, I think there's still something to work out there. I thought we were definitely trying to work towards that, but I don't know; I only know of this indirectly, so I'm not the best person to answer it. Yeah, I think we could definitely take that as a follow-up and find the right folks to have that discussion with. I had a question. Could you give us an understanding of how much of containerd is exposed to applications in a Kubernetes cluster? To what extent can applications and containers be oblivious to whether it's containerd or some other CRI-compliant runtime underneath? Yeah, so effectively the goal of the CRI was to make any pure use of the Kubernetes API agnostic to the runtime.
Obviously, any application which decides it needs to inspect the host, either through sharing namespaces with the host or by trying to interact with the underlying runtime, will care. But an application which purely uses the Kubernetes API should have no concern, effectively no problem, with being agnostic to the underlying runtime. Okay, makes sense. Thank you. Chris, what's the process step here? If the TOC is comfortable with containerd going to a formal vote, then I will kick it off. If not, we hold. So it's up to you; there are no issues with it. Yeah, okay, thanks. That's good. Just trying to be quick on time. With only four of us on the call, we should probably put it onto the email list to allow Brian Grant to speak as well. I think I'd like to make sure that, from Brian's and your perspective, we understand how the actual graduation criteria have been met. Personally, I'm very impressed with the progress made by the project. Anyone else want to say anything, or can we move on to the next project? Yeah, I also feel very comfortable, but it would be good to have at least one TOC member, perhaps the original sponsor; I don't remember who it was. It was Brian. Yeah, Brian Grant is the owner of all of this; he unfortunately couldn't make it today. But I'd like to hear from him, please. Yeah, I'll send a note out to the mailing list. I think he is supportive of moving forward, but I'll give him a chance to say that in public. Okay, let's move on to the next one. I'm actually going to have to drop off in a sec because I have to run across town to meetings. Tough. He's going to talk about TUF. Sounds good. Yeah, if someone could go to the next slide, please. Oh, it's Justin. Yes, yes, Justin Cappos. So yeah, just a quick reminder for those of you.
So TUF is the way in which software gets distributed, largely, across a bunch of domains, including the cloud, in a manner that's resilient against server and key compromises. It's a framework that makes it so that even if people break into different parts of your infrastructure, your signing, your repository, or other aspects of your cloud infrastructure, the update system is meant to resist it. So we're something of the plumber's plumbing. A lot of what's being done here in the cloud is basically plumbing for the services of the future, and we're the boring part underneath even that. So TUF has multiple roles, and it uses a bunch of mechanisms to provide security: threshold signatures, selective delegation, support for HSMs and TPMs, and so on. And there are a couple of things TUF does that make it fairly invisible under the surface. It's intentionally meant to be very easy to drop into existing workflows. Apart from maybe needing someone to sign something they didn't sign before, which can be as easy as making a change to a script or having them use a YubiKey, TUF is meant to be very invisible. There's often a one-time initial setup cost of making a couple of changes somewhere in the way you sign and build things, but it's meant to be very transparent and easy to use. And it has a very strong security focus. The design is intentionally minimal: we're not trying to grow and add every possible feature. And it's meant to be low-churn for those who go and implement the system. The history of this is that back in 2010, I had some folks from the Tor project come to visit after they'd seen some work we'd done on security for Linux package managers, where we had pointed out a bunch of issues that also applied to the Tor updater.
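To illustrate the threshold-signature idea Justin mentions: metadata is accepted only if signatures from at least a role's threshold of distinct authorized keys verify. The sketch below is not TUF's actual implementation or metadata format; the HMAC "signature" is a dependency-free stand-in for the real public-key schemes a TUF deployment would use (such as Ed25519 or RSA), and all key IDs and names are invented for the example.

```python
import hashlib
import hmac

def sign(key, message):
    # Stand-in for a real signature scheme; a genuine deployment
    # would use asymmetric signatures, not a shared-key MAC.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_threshold(message, signatures, authorized_keys, threshold):
    """Accept `message` only if signatures from at least `threshold`
    distinct authorized keys verify. `signatures` is a list of
    (keyid, sig) pairs; `authorized_keys` maps keyid -> key, mirroring
    how a TUF role lists its key IDs and a threshold."""
    valid_keyids = set()
    for keyid, sig in signatures:
        key = authorized_keys.get(keyid)
        if key is not None and hmac.compare_digest(sig, sign(key, message)):
            valid_keyids.add(keyid)  # same key twice still counts once
    return len(valid_keyids) >= threshold
```

The point of counting distinct key IDs is exactly the compromise resilience described above: an attacker who steals one key cannot produce metadata that passes a threshold of two.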
And they spent a little bit of time, did a bit of a design, went away and huddled, and then created a design that we found some issues with. So we built on that, myself and an undergraduate who was working with me, and made a different version of it, and that was TUF. We were admitted to the CNCF in 2017 along with Notary. Notary, which was created by Docker, is the most widely used implementation of TUF inside the cloud and one of the most popular implementations of TUF overall. TUF itself as a project has a formal process for changing the standard, because we're meant to be very low churn and have a very simple, minimal design. That's the TAP process, for TUF Augmentation Proposals. And this is perhaps one of the biggest distinctions between TUF and a lot of the other CNCF projects: we don't directly interact with a lot of the people that adopt TUF. They mostly interact with whichever implementation of TUF they've integrated. If they use the reference implementation, as some integrators have, then we've actually communicated with them. But oftentimes I'll find out that we have a new TUF deployment because I'll hear something from the Notary team, or from another team in a different domain that's gone and done this. I'll talk about some of those other domains on, I think, the next slide. So, next slide. All right, so TUF is used by a lot of the large cloud companies, as you can see in the list there. If you click on the adoptions link at the bottom of this slide, and then on any of the company logos, it'll take you to articles and discussion about how they use it and what they do. Most of the cloud users, with a few exceptions, use Notary, our most popular implementation in the cloud, which David Lawrence, the other Justin, Justin Cormack, and others spend an enormous amount of time working on. There's also actually quite a lot of automotive use of the TUF variant Uptane.
The automotive industry, for those of you not in the know, is extraordinarily secretive. We go to automotive conferences and people come up to us in the hallway and say, "You can't tell anybody this, but come look at this demo: we're actually using Uptane here." I don't really understand why that is; it's very counter to the way most of the open source community works. So we can't talk openly about, for instance, which of the big three U.S. automakers is using Uptane in their new models, or which large Asian automotive manufacturers are doing so. But I can say, from what's public, that we're part of Automotive Grade Linux, and about a third of the new cars sold in the United States are going to include Uptane in the future. We also have use outside of the cloud: different projects that are not cloud projects are using TUF in different environments. Next slide. So we have a lot of different committers who work on different aspects. The most interesting aspect here is really the specification. These are just the different implementations here; the Python reference implementation has a collection of folks, as does Notary. There are about six other implementations done by different organizations. Some are open source, some are closed source. Some are things we can't really talk publicly about, but we can talk about and point to quite a few of them if there's interest. The specification itself is fairly low churn. We try to be pretty protective about adding or changing things in the spec so that our implementations can be low churn, as is ideal for security software. Next slide. So the real way to look at TUF is really to look at the TAPs, the changes to the specification, because once again we're a little atypical in that we're mostly the specification rather than a piece of software that's directly used, although people do use our Python reference implementation in production.
So in the last year and a half or two years, we've had a bunch of accepted TAPs, and we have several under consideration. In fact, I think there have been some additions to this since we had this conversation, and a lot of folks, both from cloud native projects and from outside projects, automotive and broader, have contributed here. There's been a bunch of activity on this: ten different contributors with 500-something commits, which is a lot for something like standards documents. Some of these have been as simple as typo fixes, but a lot of them have been more substantial clarifications; you can see this in the changes to the specification. Notary and the Python reference implementation have also had a healthy set of commits and contributors participating in them. Next slide. And on the last slide I have here, I just want to say we've of course done all the things we're supposed to do as part of this. We've adopted the CNCF code of conduct, and you can read more about our governance, contributor process, and adopters list. One last thing I'd like to leave people with: we have both the passing and silver badges, and we're almost all the way to the gold badge, for CII Best Practices. I just want to encourage everybody on the call: if you have an open source project, CII Best Practices is a really helpful process to go through. I encourage you to get not just the passing badge on your projects, but to really take it further. It's a fantastic project, one that we're happy to be working with and happy to have benefited from. So that's it for me; that's the last TUF slide. I'm happy to take questions if we have a moment. Cool. Yeah, thank you, Justin. We definitely have time for questions. Any folks from the TOC or community want to ask Justin some questions? This is Matt Farina, by the way.
I noticed that the current version of the spec is 0.9, and it looks like the markdown file has a 1.0 draft. Is there movement towards a 1.0 coming soonish? Maybe as part of graduation? Yeah, we'd be very happy to. It's one of those things where we've stared at it so long we didn't notice we still had the draft designation on there. So we would like to do this. There are two TAPs, the two that are pending, where we were considering whether they should be in a 1.0 or in the next iteration. But we're basically ready to bump that number, because really all of our main adopters are on or near the functionality that does not include those TAPs. So yeah, we're ready to bump to that. Any other questions? I just had a very brief observation. I went through some of the proposal, et cetera, and the distinction between the standard, the specification, the reference implementation, and Notary is kind of blurry. I quite often found myself wondering which of these things you were actually talking about. So more constructive feedback than a question. Sure, I'm happy to disambiguate any of that to the extent I can. If you'd like, I'd be happy to follow up with you. And I can make a pass myself, but if you spot anything else that could be clarified after that, we'd love to hear it. To clarify, this proposal is for graduation of TUF. We'd like to do graduation of Notary in the future as well. They're independent projects as far as the CNCF is concerned, even though they were brought on at the same time. Any other questions, folks? All right, I'll thank Justin for his time. I'll shoot another email to the TOC list asking for more feedback. Generally our approach has been asking the original TOC sponsor to support this request, but in this case that was Solomon. So we'll figure out whether we can get a current TOC member to shepherd this along and do the formal call for a vote.
So thank you, Justin, and the TUF and Notary folks on the call, for presenting. Cool, I think we're a little bit tight on time. Three minutes would be a little bit tough. I don't know if CoreDNS is on the call, but we're happy to give you time next time. I was expecting to present, so maybe next week, next time, you mean in two weeks? Oh yeah, sorry, not next week, next time. So it'll be the first week of December. I just don't think it's fair to you to only have two minutes, or fair to the people who have a meeting next. So sorry about that, Francois, but hopefully we'll get you definitely scheduled next time around, okay? All right, thank you. Okay, cool. Awesome. And then Ken answered the question on the TUF issue, since he's doing some work with it at MasterCard. So I'll work with Ken offline to get him pushing that forward. Other than that, thanks everyone for your time, and we'll see you in a couple of weeks, the first week of December, all right? Take care. Thanks.