Hi, everyone. This is the conformance working group monthly meeting on May 24th. Please remember that this meeting is being recorded and will be uploaded to YouTube, so please be respectful. We have an agenda, which I am projecting; hopefully this is coming across. First up on the agenda we have an update on the current state, and I think that's a reasonable place to start for each of these meetings. Let's see, there's some information from the chat. Great, Doug dropped in the doc with the agenda. Feel free to add to the end of the agenda now if your item didn't make it. So first up, Ayesha is going to give an update on the current state of API coverage numbers as of today.

Yeah. So these numbers I got from running Omichi's tool, which basically measures endpoint coverage. This was run against master; we took the tests from the master branch and ran them. For stable APIs we have about 18% coverage, and overall we have about 11%. It's not much of a jump from where we were at 1.10, understandably so, because we are just ramping up on adding more coverage in the areas that we'll be discussing further down. Should we go on to the next item, prioritization?

Sure. Any questions on the coverage numbers, how they are collected, or anything else? If not, we'll move along to the prioritization agenda item. I'll add a pointer to the coverage tool itself so that anybody who wants to can take a look at the code (the general approach is sketched below).

As for prioritization: we met with SIG Architecture a couple of weeks back, and there's an update on that coming further down in the agenda. But from that meeting it became clear that we want to focus on components that can be easily swapped out by providers, at least for the first round of our conformance coverage. Towards that, I met with SIG Node early this week, and we have a couple of volunteers from there who can help us go through all the APIs that we are tracking, specifically the pod ones, and identify the ones that already have coverage and the ones that need to be prioritized for the first round. Once we have that list, I plan to sit with them and define user journeys, which our vendors can then help automate. I opened a tracking issue for that this morning, and I'll keep that issue up to date with progress.

I plan to do the same thing for API Machinery; again, I opened another tracking issue. We have a couple of PRs in flight, linked later in the agenda, proposing end-to-end tests for watch and the aggregator be added to conformance. Hopefully those tests will go in for 1.11. We will then evaluate the gaps and see where we should increase coverage for 1.12. So hopefully by the next conformance working group meeting I will have identified a couple of APIs for which we'll increase coverage in 1.12, for both pod and API Machinery.

So that's those two, and then there's the conformance program update. I expect this one is probably Dan's. And we will come back later in the agenda to some of the process improvements and the visibility and communication issues that were raised, I think, by the steering committee initially; let's hold that part of the conversation until later in the agenda. Any questions?

I'll just give the very brief update here that we're up to 58 certified vendors, which of course is completely insane and really makes this one of the largest and most successful certification programs I'm aware of.
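To make the endpoint-coverage measurement described earlier in the coverage update concrete, here is a minimal sketch of the general approach: counting which API endpoints an audit log shows being exercised. This is an illustration rather than the actual tool mentioned in the meeting; the event fields follow the audit.k8s.io JSON format, and the endpoint total is a placeholder that would really come from the OpenAPI spec.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// auditEvent captures just the fields we need from an
// audit.k8s.io event, written as one JSON object per line.
type auditEvent struct {
	Verb       string `json:"verb"`
	RequestURI string `json:"requestURI"`
}

func main() {
	// Illustrative placeholder: the real total would be derived
	// from the OpenAPI (swagger) spec for the API server.
	const totalEndpoints = 800

	f, err := os.Open("audit.log") // hypothetical log from an e2e run
	if err != nil {
		panic(err)
	}
	defer f.Close()

	hits := map[string]bool{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip malformed lines
		}
		// Treat verb plus URI as a rough proxy for an endpoint.
		hits[ev.Verb+" "+ev.RequestURI] = true
	}

	fmt.Printf("hit %d endpoints, ~%.0f%% of %d\n",
		len(hits), 100*float64(len(hits))/totalEndpoints, totalEndpoints)
}
```

A real measurement would also normalize request URIs against the spec, collapsing object names into path parameters like {name}, before counting distinct endpoints; percentages like the 18% and 11% figures above presumably come from that kind of normalized count.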
And we did this refactor about a month ago where we divided all of the certified products into distributions, hosted platforms, or installers, and we're now tracking those numbers. Other than a few Twitter fights and such, I think most folks have found that distinction useful. We're also tracking the number of certifications at the different levels. One other piece I'll just remind people of is that there's a small number, four or five, of implementations that are only certified for 1.7 and haven't done a newer certification. Those folks now have another month and a half to certify against either 1.9 or 1.10 in order for their 1.7 certification to remain valid. We're emailing them and reminding them and such, but if they don't, then the 1.7 certification goes away.

Hi, Dan, this is Deepak. Just to let you know, we have one of those: our certification is still on 1.7, and we are working towards the 1.9 certification. There's one issue left, so we should be done pretty soon, actually.

That's great to hear; I'm relieved to hear that. I will just make the quick pitch, since I've got you, that we really are interested in having you certify FusionStage as well. You're one of only, yeah, like six known non-certified implementations.

Definitely. I think there's more work involved there, but we are definitely working towards that as well.

Sounds great. Thanks for that update. That's all I have.

Excellent. Well, thank you, Dan, and congratulations on the ongoing success here. Let us know how the communication and the end-of-life cycle unfolds; if we need to improve the process, we can put some effort into that in this group.

I mean, we have contacts for all of these people as part of their applications, so we are reaching out to them, and it's mainly just a question of whether they're going to prioritize it or not.

Excellent. And if they do fall out at 1.7, they're always welcome to come back again; it's just that the older version won't remain certified forever. Sure, sure.

Okay, moving along: conformance coverage for 1.11 stable features. Back to you.

Yes. So I took a look at the features that are going stable in 1.11; four of them are. I was following up with the owners yesterday to see if they have enough representation in the conformance suite. Two of them are settled: the RBAC one was identified as not required to be part of conformance, so we have omitted that; and the second one is CoreDNS, which will be replacing kube-dns. They already have coverage for that in conformance, and they did inform me that it is conformant with the replacement as well. I'll be following up on the two remaining features. I also have a proposal for how this can be folded into the release team's tasks itself; that's coming further down in the agenda, and we can discuss it there.

Okay, excellent. The CoreDNS one is especially interesting because it lives in an external repository, so I would encourage folks to take a look at that one and think critically through that whole dependency. It is a new world we're moving into, where some of the default components are no longer in the Kubernetes repository. I think we will see more of that as the cloud provider extraction project moves on, and many of the external cloud providers are already in that world. So please help think through that and anticipate issues in this whole process.
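The CoreDNS point also shows why conformance can survive an implementation swap: the DNS coverage in the suite exercises behavior (can a pod resolve cluster service names) rather than any particular DNS server. Below is a minimal sketch of that style of behavioral check, meant to run from inside a cluster pod; it assumes the default cluster.local domain and is purely illustrative, not the actual conformance test.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Any conformant DNS provider (kube-dns, CoreDNS, ...) must
	// resolve the API server's well-known service name from inside
	// a pod. The check never asks which implementation answered,
	// which is what makes the two interchangeable underneath it.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("cluster DNS check failed:", err)
		return
	}
	fmt.Println("resolved kubernetes.default to", addrs)
}
```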
So I wanted to give a quick update on the discussion at SIG Architecture, I think two weeks ago now. There were some questions raised in the steering committee meeting, I believe, and maybe Tim St. Clair can give an update here as well, because I think he was in both of those meetings. But I'm happy to give a quick update, and then Tim, you can jump in if I misrepresent anything or you have more to add.

I think the high-level theme was that we need to do a better job of visibility and communication within the conformance working group, to articulate and clarify what the ongoing efforts are and how they play into the broader vision. One of the confusing things, I think, for SIG Architecture and some others in the community at large, is that CNCF has contracted a vendor, Globant, to help with a sort of one-time amnesty: filling in missing tests and improving flakiness for end-to-end tests we would like to become conformance tests. I have some proposals for process improvements further down. But broadly, those vendors were working on first PRs, the sort of starter projects you would give a new hire at a company. And I think conflating the conformance aspect of a new end-to-end test with the test itself caused some concern and alarm that this was happening outside of the existing SIGs. So we talked about including the SIG leadership in those areas more. One was API Machinery; I don't remember what the other one was, honestly. Node, I think. By the time that conversation happened, those SIGs had already been pulled in, and we tried to distinguish between two very distinct steps. One is adding an end-to-end test, which is not a steering committee or SIG Architecture level decision; a SIG can certainly add more e2e tests and improve its own coverage. The second, distinct step is proposing that a test be promoted to becoming a conformance test, and that is where SIG Architecture and the steering committee do have input and responsibilities. So I think those were the two high-level outcomes, and working more visibly within SIG Testing was also guidance we got for the vendor. Tim, any more to add? I saw you on here before, but I don't see your comments.

No, that pretty much summarizes it. I think folks were listening and are taking corrective measures, and I think that's appropriate. So as long as we're closing the loop on the communication chain, then I think we're probably good.

Excellent. Also in that SIG Architecture meeting we had a discussion about APISnoop, which is an excellent tool. I think Hippie Hacker is here on this call as well. Chris, would you like to give a quick shout-out to APISnoop's status and direction, for three to five minutes?

Sure, that would be great. I've tried to initiate some conversations on the mailing list following up on our conversations last time. Last night I posted information about possibly mirroring the Sonobuoy Scanner approach, allowing our community to easily contribute their audit logs, hopefully in exchange for some instant analysis showing which parts of the API they're using, and possibly identifying practices like automatic RBAC generation, plus some other incentives to contribute their data. But the steps are currently a little complex, since you have to set things up with the API server. So it was really nice to wake up this morning to the KEP proposal for dynamic audit configuration by Mr. Barker.
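To make the automatic RBAC generation idea Chris mentioned concrete, here is a minimal sketch of deriving least-privilege rules from observed audit events. The struct mirrors only the slice of an audit.k8s.io event that matters for this purpose, the aggregation is deliberately naive, and nothing here is claimed to be how APISnoop actually works.

```go
package main

import "fmt"

// observed holds the part of an audit event relevant to RBAC:
// who performed which verb against which resource.
type observed struct {
	User, Verb, APIGroup, Resource string
}

func main() {
	// In practice these would be parsed from contributed audit logs.
	events := []observed{
		{"system:serviceaccount:demo:app", "get", "", "pods"},
		{"system:serviceaccount:demo:app", "list", "", "pods"},
		{"system:serviceaccount:demo:app", "create", "apps", "deployments"},
	}

	// Aggregate verbs per (user, group, resource): the naive core of
	// generating a least-privilege Role from what was actually used.
	rules := map[string]map[string]bool{}
	for _, e := range events {
		key := fmt.Sprintf("%s %s/%s", e.User, e.APIGroup, e.Resource)
		if rules[key] == nil {
			rules[key] = map[string]bool{}
		}
		rules[key][e.Verb] = true
	}

	for key, verbs := range rules {
		vs := []string{}
		for v := range verbs {
			vs = append(vs, v)
		}
		fmt.Printf("rule for %s: verbs %v\n", key, vs)
	}
}
```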
And in addition to that, we went ahead, if you can pull the two sunburst charts up side by side, for kubetest and Sonobuoy. We're still looking to inspect and identify why we have differences in the test coverage results between the tools. These two charts do show an increase over our last results from 4/24. We've also created an #apisnoop Slack channel if you'd like to engage, and we'd love to help anybody go through the process of submitting their logs. We would also like some feedback and thoughts on how we can drive our KEPs well; at this point it seems like a conversation around each of the KEPs we're interested in. I know there's interest right now in driving pod API utilization, so we have a separate breakdown to make it a little easier to focus on what our test coverage is when looking at only pods. But I don't know that we have enough data yet for a prioritized list of which pod APIs are used. An intelligent selection of which KEPs we should focus on, to help generate that data, would be some great feedback. Rowan, did you have some additional features that we didn't add, or some thoughts?

I guess I had a couple of extra thoughts about other possible features, playing around with the e2e stuff. Things that could be interesting to explore: the ability to show the differences in coverage between e2e runs, so you take one e2e conformance run, then on the next version you take the next one, and you can show what's changed. The ability to focus, as Chris has just gone through, on a certain area of the APIs. Showing a timeline graph, maybe, of coverage over time as tests are added. And then there's also the prioritized list that we talked about some time ago; that could be interesting to explore. I think that's all I've got in terms of other potential features going around in my head right now.

Thanks for that, Rowan. I really encourage some active feedback on the thread on all of the lists, and please feel free to reach out to us directly in the #apisnoop channel so that we can engage the community more. In particular, I'd love some feedback on the idea of using the dynamic audit configuration, both for what exists today and looking forward.

I have a quick question, if we have time. You said there was a difference in the test results between kubetest and Sonobuoy; could we bring those up side by side? Which cloud provider are we running on?

GCE. Rowan, do you want to take over and talk about how you started those runs?

Sure, I can explain how we gathered both of those. We used hack/e2e.go to bring up a cluster on GCE. For the kubetest e2e run, we basically just ran the tests through hack/e2e.go; and for Sonobuoy, we brought up a cluster the same way and then deployed Sonobuoy straight onto it via kubectl.

So the one thing that comes to mind is that there is a parameter specified to the tests, which is the provider, and not all tests will run by default. I think kubetest makes a lot of assumptions about the provider, defaulting to GCP, where Sonobuoy specifically does not. That provider flag actually enables or disables extra tests. So are you seeing a smaller number of tests being run on Sonobuoy versus kubetest?

Slightly fewer, yeah, I think. 235 for Sonobuoy, maybe a couple more for kubetest, let's see.

The most likely explanation for that is the provider arg.
So if you change the provider arg in Sonobuoy, and there is a flag for that, you should probably get parity.

Great, thanks for that information, Tim, that's awesome. Real work getting done in the meeting, I want to call out. This is just a really promising effort, so please do take some time and look through it. I especially appreciate the way you're using the appropriate mechanisms for gathering this information, using audit logs to understand what is being exercised and how, instead of building in extra layers that have to be maintained alongside. And I'm particularly interested in figuring out how to get more information about the particular verbs and payloads, and the range of options in those endpoints that are more frequently used than the more obscure ones. I think that will really help in the prioritization effort. So, great work, really appreciate this, and others, please do take a look and give feedback.

Well, and I do need to point out that the work is at a bit of a crossroads, so the next couple of weeks would be a really useful time to come back and say, hey, here are the three natural directions to go from here that could provide real useful information.

And Mithra has a question, it sounds like.

Yeah, we probably already know the answer, but where does the code for APISnoop live currently, and is it part of Kubernetes?

I'm sorry, Mithra, can you speak up or move closer to the mic?

Yes, I was just asking where the code lives for APISnoop currently, whether it's in the Kubernetes repo or somewhere else.

The question was around where the code for APISnoop lives. I just pasted it in. We'd be happy to move it to Kubernetes, but obviously it wouldn't start there.

Great. Okay, let's move along. Thank you for that.

So, just a couple of updates on what I think should be a standing agenda item going forward, which is to raise awareness about tests that are proposed for promotion to conformance. These are the ones that I found, and I just wanted to go through them quickly. One is to promote the aggregator e2e test to conformance; the PR is linked here. Please take a look and give feedback if you have concerns. I expect that as we put more energy into prioritization we will see more of these, and I just want to make sure we have a mechanism for communicating them. Another is adding a watch e2e test to the conformance suite. One thing I noticed is that there isn't a consistent format or set of labels applied to these at this point. My suggestion going forward, and please help me encourage and spread the word, is that adding a conformance test is really two distinct steps. We talked about this a little before in the context of the vendors, but I don't see it as being any different for any community member. First, add or modify an e2e test. Then, in a completely separate PR, propose promotion to conformance. And I suggest we use the title format "Promote xxx e2e test to Conformance" so that these PRs are more easily discoverable, and be sure to add the area/conformance label. Anyone have additional suggestions or feedback on that? If not, the proposal is that we adopt that direction.

I think if you do the ConformanceIt portion, there's a SIG Architecture PR reviews alias that probably needs to be poked; I know that Brian did that indirectly on the watch PR.
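For concreteness on those two steps: the promotion PR typically flips an existing e2e spec over to the conformance variant and adds a metadata comment that the conformance documentation is generated from. Here is a minimal sketch of the promoted form, assuming the framework.ConformanceIt helper from the e2e framework; the Describe block, test name, and description text are illustrative.

```go
package e2e

import (
	. "github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = Describe("[sig-api-machinery] Watch", func() {
	/*
	   Release : v1.11
	   Testname: Watch, basic events
	   Description: A statement of the guaranteed, portable behavior
	   this test verifies goes here; conformance docs are generated
	   from this comment.
	*/
	// ConformanceIt registers the spec with the [Conformance] tag,
	// where a plain It() would register an ordinary e2e test.
	framework.ConformanceIt("should observe add, modify, and delete events", func() {
		// unchanged body of the pre-existing e2e test
	})
})
```

Keeping the test body unchanged between the two PRs is what makes the promotion reviewable as a pure policy decision.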
You can specify @kubernetes/sig-architecture-pr-reviews on the PR.

Yep. I'll link to it in the doc.

We could also make prow recognize and label these; sorry, hopefully this projects through my laptop microphone. We could basically make prow recognize conformance test PRs and label them based on the way the PR comes in: if it's structured in the format you suggest, then we can apply the label automatically.

Okay, so we'll move in this direction, and it sounds like there were a couple of suggested improvements around automation supporting this process. If you see folks adding a test and immediately proposing it be a conformance test in the same PR, try to reinforce that these are two distinct steps; that will help avoid confusion, concern, and tests getting lost in the fray.

Next up: the proposal to bake conformance into the release process. Back to you.

Yeah, this actually came out of making sure that for any feature going stable in a release, we've done the due diligence to determine that either it is already represented in the conformance suite, or it needs to be, or it needn't be part of conformance at all. Towards that, I plan to take this proposal to the release team: one of the roles, hopefully the CI signal lead, or maybe the features lead, depending on who they think is more appropriate, would, when they get all the feature requests, look at the stable features, follow up with the owners, and ask for end-to-end tests and conformance coverage for them. That's one part of the proposal; that way we won't be accumulating debt when it comes to conformance coverage.

The second initiative is, at the beginning of every release, to come up with a target set of APIs or SIGs that we'll focus on to increase conformance coverage. For 1.12 we are thinking, as we spoke about, it'll be mostly Node and API Machinery. That way, every release we have a few more tests going into conformance.

And the third: as part of another discussion we had with SIG Testing, we do have a conformance dashboard, but currently it is not being watched by the CI signal lead, because it's not part of the SIG Release dashboards; it exists as its own dashboard. So sometimes we miss signals when a conformance test fails. We want to make sure this dashboard is also treated as release-blocking, so one proposal is to move the conformance dashboard into the SIG Release dashboards, so that it gets looked at on an ongoing basis. This way, for any new conformance test that gets into the suite, we can make sure it isn't flaking, and it also helps us gather official numbers on an ongoing basis. So those are the three goals.

Do you know whether or not it makes sense to have at least one other provider? Because right now the default provider for everything is pretty much GCP. Do you know if it makes sense to have at least one other common provider, to make sure that signal is consistent? Even if we hope it were always the case that these things are abstracted away, history has taught me that they're not. So if we are going to do the due diligence, do you think it makes sense to enable that for more than one?

We do have OpenStack results flowing into the conformance dashboard right now.
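The provider sensitivity behind this question is the same mechanism Tim described earlier: e2e specs can gate themselves on the configured provider, so the provider flag changes which tests even run. A minimal sketch of that pattern, assuming the framework.SkipUnlessProviderIs helper from the e2e framework; the spec itself is illustrative.

```go
package e2e

import (
	. "github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = Describe("LoadBalancers", func() {
	It("should create an external load balancer", func() {
		// Specs like this skip themselves unless the provider passed
		// on the command line is in the allowed set, which is why
		// runs with different --provider values (or an unset one)
		// report different test totals.
		framework.SkipUnlessProviderIs("gce", "gke", "aws")
		// ... provider-dependent test body ...
	})
})
```

Running the suite against a second provider therefore exercises both the tests and the skip logic around them, which is part of why a single-provider signal can hide breakage.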
So we do have GCE and OpenStack there, but yeah, I can definitely go back and add one more provider. Do you have a suggestion?

Whoever's going to offer the resources; it just needs to be more than one, right?

Yeah. We have talked with AWS as well, and they intend to run the conformance test suite and submit those results back. This dashboard came out of the cloud provider extraction effort, where it becomes increasingly important to ensure that making a change and testing it on only one provider doesn't inadvertently break the others. Our hope is that this is a compelling benefit and folks will opt in and submit their test results back. If you are a representative of a provider or distribution and would like to participate in that, feel free to reach out here, or on the conformance or cloud provider working group mailing lists, and we can set you up with the right instructions to run. But fully agree, we should get as broad participation as we possibly can.

Looks like we're on to special topics. Oh, this reminds me: there was also a request in the SIG Architecture meeting to convert the document about conformance testing for Kubernetes to markdown and submit it to GitHub, to ensure discoverability and visibility, and Mithra has signed up for that action item. So that was our first special topic; I don't think there's a whole lot more to say there.

So we'll move on to the next one, which is planning going forward. So far we've been somewhat ad hoc in the planning for each release, with just best effort for individual SIGs to submit tests that they propose become conformance tests. We're trying to push that earlier into the Kubernetes release process we already have: trying to ensure that by feature freeze we have at least articulated those goals. I don't know if that's a hard requirement yet or just guidance; I'm just spitballing here, honestly, but pushing earlier in the process always seems to help. That is the intention.

And the call to action here: this whole effort is really about ensuring the portability of workloads for end users across providers of Kubernetes. If you are a representative of a SIG, please go back to your SIG; the best efforts in this whole area will start from within the SIGs, who already know which e2e tests they have that ought to be part of conformance, or that they wish were but are too flaky. This is an area the vendors could possibly help out in, if there's bandwidth. But please take the message back and help push on these areas through the appropriate SIGs.

So we have made it through our formal agenda with 28 minutes to spare. I think William, [inaudible], if I have my numbers right. Does anyone else have special topics or announcements? I want to thank you, Diego, for stepping in. And I believe we are on a monthly cadence now, which I think is probably about appropriate given the maturity of this effort, but obviously the mailing list is there for anything that comes up more urgently.

I had a quick topic as well. I was just wondering if we should start thinking about KubeCon. There is a call for proposals due in a couple of weeks for Shanghai and then for Seattle, just to gather our thoughts around what we're doing, and also maybe we could do a potential workshop where people write more tests. It's a possibility. So I just thought we could brainstorm a little bit, maybe by the next meeting, or via Slack.
I appreciate you bringing it up, because the deadline for that is six weeks from now, with the CFP closing and the intro and deep dive sessions on the same schedule. My impression is that both the intro and deep dive meetings in Copenhagen were quite well attended, and I would love to do an intro to this at least in, I mean, in Shanghai. I don't know if anyone here would be able to volunteer to make the trip for it. Deepak, I don't know if you'll be able to make it in particular, or Brad?

I'll be there, of course.

This is Brad; if you're talking to me, yes, definitely.

Yeah, so I would encourage you to submit a talk as well, on related topics perhaps, and then you'd be in a good position to help manage an intro or deep dive there, or one or the other. And then for Seattle, I presume we're going to want to do that. We have the ability to do workshops; we haven't really done them in the past. We could also do pre-day workshops. I'm not sure there's a ton of outstanding work right now for that. So did you have a specific idea of what it might cover?

No... [inaudible]

I can't hear you, Mithra.

I'm open to it. Maybe we should take it to the list and brainstorm on it.

I mean, what's strange is that I sort of expected we would have half the vendors on board with the program and then do a workshop for the other half to convince them that they should really participate. But we basically have everyone now, so the question is: what do we want folks to do additionally, or differently, or more?

All right, good questions. I think we should take that to the mailing list, where probably other folks have ideas and thoughts on it, so let's broaden this conversation there. And thank you all for taking the time; I appreciate it. I'm going to stop recording, if I can figure that out.