Okay, shall we go ahead and kick off? I will just give a brief overview of where the mechanics of the program sit right now, and then I was going to hand it off to Taylor Wagner for two minutes, who had a question specifically for Aaron about a submission he just made. So I'm pleased to report that we are up to 84 certified vendors, which is an all-time high and kind of a spectacular accomplishment, and 103 certified products. I'm pasting in the spreadsheet most people should be familiar with. And we just had Kind pass a few hours ago as the first 1.14 certification, so it's very nice to see that moving. Also, along the way we reached out to more than a dozen organizations whose certifications were going to expire today, essentially, if they did not get a newer version certified, and only two of them fell out of certification. Taylor, I believe that was InSphere — and who was the other one? It was Hasura and Inspur. There were 17 that were going to be uncertified, and 15 of them recertified, so that was great. Yeah, 15 is very huge. So congratulations on that. And I would say it really demonstrates a key aspect of the certification program that we designed and hoped would work that way, which is that folks feel the pressure not to have their certification fall off, and so they keep certifying new versions. Hasura, I believe, has taken a different direction in terms of the product they're going forward with. Inspur was just an oversight, and I do expect them to certify again and get back onto the release stream. So from the program-management standpoint, we remain pretty thrilled with where things stand. And you can see the numbers there: 47 1.13, 54 1.12, and 62 1.11 certifications, which are all really fantastic. So if there aren't any questions about that overview, I would ask Taylor to ask that question of Aaron and the group.

Sure. Aaron, I noticed that you submitted the PR for kube-up for 1.14, and there was a file in there that was new to me, a build log text file. I was curious whether that should be in there or not. Oh, good catch. It shouldn't be; I will remove it. That was part of the pre-processing step that's called out in the README, where I type everything into the build log and then convert it to make it look like a Sonobuoy log. Okay, great. Because I see extra files like this every once in a while when people submit PRs, so I wanted to check. Thank you for that. I will take care of that right now. Great, and then I will certify you. Thank you. Thanks.

This is Mehmet from Verizon. Just a simple question, since I'm not familiar with the program: are there documented test cases that you use during the certification? Yes, all of the conformance tests are the test cases that we use. One of the common ways people run them is with a tool called Sonobuoy, which is a VMware tool, but it's a list of test cases maintained by the upstream Kubernetes project, and directions for how to do this should be in the CNCF k8s-conformance GitHub repo. Can you post a link in the chat box if you can? I can take it from there. Yeah, thank you. I did. Okay.

So the next agenda item, I think, was going to be action item review. Aaron? So the first action item — I actually don't understand what this one is: "description field for conformance docs," question mark. This is Chris. I was taking notes for the last conformance meeting, and we had talked about generating a description field within the conformance docs.
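For context, the description field in question lives in a structured comment placed directly above each conformance test in the upstream e2e suite, and the doc-generation tooling pulls those fields out into the conformance documentation; the [Conformance]-tagged subset of the suite is also what Sonobuoy runs for certification. A minimal sketch of the pattern — the test name, wording, and body below are invented for illustration, not an actual test:

```go
package e2e

import "k8s.io/kubernetes/test/e2e/framework"

// framework.ConformanceIt wraps ginkgo's It and appends the [Conformance] tag
// to the test name; the tagged subset of the e2e suite is what tools like
// Sonobuoy run for certification.
var _ = framework.KubeDescribe("Pods", func() {
	/*
		Release : v1.14
		Testname: Pods, assigned IP
		Description: A Pod that reaches the Running phase MUST have an IP
		address assigned and reported in its status. Sentences written here
		become the test's entry in the generated conformance docs, so they
		should let a reader understand the test without reading its code.
	*/
	framework.ConformanceIt("should report a PodIP once the pod is running", func() {
		// test steps elided in this sketch
	})
})
```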
And I had asked a question around — where was it? It's the next question: what's the command to generate the conformance docs? Let's combine these: how can we automate the process, see what fields are available, and see what changes between releases? I'll put that question out to the three of us. Yeah, basically the description field is manually populated right now. It describes step by step what the test is going to do, so that if there is any issue with the test, people can identify what the test means without looking at the code. We, yeah, we need to review that more closely; some of the description fields are not very descriptive. If that's what we discussed last time — I don't really remember. So the questions were specifically: how do I generate the docs myself? Is there a command? Yeah, I think I put it in the Slack channel. The command — essentially we are using walker.go from the conformance subtree, with --conformance on it. And probably, if we want to add more functionality to it, we could start doing subcommands rather than having flags. So that's it. Yeah. Srinivas also has a KEP out about trying to auto-generate conformance docs as part of the Kubernetes release process, so that the conformance doc — the documentation that describes all the conformance tests, what they are and what they do — is distributed as part of the Kubernetes release. That seems to make more sense to me. I don't think it's implementable in time for Kubernetes 1.14. So if there are folks who are interested in working on that walker.go tool — because I think we've talked about that as a place to do not just generating docs but also maybe linking Kubernetes tests and whatnot — we would love to see some attention and movement on that KEP. That would be awesome. I think that's it for that, actually. I don't know if you had any more questions, Hippie. No, that's it.

The next item on the agenda was that more details need to be added to the description field for tests that are tagged Linux-only. I assume Linux-only tests can be promoted to conformance; in that case, we need to identify why the Linux-only tag was added and add those details to the description field, so that when the Windows versions of the tests are written, we know what the difference is going to be. Just today we identified one test where it's basically a mount issue that caused it to be tagged Linux-only. So, we talked yesterday about putting those in there, and we wanted to link to the KEP. Are all of the reasons captured in the KEP already? Is there a section or something we can link to? Yeah. Yeah, so most of those are in the KEP, and we also included them in the docs. Right now we actually have two separate PRs open working on this. Based on the feedback from the working session yesterday, we're trying to get a brief, condensed version of that into the conformance — or sorry, the contributor — documentation for conformance tests, to give people an idea of what to look for in terms of handling multi-arch tests, and that has a link to the full list as well. So I'll work to get these two PRs rationalized, because there's been a bit of confusion across time zones: I submitted one and then someone in Europe submitted one. We'll get those worked out, and that's going to be the start of that list in terms of the capabilities that are there.
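To make that concrete, one hypothetical shape an updated Linux-only test could take is sketched below, with the reason recorded in the test's description — which is one of the two options debated next. The test name, wording, and the section it points to are all invented for illustration, shown as a fragment the way it would sit inside a test file:

```go
/*
	Release : v1.14
	Testname: Volumes, single-file subPath mount
	Description: A container MUST be able to mount an individual file from a
	volume via subPath. Tagged [LinuxOnly]: Windows cannot mount a single
	file as a volume; see the "single file volume mounts" section of the
	multi-arch guidance for the underlying limitation.
*/
framework.ConformanceIt("should mount a single file via subPath [LinuxOnly]", func() {
	// test steps elided in this sketch
})
```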
I think the other question I had here was whether we want to include those details in the test cases as code comments, or whether they need to be part of the test case description. I don't know if something was decided yesterday or if that was going to be deferred to this meeting. So, I'll take that question for a second. If you are generating the documentation, usually the tool will read anything that is part of the description field and the comments that are added at the top of the function. So it can be a comment or part of the description field. I would — well, I think what we walked away with yesterday was saying it should be part of the description. It was unclear to me whether the conformance walker parses out that entire block comment; I thought it looked specifically for certain fields and then pulled the values from those fields in the comment. It might be worth considering whether we want another field, for Linux-only or something, to describe why a particular test is restricted. But I feel like yesterday we said description. The other thing I'll suggest — and I'll leave it up to y'all to figure out how you best want to accomplish this — is that the KEP has one massive section, something like "things that don't work now or will never work pending updates to Windows." And I feel like the discussion yesterday was around trying to cluster together groups of common reasons that large swaths of conformance tests were marked Linux-only. One of the examples was that Windows today does not support mounting a single file as a volume, and it would be cool if we had a single thing to link to for that particular problem, as opposed to linking to the whole list of reasons that Windows doesn't work today. If you create headers in a markdown file, you can link directly to those headers; or you might try pulling some of these known issues out into GitHub issues, so you can link to the known GitHub issue the same way we sometimes drop TODO comments in code that say, this is super weird, see this GitHub issue. But I feel like just linking directly to the KEP as it is today will not be granular enough for what we were hoping to accomplish. Okay. So the PRs that I've listed here are basically for the documentation; I don't think that's the right place for the exhaustive list. I think the takeaway is that it sounds like we do want to update those descriptions and include the specific links in the test case where relevant. Is that what you're saying? That would be my advice. Anybody else who was present at yesterday's more tactical meeting want to weigh in? Yeah, no, I agree. We should link to a specific header or line item that explains why that particular test is Linux-only. Okay. Yeah, I agree as well. Okay. Yeah, so we can do that. I guess the thing that would be most helpful to me is, if we're going to put things in the description field, I need that information on how to generate the doc, just so we can make sure it's something that's going to be visible there — so that someone who's not delving into the source code can find what's Linux-only and what the reason is. So if you could share that with me, then we'll see if I can get some updates done. And that's in addition to the PR I already linked there, which is more about review guidance. That's all for the last action item.

One last thing: where are we on the automation for the board?
I know we've done a lot of manual curation, and opening it up to the community has worked really well, but I wasn't quite sure who was focusing on the automation, and I think there was also supposed to be automated querying to populate the board. I'd posit that automation of the board is not necessarily this group's concern. This group wants to use the board, and it's up to it to draft policies. There's a contributor out there somewhere who's working on maybe a Prow plugin, by way of SIG Testing, to automatically populate a project board from a given GitHub issue query, but I don't have status on that. And in addition, SIG Contributor Experience has a number of umbrella issues around how to better automate project management, as does SIG PM. I feel like ongoing work on that stuff is out of the scope of this group. Aaron, thanks for that. I do think it would be super cool if there was somebody responsible for grooming the board, and it is unclear to me whether that person is Timothy St. Clair, since he helped bootstrap and organize the more tactical meeting that is held under SIG Architecture as a subproject, or whether it's Srinivas, since he said he was going to take over shepherding and whatnot in this group. It is very much not myself; I just try to actively move parts around on the board when I use it. But I don't know who the owner of it is, if anybody. Yeah, I'll manage the board as much as I can — I'm trying to learn right now, but yeah, I will do that. Okay, I can help out with that too. This is very useful for identifying issues, and then we will take over and review them. So that's one source of truth, I guess. All right, the prioritization would be important there too, Srinivas. Okay.

We can move to open discussions. John has a document. John? Yeah, so this is a document I put together a while back describing a change to the way we track the behaviors versus the tests. Right now we've put it all in the tests, in these annotations and comments, and we process that to get the document out. Part of what I think we need to be able to do is this: the people reviewing the behaviors — this is how Kubernetes should behave, and this is what's considered a piece of conformance — aren't necessarily, and don't need to be, the same people who review whether a given test actually validates that behavior. And I think it's a lot easier to review the behaviors than to go through all the test code to identify whether a behavior is tested properly. So what I was suggesting in the document I linked — people, please go ahead and comment; all I'm asking in this meeting is for people to take a look and say whether they think the general approach has enough legs to move it to a KEP — is basically to create a machine-readable file that defines all the behaviors, or maybe a collection of files, and then the tests have to essentially link back to an ID that's in that file. That file can then be independently reviewed by people who don't necessarily want to read all the test code. So I was going through that — John, thank you for that. What I'm worried about is the implementation side: whether we get enough people, and how we incentivize people to help with this thing. How do we populate it, and how do we make sure that it is complete? I know that when we get people to write a KEP we can propose some sections there to get them to fill that in.
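As a rough illustration of the machine-readable file John is describing, an entry might look something like the sketch below. Every field name and ID here is invented for illustration; nothing like this schema exists yet.

```go
package behaviors

// Behavior is a hypothetical sketch of one record in the machine-readable
// behavior file. The idea is a stable ID that conformance tests link back to,
// so the behavior list can be reviewed without reading any test code.
type Behavior struct {
	ID          string // stable identifier tests reference, e.g. "pod/lifecycle/restart-on-failure"
	Area        string // API area or owning SIG, used to route reviews
	Description string // human-readable statement of the required behavior
}

// A behavior reviewer approves entries like this one; a separate reviewer
// later checks that a test claiming this ID actually validates it.
var example = Behavior{
	ID:          "pod/lifecycle/restart-on-failure",
	Area:        "sig-node",
	Description: "A container in a Pod with restartPolicy OnFailure MUST be restarted when it exits with a non-zero status.",
}
```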
How do we backfill it right now, since essentially we'll have to... Yeah, well, it's kind of the same problem we already have with backfilling: promoting conformance tests is essentially backfilling behaviors right now based on end-to-end tests, and yeah, I agree. So I guess what I would propose is that we move it forward in KEP form, and then I can try to line up some resources here, and ultimately the approvers of the conformance suite would decide whether the behaviors listed there are complete, that sort of thing. I mean, those problems are inherent to this effort whether we do it behavior-first or test-first, and so I think... Right. So you said machine-readable in there somewhere, right, John? Then the question is: will we be able to generate the tests from the machine-readable form? Would that help incentivize people to do this? You wouldn't be able to generate the tests, because the machine-readable form is really more about the document we're going to produce — the one we produce today by reading the logs and the test files. What am I trying to say? It's about the human-understandable description of the behavior, and then having a hook for that to be tied into on the test side. So tests still have to be written by hand, and then somebody has to validate that a test actually does test this behavior, or this set of behaviors, and that's part of the review process. It's just separating things out: right now, if you go and review a conformance test, you're reviewing two things. One, should this behavior be part of conformance, and two, does this test validate that behavior? I'm just trying to separate those into two different reviews, because I think it can be two different people making those decisions. Today we're writing code and then writing design docs after we write the code, convincing ourselves that it's the right design because the code says so. I think this is suggesting we start with the design doc and then make the code match — that's an analogy. Yeah, I mean, we are talking about behaviors that are already documented in the API documentation or somewhere in Kubernetes, right? So we're trying to take those behaviors and make them very explicit and clear, at least the ones that are part of conformance. All right. So definitely, if there are people who are willing to sign up to do this work and produce the initial set that can then be reviewed, I think that will be really helpful, because the people who are going to do the review will not have time to do that. Absolutely. And I pasted one more link, to something called Gabbi. This is something we use on the OpenStack side; it has a machine-readable form, and it doesn't generate code, but it runs HTTP-resource-based tests. So that was what came to my mind when you were talking about this. And I think Quinten has his hand up. Quinten? Oh yeah, I just wanted to add — I've gone through John's document, and I think it's great. I think this is one of the single most important things that we have to do: actually defining what is and what is not conformant. I think we're very far away from that today, and what's in the doc is a great start. I share Dims' concern, and I actually have a greater concern.
So backfilling stuff is actually, you know, a reasonably tractable problem if we decide it's the right thing to do. I think the bigger problem is making sure that this stuff stays up to date over time. Because as soon as we have tests, and code, and descriptions of what the code is supposed to do and what the test is supposed to do, we have three things that can very easily get out of sync, and, as far as I can determine, no reliable way to ensure that they stay in sync. So I'm just thinking aloud here: another approach is to have a reference implementation and say this reference implementation is, by definition, what Kubernetes is. If your system behaves exactly like the reference implementation, then it is conformant, and if it doesn't, then it is not. And then we have to decide whether the tests, or the descriptions, or the implementation are the actual canonical definition of what this stuff is. Because right now the tests are not, the implementation is not, and these behavior descriptions are kind of destined never to be, because we can't keep them in sync with the tests and the implementations. So before we get too worried about how we're going to backfill things and all that, I think this is the problem we have to solve first. So, Quinten, the problem there is: how do we give someone who has no idea what tests need to be run, or what conformance means, the tools to compare their implementation against the reference implementation? That's what we have right now with Sonobuoy hiding the end-to-end tests. So that's going to be the larger problem there. Yeah, yeah, I understand that. But the reality is that a very small fraction of our end-to-end tests — around 10% — are actually defined to be conformance tests. As a result, I would guess that it would be completely impossible to write an application that runs on something which is only conformant, because there's just not enough in the conformance tests to actually be able to do that. So until we get to a point where we have some way of verifying that I can write an application that I know will run on all conformant clusters, we kind of haven't really gotten to our end goal. And we've gone round and round on this in the past, also in the context of the LTS discussion — the idea that maybe it doesn't make sense to try and rally around Kubernetes until we've actually got everything that is usable and acceptable to GA. Notably, this comes up in the context of storage. Many hundreds, if not thousands, of those end-to-end tests you see skipped are different variants of storage tests run for each of the different CSI plugins. And it's not our job to verify that Kubernetes is conformant for literally every possible CSI and CNI and CRI plugin that you can hook into a Kubernetes, but to make sure that whichever one of those you have plugged into your Kubernetes, it works as expected. And because we say that conformance tests have to rely on default behavior, you can't guarantee a consistent, common persistent storage implementation across all Kubernetes clusters.
And so that's one great example: applications usually need to persist state in one form or another, and conformance tests can't cover that, because there's no out-of-the-box, consistent way of persisting state. We might have that with 1.14 because of local persistent volumes, but I'm not sure if any of those things are actually GA yet. So, Aaron, CSI and CNI are pluggable aspects. CSI forms a clear contract between the Kubernetes infrastructure and the back end. So in theory, as long as we have tests that exercise all of those things, it's up to the distribution seeking conformance to configure their particular cluster with the CSI plugins they want to use and validate conformance there, as opposed to us validating conformance of every different CSI driver. That's correct. It just gets us into that weird corner case of — God, I really don't want to talk about profiles right now — but Kubernetes can run on Raspberry Pis, and it can also run on 5,000-node clusters that have very specialized storage plugins. So are we saying that in order to be a Kubernetes, you have to have some kind of CSI plugin hooked up, or are we saying it's acceptable to be a Kubernetes without a CSI plugin? It's actually worse than that, because all of the network-attached storage providers' volume sources have different parameters exposed to the user. So you would need an abstraction over that, which is some sort of storage class thing, and then to define some kind of common behaviors that you would expect across different volume sources. I don't actually want to rat-hole on that specific issue right now. It's a hard problem, and the storage folks have been looking at it, but I actually think that particular thing is much lower in priority than covering the basic things that everybody uses. And yes, that's not sufficient for everything, but if we don't have coverage of even that, then nothing else really matters, in my opinion. Right. So here, I think John is asking: should he do a KEP? And I would say yes; I think we should explore this more. Okay.

So I made a comment in the chat, which I guess is related to some of the other comments that were made, but it's more than just the behavior. If you're putting a tag on some test saying it tests this behavior, it's really hard to know what that means without going and reviewing the test, because you don't know if it adequately exercises that behavior and tests the corner cases that need to be tested. You don't know whether it tests those behaviors using acceptable mechanisms from the perspective of conformance. You don't know whether the test is going to be adequately portable, which is another requirement. So right now it's pretty hard to review conformance tests. We're not really at the point where you can turn a crank and say, I know how to create a conformance test that's going to be sufficient and acceptable. That's really tricky. So I'm never going to trust a tag that someone puts on a test; I'm going to go review the test. So yeah, I guess what it sounds like you're saying is that you're not necessarily in agreement that we can segregate the people reviewing the behaviors — this is what should be conformant — from the people reviewing the test. Basically, that was your first sentence, and then all the rest.
And I'm thinking those could be different: the person reviewing whether the test actually validates the behavior doesn't have to be the same person. Whether there's value in that — it sounds like you're challenging that assumption. I think we're not there yet. Theoretically it would be true that you could just have someone get a test to the last point where it needs to be approved. We've been trying to move in this direction, saying: look, is this a valid behavior to test in conformance or not? We have tests that cover it totally adequately and properly, and we just want to know, should we officially add this to the conformance suite? That would be beautiful and wonderful. It seems like we're far from that. Okay, well, I'll still move forward with the KEP, and then we can keep discussing it and see what we all think. Right — anything that gives us enough information to say "this is what is missing" so we can go do something about it is helpful, I think, John. Yeah, I mean, certainly I'm in favor of trying to come up with a list of behaviors that we should test. That seems like a valuable exercise. And in some cases I don't even think it's rocket science: just go read through the pod spec and cross out everything that's optional or non-portable, right? I mean, in the document I put together, I just spent maybe an hour looking through the pod spec — maybe not even an hour — and wrote out 50 or 60 things, I think. It's not difficult; it just takes time. Yeah. Yeah, for what it's worth, I have been talking with John about this approach off and on, and I'm in favor of it, because I just desperately don't want us to fall back to using a spreadsheet. I feel like that's kind of where this whole effort started way back in the day. So instead of using a spreadsheet, if we use YAML, that's fine, because I just want us to get to the point where we enumerate the list of behaviors, map out the state space, and then start to cross those behaviors off as we implement them. And so I think this is a great way of parallelizing: let's approve the dump truck of work, and then we can have other people work through the dump truck of work. And yeah, we definitely have to make sure they implement it in the right way, but I think this will help us scale. Right. Plus it'll help us prioritize: do these things first and the rest later. Okay, well, that's it for that topic then. We can move on. Thank you.

So the next topic is Brian, on coverage of etcd-dependent API server behaviors. Yeah, so this is one of these basic things that I've been talking about. We have been focused on promoting more e2e tests into conformance to get better pod coverage — pods being one of the basic primitives of Kubernetes. The other is the API surface of the API server somewhat generically, and there have been some proposals or attempts to create some sort of automatic tests of API endpoints and whatnot, but I actually think the tests that have been written in that area have not been super useful. What would be super useful is more rigorous testing of the behaviors that we inherit from etcd, because originally we had in mind a certain model for interaction with the API server, but out of expedience we kind of just lifted behaviors almost whole cloth directly from etcd. And we have more and more projects that are swapping out etcd.
There's the Cosmos DB implementation; there's K3s using SQLite, which is one of the most recent that I'm aware of. So there are a bunch of examples of this. I think SIG API Machinery had been working on adding a few more tests around watch behavior specifically. I don't know the current status, but the last time I saw it, it wasn't super rigorous — it just tested that if you create an object, if you update an object, if you delete an object, you get watch events. That's not remotely adequate for making sure that the behavior of controllers will be what people expect. You need to test things like breaking the watch connection and being able to reconnect and re-establish the watch. There are consistency-model issues where we haven't even really decided what behavior we want to officially support, and clients are building assumptions around accidental behaviors — for example, resource versions are technically, officially supposed to be opaque, but we don't obfuscate them, so people are doing comparisons on them in ways that we don't really recommend but don't strongly enough discourage. So there are a bunch of issues like that that need to be sorted out. Part of it is writing tests, and part of it is actually deciding what, for example, our consistency model is: if I write two objects, do I see the watch results for those two things in any particular order? Stuff like that. So we need to decide which behaviors we officially guarantee and which ones we don't, write some kind of spec for that, and test the spec. And maybe also think about ways we could force clients to adhere to the spec and not depend on things that are not in the spec. Yeah — if, say, we allow out-of-order watch events, then we could have tests that intentionally do that and make sure clients work properly. Well, probably we would have to make changes to Kubernetes to make it actually deliver things in a different order. Yeah.

Pretty lame question, but I'm assuming the functionality is not fully tested. So are we supposed to write more e2e tests to cover all the behaviors, or do we start with the spec, and then from there go and write the e2e tests and promote them? You're correct that we don't have enough tests. I think the tricky thing here is that the people we've asked to go write tests aren't sure what they should test. So yeah, we need to hash that out. Sounds good. Actually, we are working with the global team, and we are interested in writing new tests, but we don't know the direction, so that's one blocker for us. For this topic specifically, or more generally? Because so far we're finding e2e tests that we can promote, but etcd was one of the top priorities among the umbrella items that I logged for conformance, and we don't have a clear direction for how to progress. Yeah, we need to discuss with SIG API Machinery which behaviors we officially want to support and which ones need better testing. My suggestion for that team would be to de-prioritize the etcd-related behaviors and focus exclusively on pod-related behaviors, just because the etcd stuff is going to be significantly more subtle and does need the involvement of SIG API Machinery folks; the pod stuff, not so much. So I think we probably need to start by working with the API Machinery folks to come up with a first cut of what the watch semantics, the consistency model, all of this is. And I mean, it's just got to be done, right?
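To make the watch semantics Brian is describing concrete, here is a minimal client-go sketch of the pattern controllers depend on: watch, remember the last resourceVersion seen, and re-establish the watch from that point after the connection breaks. Whether resuming this way yields every intervening event, and in what order, is exactly the kind of behavior that currently lacks a spec and tests. This is illustrative only — error handling is omitted and client-go method signatures vary a bit across versions — not an existing conformance test.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List once to obtain a starting resourceVersion.
	list, err := client.CoreV1().ConfigMaps("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	rv := list.ResourceVersion

	for {
		// (Re)establish the watch from the last resourceVersion we saw.
		w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{ResourceVersion: rv})
		if err != nil {
			panic(err)
		}
		for ev := range w.ResultChan() {
			fmt.Println("event:", ev.Type)
			// Remember how far we got so we can resume after a broken connection.
			if obj, ok := ev.Object.(interface{ GetResourceVersion() string }); ok {
				rv = obj.GetResourceVersion()
			}
		}
		// ResultChan closed: the watch broke; loop and reconnect from rv.
	}
}
```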
Somebody has to sit down and write it. I can take a first cut at that and work with the SIG API Machinery team — they're going to be the ones who know best — but I'll get something started. Once we have something started, then we have something to debate over. Yeah, that would be great. On the pod topic, do we know where we are on the pod behaviors that are still not covered? I think that's the hard thing, since we don't have that list, other than the API documentation itself. So we probably need somebody to do that as well: go through the pod spec, cross things out, decide what needs to be covered, and then figure out from the existing tests where we are on that. Sounds good. Yeah, it is important that if we have to write new e2e tests, it happens at the beginning of the release cycle; otherwise we can't really promote them. Yeah. Another pretty basic area that's related to pods but goes a little bit outside of that is networking — basic pod networking. It's not clear to me that we have adequate coverage. Networking is another one of these things in Kubernetes that is super pluggable; there are lots of CNI implementations. I don't think we have tests that ensure that two pods on different nodes can talk to each other. I don't even know if we have a test for that. That would be useful to figure out. There are some tests around that, but when I looked at them, I'm not sure they actually do what we intend, because they all use host networking, which then... Yeah, so that doesn't count. Yeah, exactly. So theoretically that's what they claim to test, but I don't think that's what they're actually testing. Yeah. So there are a bunch of different networking combinations, and I don't even know if we have the basic thing — that two pods can talk to each other via their pod IPs — but there are a bunch of other networking configurations that we probably also want to make sure are tested. So it would be useful to figure out, in that area, what should be tested, what tests we have, and what tests we need. Sounds good. But again, then we need a bigger number of nodes in the cluster, like at least two. At least two — but we've already said that we're going to require at least two.

So let's move on to the next item, which is APISnoop user-agent filtering. So, one of the things we've been working on is adding the ability to filter by user agent; the user agent is now available. I think this will help us identify the pieces of software that are used within a system and what endpoints they're hitting. We have our initial branch up — there's a link there, but it's having some issues in some browsers, so I went ahead and pasted some pictures here. You'll note that CSI, our storage interface, is hitting some beta endpoints; this is just so we can be aware of which endpoints we're hitting. The search bar lets you do a regex across all of the different endpoints that are there, so we can look at pieces of software — we were calling them "capex" at one point — anything using the API, so we can define and research new behaviors. I think this might be useful in helping to address some of John's behavior-driven proposal; I haven't had a chance to look through the document yet, but part of the APISnoop analysis is to help us automatically define some of those behaviors based on analyzing a lot of our community data.
Beyond filtering by user agent, we're also looking to filter on endpoints, and trying to generate, based on our code, ways to filter on which SIG is responsible for an endpoint, so we can tie a particular application using the API server to particular SIGs. That flows into the board curation: Srinivas and I spent some time curating the board this week, and we were thinking about ways to send out weekly targeted emails which might combine a link to APISnoop, showing the applications that are using a SIG's endpoints, along with a link to the board and its issues, to try to increase SIG engagement. These are some of the links for Node, Cluster Lifecycle, and Windows that we were going to go through, but coming out of the meeting yesterday, I think that's probably where we go through the board in more detail, and this is more of a high-level overview. I just wanted to get some initial feedback on how useful it is to filter by user agent, and eventually by endpoints based on various metadata, and also on sending out links to the various SIGs and the related issues on the board. One of the ideas laid out here in the picture is focusing on CSI, the Container Storage Interface, as it interfaces with the cluster. I can think of other ways we could use that to focus on other components — the storage controller, CRDs, and whatnot. You know, one thing I wonder about: one of the discussions that came up earlier — and we didn't want to rat-hole on it — was around different areas of functionality within Kubernetes that we may want to have conforming behavior around, and the fact that some clusters, for instance one running on a Raspberry Pi, might not have any kind of persistent volume functionality at all. Something like this could help us understand what can run on a given cluster when it only implements a subset of the functionality — if it doesn't implement PVs, which components of the system may or may not function, or even which third-party tooling may or may not function — if there's a way to automate that, because we can see what APIs something is calling, and if it's calling APIs that aren't supported in some particular set of features, maybe there's a way to use this to flush out dependencies. When I was more focused on the conformance effort, back when I put together a presentation for Shanghai, a couple of ways I found this user-agent information useful were to be able to see which endpoints are exercised by a lot of tests and which are not, to give me some context: cool, we're touching this API endpoint, but only once — we're probably not hitting it with enough variation in parameters, so that would be an area to investigate for coverage. I also feel like the API coverage information would be more useful if we could find a way to filter out discovery API accesses, because today, if you just look at API coverage due to conformance tests, you'll see there are a lot of alpha and beta endpoints that are hit, and it's not that the tests themselves are hitting those endpoints; it's that kubectl — or sorry, the kube client or something — hits the discovery endpoint and walks every endpoint available first. If we could get rid of that, then we could start to really gate on the big red flashing light when a test is hitting something other than a stable endpoint. And I think that would be a really good sanity check for all of this.
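For background on where those user agents come from: they are just the User-Agent strings clients send to the API server, which then show up in the audit data this tooling analyzes. Any component that wants to appear distinctly in this kind of filtering can set its own agent string; a minimal client-go sketch, with the agent name invented for illustration:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Every request made through this config carries the extra User-Agent
	// component, which is what user-agent-based filtering keys on. The
	// string itself is made up for this example.
	cfg = rest.AddUserAgent(cfg, "my-storage-controller/v0.1")
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // requests made with this client now carry the agent string
}
```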
Filtering on test tags and that kind of thing could be useful, because all of the different test case names are each their own user agent, and that was really helpful to me for drilling down and exploring this data. One of the things you're recommending there — being able to see what's hit a lot — is actually a ticket for implementation in the next few weeks. It's a flare-style chart, so as endpoints are hit more, their segment gets longer. So the ones that are really long on the outer edge — I can drop a link to that — should help with easily identifying which endpoints are used a lot, which are used a lot but not tested, and which are used a lot and tested but not part of conformance. Assuming we also filter out the endpoints you were speaking about that are incidentally hit during the tests. Yeah.

We have six more minutes, so let's move on quickly. I think Chris briefly touched upon the curation of the project board. Essentially what we are looking for is a pattern to identify which SIGs to engage, and then also, on a periodic basis — we figured out there are lots of rotten issues that are still part of the project board, and those need to be manually addressed. That's all I wanted to bring up there. Chris, do you have anything else on that item? No; a lot of what we had written there earlier was around automation, and that's out of scope, so I removed those sections.

I think the last thing is making sure that, since William is here, we have a chance to talk briefly about KubeCon Barcelona. Awesome. Hi. Yeah, so I just wanted to go over the options for KubeCon Barcelona. We've been approved for a combined track, which is the intro and the deep dive together. So my plan was to present the intro deck that we have to anyone who's new there — apparently there's a huge number of attendees at KubeCon, so I imagine there will be some new participants and people who are interested in certifying — so that should hopefully be valuable content. For this group, the real question I have now is: who's going to be there? What topics do you want to discuss? And are there going to be enough people there, and enough topics, that we can have a valuable discussion in person — and what should that be about? So maybe we can start by just polling who's going to be there of the people on this call now. I'll be there. All right. I guess if everyone can just put their name in the doc, just so we have a rough idea. I saw Phil's hand — Phil's there. Okay. Let's test Google Docs' ability to handle all those types of things. Right. Right. So in terms of topics, as people put their planned attendance there — and you're not committing to it at this point, it's just who expects to be there — what kind of topics should we bring up face-to-face? Is there anything where in-person discussion is particularly useful, anything thorny, any kind of design challenges? So, I agree — I felt like the last working-group session was good for getting some consensus on topics we've discussed at length. I personally feel as though discussion around the concept of validation would be helpful. I think we have some preparatory work to get us there.
There's a PR that I think Brad Popel or somebody started, but I feel like we were talking about that as a way to do maybe node validation — or maybe we're talking about CSI validation and CNI validation and CRI validation — to capture a consistent set of behaviors across the different plugins that implement those things. I think we had talked about those as maybe a way of trading off against the concept of profiles, and there seemed to be some consensus that validation sounded like a good concept — maybe a good way to rename what we now call the node e2e tests, or the node conformance tests. But I feel like we haven't spent time fleshing that out and getting to actionable steps. I have a comment on that. Actually, yeah, I'm working on some e2e tests for storage, and it would be easy for me to put them into a validation suite first and then promote them to conformance if they fit there. That way I can make progress. So yeah, that's a good one to discuss. Okay. So would you be interested in presenting a short deck — three or four slides — just to kick off the conversation? Absolutely. Me or Aaron, yeah. Yeah, me. Yeah, not me — I'd rather not get involved in this one. Okay, I think that sounds great. You just think it's a good topic. Okay, so Srinivas, I think it's a good topic. Okay. Anything else that rises to the level of a good technical discussion that would be valuable to have in person? I think the question of what defines useful conformance and how we get there might be good. It could be a good time to review where we're at on implementing John's proposal, and whether the set of behaviors there looks meaningful when it comes to the types of applications it enables us to run. My AirPods are literally dying right now, so that's probably it for me. And we're at time. So, any other ideas — people can just add them to the notes here and we can take it offline as well, but that looks like a good starting point. Bye everyone. Okay, thank you. Thanks for the call. Thanks everyone. Goodbye. Bye.