Good to know. I posted a link to the agenda, to which I personally have added no items, but I figured if you want to, put your name on the attendees list. I know Tim had something on here about walking through the backlog, so I pasted a link to the project board and added all of the issues and PRs that have area/conformance since I last looked at the board, which brings us to a grand total of 49 issues to be triaged. And then, Srini, I wasn't sure if you had mentioned something on the mailing list about maybe updating us on the contractors' work?

Yeah, actually, I've been attending their meetings on Thursdays for the last few weeks, and they are working well; a pretty good team, I have to say that first. They have done pretty in-depth analysis on some of the problems, like pod lifecycle and the preStop hook. They found there are some behavioral things, and they did a few PRs there. Apparently one of the problems we are facing is that the PRs are taking a lot of time to get reviewed, so on some of the issues you have to keep rebasing and then hoping that somebody will review, which takes a bit of time. Other than that, the work is going great.

Also, on the DNS tests, one of the things happening is that a lot of these tests we are running into could be Linux-only kinds of tests, or should they be validation tests? That's a point I want to discuss a little, because we're not sure how to approach those. I'm talking about the existing tests, and we are also writing new ones. So one approach I am going with is, for storage first, I'm working on some tests and I'm tagging them as storage validation tests, so that there's a validation suite, and then we can eventually tag them as Linux-only if we need to. Is that the right approach, or...?
So usually we just write the test, and then promotion comes later, right? The conformance...

Yeah, I'm talking about the existing tests.

So during promotion to conformance, maybe we can tag it as Linux-only. That's what it feels like to me, because that's when the Windows folks are going to be affected, right?

All right, that makes sense.

Okay, so, something I haven't had a chance to look at in depth, but I've had a couple of PRs flow by that start to introduce the Linux-only tag. It's unclear to me whether that tag is getting introduced because it's a behavior that Windows will never, ever do, or if they're just trying to get it over the line and get this particular Linux variant of the test through, versus a Windows variant of the test. The one I most recently ran into was a test related to DNS that was split up: previously the test looked for both DNS and /etc/hosts, and it was split into a test that looks for cluster DNS and one that looks for entries in /etc/hosts, and the /etc/hosts test was tagged as Linux-only because Windows can't mount individual files for pods. I see Brian raising his hand in response to that; take it away, Brian.

Yeah, so I reviewed a lot of these test changes to tag tests with Linux-only, including that one. I pushed the Windows folks pretty hard to get a very detailed description of what is possible today, what will ever be possible, and what the reasons are for the limitations on Windows.
So please refer to the KEP, and if something is not clear from the Windows support KEP, we should get it added there. They've been very good about adding additional levels of detail; the latest one is ICMP support, which actually isn't well documented anywhere, for Linux or otherwise. On the /etc/hosts issue, yes, it is limited by the single-file mounting problem, and there are quite a number of features in that bucket. Quite a number of changes need to be made, in a number of components, to make that happen, as it's been explained to me, and that is summarized in the KEP, so it falls into a big bucket. There were claims made that maybe even OS changes were needed. It's not a hundred percent clear what an "OS change" means in that context, exactly how deep a change that would be, but it's definitely not a Kubernetes-only change that would be required. So I would suggest that we tackle all of those at the same time if that somehow becomes feasible in the future. They should all be labeled with the reason, "this is marked Linux-only due to the single-file mount issue," so it should be possible to find them. Maybe not fully automatically, because there's not a tag for that specific issue, but they'll have English text that explains it in the comment section. And for any test that got marked Linux-only, I asked them to enumerate all the reasons it was marked that way, because for some things there were multiple reasons, like "this requires privilege, and it requires single-file mounts, and it requires some third thing" in some cases. So all of them should be tagged, to the best of our ability, with all of the reasons why they're Linux-only.

Does that show up in the description field that gets parsed out into the conformance docs?

Excellent question. I would assume we would want that information to show up in the conformance docs.
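The "findable, but maybe not fully automatically" point can be sketched: the [LinuxOnly] tag in a test's Ginkgo name is mechanically matchable, while the reasons live only in free-text comments. A minimal illustration (the tag convention is real; the sample list of spec names below is invented):

```shell
# Illustration only: [LinuxOnly] is a real naming convention for e2e specs,
# but this sample list of spec names is made up for the example.
cat > specs.txt <<'EOF'
[sig-network] DNS should provide DNS for the cluster [Conformance]
[sig-network] DNS should support /etc/hosts entries [LinuxOnly] [Conformance]
[sig-node] Pods should run with host privileges [LinuxOnly]
EOF

# The tag itself can be matched mechanically...
grep -cF '[LinuxOnly]' specs.txt   # -> 2
# ...but the *reason* (e.g. "single-file mounts") only appears as English
# text in the test's comments, so auditing the reasons still needs a human.
```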
Was that your intent? The conformance docs are the thing that scrapes the tests and lays down an ID, the name of the Ginkgo test, the description, and the release the test was added in, right, Srini?

I don't know the answer off the top of my head. I agree there needs to be an automated way: if we were to try to ascertain conformance for Windows, there would need to be an automated way to sort that out. But given that it's not part of conformance yet, I didn't worry about that step yet. We haven't even figured out if it's going to be additive or orthogonal or whatever, right?

Okay. Srini, do you maybe want to take an action item to have the contractors confirm that that has been done?

I will, yeah.

Okay. If you can make an issue, tag it as area/conformance, and add it to the backlog, that way we'll know it's an outstanding piece of work and that it's getting worked on. Also, hi, I really didn't want to run this meeting because I'm not super actively involved; I'd love to hand stewardship of this over to somebody else, but I see Tim's name is on the other two agenda items, and he's not necessarily here. I did just want to point out that code freeze is coming up in the next week and a half, so if we do want to push to get any more conformance tests added, now would be the time, and to help with the review problem that Srini mentioned: I too encountered review-bandwidth issues, and just general shepherding and ping-ponging of PRs across a bunch of different SIGs to help out the contractors. Looking at the existing conformance test dashboards on testgrid, it appears the 1.13 release of Kubernetes contains 214 conformance tests, and the 1.14 release, as it currently stands, contains 216, for a grand total of two additional conformance tests. I feel like we could be doing better.

Well, yeah, as soon as the reviews happen we'll probably have a lot more.

What active steps are you taking to ensure reviews happen?
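The conformance-docs scraping described here keys on a structured doc comment above each conformance spec. A rough sketch of what the scraper looks for (the Release/Testname/Description field names follow the documented convention; the sample spec content is invented):

```shell
# Mimics the doc-comment convention above a conformance test in Go source;
# the field names are the real convention, the content is illustrative.
cat > sample_spec.txt <<'EOF'
/*
  Release : v1.14
  Testname: dns-cluster-resolution
  Description: Pods MUST be able to resolve the cluster DNS service name.
*/
EOF

# A docs generator can pull these structured fields out mechanically:
grep -E 'Release|Testname|Description' sample_spec.txt
```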
Actually, one of the things I'm trying to do is get to the individual SIG meetings and bring up this issue. I haven't done that yet, but that's probably the best way, because on the issue itself responses can be slow.

Sorry, I'd meant to ask when you were mentioning the review problem: are these new tests being written, in which case they're not even conformance yet?

There are, yeah. There are also existing table tests that are being converted to individual tests, and there are tests written recently for the pod lifecycle. Yes, there are tests that need to be reviewed.

So can I maybe take a step back and ask: hands up, who here has been reviewing Kubernetes test or conformance PRs in the past month? Srini, I can't see your hand; you're not on video.

When I get tagged I review; otherwise, I'm not active.

So of the 12 people attending this meeting, there are only two people who are even bothering to drop LGTMs, is what I'm hearing, and I feel like perhaps we could do a better job of addressing that.

Yeah, so process-wise, what I generally do is: if it's a new test, or a significant change, even a refactoring of an existing test, I make sure that a domain expert in the SIG looks at it. Just as one example that I like to pick on: there were some changes being made to a networking test, and they said, "all these things don't need to be privileged, and the test still passes." But actually, what the test was doing was trying to connect to the kube-proxy port on the same host, and it was totally unclear that the test was doing the same thing anymore with the changes that were made, right? So we need to get the attention of the folks in that SIG, and I strongly recommend not relying on GitHub notifications. I will not look at anything if you just tag me through a GitHub notification.
I am signed up for tens of thousands of issues and PRs, so it is all just white noise. It helps to apply structure. If there are multiple tests that need to be reviewed by a SIG, I suggest framing it in an email and sending it to that SIG's mailing list, saying, "look, we need coverage of these features, which cover functionality of your SIG; can you please make sure we have the right people looking at those changes?" Then, once it has passed the SIG level of review by the appropriate reviewers and approvers in that SIG: if it's just a test change for a test not labeled conformance, it's done and can just get merged. If the test is already labeled conformance, or it is being added to conformance, then one of the conformance approvers needs to be added, and there are only four of those right now, because we need to onboard more folks. So if you are interested in that, definitely volunteer, and we can start doing some sort of shadowing, much like we're trying to do with the API review process in Kubernetes. Right now those people are Aaron, Dims, Tim St. Clair, and me, and, well, I guess there's a fifth, Clayton, who will sometimes review things when he's tagged, but he's not as active on that. And even among those people, if you're not comfortable approving because you don't have the domain expertise, definitely loop in those domain experts. It's really important that we understand the intent of what the test is supposed to cover, and some parts of the system are more subtle; I'm particularly thinking of networking, but there are probably others as well. So don't feel bad about poking those folks, but definitely do not rely solely on GitHub notifications.
They are worse than useless.

So what I'm trying to say is, we've tried a couple of different "let's use a smaller, more focused channel to make sure people are aware" approaches. There was a project board in the architecture tracking project where I would move things around in the columns, and then I would try to ping Brian or Clayton directly in Slack and point them at that column when things had stacked up there.

That works well for me, actually.

Okay, so we could go back to that model. Another thing I feel like you suggested on the mailing list, when Patrick had a question he wanted addressed, was to just use the mailing list. We don't often send a whole bunch of traffic on the mailing list, so we could send something about a PR being ready for conformance approval there.

Yeah, so there's the conformance mailing list, which is super duper low traffic, so I don't have to filter it, and probably other people don't either if they're really interested in this area. There are also the SIG-specific mailing lists. So again, GitHub notifications are super hard: the teams don't have the right people in them, a lot of people are subscribed to issues and PRs and are no longer the best people, or the right people at all, and so on. So I really recommend using other forms of contact; GitHub notifications are completely unmanageable, for multiple reasons, and adding a little bit of structure and context will help people understand importance, urgency, and the bigger picture around it as well.

Right. So I understand shepherding, among other things. What I want this group to get to, to help out newcomers such as Steve who have volunteered to help us with this problem, is to make sure that we're all looking at one list that is prioritized according to this group's agreement. That is not something that I personally, individually, can do.
I feel like we owe it to ourselves to prioritize that list and get agreement that that prioritization is correct. That's why the project board I have linked in the meeting notes has a "to triage" column and then a prioritized inbox column (is that what I called it?), a sort of backlog column. We could be working off of that backlog, but I also want us to use one channel for that high-signal "hey, we think this is ready for conformance approval." So to me, using the mailing list, which is currently low traffic, would be the path forward, whether that's a link to the PR in question or a link to a column on our project board. That way we are all looking at the same thing and we're all using the same channel. Beyond that, I've got to stop talking, because I'm really focused on 1.14 and haven't had any time to dedicate to this.

Yeah, one thing I can do: based on some of the areas Tim asked us to work on, I've created a spreadsheet. I can update the status in that spreadsheet on a weekly basis and send a link to it to the conformance working group mailing list weekly, to start with, and if that's too many emails I can do it less often.

Srini, I like that idea, because once we get into some kind of rhythm, people will say, "oh, okay, there's this email; there are these four things I can help with, so go do that, then go do something else." Just getting into that rhythm seems like the right thing to do here.

I agree. Yeah, that's a good idea.

Hippie Hacker, did you have something to say? We can't hear you. Still can't hear you... okay. While we're waiting, I'll just do my usual spiel, which is that I have found spreadsheets to be a source of pain, because you ultimately have to reconcile the spreadsheet with the state of GitHub, which is where all of the work actually happens.
Which is why I would suggest that we try using a project board and consistent use of priority labels, if nothing else, to give us some form of rough ordering in the buckets.

If that works for you, then please, by all means. I absolutely agree, but there are items the contractors had questions about, whether to start working on them, and there are things that may not show up on the dashboard.

My response to that would be: why aren't those GitHub issues? Just to better help us visualize the scope and volume of the work and who is assigned to what.

I wanted to echo Aaron's thoughts about not using the spreadsheet, and see about having someone go through this on a regular basis, like weekly, and make sure we have this email after we've gone through and contacted the SIGs: going through the SIGs that week, then going through the prioritization and looking at those tickets, with an email. And I know we've talked about doing the spreadsheet; maybe we shadow where that spreadsheet is now and see if we can combine those techniques. I'd be willing to step into that.

Yeah, and just for my own workflow, I also prefer a GitHub-based solution: not notifications, but the project boards. When they are up to date, they have been working well for me. They're easy to discover, easy to search, and easy to manipulate if you're in the right permission sets, so it's really low friction.

Yeah, that's my preference as well; we're using it for the kube docs and it does work well, so hopefully we can move as much as possible into that.
Yeah, so the distinction between the architecture board and the main board is that the main board has lots of conformance-related tasks in it, and the architecture board was specifically focused on the final approval of changes to the set of conformance tests, which that small group of approvers needs to sign off on. That created a really super-high-signal place that I could pull from, when notified through some other high-signal channel, to go work through the set of things that were ready.

Srini, can you paste a link to the current spreadsheet itself in the doc? Maybe that'll help people go look and see which of those columns should go where in GitHub.

Well, this is very rudimentary right now, but eventually the idea is to add a couple more columns to see if these are blocked issues, and there will be sub-issues that we'll be adding here, and the status of those sub-issues. For example, for the preStop hook we added a new test that still needs to go through the SIG to get added, and then we'll push it to conformance. That level of tracking I cannot do through a project dashboard, but in the spreadsheet I'm planning to do it.

We use umbrella issues for things like this too. Right in the first box we have the list of items, and we check off each one of them when it's done. That might be helpful as well, in addition to the project board; project board plus umbrella issues should be able to do what you are doing in the spreadsheet right now.

Wait, sorry, I'm looking at the spreadsheet and it just has links to issues, and that's totally something a project board can do, so I don't understand the problem. By the way, in Kubernetes we're also working on a bot that will automatically populate a board from a query, so hopefully that will make it easier to slurp content into project boards automatically.
But for this number of things, like 12 things, they could just be copy-pasted into a project board in like two minutes, right?

I agree. We caught it early. Sounds good; I can work on that. But still, for the weekly email, do we want to assign someone to that?

I'd like to step up and be involved in the board and in reaching out to the SIGs, if someone wants to co-do that with me.

I can just take it. I would be happy to help with that, and we can coordinate.

Thanks, Srini. Let's do that. Yeah, I think the weekly email will really help, because I'm sure we're all getting pulled in different directions. I'm going to fall on my sword and say, yeah, I haven't done any reviews because I got distracted by 20,000 other things. But seeing the email and seeing what's going on will surely help me, and also a lot of you folks who are even busier than I am. It was a good point, Aaron, to pull that out.

That has been happening; it's been a crazy start of the year. I mean, I just raise it because, to champion the individual who's not here, Tim claimed he had rallied the troops and was going to bring some review bandwidth to this group. I understand there are people who need to talk about the strategy and the path forward here, but we also actually have to have people who do the work. So I'm just trying to make sure we have the right structure in place to encourage that level of growth, and that we have the right people showing up.

Perfect. Yeah, in particular, I'd sent out the email about the priorities to the list, and there was an action item to convert that to issues; has that been done? I am not aware of a one-to-one mapping of that having been done. I believe that's why you're seeing the contractors work on some things related to lifecycle hooks, termination, and things of that nature.

All the items that are listed in that email are now issues on GitHub.
If there are any others, please send them to me; I'll file the issues and follow up on them.

May I bring up another issue? For the storage tests, we need some setup and bootstrapping to run them, so we're creating a storage validation suite, kind of like node conformance. At some point in Seattle we discussed that instead of calling it node conformance, we should call it node validation, and similarly, for storage, I'm taking the approach of adding a tag called storage validation. Is that the right thing to do, or is there another...?

Yeah, so if it's not going to be part of the conformance suite, especially if it doesn't interact with the whole cluster as an end-to-end test, then we should call it a validation suite. This came up, I think, just earlier this week as well, where SIG Node is planning to create a test suite to validate CRI implementations, and I requested that they not call those CRI conformance, that they call them validation, just to reduce confusion. We're going to need that for CSI; we're going to need that for other things. The big challenge we have with storage (I haven't had time to get back to it) is that we don't have sufficient abstraction around all the different volume sources, with consistent functionality, such that we can really test it in a general-purpose way. I think it is possible, but it needs more thought. I don't know the current status; SIG Storage was working on a proposal at some point, but I put working with them on the back burner to get some of the more basic, urgent things covered.

Yeah, I'm more interested in the general approach, so that we can have uniformity in how we tag the tests. But which APIs would the storage validation cover, the CSI level or the API level?

Some of them are API level; some of them are CSI level.
There is a list we're working off of right now, which involves some things like volume sharing between different pods, and...

Wait, volume sharing between pods can be done with emptyDir and should be portable, so that could totally be just conformance and should not be storage validation.

Right. Yeah, it could be. Then there are CSI-driver-level tests; if there are any with the default storage class, those tests may not be portable, and maybe in those cases we need to have...

Right. So if it's CSI level, I would call it CSI validation. If it's testing optional functionality, or functionality that might not be entirely portable, those are just end-to-end tests, assuming they are end-to-end. I don't know that they need a specific term to describe them; we have lots of end-to-end tests covering such parts of the system. Is SIG Storage driving this, Srini, or are you doing it?

I'm helping out SIG Storage. They are driving this, and they are identifying the tests, but again, we need to bucketize those tests. Like Brian said, we can blanket-call them storage validation, or we can subdivide them as CSI validation versus Flex volume validation or whatnot, right? Because not everything is necessarily available by default without some bootstrapping on the cluster we are running. So I kind of get it, and for CSI we can start using CSI validation. But I cannot think of tests that are not portable yet fall under the blanket of storage validation.

Well, an example of portable tests having to do with storage would be most things emptyDir-related. There are some exceptions to that, depending on the media type, but local-storage emptyDir should be portable.
It's all the other network-attached volume sources that are non-portable and optional. And that's where, you know, I'd ask them to think about how we could abstract that into some kind of basic functionality that could be represented through the default storage class, which a PVC could take advantage of, and those could be candidates for incorporating into conformance some way in the future. But still, I would just categorize those as end-to-end tests. It's really the extension-point validation tests, where they want to determine "does this Flex volume work at all, does this CSI driver work at all," where maybe we need a different set of tests, more like the original node conformance tests, where they actually launched kubelets as part of the test and just exercised specific kubelet functionality. Those sorts of lower-level tests, which mechanically aren't even possible to incorporate into conformance when written that way, clearly should have a different kind of name, where someone who's building a cloud provider or a CSI driver might want to run those more targeted tests that just exercise that interface.

Yeah, this is good. That's what I'm targeting, basically, and I'm trying to name them as validation suites, to subcategorize the tests. So is that the right approach? One of the things I also wanted to do is take the node conformance tests and call them node validation tests, but then there would be a much deeper impact if we do that; I don't know, because right now there are scripts that already refer to them as node conformance tests and things like that. Should we provide some direction on how those tests need to be changed?

Naming is hard. Let's get the tests in first, and then we'll rename and tag them, I guess.

To respond to Aaron's last comment:
I don't think it's a new way of punting on profiles, because, again, like the examples that have been raised to me in other forums, such as the CRI tests: if a test is literally just exercising the CRI extension API, then it will never, ever be a conformance test, which requires a whole cluster. So for those kinds of tests, I just pleaded with them not to call them conformance suites, because in the past, as with node conformance, that has engendered confusion. They should, from the get-go, call them CRI validation or something like that, because they have a totally different purpose.

So this is... go ahead.

Sorry, so this is continuing one of the items I have at the end of the agenda. SIG Cloud Provider wants to have a test suite that validates behavior for in-tree and out-of-tree providers, so providers can use that suite to make sure that when they do the switch-over, things work as expected. Yeah, I talked to Tim about profiles, and he said that was kind of stalled right now, and I personally don't care what it's called, but I would like to know best practices for how we label and categorize these tests. Are we good to just add a bunch of tests in-tree and then worry about what to call them later, or do we want a solid plan for what they're going to be called now, before we go and add a bunch of tests in there?

Would these cloud provider tests use the existing end-to-end framework, or some new sort of integration-testing mechanism?

That's something I would want guidance on from this group. It would be much easier for us, because there's already a provider framework in the end-to-end tests, so leveraging that would be nice, but with the whole out-of-tree direction, I'm not sure how much it makes sense to be adding more provider-specific tests into k/k.

Would these be provider-specific, or would they be generic for all providers? Sorry, one second, I had a phone call.
They're not... so the tests wouldn't be provider-specific, but we would still need underlying implementations that do provider-specific operations, if that makes sense. One example is the delete-node case: the test case, what we're testing for, is generic, but we need to call an underlying implementation that can call the AWS or GCE API and delete the node to validate that behavior.

Well, as far as how you mechanically implement this, I think this is a good discussion for the testing-commons subproject, which I literally have to make a Zoom for right after this meeting; their next meeting will be Friday at 7:30 a.m. Pacific time. It's the subproject of SIG Testing where we talk about best practices for how to write and architect tests, for what it's worth. And I feel like there's a concerted effort to try and extract the provider-specific stuff out of the e2e framework, so that would be the right audience to ask those sorts of questions.

Okay, but I guess we know for sure that we don't want to be labeling these tests as conformance; we may have profiles in the future, but that's a different discussion, tabled.

Correct. So what I would do for tests that exercise cloud provider functionality: I'm good with labeling them cloud-provider or something like that, so that if a cloud provider just wants to exercise the set of tests that will check whether their cloud provider implementation works, there's an easy focus they can set to do that. And if we develop a profile that incorporates common cloud provider functionality in the future, it will be easy to incorporate them, but also easy to exclude them right now. So I wouldn't use conformance in the name at all; something like the "Feature:CloudProvider" label that Aaron is typing in the chat seems like a good way to go, assuming they're going to use the end-to-end framework.

Okay, sounds good. Thank you.
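The "easy focus" mentioned here works through Ginkgo's focus and skip regexes over test names. A hedged sketch, with the Feature:CloudProvider label taken from the chat suggestion rather than any merged convention:

```shell
# With the e2e binary, selection would look roughly like:
#   e2e.test --ginkgo.focus='\[Feature:CloudProvider\]'   # run only these
#   e2e.test --ginkgo.skip='\[Feature:CloudProvider\]'    # exclude them
# Demonstrated here against a made-up list of spec names, since the e2e
# binary isn't available in this sketch:
printf '%s\n' \
  '[sig-cloud-provider] Nodes should be removed after cloud delete [Feature:CloudProvider]' \
  '[sig-network] DNS should provide DNS for services [Conformance]' \
  > cp_specs.txt

# The same regex a focus flag would use, applied with grep:
grep -E '\[Feature:CloudProvider\]' cp_specs.txt
```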
I feel like you're going to run into the same sort of issue that the CSI stuff would run into, where there's a certain set of behavior that you expect to work across all cloud providers, but there is also some cloud-provider-specific stuff. You know, maybe GCE offers some things that Azure doesn't offer, that AWS doesn't offer, that sort of thing, and you're going to want ways to exercise that. But it sounds like what you're talking about is making sure there's a minimum bar that all cloud provider implementations meet, to do things like node management, storage management, load balancing, yada yada yada.

Yeah, and pretty much, given the profile stuff is still stalled, they just wanted to know next steps for what we can do now, and it seems like we can just add tests in-tree, labeling them with Feature:CloudProvider, and then figure out how to put them into a profile later.

Brad?

So Aaron made a great comment about "hey, we talked about this, but nobody ever wrote it up, and it's a little squishy." So myself and Srini will be happy to take a first draft at writing up what we talked about and why we're calling things validation suites, if that's okay with everybody. Because, I mean, you all are verbally talking about what we agreed to, but anybody who doesn't have that tribal knowledge is going to get frustrated, and I think that's a great point, Aaron, so we'll take that.

Thank you. Would that be part of the existing best-practices document somewhere?

It would not be part of the conformance documentation; it would be part of our testing documentation.

Fantastic. Srini, I think you had a couple of items on the agenda that we haven't gotten to yet.

So, if you remember, we were talking about adding an end-to-end test for the conformance image. I ended up filing a PR against test-infra for that; please take a look when you get a chance. Right now it'll run only one test.
This one is just to kick the tires, and it's an optional test which will not be triggered automatically; you have to trigger it by hand if you want to try it out, so it doesn't block anyone. Once we get that working, I can remove the restriction of just one test and run all the end-to-end tests, so please take a look at that. Sure. Yeah, it's easy to try out in your local environment too, because it uses kind, and it has some scripts to build the conformance image, run the conformance image, and get the e2e log and the JUnit log out. So please take a look at that. If anybody else is interested, Steve, you might be interested too, because you're doing stuff with Sonobuoy. This is the Sonobuoy image which got imported into the k/k repository last cycle, and this cycle we're trying to do a little bit more with it.

I also had a question about whether or not this supersedes an existing PR that Srini opened back in October to try and get conformance tests running with this image. Yeah, I think it supersedes that one. I don't think we made much progress there. We did not make much progress. We need to add the job into the... Right. So I've lined up all the ducks in a row now. There were some changes in k/k I had to make that merged this morning; Tim did that. Okay. Yeah, I think there were issues with the image being built that blew up the timing of releases being cut, so I'll keep an eye on that. Thanks. Okay. No, we haven't changed how the image itself is built. I'm not adding it to any of the make cross targets or any of the things that we use for the release itself; it doesn't show up in make release. Okay.
So then the next question I had: I think we talked about publishing an exact list of tests for each release, and using that list to check what ran. Even the conformance folks get a text file saying these are the tests that ran, and you compare it against that list to make sure they didn't skip any of the tests. Right. Do we have such a list? Are we publishing it with any of the releases yet? Like I mentioned last week, there's a KEP that is still out there, not yet approved. And I do have a mechanism I'm testing out to build this document as part of the release build, so it will be part of the tarball under docs, if that KEP goes through. That way, the conformance document would be generated as part of each of the minor releases at least, so it can be a reference document. Right. So that was the first part of the equation, and the second part of the equation: I remember BenTheElder had a PR out which would take two sets of lists and compare them to see if anything got skipped, to give people a tool so they can check if they accidentally skipped any of the tests. So I have to go chase that. Those are the two things I had, Aaron.

Okay, yeah, as far as I'm aware, we don't currently produce a list of conformance tests as a release artifact, and the KEP for that didn't make it to implementable status by the enhancements freeze. So we have to proceed with what we have been doing thus far, which is a manually produced list of tests being sent to the k8s-conformance repository. I feel like we have a number of PRs that Srini has opened that are a little backlogged on cheftako's approval, which I can try to poke on some more. I feel like I've been poking the past couple of weeks, but I can try again. Thanks. Thanks. Wait, is this different from that conformance.txt file, the list of the subset of tests that are included?
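The comparison tool being described is essentially a set difference between the published per-release list and the list of tests a run actually executed. A minimal sketch (the test names and the `missing` helper are hypothetical, not BenTheElder's actual PR):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// missing returns the tests present in the published per-release list
// but absent from a run's results -- i.e. tests that were skipped.
func missing(expected, ran []string) []string {
	seen := make(map[string]bool, len(ran))
	for _, t := range ran {
		seen[strings.TrimSpace(t)] = true
	}
	var out []string
	for _, t := range expected {
		if !seen[strings.TrimSpace(t)] {
			out = append(out, t)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// In practice both lists would be read from text files: the
	// release's conformance list and the run's results.
	expected := []string{"DNS for cluster", "Pods lifecycle", "Services endpoints"}
	ran := []string{"DNS for cluster", "Services endpoints"}
	fmt.Println(missing(expected, ran)) // prints the skipped tests: [Pods lifecycle]
}
```

The value of publishing the list as a release artifact is precisely that this check becomes mechanical instead of relying on a manually maintained file.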
Actually, I'm kind of unclear; that list is hard to deal with sometimes because it strips away all extraneous tags, so you just get some of the text. What I'm talking about is a markdown document that's generated, containing the blocks of comments parsed out a little bit: what release this test was added in, the human-readable description of what this test does, stuff like that. Okay. What about a machine-readable thing, Dims? Yeah, machine-readable would be more like it, because then we could have a tool that spits out whatever view you need. This is one of the things we've been looking at with APISnoop: to have basically a website to go to where you have each of the APIs and their history of when they were added, when they were promoted to conformance, and which applications within Kubernetes hit them in tests, since we know their user agents, and eventually which applications in the community are hitting them. That's of interest. We could also generate the user-generated portion if we want to do that.

Okay. Yeah, that's more information than we have right now, and I think we need to consider who's going to consume the information, who the audience is. If there is anything that is actually checked into the repo, I think it should be relatively straightforward to get it added to the release bundle, because we copy a bunch of other files into the release bundle. At some point we used to include the whole source tree; I don't know if we're still doing that now. But one thing that I think would be straightforward to do, which I requested recently (I just put a link in the notes and in the chat), is for anybody who is reviewing changes to the conformance tests to ask the author of the PR to include a release note. That should be a relatively straightforward thing we can do.
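The generated document described above is built by parsing structured metadata comments attached to each conformance test. A minimal sketch of such a parser, assuming a simple `Key: value` comment convention along the lines of the Release/Testname/Description fields mentioned (the exact format in the real generator may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMeta pulls "Key: value" pairs out of a conformance test's
// metadata comment block, e.g. Release, Testname, Description.
func parseMeta(comment string) map[string]string {
	meta := map[string]string{}
	for _, line := range strings.Split(comment, "\n") {
		line = strings.TrimSpace(line)
		if k, v, ok := strings.Cut(line, ":"); ok {
			meta[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return meta
}

func main() {
	// A hypothetical metadata block, as it might appear above a test.
	comment := `
		Release: v1.16
		Testname: DNS, cluster
		Description: The cluster DNS service must resolve service names.`
	m := parseMeta(comment)
	// Emit one markdown entry for the generated reference document.
	fmt.Printf("## %s\n\nAdded in %s. %s\n", m["Testname"], m["Release"], m["Description"])
}
```

Because the parsed result is a plain map, the same data can feed either the human-readable markdown document or a machine-readable export, which is the trade-off being discussed.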
That would probably be of interest to some set of people, but the release notes, in my mind, are the place where cluster administrators, people building clusters, Kubernetes providers, as well as end users should look for what has changed that's relevant to them in a release. We might want to think about breaking down the release notes by audience, though that's a lot more work. But at least for now there aren't too many of these changes, like Aaron said, in the coming release or something like that, so just ask for release notes on those changes for now until we have a better solution.

That only compares the delta, but if you're running, say, 1.16 and one or two tests are skipped, there should be a mechanism to compare what's been skipped. That's what I think he is asking about. So, on the skipping: I actually commented on this somewhere as well, or maybe I filed an issue. We have a relatively small number of skip directives in the test framework. I think when a test is labeled conformance, we should just set a global variable, and if that thing is set when skip is invoked, it should just crash. We should not allow skip to be invoked during a conformance test. Are you talking about the Ginkgo Skip itself, Brian? Because Ginkgo's skip will make sure it never gets to the point where the test is invoked. I'm talking about the skips that are in the code. Yes, SkipIf, that sort of stuff: skip if provider is blah blah blah. I don't even think it needs to be a linter in this case; it could just be an assert. Yeah, those skips we already removed. No, I just found one. I just found... You did? Okay. That's why I brought this up. Okay. I believe there's also still some confusion about the appropriate regular expression to be used when executing the tests.
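The "global variable plus crash" idea Brian describes can be sketched in a few lines. Everything here is hypothetical naming, not the real e2e framework API; it just shows the assert-instead-of-linter approach:

```go
package main

import "fmt"

// inConformanceTest would be set by the framework when a test tagged
// [Conformance] starts. The name is illustrative only.
var inConformanceTest bool

// skipf models an in-code skip directive (e.g. "skip if provider is X").
// Skipping is allowed in ordinary tests, but panics inside a conformance
// test, since conformance tests must run everywhere unconditionally.
func skipf(reason string) {
	if inConformanceTest {
		panic("skip invoked during a conformance test: " + reason)
	}
	fmt.Println("skipping:", reason)
}

func main() {
	skipf("provider is not gce") // fine in a normal test

	inConformanceTest = true
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("caught:", r) // the crash a real run would surface
		}
	}()
	skipf("provider is not gce") // crashes under conformance
}
```

The advantage over a linter is that the assert catches every code path that reaches a skip at runtime, including ones a static check might miss behind helper functions.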
Maybe we can hammer this out as we hammer out the test image, Dims, because I felt like at one point in time we discovered that Sonobuoy was using a skip regular expression that included things like Alpha, kubectl, Feature-whatever, and you and I have been trying to reduce the delta so that all you have to do is focus on conformance. That's it. Full stop. And it should be really easy to generate the list of tests from that. Right. Just related to that, Steve, are you working on Sonobuoy too? Yes, yes, I am. So I'm going to help out with Tim; Tim's just here to help me help everyone here. I'm happy to help with reviews and shadow things. Sounds good. So I guess by the time the next release opens, let's see if we can get to a point where Sonobuoy can run the conformance image that's in the main k/k repository, so then we won't have to worry about two images. Yeah, that's one of our action items, to actually get that working. Yeah, absolutely. Thank you.

Okay, we've reached the end of our agenda with four minutes to spare. Four minutes. We did not talk about KubeCon. Is there anything... who's organizing or running the KubeCon stuff? I guess that'd be me. So, my goal with this: I've requested a 50-minute session, as a combined session; I think that's more valuable. The goal is that we spend a short amount of time giving an intro to anyone new, but then really just focus on making progress on tasks that we have, because I think that's the value of us actually being there. So with that in mind, I am collecting topics; if anyone has something they want to raise that would be helpful to hash out in person, let me know. We know, for example, last time in Seattle... How much more progress do we think we can make there than during this meeting?
I don't know; one of the main benefits of the last KubeCon meeting was that it included people for whom these video meetings are hard to make, like people from Australia, for example. But that was like one person. I'm not going to be there in Europe, and probably a lot of other people from the US are not going to be there either. So I don't want to have another meeting where we end up just rehashing all of the content of that meeting during one of these meetings afterward. Yeah. Well, maybe I'll put out a call and see who will actually be there, so we can gauge how useful it will be and whether there's critical mass. Does that sound like a good next step? Yes. Sure.

The only other thing I had for this group: I noticed that the k3s folks stripped out etcd and they're using SQLite3, and they pass the conformance tests, so I don't know what we want to do for things like that. So please, everybody, take a look at that; we'll put it on the agenda for next time, and follow up on the mailing list if you find anything interesting with respect to the SQLite thing.

That is one of the reasons why I said we need to cover all etcd-dependent behaviors in the conformance tests. We also have the Cosmos DB implementation by Microsoft. People are starting to swap this out, so we really need to make sure that the semantics we expect to be respected are actually implemented and covered by the conformance tests, because workload portability is one of the goals, but so is ecosystem tool portability. There are literally hundreds of operators now; we need to make sure that those things just work. With respect to the other things they removed, I haven't looked at that in detail. That bothers me less than coverage of pods and the API server and networking; those three things really need to be very well covered.
Being certified for Kubernetes 1.13 does not mean you are certified for all future versions of Kubernetes. That is correct. Yeah, so please follow up on the mailing list if you discover anything interesting, and let's have a more informed discussion about that at a future meeting. Thanks. Sounds good. One minute. Okay, see you all later. Bye, everyone. Bye.