Hi everyone, Tim from Google here, with William, Jagger, and Mehdi. We'll just give everyone a few minutes to arrive, then we'll start. William, would it be possible to update the invite with the agenda doc, for future meetings? Okay, yeah, I can ask the Linux Foundation people to do that; I don't think I have access. Actually, I was wondering if it's possible to just have one long-running agenda doc, so that it doesn't have to change every week. Yeah, we could do that. We could do both. Yeah, both. All right, let's use today's one, and then there's the ongoing one; I'll get it added to the agenda. All right, we may as well get started. Dan Kohn is going to be a couple of minutes late today, but he has a very exciting update for us, which I won't spoil. Okay. Jagger, you had a couple of topics. Yes. So you linked the agenda into the chat, is that right? I put it in; I can pin it in the channel. Yeah. So if you just open that up: the thing I want to make sure we over-communicate, in these first few releases while the conformance program is getting off the ground and we're solidifying the communication channels, is that there have been three conformance tests that I know of added to 1.10, and I just want to call that out here. One is in the workloads API; there's a link to the DaemonSet test that was added. These are e2e tests which existed before and have now exercised the process of going through ConformanceIt to make them conformance tests, getting them added to the golden list of conformance tests, proposing that to SIG Architecture, and getting approvals from folks in SIG Architecture to include them in the conformance test suite. In 1.8 and 1.9 there were no changes to that list of conformance tests, so this represents the first set of new tests added to the conformance program, and I just want to call out that we have exercised that process, both in the workloads API and in API machinery, where there's one for garbage collection.
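As background for the promotion process described here: mechanically, a conformance test in the Kubernetes e2e suite is an ordinary spec whose name carries a `[Conformance]` tag, and a conformance run applies a focus filter on that tag. The stdlib-only Go sketch below mimics that mechanism; `conformanceIt` and `selectConformance` are hypothetical stand-ins, not the upstream framework API.

```go
// Minimal sketch of how conformance promotion works mechanically. In the
// real e2e framework, a helper wraps the test registration and appends a
// "[Conformance]" tag to the spec name; the suite is then run with a focus
// regexp matching that tag. Names here are illustrative stand-ins.
package main

import (
	"fmt"
	"regexp"
)

// conformanceIt mimics promoting a spec: it tags the spec name so the
// conformance focus filter picks it up.
func conformanceIt(name string) string {
	return name + " [Conformance]"
}

// conformanceFocus is the focus regexp a conformance run would use.
var conformanceFocus = regexp.MustCompile(`\[Conformance\]`)

// selectConformance filters a spec list down to tagged conformance specs.
func selectConformance(specs []string) []string {
	var out []string
	for _, s := range specs {
		if conformanceFocus.MatchString(s) {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	specs := []string{
		conformanceIt("Daemon set should run and stop simple daemon"),
		"Daemon set should rollback without unnecessary restarts", // plain e2e test
	}
	fmt.Println(selectConformance(specs))
}
```

The point of the sketch is that promotion is a small mechanical change to the test itself; the real work is the review and approval process described above.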
There's another PR, which I will find by the time this call is over, for watch. And my suggestion on this one, since it was not completed earlier and didn't get the milestone, is that we adopt a convention that we don't add conformance tests during the code freeze leading up to 1.10, and that we instead include it in 1.11. I'm open to other conversation on that, but that's my current thinking. A couple of quick questions. Sure. I think I understand we don't want to keep raising the bar for 1.9 conformance, but is there any reason we know of why the DaemonSet test wouldn't pass on a 1.9 cluster? There's no reason that I know of. It's more that we need some way to track the list of tests against which a cluster is being verified. And there may be some reason that a provider or a distribution or a platform just has a bug, or it isn't clear, and so we can't be shifting the definition of conformance after that code freeze. Totally. So, yeah, that's the philosophy behind keeping the conformance tests that are associated with a version as part of cutting the release branch for that version. Right. But if that's the only change, and maybe it's not, it sounds like a 1.9 cluster might actually pass 1.10 conformance. I would imagine that is true in these cases; I wouldn't expect that over five or six versions that would necessarily stay true. And I think a 1.9 cluster would pass, likely, because that was in the v1 API group, but a 1.8 cluster would not, because that API did not exist there; it was in extensions. So I think, as a prerequisite as we're starting to review these changes going in, when we add new conformance tests that are new as of a specific version: there already exists, inside the test framework, the ability to do version verification, so we only run a specific conformance test if the server version is greater than or equal to some version.
And I think we should definitely exercise that capability. That's kind of where I was going with that. Well, the e2e tests are part of the release branch that's cut, and so I would expect that, by definition, the tests being run are pulled from that repository in that release branch. We do have a Sonobuoy config for each version, but I've put a lot of effort in the last couple of releases into making sure that the latest version of the conformance test suite is backwards compatible. Adding the capability for new feature tests to disable themselves on older versions is a very minimal change, and it also allows the latest version of the test suite, which may have fixes and other modifications, to run on older Kubernetes versions without, you know, blowing up. So I think that's super useful and important for the sanity of the people who are maintaining these platforms and distributions. But for the purposes of conformance, what would be the outcome if you ran the 1.9 conformance tests against 1.7, and it just didn't run some of those tests because the version wasn't high enough? You would otherwise have a mishmash of tests: if you added new conformance tests, or maybe something got promoted to be one, you would fail those tests on a 1.7 cluster, even though you could just add the version gate check as part of the test and skip if it's not that version. So I think this could just be part of the code review process for any new conformance tests; I'm just trying to call it out here. Okay. The expectation would be that someone running a 1.9 cluster, even if that cluster would pass all of the 1.10 conformance tests, would not then be able to claim 1.10 conformance. That's great. Right. Right. Okay. I mean, we also have an additional check.
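The version gate being discussed can be sketched without the e2e framework. The upstream framework exposes a skip-unless-server-version helper for this; the standalone comparison below is illustrative, with `serverMeetsGate` as a hypothetical name.

```go
// Stdlib-only sketch of the version gate discussed above: run a
// conformance test only when the server version is at or above the
// release the test was added in. The real e2e framework has its own
// helper for this; this standalone version is illustrative.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor extracts major/minor from strings like "v1.9.3" or "1.10".
func parseMajorMinor(v string) (int, int) {
	v = strings.TrimPrefix(v, "v")
	parts := strings.Split(v, ".")
	major, _ := strconv.Atoi(parts[0])
	minor := 0
	if len(parts) > 1 {
		minor, _ = strconv.Atoi(parts[1])
	}
	return major, minor
}

// serverMeetsGate reports whether a server at version `server` should run
// a test gated at minimum version `gate`.
func serverMeetsGate(server, gate string) bool {
	sMaj, sMin := parseMajorMinor(server)
	gMaj, gMin := parseMajorMinor(gate)
	if sMaj != gMaj {
		return sMaj > gMaj
	}
	return sMin >= gMin
}

func main() {
	// A test gated at 1.9 runs on 1.9 and 1.10 servers and is skipped on 1.8,
	// which is the DaemonSet scenario from the discussion.
	for _, server := range []string{"v1.8.4", "v1.9.6", "v1.10.0"} {
		fmt.Printf("%s run=%v\n", server, serverMeetsGate(server, "1.9"))
	}
}
```

This is exactly the behavior the group wants from the suite: the newest conformance suite stays runnable against older clusters, skipping rather than failing tests the cluster's version predates.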
When you submit the conformance results, there's a check that you have to actually be running that version of Kubernetes, so it would fail that separate check as well. Anything else? Hey folks, Dan Kohn here, joined late; I apologize for that. Yeah. Unbelievable: one of the largest consulting or analyst firms just sent us a draft report saying, essentially, you can just become a leader in Kubernetes by buying a platinum board seat. That's the answer, right? That's how you can take over the project. And so I needed to do an urgent call with them. Yeah. Amazing. But even though it's out of order, I was hoping to give an update on the conformance test process. Yeah. Okay. So, I just want to remind everybody, for context, that we started talking about this right after we announced the program in September, and specifically it was pointed out that we're at about a 14% coverage rate on tests. So I proposed to the governing board in December that we set aside money in 2018 to pay an external development company to improve test coverage. And the pushback we got was: is the governing board signing up for unlimited liability here? What do we test? New APIs are going to mature, and then there are going to be more tests. So we then went to SIG Architecture in Austin and said, hey, couldn't we have a new rule that says that in order for APIs to move from beta to stable, they need conformance tests? There was discussion and back and forth on that, and SIG Architecture promulgated a new rule, which landed a couple of weeks ago. And then I was able to bring that to the governing board on Monday and get sign-off, approval for the funding. And so, in parallel, we've been engaging the firm that Google has worked with in the past on external test development.
I just learned that they are starting. And the ballpark number, to tell the truth, is about 25K per month for two developers, plus a little bit of project management services. So I'm really keen, now that they're starting, that we do whatever we can to support them in getting ramped up and beginning to submit pull requests into SIG Testing, because all the work that they do needs to come through the regular process. I'm hoping to, you know, help train them and shape that, and make sure the work is going well. And then hopefully after a couple of months, and I know there's a lot of background noise, we'll begin to have some insight into what speed they can progress at, and maybe begin to do some extrapolations as to how many months or years we're talking about here. But I do want to bring up the point that if this firm isn't the right one to do this, we could cancel the contract with a 30-day termination. So I don't want it to be seen as, oh, we're just locked into 300K for a year. The other side of it is, if it's going extremely well, we could potentially up the spend rate and add a third or fourth person, or anything else that is reasonable. So that's the overview; it's kind of starting now. I would really encourage folks who care about this to engage in making suggestions as these folks start producing some work, and to ensure that it's up to the quality standards. Are there any questions about it?
I think one of the agenda items from the last meeting, if I recall, or even before that, when we were in Austin, was to get a parent issue broken down in the main repo that identified the areas that needed coverage, so that it wasn't just a flat list: it called out the key areas of concern that require the most effort. I don't know if that's been done or not. I think Jagger mentioned having some type of spreadsheet, or another piece that he was going to write up in a new issue. I vaguely recall this. Yeah. So, Mitra on the Google side has volunteered to take that on, and part of that is to essentially shop it around to the related SIGs, as the domain experts on what should be in conformance for the components owned by those SIGs. She's in New York this week, so she's not on this call, I don't think, but she will be putting that together in the coming weeks. Okay. You might want to connect with Srinivas; I believe he's been putting together a spreadsheet as well, and I think they may be able to collaborate there, because I know he's already got something. I have. Sorry, I can talk about that. Yeah. Essentially I'm trying to build, based off of the APIs that we have, a list of all the APIs in spreadsheet form, and then in one column what is covered and what the existing e2e tests are, and then what is currently part of the conformance test suite, so that we can use it and say, this is the area. For example, RBAC is not covered today, and what are the APIs? There are four or five APIs there that should be part of the conformance suite. Then we can use the spreadsheet to say: this is the work that needs to be done. Right. So that's what I'm heading for. I do not have access to the other SIGs at this point.
And also I'm not really contacting the other SIGs yet, but it would be great if I could get information from them; then I would know exactly, probably much faster than reading through the code. So, for traceability with upstream execution, we might want to break this down. A logistical approach that seems tenable would be to break it down by API group and have a tracking issue for each API group. Then, as people do PRs, the PRs can reference the issue, and we can close out the issue once it's completed. I think the spreadsheet alone won't get you the logistical tracking you need for execution against upstream; you need to reference the spreadsheet from a main set of tracking issues upstream, so that the people who can do reviews can target the milestones we want to get stuff in by. Okay. Yeah, I agree with that. The other thing I will point out is that I think we may have focused on API surface area coverage because it is convenient and fairly easy to measure, but there are other angles we need to consider as well. There are certain APIs that have optional fields, where it may be more important to test various configurations of those optional fields than to test certain other APIs. Likewise, there are many different patch operations, and those combinations may be more important on certain endpoints than a GET on certain other endpoints; the PodTemplate API, for example, is one that is rarely used. So I think the set of endpoints that are hit during the conformance tests is a meaningful metric, but it's not the only meaningful metric, and I think the prioritization should include some of these other angles as well. Yeah, I'd agree. A question on that line, particularly for Jagger and Tim, which is: I believe in Austin, you guys, or Brian, mentioned
some sort of work that was going on, an aspiration, I think, to use some of the swagger definitions to be able to test a large swath of the APIs, at least at a surface level, relatively quickly. I'm probably not summarizing that well, but could you just remind us what that thought is, and whether there's an issue associated with it? And then, is it possible to get our test developers focused in an area that's far away from that, that won't be covered even assuming that project goes forward? Sure, I think that was me. There was some work; someone on my team was working on essentially a thought experiment about a data-driven test to explore the API: a simplistic, non-flaky approach, not testing the behavior, but rather simply testing CRUD operations on specific resources. It's based on some work that Eric Tune did previously for auth, which seemed valuable. I think that is still in the thought-experiment phase and not in the plan-of-record phase. One comment I saw recently from David Eads was: wow, this looks like a nightmare to maintain. Anything that gets a comment like that from someone thoughtful is a concern. But the intention is just: is there a non-flaky, repeatable way to verify that endpoints are exposed? That may be necessary, but not sufficient, to demonstrate that a cluster is implementing the desired behavior for the end user. And I will drop the link to that work in progress in the notes from today as well. I'd appreciate that. And if we could, in particular, just think about areas that we could start on which, even if that project is totally successful, won't be covered. I think there's still a ton of other tests we want to do. Yeah, absolutely. For sure. Completely agree with that.
This would be, maybe, just a first test to run, and if it failed, there would be no reason to run the rest, for example. Okay. So, just going back: Jagger, you had a proposal about not adding conformance tests during code freeze. Is that something that people agree with, or do we need to discuss it further? I totally agree with that, because it's too hard; there are too many other things happening during code freeze to try to triage that in a meaningful way. I think it's almost like feature development and should be considered as such. Right. Yeah. And I think in five years we'll have no idea whether some conformance test went in in 1.10 or 1.11, but it will be very costly if it starts breaking just a couple of releases in. Yeah. So basically the set of conformance tests is kind of an artifact of the release, really. Yeah, that's right, and that is intentional. Actually, hold on a sec: we actually do have a version string. It's part of the conformance metadata, so we will know when a test got added. That's right, actually. In the documentation update I did for all the conformance tests from 1.9, there is release tag information in there, in string form. So we should be adding information about when a test is added or modified, and that kind of helps us track it. All right. Are there any other agenda topics for today? I added one more. In fact, Ben Elder is here and can give this update. I just wanted to share that there is a documented process published on contributing conformance test results back to Testgrid, which may be useful for folks who are running their own CI: running the conformance tests and posting those results back to Testgrid, to raise visibility when there is a breaking change earlier in the process. Ben, do you want to add to that? Well, I expect to have a PR out soon. It pretty much follows that doc; it's a small tool right now.
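To illustrate the version-string metadata just mentioned: conformance tests carry a structured doc comment including a release tag, so tooling can tell when a test was added. The field names and the parser below are illustrative assumptions, not the exact upstream format.

```go
// Sketch of the release metadata discussed above: each conformance test's
// doc comment carries "Key: value" fields, including a release tag noting
// when the test entered the suite. The sample block and field names here
// are illustrative, not the exact upstream convention.
package main

import (
	"fmt"
	"strings"
)

// parseConformanceDoc extracts "Key: value" pairs from a conformance
// test's doc comment. Continuation lines without a colon are skipped,
// which is fine for pulling out single-line tags like the release.
func parseConformanceDoc(comment string) map[string]string {
	meta := map[string]string{}
	for _, line := range strings.Split(comment, "\n") {
		k, v, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		meta[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return meta
}

func main() {
	doc := `Release: v1.9
Testname: DaemonSet, simple run and stop
Description: A conforming cluster MUST run a DaemonSet pod on every
    schedulable node and MUST remove those pods when the DaemonSet is
    deleted.`
	meta := parseConformanceDoc(doc)
	fmt.Println("added in release:", meta["Release"])
}
```

With a tag like this in every test, the question raised in the meeting, "did this test come in at 1.10 or 1.11?", stays answerable years later.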
It's a Python script, but I'm open to porting it to whatever works best for everyone. It pretty much just takes the standard e2e log and JUnit output and prepares it for Testgrid; Testgrid needs a little bit more metadata, like knowing which job is running, things like that. This specifically overlaps with the effort to extract the in-tree cloud providers and support new cloud providers in Kubernetes as well. One of the challenges in having more than just one set of binaries required to run Kubernetes in a meaningful way is that we need to understand, visualize, and raise awareness when a contributor from one cloud provider inadvertently breaks another, or all of the others. So this was an overlapping concern there, but the conformance tests are trending in the direction of being a really useful smoke test to run. That's where this effort originated, but I think it may be useful for this group also. So basically, if people post their test results to Testgrid, they'll get an early warning. That's the main point, and it will give us the ability to have a single dashboard with multiple cloud providers. If we, for example, add a new conformance test or add a new feature and it breaks other cloud providers: an engineer working in one group is not likely to be testing across every single cloud provider or distribution. So that's the intention. So, Srini, I'm not sure of the status of your PR, whether it's merged yet, but Srini has a PR out there to add some documentation to Kubernetes about how to add conformance tests, and what the expectations are in terms of the documentation and so on. I think it would be useful, then, to point to this document from that document, right? Yeah, that's true. Basically, there is a location where we talk about the conformance guidelines: how to write the tests, and about the metadata. Would it make sense to pull this document into that other document, or keep them separate?
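On the extra metadata Testgrid needs around a raw JUnit result: jobs that feed Testgrid conventionally upload a `started.json` and a `finished.json` alongside the `junit_*.xml` artifacts. The Go sketch below renders those payloads; the exact fields are assumptions based on that convention and should be checked against the Testgrid documentation before relying on them.

```go
// Sketch of the per-run metadata Testgrid conventionally expects next to
// the JUnit artifacts: a started.json marking the run's start and a
// finished.json with the overall result and free-form metadata (e.g.
// which provider ran the suite). Field names follow the convention
// loosely and are assumptions, not a verified schema.
package main

import (
	"encoding/json"
	"fmt"
)

type started struct {
	Timestamp int64 `json:"timestamp"`
}

type finished struct {
	Timestamp int64             `json:"timestamp"`
	Result    string            `json:"result"` // e.g. "SUCCESS" or "FAILURE"
	Metadata  map[string]string `json:"metadata,omitempty"`
}

// finishedJSON renders the finished.json payload for one conformance run.
func finishedJSON(end int64, passed bool, provider string) ([]byte, error) {
	result := "FAILURE"
	if passed {
		result = "SUCCESS"
	}
	return json.Marshal(finished{
		Timestamp: end,
		Result:    result,
		Metadata:  map[string]string{"provider": provider},
	})
}

func main() {
	s, _ := json.Marshal(started{Timestamp: 1519776000})
	f, _ := finishedJSON(1519779600, true, "example-cloud")
	fmt.Println(string(s))
	fmt.Println(string(f))
}
```

Tagging each run with its provider is what makes the single multi-provider dashboard described above possible: one breakage shows up as a red column for every provider it affects.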
I would keep them separate: one is the mechanics, the other is the process. Okay, that's fine. But I do think having a link between them would be good, or at least a link from the Kubernetes documentation on how to do conformance testing to this document. Agreed. So, Srini, can you take care of that? Either add it to your existing PR, or open another PR if your PR has already been merged. Sure. Thanks. All right. I apologize, but I have to drop off in two minutes; I'm at the Open Source Leadership Summit. But is there anything else that I could do for you, or answer? I'm very excited to start this work, and I'm really looking to this group for guidance and suggestions, to make sure it's on the right track. I had a quick question while you're here, Dan: to ask whether or not we have meeting space or anything else set up for KubeCon EU. That's an excellent question. I don't know that we do, but I think it may not be too late to get it. William, you didn't reserve it, did you? We have a Kubernetes conformance intro and a deep dive, so we have two different ones, sort of. Yes. I think that for the deep dive, we can just do a regular meeting, like one of these. And the intro is more aimed at people who have never heard of Kubernetes conformance. So actually, it's good that you raised this; I'd love to develop that program together. If we want to share the talk, we can have a bunch of different people maybe getting up and sharing their experiences. It's not a particularly long slot. But yeah, the intro is kind of high-level topics, just to give people a rough introduction, and then the deep dive is more or less just a working session, and people can bring whatever questions they have. William, could you take the lead on directing that? We can try to get it done in the next week or two, so that people are thinking about it as they plan their schedules. Ashley? What about the unconference?
I think, Brad, you were going to head that up, right? What's the relationship between that and these other two things? Or are they all separate? I think that on the day before the conference, we could run a conformance sprint that allows people to write conformance tests. I'm not sure about that. Yeah, we could do that, if people are available. Yeah, or at the very least give an update of what the state is and where people think we ought to have more coverage, or combine it with other topics that folks have on this call. I think we've got a lot of flexibility there. Do you think that could go in the deep dive that we have scheduled, or should it be separate? My only concern is that people tend to be pretty busy that week. Yeah, that's what I was thinking. We might want to try to combine these events, because I think I've heard we have three different events, and I want to make sure we understand what the three are; maybe we should start combining them, because people aren't going to have time to do all three. Yeah. So, I like in principle the idea of a sprint or a hackathon on conformance. It would be a good forcing function; life seems to go on as soon as we break up from this meeting. So, I am in favor. Yeah, it sort of has to depend on how much availability people have that week. Why don't we take that one offline? It sounds like there's an awkward silence, which means no one is eager to commit completely. So, I'll take that offline and communicate it. It was going to be on day zero, right on the unconference day. I mean, if you try to do it on the other days, I think you're going to have trouble. I'm happy to take a sprint on day zero. Okay. If there's already time for it, that sounds good, I guess. So, is that Tuesday or Wednesday? May 1st, right? May 1st. Yeah. So, that's Tuesday. That's Tuesday. So, on a quick related topic, Srini, didn't you have a list of PRs that you needed folks to review? Right. That is true.
Basically, for all the conformance tests, I generated about seven PRs; they are all consecutive numbers. I can share one of the PRs. Thanks to Tim, he reviewed one of them. All these PRs are basically strengthening the documentation for the tests, using the RFC 2119 format: basically, what should happen in Kubernetes when we run this test, with a description of what the test does. We also added release information into the test metadata. All these PRs are documentation and cleanup sorts of PRs. It's in my backlog to go through them. The problem I've been having right now is that we are in lockdown for the end of the release, and the end of every release cycle is an exercise in firefighting. I've been doing firefighting duty for the last couple of days, trying to track down a couple of test blockers that have occurred in different areas. So it's on the backlog, but it's just not the highest priority right now. Absolutely, I agree. The reason I'm trying to emphasize this is that when we generate the 1.10 conformance document that I am putting out on CNCF, the documentation should use the new format for the tests. That's pretty much it for me. Other than that, I will probably go through the spreadsheet in the next meeting, so I'll have some content in there. Okay. Great. And so, is the goal to publish that doc with the 1.10 release? For 1.10, yeah. Okay. I think that's every item I've seen so far on the agenda. Do people have any other topics they want to discuss? Okay. Well, we'll send out a list of action items after this meeting. In particular, check out the doc from Ben about contributing results to Testgrid; it looks like that's pretty valuable. Maybe we can send that out as an email, so you don't miss it. And we'll start on the two sessions. For the unconference, do we need to actually do anything for that, or can we just propose it on the day? No, no, I've submitted something.
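To make the RFC 2119 documentation convention concrete: each conformance test's description is expected to use normative keywords (MUST, SHOULD, and so on) and carry release metadata. The lint check below is a hypothetical tool, not anything upstream, sketching what enforcing that shape could look like.

```go
// Hypothetical lint check for the conformance documentation convention
// described above: a test's description should carry a release tag and
// use RFC 2119 normative keywords. This is an illustrative sketch, not
// a real upstream verifier.
package main

import (
	"fmt"
	"regexp"
)

// rfc2119 matches the normative keywords defined by RFC 2119.
var rfc2119 = regexp.MustCompile(
	`\b(MUST(?: NOT)?|SHALL(?: NOT)?|SHOULD(?: NOT)?|REQUIRED|RECOMMENDED|MAY|OPTIONAL)\b`)

// lintDescription reports problems with a conformance test description.
func lintDescription(release, description string) []string {
	var problems []string
	if release == "" {
		problems = append(problems, "missing release tag")
	}
	if !rfc2119.MatchString(description) {
		problems = append(problems, "description has no RFC 2119 keyword")
	}
	return problems
}

func main() {
	desc := "The garbage collector MUST delete dependents once the owner is deleted."
	fmt.Println(lintDescription("v1.10", desc))
	fmt.Println(lintDescription("", "deletes dependents eventually"))
}
```

A check like this could run in the code review process mentioned earlier, so new conformance tests arrive with their documentation already in the agreed format.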
So, we'll see if it gets accepted. But what would help me is knowing the list of folks who wanted to be involved and participate in that; we can take that subgroup and work together to put on something that will really be productive. I heard some names that I would follow up with besides myself. Who else wanted to participate there? If we can get those names, William, then we can have, you know, sort of a sub-meeting. And Dan may have some influence in helping get that accepted also. Excellent. Dan, are you still there? Did you drop off? I think Dan might have left. All right, fighting fires. All right, well, thanks everyone. Cool. And Srini, I added you as one of the point people, with Mitra, to help come up with the prioritized list of e2e tests that should be in the conformance suite, so you two can connect. That would be great. Awesome. And yeah, definitely. Thanks. Super. Great. Thank you. Thank you. Thanks everyone. Bye, gang.