So weird. All right. So on that note, Dan asked me to run the meeting today; I think you had an unavoidable conflict. So this is the September 26th meeting, on a Wednesday, and this is the Kubernetes Conformance Working Group. Thanks to everybody for attending. I put a link to the Google Doc with the agenda in the chat; if you don't have it, ask me and I'll post it again. I see people are adding their names to the attendees. Let's look at the agenda. The first person on the agenda is Doug, with a question about the frequency of the calls. So Doug, why don't you go ahead? Yeah. So this topic actually ties into the next one, so don't be surprised if they get merged together a little bit. I was thinking that with the amount of activity going on recently relative to conformance, some of the PRs that have been submitted and so on, I feel like we need more than monthly calls just to get everybody in sync, on the same page, and make sure we're all headed in the same direction. So I'm wondering what people think about changing the frequency of the calls to at least weekly for the time being, until we run out of things to talk about, and then we could stagger them out more. Was that one of the robots? It's trying to take over. Tim, your hand is raised. So I don't know if that makes a lot of sense, as someone who has way too many meetings already. Maybe that's slightly selfish, but I also think it's in earnest, because I think this group is responsible for the higher-level picture and objective, but not necessarily the low-level execution, other than to verify it periodically. And that's part of what those other meetings are for: in SIG Architecture, as well as when we talk during SIG Testing, we go back and forth on these topics. That's also part of the reason why I set up the separate GitHub team for notification on PRs and issues.
You could just /cc the team, so that way we can all stay informed. Now, do I think monthly is enough? I struggle to find a balance there. I just think weekly might be too much. Yeah, so this is the part where it leads into the next topic, and this could just be me; if it is just me, I'm okay with that. I'm a little confused about the relationship between us and SIG Architecture, because I had been assuming that we were responsible for producing the conformance docs and setting up the testing and automation and all this stuff, everything that we're doing right now, and that SIG Architecture would be like it is with other groups within Kubernetes: more of an oversight committee, right? Making sure people go in the right direction; if people have questions, they go to SIG Architecture for answers. Now, we may be slightly different in the sense that SIG Architecture has a more formalized approval process; in particular, we know that they approve promoting tests into the conformance suite and things like that. But if it's as you described it, where SIG Architecture is where the work gets done, then I'm confused as to what we do, because I thought it was almost the exact opposite: we do the work, and if some of the folks from SIG Architecture are interested in our stuff, they'll join our calls, but we only talk to SIG Architecture when we need their input or review on something. So I'm confused about the relationship here. So in our little README for the working group, it says that the charter of this working group is to define the process around certifying Kubernetes conformance, but it's SIG Architecture that owns the definition of conformance. So a lot of the process discussion and Google Doc stuff that you've seen flying around has been us trying to refine that definition and describe how to refine that definition.
And then finally, SIG Testing is the place where we work on the mechanics of how the conformance tests work. So I share your question, which to me goes back to: I'm not sure a weekly cadence makes a lot of sense for this group. I have historically viewed this meeting as a useful checkpoint to report progress from the higher-level, bigger-picture perspective. I have a meta point: William is not here, at least I don't see him, but he had originally defined the idea of, over time, starting to approach different aspects of conformance through profiles, right? But we haven't even really gotten there. We need to deal with just the base, get that done, and get it cleaner and more hardened. We've circled the drain several times on the details around it. I think we just need to execute on that piece first and then revisit the process for profiles once we actually have the first step done. So, so I agree with you; I definitely don't want more calls. I guess part of it is, Aaron, you mentioned we now have three different groups, if I got that right: this group, SIG Architecture, and SIG Testing, and each of them has a different set of responsibilities for the complete picture. And it's not clear to me, as I'm looking to schedule my week and my wonderful list of calls that I have to join, which one I need to join when, for what particular topic. And I guess part of my push for having a weekly call here was to try to get those conformance discussions into just one meeting, as opposed to spread across potentially three meetings. And maybe that's the wrong approach, but that's kind of where my head was at. So I'm open to other ideas to resolve it, but I'm just looking for something that means I don't need to track three different, forgive me for using the wrong term, working groups just to talk about conformance.
Yeah, this is Deepak. Actually, I have a similar concern as well. You know, Doug, I think you brought it up, and it's a very good point, because there's a lot of activity going on. So at least in this working group we need to summarize what's going on, because there might be an impact on the way we're doing conformance now, and it may impact our current certification status as well. So yeah, go ahead. What is the sheer volume of activity that is going on that is overwhelming you guys? Well, I look at the PRs and all the documents going around, like the document which you published, Aaron. So there are a lot of things. I know they're very relevant; I'm not questioning that. But maybe, I don't know, not a weekly meeting, maybe a biweekly meeting, but at least kind of summarize: is there a major impact on the end user, like company-wise, you know? So I've definitely seen a lot of engagement around that one document, which has been discussed at SIG Architecture on a weekly basis as we look to refine the definition of what conformance is. I'm curious what other PRs have been crossing your radar, or would have needed to cross your radar, that you're finding you're not able to keep up with? Yeah, go ahead. Well, I was going to say, for me it's less about the specific work items, because you're right, right now there may not be a long list of things to track. For me, it's more the "what don't I know is going on," right? So for example, if something comes up in SIG Testing related to conformance, how do I know that? Do I now need to monitor their agenda doc and join their meetings just in case something comes up? Same with SIG Architecture: do I need to check their agenda doc before every single meeting to see if something might come up there?
That's the part that worries me: the unknown. That's why I created the teams, so that when we need global notification for something that cuts across horizontally, you can add the team and the team will be globally notified. You know, obviously, at this stage of the game, having been involved in Kubernetes for, I don't know, quite a few years now, filtering and separation of concerns is something that I often invoke, just because I can't be involved in everything; there's just too much noise. So using judgment about when to add the team and when not to is usually the way I operate. Yeah, so I guess I have tried to operate as a liaison between those sundry groups, because I have to attend SIG Testing on a weekly basis, and I show up here as sort of the touch point to make sure that concerns are being raised and addressed appropriately. I think this group's concerns probably overlap more with SIG Architecture. Brad went on an epic, I don't even know how many minute, rant about profiles and fragmentation in SIG Architecture, where we were all in violent agreement, and it took us about 20 minutes to realize that fact. And that was more about what conformance is from a definition perspective. And I agree, this group has a lot of opinions there, but that's the forum to discuss it, right? This group is more: when it comes time to do profiles, how are we going to run through the process of certifying profiles? There was a lot of time and energy spent here around the legal framework and the language of certification, right? Yeah, I think I mentioned somewhere also that SIG Architecture is coming up with the documentation saying "go do this," and we will all endeavor to do the things in the document in some form or fashion. One way or the other, it's definitely not going to be in one spot.
It's going to be spread over different areas. And as long as the people who are actually doing the work use the correct labels, notify the correct mailing lists, and attend the meetings, I think we should be fine. That's the way I see it. And we're not there yet; there are some thoughts in there, but it's a living document. It's a distillation of the things that happened over the last couple of months, if not more. So it contains the learnings from running the conformance tests across providers, including the OpenStack provider, the AWS ones, and all the new people who are reporting stuff to TestGrid. So please treat it that way, and not as an "oh my God, what is happening" kind of thing, please. Well, I think, yeah, I think what we're saying is just to kind of summarize. Okay, I just had two questions. One is more of a technical, process kind of question for Tim. You had mentioned that when there is a topic, like a PR or something, that comes up that concerns this working group, it should be tagged with the conformance WG team. Just for my understanding, how do you actually manage that? Because I suspect I'm on the team; actually, maybe I'm not, I don't know. But if I am, I'll obviously get a GitHub email about that. But how do I know that it's related to conformance, as opposed to all the other thousands of emails I get a day from GitHub? How do you manage that? So I have filters for filters that have filters. It keys on the team mention and then what the team is. It's not pretty. What I've created over the past four years is a Rube Goldberg machine for GitHub notification. I would love for you to share that with us at some point, Tim. It's not a pretty thing; it's not even consistent. Sounds like the rest of open source. I don't want to share it because I have to clean up all these hacks first. It's the firehose, guys.
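The "filters for filters" approach can key on a detail of GitHub's notification mail: GitHub adds a reason address such as team_mention@noreply.github.com to the Cc header of notifications triggered by a team @-mention. A minimal sketch of that idea, with a fabricated sample message:

```python
# Hedged sketch: route GitHub team-mention notifications by Cc header.
# The reason address is GitHub's documented behavior; the message is made up.
import email

def is_team_mention(raw_message: str) -> bool:
    """True if this GitHub notification mail was triggered by a team @-mention."""
    msg = email.message_from_string(raw_message)
    return "team_mention@noreply.github.com" in (msg.get("Cc") or "")

sample = (
    "From: notifications@github.com\n"
    "Cc: team_mention@noreply.github.com\n"
    "Subject: [kubernetes/kubernetes] conformance PR\n\n"
    "body\n"
)
print(is_team_mention(sample))  # → True
```

A real mail client's filter rule can match on the same address, without any parsing of the message body.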
Yeah, we all have the same problem; it's not unique here. The other thing I was going to say is that I have tried to use a label called area/conformance, which is available in the test-infra repo, where the testing stuff happens; the community repo, where the doc stuff happens; and the Kubernetes repo, where the conformance tests are actually written. And then what we could do is have pull requests that touch certain directories get automatically labeled with the area/conformance label. We have the mechanisms in place to do that today; for example, docs that touch different SIG directories automatically get labeled with that SIG. So it's not necessarily the same thing as a push notification, but it does give you a set of queries that you can run on a daily or weekly basis, whatever cadence makes sense for you, to keep up with that work. It's not quite as Rube Goldberg-ish as Tim's method. And that's why he created the team: to be more of a push notification, especially to raise the bat signal if it seems like we really need folks' attention. But that's not as easy to apply automatically as labels are. Okay, that helps me a little. Thank you. Let me get to my second question. I like concrete examples to wrap my head around things. So, Aaron, since you, or maybe it was Tim, somebody brought up the notion of profiles: when we actually sit down to have a discussion about whether we need profiles at all, where does that discussion happen? Is that here? Okay, that is here. So we would define what it means for a profile to be conformant and what the process is, and then we would have to go back and get that okayed by SIG Architecture, right? It'd be like: here is the process we want to follow for defining X, and they'd say, okay, we agree that X makes sense, right?
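The label-plus-queries workflow Aaron describes could look roughly like this; a hedged sketch in which the repo list and query strings are assumptions based on the three repos named above:

```python
# Hypothetical helper: emit one GitHub search query per repo so that
# area/conformance-labeled work can be reviewed on whatever cadence suits you.
REPOS = [
    "kubernetes/kubernetes",   # where the conformance tests are written
    "kubernetes/community",    # where the doc work happens
    "kubernetes/test-infra",   # where the testing mechanics live
]

def conformance_queries(label: str = "area/conformance") -> list[str]:
    """Build one GitHub issue-search query string per repo of interest."""
    return [f'repo:{repo} is:open is:pr label:"{label}"' for repo in REPOS]

for q in conformance_queries():
    print(q)
```

Each printed string can be pasted into GitHub's search box (or saved as a bookmark) and re-run daily or weekly, which is the pull-based counterpart to the team's push notifications.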
So then how you certify it, how you do all the legal wrangling around it, what it means to be in and out of certification, and all the other jurisprudence associated with it falls on our shoulders here in this group. It's definitely under Dan's purview what it means to boot somebody out of certification, what it means to certify, etc. Okay, cool. Thank you. That's the answer I was hoping to hear. Personally, I'd love to live in a world where we don't talk about profiles for a while, because there's so much other functionality we need to cover first and foremost. I agree; I just used that as an example of a topic where I want clarity on where it gets discussed. So thank you. So I cut off Deepak too, I apologize, because you had your hand raised. No, I think that's fine. I think Doug pretty much covered everything. Yeah. Thanks. Okay, back to you, Brad. So, oh yeah, I was just going to say, would it make sense twice a month? Is the problem with once a month that you wait a month and feel like everything blindsides you because we only met once, whereas twice a month would give a little more frequency for discussion? I think, from my point of view, it seems like there's a lot of interesting stuff going on in SIG Architecture. And, you know, for my SIG Docs responsibilities, they wanted somebody to cover SIG Architecture anyway, so I said, well, I might as well double dip, right? Cover it for SIG Docs, and also because there's so much conformance discussion going on. That's how I resolved to do it: cover two of the three, and just assume Aaron's got the whole testing thing covered.
You know, I think it's hard for somebody to show up and get a clear view of what's done in one group versus another, maybe because it's not well documented, maybe because it's just tribal knowledge. And I don't know how to solve that beyond myself just going to both meetings. Yeah, to answer your question, Brad, I personally don't think once a month is very useful, to be honest. If you're only meeting once a month, it's almost not worth meeting at all; that's been my experience. That's why I would like at least an every-other-week kind of thing, if nothing else just to sync up verbally with everybody about what's going on, what people think they should be working on, what things are going well, and what things need attention, because otherwise I'm not sure how that happens. I'm not sure what other mechanisms would make it as productive, put it that way. Let me ask a different question. How are the people on this call actually helping with defining new test suites, or new features to be added to the conformance test suite? Is that work happening here? Are the people on this call discussing, okay, we need to add these kinds of tests, those kinds of tests are missing, or this feature needs to be tested? Is that happening here or elsewhere? I would hope it would happen here. I don't think it's happened yet, but I'd like for it to happen here. I think that's where the whole clarity around what is process comes in, just as Dims mentioned. So I'm a bit confused as well. I think Tim mentioned that what we discuss in this working group is the process side; profiles are obviously one good example of that. But other than that, it's just as Dims mentioned.
The reason I'm asking is that it seems so far this group has been doing the after-the-fact work, in the sense that everything is done in the main k/k repository, and after that, people take that output and try to do something with it here. That is what I have seen, or at least heard, that this group is doing. And if that needs to change, to say we are going to be fully invested in actually driving changes to the test suite in the main k/k repository, then we should still, you know, respect the separation of concerns that Tim was describing: have an actual SIG or working group inside Kubernetes for that, and use this group the way it currently is. So, a little bit of history. One thing that we did do and take ownership of for the first release was documenting the tests. Once we did an analysis and said, okay, these are the ones labeled conformance, and made sure they were labeled and that they were the right tests, somebody had to go in and take all the descriptions that were in the spreadsheets and put them into the code, right? So we did take some ownership there. I personally did a large number of those, trying to get some notion of reference documentation for the tests. And that's where the split was on the first release, Dims. Did we write end-to-end tests? No. Were we adding the conformance piece, whether it was the window dressing of creating the references or other things? Yes. And Sonobuoy came up through this group, right? Those kinds of topics, like "oh, we should use this for the documentation," that's what at least I remember, Tim, historically, that we did.
Yeah, there were a lot of individual things to get us ramped up and moving, to get us to the point where we could have a means by which to have a process, right? Because before, there wasn't anything there. And some of us, like Matt Liggett and myself, actually did go and fix a bunch of tests; Matt Liggett did a lot of the original conformance stuff that exists today. Tim and I worked on a bunch of the other details around getting containers auto-spun-up and getting the information out so we could actually do certification, because before it was all a bit hodgepodge. And you actually assisted on some of the introspective work to run the containers inside of the cluster versus externally, right? So there's been a lot of detail work, and I think we can probably coordinate execution on some of these things if there's common awareness that these things exist. So as a working group, we don't, because we're not really part of the Kubernetes project, we're kind of external to it. Right, that's why I was saying that, Tim. I want us to be like, okay, we want to drive the change that we would like to see happen, and not be the people who say "don't do this, don't do that," right? Yeah, I want us to be proactive in discussing the things that we need to drive and actually help make things happen. A concrete example of one of those things I want to do, and Matt Liggett and I, and Aaron brought it up in the last conversation too, is that I don't want Heptio, I don't want us, to own the kube-conformance container. I want that to be published as a separate thing in the k/k repo. So Sonobuoy uses it, sure, it's great, it's a way to run it, but we don't own it, and it's part of every release, so anybody can work on any version of it at any point in time. And the canonical location is in k/k.
That's one work item for sure that I want to accomplish and could use help on. Now, there are other changes to the upstream test suite that need to get made in order for us to be good stewards of other people's clusters; those exist in different portions of the Sonobuoy repo, but they shouldn't. I have a tag that says "fix upstream" in the Sonobuoy repo, and I've just left it as a backlog, because if I file it in k/k it gets lost in the noise and no one pays attention to it. One of the things I wanted to do, just to give concrete examples, is auto-labeling of the entire dependency chain of all the containers and everything else that gets spun up by the test suite, so that cleaning up is as simple as deleting by a label selector. Currently it spins up a bunch of namespaces, and the cleanup and termination can take a long time in foreign environments. For auto-runs in test-infra it's part of the end-to-end test suite and they just nuke the cluster at the end, so you don't care; but if you're running in somebody's environment, how long it takes to clean up, and being good stewards in that environment, matters a lot. So we need to make sure we do those types of things. So I can come up with a list of action items that I know I could use help with, that I think would be good for the CNCF conformance effort. But I don't exactly know; I could make the issues in k/k and tag this team, right? That's one process we could follow for how to get this stuff done. But we didn't really have that team before, so before, things just lived in their own separate locations.
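The auto-labeling idea can be illustrated with a small sketch: if every namespace the suite creates carries a shared label, teardown reduces to one selector match instead of tracking names individually. The e2e-run label key/value and the namespace names below are made up for illustration, not the suite's actual labels:

```python
# Hedged sketch of selector-based cleanup. In practice this selection would be
# done by the API server (e.g. a delete with a label selector); here we just
# model which namespaces a given selector would match.
def namespaces_to_delete(namespaces, selector):
    """Return names of namespaces whose labels contain every key=value in selector."""
    return [
        ns["name"]
        for ns in namespaces
        if all(ns.get("labels", {}).get(k) == v for k, v in selector.items())
    ]

cluster = [
    {"name": "e2e-tests-pods-abc12", "labels": {"e2e-run": "run-42"}},
    {"name": "kube-system", "labels": {}},
    {"name": "e2e-tests-svc-def34", "labels": {"e2e-run": "run-42"}},
]
print(namespaces_to_delete(cluster, {"e2e-run": "run-42"}))
# → ['e2e-tests-pods-abc12', 'e2e-tests-svc-def34']
```

The point of labeling the whole dependency chain is exactly this: one selector identifies everything a run created, and nothing else, so a single delete cleans up without touching the rest of someone's cluster.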
But to Doug's specific question, I don't know if we want to do a thumbs up or thumbs down like we do in SIG Docs, but do the majority of people think that meeting only monthly is a problem? What's your thinking on that, Dims? I don't want to talk about the frequency of meetings unless we actually put our foot down and accomplish some stuff. If we say we want to create and curate a backlog that we can execute on, then I'm happy to help facilitate that, because I have a backlog; I just think no one really cared. So exactly, Tim. We need to care about this; if we need to have a meeting, we can schedule meetings. I don't want it to just take up meeting time for its own sake. Yeah, for me, I'm trying to think of what it is that you are trying to achieve, Doug, that the reduced meeting cadence is impeding. For me, like I said, a lot of it is that I don't know where things are being discussed, and I would like more discussions to happen within this working group. For example, in my opinion, almost everything that was discussed in SIG Architecture should have first been discussed within this group, and then we take our preferred answer to SIG Architecture for approval. But again, that's with me not completely understanding the various roles. I would like to have these discussions here and talk about the things we want to do going forward. Tim mentioned he has a backlog of things. It would have been nice if we'd had a regular meeting so that could have come up earlier, so Tim could ask: do people want to see my backlog? Where should I put these backlog work items so that people know where to pick them up to work on them? We discussed this last month, where Tim really wished super hard for a backlog, and I thought we were in agreement that Tim could file issues here.
I also would love to put together a backlog from the perspective of fleshing out the definition of Kubernetes conformance, but first I need to refine the definition so that I can then put down our priorities for filling things out. We did, loosely speaking, have a discussion last month and got agreement from this group that we wanted to head in the direction of pod coverage. So I feel like we have been covering and addressing the things you're talking about, and it could be that the reason you're not hearing so many discussions this month is that we've been busy with other things. From my perspective, I'm not really drowning in notifications related to conformance. Wrangling the conformance definition doc has been a lot of fun, and I hope to not have to go through that iteration again; I don't know how long it took Serenibus to put together the first iteration, but this feels like at least as long. I suspect you'll probably hear more conversations as we ramp up to KubeCon Shanghai; you've been involved in ongoing discussions about how we're going to present intros and deep dives and things of that nature. So to me, I just don't want more meetings for meetings' sake, especially when I hear phrases like "syncing up"; that sounds like airtime, and Tim and I already regularly attend a bunch of meetings throughout the week. I view this as the right cadence for syncing up about topics at the right level.
So how does this sound as a proposal? I can file issues in the CNCF repo, the ones we mentioned last time that I could use help with, that are generic enough that they apply to everyone who's working on conformance-y things in different environments, right? I'll try to make sure that Sonobuoy-specific stuff lives over there, and anything that concerns the e2e test suite and features around it would be filed here. It'd be kind of weird, because they really are e2e test suite stuff, but I don't really want to file them in the k/k repo because it's just a noise factory; so I'll file them here, where they pertain to this area of conformance. That'd be good. And like I said, I'm not interested in calls for the sake of calls; I was just a little confused as to what's going on where, and maybe because there had been so little traffic or activity, I got the sense that there were things going on that I was unaware of, and maybe I was just wrong, maybe everything was focused on that one doc. Yeah, we were all drowning in release stuff, Doug, come on. Other than that, it was just that, Doug. Okay, no, that's fine, that's fine. Yeah, again, I want to do things like, for example, Sonobuoy supporting ARM. We did multi-arch in 1.12, where the whole conformance test suite now works against ARM and other architectures; now I want to do Sonobuoy plus ARM so people who are testing can do that too, and I want that to be part of this group, if there are people here willing to do the work. Okay, so it sounds like we'll stick with what we've got and keep it at once a month. I think if I can get more clarity on when this group is going to go get approval from SIG Architecture, or when SIG Architecture is going to say, oh, this is the time for the conformance folks to get involved and give their approval, I think that will help me in the future to get a good feel. I think it was more of just a, oh, they're discussing that over
there, did not know. I mean, there were examples of things where, right or wrong, it felt like, oh hey, that's where the discussion is going to be, well, that's the meeting I'll go attend. It doesn't really matter, but I think that was part of Doug's point: wow, I kind of thought that would have been over here, and okay, it's not, so let's go figure it out. Going forward, I think trying to understand the cadence of when conformance needs to go to SIG Architecture, or when SIG Architecture thinks they want the conformance folks' input on something as we decide to do another release, I think that's just the confusion people have. Does that seem like a fair way to wrap that topic up? So I think we can, and I'll work on the thing, we'll stick with that, and I'll mention, hey, let's see if it's clear when one group is going to the other, and hopefully that helps with the interlock. The next topic I think we were covering was the status of a git repo and issues to track outstanding work items. Tim, you had alluded to that just a few minutes ago, correct? I started writing an issue inside of the CNCF conformance repo, and the problem there is that I can't reference and label all the things I want to, so I'm also writing an issue inside of the k/k repo, adding the team and adding the conformance label, and I'm going to see which of A or B is cleaner. Okay, yeah, I like using the main k/k repository; it's not like it's not already drowning, so a little bit more is not going to hurt, and as long as you have the label and the team, that's all that matters. Yeah, also, I was interested in trying to figure out where we are with the things that, for example, Hippie Hacker is leading here, like increasing the coverage and things of that sort; that's why I showed up here. Yes, I personally don't have much that I can share there, but I think Hippie, because I
think Hippie sort of subsumed some of the work that I've been doing in the background for the past month, like making sure we get audit logs automatically generated on the conformance TestGrid and things like that. I forget if I mentioned it last meeting, but I proved it with a PR I submitted to Hippie: now that we're not counting deprecated fields as part of API coverage, our API coverage is looking a little bit better, and I have numbers showing that we did more in Q3 to raise coverage than over the past three quarters. And Hippie, you had some PRs for sending things in the HTTP headers; has all that gotten in, is there anything left to be merged? I'm still experimenting, because getting the data to where we can all run different analysis processes really required a change in where that data goes, and once we started having that, I could transition into the updates, I guess, which kind of answers your question. Let's go ahead. So, we'll back up just slightly; I didn't get to present last time. For the user agent, instead of doing the really in-depth stack trace and embedding that inside the user agent, we decided to simplify it to be the string of the test name. So now, it's not in the UI yet but it will be soon, you can drop down and see, for an individual e2e test, what endpoints it hit. And a little bit later, when we start combining in the work that Katherine is doing, you'll be able to see what lines of code on the client side and, hopefully, on the server side a particular e2e test is hitting. That's some meaningful insight, I believe. And then past that, I dropped a link, but I haven't had any conversation on it: we had a UX/UI designer come in and create some possible UI updates. If you're interested, let me know which ones to prioritize; it's not something we'll focus on until we have more data, but if there's something in there that looks interesting, let me know. That's in the
project UX area for APISnoop. Also, I think we presented to SIG Testing, and it was interesting because it was just moments before the call that we started going, oh my gosh, all the data is finally available in Testgrid. So we manually pulled that in and were able to show that increase in coverage across the different releases, 1.9, 1.10, and so on. The interesting things around the user agent being there for the tests obviously won't land until 1.12 and later, I believe, but we'll find some way to put that together. And this is where we get this big long list: we are now pushing data via Prow jobs to Testgrid, via Gubernator, to GCS, and that's just beautiful. Currently I'm looking at bringing in conformance-all and having the dropdowns dynamically populated from the Testgrid definitions, so that we can pull in the latest. We also need to chat about what we want for making the audit logs available that way. We're looking at processing those all in Node so that we can do it all in the browser, possibly, so that if you want to do UI work you can just check out APISnoop, pull up an instance, and you're pulling the data and can do your own processing, renderings, or analysis. It's not there yet, but Catherine's code and the coverage stuff are in the middle of getting a Prow job, and I am so excited to have that available to start looking at.

We're also going to change how we capture stack traces: instead of pushing the backtrace, the pprof-style stuff, via the user agent, I think we're going to try matching it with the audit log. You've got your audit header and the audit ID, so for each API request we're going to pair that with some type of data getting dumped onto the node, and we'll put that into GCS so that we can do post-analysis on it with lots of different approaches. APISnoop will again pull all of these things together from a checkout, so everybody can start participating in creating the visualizations and different approaches.

All of this so far is focused on Kubernetes and the e2e conformance suite, but what would be really interesting is to start applying this approach to lots of different applications, particularly those within the CNCF, where we let them continuously generate this style of logging, whether that's integrating Catherine's approach or APISnoop's pprof-style stack-trace work. What's going to be really cool is that we can start using this large set of data for pattern recognition on common use cases for Kubernetes: which client libraries, which lines of code, what overlaps there are on the client and server side, which groups should be talking to one another, and where we could possibly refactor some things. And of course the big thing within our group is possible guidance for writing tests. When we start identifying these patterns, possibly with machine learning or other tools we're going to apply, we could probably generate tests. I don't know; we need to actually get the data to tell, but that's kind of the dream. I'll open that up for discussion and questions.

Sorry, somebody else was talking. I was just saying, did you want to give a demo sometime? Because you described a lot of cool stuff.

Sure. A lot of this, like the UI thing, is there, and it's not much more than going to apisnoop.cncf.io and clicking on the dropdown. All of it's manual, and it's not all in GitHub yet because we had a transition in staff, so now we're refactoring that. I've got Zach Mandeville doing a refactoring this week so that it will live inside the APISnoop repo and pull the data from GCS. Part of it has been syncing things together, like creating the data, so we've gotten really intimately familiar with test-infra and kubetest and all the various
ways to spin up clusters. I realized it didn't scale for me or the APISnoop team to try to be spinning all of this up; we needed to offload that.

A quick question here: we are just talking about the end-to-end tests making calls to the API server, right, and not any of the interactions between the different Kubernetes components?

Yes, though with Catherine's code it is capturing those interactions when the different components are talking to one another. Right now we're missing the node side, the communication back from the node; I forget exactly how it's architected. I would love to find a way to include that information, because we don't have audit for that, but there's something there, and we should find a way. So here's a concrete example of where we could file an issue: the dynamic audit log capability was pushed out of 1.12, but we're absolutely trying to push very hard to get dynamic audit logging enabled for 1.13. If we want to, we could add that as a feature of the conformance submission. I don't know exactly what we'd do with it yet. We could mine it; it doesn't necessarily give you a thumbs up or thumbs down, but it could surface pathological behavior if you're trying to diagnose problems. We could absolutely have a feature request to enable it as part of the submission, enable it for a Sonobuoy run, or put it somewhere inside the test automation. I don't know exactly where it would live for upstream stuff, but we could figure that out. That's an example of something I'm excited for. Go ahead.

I was just going to say, this is the sort of stuff where I haven't had time to give a sufficiently in-depth update, and I didn't have anything ready by this week, but I will probably have something ready by next month, where we'll walk through not just this but Catherine's work. She's done a bunch of things. One is an alternate view of the rings of color that she generates, turning it into something where you can drill down by group and table to see specific API coverage for specific endpoints. And then, like we talked about last month, there's the idea of looking at code coverage, or line coverage, as a proxy for progress, as opposed to API coverage, because we're getting close to done on API coverage, at least for pod-related things. Even the circle graph, while cool at the highest of high levels, doesn't give me anything actionable: what are the pod API endpoints that aren't quite covered? If I download a CSV and do some analysis, or if I look at an alternate table, I can drill into that. So that's the direction this is headed, and I'll be talking about it next month.

I'm really, really glad Hippie is leveraging the conformance Testgrid thing. That was sort of my dream, to make sure we've got instructions on how people can continuously run conformance tests for their cloud provider of choice. Today we've got results from several providers; we were still working on Alibaba, I think, but Baidu, DigitalOcean, Google Cloud, and OpenStack are there, I think. I thought Gardener was around here somewhere; they may have dropped out. To me, a one-shot PR submission for conformance is neat, but I don't trust it as an engineer. I want to make sure it's actually continuously passing all the time, especially as patch releases get rolled out.

Yep, and that's up to the provider to determine. That's one of those boundary lines: if people have built out automation, that's totally up to them.

Yes, correct. Did we have another topic here to cover, or is this someone else's topic, the one about expanding this to include the capix and that analysis all together? I kind of went through it right here, but where we are on just the
conformance suite, being able to know what we need to write next and what those tests might look like would involve actually engaging with the community at large and seeing what other Kubernetes API consumers are doing and using. If we apply the same process to, let's say, all the CNCF tools as one group, all those projects, and see what endpoints they're hitting and the ways they're hitting them, particularly looking at the stack traces, we can use that data to see that there are some really common patterns repeating over and over again. And since we have the source code for how all those applications do anything similar, we could probably auto-generate at least a framework for people to go through and write a test. I'm not saying auto-generate it so that it all happens automatically, but suggesting some really straightforward patterns for increasing our coverage and making those tests available. What I'm actually most excited about is the connecting part: even if it's not describing the test, it's saying this application and group should be talking to this group within Kubernetes, and encouraging those conversations.

Okay, cool, that's excellent. Are we ready to move to Srini's topic on the agenda? Okay, Srini, go ahead.

Yeah, before that, I have a question on the thing we decided in this meeting about moving all the work items, all the issues, into k/k with the proper tags. Tim, I've seen the one you created just now. Is it going to be that we assign a SIG and move all the existing issues from the CNCF repo over to k/k? Or is there no need for that? Sorry.

Okay, yeah. Tim, go ahead. I thought you were doing something else there, but yeah, I'm filing an issue; go ahead and talk.

Okay, so the way I was seeing it: people who find the Git repository for conformance and have to file an issue will still go ahead and do what they usually do, which is, I saw a repo, I'll go create an issue that I think is relevant to that repository. What Tim is probably doing is building the backlog that we all jointly need to work on, improving the common pool of stuff that we are going to reuse and shape into the program that we are running here. That's why it makes sense to keep the backlog in the main k/k repository: so we can show other people, this is the stuff that we expect, this is why we are asking you to do, for example, the dynamic audit logs or whatever. That seems better than pointing to an external repository.

I'm confused. Picking just one from the conformance issue list, "missing conformance docs for 1.10 and 1.11", which I think Aaron opened: if we were to open that issue today as a brand new issue, would it be under the conformance repo or under the k/k repo?

The conformance repo, because that's where it belongs right now, right? I don't know what I'm asking.

Yeah, it is related to the stuff which is in that repository, so it doesn't need to be in the main Kubernetes repository. And that is directly my question: basically, there is work that our group is going to do which is related to conformance and has nothing to do with any of the SIGs. That's why I was saying our group's work splits into two pieces: one doing the outbound work, and one the inbound stuff, which is what we need to work on for the next release.

I mean, to be honest, it's probably going to span repos, right? The backlog is not going to have a canonical location, just like Kubernetes itself doesn't have a canonical location; the data is split across many repos. If you try to file docs updates, they go in the docs/website repo, and kubeadm issues go in the kubeadm repo,
and so on. I think the thing we should probably do, maybe as part of the next meeting, is have a list of these different backlogs and identify a key stakeholder to walk through the details of what's in them, and see if anybody's going to volunteer to jump on the hand grenades that exist to get the execution done.

Yeah, and it makes sense. I mean, you've got a backlog, and there are items there, items you're aware of and items that need to get done. And at the same time you've got the conformance repo, where, if somebody wants to open up an issue that's purely conformance-related, that might be a little more user-friendly from an external-facing point of view. Is that kind of how you're thinking about it, Tim?

Yeah. You can also use the CNCF repo as the parent, and then all the children get federated across the other repositories. So if you wanted an umbrella like "make end-to-end tests cleaner" or something, it could break down into several issues, and that could be federated across repos. "Make the docs better" is another generic one which could have several issues filed against it and referenced. Okay, does that answer your question, Srini?
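Since the backlog is going to span repos while sharing a common label, one lightweight way to get a single view of it is GitHub's issue search, which lets one query combine several `repo:` qualifiers with a `label:` qualifier. A minimal sketch of building such a query string; the helper function is hypothetical (not an agreed-on tool), and the repo list and `area/conformance` label are just the ones that come up in this discussion:

```python
def backlog_query(repos, label, state="open"):
    """Build a GitHub issue-search query string that spans several repos.

    The resulting string can be pasted into GitHub's search box or passed
    as the `q` parameter of the search API.
    """
    parts = [f"repo:{r}" for r in repos]   # one qualifier per repo
    parts.append(f'label:"{label}"')       # the shared backlog label
    parts.append(f"is:issue is:{state}")   # restrict to (open) issues
    return " ".join(parts)

query = backlog_query(
    ["kubernetes/kubernetes", "cncf/k8s-conformance"],
    "area/conformance",
)
print(query)
# → repo:kubernetes/kubernetes repo:cncf/k8s-conformance label:"area/conformance" is:issue is:open
```

An umbrella issue in the parent repo could then reference whatever this query returns, which matches the federated child-issue pattern Tim describes.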
Yeah, it kind of makes sense. I mean, one of the examples is that right now we're going to enhance the Pod spec tests, and in that case we should have a tracking item, probably at the CNCF repo level, and then have individual e2e-related issues in k/k, or in whatever other SIG's repo they'll go into, right?

So, another great concrete example of an issue that I have with the docs, but don't necessarily feel like filing in this repo: the code that walks through all of the conformance tests and tries to parse out, based on the Go abstract syntax tree, whether or not a test is worthy of being picked up and listed in that conformance list, checks that everything is a string literal. That means we can't use table-based tests to iterate through and define conformance test cases for a variety of conditions. So we need to figure out how to reauthor and rearchitect tests to fit that pattern: whether we mandate that conformance tests must always be described with a single string, or whether we need to update the lexer, parser, whatever you want to call it, to derive the metadata and documentation information a different way. That, to me, sounded like it was just too fine-grained for this group to hash out at length, but it's documented in k/k issues, tagged area/conformance, right now.

Yeah, that's tangential, but it's a problem I actually did some experimenting on when I discussed it with you, because I believe we still have to build some of these tools ourselves. That may be somebody else's responsibility, but the generation of this conformance doc is something that we took on, and parsing the tests probably makes sense for us in a way.

I would view it as similar to this group iterating on and deciding to use Sonobuoy as the recommended way, in its documented submission process, to submit conformance results. Similarly, this group wants to find a way to automatically generate documentation for all of the conformance tests, with written descriptions of what it is that they do. So right now they've tapped an engineer, Matt Liggett, to write this hand-coded tool, and then you helped him out, Srinivas, and that lives in the Kubernetes repo. But it could be that the ultimate solution would be better suited as something some other vendor provides, or some other out-of-tree tool, or whatever, right?

Yeah, could be, but currently the approach we took is to do some tooling. I don't think we can get away without doing some tooling to make the conformance work progress, and this is one example of such tooling. And to answer your actual question: Brian Grant also commented in the architecture meeting that we should keep the parsing at the individual test level, so if there are table tests, you need to group them and still have conformance documentation at the test level. I did play with it a little bit, but it's all static syntax parsing, not runtime parsing.

Yeah, I was just trying to use that as another concrete example of the kinds of problems we need to discuss in this group.

Right, and it's still valid for us to figure out what kind of tooling we need. We can't have a single repository, for sure; it's going to spread all over the place, so we'd better get used to it.

I agree with that. I mean, that's not the point I'm trying to make; it's how much of the responsibility we take on to build the tooling. That's why I said it's a tangential discussion, but we still need to consider building some of the tooling ourselves, I guess.

Great. By the way, the only point I wanted to discuss today is automating the process of publishing the conformance document for each of the releases. Essentially, the idea, which I briefly discussed with Aaron, is to generate the conformance document as part of the
quick-release step, and it will be placed in the Kubernetes tarball for the release. That will happen only for the major and minor releases, because that's what I think we should do. Then we'd have an automated PR generated against the CNCF website repo, where this document will go. I have played with that, and we can do it: we'd use the go-github API, and we can leverage what test-infra uses today. So the document PR would be created for each release, like 1.12 and 1.13, against the CNCF repo, where it would be placed under /docs or whatever location we've currently picked. I'm just looking for any comments from the group.

One comment and one question. The question would be: where is your written description of this proposed way of doing things, so that I could comment on it?

I have opened an issue in the CNCF repo, "automate the generation of the conformance document."

Right, but it sounds like you just rattled off a proposal there, and typically I would expect to see that proposal in a follow-up comment on that issue, or in a Google Doc or something. The only other point would be that make quick-release may not be the right target, because I think that's built to run for every build. If you want this done only for minor and major releases, you probably want it as part of the make release target, not make quick-release. But then my question is: why do we actually need to do that at all? All the artifacts are available in various places, generated by the release process, and we also have the Git repository branches and tags. So why does this have to live in make release or make quick-release? It's an artifact generated on top of the normal build process. Is the conformance document generation already in for 1.12?

No, it's not going in 1.12. For now, what we are doing is manually generating the conformance document and pushing it out to the CNCF website, so I'm proposing an automated way of doing that.

Yeah, I mean, we have publishing-bot kinds of scenarios, where we work on repositories and generate artifacts that go elsewhere, so we can use that pattern. We have the list of releases that we need to generate this for, and if we find a new release, then the bot wakes up and does something, right?

It sounds like, and I've got to run shortly, so we've got to wrap up, that we should iterate on a proposal. So if you can link that, and maybe CC Dims and myself, that's good.

Sure, will do.

Perfect, and on that, we are out of time. Thank you, everybody, for joining. Bye, everyone.
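The "major and minor releases only" gate in the proposal above reduces to a check on the release tag, which the publishing bot could run whenever a new tag appears. A minimal sketch, assuming the standard `vMAJOR.MINOR.PATCH` tag format Kubernetes releases use; the function name and the publish-on-`.0`-only policy are illustrative, not part of the proposal as written:

```python
import re

# Matches final release tags like v1.13.0; group 3 is the patch number.
# Pre-release tags (v1.13.0-beta.1, -rc.1, ...) deliberately do not match.
TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")

def should_publish_conformance_doc(tag):
    """Return True only for new minor/major releases (vX.Y.0),
    skipping patch releases and alpha/beta/rc pre-release tags."""
    m = TAG_RE.match(tag)
    return bool(m) and m.group(3) == "0"

for tag in ["v1.12.0", "v1.12.3", "v1.13.0-beta.1"]:
    print(tag, should_publish_conformance_doc(tag))
# → v1.12.0 True
# → v1.12.3 False
# → v1.13.0-beta.1 False
```

In the bot pattern mentioned above, a check like this would gate the doc generation step, after which the automated PR against the CNCF repo (via the go-github library) would follow.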