All right, shall we get started? I have this vision of you guys sitting in this room, all huddled in one little corner. Actually, the camera might go a bit further back, but it's a little delicately placed. Maybe I'll buy a tripod.

I'll give a 30-second update to start: we're now up to 60 certified vendors. Rackspace just came in this week, which we were very pleased with; they were one of the last major well-known clouds out there that had not been offering a hosted service before. So the program is really widely used. The two focus areas coming up: we have KubeCon + CloudNativeCon Shanghai on November 14th and 15th, our first-ever event in China, and we're definitely going to want to do probably two sessions there, an intro and a deep dive, so I'm hoping some folks from this call can join me for that. And then we'll be in Seattle December 11th through 13th. I'll be sending out an email shortly asking folks to sign up for that, but if people want to volunteer now that they're interested in attending and presenting in Shanghai, that works too. I can answer any questions, but that's my update.

Sounds great. Great news about Rackspace and the 60 providers. If you send that email I'll definitely circulate it around and see if we can get some volunteers.

Yeah, Dan, I think you can count on at least one person from IBM being there, at least one among the two of us, if not me then Aish. We're happy to help in whatever capacity.

Great. Okay, thanks for the update, Dan. With that, I guess let's go through the agenda. Top of the list is the conformance testing guidelines.

Yeah. So I just wanted to give a quick update on what we've been up to since the last meeting. One of the PRs I've linked there is the conformance guidelines: we've been formalizing some of the guidelines, taking the work that Shani Mitra did and passing it around with SIG Arch to firm them up. The PR is out there if any of you want to take a look and comment on it; we are waiting on SIG Arch's approval to actually check it in. That's one. The other is the conformance coverage effort. We've been looking at two main areas, API Machinery and Node, with the aim of identifying a few tests that we can promote to conformance in 1.12, and also some areas of CUJs (critical user journeys) where we can start writing e2e tests and then promote them after they fare well.

In terms of API Machinery, I've linked to the tracking bug where there are ongoing discussions; feel free to chime in. Working with Federico and the SIG API Machinery leads, we've identified about six cases to promote to conformance in 1.12. Out of these, we have PRs in flight for three already. In one of the PRs we're trying to deflake some tests before we can actually promote them. In another we have a couple of namespace tests that are, again, waiting on review from SIG Arch before we can promote them. For the next few we need some guidelines from this group, because one of the features identified for promotion is in beta; we have more discussion on that below. For the other test that was identified, we already have coverage in e2e, but going through the coverage we found that it doesn't exercise all the scenarios, and there is some room to update the test or add new tests there. So again we are working with the SIG itself for guidance on that.

As for Node, again there's a link to the tracker bug. This is a much more involved effort than API Machinery,
considering there are so many endpoints being touched and you're trying to understand the interworking of everything. The main areas we're focusing on: Gen from SIG Node shortlisted a prioritized set of APIs for us to look into, and we ran APISnoop coverage on both the non-conformance e2e tests and the conformance tests to see if we have any delta of e2es we can promote. We found one API endpoint where we have some e2e test cases that could be candidates for promotion, but in the discussion we found that the feature behind those APIs is itself in beta, so we need some guidance on that. And Cuny, from one of our global vendors, went ahead and took a stab at coming up with a test plan and CUJs for the patch API, which is one of the prioritized APIs, to see whether all of those scenarios are covered and which of them we can promote. I've linked the test plan. We're going to work more with the Node team itself to review it and see if any of the scenarios there look good to add, so if you want to take a look at the test plan and comment there, that would be useful as well. Hopefully some of these might come in time for promotion in 1.12.

Would it be possible, not that I want more GitHub teams, because I don't, but would it be possible to have a conformance GitHub team, so that when something gets promoted, people within this group, a subsection of this group, as well as a subsection of SIG Arch, are notified on conformance PRs or something like that? That way it's clear and it loops in the right people as part of the process.

So right now for those promotion PRs we are looping in SIG Arch. But yeah, I can create another team, circulate it to see who wants to get onto the team, and then make sure it gets tagged on the PRs.

Yeah, because I don't think other people who are part of this group have visibility besides the SIG Arch folks. There also might be a subsection of SIG Arch that cares a lot, versus all of SIG Arch, which might be spammy.

Yeah, so this would just be a group anyone can join to get notified, to get pinged on these issues, as I was suggesting.

Cool, so that's it for updates from me.

Cool. So as you mentioned, I guess twice, there are some beta implications, and this is something we wanted to discuss with the group today. Currently we have a process where, now that the program is more mature, tests only land once everyone ratifies them and they're deemed reasonable for conformance. What we're missing is a way for beta features to have their conformance implications tested, and we're also missing a way for a test, even of a non-beta feature, to go through a stage where it's getting evaluated for inclusion in conformance. So one thing we were talking about, and we actually raised this probably a year ago as an option because it's now starting to become relevant, is that maybe we need to start labeling things as, like, beta conformance. It's not so much a formal profile as just a group of end-to-end tests that people could run, for the purposes of qualifying beta features, and potentially even including tests of GA features that just need to go through one cycle, you know, to avoid them landing in GA and all of a sudden surprising people with,
"oh, I'm not conforming," and it's a big hassle. If it goes through this beta stage, it's kind of like a canary: people can see if they pass the test before you promote it. And this would also be helpful, I guess, for projects like Istio. As we look around the ecosystem, there are some projects relying on a lot of beta features which people are deploying into production, for better or worse. So the beta profile could actually be useful for end users as well, as in: hey, I'm actually using a whole bunch of beta Kubernetes. And I guess at that point the profile does become a little bit more formal. What do people think about essentially creating a beta conformance tag?

I don't mind adding a tag of some kind to indicate beta features, but I put in chat that the term "beta conformance" is an oxymoron, like "plastic glass" or "jumbo shrimp," because the whole purpose of conformance is that these are features you can rely on, that are absolutely production grade. By definition, for beta we are not making that guarantee, and we are even saying that we will break or add things in the future. The guarantees are different. So I don't mind adding something that says beta or some other term.

It's kind of like a conformance candidate, right? It's not part of conformance, but it's a candidate that may become part of conformance. Is that a better way to phrase it, then?

Sure, but I just think the term "beta conformance" is...

I think that was a hilarious point. The other thing is: you're building a workload, you want it to be portable, and you're starting to depend on beta features. You would like to know which cloud providers are conformant.

Again, we should not use the word "conformant" there, because we're totally conflating it. I think we're conflating two big themes here. One is giving providers a heads-up as to what has been proposed as conformance tests that are coming, to give them an early warning that they need to either push back or make changes. The other is a profile that describes what APIs are exposed by a given provider. Those are two very distinct concepts.

Well, there's a third thing here, which I thought might have been what the person in the corner was asking me: the testing of beta features. I thought we weren't doing that; I guess that's why we're talking about it now. I absolutely love the idea of a tag specifically for beta features. I absolutely hate the idea of having a beta conformance tag, because those two things are mutually exclusive.

Okay, so can I just get some clarity: are we talking about a set of conformance tests that are proposed to be conformance tests, and that's why they're, quote, beta? Or are we actually writing conformance tests for beta features?

Well, they're a little bit conflated, but I guess we need to address both problems. Whether we do that with the same tag or different tags, I think we need to address both problems. When it comes to beta features, I think it's actually extremely important. I would like to suggest, maybe as part of your document, the candidate document, that a feature should not be promoted to GA in Kubernetes anymore without the conformance implications being understood. Now, we don't want that to just drop in suddenly, so I think it makes sense that as the features themselves go through the graduation process, they should be getting tests, and that should be a gating function for whether they're
going to graduate. So we need some way to capture that. Now, I don't want to get too caught up on the name, but I also think this is a way for providers to raise the flag and say, hey, actually this is a problem for me, and have those discussions as early as possible. To Jago's point there, we don't actually want to drop things on people by surprise, where they might be non-conformant without having had a chance to argue the case. So I think that's important. And since we have a low-coverage situation that we're expanding, that kind of "surprise, here's a new test" thing is also happening with GA features. So I don't know if those two things deserve two different streams of tags, but I think they're both relevant problems for us to solve.

I feel like there is value in having results reported in two places: we have a conformance testgrid, and we should probably have a separate one for features that are in beta that we want to take to GA, and that should be part of the promotion process.

Yeah. I just want to jump in. I hope that we can automate the process of distinguishing those beta e2e tests, the ones which are calling beta APIs. Distinguish that automatically.

Well, but they're not just e2es, right? They're e2e conformance test candidates as well. So it's a subset of the beta e2es.

But my point is: maybe they call the same ConformanceIt method, and if a test depends on a beta API it doesn't get included in the golden list, but it is run as part of the conformance suite and gives you a separate output. I'm totally making this up as I talk, but I think what we're doing is: features that graduate to GA have conformance tests as part of them, and we already encourage e2e tests to be written even for beta APIs. So I would hope that we can come to a process by which that's more automated than coming up with a different tag.

Are you saying that it could be the same tag, but because the underlying API is beta, it wouldn't make it into the golden list?

That's right.

Okay. So that could be the early signal. Then, from a process point of view, I think the other part of this problem is what I would hope is a one-time closing of the gap: the currently GA APIs that are not part of the conformance suite. I don't know today whether we expect that to be a hundred percent or not; I expect there are some interesting conversations to be had there. But that's maybe a separate problem, where we need a "proposed conformance" concept, or give it some bake time, and we can have the discussions we need to have as we figure out whether a hundred percent API coverage is what we intend.
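To make the two ideas above concrete, here is a minimal Go sketch, not the real e2e framework code, of a ConformanceIt-style wrapper that stamps one shared tag onto a test name, with golden-list eligibility inferred from the API group/version rather than from a second label. The spec, conformanceIt, and stability names and the example tests are illustrative assumptions.

package main

import (
	"fmt"
	"strings"
)

// spec models an e2e test. In the real framework these are Ginkgo specs, and
// the conformance wrapper simply appends a "[Conformance]" tag to the name.
type spec struct {
	name            string
	apiGroupVersion string // e.g. "v1", "apps/v1beta1"
}

// conformanceIt mimics the single shared label: every candidate gets the same
// tag regardless of the maturity of the API it exercises.
func conformanceIt(name, gv string) spec {
	return spec{name: name + " [Conformance]", apiGroupVersion: gv}
}

// stability is the "smarter qualification" idea: infer alpha/beta/GA from the
// API group/version (or test directory) instead of inventing a second tag.
func stability(gv string) string {
	switch {
	case strings.Contains(gv, "alpha"):
		return "candidate-alpha"
	case strings.Contains(gv, "beta"):
		return "candidate-beta"
	default:
		return "ga"
	}
}

func main() {
	for _, s := range []spec{
		conformanceIt("Namespaces should be deletable", "v1"),
		conformanceIt("Deployments should scale", "apps/v1beta1"),
	} {
		// Only GA-backed tests would land on the golden list; the rest still
		// run and report separately, as discussed above.
		fmt.Printf("%-50s %-16s golden-list-eligible=%t\n",
			s.name, stability(s.apiGroupVersion), stability(s.apiGroupVersion) == "ga")
	}
}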
Okay, so what do people think? Do we want to create two separate things here to handle these cases?

So I think a label to indicate "this is a test of a GA feature, and this conformance test is still in the working stages," whether you want to call it beta or whatever you want to call it, I agree with a label for that. For the other label, I want to make sure we're on the same page: you're talking about a label for a test that means almost exactly the same thing as what we just described, this conformance test isn't officially approved yet, except it's for a beta feature which has not gone GA yet, which is why it's obviously beta. And hopefully both those things happen at the same time, meaning the feature goes GA and the conformance test goes, quote, GA at the same time. And you're looking for a label for that. Is that correct?

Yes. I think we've had conversations in the past where we may need to separate those temporarily, by a release.

That's right. I mean, I would tell you that if the feature graduates, the test should graduate with it.

So let me make a proposal, if you don't mind. Currently 100% of the conformance tests are, call them, production conformance tests; they're GA level. I propose that we create three new categories of tests: candidate alpha, candidate beta, and candidate GA. Then, as new features mature through the Kubernetes process, the candidate tests would come along with them, and we would be encouraging people to run those, but none of the candidate tests would be required in order to be conformant. The candidate GA ones in particular would be the most important, because the assumption would be that those are coming in very shortly, and that category would be reserved for this backfill work that we're doing.

I think the point Jago was making was that, based on the directory structure of the API or something, we can infer that something is in beta. So even if we label it with ConformanceIt, based on where the test is located in the tree we can find out that this is a beta feature, and we might be able to do some smarter qualification automatically. So we might not necessarily need another label.

Okay, that would only leave us with a label for the GA candidates, I guess. But right now we don't lag the candidates by release, do we?
No.

And we haven't had any problems with that, I guess. I mean, we're introducing a candidate GA label meaning we could actually move faster in a way, because we could label a whole bunch of tests as candidates and then use a cycle to get feedback on them. Would that help us move faster?

I mean, for the tests that we have, to promote them, the blocker I'm having is identifying whether a test is worth being promoted. If we have that input, then yes, we can move faster.

So then no, you won't move faster, because that input velocity won't change.

Yeah, it's that I'm blocked on getting that input, and that input comes from human minds.

Yeah. At least with the whole beta feature thing, the problem I'm trying to solve is: when do we go to a feature owner in the release cycle and say, do you have conformance tests for this or not? It sounds to me like we should be doing that right at the beginning: they should be labeled conformance, and then the tooling automatically sorts it out based on the directories. Does that sound reasonable to everybody? Because it seems to me to be an artifact that is attached to the feature.

Yeah, I'm not opposed to doing the automation behind the scenes, but the labeling, what we call it, I think needs to be clear over time. But I said in the chat window: why don't we move this to a proposal? That way we can actually pull in the other people and stakeholders that we're going to need to solicit. The SIG Testing folks are definitely going to want to be able to set up dashboards for this stuff, so their input matters too.

Okay. It's certainly good to have some rough consensus here. So it sounds like we want to be adding tests for alpha and beta features. There are two ways to go, whether it's the same label combined with directory structure or maybe a different way, and that can be left up for discussion based on what's better operationally for the tooling. On the second issue, GA candidates: do we need that or not? I don't see an immediate need for it, based on what I'm blocked on. So currently we're just labeling them, and then, is there a risk that someone could discover a problem with a test post-release? I guess it doesn't really matter; we can just drop it after the fact if that happens.

A little history here: when we launched conformance, there were, I forget the exact count, about three tests that got excluded in the end, because they had bugs that had to be fixed. So we dropped them from conformance temporarily, but they were still technically labeled conformance, so people were still technically failing them.
We just said, okay, a failure on this one doesn't exclude you. There was one with a bug, and there was one that just got stripped from conformance: it's labeled conformance, we're just ignoring it.

I'm wondering what happens if we promote an e2e test today and it lands in 1.12, and then, let's say, a provider isn't testing ahead of the release, which I kind of expect them not to, and they go: whoa, this test is a problem for me. Then maybe we need to reconsider it. We thought it was something that everyone should conform to, but there's some contention. Do we need to do anything around that, or is the process fine?

I think what you're saying is we need a dry run of proposed conformance tests.

I don't know if we need that. It sounds like it hasn't really come up. I mean, do you think we need it?

I think it would be useful.

Because we can always drop a test after the fact anyway. We can always say, we promoted five tests for 1.12, it turns out one needs a bit more work, we're going to ignore its results temporarily. That's the other way to do it, isn't it?

I'd rather not do it that way and potentially annoy lots of people with the non-conformance. I'd rather do it the other way and say: guys, here's a set of conformance tests we're going to add, you'd better darn well run them by this date, because this is the testing window.

Sure, I'm okay with that. What I was proposing before, as a process to fill this role, was to agree by feature freeze for 1.12 on what e2e tests we intend to propose as conformance for 1.12. Those e2e tests already exist, people can already run them, but this would be a much more visible way to get that feedback.

So I agree with the direction, and I think the trick is always making sure that the signal is valuable without making things more complex. Basically you're saying: tell people a little bit earlier. By the way, we just feature froze; you may want to run the conformance tests and let us know if there are any problems in the next few weeks. The nice thing about that is we avoid lagging a version, which is good.

Yeah. And the distinction here is that there's a calendar gap between when we propose which e2e tests will be part of 1.12 and when those are actually reviewed, approved, added to the golden list, and actually run as part of the conformance suite. What you're proposing would put that into the tooling, and I think that is an improvement: the "proposed conformance," the coming-soon set.

Right. So I guess we have two options here. One: we make a commitment that they're all in by feature freeze, and then we expect providers to test the pre-release version of Kubernetes and report any problems; otherwise it goes into the release, and we can always deal with escalations after the fact. Or: we introduce a process where something gets a flag so it soaks for a version.

Yeah, we should be really explicit. My earlier proposal, the guidance I was trying to push folks towards, was to have the proposed list by feature freeze and the actual code committed by code freeze, just like everything else. I was not proposing that we have everything approved and committed and the golden list updated by feature freeze. I see what you were suggesting, and I'm open to
whatever folks think is a good idea, but I was just trying to use the existing gates.

If I understand you correctly, code freeze is when the test cases would also be frozen, right? But what if people don't actually run the test cases against the conformance suite to verify they're okay until all the code is actually written, which means code freeze, and then we find something wrong with the code? The concept of the test is correct, but the code itself has a bug in it. How do we get that fix in?

The way Android does this for compatibility: they actually have a separate date for CTS, a compatibility-test freeze, which is very similar to the conformance freeze here, and they have it basically midway between code freeze and feature freeze.

We already have a process by which we do bug fixes between code freeze and the release, and I would expect this to behave the same way. If there's a bug in the test, or there's a big discussion and we realize it actually shouldn't be in the conformance suite after all, we all made a mistake, then we just make a change and cherry-pick it into the release branch, just like anything else.

I'm okay with that, as long as the people who get to approve those hot fixes, for lack of a better phrase, understand that even though these are test cases, they are serious. They should follow that same process, not look at it and say, oh, it's just a test case, we don't need to pay attention to it.

In the current process we make pretty significant changes after code freeze.

I understand. It's just that some people who may not understand the importance of the conformance tests may look at this and say, oh, it's just a test case, we can let it go. No, I'm sorry: a conformance test is just as serious as something in the mainline product at this point.

Okay. So how do we emphasize that? Do we need a document detailing how we graduate tests, one that would include such a statement?

Yes, I think the KEP process is probably the right one for this too.

It sounds like we need a KEP for how things get graduated, then.

Yeah, which seems like a lightweight KEP, but just as a way to formalize it. Is that what I'm hearing? Is that what other people think as well?

Yeah.

All right, so I guess the action item is two KEPs. But does anyone object to the direction we're going? Any feedback we should know about and take on board before the KEP is proposed? Obviously it's better if, as a working group, we have some consensus going into the KEP process, so we're not arguing amongst ourselves.

All right then. Next topic is APISnoop updates and the user-agent hack. Hippie, you here? Good. Would you like to take it away?

Sure.
Um, I wanted to get some New Zealand culture in here, because there are some things about introductions that have been difficult for me to convey, and I figured a really short story might make more sense. A marae is where people come together to do social things in New Zealand within the Maori context, and when they do so there's a protocol involved. One part is the haka; you've probably seen it before sporting events, where the New Zealand All Blacks give an invitation to be their best at the rugby games. And then there's the concept of the waka: New Zealand was populated not that long ago by people coming in on canoes. When we get together at a marae and do a formal introduction, people will be asked their whakapapa, and what's being asked is: where are you from, what's your history, why are you here? If they're Maori, they'll go back and say, I came on this particular canoe these many generations ago, and I came via this step and this step and this person; these are all the humans connected in this line, and that's why I'm here today. So keep that in mind: who someone is when they show up is important in any context, particularly for me.

When I look at trying to figure out who we are when we're talking about this API stuff, I looked at how we normally identify who's talking to us via HTTP, and it's the user agent. So a lot of this is around saying: who are you that's talking to me about this? One of the approaches I took was hacking client-go so that when an API call is made, it creates that whakapapa all the way back to main if possible, or back to the assembly entry point, so we really know the whakapapa for this particular conversation. And when I pull all that data together, I really see some interesting patterns. I don't have those defined yet, but the idea is that we need it from all applications if possible: just flip a switch, and would you mind telling us what you think your whakapapa is? Then we can correlate and provide some really meaningful, in-depth user stories, because with this data I think we will be able to create some data-driven conformance, possibly some automated tests surfaced through machine learning or otherwise: here are the actual patterns that we see over and over again throughout our community.

I said all this to give context, because it's been hard. I know that user agent is not necessarily designed this way; there are some limitations, because in the past it's been presented as, hey, that's just supposed to represent the application and maybe its version. But for me, who you are when you're talking to me can mean much more, and may need more space. So there's a user-agent data release for us that has a lot of interesting data; it's based on the same structure that we used last time. But I don't have a lot of correlation on it yet, because it's been a journey to get all these pieces together; it affects so many different pieces within the ecosystem. I think it's going to be best to create a KEP to convey the importance of why we need this, and I'd love some help in the authoring, the editing, the definition, so that we can find a really good way
to do some stuff in client-go, and to make sure we get it all the way into audit, so there's no need to upload things separately later. But I'm open to any other options, anything that lets us get this level of depth and understanding of our community and what it means to be part of it, as far as the API goes.

That's amazing. What a story; I did not see that coming. So help me understand: would this mean anyone using client-go would then be sending that information to their own API server? Is that what you mean?

Well, yes, but in order for us to collect this and make it meaningful, I'm also suggesting we provide a way where they can run something similar to Sonobuoy and say: hey, this isn't actually my production environment, but I'm going to do all the stuff I normally do, because I want this to be part of what is tested. And it just configures a dynamic audit thing, which is another KEP that's being worked on, I think, related to this. I don't have all of that figured out, but I do want everyone to be able to send theirs, as it works within their cluster or even within their application. I'd love to do it for all of the Helm charts, and find an easy way for people to enable it. The easy thought was: have a whakapapa variable to enable, and client-go picks it up and says, hey, I'm going to do that thing I don't normally do, the formal introduction thing.

Plumbing a trace ID through places where it isn't passed along today; I mean, there's OpenTracing and so on. Is it a tracing problem you're getting at?

I don't think so, because I looked into those approaches and they don't interact well with the API; getting it through the API server itself is the issue, while they do handle all the other components. Another thing is wanting to not complicate the contribution of this information. It feels like, if we could find a way to have it just be a switch they flip on, and then they point their audit there, that's ideal; if it needs to be more complex, I'd like to see what would work, some other options. Because it's so long, instead of the full whakapapa, maybe doing a hash: making client-go generate a hash of the whakapapa list.

Just to say, and I'm keeping a straight face when you say it, can we call it a call stack or whatever? Let's find the term that works.

But there's a difference: there's the function history, how we got here from main, and then there's also the per-line part. Being able to, for example in some UI in APISnoop's metadata, go to a particular function and say, here are all the places in our community that flow through this function: if we don't make it super easy to contribute to that, we may miss some interesting stories. I'm just trying to lower the barrier to contribution for this connection.

So I guess what I'm trying to figure out is whether it's more like attaching a debugger to a running process, or more like plumbing a trace ID through all the components in the system. Is it one of those, or is it totally different?
It's probably more similar to the first, actually, because I thought about reducing it to an ID, like I said. If we don't go into all the per-line trace stuff, and we just identified it as a hash and didn't even translate it, at least we would know this particular API call is coming in for the same reasons. I don't know the reasons, I don't have a lot of metadata around it, but I know it's the exact same whakapapa as this other one. Right? That's the "why are you here": I don't know why you're here, but I know it's the same as this other request.

So does that value change on a per-request basis? You're assuming it stays the same, or what is the value?

Let me paste what one of these looks like. There's a link over there to the user-agent audit data, but I'm going to paste it in the document. It's long, and it's in the doc, but this is APISnoop, the new stuff.

Did you paste something to the chat? I don't know where you're looking, sorry.

The document; there's the link.

Got it. Okay, sorry. Yeah, there's the function and the file and line there. "Who are you? Tell me, really tell me, who you are." And this is on every API request when it's enabled, and it's only when it's enabled, this special, very verbose "tell me who you are." And the nice thing is it works for all the applications now, so we get some really interesting data. Like, this one is for the kube-apiserver, and instrumenting the kube-apiserver is not really going to be possible with a lot of the other approaches we have, unless we just say: hey, why don't you tell us who you are when you show up? Sorry for the anthropomorphization of all this; I'm a people person.

So then we can kind of compare different users of the API; that's kind of the goal. And I guess anything that uses Istio would then look like Istio, because we would have a way of analyzing that usage, yeah?

Yeah. And you're going to include all this stuff in the user-agent field?

I did. It's not clean; this is rough.

I mean, frankly, it's not much longer than a regular Mozilla user agent.

Yeah, and SPNEGO is like 12K, you know, Microsoft, the negotiation stuff, so there are lots of headers that can be much longer. We just have to agree as a community: is it okay to send this much information in the header?

And it's definitely opt-in? This is not a default?

Yeah, definitely opt-in, and it's an environment variable or something to turn it on, so it doesn't change the user experience of client-go. What would be nice, once we have those things, is to say: hey, here's a kubectl apply that sets up a, what are they called, an initializer, and it just sets the variable for all of your pods, so when they come up it's all enabled. And when you provision your cluster, go ahead and make sure you set that variable in your provisioning, so that everything community-wide, all of a sudden, if they want to and they take the special steps to enable it, provides us with this thing. We can call it APISnoop or something else.
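For a sense of what that client-go hack might look like, here is a self-contained Go sketch of both variants discussed: the full verbose lineage and the hashed ID. rest.Config.UserAgent is the real client-go field where such a string would go; everything else here (whakapapa, whakapapaID, the fake listPods call site) is an illustrative stand-in, not APISnoop's actual code.

package main

import (
	"crypto/sha256"
	"fmt"
	"runtime"
	"strings"
)

// whakapapa walks the call stack back toward main and renders each frame as
// "function(file:line)", the formal introduction described above. In the
// client-go hack this string would be placed into rest.Config.UserAgent so
// that it shows up in the apiserver's audit log for every request.
func whakapapa() string {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(2, pcs) // skip runtime.Callers and whakapapa itself
	frames := runtime.CallersFrames(pcs[:n])
	var parts []string
	for {
		f, more := frames.Next()
		parts = append(parts, fmt.Sprintf("%s(%s:%d)", f.Function, f.File, f.Line))
		if !more {
			break
		}
	}
	return strings.Join(parts, " <- ")
}

// whakapapaID is the compact alternative discussed: hash the lineage so two
// requests with identical ancestry share an ID without shipping the text.
func whakapapaID() string {
	return fmt.Sprintf("%x", sha256.Sum256([]byte(whakapapa())))[:16]
}

// listPods is a fake call site standing in for a client-go request.
func listPods() {
	fmt.Println("user-agent whakapapa:", whakapapa())
	fmt.Println("user-agent whakapapa id:", whakapapaID())
}

func main() { listPods() }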
I'm fascinated, and processing; not opposed to the direction. I'm trying to figure out what the near-term ask is. Is it just a sanity check? Is it a specific request for API Machinery to instrument some aspect that would enable you to get more information here?

I think it's the sanity check, and: can we get a KEP to say let's increase the field? The ask is that I need the field to be enabled across the community by, you know, the next release or something, so that for all the binaries compiled after that point we can run automation and start collecting meaningful, automated, data-driven conformance user stories from everybody. Not everyone has to participate; we can do this incrementally at first.

Is it just writing to a log file? I mean, how do we actually collect it?

Audit logs.

Okay, so people who run the API server would share their audit logs with us, or...?

Yeah, I would say an easy way would be: configure a dynamic audit log and point it at the centralized thing. It could be like Sonobuoy; we say, here's a new collector just for your data, and then they send the data through, we do whatever, and now that's one of the data sets.

Yep. That's actually what we were planning on doing with the dynamic audit configuration anyway; it was going to be part of that.

Yeah, it's a master-worker paradigm, we call it the aggregator, so if you just spin up another piece of the aggregator to do this, then it makes perfect sense.

Cool.

Sorry, I'm not against doing this; I'm just curious more than anything else. I understand that we wanted to get some sort of, for lack of a better phrase, tracing through the API server, to see what code we're hitting, to make sure we get good coverage and such. But what does this information provide for you? Because this is information about, more or less, the client side, right? So how are we going to use this information going forward?

So, I haven't seen the patterns yet, because I'm just getting to where I have all this data in a single set. But it's about more than just identifying endpoints that need coverage; that's kind of nothing. And it's more than just identifying current tests, sorry, they're called normal tests, the ones not promoted yet; there's really good data there on whether the current tests are doing what the community is actually doing. And then, when we're starting to write a new component or a new test, we can go look at similar programs and the lines of code where they start, and say: oh look, this is the flow of the logic through here, and here's an auto-generated set of what a user story might look like. We just start off looking at that and saying, is that okay? And we can use an expert-system style to make the machine learning of the algorithm work better at auto-generating meaningful stories, always to be approved by a human. But that's the goal for this data: to provide a way for data-driven user stories to come forth that we might not even have seen before. And to do so we need some type of trace of who they are and why they're here, in a very specific way, and I think this is the shortest and most concise form that provides us that level of clarity.
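On the collection side, since the answer above is "audit logs," here is a rough Go sketch of tallying user agents per endpoint out of an audit log. It assumes the audit event schema carries a userAgent field alongside requestURI, as recent audit API versions do; the file path and the aggregation shape are illustrative choices.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// auditEvent decodes only the two fields this sketch needs from the JSON
// events the apiserver writes (one event per line in a log-backend setup).
type auditEvent struct {
	UserAgent  string `json:"userAgent"`
	RequestURI string `json:"requestURI"`
}

func main() {
	f, err := os.Open("audit.log") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Tally which "who are you" strings hit which endpoints: the raw
	// material for the data-driven user stories described above.
	counts := map[string]map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // verbose agents need room
	for sc.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip malformed lines rather than abort the tally
		}
		if counts[ev.UserAgent] == nil {
			counts[ev.UserAgent] = map[string]int{}
		}
		counts[ev.UserAgent][ev.RequestURI]++
	}
	for agent, uris := range counts {
		fmt.Printf("%s -> %d distinct endpoints\n", agent, len(uris))
	}
}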
Can we tie this effort in with the vulnerability effort, where maybe there are other motivations to turn this flag on? Such that: oh well, I'm having weird issues and I want to turn this on so I can provide an audit log to send up to some cloud provider. I wonder if there's an additional use case here.

Yeah, no, I suspect there will be, not just for this data but maybe, yeah, like a stack trace when it croaks.

The other thing I think would be useful is to figure out what we are testing by accident through the existing tests. I suspect there are derivative APIs getting called sort of unintentionally, and that would be really useful to understand as well.

That was the goal. Sorry, just a quick check: we need to move on to Michelle very shortly. Hippie, do you have any last thoughts, like one or two minutes?

Please participate in the KEP if possible; that would really help to continue the conversation outside of what we have here.

Did you share a link, or will you?

It's a Google Doc for now, pre-PR, but once we get consensus, and I have sponsorship from the right SIGs, it can go forward. At the top area, between the two little dashes, there's context for the rest of the conversation, and I think the KEP can be put forward with just the first two sections filled out.

Cool. So the call to action right now is: review the draft KEP.

Yep. And if you're interested in that data, I would really love to work with people to make it meaningful; I think there are real nuggets of gold in there.

Okay, so: review the KEP, think of other use cases, and share them with you. Cool, thank you, very exciting. Okay, Michelle, are you on the call?

Yes, can you hear me?

Yes.

All right, cool. So I've started looking at ways that we can add persistent volumes into the conformance suite. The main challenge is that the persistent volumes layer requires a vendor-specific volume plugin in order to really test the full functionality. I believe there is this concept, maybe not an official concept, but an idea, of having profile conformance suites with these sort of more optional features. So for sure, I think we'll want to add quite a few tests into this profile suite that test both the control plane and the data path for using volumes. The main question I have right now is: is there value in having a core conformance suite for volumes that doesn't actually test a vendor-specific driver? It would just test a mock driver and verify that the control-plane calls are being made. That would basically ensure that distributions are running all of the necessary controllers to be able to run volume plugins, but it doesn't actually test something that a user would experience.

Okay, so two topics there. One is: should we start thinking about profiles, since we have some conformance tests that not everyone can pass? And the second is: do we add kind of abstracted tests into the base conformance suite? Is that roughly it?

This is the first time I've thought about this, so forgive me if it doesn't come out quite right, but I kind of question the value of testing just to make sure the controllers themselves are running. I'm wondering how much value that really has.

Yeah, that's kind of my concern too. Because from a user perspective, just because the controllers run doesn't actually mean a volume plugin will work.

Yeah.
Yeah. And I know there's been some pushback in the past on testing specific plugins themselves, or extensibility features like this, but that's really what people want, unfortunately: a baseline of functionality that says, if I provision a volume, regardless of what volume plugin I have, I'm going to get a volume. As opposed to something completely different, like a network.

Yeah. I guess my thought is that testing against, effectively, a reference or a fake is more useful as a smoke test, or as end-to-end tests that make sure the conformance tests will pass: they make sense and are good for gating and protecting those other tests, but not as conformance tests themselves.

Just a quick clarification question; this is Deepak Vishwanvavi. When you say profile, is that the same thing as the certification, the profile certification, like the way we were at one time thinking about a multi-tenancy profile? Does it fall in the same category? Because if that's the case, this would be too granular; there'd be an explosion of these kinds of profiles.

Right. So, yes. And Brian Grant did send some feedback this morning in a similar vein: his view is, let's not get too granular. So then it becomes a question of whether we want to create, like, a dynamic profile where some of these things go.

Yeah, because I know there was another discussion going on about snapshotting, volume snapshotting; I'm just giving one example. That way you could have, like, a thousand of these kinds of profiles, and that would be kind of confusing.

I think there's a general roll-up level of snapshotting, or a general roll-up level of features that are provider-specific. You could have an entire category if you lump them into larger categories; the number of categories you have is finite, right? It's not unbounded. But I think defining the categories and the behavioral level of testing is key. So, storage as a lump sum for all storage features makes perfect sense; getting further down the tree into the minutiae of X or Y becomes not beneficial. People basically want to ensure, for storage capabilities, that features are supported across providers, and that is the thing they need to have tested.

Yeah, and the extra challenge with storage is that there are various types of storage. We have single-writer storage and multi-writer storage, and those are going to have different end-user behaviors; and some volume plugins might support snapshots while others might not. So having just storage as one lump-sum profile might limit the number of features we could end up including in the suite.

You could always categorize and break down the tree by major features, right? Storage is a lump sum, and then feature enablement is the other axis, because not all storage providers will have all features enabled.

That's correct.

And I think the important thing to an end user is matching their workload to a capable provider that has the feature that's required.

Yeah, but in terms of conformance: would we just have a storage profile that providers can checkmark, but then you have to drill down deeper to actually see which features each plugin supports?
I think you have to have a list of features that you consider to be conformant, and then features that are outside of that list: features that can be tested, but that are not part of the conformance list. And then eventually there has to be a graduation. Does that make sense?

So you're saying there's some basic level, a bar that is what conformance typically is, in that if you meet this base bar, you are conformant; but we absolutely have those extra tests that people can use to verify that features they may depend upon exist and are working properly. And to Jago's comment, I think the user benefit here is that if you code an application to that bar, one that only needs what's in that bar, then you know it will work everywhere. That's kind of the promise.

The thing I'm kind of concerned and confused about is: if you have storage as a profile, you can have so many permutations and combinations. For example, the device plugin thing, where each device has its own plugin. Would we have all of those permutations and combinations in that profile?

No, no. We would provide the behavioral-level tests and then set the minimum bar.

Okay.

The permutations and combinations of configurations that people will have, that can be NP-hard. All we're saying is that the tests pass at this bar.

Okay, so some kind of higher level of abstraction of testing.

Yeah, it's all behavioral.

Okay, thanks. That makes sense. So as a summary: we can start looking into what kinds of behaviors can meet the minimum bar for a storage profile. We can still have optional tests for other optional features, but those won't be required to pass the conformance suite.

And two things. One, I think we had general consensus here, at least, that testing controllers that don't actually do anything is not useful to the conformance suite as it's currently defined. I think that's a reasonable position, and I just wanted to make sure we captured it, because I'm sure other groups will have a similar question. And I did want to make a distinction: I think the conformance for storage, as part of the default profile, doesn't need its own additional profile. I don't think we need "the conformance suite plus a storage profile"; this is just the base profile for storage, just like we have a base profile for API Machinery and a base profile for Node. It would just be part of the existing conformance test suite, for storage.

Yeah, but are you saying that all providers have to provide some storage capability?

I think I would look to SIG Storage to come to a proposal on that, and defer to SIG Architecture to agree or disagree.

I think there are layers; I do think there's a separation. I hear what you're saying, but some providers may not wish to have that profile, maybe for security concerns, right? They don't want to allow workloads with storage capabilities to run in their environments, for whatever reason. We could consider feature X or feature Y for these different things, and providers might explicitly shut them off, but then conformance guarantees certain API-level behavior will be supported. That way you have the separation there for major features that are extensible.

Okay, that's fine.
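As a sketch of what a behavioral, plugin-agnostic storage check could look like, create a claim against whatever default StorageClass the provider offers and wait for it to bind; here is a rough client-go program in that spirit. It glosses over real-world details (WaitForFirstConsumer binding, cleanup, dedicated namespaces), the names, size, and timeout are illustrative, and it assumes a client-go version where the PVC spec's Resources field is a core ResourceRequirements.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Connect using the local kubeconfig; a real e2e test would use the
	// framework's client instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// No vendor-specific driver is named: omitting the StorageClass leaves
	// provisioning to whatever default the provider configured.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "conformance-pvc"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	ctx := context.Background()
	if _, err := cs.CoreV1().PersistentVolumeClaims("default").Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the claim is Bound: the observable behavior a user relies on.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		got, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "conformance-pvc", metav1.GetOptions{})
		if err == nil && got.Status.Phase == corev1.ClaimBound {
			fmt.Println("PVC bound: storage behavior present")
			return
		}
		time.Sleep(5 * time.Second)
	}
	panic("PVC never bound: storage profile behavior not met")
}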
So the base profile does not include storage, and that would be for, like, stateless workloads only. And then you have a storage profile, or badge, or something that gets added on top of the conformance.

Okay. We're about at time. It sounds like any test that we can include in the base conformance profile, we should, if it's appropriate; but we're going to proceed with the initial creation of a storage conformance label, which would map to the storage profile. Is that right?

Yeah, I think that sounds good. And I can start thinking about and planning what tests can be included in that base storage profile.

Cool.

Um, Michelle, just a quick question. I think this feature is not there yet in 1.11, the dynamic volume provisioning thing?

No, it is.

Because I think one of our guys working with you mentioned that it got pushed to 1.12.

Oh, that's related to topology-aware provisioning. I mean, we have had dynamic provisioning since, I don't know, 1.4 or 1.5 or something like that.

Okay, maybe I'm confusing it, because that's what he mentioned: there was something that didn't get in.

Yeah, there's some work going on to make dynamic provisioning smarter, but the base dynamic provisioning concept has been there for a while.

Okay, okay. Just in our last minute: I wanted to propose that we revisit this meeting time, as it now conflicts with the SIG Architecture meeting time.

Yes, there will be a Doodle forthcoming to select a new time.

I don't know, maybe nobody cares, and we can figure that out too. But yeah, I was starting to attend some of those SIG Arch meetings personally, and I'd like the option at least; I imagine other people are similar.

Yeah, me too.

Well, thank you. Thank you for joining this one today, Tim. We'll find a new time. Okay, thanks everybody. Thank you.