Okay, let's go ahead and dive in. This is Dan Kohn from CNCF. I see we have 15 folks on the call right now, which I think counts as quorum. Amazingly, there's like 250 people on this mailing list. No, 117 people on the mailing list, which just says there are a lot of people who care about software conformance. We haven't had scheduled meetings, and we had some requests to set this one up. We're set for the second and fourth Friday, excuse me, Thursday of every month. Let's give this a try for a month or so and then decide if maybe just once a month might make sense, based on how much agenda and material we have to go through. Based on this agenda, which I appreciate William putting together, I was just going to give a two-minute update, which is mainly the same thing I said in Austin at our in-person meeting. Then we can go through a few other areas, and I do want to circle back at the end to the contract development of conformance tests that I've mentioned before.

Without coming off as complacent, I really do want to take a moment and point out how extraordinarily successful our conformance program has been to date. Not to go too far on it, but I would really compare it to almost any other conformance program in the history of open source software, where it's almost unprecedented to have gotten, I don't have the exact numbers in front of me, I think it was 38 companies out of the gate, and we're currently up to 49; I'll paste that spreadsheet in. Obviously that success is really due to all of you, and particularly the Kubernetes community, SIG Testing, SIG Architecture, the Steering Committee, and everyone else. But again, I see it as basically unprecedented to get everyone in the industry to sign up. It's literally nine stragglers at this point, and several of them just tried to certify the wrong version, or are not really going to continue with that product line, or other things came up. The big exception is Amazon, where they're not certified yet, but they have announced that they will be certified when their product reaches GA, which should be in just the next couple of months. So, assuming things keep going forward with that, this is really something we can all be proud of.

Now the question is just how to make the program better. I think we're all aware that there are some limitations, particularly in the quality and breadth of the conformance suite. So for CNCF, and me personally, it's a big priority to invest in that over 2018 and try to end the year with that certification meaning much more than it does today.

Diane, do you need access? 

I'm sorry, I'm having trouble accessing this document.

You just need to be part of kubernetes-dev, I believe. I'll put the link in the chat. You just have to join that group; you don't have to actually get the mail. But I'll go ahead and give you access. There you go.

Okay, well, let me stop there. Any other intro comments or questions about where things stand? I presume that most of you are watching the GitHub repo, so between that and the email list you're seeing essentially all the interactions we're having with folks. We continue to have people make little errors or need help with things, but I feel like we've been able to be pretty responsive.
I do want to give a shout-out to Caitlin Bernard, who was the launch person on this on our marketing team and really did a fantastic job of working through all of that when things were a lot bigger. I'm pleased to say that she's been able to train our project manager, Taylor Wagner, and Taylor has now taken over all these functions. She still escalates to me or Caitlin any questions or ambiguities she has, and then we generally go back with directions, so that the process can be as well documented and transparent as possible.

There was one general PSA I wanted to mention, in preparation for what we had uncovered in our conversations during KubeCon. We have set up a separate subproject called Testing Commons to be the clearinghouse for folks if they want to get their PRs reviewed in a timely fashion, or if they want to converse with other folks on the topic of conformance tests, as well as other testing commons areas; that's probably the most beneficial subproject to be the clearinghouse for this stuff. There is information on the community site, and I can also post it in these notes for other folks to take a look at. That's the venue for upstream now. Besides that, SIG Testing is the main meeting, and there's now a subproject meeting specifically devoted to this focus.

Cool, thank you. All right, great, thanks. I think that sets the scene perfectly for what Jacob was going to talk about. I'll put the link in the chat for anyone else to see the doc. Thanks, and congratulations, everyone. Very well run on your behalf, for sure, and thank you for all your work making this launch as successful as it was.

At the top of the doc, I put some background just for future civilizations, or our future selves, looking back at this time in the conformance program. I wanted to give the context that we intentionally focused on the process of evolving the conformance program ahead of building out the surface-area coverage of the program. That was both to ensure the widest participation, and because we didn't want a flurry of activity in what was considered conformance leading up to the launch of the program. So that was very intentional; it was a recognized gap even in the very earliest conversations.

Just as a reminder, the conformance label was essentially a text string that anyone could add to any test and thereby call it part of conformance. So there is a disproportionately high number of conformance tests in some areas where some engineers just happened to think that was useful, but really no rigor around what's thought to be part of a conformance test suite. Thanks to a lot of work by Tim St. Clair and Matt Liggett, we now have both a repeatable way to run the tests and clear ways of adding new conformance tests.

This document goes through a few areas that are the result of thinking through this and previous conversations with this group, and I think there are a couple of parts that will likely merit some conversation: when and how conformance tests ought to be added, and what the priority for those tests ought to be. Part one, the long-term sustainability part, came out of conversations with the governing board about how to close the gap in conformance tests. I have opined before, and still believe, that we have a large number of assets lying around: end-to-end tests that are perfectly appropriate to be part of the conformance test suite.
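(For reference on the tagging mechanism just described: the conformance "label" is literally a tag embedded in the Ginkgo test name, and the conformance suite is a filtered run over it. A minimal sketch; `framework.ConformanceIt` is the real helper in kubernetes/test/e2e, but the test name and elided body here are illustrative:)

```go
package e2e

import "k8s.io/kubernetes/test/e2e/framework"

// ConformanceIt wraps Ginkgo's It and appends "[Conformance]" to the
// test name, which is the string the conformance suite filters on.
var _ = framework.ConformanceIt("should provide DNS for the cluster", func() {
	// ... body elided: create pods, assert on observed DNS behavior ...
})

// The certified-conformance run is then just the e2e binary focused on
// that tag, e.g.:
//   e2e.test --ginkgo.focus='\[Conformance\]'
```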
Since we need to go through the process of adding those, I think we will exercise the process of getting SIG Architecture to agree that a given test is appropriate to add, and the mechanics of adding it to the canonical list of conformance tests.

Specifically, I think the highest-priority areas to add are the workload APIs. The real value of this program to end users is that they know their own workloads will run on conformant Kubernetes clusters. By workload APIs, we're talking about Deployments, DaemonSets, and StatefulSets, for example, and it's critical that conformant Kubernetes clusters expose those APIs for users to be able to depend on them.

I also suggested a couple of toes in the water in API machinery: garbage collection and watch. These are important for many use cases; users writing custom controllers, for example, need to be able to depend on the watch API. So the two I suggested for API machinery were garbage collection and watch, and there are some folks looking to add those E2E tests to the conformance program as well.

I think SIG Node ought to propose some tests. I haven't even taken a swing at what those ought to be, but I think that's an important area as well, and one that will likely drive some interesting conversations about what conformance means as it applies to Kubernetes.

Towards the bottom of the doc, I mentioned new test development briefly. I expect we will find areas that either don't have E2E tests that the related SIGs wish had existed at some point, or that we discover through this process ought to exist. I think there may be two ways to approach this. One is a data-driven test suite, based on earlier work that Eric Tune did, which seems to be an effective way to create non-flaky tests that don't necessarily focus on behavior, but focus on the API surface area and the fact that it exists. I dropped in a link to the effort underway for a data-driven test that exercises, I think at this point, only namespaced resources. There's a possible follow-up PR to that for non-namespaced resources, but that is out in the future.

We have also discussed CNCF contracting with external vendors, to potentially have a one-time amnesty program for SIGs: "Oh, I wish this test had always existed; I never got around to writing it." And I expect there will be a small final percentage, the final 10 or 20%, that will require some effort to write; those tests would be approved, of course, by the community. But that is touching on the discussion about getting some funding from the CNCF to outsource some of that.

I'll stop there; I think that's a fairly good starting point. And again, this is the brainstorm phase. Once we agree it's generally directionally right, I do expect to turn this into a KEP and go through that process, just to make sure there's full visibility, but it's been useful to have initial feedback from this group before it even gets there. And I think that pattern is one we've agreed is useful to continue.
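(A rough sketch of that data-driven, surface-only style, assuming current client-go signatures; the real effort drives this from API metadata rather than hard-coding a group/version/resource, and the names here are illustrative:)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The dynamic client lets one loop drive every namespaced resource the
	// same way; only the GVR and a minimal object change per resource.
	gvr := schema.GroupVersionResource{Group: "", Version: "v1", Resource: "configmaps"}
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata":   map[string]interface{}{"name": "surface-check"},
	}}

	ns := client.Resource(gvr).Namespace("default")
	if _, err := ns.Create(context.TODO(), obj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := ns.Get(context.TODO(), "surface-check", metav1.GetOptions{}); err != nil {
		panic(err)
	}
	if err := ns.Delete(context.TODO(), "surface-check", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("v1 configmaps: create/get/delete are exposed (surface only, no behavior)")
}
```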
So I have two questions. One is, how are you measuring coverage? I don't think it's code coverage; it's based on the API, right?

It's API surface coverage, and that is an important distinction. I think the API, not the implementation, is what conformance is about. There are many ways that users can swap out different components, and I think we're not going into that level of detail. So it's basically just looking at which APIs are actually covered in the E2Es.

That's the definition of the target, yeah. And I think, by definition, the E2Es exercise external-facing APIs, not internal ones.

Right. However, I expect there is some breaking of rules.

Yeah. So there's no guarantee; we're really looking at which APIs are hit, not necessarily that they're tested completely as well. I guess that's another topic for the future.

Yeah, that's actually something I was going to ask about, because API coverage, in the sense of just saying you hit the API, may not be sufficient. I think at some point it'd be really nice if we could get some brainstorming or thoughts around possible ways to test the different variants in which the APIs can be called. What are the parameters? What are the different values for the parameters? And then the different scenarios in which they can be invoked. I know that's non-trivial and not necessarily machine-generatable, but I'd like at some point to see if we can think about ideas to increase our coverage, or our measuring of coverage, based on the actual semantics of what we expect to have happened, instead of just "did you hit this API?" Because hitting the API alone isn't sufficient.

A lot of the tests that exist today are behavior-driven ones. It's not complete by any far stretch of the imagination, but most of them are exercising that, yes, you hit the API, but you also saw the response and behavior you expected from hitting that API before the test completed. All of the E2Es that exist today are built in that vein. They're not just pure coverage; they're behavior-driven tests.

No, I understand that. I agree that the tests themselves test behavior; I'm not questioning that. What I'm questioning is our coverage statement. Let's say our entire API set consists of a single API, but there are 255 different ways in which you could invoke it with different parameters and such. Our coverage tool right now will say, hey, we have 100% coverage, even though we only test three variants. That's my concern.

Yeah. I think Tim's point is an important one, though. The data-driven test, for example, is explicitly not testing behavior. It is only creating a resource with a given name, updating that resource, and deleting it, for example. So that may be necessary to demonstrate that the API is exposed to end users, but it's not sufficient to demonstrate that the user gets the behavior they're looking for. Those are different layers, and I think what you're calling out is yet another, deeper layer. Through this process, it's important to go through and prioritize what the goals are, and I think the primary goal ought to be to demonstrate that the API is exposed, and then get into the behavior of the most common use cases first. Then I think we will likely see some debate as we get into more nuanced differences between environments.

Yeah, I'm definitely in favor of a layered approach, or a step-wise approach, I should say. So I definitely agree with that.
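(To make that layering concrete: a behavioral check adds an assertion on what the cluster actually did, not just that the apiserver accepted the object. A minimal sketch using client-go; signatures per recent client-go, names illustrative:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"app": "behavior-check"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "behavior-check"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "nginx"}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The behavioral layer: assert that replicas actually become available,
	// not merely that the apiserver accepted the object.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		got, err := cs.AppsV1().Deployments("default").Get(context.TODO(), "behavior-check", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return got.Status.AvailableReplicas == 1, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment became available: behavior verified, not just API surface")
}
```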
And one of the important elements of this doc is looking at new features, too. So we have the amnesty, the test-coverage amnesty program, and I think what I heard from Dan, or what you passed on from the board meeting regarding this amnesty, is that it's not going to be open-ended; we need to make sure that, at least going forward, all new features have conformance coverage. So one thing we're hoping to get the community to adopt is a policy that features going to GA have associated conformance tests. To achieve that, we probably need to be getting those tests developed a lot earlier. So Jay and I have been kicking around a couple of ideas, like: do we need a beta-conformance tag that basically indicates this is a conformance test for a beta feature? It's not technically part of the program yet, because the feature is not yet GA, but it's something we could potentially package up in a sort of dry run that people can be analyzing ahead of time, seeing if they have any problems and whether they're conformant to it. Then, when the feature itself graduates to GA, we can graduate the test at exactly the same time, knowing what impact that would have on various levels of conformance. Do you have any feedback on that?

It depends on the API group, really. Some API groups have a track record of growing into GA cleanly, but sometimes there's a shuffling that occurs across API groups, and that is actually very painful. If you were to do an API-driven test and the resource switched groups... even in Sonobuoy we have a lot of glue logic right now that detects and checks for this shuffling of API groups. The earlier reference to apps and DaemonSets: that was an example of something that was originally in extensions, is now in apps, and is now going to GA. So I think we should be careful about when we tag something as beta conformance and make sure it's got a track to success. It's going to be a bit of a tap dance, because our ability to test that and give accurate signal is going to be rough across versions if they start doing this API-group switching.

I think you might want to call it a conformance candidate as opposed to a beta, right? Because it's a candidate for conformance. Just to make it clear it may never make it all the way from candidate to real conformance.

I guess, but I would expect that every feature of Kubernetes should have a conformance implication.

A couple of thoughts on this. One is that if the tests are checked into the Kubernetes repository and are versioned along with the Kubernetes version, there ought not, within that version, to be a mismatch in API group. I do see that some of the upgrade/rollback functionality might get horrendously complicated, so I think that deserves some more thought.

I can see that. And one outcome of this I hope for is that the people in this group who are also involved in other working groups will start the conversation about the conformance implications of a feature far earlier in the process. So I hope that we start to build that into the KEP process, and into requests on pull requests: have you considered the test suite for this, and which part of this is required or intended behavior for Kubernetes?

I think that should probably be put onto SIG Architecture: to add something to the template of the KEP. There is something that outlines the path of a feature's lifecycle to GA, but maybe we could explicitly spell out the path of the tests to GA, to make sure that there is coverage.
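(A sketch of what the candidate/beta tag could look like; to be clear, `ConformanceCandidateIt` is hypothetical and nothing like it exists in the e2e framework today:)

```go
package e2e

import "github.com/onsi/ginkgo"

// ConformanceCandidateIt is hypothetical: it would tag a test as a
// conformance *candidate* for a beta feature, so it can ship in a
// separate, advisory run and be promoted by retagging when the
// feature reaches GA.
func ConformanceCandidateIt(text string, body func()) bool {
	return ginkgo.It(text+" [Conformance:Candidate]", body)
}

// Vendors could then dry-run the candidate set ahead of GA with:
//   e2e.test --ginkgo.focus='\[Conformance:Candidate\]'
```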
I'm totally in favor of that, and I think that's a great idea.

Awesome. The other thing I wanted to mention, following on what William was saying about making sure there's visibility all the way through the lifecycle: I'm also involved in the effort to extract the cloud-provider code out of core, and I think there are important implications here as well. One of the things we've been working on there, to get ahead of the madness, is to have all cloud providers running CI tooling in their own environments and then posting their test results back to TestGrid. We have a wonky idea to actually make TestGrid a multi-cloud application as well, so if you're interested in that... but we're hoping to visualize the participating cloud providers in a single dashboard, so if someone working on one cloud provider inadvertently breaks another one, we surface it really early. I think that fits in really nicely here as well; I would expect it would be a useful signal for folks participating in the cloud-provider work.

This is really thorny, in that history has taught us people don't look at TestGrid unless it's a blocking test, and this has been ongoing for a long time. I see Justin is on the call: we get coverage immediately when kops breaks something, so we know very well when something is breaking for kops. Without that blocking-level signal, it becomes kind of invisible to a lot of people, because there's so much noise in the system right now. Unless you are the provider, looking at yourself and keeping a watchful eye on it, everyone else will kind of tune out that noise and have no idea what's going on.

Yeah, I agree the signal-to-noise ratio is unsustainable. I can think of some ways, and some groups we might inspire, to keep an eye on it at least around release time. And if we do have this concept of a conformance beta or candidate, getting on top of it earlier can avoid some really uncomfortable and difficult conversations towards the end.

Who's that talking? Sorry, I can't see it.

This is Jago.

Hi there. This is Bob Wise; I'm the GM for EKS. This is an area of deep interest. I really want to try to figure out how to participate in TestGrid. I totally agree with what Tim is saying, but I feel like we actually do have to prove that we can keep it green before we can really make it blocking. At any rate, I don't know if this is the right forum or not, but I would love to work with you to figure out a good way for us to think about this, because we do think of it in terms of how we support Kubernetes CI on Amazon, but also hook in EKS, so that we're getting twice the signal; it's important for us to be able to do both.

Excellent. I think Nick Turner from your group is showing up to the cloud-provider meeting.

I have asked him to; this is part of our way of trying to get plugged in. I have asked him to start doing that because I know this is important.

But I think the specific best practice for a cloud provider to interact with TestGrid is perhaps a bit outside the remit of that group. I don't know, maybe it isn't. I also support the idea of having more providers in TestGrid, because kops is currently the only one testing a bunch of other stuff, and I would love for kops to be just one vote amongst five, so if kops is failing, it isn't blocking everyone when I'm the only one letting the side down.
Right now it's all on me, but if I had a free pass when I was the only one of those five failing, that would be helpful.

So I will take the action to get the folks working on that in the cloud-provider working group to socialize it more and loop this group in as well. I think there is significant overlap, and it's not a coincidence that that group is working on submitting results, pulled out of Sonobuoy running the conformance tests, back to TestGrid; that came out of me also being involved here. So I think it will at least be worthy of exploration, and maybe we find better ways of doing things.

Okay, any more questions on that topic before we move on to the next one? Okay, I think Srini has the next topic: Srini Ramu.

Yeah, hi. I sent out a document just a few minutes ago, and I pasted it in the chat. I've also shared it on my screen here; it's much easier for me to go through it that way. Can you guys see my screen? One second.

These are some of the work items I thought we could work on. I created this document a few weeks ago, so the timelines here probably won't make much sense, but the overall goals are to increase the certification coverage and raise the bar on the conformance documentation that we have right now; I'll walk you through some of that quickly. How much time do I have?

About 10 to 15 minutes.

Okay, awesome. Then there's some tooling that is required to gate conformance tests when PRs get merged. And we can also gather some future list items here. Some of them, like the exploratory items I've stated here, I'm not really very familiar with, but we'll cover those at the end.

In order to achieve the first three goals, which are conformance coverage, documentation, and tooling, I split the work into two pieces: test suite enhancements and tooling enhancements. The test suite enhancement is an iterative process: we keep changing or adding conformance tests, I believe, and they get a close look when there is a major release. This will address the areas of coverage gaps and strengthen the existing test suite. The proposal here is to approach individual SIGs to identify such gaps in the test cases, or to strengthen the existing test cases; I'm mostly referring to E2E test cases here, since that's what it's been so far. Either the owning SIGs or individual parties would put in the effort to add new test cases or fix existing ones to have better checks. For example, some existing test cases are part of conformance but don't do all the checks that are required; maybe we need to strengthen them. At the end of this process, SIG Architecture, I believe it is SIG Architecture, needs to approve the newly identified conformance test cases.

And like we talked about, one other thing here is coverage: basically, what percentage of the E2E test cases are now conformance test cases, and out of those, how much of the core code are they testing? That's a much more complex topic, but we need to address it at some point in time. That's pretty much it on adding coverage, which is an iterative process.

The second part, which I've kind of started working on, is the test documentation, of course pending approval that we want to pursue this; I think it's important.
On the test documentation that we have right now: we generate a test document for all the conformance tests, and it is checked in under the CNCF docs section. The tech debt here is that the conformance documentation needs to be solidified; basically, it needs to conform to RFC 2119 keywords, meaning "doing this and this MUST enforce this behavior." People should not have to go and look at Go code to understand what the test is doing or what conformant behavior should be; they should be able to read the documentation and say, "this must happen as part of running this test."

So, quick question there. I know there was work by other folks to get some of the documentation in place, but are we going to have a breakdown of some of these tests that follows this documentation, published as part of the main doc site, under, like, a conformance label? Because I get questions all the time about test A versus test B and what it actually means.

Currently, this is the document we generate. We have some metadata that we extract from the E2E test cases in the Go file; we give a nice name to the test, and then the documentation is part of the comment section above the test. The test has this metadata on top of it, which gives you the test name and the description. The description should be, as I'm showing here, detailed enough that a person reading through this documentation understands what the test is doing. And there are better and worse ways to describe it: rather than describing what the test mechanically does, if you describe what the behavior in Kubernetes should be, it is a lot easier for people to understand, and then to explain what the test does. For example, in some of the work I'm trying to do here, in this particular case, the original documentation is very skimpy, a one-liner saying "make sure the pod with a readiness probe should not be ready before the initial delay." For a person to understand the test, what we are really saying is: create a pod, configure it with an initial delay set on the readiness probe, and check the pod start time, things like that, so that people will know what the test is doing.

So I love this; I agree with all of it. My question is one of logistics: as part of the release process for 1.10, is this detailed information for that specific release published as part of the Kubernetes docs? Because I think it almost has to be, right?

The intent is to go through all the existing test cases that we have, because right now a lot of them are very sparse with respect to documentation, and take all the work that is done here and put it into the existing test cases.

And then do exactly what you said, Tim: make sure that, as part of the process, we document the expectations for tests as they come in, in the future, to meet this bar.

Yeah, I love it. I love it. Sorry, go ahead.

Sorry, I was just offering encouragement, saying this is great. I didn't mean to cut you off.

No problem, thanks. So yeah, the documentation page currently lives in the CNCF docs; definitely that publication process needs to be worked out at some point. So that's the conformance test suite enhancements: basically, we are proposing this standard, and the idea is to generate the documentation based off of this standard for the existing tests in 1.10, and all new tests should adhere to this newly specified bar.
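(For illustration, the metadata block Srini is describing sits in a comment above the test and is extracted by the doc generator. Roughly like this; the rewritten description follows his readiness-probe example, using RFC 2119 keywords, and the Testname here is illustrative:)

```go
/*
  Testname: pod-readiness-probe-initial-delay
  Description: Create a Pod with a readiness probe that has
  initialDelaySeconds set. The Pod MUST NOT report Ready before the
  initial delay has elapsed, and MUST report Ready once the probe
  succeeds after the delay.
*/
framework.ConformanceIt("with readiness probe should not be ready before initial delay and never restart", func() {
	// ... create the pod, then compare the time it became Ready against
	// the pod's start time plus initialDelaySeconds ...
})
```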
The other thing I'm proposing is tooling enhancements around the conformance tests. The idea here is that anybody today can submit a PR and, by accident or by intention, change an It clause to ConformanceIt. With ConformanceIt, the test would become part of the conformance suite, and there are no checks and bounds right now to identify that.

Hi, Srini, sorry, we're just going to interrupt for a minute, because there was some work done on this. I think maybe it didn't get communicated broadly, so I just want to make sure that Matt has a chance to talk about what he did to address this exact problem.

Yeah, so this is Matt. If you have a look at the OWNERS file under kubernetes/test/conformance: there's a test there that verifies that the list of conformance tests doesn't change, and changes to that file require approval from SIG Architecture, and only SIG Architecture. If you look at the list of conformance tests, it hasn't changed in about three months; I'm pretty sure that's because nobody has asked for or received approval from SIG Architecture.

In other words, I agree this item is done, so we should document that in the conformance work group.

Yeah, that's awesome. So how do we essentially enforce that? I mean, if I have an E2E test that I'm writing...

If you were to add ConformanceIt to a new test, that would make a different test under kubernetes/test/conformance fail, because that test checks that the list of all ConformanceIt invocations matches a golden list of conformance tests. To get the test to pass, you have to edit that golden list, but to check in changes to the golden list, you have to get approval from someone in SIG Architecture.

Definitely. I would like to know more details about that; document it somewhere or drop it into the group chat.

Cool. Another question I have: if somebody modifies an existing conformance test, do we catch that? The reason I'm asking is, say the conformance tests pass for 1.10, and then somebody changes something in a conformance test.

No, that is a gap, and that would be a great place to spend effort. Checking, I don't know, file length, or actually comparing or diffing, gets pretty hairy, I'll say, because the test can just be one line that calls another function. I mean, that's not likely, but the point is, if the test code calls a bunch of other code, it can't all be under this level of review. So checking that the definition of a test doesn't change is very difficult, but at least checking that the list of tests doesn't change is something we can control.

I mean, I think participants in this program should be running the early versions of the release to catch these things, because a conformance test changing in a negative way is, I think, a bug that should be caught at release-candidate time. But it is worth additional thought. We talked about it and backed away from it because it is much more difficult. So, full disclosure, we didn't really think too deeply about that one, and it would be a great place to put effort if you are interested in improving the tooling.

But again, I understand this is a difficult task, because going through each and every PR and semantically identifying such behavior is going to be a hit on the CI process and whatnot, so it's going to slow down lots of things. But I was thinking of something like gofmt or golint: if we could have a tool that we can run as a developer that identifies changes that trickle down to the actual conformance tests, that would be great. But I don't know; I'll put some thought into it, probably, if that's a direction worth pursuing.
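(A sketch of the golden-list check Matt describes; the golden file name matches the real kubernetes/test/conformance/testdata/conformance.txt, but the AST-scanning helper is assumed and elided here:)

```go
package conformance

import (
	"os"
	"strings"
	"testing"
)

// scanTreeForConformanceIt is an assumed helper, elided here: it would walk
// the e2e tree's Go AST and collect the name of every ConformanceIt call.
func scanTreeForConformanceIt(root string) []string { panic("elided") }

// The frozen-list check: the set of ConformanceIt invocations in the tree
// must match a checked-in golden list, and the golden file's OWNERS only
// allows SIG Architecture approvers, so a silent promotion fails CI.
func TestConformanceListIsFrozen(t *testing.T) {
	golden, err := os.ReadFile("testdata/conformance.txt")
	if err != nil {
		t.Fatal(err)
	}
	current := strings.Join(scanTreeForConformanceIt("../e2e"), "\n")
	if strings.TrimSpace(string(golden)) != current {
		t.Fatal("conformance list changed; update testdata/conformance.txt (requires sig-architecture approval)")
	}
}
```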
The other little proposal I have here is about the versioning of the conformance tests. Today the metadata around a conformance test is just the test name and the description, but we do not know when the test was added. The documentation that we generate should also tell when the test was added to conformance, and any other modifications to the test should also be part of the documentation. So I'm proposing another piece of metadata in the comments we add on top of the test: a release number saying this test was added in release 1.10, or whatnot.

Apart from that, there was a little discussion I had with William on this exploratory item: whether there are any use cases where people want to certify against a subset of the test cases. That's an open item at this point; if there are any thoughts on this, we can discuss.

Someone added this to the agenda as well; I'm not sure who, but, I think maybe Dan knows the full story, someone actually tried to certify something that was more of a tool rather than a distribution or platform, and we had to actually rescind the certification for various reasons. It kind of raised the point: do we need some program for such tools? This is kind of more long-term, I guess.

Let me dive in here; this really should be on the agenda for next time, and it might even take up the whole call. The quick background is that Aqua participated in the original certification program. Aqua is a security add-on that runs inside clusters as a controller, and basically they installed Aqua and then showed that the Kubernetes distro still passed all the conformance tests. Hey, Dan, about that: I'm very interested in this topic, but I apologize for the error. But they were essentially using the conformance tests to test something that the suite doesn't test and was never designed to test. I reached out to them, and they very kindly withdrew their certification, because otherwise we were just going to get dozens of companies coming in the same way.

Essentially, my current belief is that there's no need for a container certification program, because the Linux ABI is well enough defined, and Docker containers and OCI are well enough defined. But I do think there's potentially some value in something like Kubernetes third-party add-ons, and that's a very vague term; examples would be a security add-on like an Aqua or a Twistlock, or possibly a storage vendor. The basic thing to test would be: can we confirm that these are only using public APIs? So you would have a conforming Kubernetes cluster, you would turn on an HTTP proxy, you would make all of your calls, as part of the installation, through that proxy, and we would verify that every call you're making is an allowed, public Kubernetes API call. At the end you would say: yes, I've fully installed my third-party application, and I'm only using valid API calls. It would be a totally separate program. I don't know that that proxy exists yet; I think the list of APIs does, but not the process and such. I apologize that I'm now a month late on sending an email to the list laying out this idea, and I don't think we can resolve it all in the next nine minutes, but I did want to bring it up as something I think there is meaningful demand for.
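(A rough sketch of that proxy idea, since Dan notes the tool doesn't exist yet: a logging reverse proxy in front of the apiserver that flags any request outside an allow-list of public API prefixes. TLS and auth plumbing are elided; addresses and prefixes are illustrative:)

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// Illustrative subset only; a real tool would load the full list of
// stable, public API prefixes.
var allowedPrefixes = []string{"/api/v1/", "/apis/apps/v1/"}

func main() {
	apiserver, err := url.Parse("https://127.0.0.1:6443") // assumed apiserver address
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(apiserver)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		allowed := false
		for _, p := range allowedPrefixes {
			if strings.HasPrefix(r.URL.Path, p) {
				allowed = true
				break
			}
		}
		if !allowed {
			// Flag rather than block: the point is an auditable record of
			// every non-public call the add-on made during installation.
			log.Printf("NON-PUBLIC API CALL: %s %s", r.Method, r.URL.Path)
		}
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8001", nil))
}
```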
I like how you separated it out as a separate program to avoid confusion. I see the need for it, and I do like how you're positioning it; I think it will reduce confusion the way you're positioning it. The reason we did it initially was, frankly, because it was a little confusing when the program first came out; we really wanted to make sure that if there were such a thing, we were taking advantage of it. Obviously, after it came out, it got more specific about what the target was, which is why we were confused.

But Dan, the term that we had talked about over email for that was "Kubernetes certified tooling," and I thought tooling was a particularly good term, because it kind of implies that it's not applications that you're going to run, but system-level utilities. There certainly would be a lot of interest in that, from us certainly, but I think the larger community would like to have it as well.

I like it, yeah, but I'm not sure that it's big enough, because I think there's real interest from storage, and I don't know that that quite counts as tooling. But I do want to come up with a term that excludes applications, because there shouldn't be a need to certify your MongoDB or your WordPress or whatever it is you want to run.

That's right. And just out of curiosity, how are you going to keep those tools from using some third-party or hidden API that doesn't go through your proxy? I'm just curious how you enforce it. Surely you can capture every time they go through the proxy to a Kubernetes API, but I would assume there are multiple ways they could be doing things under the covers. Maybe that's for another meeting; I was just curious.

It's a fair point, but the basic idea would be the same as with conformance, where people can obviously lie about their test results: somebody would try to install the tool on a certified Kubernetes installation in the future, it wouldn't work, they would report that, and we'd go and investigate. So it presumes non-maliciousness, but there is a kind of crowd-sourced check. William, do you have any comments on that tooling idea?
I think we should definitely talk about it. I like the idea: some of these add-ons and tools do modify the cluster, so I think users will want some assurance of their conformance.

I'm sorry; Jay put together a doc a couple of months ago, in the run-up to the conformance program, about a "Made for Kubernetes" program. The inspiration for that was the same idea: that a thing would work on any conformant Kubernetes cluster. I do agree that it's a separate program, but it's essentially about ensuring that there's no requirement for vendor-specific identity or some tie-in, and that these things would be portable across various providers and distributions. So I will clean up that doc and share it with the rest of this group; I think it's relevant to this conversation and I would love to get thoughts on it.

Sounds great.

So this conversation seemed to focus a lot on tooling, and I just want to point out that if you look at Srini's doc, actually, Srini, can you share that, there was one other thing in there, which is how we allow for some sort of conformance checking of plugins. I think some people in the chat casually mentioned it, but I don't think it was explicitly stated: at some point we need to talk about how a CRI plugin or a CSI plugin gets some sort of certification statement that says, yes, we conform properly to the CSI interface, and you can use us as a sort of approved Kubernetes CSI plugin. There's no formal proposal around that, if you scroll down just a little, Srini, but I want to make sure that people start thinking about how we're going to let people certify those plugins.

I have a related question, and I apologize in advance if you've covered this already and can tell me to go read the docs. It could be related to, say, a third party adding something onto a cluster, where what they need in order for their proprietary thing to work is a custom scheduler. How do we view that in a conformance case? As long as the default Kubernetes scheduler passes all of the default Kubernetes conformance, are we good?

Bob, I think that's directly related to what I was just saying, because a custom scheduler is no different from a CSI plugin from that perspective, and I think we need to brainstorm how we're going to do certification around those, because it is a different beast from what we've talked about in the past.

Yeah, be careful, though; you're going to make the mistakes that past communities have made, where we think we're providing value to the customer by doing things like that, and you end up hurting the overall community. You're differentiating based on your scheduler, but now you're actually hurting overall interoperability, the ability of the customer to have no vendor lock-in. So please really think through what you're saying and how it benefits the overall customer, because these are very, very dangerous waters.

Yeah. Well, I think what's important is that the base Kubernetes conformance test suite should probably always pass. There may be examples that break that, but my default position is that the Kubernetes conformance test suite should still pass even with a custom scheduler, as an example. That doesn't necessarily mean the custom scheduler itself is 100% kosher, to put it that way. And then eventually a working group could come up with its own set of test cases to verify that plugins at that point in the Kubernetes architecture are compliant.
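(For reference on the add-on-scheduler point: pods select an additional scheduler by name via the real spec.schedulerName field, which leaves the default scheduler, and the conformance suite that exercises it, untouched. A minimal sketch; the scheduler name is hypothetical:)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Only pods that opt in via schedulerName are handled by the add-on
	// scheduler; everything else still flows through the default scheduler.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "special-workload"},
		Spec: corev1.PodSpec{
			SchedulerName: "my-custom-scheduler", // hypothetical add-on scheduler
			Containers:    []corev1.Container{{Name: "c", Image: "nginx"}},
		},
	}
	fmt.Println("scheduled by:", pod.Spec.SchedulerName)
}
```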
The example I'm thinking of is slightly different than that. Even in the community meeting a few minutes ago, there was something along this line. If you read a lot of the docs around Kubernetes, one of the cool things is that you can write your own scheduler to do special things, so there's a lot of encouragement for people to do that. A lot of the tests here look like they are headed towards testing the standard default community scheduler. What you're kind of saying is that the implication here is that, to pass conformance, you're going to have to include the default scheduler; but if you have additional things, based on naming a scheduler, that are added on top of that, then you should be fine. Replacing the default scheduler is a much higher bar.

I agree. I think we're getting a little bit ahead of ourselves, to be honest. I understand the purpose of trying to talk about this early, but I do think some of the aspects of being able to support this in the community are not there yet, because right now a lot of SIGs individually have enough trouble keeping up just with the community itself, and we're kind of asking them to raise the bar to be able to offer these levels of guarantees around behavior. So I appreciate the conversation, and I do think it is important in the long term, but I also believe it's way ahead of where some of these SIGs are, given the maturity level of the project at this point.

I would agree; that's why it's under the exploratory items. Having said that, I wouldn't mind, at the next face-to-face meeting we have, whether it's KubeCon EU or someplace else, starting to brainstorm some ideas there, not with concrete proposals, but at least getting people to start thinking about it and what kinds of things we could consider doing. Then, as we have future meetings, we can maybe start to solidify around a proposal that at some point in the future we could bring forward to the group.

I think the likelihood that something like Firmament would show up at SIG Scalability and say, hey, we want to start working with the scalability SIG, is probably more of a short-term thing, I'm guessing; I can see that happening, and it would be good for us. Maybe we can add this to the brainstorming: as a SIG, what should our reaction to that be? Like: yes, here's how we'll deal with an add-on scheduler; or: no, go pass conformance, and then come back and talk to us when you can cover all the basic cases.

Folks, I think we should probably end there, on time. It's pretty clear from the conversation that we should keep going with these twice-a-month meetings. On the Made for Kubernetes and the tooling certification, or whatever we decide to call it, I'll bring that to the list. I'd encourage you to bring other topics to the list as well, obviously each with its own subject header, and I'll talk to you in two weeks. Okay, well, good. Yeah, thanks, everyone. Thanks. Take care.