Apparently Zoom thinks I'm Dims today. This is great. That's still flattering. Who are you? I am Aaron of Big Beard today. I couldn't see you when you were speaking. All good. Zoom gets really confused and thinks I'm either Aaron of SIG Testing, Aaron of SIG Release, or occasionally SIG Contributor Experience, depending on which meeting I signed into recently. It's nuts. Are you dialing in versus clicking the link? I'm always clicking the link. There seems to be some weird thing with cookies or something. I'm embarrassed to say that I went up to David Oppenheimer in Copenhagen and thought that he was Jaice; from the side there's actually a really strong resemblance. He told me I was not the first person. Yeah, it happens.

Okay. Diego and Aaron, could you see if William will be able to attend? He had a proposal out that I was hoping we could decide on. It's five after, so shall we go ahead and get started? We have the agenda doc, which I'll drop into the Zoom chat here.

The first item: what was decided at the KubeCon session in Seattle? Aaron, would you? I actually put that agenda item in because, I'm sorry y'all, I've been talking a lot but I kind of forget what we actually, really and truly, decided. Just scanning through the notes, I feel like we agreed that conformance is non-blocking for getting Windows to GA. I feel like there was a lot of nodding of heads that we want conformance profiles to be additive only. But it's unclear to me whether we actually got into the finicky details of how we're going to implement that for Windows. Am I missing anything?

One of the things we looked at was calling certain things validation suites, because that would remove the confusion: for now at least we've got the conformance set, but this allowed everybody who had to build all the end-to-end tests they needed to move forward; we were calling those validation suites. I remember that being a big part, and it really helped relieve the confusion: when we get to having a real profile, we'll know what a profile is, and maybe the Windows one someday gets to be a profile. Brian, I know, mentioned, hey, we've still got a lot of stuff out here, but they can move forward. Calling those things validation suites really relieved a lot of the pressure, until we have a good reason to have a quote-unquote conformance profile. I tried to articulate that if we ever do an interoperability test with 15 vendors on stage and you've got to make sure the stuff works across all 15 vendors, trying to deal with everyone having different subsets of profiles would be really hard. So, strong agreement there.

The way this was phrased in the meeting notes is that right now we have a set of tests called node conformance, which is defined to exercise all of the behaviors we expect out of a node, and the thinking was: let's rename node conformance to node validation. Yep. There is currently a directory called, sorry, I'm blanking a little bit here, a directory called node e2e; that's all of the tests that can only run on a node. They assume privileged access to the node, to do a little bit more white-box testing of these behaviors. Then there is a set of behaviors that can be tested by way of conformance testing, so you want to exercise the same behavior both individually on the node and at the cluster level. The thinking was that Windows tests would go into a new directory called windows, and those would be Windows-only behaviors at the node level. The example we were using in SIG Testing yesterday was file
system permissions as a behavior. I think file system permissions are a pretty fundamental requirement of how Kubernetes operates, but as an implementation of that behavior, there's a different string you have to use for Linux versus Windows to describe those permissions. So the hope would be that you have different implementations of the same behavior, split across two different node validation suites. Are we all getting big nodding of heads? The OCI, the container OS interface? Well, I think part of the issue is that some of the tests were written in a fairly expedient manner, depending on specific Linux utilities, like ls for example, so it's clear those just aren't going to port over and work the same way. There are other such examples.

I agree that we're kind of lacking; we've piggybacked so far on Linux-isms, and in general we need to identify more of those, both in the API and in lower-level behaviors in the system, if we hope to abstract across multiple operating systems in any significant way. I think we're just at the earliest stages of that at the moment. For now, Windows is one of many optional behaviors in the system. I don't want to overly fixate on just Windows, since we have other issues like single-node versus multi-node, or GPUs, or other things. So as we keep in mind how we support these optional, non-portable, environment-specific or platform-specific features, what is the right way to accommodate those? For example, we don't have any significant persistent volume tests in the system because there's no common persistent volume implementation or conventions. I actually think that one is, in some sense, way more critical to think about conformance-wise than Windows. I do want to unblock them; like any other optional feature, they shouldn't be blocked by lack of conformance tests. It is a good use case to keep in mind; it has some pretty significant implications across the system, and we're still working through those. My priority right now is just to help them meet the general expectations of an implementation of anything in the project.

So that's sort of where we are with Windows. I agreed that, to reduce confusion with the node conformance tests, renaming them to validation makes a lot of sense. I'm not sure any specific decisions were made in the meeting; usually we try not to make decisions in in-person meetings, when not everybody can be present, although we did have pretty good attendance. That's why I'm trying to get us to rehash some of these conversations here in a publicly recorded medium, although that meeting was also recorded and is posted on YouTube, thanks to the wonderful efforts of the CNCF paying for all the recording staff. So I feel like we're in agreement on those two goals: we want to unblock Windows, so Windows is not going to be part of the conformance discussion for at least this quarter, and there is a hope that somebody will work on untangling all of this by renaming node conformance to node validation.

To speak a little to how, tactically, we're going to proceed with Windows, if this is of interest to this group: we've had a thread bouncing around between SIG Architecture, I think this group, and I think SIG Testing. So I finally said, look, Patrick, let's all get in a room and talk about this until we have agreement, at least from a SIG Testing perspective, that we're going to allow for there to be a Linux-only tag and a SIG Windows tag. That lets us specify, in stringly-typed test metadata that is regexable, which tests are never going to work on Windows because they are Linux-only, and which tests are never going to work on Linux because they're SIG Windows tests, am I getting this right? The goal would be to have SIG Windows drop the Linux-only tag into all of the tests that they're unable to get
passing on their implementation of Kubernetes, and then try to reduce as many uses of that tag as possible, this tag being completely orthogonal to conformance and purely for informational purposes. I linked the mailing list thread where we summarized that discussion. Brian, you have your hand up.

Yeah, I just want to clarify: for tests that just don't work yet on Windows, I don't think those should be tagged Linux-only. Only the tests that are inherently Linux-only, because they depend on specific Linux primitives and semantics and APIs, should be tagged Linux-only, like seccomp, or Linux capabilities, or SELinux, or AppArmor, or specific files with certain permissions, or whatnot. Things that don't work yet on Windows we need to tag a different way, and there may be some other categories like this. You know, we tag things that require privilege, and noting that Windows doesn't support privileged is just one example. So I'd rather be more specific about the reasons why tests might or might not work in a different environment. I just want to clarify that one point. We mentioned that on the call too; there was the same concern.

Yeah. One thing we did discuss at the meeting, which we do need a resolution to and I don't know that we have one, is getting more people, more feet on the ground, to help with some of the work, like relabeling the node conformance tests to node validation, or helping improve coverage in the areas on the agenda. Brian, okay, if you look at the chat, somebody really just volunteered for that.

Hang on, I want to go back to what Brian just said, to make sure I can save dear Patrick further work. It sounds like Brian is proposing that we use three tags: one called Linux-only, one called Linux-ism, and one called Windows. Not exactly, but sort of in that vein: for the tests that don't happen to work on Windows, I want to tag those for the more specific functionality that they require. That's fair. As long as you think it's kosher for one SIG, called SIG Windows, to start adding a tag to all the tests saying these don't run on Windows yet, we will go do that. Yeah, in general anybody can tag tests, as long as the tags get reviewed by subject matter experts. Okay. I was just trying to make sure we document the agreed-upon set of tags, their meanings, and how and why they're applied.

So Dan is raising his hand, but I did want to clarify that we do need to resolve with SIG Testing what the best, most appropriate mechanism is for tagging the tests for different behaviors and environments; right now there's a really long explicit test list. Yeah, that's what I'm trying to say: we agreed that two tags were fine from our perspective. If you're saying that's unacceptable from your perspective, (a) I wish I had included you in that conversation, but (b) I'm happy to take three tags back and drive that forward. Okay, so if we just say tagging is the mechanism, I think that's a good starting point. Dan?

So, agreeing with everything so far, and in particular that there wasn't consensus in Seattle to move forward with the Windows conformance program yet. But I think there was particular eagerness from the SIG Windows folks to understand what the steps are to be able to have that profile. So I was wondering, if we're willing to fast-forward a few months here and say all of those tags are done, there's a clear dashboard, and all the Windows tests and all the joint tests are passing, what other things would they need to do to go forward with conformance?

I don't think we ever bottomed out on that discussion. I am certainly comfortable with a cluster optionally having nodes which can meet the requirements of Windows-only workloads, but the cluster itself being fundamentally different, and behaving differently than Linux-based clusters, has implications for workload portability, for tools built on top, and so on and so forth. And so I
don't want to suggest that there even is, definitely, a series of steps which could lead to Windows conformance, without more conversation. One of the things written in our meeting notes from the session was that we had decided to think of Windows as an optional feature, and conformance doesn't necessarily apply to optional features. Maybe there's some mechanism where there's a badge: this cluster supports Windows workloads, this cluster supports GPU workloads, this cluster has lasers. That seems reasonable. But interacting with the Kubernetes API, and some base level of functionality that that API exposes, ought to be universally a message to customers that they can expect the same behavior.

If I can say: I think part of where the confusion comes in, in my mind, is that we're doing this bottom-up, looking at existing tests. I was just looking at what Srini put together on some tracking issues, and I think that when we put together the conformance criteria, they need to be distinct from the tests, right? Right now it seems like they're intermingled. If we can define each of the behaviors specifically, apart from the tests, then it becomes clear whether a given cluster complies, even if you don't have tests; you can manually verify it. Obviously we want to fill in tests to get full coverage, but I don't think tests, and whether they're tagged with this or that, that bottom-up approach, is really the easiest way to make it clear to everybody. If you had a list, if you had this report, you could say it must meet all these criteria, and I don't think it's quite that simple right now.

Sure, I think the tests are a proxy for something else. Exactly, and that I agree with. I think if we had a comprehensive, exhaustive spec of what it is to implement the Kubernetes API, what is required and what is optional, it would be easy to start there and write the tests. We don't have that today, so we're going the reverse direction. Right, but we've had the discussion of coverage and whether coverage is sufficient, and it's hard to know that without having that spec first. And second, when you get into questions of whether a given test should count toward conformance for a particular platform, not having agreed upon that spec ahead of time means we just argue a lot about it, or people can't quite come to agreement and discuss a lot.

So, just to summarize: what we talked about in Seattle was that if we did do profiles, the initial profiles would only be additive. Very specifically, in the Windows context, it would not be possible to have a conformant Windows-only cluster; you would need at least two Linux nodes, and then you can add Windows nodes on top of that. So it sounds like the first step is for Windows validation, or Windows conformance, to mean something, by having these tests identified, passing, and reliable. And then this group still hasn't reached consensus on whether we're ready to have our first profile; I think we're not yet ready. The Windows folks had much more practical issues: some of the current conformance tests don't pass, and changing those tests is hard because it requires more approvals, so how could they make progress? That involves things like forking the tests, the tagging mechanisms, and so on; those mechanical details are still getting worked out.

In terms of adding it as a profile, I think it is useful to consider Windows along with other profiles we might add, and how those might be packaged and communicated to users. But I don't want to consider only Windows, because I don't think that's going to be sufficient. I also don't think it's going to be very impactful, because right now no Kubernetes services and distributions support Windows. There are other features, like persistent volumes, that are much more widely used; they would have more impact. So
persistent volumes and GPUs are good examples to exercise the machinery. I think single-node is a good example because it's niche, but maybe an important niche, and persistent volumes because the impact is so broad; it's such a heavily used feature. Maybe load-balancer services would be good to include too, because that's something most cloud-based services support but not all distributions. I'll point out that single-node is a special case because it's subtractive, where everything else we're talking about is additive, which is kind of interesting. Well, I never want to have a subtractive profile. I think eventually we will remove things from the current default base set of tests and add them back as additive profiles; I don't think single-node meets the bar for that right now, but it is useful to consider how we would do that for things like features that require cluster-level, node-level, or network-level privilege, which is something we talked about doing, or how we would handle other locked-down environments, which people are more and more interested in for security.

So, Brian, I unfortunately missed the meeting in Seattle, but last I heard, the discussion around profiles was to have a very, very small number of profiles. Based on what you're saying now, it sounds more like we would have a base of core functionality, and then other segments of functionality like persistent volumes and load-balancer integration and whatever. That sounds like we're going to get quite a few more than just two or three profiles, for whatever major optional subsystems there are, I guess. So this is where I say we need to think about how we bundle sets of features into profiles. I don't think it's the case that we want to have 40 profiles; we might have 40 test tags. This is where we can look at what sets of features distributions are supporting, and what sets of features we think are broadly useful for running applications and ensuring application portability, which is the goal, and we can bundle that functionality into a profile, or something like that. Sorry, go ahead, John. I'm sorry, so we're saying we would have another level of hierarchy that would consist of features or functions, and you could validate individual features and functions, but the conformance program would not certify individual features? Got it, okay; so, conformance versus validation.

I think Jago had his hand up. I like the bundling-together concept: a cloud provider profile might include node and persistent volumes and load balancers, for example. Yeah, that's one specific thing I've had in mind: looking at the cloud-based Kubernetes services and what features most of them support, like dynamic volume provisioning, which StatefulSet heavily depends on in practice. If that's commonly supported, or we believe it should be commonly supported, we would include it in, whatever we call it, a common profile. I've been thinking about it in terms of a base, the minimum functionality you need, and then a common profile, sort of like Linux has the Linux Standard Base. I don't know what I would call them yet, but I'm definitely thinking in terms of bundles of broadly implemented and useful-together functionality, as opposed to individual feature vectors.

Brad? Yeah, I guess what I envisioned was, for GPUs and persistent volumes, that we would start driving the validation test suites for those, and then, once everybody's comfortable that we've got a whole bunch of validation test suites and we really know what it means for your GPU to work, or your persistent volume to work, or what have you, you would then have a process for potential promotion, and sort of a squashing of the commits if possible, to then add those in where it made sense. Right, so at the end of the
day, whatever I can do to end up with a minimal number of profiles. The one that works for most cloud providers, or what have you, is probably going to be an aggregate of the base profile, possibly the GPU one, possibly the persistent volume one, and I think there was one other you all mentioned that made a lot of sense to me. So you would have that two-stage process: get all those validation test suites going, and then, when it's presented to the outside world, the quote-unquote enterprise cloud provider conformance profile is all of those put together. That way, for an outsider at least, it's a binary or ternary choice, as opposed to looking at every cloud provider and which of 2-to-the-12 combinations they implemented. Does that thought process make any sense to anybody, to help minimize the confusion for our users, or no? Yes.

Hey, so I have a controversial idea. I am physically and mentally incapable of thinking about profiles, because I'm too tactically focused on what it is we're supposed to do to adequately cover our base before we even get to profiles. I would greatly appreciate it if those folks who care tremendously about profiles and additional extras would help out with improving the base; otherwise we're never going to get to a point where profiles even make sense. I would like to applaud Patrick Lang and the folks who are pushing hard on Windows as people who are stepping up and showing up and helping us out there. If you want to keep having these discussions about profiles and stuff, that's great, but I shouldn't be involved in them.

Yeah, but you know, I have Srini, so Srini should be on your list of people you're looking to to help you out; he's trying to pick up work as well. So, essentially, would it make sense that we do the validation suites and see how they go, and then, if a lot of cloud providers are able to run all these validation suites, we combine them into something, call it a profile? Is that the work you were referring to, Aaron, or no? I mean, those to me sound an awful lot like: let's write some end-to-end tests, then see if they work across all Kubernetes distributions and meet the requirements of conformance, and if so, let's promote them to conformance. That's the work of doing a base profile, right? That's correct. Yeah, but just to be clear: when you say, hey, I'd rather have help on this other thing, because it's the crawl step and you all are talking about the walk and run stuff, I want to make sure Srini is engaging you, that Srini is engaging Aaron and helping on that base step you just described, so that you feel like you're getting some relief.

Well, I don't know if I'm jumping us too far ahead to the part of the working group session where I said: hey everybody, I'm leading the 1.14 Kubernetes release this quarter, it's going to eat up a lot of my bandwidth, and I'm not going to be as available for this this quarter. That has happened; I really haven't had any time to shepherd any PRs. So the very next step I have taken is to take every issue that we labeled with area conformance and put it onto a project board, which everybody on the conformance GitHub team has admin access to, so you can add other people if you want, and which everybody in the Kubernetes org, which is most of us, has write access to. The next steps would be for us to think of that as a backlog of work, prioritize it, and filter it, to make sure it's actually the scope of work that we collectively think drives us forward to adequate base coverage. I'm talking about somebody, or a group of people, doing that, getting consensus, and then actually shepherding the appropriate PRs. Even better, it'd be super awesome if people were writing end-to-end tests; I know we do have a couple of contractors, but it's three people. Brian had his hand raised.
Speaking of project boards and project management and shepherding: I'd sent out some email summarizing the priorities and the rationale for the priorities, and there was a follow-up suggestion to convert that to issues or a project board or something. Has that been done? Anybody? I have personally not had the bandwidth to do it; I think someone else might have volunteered. You're looking at awkward silence. I can help with a first pass and show you how we do it for SIG Cluster Lifecycle. I would say we're probably super regimented about how we approach the process, and it's uniform across our subprojects, and I'm happy to share how we do it. I wouldn't mind pairing with you on that and documenting it for this group. Okay.

So this is kind of my concern. Thank you for stepping up, Hippie, and Hippie actually has a whole team of people; you might know him as the guy who wrote APISnoop. One of the options we have is to take him and his team and run them through the exercise of writing end-to-end tests, to see what the whole happy path for that process is, including using something like APISnoop to drive their decisions. But I feel like they're going to lack the appropriate consensus from this group. Look at the discussion we just attempted to have on profiles: I asked us if we had made any decisions, and I feel like we just went and revisited every single one and un-decided them. So I have concerns about how we're actually going to get consensus from the appropriate group of people to push these things forward. Somebody who's just a regular old project manager, great, we would really appreciate your skills, but I have concerns you're going to run into the same circular-discussion problem that we're running into right now, and that the contractors run into. I have tried to shepherd and push to help, but it's still this thing we keep having to work through.

So maybe I'll just say: I think there are multiple tracks going on. One track is adding tests which are obviously missing, and promoting tests which could be conformance tests but just don't have the tag. Folks in my group at Google have pushed that along significantly: writing a watch test, writing garbage collection tests, writing StatefulSet and workloads API tests. Those efforts are not contentious; they are just work, and we staff and move those things along quarter to quarter. Someone showed the delta in the number of conformance tests over time, and that has grown. So there is a body of work of writing tests and promoting them, or breaking them into more targeted tests so they don't test things by accident, that exists. There's another body of work where this group typically ends up in discussion and debate, and it is difficult to come up with a path forward: profiles, what they mean to different interested parties, and whether those interested parties are even on the call. It doesn't look like the Windows folks are on this call today, so we had a conversation, but the voices that need to be part of it are not in this discussion. That is a circular, ongoing discussion that will evolve slowly over time, but I don't think that means we're stuck on the entire effort; we can continue to make progress on pod conformance by having the people who write features for that part of the code base also work on conformance tests.

I think there's a third thing, and I brought this up earlier; maybe I'm the only one who thinks we should do it this way. Like I said, building the tests and taking them as conformance is this bottom-up approach, but I think where we need to agree is on what the behaviors are. If you look at the agenda, I created these categories of: okay, we need
conformance in this area, but what exactly that means, I don't know. How do we measure whether those conformance tests actually cover the behaviors that we want to consider as part of it? Go ahead. I was going to say, I tried at the very beginning of this whole effort to go top-down. The process was: go to the kubernetes.io landing page, see what Kubernetes claims to be, and try to describe in very high-level terms how we can guarantee that Kubernetes platforms and distributions meet that claim. It was very, very challenging; the documentation is out of date or inconsistent. I can share that document again, but I found it was not a super fruitful or actionable path forward.

What about something more concrete, still lower level? If you look at, say, the pod API spec, you look at the spec of a pod and you can say, okay, this particular field can have these values; what does that mean, what is the behavior associated with it? That's lower level than what you're talking about, but it should be more concrete, and if it's poorly documented, then all the better if we can actually figure out how it should work and make sure we all agree it works the way it should. So I encourage you to take a pass at describing that. Moving the apply logic from the command line to the server side required multiple 40-page docs just to describe what it's supposed to do, which was the first time that had been done; until then, it was supposed to behave the way it behaved, which is circular and not useful. So I'm all in favor of what you're saying, but it requires that someone go and do that.

Yeah. So I walked through Srini's spreadsheet and the issues linked therein, and I feel like a lot of them are placeholder issues. My suggestion would be that we try to keep all of our work in the Kubernetes org. One of the reasons I created the area conformance label was so that we could track this work in any of the sundry repositories inside the Kubernetes org where the work actually happens. This means we can apply it to work that has to happen in documentation, in k/community, in any test jobs that need to run, say the upstream conformance image, because the conformance program still uses, I think, a different conformance image that's in test-infra, and then all of the code that actually has to land in kubernetes/kubernetes. So if we're going to do tracking issues, I'd rather have them be somewhere in the Kubernetes org with the area conformance label on them, so that we can use the one board.

And just to be clear, I created this one board as the clearinghouse for everything, issues and PRs. This is a bit different from the tracking board Brian had Jaice create for SIG Architecture to sign off on conformance tests; that was just to shepherd conformance tests one at a time, and maybe it's still good for that sort of shepherding process. But I've heard people repeatedly ask for the one true backlog that has everything outstanding in it, and I feel like this is the best attempt at it. It feels like it has a little more substance than the placeholder issues that Srini created, though I am open to somebody telling me that it's way too overwhelming and we should start fresh with a clean slate, so that we have a standard way of tracking all of our conformance work.

So where is your board, Aaron? I linked it in the meeting notes; I'll place the link in chat shortly. It's in there; it's called CNCF k8s conformance. Yeah, thanks. Thanks, Eddie. Anyway, I apologize if I'm going off on a rant here. Really, y'all, I want to help out as much as I can, but if you're wondering why nothing's happened, I can tell you at least that I haven't been pushing any balls forward, and I apologize for that. But if you want these things to progress, you need to help start pushing these
things forward so when you say forward is it most of the work done by the contractors or is is this individual six six are responsible for the backlog or the triage or yes okay there's a little bit of the contractors are going to run into this and you're gonna have to help them through that the contractors might just need some general help and maybe you have some more domain expertise than the contractors on things like how are we gonna keep the docks up to date and and some of the other PRs that you stringy have pushed forward in Kubernetes Kubernetes yeah I can definitely help with that I mean again at the end of the day we still have to have some kind of a mechanism to talk to individual six and we do not have a meeting set up with the contractors that we on a bi-weekly or weekly basis to discuss the next step I'm happy to do is in the e-mail Punto Rodriguez I'm happy to be meeting up sounds good I just wanted to check in on one of the mechanisms we put in to try and encourage growth of conformist tests over time was the gate that to to move from beta to stable API it had to be a conformist test plan has that actually happened in practice and is that a useful mechanism Brian so currently it is not very useful because most features that are being added are optional in some way so they're not currently falling under the base conformist profile okay so it's all sort of the backlog of either already we're stable or too complicated to move from data to stable but one of the things I am trying to do as part of this release cycle is to encourage the Kubernetes community that everything that's landing and by landing that means going from alpha to beta beta to GA or just landing is alpha outright should have a thing called a cap the Kubernetes enhancement proposal and part of the contents of that proposal should include a checklist of graduation criteria and I am totally fine with SIG architecture saying that anything that's landing is stable should have conformist tests 
as part of that checklist. I feel like that's the appropriate mechanism to use for enforcement. SIG Release and the release team won't mandate this stuff, but if SIG Architecture wants to say it has to be part of the content, we will gladly say: your checkbox is not checked, you're not landing. Yeah, I agree that's the right mechanism. It is documented in the place where we currently document the criteria, but again only for things where conformance is applicable — so if someone were adding a new RBAC feature, RBAC is not currently in conformance, so it wouldn't be caught by that. As part of SIG Architecture we are going to more generally improve the documentation of the bar for different levels of maturity for APIs and for other features and behaviors, so that's something on our slate right now. And just because you don't need conformance tests doesn't mean you shouldn't have any tests, right. And to be clear, this is what brought us back to our whole rationale for saying that Windows support is an optional feature, and conformance does not apply to optional features, which is what would allow Windows to go GA without having conformance. So we just have to write that down so we don't forget it, right? What he just said is written in the discussion down below. I feel like maybe that's the one thing we do agree on as a group today: Windows going to GA does not require conformance. We agreed on it last year; sounds like we still agree on it this year. We've repeated that in a number of other recorded meetings, such as SIG Architecture and SIG Testing, and it will be written down in the KEP.

So, moving on to the proposal William made about being able to certify more than three versions back — oh, would you like me to explain it? Or is that not considered decided by email? I thought that was done by email as well. We had two proposals — which one are you referring to, Dan? Was it the pull request? So yeah —
One proposal we accepted, which is that with 1.13 today you're allowed to certify not just 1.13 and 1.12 — we added in the ability to certify 1.11. So that three-version piece is accepted, and I think a lot of people are appreciative of it. You had a separate proposal, right, which is essentially to say: let's say a new vendor comes in and certifies 1.11, 1.12, and 1.13 — your proposal would allow them to also certify 1.0, 1.1, 1.2, etc., to go further back. Yeah. Well, okay, so firstly, thanks everyone for supporting the proposal that got approved; that went pretty well, I thought, and we already have a whole bunch of people using it, so I think that's a success. Regarding this other one, just to clarify, it's not really to go back to 1.0. The main reason I proposed it — and I'm not really very attached to it, so I'm happy to close it if you think it's a bad idea — is that currently a provider is able to continue to offer an old version as certified Kubernetes as long as they also offer the current version; however, a new entrant to the program can't do that. So effectively this PR would basically just level the playing field and let anyone offer, say, certified 1.10, provided they follow the existing rules — that they also have to offer a current one, defined as the current version and the last two. The main thrust behind the proposal is just to level that playing field so that everyone can offer the same versions, whereas right now, if you didn't get in when you could, then you can never offer that version, whereas someone else can. Does that make sense, for starters? So I guess I'm a little confused, because given the first proposal that was agreed on: if you came in today, 1.13 is out, you can certify 1.13; 1.12 is available and supported by the community, you can certify 1.12; 1.11 is out and still within the range, so you can certify 1.11. Are you suggesting that the playing field is not level because we can't
also go back to 1.10? So, currently someone that had a 1.10 certification gets to keep it; they can still offer certified 1.10. What we said in the terms is you can do that, but only if you also offer a current one alongside it. So as long as they're offering 1.11, 1.12, or 1.13 — as long as they're offering one of those that is current — they can continue to offer the previous 1.10 as certified Kubernetes. There's the LTS discussion, and maybe we extend to four in the future to get a year's worth of support, but today 1.10 is not supported by the community, so it would not be patched; there's no community support for it. So I'm really uncomfortable allowing a new vendor to go back and certify an unsupported version of Kubernetes. So maybe we need to revisit that rule, because currently you can still offer 1.10 as certified Kubernetes as long as you certified it while it was current and you still offer a current one — we're actually allowing that today. So maybe the result could actually be revisiting what counts as a current one. Right now it's one of the last three, and as soon as 1.14 is cut, we drop 1.11, right? But the people who certified 1.11 in this current period get to use it forever, as long as they still offer a current one. So, William, do they have to run the older tests now, or just keep whatever they filed before? Currently they get to keep whatever they filed before, and that's the only way to do it — so if things were bit-rotted, they'd not be able to run the tests anymore. We don't force them to re-run the tests on that old version; we do force them to have a current version that is also conformant. But the tests are versioned with the minor release of Kubernetes, so the tests should still run, right? Yes — and in fact that's quite important, because, yeah, that's the problem: if somebody comes new to the ecosystem and they want to run 1.9 conformance tests, I don't know if they are
able to do that right now. Yeah. Just on a practical level, I want to point out that there are very few companies with the level of investment to credibly maintain security fixes on an old version of Kubernetes. I think we acknowledge that Red Hat has that business model, and several other companies do; a decade ago we would ironically have said, yeah, but what if Microsoft comes in? But I'll actually use the example of Ericsson, where they're not yet certified — I certainly expect them to be soon. I think it's a very different message to say: hey, come in, you can have 1.11, 1.12, 1.13, and then, if your engineers get up to speed, you can claim to your customers that you're applying every security fix and maintaining that going forward on 1.11 — that still seems to me like a very different story than them saying, oh yeah, we've had this 1.7 version out there and we want to certify that as well. So I guess if somebody asks for it with a really specific reason why they need to go backwards, we could always reconsider it, but I would suggest that we table this for now. I feel like the proposal we did accept already relieves a lot of the pressure for folks who were feeling left out. Yeah, I'm fine with that outcome. Let me just give you a concrete example: Google is currently offering 1.10 in production — presumably with all the bug fixes and security patches applied — and that is certified 1.10, and we can continue to offer certified 1.10 for as long as we want. Someone else, like Ericsson, cannot do that. So there is a little bit of a "you have to have been there when that version was current" in order to have that certification mark; they wouldn't be able to get it now. That was the reasoning behind the PR — just to equalize that opportunity. But I'm happy to table it; like I said at the beginning, I'm not strongly attached. Like, Ericsson not having users on 1.10
is an advantage over Google having users on 1.10 — so I see it as an unfair playing field too, just not in the same way you do. Yeah. All right, I'll close it off, but I hope it at least made sense why it existed. Yeah. I kind of agree with the earlier statement that the LTS discussion about patching maybe one more release back — so a year of patch versions — is probably the better way to address this. Right, okay. And I mean, on principle it would be natural for the set of versions you can certify to match the set of security-supported versions, so I don't think there's any unwillingness in this group to change those rules again if LTS changes the policy. Just to be clear, you kind of triggered me when you used the word LTS as if it's a thing that exists. It's our documented support policy — which you can see if you go to the kubernetes.io website and search for the version support policy — that lays out how many versions of Kubernetes we support with security fixes. There is no such thing as LTS; there is a very motivated group of people talking about whether or not it should be a thing and what LTS even means, but right now we just have a support policy. And out of that discussion came the question: should we extend by one more quarter, even without saying we'll support it for multiple years and yada yada, whatever LTS really means? The very simple concept was: extend by one more quarter — one more minor release — to get a full year. Yeah, and I think in practice we've patched a release older than we officially supported at least twice. We discussed this a bunch in SIG Release, where the CI infrastructure that allows us to patch four releases back — which is outside the window of three releases back — exists for roughly the first four to five weeks of the release cycle, purely out of happenstance, no other reason. And if that's something we need to formalize, or extend all the way to
four releases back, I would hope that's being discussed in SIG Release — because we just had an in-depth discussion about maybe making an exception in this one case, where (a) we happen to have the infrastructure around, since it's early enough in the cycle that we can do that, and (b) we are undoing a regression that the prior patch release introduced. Agreed, SIG Release is the right place to have that discussion. A very useful meeting — we'll reconvene in a month as long as there are agenda items. It definitely sounds like there's an eagerness for help on that project board, so if any folks can find resources they could offer, please bring them to the mailing list; I think it'd be greatly appreciated. Thanks, everybody.
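The certification-window rule the discussion above keeps circling — a vendor may certify the current minor release plus the two before it, possibly widened to four under an LTS-style policy — can be sketched in a few lines. This is a minimal illustration assuming a simple 1.x minor-version scheme; the function names are hypothetical and not taken from any real certification tooling:

```python
from typing import List


def parse_minor(version: str) -> int:
    """Return the minor number of a 'MAJOR.MINOR' version string (assumes 1.x)."""
    major, minor = version.split(".")
    assert major == "1", "sketch assumes the 1.x release series"
    return int(minor)


def certifiable_versions(current: str, window: int = 3) -> List[str]:
    """Versions a new entrant may certify while `current` is the latest release.

    With the accepted three-version rule, 1.13 current yields 1.13, 1.12, 1.11.
    """
    cur = parse_minor(current)
    return [f"1.{m}" for m in range(cur, cur - window, -1) if m >= 0]


def may_certify(candidate: str, current: str, window: int = 3) -> bool:
    """True if `candidate` falls inside the certification window."""
    return candidate in certifiable_versions(current, window)
```

Under this sketch, `may_certify("1.10", "1.13")` is false with the current three-version window, but becomes true if the window were extended to four — which is exactly the distinction the LTS-adjacent "one more minor release to get a full year" proposal would change.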