Let's get started. Thanks, everybody, for accommodating the new time; hopefully people were able to figure it out. We have 18 people on the call, which is not too bad. Lucina just pasted in the meeting notes, for which thank you. I think the only thing on the agenda right now is APISnoop, but I'd love to open up the agenda if there are other areas folks would like to discuss. Unfortunately we're missing Asitha and William and folks; I presume they're heads down in preparation for Google Next, so we're not going to get the update this month on how we're doing with Globant on the testing and such. Although, from what I see on the emails, it's progressing well. Anybody else want to speak up? Brad? Deepak?

Not much. We did make a submission. I know you mentioned to me to make a submission for Shanghai, because I'm going to be there helping out with [inaudible]. So Shreya and I did make a submission there; I don't know if it'll get in or not.

If not, we'll use it as an opportunity to spread the gospel, my friend.

Hi, Dan. This is Deepak. I think we're about to submit ours either today or tomorrow, but this week for sure.

Okay. Chris, do you want to go ahead and take over and give us an update on APISnoop, and where things stand with the KEP and such?

Sure, will do. I've been to SIG Architecture and SIG Apps, and I'll be attending SIG Testing this week to do an introduction to the KEP and request feedback and sponsorship. After I do so, I'll send out an email to the list to get the discussion going. There's been some pretty positive feedback so far. We've also simplified the client-go and user-agent audit logging PR so that it's simpler to understand.
They're not really big changes, but there's some interest in trying to distinguish this from tracing and OpenTracing, so I'm going to spend a little time this week digging in and making sure we explain the overlaps and differences between this approach and OpenTracing. I've also submitted two talks, actually, for both Shanghai and Seattle.

And you did the small update on apisnoop.cncf.io?

Yes, we did that as well. There were some minor glitches in the UI, and we've also grayed out some of the prototype areas that still need more data in them; the focus has mostly been on getting the KEP to a point where people are interacting with it. So over the next few weeks we'll be bringing more of those features and data sets into the APISnoop UI.

Could you also gray out where it says Kubernetes and e2e conformance kube tests at the top? I know eventually those will be options, but right now they don't do anything. And could you just remind me how often you're updating these results?

Actually, we're not updating the web UI that often.

Okay, because at the bottom it says 2018.05.30, so that's correct. In principle, you could be running this on master, and it would let you see new tests that are added; that's what the history there is for, to show what's happening over time. Although I can see the argument for just doing it with betas and release candidates, because you're not going to get useful results if you run it on master while the tests are in flux.

Yeah, I think it'll help for a release.

Okay. I do as well, and I remain very optimistic for now. Thanks for capturing that note, Lucina. Folks, the program overall has 61 vendors included now.
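As context for the APISnoop discussion above: the tool works by mining Kubernetes audit logs to see which API endpoints are actually being hit. A minimal sketch of that core idea (this is not APISnoop's actual code; the log lines below are hand-made examples, though `verb` and `requestURI` are real Kubernetes audit-event field names):

```python
import json
from collections import Counter

def count_endpoint_hits(audit_log_lines):
    """Tally (verb, URI) pairs from Kubernetes audit-log JSON lines.

    Assumes each line is one audit event carrying 'verb' and
    'requestURI' fields; error handling is deliberately minimal.
    """
    hits = Counter()
    for line in audit_log_lines:
        event = json.loads(line)
        verb = event.get("verb", "")
        uri = event.get("requestURI", "").split("?")[0]  # drop query params
        hits[(verb, uri)] += 1
    return hits

# Hand-made example events, not real cluster data.
log = [
    '{"verb": "get", "requestURI": "/api/v1/pods?limit=500"}',
    '{"verb": "list", "requestURI": "/api/v1/nodes"}',
    '{"verb": "get", "requestURI": "/api/v1/pods"}',
]
print(count_endpoint_hits(log))
```

Endpoints that never appear in such a tally while the conformance suite runs are, by definition, untested by conformance, which is exactly the kind of signal the UI is meant to surface.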
I guess I should have given that update before: we also just decertified a small number of folks who had gotten the original 1.7 certification in but hadn't done a new update since then. We didn't lose any vendors over it; there were a couple of vendors who had more than one implementation and presumably just hadn't gone forward with the other one. So the expiration part of our process is working today as it was intended to. But if there aren't other suggestions on areas you'd like to discuss, we could end the call very early.

Then I have a comment. This is Srini. Regarding the documentation of the tests: there is a set of PRs that I submitted, and they've been there for a while. These are to document the tests. We generate a document for every release of the tests that are part of the conformance suite, and it's checked in under CNCF. The documentation refers to each test in RFC 2119 terms. So it's important that these PRs get going. A few of them are already merged, but a few have been stuck for a while.

Just as a contributor and core lead across a number of SIGs: poke the SIGs that are responsible. Aaron and I are both on this call, so if you want review cycles, poke the appropriate @-group inside GitHub to make sure it goes further. I do review that periodically, but I often rely on Gubernator to be my source of truth, because there's so much inbound from the Kubernetes project that it's almost impossible to manage by email alone. If it's assigned appropriately in GitHub, it should get triaged appropriately.

Much appreciated. We will do that. Thank you. I'm trying to do that. Once these PRs are all merged, I would like to generate the new document, probably with the set of tests for the 1.9 release, or 1.10, whatever.
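For reference, the per-release document Srini describes is generated from structured comments attached to each conformance test, with each Description written in RFC 2119 terms. A rough sketch of that kind of generator, assuming a simplified version of the comment format (the real format and tooling live in the Kubernetes repo and have varied across releases):

```python
import re

# Simplified example of the structured comment a conformance test carries.
GO_SOURCE = """
/*
  Release : v1.9
  Testname: Pods, lifecycle
  Description: A Pod MUST be created and reach the Running phase.
*/
"""

PATTERN = re.compile(
    r"Release\s*:\s*(?P<release>.+?)\n"
    r"\s*Testname\s*:\s*(?P<name>.+?)\n"
    r"\s*Description\s*:\s*(?P<desc>.+?)\n\*/",
    re.DOTALL,
)

def render_markdown(source):
    """Scan Go source text for conformance comments and emit one
    Markdown entry per test."""
    entries = []
    for m in PATTERN.finditer(source):
        entries.append(
            f"## {m.group('name').strip()}\n\n"
            f"- Release: {m.group('release').strip()}\n"
            f"- {m.group('desc').strip()}\n"
        )
    return "\n".join(entries)

print(render_markdown(GO_SOURCE))
```

Automating this per release, as suggested below, would just mean running such a generator against the release branch and handing the output to SIG Docs.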
So we haven't had the document updated for a while; that's my concern.

Are you going to automate the updates with the SIG Docs folks, so that they can publish this as part of the release?

That is eventually the plan. But before that, I need all the documentation in place for all the conformance tests. That's why I'm holding off, but it's a good point; I should work with them sooner rather than later.

I did have an agenda topic that I added, Dan, with regards to the CRI implementations and what it means to be conformant. These are extension points, and distributions can do whatever they want to do. But I think it might be beneficial for us to have some level of profile, I know we talked about profiles, for things like CRI or CNI, and to start thinking about that maybe a little sooner, because there's a lot of marketing that goes into some of these published statements that go out, and there are some issues along with that. An example of this is the publishing that came out on the CRI with regards to support for containerd, and the mismatch between what that means and the actual testing and signal that has been given to the broader ecosystem. It causes a bunch of issues. I think this fits into the profile space nicely, and it might be a forcing function for vendors to get onto the train once we talk a little more about profiles, so that they can actually say "X CRI has been validated for this release" or something like that.

So what strikes me as a little odd about it is that it seems like you're trying to jump over the API. You're not just saying "I want to ensure that CRI is validated," which is definitely a core Kubernetes API; it sounds like you're saying you'd also like to see that containerd or an alternative is validated.
You'd want to make sure that an implementation certifies against a well-defined set of things in the API, right? Because it falls into the profile category: we called out storage as one. Our abstraction layer previously was based on the cloud provider, and we said storage was a good place for us to delineate. But CRI and CNI are also good places for us to start to define what it means to be a certified provider for CNI or CRI.

Could you say just a bit more on that? Say I have my Acme Kubernetes engine and I get rid of containerd and switch to Kata. What is the profile you're envisioning such that the branding on my product would change? Or what's the test that I'm running differently in that scenario?

There are a couple of sets of tests. You'd obviously run the standard conformance tests for API verification, but there would probably also be a set of tests that exercise the node-to-CRI integration more rigorously. I know the Google folks are working on pieces of this, and I think over time, expanding the set of tests to make sure it's fully functional makes a ton of sense.

Right, but what I don't quite understand is, are you just saying we should have much better tests for CRI? I'm sure the answer is yes, that they're not remotely comprehensive enough, and those should just get added in and accepted by, I guess, SIG Node and SIG Architecture and such. Or are you trying to say something more with the profiles: that maybe we should have a containerd profile versus a Kata profile?

I think a CRI profile. We've talked a little bit about things like storage layers, but we haven't talked about some of the extension points. So if we were to say a CNI-certified profile or a CRI-certified profile, that makes a lot of sense.
That means providers that want to meet the spec have to go through a set of tests to make sure they're adhering to that spec, and that it passes for that version.

Okay. Sorry to be dense here, but I'm still just confused. If we're just talking about the Kubernetes side of the CRI API, then I don't see the need for the profile, because every certified Kubernetes implementation should pass 100% of the CRI API tests.

We don't have a structure for testing and validating that; it's not currently done. There are the conformance tests, which, again, currently have a bunch of holes in them. So if a person were to swap runtimes and go from, say, CRI-O to Kata, or containerd to Kata like your previous example, there could be a bunch of gaps in coverage in verifying that you have met all of the CRI spec. In order to do that, you'd probably have to exercise an extra set of tests, which are currently in flight or being created for the CRI.

Yeah, so we all agree on the inadequacy of our current tests and the need to improve them. But where you've lost me is the question of the profile. I don't get why you need a profile, because even if Kata uses some API calls and containerd uses other ones, in principle, if they're supplied via CRI, then the ideal platonic goal of our conformance tests, if it were ever achieved, would exercise all of those APIs. It seems like the profile would come in if you also wanted to jump the API barrier and certify Kata compliance, or containerd, or something else. Anyone else, please feel free to jump in here.

Dan, I'm having the same view you are; you're kind of channeling my mind. I'm having the same struggle. I think that, provided we beef up conformance to support all the details and exercise the points that get hit by the CRI or CNI, then it's irrelevant.
But I don't think we're at that stage, or will be at that stage, for some period of time.

Okay, but then another way of saying it would be: it would be really nice to run APISnoop, for example, on some code that is exercising that, and see the different API calls that Kata versus CRI-O versus containerd use. That might help us prioritize which tests we want to be writing sooner.

Yeah, I think that's actually a reasonable way to approach this. I like that.

Okay. Because the key concept of a profile, in my mind, is that some conforming implementations do implement it and others don't, and we as a conformance community are fine with both of those and consider both of them good members of the family. And there's a legitimate cost in taking on the baggage of now having to explain why we have profiles. So in a situation like this, where once you get full coverage the whole question becomes unnecessary, avoiding profiles is a good thing, right? Does that make sense, Dan?

Yeah. I've actually been surprised that we've been able to hold off on profiles this long. I'm thrilled that we've gotten this base of adoption without needing them, because I do think they really complicate things, particularly for application developers and regular users just trying to figure out what this stuff means.

Folks should look at the link Aaron pasted into chat. That is nowhere near comprehensive enough. I'm having conversations with the node folks about a lot of the gaps, and I'm actually going to bring it up to Steering this week, with regards to some of the issues we are seeing in the wild with CRIs and the gap between what people have stated and where the world is at.

And, Tim, have you spoken to Asitha about maybe prioritizing this area for Globant?
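The suggestion above, using APISnoop-style data to prioritize test-writing, boils down to a set difference: the endpoints each runtime exercises, minus the endpoints the conformance suite already covers. A toy sketch with entirely hypothetical endpoint data (none of these coverage numbers are real):

```python
def coverage_gaps(per_runtime_endpoints, tested_endpoints):
    """For each runtime, list the endpoints it exercises that no
    conformance test currently covers."""
    return {
        runtime: sorted(endpoints - tested_endpoints)
        for runtime, endpoints in per_runtime_endpoints.items()
    }

# Hypothetical endpoint sets, as if gathered from audit logs of
# clusters running each runtime.
observed = {
    "containerd": {"/api/v1/pods", "/api/v1/pods/{name}/exec"},
    "kata": {"/api/v1/pods", "/api/v1/pods/{name}/log"},
}
tested = {"/api/v1/pods"}
print(coverage_gaps(observed, tested))
```

Endpoints that show up in every runtime's gap list would be the natural candidates to write conformance tests for first.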
Because on a previous SIG Architecture call, they'd expressed that CRI was an obvious area, since, as you're pointing out, there's this need for interoperability.

No, I have not talked to them explicitly, but I think if they were to target that area, that'd be great. I can follow up with them about that after this meeting.

That'd be great. Maybe we can just move it onto the mailing list.

Sure. Okay. Well, I don't think this is totally done, but we can definitely come back to it if we decide we do want to spend more effort on it. What was the thing that came up, I think on the mailing list or on the call a month ago, that we thought was going to be the first area we would do profiles in?

Storage.

Yeah, and I guess I haven't heard anything more about that.

I believe we had someone come in from the storage team two weeks ago to talk about the profile.

Okay. Where was it left?

I think what happened was that Michelle, from the storage group at Google, was proposing very granular profiles, such as the dynamic volume provisioning thing, and we pretty much all decided that it has to be at a higher level, which is storage. That's where we left off.

Pardon me, when you say at the higher level, you just mean at the storage level, at the core?

Yeah, exactly. I think Tim described it; he pointed out that it should be more at the behavior level as opposed to the implementation.

Okay. Very cool. Anyone else like to jump in on any topics?

Hey, Dan, just to clarify what we were mentioning at the beginning: we're just making progress as usual, and we've been sharing on emails. I've just heard there's a priority you guys want us to onboard.
We'll be happy to do that, and to compare it against what our current priorities are, so we're glad to jump on anything you want us to work on. Basically, we're working on persistent volumes, and we have some PRs pending for you guys to approve in order to complete the end-to-end conformance process. If you can help us with that, that would be great. I will be sending the status reports every week so you can track the PRs; we need your approval to move forward on those.

That's great. I really appreciate the update.

Gladly. Thank you very much.

Okay, anyone else? Let me just finish with a thank-you to Lucina for the note-taking; we appreciate it. Speak to all of you in a month, third Monday of the month, but of course, please do jump on the mailing list if you have any thoughts between now and then.

Good update. Thank you. Thanks, everyone. Bye-bye.