Yeah, thank you. So I'm just going to do a three-minute intro on why we created the program and how it's going so far, and then hand it off for a lot more detail. CNCF is three and a half years old. In that time, I think the most impactful program that CNCF has created for the cloud native community is Certified Kubernetes. This is a way of certifying that every implementation of Kubernetes out there supports all of the necessary APIs. So if you have a workload that runs on one implementation, it really should run correctly on another. Now, in the real world, our conformance tests do not yet cover 100% of the APIs, and even at 100% coverage there would still be a lot of subtleties about how we exercise them. So it is a program that continues to improve and that we make significant investments into. But I would actually compare us against some of the most successful certification programs ever, say Wi-Fi or Bluetooth or Android applications. And the fact that we've gotten the entire industry, all of the biggest public clouds and smallest public clouds, all of the distributions, not just a part of it, has really exceeded our wildest dreams when we launched this program just a year and a half ago. We're up to something like 89 or 90 Certified Kubernetes implementations. It's all on our website. We have this logo that you get to use. One of the big carrots that we offer is that if you want to use the term Kubernetes in your product name, like Google Kubernetes Engine or, I think it's called, the Canonical Distribution of Kubernetes, you need to be Certified Kubernetes. So the fact that we own the trademark and are then able to offer that usage of it has encouraged a lot of people to make use of it. We divide the Certified Kubernetes implementations into three groups. There are the distributions up top, meaning that you can install it yourself. And you can see all the different vendors that we have here.
And then we have the hosted offerings. These are the clouds, and you can see Baidu there and Huawei and Alibaba and others. And then finally, we have the installers. We define these as not really adding any other software; it's all about taking vanilla Kubernetes and giving you a way of standing it up. One nice thing you can see here: in the latest Kubernetes, just released this week, kubeadm has a logo now, which it didn't have before. Interestingly, there are five different installers within the Kubernetes project itself: kops, Kubespray, kubeadm, Minikube, and kind. It just speaks to the fact that this is a very big project and people have different approaches to installation. But we're really pleased that all five of those are Certified Kubernetes; it speaks to that level of interoperability and compatibility. So we continue to make significant investments in the underlying conformance tests. The program itself for Certified Kubernetes really runs quite smoothly, and we can talk through that, but it's all public on GitHub. It is free for members and for nonprofit organizations. And we've had a huge uptake of it here in China. Over 25% of all of our Certified Kubernetes implementations are from China. So I will hand it off there. Thanks. Thanks, Dan. Dan has kindly found some time to come and give us a brief update on this. I will walk through what the Kubernetes Conformance Program is from the certification point of view. And also, this is an intro and deep dive together in 35 minutes, so it's going to be a little bit of the what, plus the work from the community side of things, so the development of the e2e tests and our approaches. So Certified Kubernetes is a software conformance program, like we all know. Vendors can choose to certify their offering with CNCF. This program started in 2017, and now, like Dan said, there are more than 80 vendors who are certified in the program. So what does it mean?
I mean, CNCF, of course, runs the Certified Kubernetes program. But the important part is that most of the world's leading enterprise software companies are now providing Kubernetes in their offerings, and they are certified. That's the great part about this program. Any vendor is invited to get certified. They just have to follow certain steps to submit the results produced by running the tools; I'll talk about these tools. And coverage-wise, we are trying to make sure that there is enough coverage that certification brings real value to the offering. So actual software conformance on the vendor's version requires it to exercise certain APIs, the APIs that are exercised by the free, community version of Kubernetes. That way, we ensure that there is consistency and portability: if you have a workload, you can run it anywhere and everywhere. So in order to avoid vendor lock-in, it's important that you run your workloads on certified Kubernetes platforms. And from the vendor's point of view, they have to keep up with the conformance program in a timely way; they have to certify against the latest versions of Kubernetes, and I'll talk about how that works. So like Dan mentioned, the Certified Kubernetes mark and the Kubernetes mark are trademarks of the Linux Foundation. The Linux Foundation established this for a reason. Number one, it guarantees the quality of the service that is offered by the vendor, and it ensures that community members are able to accurately describe that offering. It is very easy to participate in the program: you self-test your qualifying offering. CNCF has a participation form, which is a set of general questions that you answer. Then, with the self-tested results and this form, you submit a PR to their GitHub repo. That way, CNCF will be able to verify and validate your results and certify you. And the program is free for CNCF members, as well as nonprofit organizations.
There are some conditions around it, and all the terms and conditions of the program are listed on the CNCF website; I'll show that to you. In order to be eligible for Certified Kubernetes, you need to be certified on one of the latest two minor releases, for example 1.13 or 1.14. I know that 1.15 was released last week. The certification is valid for 12 months. So if, for example, you were certified on 1.7 on June 30, 2017, and a year later, on June 30, 2018, there are two newer versions available, 1.10 and 1.11, then you need to be certified on 1.10 or 1.11 to remain certified. And the terms and conditions here are actually part of the CNCF website that we browsed before, so if you have access to my slides, you should be able to get to those. In the interest of time, I'm not going to go through that. So there are two ways to run the tests. Either you use kubetest to run the tests, or you use Sonobuoy, which is a tool that's available; I'll walk through Sonobuoy in detail. If you're using kubetest locally, you're building the e2e tests against your cluster. You set the access path to your cluster and run the tests with a Ginkgo focus on [Conformance]. There are lots of e2e tests, about 2,000 of them in k/k, the kubernetes/kubernetes repo, right? And the conformance tests are a small subset, which is about 200 tests right now. We're constantly adding new tests to the conformance program, but it takes a long time to add tests; I'll explain why it takes a long time. But generally, it's a small subset that hits the core APIs of Kubernetes. So the other option is to run Sonobuoy. For this, stand up a cluster on any platform, any offering you have. You need kubectl installed, the client, and access to the cluster via a proper kubeconfig. And you need to download the Sonobuoy program, which is written in Go. Originally, this tool was available from Heptio; now it's managed by the community.
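The version-eligibility rule mentioned a moment ago can be sketched as a small check. This is a hypothetical helper, not CNCF tooling; it only encodes the "latest two minor releases" rule described in the talk.

```python
def eligible_versions(latest_minor: str) -> set:
    """Return the two newest Kubernetes minor versions eligible for certification.

    `latest_minor` is a version string like "1.14"; the rule described in the
    talk is that you may certify against the latest or previous minor release.
    """
    major, minor = (int(p) for p in latest_minor.split("."))
    return {f"{major}.{minor}", f"{major}.{minor - 1}"}

def remains_certified(certified_on: str, latest_minor: str) -> bool:
    """A certification stays valid only while it targets one of the two latest minors."""
    return certified_on in eligible_versions(latest_minor)

# Example from the talk: certified on 1.7, then 1.10 and 1.11 ship.
print(remains_certified("1.7", "1.11"))   # False: must re-certify
print(remains_certified("1.10", "1.11"))  # True
```

Note this leaves out the 12-month validity window, which is a separate clock running alongside the version rule.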
So actually, I ran this a while ago. If you can, can you see my screen? So on IBM Cloud, essentially, I created a cluster of three nodes. And I exported the kubeconfig so I can reach the cluster. And I did a go get on Sonobuoy. I think I did it somewhere here, but I assume that I got it. And then, with sonobuoy run, the tests are run, and you monitor them. After a while, you will notice that the sonobuoy run has completed, and then you would do a, sorry, I think I'm on the wrong window here. So as it runs, if you see, Sonobuoy starts a set of pods on your cluster and runs the e2e tests inside these pods. And eventually, when it finishes, by checking the Sonobuoy status you see it's complete. You can go ahead and do a sonobuoy retrieve. What that does is capture all the test output from your cluster and dump it into a tar file. These are all the contents of the tar file, which probably are not of much interest to us. I expanded the tar file into a directory, and then you can tail the last lines of the e2e log. And you can see all the tests passed. So you submit this result back to CNCF, along with the other forms, as a PR; it's clearly indicated on the CNCF website. As you can see, a lot of vendors submit PRs to the GitHub repo. If you look at any of the PRs, for example, IBM has submitted this PR here. It has all the artifacts required by the CNCF. Once the CNCF verifies the form and the version that you're testing against, in this case 1.10, and the kubectl version, which gives you the version of the client as well as the server, all the artifacts submitted along with the e2e log can be used to certify the offering. So that is briefly the CNCF side of things, how you certify your version of Kubernetes so that you can use one of these cool logos like Certified Kubernetes. From the community point of view, I want to give a brief update on what we are doing.
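The "tail the e2e log and see everything passed" step boils down to reading the Ginkgo summary line at the end of the log. Below is a hedged sketch of checking that summary programmatically; the summary format shown ("SUCCESS! -- N Passed | N Failed | ...") follows typical Ginkgo output, but treat the exact format as an assumption for illustration.

```python
import re

def summarize_e2e_log(log_text: str) -> dict:
    """Pull the pass/fail counts out of a Ginkgo-style e2e log.

    Assumes the log ends with a summary line such as
    "SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1800 Skipped".
    """
    m = re.search(r"(\d+) Passed \| (\d+) Failed", log_text)
    if not m:
        raise ValueError("no Ginkgo summary line found")
    passed, failed = int(m.group(1)), int(m.group(2))
    return {"passed": passed, "failed": failed, "ok": failed == 0}

# Toy log excerpt standing in for the real e2e.log inside the Sonobuoy tarball.
sample = (
    "Ran 200 of 2000 Specs\n"
    "SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1800 Skipped\n"
)
print(summarize_e2e_log(sample))
```

For a real submission you would of course inspect the actual e2e.log from the retrieved tarball, not a parsed summary.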
Could you just make that distinction? Yeah. Well, from the CNCF side, they are managing the conformance program from the certification point of view, on the vendor side. On the community side, what we are trying to do is build the conformance program itself: how do we make sure that all the vendors provide the same quality of service? Essentially, SIG Architecture is the umbrella governing this from the community point of view. They decide what all the required tests are that should be run as part of conformance. And we have a conformance subproject where we have development activity, which we'll go through, and we communicate with the other SIGs in Kubernetes. There are lots of SIGs that are involved in the e2e tests, and they define the behaviors of the tests. The SIGs are responsible for writing the e2e tests; I mean, in our conformance program we do not have the necessary skills for all the SIG-related work. So SIGs will write the e2e tests, and the e2e tests will become part of the e2e suite. And from the conformance point of view, we will then go ahead and promote an existing e2e test into the conformance test suite. So this is always happening. From every release to the next release, there will be more conformance tests added to the test suite. So it's not as static as we think. Once you pass conformance, it doesn't mean that you will pass conformance for the next release and so on and so forth, because there are more and more tests. Right now, I think we have over 210. The one I showed you was for version 1.13, where we had about 190 tests. So there were 20 to 25 tests added between 1.13 and 1.15. So there are several approaches we take from the community point of view. First we decided that we would concentrate on the PodSpec, because that's a general thing that should work consistently for all workloads.
We have exercised most of the PodSpec, but there are still some areas of the PodSpec that are not exercised yet, so we are working on it. Like I said, the promotion process itself takes several cycles. So we decide that a particular feature is not tested, say pod-to-pod communication within a cluster is not tested. We write an e2e test in this quarter, in this release, and we add it to the e2e test suite. We make sure that the test is not flaky and not slow and so on and so forth, that the test has been consistent on our CI. In the next release, that test will be slated for promotion to conformance. So what is promoted to the conformance suite is guaranteed to work well. That's the idea for us. So at this point, we're not 100% covered, right? I mean, it's impossible. We have only about 200 tests, and we want to improve the coverage of the core features of Kubernetes. So there are several approaches we are taking. One approach I mention here is behavior-driven conformance testing. What we are trying to do in this case is look at the APIs and the fields, see how they can be used in user scenarios, and define the behaviors. And then we go ahead and write new e2e tests. The other approach is a tool called APISnoop, which basically analyzes the audit logs. Every time you run some tests or a workload, whatever you run on the cluster, it sees which API endpoints you're hitting. So if you are not hitting some API endpoints during the conformance tests that are run on the cluster, we know which of the core APIs still need to be hit. That means we need to write more tests in those areas. So these are the two approaches we are following. And then there is also a validation suites proposal that's in the works right now. This is aimed not at the conformance program itself, but at things that are outside of the conformance program.
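The audit-log approach just described comes down to a set difference: collect the endpoints actually hit during a conformance run and subtract them from the list of stable core endpoints. This is a toy illustration of the idea behind APISnoop, not its actual implementation, and the endpoint names are made up.

```python
def untested_endpoints(core_endpoints, audit_events):
    """Return core API endpoints that no audit event touched.

    Real Kubernetes audit events carry fields like requestURI and verb;
    here we simplify to a single "endpoint" field for illustration.
    """
    hit = {event["endpoint"] for event in audit_events}
    return sorted(set(core_endpoints) - hit)

# Hypothetical core-endpoint list and audit trail from a conformance run.
core = ["createPod", "deletePod", "listPods", "patchPod"]
events = [{"endpoint": "createPod"}, {"endpoint": "listPods"}]
print(untested_endpoints(core, events))  # ['deletePod', 'patchPod']
```

The tool's output, in this framing, is exactly the to-do list for new conformance tests: every stable endpoint that the current suite never exercises.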
For example, you have functionality like CSI drivers and storage. That's not part of the core features. So we need some kind of a suite that can test only that particular module, or networking with CNI, and so on. That's the idea there. Again, this is all very public information. There is an outstanding KEP, so you can actually participate and discuss on that KEP the idea behind how we are going to do behavioral testing. Similarly, with APISnoop you can see all the APIs that are exercised right now on the CI and see the coverage there. So about the conformance tests, the e2e tests themselves: like I mentioned briefly here and there, the merging process of any e2e test is the responsibility of a SIG. For example, if it's a networking test, SIG Network; if it's a node test, SIG Node. And eventually, we analyze a test and say, oh, this can be promoted to conformance. It takes two releases. The main criteria that a conformance test has to meet are: it should test a GA feature, not an alpha or beta feature; it should work on all providers and on all architectures; and it cannot rely on network access or special binaries. What we are expecting is that any Kubernetes vendor should not have to do additional steps to run it. And it has to be stable and consistent; that's why we have two release cycles associated with it. So there are two PRs. One is to add the e2e test, and the second PR is the promotion PR, which adds that test to the conformance test suite. So essentially, that's the criteria for the conformance test suite. Eventually, if you are promoting an e2e test, you should submit a PR to SIG Architecture. There are areas that we want to cover in the conformance test suite. Those are the areas that need to be covered, especially node and pod. We are covering pod right now.
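The promotion criteria just listed can be written down as a simple checklist predicate. This is a hypothetical sketch; the real process is a human review by SIG Architecture, not an automated gate.

```python
from dataclasses import dataclass

@dataclass
class TestCandidate:
    feature_stage: str         # "alpha", "beta", or "ga"
    all_providers: bool        # passes on every provider and architecture
    needs_special_setup: bool  # relies on network access or special binaries
    stable_releases: int       # releases the test has run in CI without flaking

def eligible_for_conformance(t: TestCandidate) -> bool:
    """Checklist from the talk: GA only, portable, no extra setup, stable for two releases."""
    return (t.feature_stage == "ga"
            and t.all_providers
            and not t.needs_special_setup
            and t.stable_releases >= 2)

print(eligible_for_conformance(TestCandidate("ga", True, False, 2)))    # True
print(eligible_for_conformance(TestCandidate("beta", True, False, 2)))  # False: not GA
```

The two-release stability requirement is why promotion always spans two cycles: one release to land the e2e test, one to prove it out in CI before the promotion PR.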
Volumes, there are tests already existing, but more tests need to be written. So we are monitoring what percentage of coverage we have, and there is still a lot of work to be done. And while promoting a conformance test, you also have to document the test properly. Each test will have a documentation header on top of the test, which lists the release in which the test was added and when the test was modified, plus a very high-level description of the test, which helps us to identify what area is covered and what the test actually does. So vendors who are running the conformance tests don't have to know the technical details of the tests. They don't have to go through the code, but they will know exactly what each test is doing. It also helps us analyze, via the metadata that we are adding at the top of each test, what the coverage is, and to debug if something goes wrong and fails in a vendor environment. So the documentation looks something like this: you'll have a test description, and the blue part is the link to the actual source code of the test. From the community point of view, for the Kubernetes Conformance Program we have a Kubernetes conformance working group. It's not a SIG, it's a working group. And there's a mailing list; if you have any questions about the conformance program, you can use that. All our development is done actively in public. So if you have any interest in joining the development side of this part of Kubernetes, there are office hours that run bi-weekly on Tuesday at noon PST, Pacific Standard Time. You're more than welcome to join. Oops, that's Google Docs. The link I posted here should work. Or you can participate in the Slack channel, #k8s-conformance. There is one more thing I wanted to show you: the guidelines for conformance test development. I think I'll have to switch to my, OK. Yeah, all the links are posted on the slides, so you can use them.
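Because those documentation headers follow a fixed shape, they can be machine-read, which is how the per-release conformance docs get generated. Below is an illustrative sketch that extracts fields from a header comment; the Release/Testname/Description field names follow the convention used in Kubernetes conformance headers, but the sample header and the parsing are assumptions, not the project's actual tooling.

```python
import re

# Hypothetical header in the style that sits above a conformance test in Go source.
HEADER = """
/*
  Release: v1.13
  Testname: Pod, default command
  Description: A Pod without a command must run the image's default entrypoint.
*/
"""

def parse_conformance_header(comment: str) -> dict:
    """Extract the Release/Testname/Description fields from a doc header comment.

    Simplification: each field is assumed to fit on a single line.
    """
    fields = {}
    for key in ("Release", "Testname", "Description"):
        m = re.search(rf"{key}:\s*(.+)", comment)
        if m:
            fields[key.lower()] = m.group(1).strip()
    return fields

print(parse_conformance_header(HEADER))
```

That metadata is what lets a vendor read what a failing test was checking without opening the test's source code.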
And I briefly want to mention the work that's being done right now on conformance. We have a GitHub project board where we track all the conformance-related issues. If you see here, all the issues come through triage, and we have umbrella issues on tests we want to write. Then, when the tests are written, they go through the standard progression from the backlog. If somebody reviews a test, it goes into in-progress. Once the test is reviewed by the SIG as well as by the conformance tech leads, eventually the conformance program tech leads will approve it, and it will become part of the conformance test suite. You can follow this; it is also a publicly available project board, and you can see what tests are going to come in the next release. If you are interested in running your tests on 1.16, this board will probably give you some indication of what's coming in the future. With that, I also want to thank the people who have contributed to conformance. Some of the slides I have used are from Aaron Crickenberger, from the last KubeCon. That's pretty much it from me. So, any questions? No questions? I covered a lot. Yeah. These tests, we've got a use case, not for conformance, but to do canary testing of our clusters. This seems like it tests everything; would that be viable? You mean the e2e tests, or all the e2e tests? We want to know if there's anything wrong in the cluster before our clients have problems, so we want to run canary tests on the clusters themselves. Sure, I don't see why not. Actually, we were also thinking about seeing if we can run the upcoming conformance tests up front on clusters to see if there could be any problems. There was a discussion about that in the previous working group meeting. Any other questions? All right, thanks for joining.