We are going to start with what the conformance test is, why we need it, and what Windows operational readiness means. Then we are going to look at Kubernetes Enhancement Proposal 2578, which defines Windows operational readiness in detail. There is a corresponding implementation of this KEP, called Windows Operational Readiness as well, and the tool aims to provide a convenient way to run the Windows operational readiness tests. We are going to walk through it. This tool can also be run as a Sonobuoy plugin, and Amin will tell us more about how to run it that way. The project has already been integrated with other upstream projects, so we will talk a little bit more about its usage. Finally, this project is still a work in progress, so we want to bring more attention from the community and we would like to see people start collaborating on it. The goal of this talk is to introduce the idea of Windows operational readiness, attract more people who are willing to work upstream, and help anyone who is interested in this project start contributing.

So, what is the conformance test? Imagine you are a Kubernetes vendor, and you ship a product that helps customers create Kubernetes 1.25 clusters. How do you verify that a cluster created by your product supports all the required APIs in Kubernetes 1.25? Does the pod network connection work? Can users mount volumes into their pods? And if you imagine yourself as a Kubernetes developer, how do you know that the feature you are about to add to 1.25 works well with all the existing features? If you are troubleshooting a cluster, how do you systematically figure out what works and what doesn't?
The answer to all these questions is: if we have a suite of tests covering the Kubernetes cluster — the API machinery, apps, nodes, networking, storage, everything you expect — and your cluster can pass those tests, then you can confidently say the cluster is a valid Kubernetes cluster. This suite of tests is called the conformance test, defined by CNCF. If you explore the Kubernetes repo under the test/conformance folder, you will find a YAML file called conformance.yaml, and there are about 350 tests listed there. And when you explore the Kubernetes e2e test code, you will find tests annotated with a conformance tag, which tells you the test belongs to the conformance suite. Each test is written to prove that the cluster's behavior is as expected. However, there is no Windows conformance test right now. So today, with KEP-2578, we want to introduce a way to test your Windows cluster. It is still important to verify its behavior, even though this is not an official CNCF conformance test yet.

But here comes the first question: what is a Windows Kubernetes cluster? It means the cluster supports running containers on the Windows OS. Say we have a cluster: you can use Linux as the control plane node OS and Windows as the OS for some or all of your worker nodes. If we take a look at the history of Windows testing, the first KEP was created in 2018, and it was called Windows Node Support. Then there were discussions with the folks from SIG Windows about what they considered operational. There are nine categories in this KEP. The first four categories are required for a functional cluster — for example, if you cannot create a ClusterIP-type Service in your cluster, it is really hard to consider it operational. The remaining categories are extended, because they rely on functionality that has additional requirements, such as specific Windows Server editions and configurations.
Take group Managed Service Accounts as an example: you need to set up your Active Directory server first, then create a domain, create a group Managed Service Account, join the node to the domain, and set up the security context for the pod to access resources in the domain. After all this setup, you can run the test to see that you did it correctly. But this kind of setup is not required for all clusters, so it is in an extended category in this KEP, not a core one. In the KEP, we can find the functionalities we want to test for each category. The next thing we want to do is match each functionality we want to test with its upstream implementation.

The Linux conformance suite is official, while Windows operational readiness is defined by KEP-2578, which is not official yet. The implementation of all the tests lives alongside the e2e tests in the upstream Kubernetes repo, and the official way to run the Linux conformance test is via Sonobuoy. The input of this project is a YAML file of test cases. We imported all the functionalities mentioned in the KEP into this YAML file, so you can simply comment out the ones you are not interested in and leave what you want to test, to customize what you run. The tool first parses this YAML file and generates the correct command to trigger the upstream e2e test binary. The tool can automatically detect what version of Kubernetes you are using and download the pre-compiled e2e test binary from upstream. But you can also set an environment variable called KUBERNETES_HASH pointing to any upstream commit you want to use. By doing this, the tool will first pull down the upstream Kubernetes repo, check out your commit, and build the e2e test binary from there, so you can test your cluster against any version you want. The report is generated in JUnit format. The YAML file lists all the functionalities we want to test, and each test case belongs to one of the nine categories I mentioned before.
The description in each test case tells us what functionality we want to test. The focus and skip fields are the same as Ginkgo's focus and skip. This is where we match the functionality with its upstream implementation. Take the first test case as an example: if we want to test the ability to access a Windows container by pod IP, we run the upstream Ginkgo test "should have stable networking for Linux and Windows pods". Usually we can find an existing implementation of the functionality, but sometimes the test is missing upstream. If that happens, we add a new e2e test case to the upstream Kubernetes repo directly. At the top of this file there is a Kubernetes version field, which tells us which Kubernetes version we are testing. Say you want to run the Active Directory test, and it has been supported by Kubernetes since 1.18, but your cluster is on 1.17 — it doesn't really make sense to run that test against the cluster, right, because it's not supported yet.

Instead of running Ginkgo directly, you can define one or more categories you want to run. For example, if we set the categories to Core Network and Extended Active Directory, all the test cases from those two categories will be triggered. Under the hood, the e2e test binary is called, and the Windows-related parameters are set by default, so it knows you are running on a Windows cluster. Here is the part of the code base where we generate the command that triggers the e2e test binary, and this is what the result looks like. If you define the report directory or the artifacts environment variable, the results will be stored in those folders, and they can be parsed by the dashboard. The tool is also designed to run as a Sonobuoy plugin, and I will hand over to Amin from here. Thank you.
So yes, starting with the Sonobuoy plugin: you can run Windows Operational Readiness inside the cluster, instead of running it as outside software. For those who don't know Sonobuoy, it is the standard tool for running and submitting the conformance tests for Linux, so it totally makes sense to bring it into our project and make it the default way to run Windows operational readiness as well. Besides running the tests in the cluster, you get tooling for parsing and extracting the results and giving you a summary of the run you just did in your cluster, as we'll see in the next slides. Running operational readiness via Sonobuoy is super simple. We have a Sonobuoy plugin YAML file where you can define the specification, following the spec for a Sonobuoy plugin. You can configure it the same way you would when running it from the CLI: you pass the e2e binary path, you pass the categories you want, such as Core Network. If you don't want to run a particular category, you just remove it from the list.

One thing that was super cool, which Mark helped us do, is publishing the image to the upstream GCR bucket. Every time a change is merged to the main branch, a new version of the latest OCI image of the project is published. You can use it with Sonobuoy, in your own pipeline, or anywhere you want to run the project's image. The Dockerfile is in the root of the repo as well, so it is super simple to use and to get started. We have a few make targets in the project that will help you and streamline the experience. You can run the make target for the Sonobuoy plugin: this starts Sonobuoy and, with the --wait flag, it waits for the job to run the conformance plugin, extract the results, and dump them inside Sonobuoy. Then you can run the make target for the Sonobuoy results.
This will read the results and output them for you. In the example here, one of the network policy tests failed: it prints out which tests failed, in the screenshot at the bottom, and gives you some percentages and information about the state of your cluster at the time of the run. It is super simple, super easy to run, and we have a few Makefile targets to help you with that.

We have also started to explore a few uses of this project already; some cool stuff is going on. The first one, on the developer side: you can use sig-windows-dev-tools to bootstrap your own cluster. It is super easy to bootstrap; you don't need a cloud, you just run it locally. You need a good machine to run it, but that's fine. You just need VirtualBox and Vagrant; everything is open source, everything is fine for personal use. In the project you just run make, and then you can run the Windows Operational Readiness tool against your own local Windows cluster.

Advancing a little bit in the use cases, we have integration with Prow jobs. When you comment on a pull request in the project, you can trigger the operational readiness CAPZ test job with a /test comment. This will bring up a CAPZ cluster for you with Windows 2019, run all the tests, and output the results for you. So we integrated Prow jobs with our project: we can run our project as a custom Prow job, and it will bring back the results for the changes you are developing at the moment, right there in Prow. We have a few CAPZ folks here; they won't let me lie.

A little bit about usage on the CAPZ side. We at VMware use Cluster API, and CAPZ allows you to bring up a new workload cluster, hybrid or Windows as well. The way it works, you bring up a bootstrap or management cluster, and this management cluster will create new workload clusters for you.
So you can have hybrid OS clusters, or Linux-only or Windows-only, and this is a production-ready way to run your cluster in the cloud or on-prem. There are good sessions around KubeCon this year about runtime extensions, ClusterClass, and things like that, and one of the features we started to explore for this project is the use of ClusterClass and runtime extensions. You can have hooks inside the lifecycle of your cluster, and in the middle of one of the phases of the cluster lifecycle you can execute a command or perform some operation through these webhooks that you created. In the exp folder of the repo, you can explore one of the hooks we developed: it calls Sonobuoy as soon as your control plane is ready. So your control plane comes up and it automatically runs the operational tests for you. This is super cool because you can validate your new clusters as soon as they are created, bootstrapping these tests at each creation.

To wrap up, I have some special shout-outs. First, Jay Vyas from VMware, from my team, for all the mentoring and leadership on all these projects and on SIG Windows. Mark Rossetti and James Sturtevant, here, for the mentoring and guidance on Windows and Kubernetes, helping us land all these parts of the project and understand how Windows works on Kubernetes. Aravind, Claudio, Luther, and Douglas for working on new Windows features and always being ready to help the community. There are more folks inside SIG Windows, and all of them helped create the KEP, review it, and are authors of the KEP as well. Adelina and Benjamin Elder for the early discussions on this topic; they started the conversations. It was Adelina who started the thread, and from there everything started to move. And finally, Antonio Ojea for the networking tests, for reviewing and moving that forward.
Also, how to contribute to the project: if you are interested in helping move Windows forward and in testing Windows, that's super cool — there are a lot of challenges and a lot of work to do in this area. Right now we have around 40% of the tests implemented from the list we defined in the KEP. I opened one ticket for each of the missing tests, so if you are interested, go ahead and check the project: kubernetes-sigs/windows-operational-readiness. Pick an area you are interested in, get sig-windows-dev-tools to bootstrap your first cluster, or go to Vagrant, AWS, or vSphere, bring up your cluster there, and test your stuff. We are here to help anyone new onboard and start to code with us on the project. Join us on SIG Windows, and on the Kubernetes Slack as well. Now we have a Q&A session.

Audience question: Thanks. I noticed there was a slide where there were tests for Windows Server 2019, and I was wondering if there are plans for Windows Server 2022.

Say that again, sorry?

Audience question: I noticed there was a slide on Windows Server 2019 testing, and I was wondering if there was going to be testing for Windows Server 2022.

Yeah, I think we have something on the roadmap — James and Mark are going to talk next, "What's Next in SIG Windows", in two hours. Basically, what we have tested here is 2019, but yes, I think 2022 is on the roadmap for the project as well. So if you find any test that is worth adding here, let us know. Thanks, folks.