specifically about the day-two aspects and how to navigate, read and write them. And before we do that, a quick introduction about us. My name is Priyanka Saggu. I work at SUSE as a Kubernetes Integration Engineer and I've been part of the Kubernetes project for a while, in SIG Release and SIG ContribEx. Hello, I'm Jason. I'm an ex-consulting IT manager trying to pivot into the world of DevOps. I've worked on the Kubernetes... I assisted with the Kubernetes 1.26 release as part of SIG Release and I'm excited to be here. So, with that aside, I just want to give a quick intro on what this talk is about. The slides are linked in our footer. We have divided our talk into three parts. We'll be discussing what Prow jobs are, what the different types are, and we'll be going into the anatomy of Prow jobs. We'll also be looking at what the various user interfaces are that we have to read and go through Prow jobs, and where the config actually sits. And we'll end with looking at a few examples of actually running Kubernetes Prow jobs, a few examples from there, and also we'll see how to replicate them locally. With that, let's get started. So, what is Prow? Prow is a Kubernetes CI/CD system built by the Kubernetes project itself and built for the Kubernetes project. So, most of the automation and grunt work that the Kubernetes project does, that's done through Prow. And for the sake of this talk, we won't be going into the administration side of Prow itself, how to deploy Prow; that's not the intention of this talk. It's more day two: we assume Prow is already working there, and we look at how to consume what's already working. But there are already a lot of great talks out there from the community on Prow itself. Still, we'll be looking at a bit of the workflow, how Prow comes into the picture for Kubernetes contributors, people like us who interact with the Kubernetes project from GitHub. So, on the left side, we have GitHub, in terms of PRs or any interface where we see our repos.
And on the right-hand side, we have the Prow infrastructure. Now, how do we understand the Prow infrastructure? We can read it like traditional Kubernetes architecture: we have master nodes and we have worker nodes. Similarly, in Prow, we have something called the service cluster, which is the control plane of Prow itself, and all the workload that needs to be scheduled, where our actual Prow jobs run, all that happens on the other side, in the build clusters. So, what happens is we start from GitHub. Anybody, for example, creates a PR, or they comment on issues or PRs, something like /woof; that creates events on the GitHub side, and Prow is listening for those events. Prow has a component called hook, which is listening for those webhooks, and then it reacts to them. It reacts to them by creating a ProwJob CRD, and that ProwJob CRD in turn will create a pod for running the Prow job itself, and it will create that pod in one of those build clusters. By default, it creates it in the build cluster which is set as default, in our case build cluster A, but the Prow job could also say, oh, we want to be scheduled on a certain B or C cluster, and it will be scheduled accordingly. And once the pod is scheduled on a build cluster, it will do its job, it will run its task, and when it finishes, it will send back all the logs, everything that happened inside that pod's container in terms of the task. It will collect all those logs and any useful artifacts or metadata, send them back to the Prow service cluster, and the same will be updated back on the GitHub interface. So, that's how Prow comes into the picture for us. And we just used the words Prow job there, so what are Prow jobs? If Prow is a CI/CD system, Prow jobs are the Kubernetes-native CI jobs managed by that CI/CD system, and we use Prow jobs to automate testing, building and deploying code changes in Kubernetes.
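To make the hook-creates-a-ProwJob step concrete, here is a rough sketch of what such a ProwJob object can look like once hook has created it. The field names follow the prow.k8s.io/v1 API, but the job name, repo, image, and command below are made up for illustration; they are not a real upstream job:

```yaml
apiVersion: prow.k8s.io/v1
kind: ProwJob
metadata:
  name: example-prowjob          # normally a generated unique ID; hypothetical here
spec:
  type: presubmit                # or postsubmit / periodic
  agent: kubernetes              # run as a pod in a build cluster
  cluster: default               # which build cluster to schedule on
  job: pull-example-verify       # hypothetical job name
  refs:                          # what the pod utilities will check out
    org: example-org
    repo: example-repo
    base_ref: main
  pod_spec:                      # a regular Kubernetes PodSpec
    containers:
    - image: golang:1.20
      command: ["make", "verify"]
status:
  state: triggered               # later: pending, then success / failure
```

The Prow controller watches these objects, turns the pod_spec into an actual pod in the chosen build cluster, and writes the result back into status, which is what ends up reported on GitHub.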
So, anytime there are code changes happening in Kubernetes repos, we would be testing them through Prow jobs. Anytime you create a PR on any of the Kubernetes repos, there'll be Prow jobs coming into the picture, checking out that PR code, running a few tests against it and giving back results. We define our Prow jobs using YAML, so very Kubernetes-friendly, and all those Prow jobs are triggered by GitHub events like pushes, PRs and comments. We can also schedule our Prow jobs like traditional cron jobs; we can run them every one hour or six hours or ten hours and Prow will do that for us. So, we have three types of Prow jobs: presubmits, postsubmits and periodics. Presubmits are the Prow jobs that run against any code that's coming from a PR. So, whenever somebody is creating a PR and adding new code changes to the Kubernetes project repos, there will be presubmits coming into the picture. Those presubmits will check out the code from your PR and run tests against it, and only if the presubmits pass will we go ahead and merge the PR. On the Kubernetes project, there is no manual merge; it all happens through Prow. After presubmits, we have something called postsubmits, which come into the picture when code is already merged and something needs to be done. For example, we have just merged some Dockerfile changes and now a new image needs to be created with these new Dockerfile changes. We would use something like a postsubmit to create a new image and image tag and push it somewhere. And then we have our very friendly periodics, which are for triggering our jobs on a periodic basis. So, similar to a cron job, we can give it an interval or give it a cron expression. Yeah. So, let's get into the anatomy of these three kinds of jobs that we just saw. Let's start with periodics.
And every job starts with a job type identifier that identifies what kind of job it is. This one is a periodic job. We have the name for the job. Then there's this field called decorate, which, let's take a slight diversion into something called Pod Utilities. So, this is a Prow job, and that's a wrapper, that's Pod Utilities, around the Prow job. Pod Utilities gives stuff to a job. It enables source code cloning: if a job needs the commit or the stuff from the PR that needs to be pulled, it's Pod Utilities that enables it and gives it to the job. It tracks metadata and logs of the job. And if those have to be uploaded somewhere, then as part of artifacts, it pushes them there. So, let's come back. Since this is a periodic job, obviously we have an interval. This one runs every hour; we could set it to whatever we want. extra_refs contains the data needed to clone the repo. While the other job types would just infer the repo on their own from the triggering event, a periodic job needs us to tell it which repo to clone. So, here's the org and the repo, and the base branch with base_ref. The spec is a valid Kubernetes pod spec to generate the container. Let's move on to postsubmits. Like we said, this identifies the job as a postsubmit job. Here's our org and repo, as in which one it would run on. The name, decorate, and spec are the same as for a periodic. max_concurrency: we can run up to 10 jobs, or whatever we specify here; max_concurrency helps us cap the number of runs of this job at once. branches is a regular expression; we can set it to choose what branch we want to run this job against. And if we want the opposite, we don't want it to run on some branches, then skip_branches is what we use. Like in this case, the regex tells it: don't run on any release-whatever branches. Let's move on to presubmits. Here's our job type identifier. Here's our org and repo that we need to check out in the container. Here's the job name, our decorate, and the spec.
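As a rough illustration of the periodic and postsubmit fields just described, a config sketch might look like the following. This is not a real upstream job; the job names, org/repo, regexes, and images are placeholders:

```yaml
periodics:
- name: example-periodic-job
  interval: 1h                   # run every hour
  decorate: true                 # wrap the job with Pod Utilities
  extra_refs:                    # periodics must say what to clone
  - org: example-org
    repo: example-repo
    base_ref: master
  spec:                          # a plain Kubernetes pod spec
    containers:
    - image: golang:1.20
      command: ["make", "test"]

postsubmits:
  example-org/example-repo:      # keyed by the repo the job runs against
  - name: example-postsubmit-job
    decorate: true
    max_concurrency: 10          # at most 10 runs at once
    skip_branches:               # regexes; don't run on release-* branches
    - ^release-.*$
    spec:
      containers:
      - image: golang:1.20
        command: ["make", "images"]
```

The postsubmit infers the org, repo, and commit from the merge event, which is why it has no extra_refs of its own.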
always_run: true means we'll run this presubmit job against every PR that is created for said repo. run_if_changed will run the job only if a changed file matches the given path regex, like if it needs to operate on a specific path. In this case, we're looking at whether any file in a particular folder has changed, expressed as a regular expression, and only then would we run it. skip_report: setting that to true controls whether to report a status on GitHub or not. context is the GitHub status context; it defaults to the job name. max_concurrency, branches and skip_branches we've just seen. Which brings us to something called presets. If we have lots of jobs with common data that is shared across all of them, we can lump it all into a separate YAML file and call it a preset. We identify it as a preset, we give it a name, and then we give it all the data that the other jobs can call upon. We're using an example here. There's a presubmit job, for example. There's a label that says preset-foo-bar, which is in the previous slide. It's over here with the label. So the preset is called here, and then all the data from there flows into this job's container. There are also things called job environment variables, which provide job-specific environment variables. They're injected into all containers within a Prow job's pod. We'll look at a few here: JOB_NAME, JOB_TYPE, JOB_ID. They are available across all containers. Some are not, like REPO_NAME: since for a periodic we have to give it the repo ourselves, it's not automatically available. For a more complete list, you can check the links in the footer; you can download the slides and browse through them at your leisure. And now that we know what Prow jobs are, let's visualize all of them. We do that by using a few tools. The first one is TestGrid, which lives at testgrid.k8s.io. This is what it looks like: there's a row of boxes and tiles.
These are all called dashboard groups; opening them up leads us on to dashboard tabs. Clicking a dashboard group like this leads to something like this. The tabs on top and the tabs down here are the same, but the ones on the bottom give you extra information, like what the health is, which tests within it are passing or are flaky or failing. So let's click this thing. You see right now there's only a single job in there, something called pull-community-verify. That's the CI job, or what we call a Prow job. We click it and it shows a tile-like grid of the performance of this job over time. That's the name of the job. And over there, you can find the URL that, if you follow it through, will give you the config, if you're interested in looking at what this job is made of. In this case, it's a presubmit job. That's a Golang container. It checks out kubernetes/community and runs make verify. So, getting back: if we click on a tile in this grid, it will lead us to another view, which basically links us to another tool called Spyglass, which gives us details on job runs. We find it at prow.k8s.io. So let's explore this a little bit. We're talking about all this in terms of the Prow tooling for upstream Kubernetes, so if you install Prow, this should be available to you. I've got to run a bit; I'll take questions later. But you would want them together if you want to visualize your jobs. Yeah. So, sorry. That's our ID for the job run. That's the status: in this case we passed, but it could also show pending, failed or aborted. Coming down the page, we have the build log, which you could expand out to give you the details. And then there are various events and info about the job; you could find out, for example, the volumes that were mounted and what the applied pod spec was. Now let's go look at the tabs on the top.
This is job history, the history of the job across all its runs; it looks a little like this, like this job had a lot of runs. Then there's the Prow job YAML, which is something like we saw before: it's basically the applied Prow job provided here in YAML format, and it looks like this. Then we have links to the PR that triggered the job, in the case of presubmits, like here. And then PR history would give us the history of the job across different commits and revisions of the PR. Like, there's a commit on top and then there's a run below. Artifacts would give us access to the artifacts, if the job created any; we would have them here. There are the build logs and the clone logs, etc. for this job. And finally, the TestGrid link would bring us back to where we were. So Spyglass and TestGrid are kind of the yin and yang in that sense. So now that we've looked at how to visualize jobs, let's look at the source. Where do they live and how do we find them? They are all available in the test-infra repo at git.k8s.io, over here. So let's look at the config folder. As an aside, the prow folder here has the Prow infrastructure config of the running Prow cluster. That aside, we are more interested in the jobs folder and the testgrids folder. Let's dive into the jobs folder. Click in there and we see the names of all the GitHub organizations in there. Let's go into kubernetes, and we see the names of all the repos of the kubernetes org. We went into community, and this file that's over here holds the existing Prow job configs for the kubernetes/community repo, and it's also the home for new Prow jobs as they get added. So currently it looks a little like this. So let's go back up and dive into the testgrids folder. These are all the folders with the common TestGrid tabs and dashboards, over here.
Let's go into, let's say, the kubernetes folder, and again the names of the repos of the kubernetes org are listed here. Let's go into ContribEx as an example, and all the dashboard tabs are defined here. So the names you see here are what is visible over there on TestGrid. Now, if you're looking for something discrete and defined, these things help us find it, as in when we know what we're looking for. But if we don't, then we can use a tool called Hound that lets us do code search. It'll also let you look for jobs. It's available at cs.k8s.io. You just search for any regular expression; for example, here we're just looking for a string called community-verify, and we see all the places where this thing has been used. Having said that, now we've looked at the anatomy and we've looked at presets. Let's take a little dive into examples. So we just looked at the anatomy of various Prow jobs; we know there are periodics, presubmits and postsubmits. Now we'll look at a few examples from the Kubernetes project itself. We have three examples here, and we'll be looking at them one by one now. So let's look at the first example. We'll be taking a very simple job from the Kubernetes release team. What this job does is help us keep track of our enhancement tracking board. All the Kubernetes enhancement proposals that we receive as part of a Kubernetes release cycle, we need to track them on a board, and we use a GitHub project (beta) board for that. So we have a job that does this for us. On the left-hand side we have our kubernetes/enhancements repository, where people who want to add new enhancements or features or anything to the Kubernetes project open issues, and we call them KEP issues. So anything that has a lead-opted-in label during a release cycle, we collect all of those issues from k/enhancements and we dump them on the right-hand side, which is our tracking board.
So that is the sync that we do, and we have created a very simple bash script for that, and we use a Prow job to run this sync for us every six hours. That Prow job appears on the screen here, on the SIG Release release-team periodics tab, and the name of the Prow job is periodic-sync-enhancements-github-project-127. So this is the config of that Prow job. Like we discussed, periodic tells us it's a periodic Prow job, name is the name of the Prow job, and interval says it will be running every six hours. In the very beginning slides we saw that we can also tell Prow where to schedule our Prow job. This is how we tell it: we give it a cluster field and we give it the name of the build cluster. decorate: true adds Pod Utilities, so it tells Prow to check out the repo into the pod's container and grab all the logs and artifacts and make them available to us later, like we saw in Spyglass. Annotations: this is how we tell our Prow job it has to show up on the TestGrid side. So on line number seven, with testgrid-create-test-group: true, we are telling Prow to create and show our job with the same name as the name defined on the second line. It says put our job under the SIG Release periodics TestGrid tab, and also add a one-liner description. And extra_refs is telling it: whenever you are creating a pod for this Prow job, inside the pod's container check out this repo, check out kubernetes/release at the master branch. And then this is the spec, the specification for the container. What it says is: create a pod using that image, in this case a releng CI image tagged latest, and inside that container run this command, which is exactly that bash script that we looked at, and also make these environment variables available.
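Putting those pieces together, a hedged reconstruction of the shape of this sync job's config could look like the following. The real config lives in the test-infra repo; the cluster name, dashboard id, image, and script path below are approximations or outright placeholders, not the actual values:

```yaml
periodics:
- name: periodic-sync-enhancements     # the real job name is longer; shortened here
  interval: 6h
  cluster: k8s-infra-prow-build        # assumed build-cluster name
  decorate: true                       # enable Pod Utilities
  annotations:
    testgrid-create-test-group: "true"
    testgrid-dashboards: sig-release-releng-periodics   # assumed dashboard id
    testgrid-tab-name: sync-enhancements
    description: Syncs lead-opted-in KEP issues to the tracking board.
  extra_refs:
  - org: kubernetes
    repo: release
    base_ref: master
  spec:
    containers:
    - image: gcr.io/k8s-staging-releng/releng-ci:latest  # assumed image
      command: ["./hack/sync-enhancements.sh"]           # hypothetical script path
      env:
      - name: GITHUB_TOKEN_PATH        # hypothetical; the job needs GitHub credentials
        value: /etc/github/token
```

The shape is the point here: a periodic with an interval, a target cluster, TestGrid annotations, the repo to clone, and a pod spec that just runs the sync script.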
So how it looks is: we have a Kubernetes pod running in our build cluster, and inside that pod a container is created using the container image that we provided in the spec, k/release is checked out at master, all the environment variables that we are giving are also made available to the job, and then we just run that script inside. When that script runs, that means one instance of the job has run, and you would be able to see that instance, all the logs, everything that has happened inside the pod, here on Spyglass. So that was example one, the simple one. With that, let's go to the next one, and here we'll try to do something more useful. So we have something called Kubernetes version markers in our k8s-release-dev space. What are Kubernetes version markers? These are kind of text files which act as a public API for us. Kubernetes keeps getting PRs and code changes every single day and we have to test those code changes. So we keep building artifacts against all those code changes at intervals, and we store all those artifacts in GCS buckets. Something like version markers helps us say: okay, if we have to test anything against those new code changes, how do we grab those build artifacts from there? So this is a screenshot from a bucket called k8s-release-dev where we are storing our Kubernetes version release markers, and we have multiple files here, like a latest-1.txt; this file points to a Kubernetes version marker. So we get a version marker like that in this file, and corresponding to that version marker we'll also have a folder inside the same bucket, and that folder will contain all the Kubernetes binaries, all the binaries that are built as part of a Kubernetes release. So if we have to use these binaries to test something, we can grab them from here. This is a public bucket. And how do we read that Kubernetes version marker? The first part, on the left-hand side, is the base release tag.
So in this case it's saying v1.26.3; that is the tag. On the right-hand side we have the latest commit hash on the release branch corresponding to the tag, so release-1.26, and the middle number, 41, is saying there are 41 commits between when the tag was released and when this version marker was created. So now we have a little bit of understanding about Kubernetes version markers. This is the job that creates those version markers for us; specifically, the one shown runs for us every one hour. So this is a periodic job. The name of the job is ci-kubernetes-build-1.26. It's scheduled on the k8s-infra-prow-build cluster, runs at a one-hour interval, and decorate: true says Pod Utilities are enabled. The extra_refs part is saying: you have to clone Kubernetes at the release-1.26 branch. In the annotations we see the TestGrid tab name and the dashboard, and, any time this job fails, somebody needs to be alerted, somebody needs to do something about it, so Prow will send an email to those members. In those labels we are setting presets. So we are saying: whatever comes with preset-dind-enabled goes here, preset-service-account goes here. And then we have the spec area: the image name, and we are also setting resource limits and requests, and giving it privileged access. And finally, what we are doing is running a command: we have something called krel. krel is for Kubernetes release; it's a tool that the upstream Kubernetes project uses to actually build Kubernetes releases, or every step that is required to build Kubernetes releases. So we are using that here, doing a fast build, and whatever build artifacts are created, we dump them in the k8s-release-dev bucket; all the images that are created, we push them to the gcr.io CI-images registry, and we put them in a folder named something like that version, and we put that value in a version marker file like the one we just saw.
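The version-marker format described above (base tag, commit count, short hash, in the style produced by git describe) can be pulled apart with plain shell parameter expansion. The marker value below is made up for illustration; in practice you would first fetch the marker file from the public bucket, for example with gsutil cat or an HTTPS GET against the bucket's URL:

```shell
#!/usr/bin/env sh
# Parse a Kubernetes version marker of the git-describe form:
#   <base tag>-<commits since tag>-g<short commit hash>
# This marker value is a made-up example, not a real published marker.
marker="v1.26.3-41-ga1b2c3d4"

base_tag=${marker%%-*}     # strip everything after the first dash -> "v1.26.3"
rest=${marker#*-}          # drop the tag part               -> "41-ga1b2c3d4"
commits=${rest%%-*}        # the commit count                -> "41"
hash=${rest#*-g}           # the short hash after "-g"       -> "a1b2c3d4"

echo "tag=$base_tag commits=$commits hash=$hash"
# prints: tag=v1.26.3 commits=41 hash=a1b2c3d4
```

The same split tells you which folder in the bucket to look in for the matching binaries: the full marker string names the folder, and the base tag tells you which release branch it was cut from.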
So how it looks is: we have a pod, and a container is created using the image; Kubernetes is checked out at the 1.26 branch; we have set preset-service-account to true, so we'll have that service account, which gives us those two environment variables and gives us that volume mount; and we had another one set, preset-dind-enabled, that gives us one more environment variable and an empty volume. And then, once our test environment is set up, all we do is run the krel command. Once we do that, this is how a run of that particular job looks. It's running that krel command here; it's starting Docker, because we had that preset-dind-enabled; it's also authenticating to GCS, trying to log into the project to get some credentials to put something in the GCS bucket; and eventually it runs that krel command and creates the artifacts. The job appears here under the tab sig-release-1.26-blocking, and that's the name of the job, build-1.26. So that's our second example, but the version marker leads us to our final example for the talk, which is very important. So let's now dive into Kubernetes end-to-end testing. We will be looking into one of the tests that we run in the upstream Kubernetes space. This is a release-blocking test, so if this fails, we have to do something about it. This is a periodic job again; the name is ci-kubernetes-gce-conformance, and this job runs some GCE conformance tests. We are scheduling it on that same cluster, the interval is 3 hours, and we have set some presets there. This is the tab name where the job will appear on TestGrid, those are the dashboards under which it will appear, and a one-liner. We see two new fields here, two new annotations: fork-per-release and fork-per-release-replacements. What they do is tell Prow: whenever you are creating a fork of this job for a release (we maintain multiple releases in the Kubernetes project, so the job gets forked for every release), keep replacing those flag values, and those flag values
we see them here. So this is the spec area of our job. In this particular job we have an image called kubekins-e2e; that is the image that the upstream Kubernetes project uses to run the Kubernetes end-to-end tests, because this image ships with a few tools, like kubetest, and other tools like gcloud etc. to interact with the cloud providers (so we have the same for AWS as well), and kubectl, because we are talking about end-to-end tests: we will be spinning up a cluster, doing some tests, bringing it down, so we need kubectl. And the image itself will also give us kubetest. So these are the args that we give to that container image, specifically line 21, where we are giving a scenario, kubernetes_e2e. We are saying we have a few test scenarios to run, and here we are telling it: pick the Kubernetes end-to-end scenario. You can find all the test scenarios at the link in the footer, test-infra/scenarios; it's a folder inside test-infra where we have different scripts corresponding to different test scenarios. And here we are saying: run the kubernetes_e2e script, and when you run it, give everything below that double dash to the script; those are the configuration options for our test scenario. So how does this work? We create a pod; the container is created using the kubekins-e2e image; inside it we have test-infra checked out; we have added presets, so all of this is made available to us; and then we run our test script. We have given it the Kubernetes end-to-end scenario, so that script gets called, and all of these flags are passed to it. And what that script does is it forms a kubetest command for us, and kubetest is the tool that actually runs the end-to-end flow: we tell kubetest, you have to spin up a cluster, you have to run tests on that cluster, and then you have to bring it down. We can also tell it where to spin up the cluster; in this particular case we are telling it to use the GCE provider,
so: go and spin up a cluster in GCP. And we are also giving it information to interact with the project where it will be spinning up the cluster, and other information. We gave it a service account; it will use that service account to authenticate. We did not give any flag called gcp-project: Prow has another component called Boskos, which at runtime is a lease management tool, a resource lease management tool, so it gives you access to available cloud resources. This job will ask Boskos, give me a GCP project, I need it, and it will get one from there. We also have two more flags here, called extract and extract-ci-bucket; that's exactly what we discussed in example two. So this is where we are saying: you have to spin up a cluster, but how will you spin up a new Kubernetes cluster? You grab all the Kubernetes binaries from that bucket using those version markers. So you go to the k8s-release-dev bucket, look for the version marker, and grab all the binaries from there. We hit that URL, we get back a version marker, we download all the binaries from there, we untar them, and then finally we are saying up, down, test: that is, you bring up a cluster, you test (what test? we give it in the test arguments down there, where we ask it to run the conformance tests), and then, once you have done the test, dump all the logs that you are collecting in that artifacts folder and then bring it down. So that's what's happening here: we are starting a cluster, we get a cluster, we set our config context to that cluster, and then we trigger our test. The test runs inside the Kubernetes cluster that's running inside a GCP project, and we keep collecting all our logs from there; once we have the logs handy, we bring down the cluster, and all those logs will be made available to us on Spyglass, so we can see them there. The job appears here, under conformance-gce, master, on TestGrid. A note here: we looked at this job, which is using the kubetest binary, but there is a successor to kubetest, which is
called kubetest2, the same but a more simplified, modular version of kubetest, and that's the recommended tool to use; we are in the process of migrating from kubetest to kubetest2 in the upstream space. So we looked at what Prow jobs are, their anatomy and different examples, but we know Prow jobs won't always be running green; there will be times when they are red, and we have to diagnose them. So the last section of our talk is testing Prow jobs, just quickly. Prow is a very complicated infrastructure; there are a lot of plugins in it that enable us to run all sorts of different tests that are required by the upstream project, so we can't really replicate everything locally, but we can get as close as we can with the resources available. The easiest way is always: you grab the container image, you pull it down, you exec inside the container, and you run your command, whatever you are giving it in the command section or argument section. That's one way. The slightly more automated version of this manual approach is a tool called phaino, which is available, again, in test-infra: you clone test-infra, cd inside it, and you run go run ./prow/cmd/phaino and give it a Prow job URL (you can grab that URL from Spyglass). It will do the same thing: it will grab the container image URL from the YAML spec and create a container from it, but it will keep proactively prompting you: this is what I need, give me this environment variable, or put in this mount; whatever is needed by the Prow job to run, it will tell you. And then finally, the easiest, which most of us actually do: all of our Prow jobs sit inside the test-infra repo, so if something is really required, you go and raise a PR against the test-infra repo itself, get it merged, give it some time to soak, see if it is running there. Good if it is; if it's not running, you make changes in your PR and PR again. Well, because we can't replicate it locally all the time,
this is the method which most of us use, but it's always good to test your changes locally as much as we can. And with that we reach the conclusion of our talk. Let me just summarize quickly: we learned what a Prow job was; we learned about the types of Prow jobs; we peeked into their internal anatomy to see what they were made of; we learned about reducing redundancy with presets; we added functionality with Pod Utilities; we gave jobs a sense of the world with environment variables; we saw how to visualize jobs and drill into their details with TestGrid and Spyglass; then we looked at where jobs reside, the code in the test-infra repo; we searched and sniffed out the ones we were looking for using Hound; and finally we did a deep dive into various examples and saw how Prow jobs live in the real world. So, looking forward, we hope the one thing we leave you with is the confidence to go read Prow jobs, focus on parts of them, analyze them, and then tinker with them. You can try it all at prow.k8s.io; the source code is at git.k8s.io; you can scan this thing to leave feedback and get the slides. We're on Slack, Priyanka Saggu and Jason Braganza, and our email is also here. Priyanka, you want to take this? So, one: we looked at very few examples here; there are a lot more we can see in TestGrid, and if anyone is interested in learning more about the other Prow jobs that we have, or maybe doing some collective learning, reach out to us at SIG Testing, or give us a ping and we can run something together, like a mentorship cohort. We need help here, to be reviewers of those jobs: we get a lot of PRs making changes on the Prow job side, so if you are just interested in learning, give us a shout there; we'd be happy to help. And in general, if you are interested in asking any questions about TestGrid, Spyglass, these Prow jobs: SIG Testing and SIG K8s Infra on slack.k8s.io is the right place for you. With that, thank you again for joining our talk, and now for questions.
Thank you. So you are asking about guaranteed execution. Well, it's Kubernetes again, so whatever guarantees we get from Kubernetes, we get here. But yes, like we saw, there are fields like max_concurrency, and it totally depends on whatever resources we have in our Prow build clusters. We try to make sure we have enough, because we need these to be running; there is a reason why some of them are set to run every six hours and every one hour, because we need that signal to build some CI signal. So, well, I am not sure about a guarantee in the strict sense, but as far as I know, there will at least be a trigger. It might fail, giving you something like: oh, I did not find a GCP project from Boskos, I needed a project and it was not available at that time. So it will try again, but it will get triggered, that's for sure. Yeah, if we are low on resources, that's a Kubernetes problem again, but yes, there will be a Prow job triggered and you will be able to see that happening in Spyglass. How hard is it? We know it's very hard, and we face the same issues, which is why we gathered what we knew and bundled it into this talk. Yeah, so the question is how easy or hard it is to maintain the Prow infrastructure itself. Well, we also have jobs that bump things whenever there is a change to Prow itself: whenever something new is added to the Prow tooling, we create Prow container images for it, because, again, it's YAML files that are applied to a Kubernetes cluster to make the Kubernetes cluster a Prow service cluster. So whenever there are changes, we have some Prow jobs, called autobump jobs, that come into the picture, and they auto-bump the cluster. So it's automated, and it happens whenever a change comes in. Is it hard? Yes, things fail and you have to come in and maybe fix it, but we have tried to automate that. Hope that answers your question. We have Prow; we don't have another CI/CD
So, why do we have Prow here and not some other CI/CD system? The number one reason is that we wanted something very Kubernetes-friendly. I am not sure how a Concourse or Argo CI/CD would run here, but we wanted this model: whenever there are new changes, we just bump the container image version and apply the config again. That's the idea we follow; whenever there are changes, we bump the image version and reapply. And it's self-healing: we will keep reapplying until it's in a good state. Maybe if it's all gone, then somebody will have to come in and fix it. That also happens, but not often, because we also have presubmit tests that check whether whatever new is coming into the Prow config actually works, and we have a staging area and things like that.

I honestly am not sure about that, sorry. The question is about using Helm charts to deploy, so you mean how to bring up Prow on your own? Okay. We have documentation out there: in test-infra, the config/prow folder we just touched upon. If you click into that folder, you get all the YAML files that are applied to the cluster, and there are handy documents as well that tell you what you need to apply first, and in what order. Right now, upstream Kubernetes Prow is very GCP-tied, so we also have tools to automate the entire Prow deployment. That's not so much there for other cloud providers. But because we are running millions of jobs every single day, and that's turning into a huge cost for the project, we are also looking for help from other cloud providers, and we have received help from some. So we are working on making Prow more cloud-agnostic, and there will be more documents coming out so that maybe we don't have to manually apply each and every YAML file, as we do right now for non-GCP cloud providers. But for GCP, yes, we have tooling like that.
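The bump-and-reapply flow described above can be sketched roughly as follows. This is not an actual manifest from config/prow, and the image tag is invented; it only shows the shape of what an autobump touches: the image tag in a component's Deployment is updated, and the manifest is then re-applied to the service cluster.

```yaml
# Sketch of a Prow component Deployment; an autobump changes only the
# image tag, after which the manifest is re-applied to the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hook            # the Prow component that listens for GitHub webhooks
  namespace: prow
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hook
  template:
    metadata:
      labels:
        app: hook
    spec:
      containers:
      - name: hook
        image: gcr.io/k8s-prow/hook:v20240101-abcdef   # hypothetical tag, bumped automatically
```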
It can bring up Prow in about five minutes. It is changing, but since we already have so many jobs running, and whatever they create is important, it's a very slow migration. Take the third example we saw, where I made a note about using kubetest2: we have a similar job that uses kubetest2 as well. So what we are doing in the upstream space is making clones of jobs and giving them soak time to become stable. Once they are stable and giving us good enough signal, we will slowly remove the old ones. But it's going to be a very slow migration, because not all Prow jobs are created by core Kubernetes contributors; people from other projects who are testing integration with the main project come and add jobs, and sometimes the person who added a job is no longer part of the project. Then you don't have the context for how that job was added, what it is doing, and how to change it, to move it away from, say, Docker to other container runtimes. So things are happening, but what we just showed will stay for a very long time, and more things will also come in.

Yeah, on the second part, about doing Docker-in-Docker: I just want to make a note that kubetest, sorry, the kubekins image we saw, does that Docker-in-Docker setup. We wanted to include it because there are a lot of images, a lot of Prow jobs, doing that, and it's very hard to read those jobs and understand them: what you see is a big bunch of flags, and you can't make out what they are doing. So for the sake of the talk, we wanted to include that. But if you go to the documentation, you will see a big banner there saying kubetest is no longer recommended; we do not want Docker-in-Docker there. So we are moving away; there are already efforts out there to make it more agnostic. It's just that we need something already running to drive the releases.
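A minimal sketch of the Docker-in-Docker pattern those jobs use, to make the "bunch of flags" easier to recognize when you read one. The org, repo, job name, and image tag here are hypothetical; the two knobs that make dind work in kubekins-based jobs are the `DOCKER_IN_DOCKER_ENABLED` environment variable and the privileged security context:

```yaml
# Hypothetical presubmit showing the Docker-in-Docker knobs discussed above.
presubmits:
  example-org/example-repo:       # hypothetical repo
  - name: pull-example-e2e        # hypothetical job name
    decorate: true
    spec:
      containers:
      - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # tag is a guess
        env:
        - name: DOCKER_IN_DOCKER_ENABLED   # tells the image entrypoint to start a Docker daemon
          value: "true"
        securityContext:
          privileged: true                 # dind needs a privileged container
```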
But we are also working on changing them; there are kind jobs as well. If you go to TestGrid and search, you will find all sorts of variations out there: people are testing all these various options, which are already in discussion or maybe already tested. Again, they are already out there; maybe somebody started one and did not drive it to the end, something like that. It's a slow process; people are trying options, it's just that moving entirely off the old way is going to take time. But for all of these questions, if you are willing to help make things more agnostic, reach out to SIG Testing and SIG K8s Infra, specifically SIG K8s Infra, because people are needed to actually help maintain this Prow infrastructure itself. Any help would be welcome. Thank you.