So, I'm Sally. I work on OpenShift. I've been on a few different teams within OpenShift for almost five years now. Yeah. I'm Urvashi Mohnani, and I'm a software engineer on the OpenShift node team. It used to be the runtimes team. So, I work with the container tools that you all probably know and love: CRI-O, Podman, Buildah, et cetera. Yeah. So, we're here to talk about CI/CD, although we are non-experts of CI/CD. Yeah. All right.

So, we came up with a list of 10 reasons why we are here doing this talk. The first one is that we were very interested in how the CI/CD infrastructure works for OpenShift, but we didn't have that much knowledge. So, what better way to learn than to make yourself give a talk at a conference? Of course. Yeah. We wanted to visit Brno in January. Said nobody ever. Always love the free swag that speakers get. Yeah. The pretty awesome hoodies that we're getting this year. Yeah. I'll make a new closet in my house for them. Yeah. Social media. You all know if you give a talk at a conference, you're, like, famous on Twitter for 10 minutes, so. Selfie time. Selfie time. Don't have it. And we also wanted to sort of clear up what CI/CD actually stands for. There are two different options out there. I say continuous integration, continuous deployment. Continuous integration, continuous delivery. Yeah, it doesn't matter. Whatever. One of those two. Maybe, isn't there another D too? Development? Development. Continuous development. That's it. Wait, I don't know. Free food. Yay. Even more free food. Free cups. I couldn't think of anything else, so now I've got free cups. My New England Patriots and Tom Brady lost a few weeks ago, so they're not in the Super Bowl this year, and I just had to come and drown my sorrows with some good, cheap Czech beer. And an excuse to brag about how amazing our OpenShift CI is. Yeah.

We're getting to the real point now. It is. We're actually serious. So one of the main reasons is that it's pretty fascinating how big, complex projects like Kubernetes and OpenShift actually get compatible code in on a daily basis, especially since they have so many different moving parts, different repos, and over 100 contributors all around the world. So we wanted to share an overview of how the CI/CD infrastructure is actually very important in getting a stable release out. Yeah, maybe many of you have never thought about how this happens. There are about 100 repositories that make up an OpenShift release. Just a few years ago, everything was in a single repository, openshift/origin, and same with Kubernetes. Over the past few years, everything's become distributed, and hundreds of repos make up those projects. How do you ensure that one component isn't breaking another? Backwards compatibility? All of those are really hard problems. And what we hope you'll take away from our talk: respect for your CI/CD team; for us, it's the OpenShift CI/CD team. Hug them today. Yeah, you will get an overview of how our CI/CD system works, especially how it has evolved over the years as the project itself has evolved. It's perhaps the most important part of any project. It's what drives that project toward the goal of a stable release. And you will also, hopefully, take away how important this is, and understand that having CI jobs running is actually your team's job.
The CI team will just give you an infrastructure that can actually run those jobs for you, but you have to create those jobs, maintain them, and keep updating them as your code changes. And actually, yesterday we found a pretty big bug in our code and realized that we didn't have a CI job for it, which is pretty ironic, because I was giving a talk today on why CI/CD is important. So, there you go. Yeah, the CI/CD team will happily run your jobs and fail them 100 times. It's not their job to fix those failures; it's your job to know your project and to fix them. And automation is essential in a project like Kubernetes and OpenShift. Lots of YAML involved, lots of jobs to run, lots of platforms, lots of cloud providers. Automation is essential.

So, let's start simple. There are two main moving parts to the CI/CD infrastructure that we have, or the CI infrastructure, actually. We have Prow, which is a Kubernetes-based CI/CD system with GitHub automation. Basically, it runs your tests in a container in a pod. And ci-operator enhances Prow; it adds onto it to fill the needs of the OpenShift CI. Yep, Prow was Kubernetes' solution to their CI, and we took that, opinionated it, and made it work for OpenShift too. So, ci-operator was a tool developed by our test platform team, just to add automation around executing Prow jobs. It's a super cool tool; I believe Steve Kuznetsov pretty much developed it. Is it a Kubernetes operator? No, it's not. It was developed just before Kubernetes operators became a thing. So, the name is a little bit confusing: it operates CI, but it's not a reconciling-to-a-known-good-state Kubernetes operator. Yeah. So, Sally, if Prow is deployed on Kubernetes, and OpenShift is Kubernetes, does that mean that OpenShift CI can run on an OpenShift cluster? Yes, and it does, and not too many people know about that. That was a main goal of ours: to let you all know that we run our OpenShift CI on OpenShift. Mind blown.

And now we're finally ready to dive in and discuss how this all works. Yep. Oh, wait, let's digress. A little background information: our goal here is to come up with an OpenShift release, or for any CI, it's that stable release. So, what is an OpenShift release? Yeah, so here you can see a snippet of what an OpenShift release looks like. It's basically an index of image digests that are known good and tested together. We have almost a hundred of these images, which represent the different components that make up OpenShift. Yeah, that's a shortened list, but with any release, you can run that command, oc adm release info, and pass it the name of the release, and you will see a list of all of the components that make up an OpenShift release. And each of those components lives in a separate GitHub repo, and those are all included in our CI and tested with every single PR of every single one of those repos.

And how does that all happen? Well, with every merge, a new release is created, and that release includes 99 components that are known good with your one component that's questionable. From that release, clusters are launched and tests are executed. If they all pass, then back in your PR in GitHub, you'll get the all green. If you have all of the proper labels, like approve and lgtm ("looks good to me") and such, it will move to what Prow calls a Tide pool. A Tide pool is a small group of PRs that will merge together.
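[Editor's note: to make that release-image idea concrete before we follow the Tide pool through, here is roughly what the oc adm release info command they mention looks like. The pull spec and the output below are illustrative and heavily abbreviated, not copied from a real release.]

```console
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.3.0-x86_64
Name:      4.3.0
...
Images:
  NAME                                       DIGEST
  cli                                        sha256:...
  cluster-kube-controller-manager-operator   sha256:...
  installer                                  sha256:...
  machine-config-operator                    sha256:...
  ...                                        (roughly a hundred components in total)
```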
So in that small Tide pool, say there are three or four PRs that are ready to merge. Prow will execute further tests: it creates a new release with those four changes, if it's four PRs, and runs the tests again. If those are all good, that's when Prow will auto-merge those changes. Once those changes are official, a new release gets published to our release page. We have a release page that you can check out; go back to the index there. Throughout the day, we're building new releases every few hours, and with every release, if you click on one, you can get a change log. Go down to the 4.4s, right down here. Yeah, click on any one of those. It takes a minute for the change log to pop up, so we might not see it, but it will reference exactly every PR that's in that particular release. Now, once a day, we get a nightly release, and those nightly releases are passed on to QE for further testing and further periodic jobs. With a wave of a magic wand, that release finally gets released to the customer and the public. And that's how it works, but it's all orchestrated through an OpenShift cluster running our CI. This is not loading; I think it's the internet. Oh, that's all right.

Back to the slides, so now we're really ready to dive in. Let's walk through a typical workflow of a job, and you'll get a sense of what's happening in our CI. Yeah, so with ci-operator, we have something called config files, which are basically the definition of how you want to build your image and what tests you want to run. Every time you open a PR to a certain repo, you want to build an image that incorporates your changes, so that it can be injected into the release image that will be tested by all the tests that you have enabled. This part is a small snippet of what the config looks like when you declare which images you want to build. So the build_root is basically a way of telling ci-operator how to get your source code in to build it. The base_images are the building blocks, so you can use them later; you can do a "from: base", the way you would in a Dockerfile. And then the images part here just tells ci-operator where your Dockerfile lives and how to build your source code. So if your source code needs to be built, if you need an RPM for it, for example, that needs to be injected, then you specify the path here; you tell it which Dockerfile to use, and the Dockerfile actually lives in your repo, it doesn't live in the test infrastructure. Yeah, crafting that Dockerfile is usually half the challenge of getting your CI going. It's basically: how would I run this locally? We'll put that in a container, because in our CI, everything runs in OpenShift pods. Yeah, and then the other part that makes up the config file is the tests that you want to enable. So every time you open a PR to a repo, you will see a bunch of tests start running. This is where you define which tests you want to run. They sort of gate your changes, and based on whether they pass or not, your changes get merged in. So you can define different tests. You can have unit tests. You can have end-to-end tests. You can have integration tests. This is where you do it. And if your tests need to launch an external cluster on a cloud provider, for example OpenShift on AWS, you can specify that there with AWS. We'll talk about this a bit more as we go; this is just a quick overview.
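[Editor's note: a minimal sketch of the image-building half of a ci-operator config file like the one on their slide; the tests half comes a bit later. The image names, tags, and Dockerfile path are made up for illustration.]

```yaml
base_images:
  base:                    # building block; referenced below as "from: base"
    name: "4.3"
    namespace: ocp
    tag: base
build_root:                # how ci-operator gets a build environment for your source
  image_stream_tag:
    name: release
    namespace: openshift
    tag: golang-1.13
images:                    # the component image built from a Dockerfile in your repo
- dockerfile_path: Dockerfile
  from: base
  to: my-component
```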
So from every config file we have, we get jobs. There are three different types of jobs that you can have. We have the pre-submit jobs, which basically convert your config file into jobs for building those images and for running the tests that you have defined. Then the post-submit jobs are jobs for creating the promoted images, which get tagged into the image streams so that eventually the other tests in the whole infrastructure can use your image for testing. And then the periodic jobs are just jobs with a cron element. They run as often as you specify. So if you have certain features in your project that you always want to test, regardless of whether you've opened a PR making changes, this will run as often as you tell it to. For OpenShift, we run upgrade jobs, we run jobs in a disconnected environment, and so on. Yeah, those pre-submit jobs are all the jobs that happen before your PR merges.

I want to pause real quick here, because a few weeks ago I was working with Federico Paulini, another Red Hatter, and we were working through a CI issue for his team. After a few days of really working together on Slack, he pinged me afterward and said, thanks so much, we got this figured out. I ended up giving a presentation to my whole team on what I learned about our CI, and it went over really well; everybody really liked it, so thank you. And I said, awesome, because I'm giving a presentation in a few weeks at DevConf, and can I pilfer your slides, because I haven't started mine. And he said, sure thing, no problem. And I said, I'll give you a shout-out during my talk. So this is one of the nicer slides that he made, and I basically started our slide deck with his slides.

So this slide is showing a typical workflow of a job. It starts with a GitHub repo; in OpenShift, we're talking about a component that's part of the release. Then there's that config file we talked about. The config file generates job YAML that lists all of the jobs that need to execute. From that job YAML file, the jobs happen. There's the unit test, which is just: start a container that has my source code in it, enter my source code directory, and run make test. It's that simple. Then there are the more complicated jobs that say: take my source code, launch an external cluster, keep my kubeconfig in the OpenShift pod in the CI cluster, go into my source code directory, and execute my end-to-end tests against that cluster. And then there are other tests that say: execute the whole big centralized test suite of OpenShift and Kubernetes on an external cluster with my change. So that's basically what happens.

And so let's go back to the CI config and talk a little bit more about the images. Yeah, we'll talk a little bit more about the image builds. So there are different kinds of images that you can build. The first one is a source image, which basically takes your source code and builds an image containing it. There are two ways you can define this right now. You can use an image stream tag: let's say your source code doesn't need anything new installed, you just need to pick it up and pull it in; you can define that with an image_stream_tag under the build_root. But let's say you have to install different things, you probably have to create an RPM from it or something and then inject that; then you specify a path to a Dockerfile in your repository that does those steps.
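[Editor's note: a hedged sketch of the kind of component Dockerfile a repo might carry, since crafting it is "half the challenge". The builder and base image references are illustrative; the real ones depend on your repo and the CI registry.]

```dockerfile
# Build stage: compile the component from source.
FROM registry.svc.ci.openshift.org/openshift/release:golang-1.13 AS builder
WORKDIR /go/src/github.com/openshift/my-component
COPY . .
RUN make build

# Runtime stage: copy the binary onto the base image declared in base_images.
FROM registry.svc.ci.openshift.org/ocp/4.3:base
COPY --from=builder /go/src/github.com/openshift/my-component/bin/my-component /usr/bin/
ENTRYPOINT ["/usr/bin/my-component"]
```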
That Dockerfile lives in your repository, and the steps in it actually go through and build your image for you. Did I miss anything, Sally? I don't think so, I think you're good. Then the base images were that top section. That just declares an image that you'll use further down to build your images. So if I say my base image is "os", then down in the image build section, I can reference that os image, and that becomes like the FROM statement in the Dockerfile. Yeah. You also have some special images that can be built from the source image. You might want to create a binary image or a test-binary image, for example for your unit tests or end-to-end tests that need your source code. So you add that to your binary build commands here; the make build is just a command that lives in your repository, in your Makefile. You link that over here, and when it starts the test, it will run it for you. Yep. What we're showing here are just snippets of that config file. The config file is like the central part of CI. The last one, the target images, is of course: there's a Dockerfile that lives in my repository, build it, and out will come my component image. And that's the image that goes into the release. So those are the target images. You can also cherry-pick from other images, but I think most people will just have a Dockerfile in their repository. Yeah.

And then we also have this field called tag specification. Basically, when you start your job, there are so many components that come together in a release image to actually instantiate a cluster and run your tests on it. So when you specify, for example, 4.3, you're telling your CI infrastructure: when you start testing my changes, I want you to pick all the images that have the 4.3 tag on them. So you're specifying which version you want to pick up. Yeah. And the promotion section says: I need this image that I'm building to be available to the rest of CI. It could be because it's going to be part of the release. It could be that I'm building the CLI image, and other tests are going to need that newest, latest, greatest oc. So make that image available to all the other templates and jobs. Yeah. So you promote the image, basically. And once the image is promoted, it can be referenced elsewhere by its image name. Yeah.

So this was a deeper dive into what the config file is. And when you put everything together, this is the kind of config file you'll have. You have your base images; your build root, which specifies where to pick up your source code; your tags; your promotion; and then finally, all the tests you want to run. You can add as many tests as you want. And you all can check these out too; it's in openshift/release, under the ci-operator directory. Yeah. And then you'll see the jobs directory and the config directory. Yeah.

All right, so let's talk a bit about the different kinds of tests that we can run. The first one is the unit tests. They just run inside a pod in the CI cluster, so you don't need to launch an external cluster for them. You just define that here, and when it picks up your source code and builds from it, it can just execute the unit test command you usually have in your repository. It can also be something like a hack/run-unit-tests script.
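[Editor's note: a hedged sketch of the remaining config pieces just described — binary build commands, tag specification, promotion — plus a tests stanza like the ones walked through next. Field names follow the ci-operator config schema of that era as we understand it; the commands, versions, and test names are made up.]

```yaml
binary_build_commands: make build             # builds the "bin" image from src
test_binary_build_commands: make build-e2e    # builds "test-bin" for e2e suites
tag_specification:        # pull the other ~99 release images from the 4.3 stream
  name: "4.3"
  namespace: ocp
promotion:                # publish my images back so the rest of CI can use them
  name: "4.3"
  namespace: ocp
tests:
- as: unit                # runs in a pod on the CI cluster, no external cluster
  commands: make test-unit
  container:
    from: src
- as: e2e-aws             # template-backed: launches a cluster, includes your source
  commands: make test-e2e
  openshift_installer_src:
    cluster_profile: aws
- as: e2e-aws-conformance # the centralized acceptance suite; no source needed
  commands: TEST_SUITE=openshift/conformance/parallel run-tests
  openshift_installer:
    cluster_profile: aws
```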
Yeah, just whatever command you run locally in your repository. And then the end-to-end tests are tests that are really specific to your component. They don't have to be run by all of the components, and they live in your repository. For this type of test, or any test that launches a cluster, you need to start with a template. We'll get to templates a little later. But this template incorporates your source code, because you're going to need to cd into your repository and run something like make test-e2e once a cluster is launched and running. So the template is that openshift_installer_src test type, which is mapped to a particular template. And again, we'll talk about that a little later, but the templates live in openshift/release under ci-operator/templates. And then, oh yeah, that's it. Oh, did you do the periodic job? I think it's, yeah, there you go. It was being weird. Yeah, so another one is the periodic test. As I mentioned before, it's just a job with a cron element. You can see how you define the cron element in the YAML there. It will just run as often as you tell it to. And then the final class of tests is the acceptance tests. These are the really important tests for each component of the release, because they launch a cluster and then execute the whole conformance and acceptance suite for Kubernetes and OpenShift against that cluster, which was built with your change. And this is a slightly different template: the openshift_installer test type maps to a slightly different template, and it doesn't have to include your source code. That's the difference, because these tests are centrally located for all components to run. And they're actually incorporated into the release itself as a separate image, the tests image. Yeah, so basically every repository that makes up OpenShift has these tests enabled, just to ensure that when you add one small thing to one, you don't break everything else. These are actually the gating jobs: if these don't pass, your change is not getting in. Yeah.

So, let's wire it all up together now. Yeah. So once you've written your config file, you've defined how you want your images built and what tests you want to run, and you're ready to go on. Oh, wait, wait, wait. We said something about that job YAML. Where does that come in? Right, yeah. Now I have to write my job YAML, which we didn't tell you how to do. And it's really long, and it's a lot of YAML. Yeah. But luckily the CI team has made a very simple command called make jobs, which auto-generates your job YAML for you, as long as you have your config file written correctly. Yeah, all the automation that has been given to us by our test platform team is the result of us bugging them about how to do stuff. And they're like, I'm sick of telling you how to do this, here's a tool, just run it. Yeah. That's how all automation should happen. Yeah, and the job YAML is also pretty complicated. It defines all the environment variables for each job, all the different flags you need to run them, et cetera. So it's a pretty long file that you can easily mess up if you have to write it from scratch.
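[Editor's note: a heavily trimmed, illustrative sketch of what one generated pre-submit entry in that job YAML looks like — the file that make jobs writes for you. The repo, job name, and image are made up; real entries carry many more env variables, volumes, and flags.]

```yaml
presubmits:
  openshift/my-component:
  - name: pull-ci-openshift-my-component-master-e2e-aws
    agent: kubernetes
    always_run: true
    context: ci/prow/e2e-aws
    rerun_command: /test e2e-aws
    spec:
      containers:
      - image: ci-operator:latest
        command: ["ci-operator"]
        args: ["--target=e2e-aws"]   # run just this one test from the config file
```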
Some things that you maybe wouldn't think about are the credentials for all of those cloud providers. How does the OpenShift pod get those? They live in a config map or a secret on the OpenShift CI cluster, and they're shared with the pod; all of that kind of stuff is defined in the job YAML as different env variables. Yeah.

So I'm sure you're all sitting here thinking: well, Sally, that's great, I guess I can write a config file now, it's so simple. Yeah. There are docs. Yeah, but luckily you don't have to. The CI team also gave us a new command, which is pretty recent, called make new-repo, which is an interactive, step-by-step guide that asks you questions and, based on your answers, generates the config file for you. Yeah, the config file, plus the job YAML, plus the Prow plugins. I didn't really talk about the Prow plugins, but they're just a list of plugins that map to GitHub actions or comments, really. So the lgtm, the approve, the assign, the okay-to-test, all of those are Prow plugins. There's even a meow: if you go into a Prow PR and you type /meow, you'll get a cat picture, I think. There are a few little Easter eggs there. So when you run make new-repo, it will populate every single plugin for you, and you can go in and fine-tune; if you're not cool and you don't want the meow plugin, you can delete it. Yeah, the CI team pretty much got tired of everyone asking them how to write a config file, so they just created a step-by-step interactive way for us to do it. Yeah. Why is this not working? Yeah.

So the last and final piece of putting it all together: we have everything we need, we've got our job all ready to execute, but let's talk about those templates. Any time you have a job that needs to launch an external cluster, that uses a template. Yeah, so a template is essentially just pod definitions that tell the CI infrastructure how to launch these clusters. It creates different pods and containers: for building your images, for setting up an external cluster, for actually running those tests against that external cluster that you set up. It has the different volume mounts to share files; usually in all the tests, we need the oc binary to be shared across all the containers, so it injects that in. So it's just a definition of how to run. Yeah, and it injects the oc binary that was the latest and greatest built from our CI, using that CLI image we mentioned, as an init container or a volume mount. And then oc is available to every container in the pod. Then another container, the setup container, is what has the OpenShift installer binary. Side note: the OpenShift installer binary is actually an image that is included in the release itself. I think that's super cool; I don't know if you all know that. But the installer itself is packaged with the release. So the setup container just runs that OpenShift installer and launches whatever cluster you tell it to launch, whether it's GCP, AWS, whatever. Then there's a marker file, and the test container just waits until it sees that the cluster was launched successfully. Once that pops up and it sees that the cluster is up, the test container kicks in and starts running all of the tests. The kubeconfig is shared via a volume mount in a temp shared directory, so the tests can run from the OpenShift pod against the external cluster.
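[Editor's note: a hedged sketch of the shape of a cluster-launching template's test pod, including the teardown container described next. This is not the real template, which is much longer; the image names and marker-file paths are made up, just to show the setup/test/teardown choreography and the shared volume where the kubeconfig lands.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-aws
spec:
  volumes:
  - name: shared-tmp          # kubeconfig and marker files are shared here
    emptyDir: {}
  containers:
  - name: setup               # runs the installer and launches the external cluster
    image: installer:latest
    command:
    - /bin/bash
    - -c
    - openshift-install create cluster --dir=/tmp/shared &&
      touch /tmp/shared/setup-done
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp/shared
  - name: test                # waits for the marker file, then runs the tests
    image: src:latest
    command:
    - /bin/bash
    - -c
    - until [ -f /tmp/shared/setup-done ]; do sleep 15; done;
      KUBECONFIG=/tmp/shared/auth/kubeconfig make test-e2e;
      touch /tmp/shared/tests-done
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp/shared
  - name: teardown            # gathers logs, then destroys the cluster
    image: installer:latest
    command:
    - /bin/bash
    - -c
    - until [ -f /tmp/shared/tests-done ]; do sleep 15; done;
      openshift-install destroy cluster --dir=/tmp/shared
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp/shared
```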
And then, if anything goes wrong along the way, or when everything is done, the teardown container is just waiting to kick into action and run the destroy-cluster command. Yeah. One more thing the teardown part does before it destroys the cluster: if your tests fail, it gathers a bunch of logs. We call it a must-gather, and it helps you debug what the issue was. So yeah, it also snapshots all the YAMLs that were used for the operators and everything, and the logs associated with them. Yeah, the thing is not working. Yeah.

So that's the troubleshooting. We get lots of feedback from all of these tests. It is a lot of logs to go through, but what they do is enable us to quickly pinpoint what's gone wrong, what we broke, what is broken in CI. Once in a while, Clayton breaks everything, and it's not our code, and you've got to go tell him. And he's like, oh yeah, I did do that. And we all freak out. Yeah.

So the different types of jobs usually give you feedback. For pre-submit jobs, the bot will be commenting on your PR. So if your test fails, like an e2e test fails, it will give you a link on how to get to the must-gather logs. You can go there, look at the artifacts, and actually get all the information you need to debug it. We also have Prometheus alerts that can be created and send you messages in Slack. And as our tools have developed and as our test grid has expanded, we also have a website where you can go and enter any error string that you're seeing, and it will list all the different jobs that failed with a similar error, or the same error actually, in the past few hours, days, et cetera. Yeah, so this is just a service that the test platform team provided us, basically, because we needed it. Because there would be one person in one repo saying, hey, my job failed for this reason, it's not related to my PR. Someone else is on Slack, like, hey, my job failed, what's going on, this has nothing to do with my code. And then a third person: hey, my job failed, this has nothing to do with my code. You can go to this simple UI and type any error string, and it will pop out how many times within the past day, two days, weeks, hours, that failure has happened in our CI. And you can quickly determine if it's just a flake; if it's only in your PR, then you probably broke it. We also have another tool that extrapolates that into a graph to show you the number of jobs failing. Lots of pretty graphs and test grids and, yeah, crazy. Again, that's Clayton. Only the experts can understand them.

So, oh yeah, we wanted to show you what the job actually looks like. Yeah, this is one of my PRs. In GitHub, I did /retest about half an hour ago, because I wanted to be able to show you. Here is my namespace. I'll just show you all the namespaces real quick. Maybe not so quick. You shouldn't have done that. Yeah, no, I shouldn't have. OK, so, yeah. Well, let's go back to the slides, and we'll come back to it. All right, yeah, so just ways to help: feedback, and lots of feedback. And the artifacts directory is a shared volume mount in the pod running on OpenShift. Eventually, that just gets shared to your pull request in the Prow UI, and it has all of the logs and everything you could want to troubleshoot. Yeah.

Yeah, so some more troubleshooting. Whenever you open a PR to a repo with a new job, like you're adding a new job, it starts rehearsal jobs that actually run the jobs to test that they work fine.
The console we just showed you, it's this website. It's basically our internal CI cluster that you can go to, and whenever you start a job, it creates a namespace with all the jobs for you, and only you can go and see them. Yeah, you're given the right to access that. And as the PR owner, you're the admin of that particular namespace. So if you're collaborating with somebody, you can give them access to your PR. And you can go and look at the pod logs. You can actually access the terminal of any of the pods, and you can run oc commands to see what your cluster is doing. If you're configuring a job, you might have to place a file somewhere on your pod, and you can check that it's there, because sometimes the paths get wonky, and you have to figure all that out when you're making a Dockerfile. Yeah, yeah. And if your job also starts an external cluster to run more end-to-end tests, you can get onto that cluster to see what's going on if your tests are not working as expected. Yep.

Maybe we should go back and see if it's loaded. Ah! The internet's not working here for some reason. Darn it. I really wanted to show you this. Yeah. I could run oc commands in my terminal to show them. Yeah, let's find those slides so we can do that. All right. Yeah.

So these are some resources for you. If you want to go check it out, there's a lot of good documentation on everything. It can be a bit daunting, but there are a lot of examples, and once you start going through them, it all makes sense. And if you have any questions, just don't ask us, because we're not the experts here. We just use it. We're at the mercy of our CI just like you all are. Yeah. Yeah, so those are some useful Slack channels to ping, and mailing lists. And yeah, that's it.

So yeah, we've all been frustrated with CI and have babysat it. We call it babysitting CI, where we're just sitting there hitting retest because there's a flake. But what you don't want to do is hit the force-merge button, because with such a distributed project, that can really wreak havoc on a release. And it has, many, many times. So you have to be patient, just work through the CI problems, and wait for the all clear. And let the robots do their job. Yeah.

Let's just see if it's... Oh yeah, there you go. Oh, cool. So here, every time Prow triggers a job, your namespace is created. Yeah, so these are all the namespaces. Those are all just, like, community namespaces, but the ci-op ones are what... So here, go to the pods. So here are all the pods that ran. The release-latest one is where the release image was created with my change. The cluster-kube-controller-manager-operator one built my new cluster-kube-controller-manager-operator image. And those two are those special images, the source build and the binary build. So go into the e2e-aws one; that's the gating job for all of our OpenShift repos. And if you go into the logs... oh yeah, sorry, there's a dropdown. And for all of you who maybe are in OpenShift, you might not know this, so it's really useful to be able to debug your PRs by doing this. These are the four containers in the pod. So if you go to the setup container, can you do that over here? Thanks. And you go to the logs, which... not available. Oh, they're not available; all you would see is the output of the OpenShift installer, and it would say: install complete, log in to the cluster using this password. And the teardown hasn't happened yet? No, I don't think so.
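[Editor's note: while the demo loads — a sketch of the kind of oc poking-around they're describing, run against the CI cluster. The namespace and pod names are illustrative; each PR job gets its own generated namespace.]

```console
$ oc project ci-op-1a2b3c4d        # your per-job namespace (name made up)
$ oc get pods                      # the build pods plus the e2e test pod
$ oc logs e2e-aws -c setup         # watch the installer bring the cluster up
$ oc rsh -c test e2e-aws           # open a terminal inside the test container
sh-4.2$ KUBECONFIG=/tmp/shared/auth/kubeconfig oc get clusterversion
```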
And then the test container is where you'll see the output of all of the tests just spewing out. But it's slow. And then you can go into a terminal. And here, will you hold that? Oh, it's done, so I can't. Once the tests are done, you can't anymore, but I was going to go into the terminal and type something like oc get clusterversion, and you would see that it's accessing my external cluster on GCP or AWS. Yeah, so, is there anything else we wanted to show them? It's a pretty UI to see how all your tests are running. Yeah, actually, let me see; this is a very storied PR. See, the results of my tests haven't come in yet, but it looks like everything is green. So, Mache, you can release that hold on this PR and we can merge it, thank you. Yeah, so all these jobs you see here are the jobs that are created from your pre-submit jobs. So, here, yeah, this morning I checked it, and my e2e-aws job had failed, and I looked in the logs, and it was some Terraform thing that had nothing to do with my change. That means that AWS crapped out on us, basically. So that's why I retested it right before we came. Like the Wi-Fi is crapping out on us right now. Yes.

So, I think that is it. How are we for time? Yeah. Are we short? Are we good? Yeah, oh, questions, anyone? Yeah, that's all. Thank you. How many of you knew about OpenShift CI coming in here? Anyone? How many knew that it runs on OpenShift? Yay! And how many have been really frustrated with your CI? Yeah, yeah, yep, yep, yep, yep. How many of you have experienced the Clayton break? Yay, yeah, all right. Yeah. And Steve Kuznetsov is amazing. He and Clayton really grew our CI to what it is. Yeah. It's awesome. So, thanks. Thank you.

Yeah, yeah. I'm sorry, can you say that again? Oh, yeah. So, Prow: if you have any kind of Kubernetes cluster, you can deploy Prow, and you can go to our repos and check out how we have everything set up, and you can use that to set up your own CI if you'd like. And ci-operator lives in another repo; it's openshift/ci-tools, I think. Everything is available for you to look at. And the release page I showed you, you can go and check out all of our releases, all of the PRs that are currently in a release. There's also an OKD release page. I don't have it, but on that page you can pull down the images, and those are our community images, OKD. Yeah.

First off, thanks for doing this presentation. Oh my gosh, thank you. Oh, yay. Thank you, everybody on my team, because... Yeah. Everybody knows 10% of how it works. Yeah. Thank you. Oh my gosh, thank you so much. Thanks.

Yeah, say again? Oh, that's a good question. We use all cloud providers. Our customers run OpenShift on all different cloud providers, on premise, on bare metal. I just showed you a few examples. There's GCP, AWS, Azure, bare metal. Yeah, so we test them all. We test them all, yeah.

Yeah, right. Once your pull request is green and has all the labels, it doesn't merge right away. It's pretty quick, but there's a Prow Tide pool that will merge a few together. Oh, I'm sorry, yeah. The question was: once your PR passes all the tests, it doesn't merge right away, it waits for other PRs. Why is that? It's a good question. I think it's just to group them together... Yeah, I think the Tide pool basically just puts them in a group and retests everything that wants to be merged, again.
And then once everything is fine and passes, then it merges. So it tests a bunch of PRs together again. Yeah, not a whole lot, but three or four. And I'm not exactly sure of the reasoning, but it does ensure that a concurrent change to this component isn't going to break a simultaneous change to another component. That wouldn't have been caught... I think that is it. Yeah, that wouldn't have been caught by just testing the 99 good ones with mine; now I have to test the 96 good ones with the four questionable ones together, to make sure those new changes haven't broken each other. That's a Prow thing; it's called the Tide pool. So Prow is located in kubernetes/test-infra. And actually, Steve Kuznetsov is a main contributor to Prow also. He's a Red Hatter. Yeah, it's all... All right. If you are in OpenShift or Red Hat and you have questions, you can find us on Slack. Lots of people do; they know I like to answer questions about CI. So they'll be like, hey, Sally, how do I do this? And you're so much nicer to talk to than Steve, so you can tell me. No, Steve's awesome, but he's busy, and sometimes he gets a little snarky, you know. Yeah. Mache, so add that to my daily chores. All right, we're good. Thanks so much. Thank you.