Hi, my name is Fominal Barrilla. I will try to introduce you to Prow. What is Prow? Prow is a CI/CD system built natively for Kubernetes. It's the platform where Kubernetes itself is tested, where OpenShift is tested, and where all the PRs, all the merges, all that kind of thing happens. Let me pull up this slide. I'm a senior software engineer at Red Hat; I love programming, and I'm an OpenShift and Kubernetes lover. This is more or less the agenda. I don't think I have updated it since the last revision, but it's fine; it's almost the same.

Okay, let's get started with Prow. As I said, it's a CI/CD system based on Kubernetes, fairly attached to GCE and GKE, and it's an opinionated platform, let's say. With it you can manage your GitHub repo through ChatOps: I put a command in a PR, and your GitHub bot takes care of the PRs, the merges, putting the right labels on the right PRs, and so on. It's written in Golang, which is cool. All the manifests and all the configuration are stored in Secrets, ConfigMaps, and the Deployments you put on the Prow cluster. The behavior is a lot like an operator's: an infinite loop in which each component checks the configuration in the ConfigMap and acts on it. That could mean interacting with other components, acting against the GitHub repo, or just creating a build or a test pod, whatever, okay?

I created this presentation as a kind of guide people could follow, but it's like a script, okay? You can use it as documentation and get hands-on with it, but I can't stop at every step I put here, because it's like 80 slides. This is how the Prow pods look in the control-plane namespace. As I said, all the configuration is in ConfigMaps and Secrets. We have two main namespaces.
The first one is the control-plane namespace, which by default is `default`. The second one is called `test-pods`, and that's where all the jobs are triggered. We have five or six main components, and I will talk about them. These are all the services that are up on the cluster; I think the main ones are Deck, Hook, and Tide, but I will go through them a bit later.

Let's go with the control plane. Okay, Hook. What is Hook? I think the word almost explains itself: Hook handles the communication between components and also with GitHub. GitHub talks to Hook, and Hook talks to whatever component makes sense, Tide or Deck or whichever. It's a standard Deployment with its pods, and it dispatches all the calls to the other components.

The other component is called Plank. I think it's the most basic one. It manages the execution of every test job you have in your job configuration, and it also manages the lifecycle of the jobs themselves. It also has a cool feature called decoration. Decoration adds some extra things to your jobs without you putting them explicitly in every job you configure. If you set `decorate: true` in your config, then all the logs and artifacts my build creates get uploaded to GCS, which is cool. You can have your own policy to delete, destroy, or restore your artifacts, and if I need to use a secret that isn't usually available to a job, I can rely on these utility images. You get many things with this decoration; it's very cool. The utility images bootstrap four or five images and containers, and each container does a different job. For example, there is one called initupload which is in charge of uploading the artifacts to GCS. There are many; I will go through them a bit, but not so deep.
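As a sketch of what this decoration setup looks like in Prow's `config.yaml` — field names follow recent Prow versions, and the bucket, secret name, and image tags are placeholders, not values from the talk:

```yaml
# config.yaml fragment -- Plank decoration defaults (placeholder values)
plank:
  default_decoration_configs:
    "*":                                   # apply to every repo
      utility_images:                      # the bootstrap containers
        clonerefs: "gcr.io/k8s-prow/clonerefs:latest"
        initupload: "gcr.io/k8s-prow/initupload:latest"   # uploads artifacts to GCS
        entrypoint: "gcr.io/k8s-prow/entrypoint:latest"
        sidecar: "gcr.io/k8s-prow/sidecar:latest"
      gcs_configuration:
        bucket: "my-prow-artifacts"        # placeholder bucket name
        path_strategy: "explicit"
      gcs_credentials_secret: "gcs-credentials"   # placeholder secret name
```

With this in place, any job that sets `decorate: true` gets the clone, entrypoint, and GCS upload behavior without repeating it per job.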
Well, Deck is just a UI. It's informative; you cannot configure anything there, but you can see every PR, every merge, and every job executed over time. Here we have another page called PR status: you can see how your PR is doing, how many commits it has, its status, and the author. Tide history shows all the merges and all the PRs that have been triggered for your repo or your organization; you can manage organizations and repositories as you want, and you can see here the action that happened, the target, and the base commit that started it, plus the PR author. The PR status page is basically a query against GitHub: it shows which author created how many PRs on the repo, their status, all the labels they carry, and so on.

Let's go with two more, called Horologium and Sinker. Horologium has no explicit configuration, but it's the component in charge of triggering all the periodic jobs, like a cron manager. Sinker is like a garbage collector. If you have, I don't know, 20 jobs in a row in your test namespace, it's not so clean, let's say; Sinker takes care of cleaning that stuff and keeping the workspace clean. Sinker does have explicit configuration, which is basically the pace at which to clean up jobs and that kind of thing.

Tide. Tide is, I think, one of the most important. It manages retesting and merging of PRs, and it needs another plugin called Trigger to work completely. It also keeps the ConfigMap with your configuration in sync, rereading it every time it needs to sync with the GitHub repo or pick up updates to your configuration file. Tide works with a pool of GitHub PRs, which is very cool, and it populates the Deck dashboard. It provides three merge methods, the same ones GitHub does: merge, squash, and rebase.
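The Sinker pacing mentioned above is just a few fields in `config.yaml`; a minimal sketch, with durations that are illustrative rather than taken from the talk:

```yaml
# config.yaml fragment -- Sinker, the garbage collector
sinker:
  resync_period: 1m        # how often Sinker scans for finished work
  max_prowjob_age: 48h     # delete finished ProwJob objects older than this
  max_pod_age: 30m         # delete finished test pods older than this
```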
The bad thing about Prow is the vendor lock-in: it's very tied to GitHub. The Prow folks are working on GitLab support to make it work there, but it's not working yet; it's ongoing.

Crier, I think, is the last one for the control plane. It's the notification daemon. If you work just with GitHub, it's fine: if you have presubmit or postsubmit, pre-merge or post-merge jobs, GitHub will notify you when something happens, whether a job fails or passes. But a periodic job doesn't have any context about your repo; it's just a generic job, and you have no way to get notified about it. This is the component that makes that happen. It has a specific configuration, and for now I think there are just four providers it can notify; these are the four. We will just cover Slack and its configuration. The configuration is just a secret with the OAuth token, and that's it. Okay, any questions so far?

All right, let's go with deploying this thing. I think this is the most complex part of the talk, but we have one way to do it easily, which is using tackle. Tackle is a tool installed using Golang; it will deploy Prow on a Kubernetes cluster on GKE. By default I think it's a three-node cluster; it can be configured and customized, and the default is three nodes with four CPUs and eight gigs of RAM, more or less. You could do it manually, but if you use tackle, the good thing is that all the ingress and load balancer pieces are managed by the Ingress and, in this case, by GKE. If you do it manually, you need to take care of deploying the load balancer yourself, set up an ingress like NGINX or something like that, and configure all those things by hand, which is not great.
As I said, this talk is based on the tackle installation, but here you have a sample of the manual installation. You need to configure your gcloud account in your shell and your OAuth token. I put the whole sample installation in the slides; you can just click on the links and that's it. As a script, for the manual installation you need to create your GCE instance, deploy Kubernetes, then clone the test-infra repo and apply all the RBAC and all the Deployments, and that's pretty much it. It's not so complex, but the ingress part is the bad part: you need to configure all the ingresses and everything. If you have a three-node deployment, you need to configure the load balancer and all those kinds of things. If you use GKE, this is done by the cloud provider itself. Okay, any questions? I think, yeah, go ahead.

No, this tackle tool is like an operator, but it's not an operator; it's just an executable, and there is no operator at all. I don't think you need one, because the update method is quite easy. The Kubernetes people are not covering that, I think, but PRs are welcome. Wherever you want to deploy it is fine, but it needs to be reachable over DNS by GitHub; you configure the webhook in the GitHub UI and that's it. The only thing you need to satisfy is the GitHub side.

It depends on your tests. If you have jobs like "I need to bring up another Kubernetes inside Docker," you need a very big machine. But you can configure extra build clusters, selected per job: you could have a cluster called "baremetal" on premise and another one sized to fit a cloud provider. You configure this on the jobs: you set the selector when you need to point a job at another cluster. For just Prow, without tests or with minimal testing, it's like one node with 8 gigs of RAM; maybe 4, but 8 is better.
Two CPUs or four is fine; there's no more to it than that.

All right, let's go with the configuration. By default, tackle will not deploy the SSL part, but in this presentation I configured Prow with cert-manager, and you have the full sample deployment and all the pieces: configure the issuer against Let's Encrypt, create the issuer, create the certificate, and with that create the Ingress. This creates the load balancer on GCE and points it at the right Prow instance, which is this one. I have the whole running instance here; I left it for two days straight just to build up some history. For example, if I go here I can see the logs of the execution for this job, and for this one. To configure this you just need to deploy cert-manager, configure the issuer against Let's Encrypt (it's ACME-based in this case), and with that create the Ingress. That's it. Let's Encrypt will call back, create the CA, and issue all the certificates.

Okay, and here's the big thing: the configuration, the plugins, the labels, and the jobs. You could merge configuration and jobs into one file, but the good way to do it is to separate the jobs from the configuration. I think it's quite nice to have them separated, and if you have a very big organization or repository, it's recommended to do it this way.

Let's go with the config file. For example, this is the config file. No, this is the plugins file, sorry. Yeah, this is the config file. Here we have the non-sectioned part, the defaults: things like the log level for debugging, the pod namespace, and the Prow job namespace. You need to configure the proper RBAC if you change the namespaces. Then the Tide configuration, which sets the sync period at which Tide rereads the ConfigMap and updates the live configuration it holds in the pod, and the merge method for your repositories; in this case we are configuring the merge method for all the repositories we have in the shadowman GitHub organization.
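The Tide settings just described, sketched as YAML — the org name follows the talk, and the period is an assumed value:

```yaml
# config.yaml fragment -- Tide sync period and per-org merge method
tide:
  sync_period: 1m          # how often Tide re-reads config and syncs with GitHub
  merge_method:
    shadowman: squash      # default merge method (merge, squash, or rebase) for the org
```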
Then all the queries a PR needs to satisfy to make the merge happen. For example, I have a PR on the prow-devconf repository and I want the bot to merge it, so all these requirements have to be met: it must not have do-not-merge, do-not-merge/hold, work-in-progress, or invalid-owners-file, and it needs at least two labels, lgtm and approved. That's what Tide needs in order to merge this PR. PR status is a page on Deck, but it's not a big thing. Then the Sinker configuration, just the garbage collector, as I reminded you. Then the path where you will store your logs and your artifacts in your GCS bucket. I just copy-pasted a template configuration, and I think that's the best way, because it's a mess to get this configured: job_url_prefix_config, which is where the artifacts are linked from, the timeouts, and so on.

And the decoration part, which I think is very useful. The utility images I talked about before: clonerefs, "I will use this image and this version of the image"; initupload, which uploads things to GCS; entrypoint, which bootstraps the container in the pod; and the sidecar. If you need a sidecar, you need to decorate the Prow job. Then Deck, which artifacts get created on execution, and presets, which say: all my jobs that carry this label get this configuration. It's not a big thing, but it's useful so you don't repeat yourself in the job descriptions. Okay. I think I showed almost all of it.

And here we are: the configuration is done. I will show the plugins file, but I will talk about it later. Plugins, yeah. Here's the plugins part. I think the config-updater is the best one. When you create a PR against your configuration repository, you can have an automated flow where a component pulls the configuration from the GitHub repo and updates the ConfigMaps from it. I think it's very convenient to have this configured in your CI/CD system.
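A sketch of the query and preset pieces just walked through — the label set mirrors the talk, while the repo name and preset names are approximated or invented for illustration:

```yaml
# config.yaml fragment -- a Tide query plus a preset (illustrative names)
tide:
  queries:
    - repos:
        - shadowman/prow-devconf          # repo name approximated from the talk
      labels:                             # labels a PR must carry to merge
        - lgtm
        - approved
      missingLabels:                      # labels that block the merge
        - do-not-merge
        - do-not-merge/hold
        - do-not-merge/work-in-progress
        - do-not-merge/invalid-owners-file

presets:
  - labels:
      preset-gcp-creds: "true"            # jobs opt in with this label
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /creds/service-account.json
    volumes:
      - name: creds
        secret:
          secretName: gcp-sa              # placeholder secret name
    volumeMounts:
      - name: creds
        mountPath: /creds
```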
And it's, as I said, very convenient, because if you are working with many people in a team, it's good to have separate folders for every repo you are managing. I will show an example. This is the repository where all the jobs of our organization live and are managed. The organization has maybe 20 repositories, and every repository has its own folder. Inside each folder you have, well, this is not the right one, let's go with this one: periodic jobs, presubmit jobs, postsubmit jobs. And if you have some presets configured for them, you can use them there.

With that, you just point to them using another tool called Bazel, which is a build tool with many commands already set up in Golang, and you use a tool called checkconfig to make sure you don't upload bad configuration to your CI/CD system. Then you just need to update your ConfigMaps and that's it; and if you have already configured the config-updater, you don't even need to update the ConfigMaps yourself, the PR will take care of it.

I will show an example of a pull request already made and merged against the configuration. I think this is it. Yeah, this is a PR against the configuration. Some folks helped me with it, because you cannot approve your own PR, which I think makes total sense. When someone puts something like lgtm or approve on the PR, the bot takes care of approving and merging it, and it notifies you which ConfigMaps will be updated. In this case, plugins and config have been updated, this periodic job and this presubmit one; the bot informs you how the configuration has changed, and it's published on the PR. I think it's very convenient, and if you do this in a serious way, I think you need to configure it in your repo. Okay.
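The config-updater flow described above is driven by a small map in `plugins.yaml`; a sketch assuming the split-by-folder layout from the talk, with placeholder paths:

```yaml
# plugins.yaml fragment -- config-updater (placeholder paths)
config_updater:
  maps:
    config/config.yaml:
      name: config          # a merged PR touching this file refreshes the "config" ConfigMap
    config/plugins.yaml:
      name: plugins
    jobs/**/*.yaml:         # one folder per repo, as in the talk
      name: job-config
```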
With that, we can start with plugins. Any questions so far? Okay. Plugins. I think this is another thing that is very important. It's very extensible, and it's not very complex to create your own plugin for Prow. This is the main configuration file for plugins; I can show you. You can configure plugins for your whole organization, and Prow will extend that configuration to each repo; if you list the plugins at the organization level, that covers all the repos, and for every repo you want to configure separately, you can do it this way. The other thing you need to configure to make Tide work properly is the trigger plugin, which takes care of the merging. With that, it's fine; it's not so complex.

Also, for approving, you can have more configuration. In this case it's just implicit_self_approve, which means that when I create a PR, the bot will assume, let's say, that I put an lgtm on it. And lgtm-acts-as-approve I set to false, because it's not always the case. Another good thing you can do is have a GitHub review act as an lgtm: if you review the PR, that counts as the lgtm, so you don't have to repeat yourself.

I already got into the config-updater; I already talked about it. Yeah, the ConfigMaps. They live in the control-plane namespace, and we have four ConfigMaps; they are these ones. It's all YAML-based, and it's parsed by Tide. If you separate your job config from your configuration, you need to decorate every Deployment you have in your control-plane namespace to include these lines, which say: I have my separated job configuration, I need to mount the volume at this path, and I need to use this ConfigMap. That's pretty much it; you have a link on how to do it here. And this is how a repository looks when the jobs are separated per repository.
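The "decorate every Deployment" step for a separated job config looks roughly like this in each component's Deployment — the mount path and ConfigMap name here are the conventional ones, assumed for illustration:

```yaml
# Deployment fragment -- mounting the separated job-config ConfigMap
spec:
  containers:
    - name: deck                              # same pattern for tide, plank, etc.
      args:
        - --config-path=/etc/config/config.yaml
        - --job-config-path=/etc/job-config   # tell the component where the jobs live
      volumeMounts:
        - name: job-config
          mountPath: /etc/job-config
          readOnly: true
  volumes:
    - name: job-config
      configMap:
        name: job-config
```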
With periodics, presubmits, and so on. This is how a successful PR looks. And another thing I think is very cool: approvers and owners labels. With this plugin you can manage things at a very fine granularity. What this means is that I can put an OWNERS file in every folder of my repo, and in this OWNERS file I can list separate owners, and only they can manage that folder. I think it's very cool, because every job in the configuration of your Prow instance can be owned and managed by different people or different teams. This could make your day. You also have emeritus approvers, which are like retired approvers, and you have filters: filters apply labels to the PR based on regular expressions. It's not so complex, but the owners labels are very useful when you have, I don't know, 50 labels to manage. One thing that is not spelled out in the Prow documentation is that you need to create all the labels in the repository: if a plugin applies a label that doesn't exist yet, you need to create it yourself. But we created an image that does this for you. Look: lgtm, approve.

Questions on plugins? Yeah, sorry? Reviewers and approvers. Reviewers don't have the same, let's say, credentials: a reviewer can be just a contributor, while approvers are like owners of the GitHub repo. More questions? Yeah. From two people, not only one? Yeah, yeah, that's the default behavior: one gives the lgtm, and then another one needs to approve it. And you have another plugin called Blunderbuss, if I remember correctly, which is in charge of automatically assigning reviewers to a PR, which is cool: you don't need to go through the GitHub repo and assign the reviewers manually. More questions? Okay. Yeah.
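An OWNERS file of the kind described, placed in any folder of the repo — usernames and the label are placeholders:

```yaml
# OWNERS -- scoped to the folder it lives in (placeholder usernames)
approvers:           # can /approve; roughly the repo owners
  - alice
reviewers:           # can /lgtm; can be plain contributors
  - bob
  - carol
emeritus_approvers:  # former approvers, no longer counted
  - dave
labels:              # applied automatically to PRs touching this folder
  - area/jobs
```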
No, it just assigns: if you have a PR and nobody is handling it, Blunderbuss will look at the OWNERS file, see the reviewers, and assign the PR to them to manage the lgtm, review the code, and all those kinds of things. Is there something else? I don't think so. Okay.

Automatic management of the GitHub repo is quite easy. You just need to configure the webhook on the GitHub repo, or on the organization, which is the normal way. The components involved are Plank, Tide, and Deck, and this is the usual behavior: Plank runs the utility images and writes to the GCS bucket, Tide manages the merging and testing of the jobs, and Deck shows you the status of all these actions. And again, use checkconfig, because if you have separated the configuration from the jobs, the configuration alone will not give you any error and you could upload it anyway; you need to run checkconfig to get the errors in the jobs. Tide will not say anything about whether this is working or not. But you can, for example, check the job-config ConfigMap to know how many jobs you have. Here we have all our ConfigMaps; I will get job-config, for example. As you can see, I have one periodic job from this file and two jobs from this other file. The thing is, if you have four jobs and one of them is missing from the ConfigMap, you have an error in your configuration, but Tide will not say anything about it. So this is the way to check things manually if you are moving fast because you are pushing hard. Any questions on that? It's quite easy to configure your repo.

Okay, let's go with testing. We have three kinds of jobs here: presubmit, postsubmit, and periodic. The periodic ones don't have any context, as I said, so you need to put all the extra arguments on the job, and it looks something like this.
extra_refs points at the right repo to clone and manage. You can put whatever you want there; it's an array, so you can fill it with whatever repos you want. The node selector, as I said, is where you can point at a cluster; in this case it's "primary," but if you don't put anything here, that's fine, the job runs in the same cluster where Prow is running. Then this is the image definition, and the commands and arguments, as usual for Kubernetes.

Presubmit jobs run when you have a PR already submitted to GitHub and still open; they are the tests in execution on it. To get the PR merged, all the tests need to finish successfully; if not, it fails and you need to check the pods and so on. This is the configuration for a presubmit. You don't need any context here, because you already have it: from the PR you already have the repository and the organization. You can set a few things, like skip_report, "I don't need to report"; always_run, so the tests run on every commit you push to the PR, or you can disable that; decorate, and I always use decoration; the name of the job, and that's it.

After that we have the postsubmit, which runs after you have already merged the PR: everything you need to execute on Prow at that point. For example, once my Kubernetes version is done, tested, and merged, I need to push all the images to Google Cloud. You can do that with this one. This is what the job looks like; it's more or less the same as a presubmit. That's it; it's not more complex than that. Any questions?

Notifications. As I said, the issue with notifications is with periodics: you don't get any notification. With Crier you can do it: you define in the configuration what the behavior should be in order to get notified.
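Putting the job kinds together as YAML — this is a sketch, with org/repo, image, cluster name, and commands invented for illustration; `extra_refs` is what gives the context-less periodic a repo, as described above:

```yaml
# job-config fragment -- a periodic with extra_refs and a presubmit (placeholder names)
periodics:
  - name: nightly-e2e
    interval: 24h
    decorate: true
    extra_refs:                 # periodics have no PR context, so name the repo here
      - org: shadowman
        repo: my-repo
        base_ref: master
    cluster: baremetal          # optionally route the job to another build cluster
    spec:
      containers:
        - image: golang:1.21
          command: ["make", "e2e"]

presubmits:
  shadowman/my-repo:
    - name: unit-tests
      always_run: true          # run on every push to the PR
      skip_report: false        # do report the result back to GitHub
      decorate: true
      spec:
        containers:
          - image: golang:1.21
            command: ["go", "test", "./..."]
```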
For example, you need to configure the Crier RBAC and the Crier Deployment, because by default it's not deployed on your Prow instance. Also, you need to create a generic secret containing the Slack token. That's pretty much it; with that you have it prepared, and with this configuration you have it fully configured. Job types to report: for example, since we already have the GitHub notifications, we don't need those reported by Crier too, so I avoid putting presubmits and postsubmits here; I just have the periodic and batch ones. The job states to report could be just "ping me when it fails": the Slack channel gets notified, using the template "this job, whatever its name, has failed at this link," where the link points to the logs. That's pretty much it; it's not a big thing. Any questions?

Other similar projects? Jenkins? Yeah, but it's not built on the same base. You could use Zuul, for example, which is what the OpenStack folks use; there are many of them. The thing is, Prow by itself is not a complete thing: you don't get higher-level components like pipelines here. You'd want something like Tekton to join this Prow setup and manage all the pipelines and those things. You also have something called external plugins, which is not explained in this deck, but it's fine; you could even use an external Jenkins server to execute some jobs. External plugins are okay, but you need to wire them into the Prow instance manually; it's a bit messy, but very useful. Here you have all the references I used; there are many of them, and that's it. Questions? The thing is, not for now; they are working on a GitLab integration, but it's not working yet. The PR is ongoing. Yeah, you just need to add it as a kube node and you are fine.
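The Crier/Slack setup just described, sketched as config — field names follow recent Prow versions' Slack reporter, and the channel name and template wording are placeholders:

```yaml
# config.yaml fragment -- Crier Slack reporter (placeholder channel/template)
slack_reporter_configs:
  "*":                              # apply to every org
    channel: prow-alerts            # placeholder Slack channel
    job_types_to_report:            # GitHub already covers pre/postsubmits
      - periodic
      - batch
    job_states_to_report:           # "ping me when it fails"
      - failure
      - error
    report_template: "Job {{.Spec.Job}} failed: {{.Status.URL}}"
```

The template is a Go template evaluated against the ProwJob, so the link resolves to the job's log URL.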
Well, it depends on what you need. I mean, you have one test namespace where all the pods run, all the test pods. They don't carry the repo in their names, so you cannot identify them just by looking; you need to go to the PR and see which build number is being executed. For now, you don't have any way to separate the jobs per repository. All right, thank you.