So thank you everyone for staying so late. It's the last talk, so I promise I won't take too much of your time. And thank you, Ramya, for covering most of the testing strategies; I'll skip over those. Thank you, Ramya. My name is Rohit Vyas and I'm a quality engineering manager at Red Hat, mostly involved with the automation and CI side of a test suite. So this talk is about revamping the QE automation pipeline — QE automation testing on Tekton pipelines. It's mostly about my journey of bringing our integration testing together from an end-to-end point of view. If you think about the CI/CD process as a whole, it covers building, testing, and deploying your software, but what I'm talking about here is just the CI process on the testing side. That CI process starts with the trigger: identify at what stage your test suite gets triggered. Then there's the repository from which you check out the code — it could be GitHub, it could be GitLab. You also need to lint your code: if you're using JavaScript, that could be ESLint; it could be pylint for Python or yamllint for YAML, depending on the languages you use for your integration testing. Then you have a provisioning environment where your SUT, your system under test, will run. What is your provisioning environment — is it a cloud, is it bare metal? What kind of provisioning tool are you going to use? Some teams use Ansible as a provisioner, some use Terraform; it totally depends on the set of tests your test engineers have built. The SUT setup can be in AWS or GCP, it could be standalone bare-metal machines using Vagrant or libvirt, or maybe it's running in Podman containers. So that's the SUT part.
The test execution consists of your Ansible tests, your Python tests, JMeter for performance; your API tests could be in SOAP or run in headless mode using Postman, or it could be Ginkgo for a Go framework. Then there's test logging — not the last part, but the most important one, because that's where your stakeholders keep an eye. The logging and analysis part is also very interesting, because what everyone cares about is what percentage of test cases is passing and what the product readiness looks like for your entire workflow. Last but not least, notifications. Your notifications should be actionable: at any point where your test execution fails, or where there's a deviation from your workflow, you can have your test notifications integrated with Google Chat or Slack, or even get PDF reports over Gmail. So this is the slice of the CI/CD workflow where QE CI scenarios fit in. There are so many CI tools available that it's pretty hard to identify which one suits your requirements. There's Jenkins, where you can write Jenkinsfiles, group work into multiple stages, and build a pipeline out of those steps. You have GitHub Actions, which is good for integrating with your Git repos, where you run workflow files for your integration tests. You have Argo Workflows, where you can run your execution pipeline across multiple clusters. And then you have Tekton, which is what I'm targeting today. Just to give a brief introduction: Tekton is a set of Kubernetes resources that helps you build a pipeline out of multiple tasks and stages. This is one of the use cases I was working on in my last project.
We were working on an operator certification use case, where we needed to certify different vendors' operators and also different OpenStack plugins, with tons of different test scenarios to cover. There are proper triggers — it could be triggering our test suite twice a day, or on every Git check-in. Then we check out the workspace, which is definitely part of the CI side; we provision and build test data, provision on multiple environments like OpenStack, then configure and set up an OpenShift cluster. That could be using minikube, CodeReady Containers on a bare-metal machine, or just a kind cluster. Then we deploy and configure the operator side of it, trigger the GitHub Actions pipeline on the upstream side, and poll for event notifications from the GitHub comments to start and initiate our integration test suite. The integration test suites were in many different formats: UI tests in Cypress, backend tests in Python, API tests in Go, Go catalog tests, tests that certify different Docker and container images, and backend DB tests — and all of this had to be integrated together. It requires a lot of configuration, so we bundled all the tests into container images, and we keep those images in a registry to ensure we always run the latest images for our test suite. The images also go through vulnerability scanning with Clair, so we're always updated on the health of our images. The last part is logging: we push all the test results into our centralized logger, which is Report Portal. We use Report Portal for pushing all of our test artifacts, test reports, the UI test logs, everything.
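As a minimal sketch of how a test suite can be bundled as a container image the way we described — the base image, paths, and test command here are illustrative assumptions, not our actual setup:

```dockerfile
# Illustrative Containerfile for bundling an integration test suite.
# Base image, paths, and command are placeholders, not the real project's.
FROM registry.access.redhat.com/ubi9/python-311

WORKDIR /tests

# Install the test dependencies pinned in the repo.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the test suites (API, backend, DB) into the image.
COPY tests/ ./tests/

# Default to running the whole suite; CI can override the command per stage.
CMD ["pytest", "tests/", "--junitxml=/tests/results/junit.xml"]
```

The image is then pushed to a registry and scanned, so every pipeline run pulls a known, up-to-date version of the tests.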
And then after that we have the notification section, where we push all the notifications to our Google Chat, Slack, and Gmail. A quick introduction to Tekton CI: Tekton is a set of Kubernetes resources that helps us build up our pipeline, and a pipeline has multiple levels. The smallest unit is the step, where you execute your commands and test scenarios by passing arguments. Then a Task bundles the steps; you can think of tasks as the different stages of a pipeline. The Pipeline is also a Kubernetes custom resource, used to bundle and execute your tasks, whether you run them sequentially or in parallel. A pipeline runs via a PipelineRun custom resource, which can in turn be created by a TriggerTemplate. The TriggerTemplate responds to trigger events, and you listen for those events with Tekton's EventListener service, which handles different kinds of triggers — webhooks, for example — or you can just use cron-based triggers directly. Why did we shift to Tekton? Because most of the workload we were running was container-based; since the tests were bundled as containers, it was easy to keep the same format and just execute our containerized test scenarios. Also, the applications can be deployed on the same cluster, which was convenient for us. Tekton also provides a good mechanism for executing via both CLI and GUI: it has a CLI client called tkn, which is easy to integrate, and it provides a GUI dashboard from which you can manually run tests on demand. Another huge benefit of Tekton is its catalog, where you have predefined tasks that you can directly integrate into your pipeline.
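The step/task structure described above can be sketched as a Tekton Task in YAML — a minimal sketch; the task name, image, and script are hypothetical placeholders, not our actual task:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-smoke-tests            # hypothetical task name
spec:
  params:
    - name: test-marker
      type: string
      default: smoke
  steps:
    # Each step runs as a container inside the task's pod.
    - name: pytest
      image: quay.io/example/test-suite:latest   # placeholder image
      script: |
        #!/usr/bin/env bash
        pytest -m "$(params.test-marker)" --junitxml=/workspace/junit.xml
```

The task runs as a single pod, with each step executing as a container in that pod, so the bundled test image plugs in directly as the step's image.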
It has a huge community, and the best part is that tasks are reusable: you write a task once and you can reuse it in multiple pipeline scenarios. Also, Tekton tasks and pipelines are just YAML. You write YAML defining the steps you need to execute and the images your task runs on; each task runs as a pod, and each step executes as a container within that pod. Defining your tasks in a simple YAML format is easy for QE engineers to understand, and it's easily maintainable, because any change on the test side is reflected in the image — you just update the image, so it's quite easy to keep the whole setup current. You then integrate those tasks into a pipeline, which defines whether your tasks execute in parallel or sequentially, and at what point. The pipeline also defines how it is triggered; you can pass all the arguments as input and output resources. Then you have the TriggerTemplate and EventListener: if you want to listen for Git events, you can create an EventListener for a push hook or something like that, and you can also use it as a generic webhook — GitHub Actions has an HTTP repository-dispatch workflow, for example — so you can trigger your EventListener directly from any HTTP request.
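To make the wiring concrete, here is a minimal sketch of a Pipeline running tasks sequentially and in parallel, plus a TriggerTemplate/EventListener pair that starts it from a webhook. All resource and task names are hypothetical:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: qe-ci-pipeline              # hypothetical name
spec:
  tasks:
    - name: provision-env
      taskRef:
        name: provision-bare-metal  # placeholder task
    # These two run in parallel, after provisioning completes.
    - name: ui-tests
      runAfter: ["provision-env"]
      taskRef:
        name: cypress-tests
    - name: api-tests
      runAfter: ["provision-env"]
      taskRef:
        name: go-api-tests
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: qe-ci-trigger-template
spec:
  resourcetemplates:
    # Each trigger event stamps out a fresh PipelineRun.
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: qe-ci-run-
      spec:
        pipelineRef:
          name: qe-ci-pipeline
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: qe-ci-listener              # exposes an HTTP endpoint for webhooks
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      template:
        ref: qe-ci-trigger-template
```

Any HTTP request hitting the EventListener's service — a GitHub push hook or a repository-dispatch call — can then create a PipelineRun from the template.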
All right, this is a short video of the Tekton catalog, where you can search directly for a task. Say you want a task for pytest, as one example: you go to the pytest entry, download the task, install it from the CLI into your cluster, and then give a reference to that task from your pipeline. Once the task is installed in your Kubernetes cluster and available, you can use it as a reference. So I just downloaded the pytest task, it's in my Kubernetes cluster, and I can reference it and use it in my pipeline. Okay, all right. This is one of the screenshots of my operator pipeline: we have the environment setup, then a provisioning area where we provision multiple bare-metal machines and deploy a Kubernetes cluster on them using CRC and kind. For the Git event we trigger on the GitHub side, we wait for the polling agent to fetch the information from GitHub, and then we execute multiple tests in parallel — the UI, backend, DB, and API sides. Finally, we submit the test run, and after that we get the notification of completion. This is the demo of how you can execute a pipeline from the CLI as well as from the UI. It's a simple example of running your CI pipeline using the Tekton CLI: you can override the default parameters, or if you want to keep the existing defaults you can use --use-param-defaults. Once you run it from there, it gets executed and you can see it on the UI side. You can follow the logs from the terminal, and you can also see them on the OpenShift side under the task logs. So that was an example of how we trigger the tests from the CLI. You can also trigger tests directly from the UI; it depends on how often you need to run your tests on demand.
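The CLI flow above can be sketched with standard tkn commands — assuming the tkn client is installed and pointed at a cluster with Tekton; the pipeline and parameter names are hypothetical:

```shell
# Install the pytest task from the Tekton catalog (Tekton Hub) into the cluster
tkn hub install task pytest

# Start the pipeline, overriding one parameter and keeping the rest at defaults
tkn pipeline start qe-ci-pipeline \
    --param test-marker=smoke \
    --use-param-defaults \
    --showlog        # stream the task logs to the terminal
```

The same run then appears in the Tekton dashboard and under the OpenShift console's pipeline logs.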
So you can directly go in, override the variables from there, and execute the test. All right. For test logging and notification we use Report Portal, where we push all our test results into launches. It supports a launch format that aggregates test results from multiple sources. You can also configure customized dashboards, based on which you can understand the failure analysis for your entire test suite and for that program. The notification part consists of custom messages that we send to Google Chat, carrying information from the Tekton logs as well as from the test results — so we customized our notification format. This is an example of what the Report Portal analysis looks like. You have your test results here, and you can see the published test logs. It also has a capability of predicting failures, which saves us a lot of time in identifying the reason for a failure: whether it's a system-related issue, a product-related issue, or an automation bug. We can identify that from the test log analysis, and it automatically reflects these defect types, rather than investing time manually. Okay, I see I'm running short on time, so I'll hurry up. You can see the dashboard here, which has the predictions for the test run, and also the Google Chat notifications we've customized, with information from the Tekton logs; you can navigate directly to the Tekton dashboard as well as the Report Portal launch instance from there. All right, I won't take much more time. These are the resources and references you can refer to for Tekton and OpenShift Pipelines. We have our project, operator-pipelines, here on GitHub, which you can take as a reference, and there are also tutorials for Jenkins and other CI tools like GitHub Actions.
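For the Report Portal integration mentioned above, here is one concrete sketch: with Python tests, the pytest-reportportal plugin pushes results into a launch via a few settings in pytest.ini. The endpoint, key, and names below are placeholders, and the parameter names follow recent plugin versions (older versions use rp_uuid instead of rp_api_key):

```ini
# pytest.ini — illustrative pytest-reportportal configuration
# (endpoint, key, project, and launch name are all placeholders)
[pytest]
rp_endpoint = https://reportportal.example.com
rp_api_key = <your-api-key>
rp_project = qe_integration
rp_launch = operator-certification-ci
```

Running the suite with the plugin enabled then creates a launch under that project, which feeds the dashboards and failure analysis.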
You can just refer to those. All right, thank you. Thank you everyone for being so patient — I thought I covered the talk on time. Let's thank the speaker. You were actually right on time; I'm not sure what happened with the timer, so don't worry about it — it's only a minute over. Well then, thanks everyone for joining, and let's give our speaker a round of applause. Thank you everyone.