All right, so finally we have this working, and we can start. So I'm Rodrigo, and this is "Pushing our QA way upstream." So, who am I? I've been using OpenStack since 2013, when I was doing my master's and needed to run a lot of data processing in the cloud. Running that kind of processing in AWS is not that simple, so we used our local OpenStack deployment to run my processing. That helped me a lot, because it gave me this kind of user background; back then, I used it a lot. Later, in 2014, I joined a team at the university where I started working on OpenStack development, mostly in Keystone. As of today, I'm a core reviewer in Keystone, and at Red Hat I try to hunt bugs so we can deliver better products.

So, the agenda of this presentation: first, I'll talk a bit about why you should care about submitting your tests upstream. Then I'll give a brief overview of QA in OpenStack, covering its components and different parts. After showing those parts, I'll walk through a real example of a test that I had to submit upstream to make it work, together with a CI job. And at the end I'll talk a bit more about the QA process in general: after we submit this stuff upstream, are we done or not?

So, why upstream? First, when you submit something upstream, you automatically have community support. Once your test is there, if it needs improvements or fixes, or something is not working well, you're not the only person directly responsible for taking care of it: lots of other people can help you out and fix whatever you might need. Also, if it's upstream, it's automated, and I think everyone here knows the importance of having an automated test environment: you catch all your regressions, and so on.
And if you manage to add the tests in the same cycle as the feature, you avoid finding bugs later, and the need for backporting bug fixes and things like that.

Here I'll talk about some of the main QA and Infra tools that we already have in OpenStack. To understand them better, we need to be familiar with the OpenStack CI and how it works. One part of OpenStack that everyone knows about is its review process, but we don't have only humans doing the reviews; we also have the check gate and the merge gate. The check gate is the first set of tests run against a change that is submitted to Gerrit. Then there is the review process, the human part, where other developers give feedback on our change and suggest improvements. And finally there is the merge gate, a final set of tests run against a patch before it is merged into the code base.

Now I'll talk about some of the main projects in OpenStack QA and Infra, so we can get a brief overview of how it all works. DevStack: I think everyone who starts playing with OpenStack starts by using DevStack. It basically provides a set of scripts that creates a cloud environment, for development of course, and it's the project that runs in the VMs of the OpenStack CI. For that side, we have the devstack-gate project, which is responsible for setting up the VMs and everything else for the CI. We also have DevStack plugins, a mechanism the DevStack project provides for when you need a customized environment and your changes can't go into the DevStack tree itself. You also have Tempest.
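Before we get to Tempest, a quick aside to make the DevStack plugin mechanism concrete: a plugin is enabled with a single line in DevStack's `local.conf`. This is a hedged sketch; the plugin name and repository URL are invented for illustration, and the passwords are throwaway values:

```ini
# local.conf -- minimal DevStack configuration (values are examples)
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Pull a customized environment into the deployment without
# changing the DevStack tree itself
enable_plugin my-custom-plugin https://git.example.org/my-custom-plugin
```

DevStack clones the named repository and calls the hooks it ships during the deployment phases.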
Tempest is basically the project that validates OpenStack. It provides a set of tests to run against the APIs, and it also provides a type of test called scenario tests, which validate more complex features in a more complete way. Tempest also offers a plugin interface, so you can write tests in the Tempest way without putting them in the Tempest project itself: you can create your own project to hold Tempest-like tests.

We have Grenade, which is the upgrade testing project. Grenade basically validates upgrades between OpenStack versions. It doesn't only check whether, for example, a SQL migration is going to break my environment; it also checks that I'm not doing something stupid, like deleting all the users, projects, or servers. Also, we have Rally. Rally is more of a benchmark tool, so its main objective is to validate OpenStack's scalability. It works like this: it first deploys a cloud, then validates it with Tempest, and then runs some overload tests, so it can check, for example, that you're not taking too long to retrieve a token or something like that. It can also reuse an already-created deployment; it doesn't necessarily need to create a new one.

Zuul is a cool project because it's the one that handles the CI: for every change you send to the OpenStack Gerrit, Zuul is responsible for running tests against it. It's also responsible for figuring out patch dependencies. I don't know if you've ever needed something like this, but sometimes you create a new environment variable in DevStack that is going to be used in Tempest, for example, and using Zuul's capabilities we can declare the dependency between the patches so they run in the correct environment. Zuul also provides a status page where you can check the queues and everything else the project is responsible for. The last item is not a project, but another OpenStack feature: third-party CI.
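Back to Zuul's dependency mechanism for a moment: it is driven by a footer in the Git commit message, which Zuul reads so it can test the dependent patches together. A hedged sketch of such a commit message; both Change-Ids below are invented:

```
Add Tempest test that needs the new DevStack variable

This test reads a setting introduced by the DevStack change
referenced below, so the CI must test both patches together.

Depends-On: I6c3fe2a30f09a17fdbd64b38e9990e4c832af180
Change-Id: Ib52c91f0a2f74d2e9d0f6a8815a3b47a2c11de09
```

With the `Depends-On` footer in place, the CI builds the dependency's code into the test environment before running the jobs for this change.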
It's for when you have something that is not officially supported by OpenStack but is still important to you. For example, here we have a screenshot of some third-party jobs that run against each change we send. Using this, you can test against systems that are private, or anything else, and still have a vote on each OpenStack change in Gerrit.

So, now that we know some of the main projects in the OpenStack Infra and QA area, let's see how we can use them in our favor to create a new job and tests, using a real example that I worked on this year. This was the feature I needed to verify: it's called federated identity, and, if some of you know it, it's the feature in OpenStack that enables an external identity provider, with its own set of users, to authenticate against an OpenStack cloud. As you can see, the federated identity tests needed a custom environment, so they don't run against a regular DevStack deployment. The feature is also really specific to Keystone: it doesn't interact with all the services, it's really just about the authentication process. So we thought a Tempest plugin was the best option here.

To implement the tests for this feature, we first created a Tempest plugin for Keystone; it was the first time Keystone had a Tempest plugin. Then we added a new non-voting job to the OpenStack CI. A non-voting job still reports its vote, but that vote doesn't count towards whether a patch can merge or not. Later, we added the first set of tests, and once we saw the tests working without any problems, we could turn the job from a non-voting into a voting one. So, the first step was to create the Tempest plugin for Keystone, and for that we used the tempest-plugin-cookiecutter, a tool provided by the Tempest project that basically creates the bare bones of a running and ready Tempest plugin.
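Under the hood, what the cookiecutter wires up is a `tempest.test_plugins` entry point in the plugin project's `setup.cfg`, which is how Tempest discovers plugins installed alongside it. A sketch with illustrative module and class names:

```ini
# setup.cfg (fragment) of the project carrying the plugin
# -- module path and class name are examples
[entry_points]
tempest.test_plugins =
    keystone_tests = keystone_tempest_plugin.plugin:KeystoneTempestPlugin
```

The class named on the right implements Tempest's plugin interface (pointing at the test directory and registering configuration options), so a plain `pip install` of the project is enough for Tempest to find the tests.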
It's really nice; it eases a lot of the process. Basically, we used the tempest-plugin-cookiecutter to create a new folder inside the Keystone tree to hold these tests. Here we have the set of files created by the tempest-plugin-cookiecutter tool, and here is the actual review that was sent to Keystone to introduce this new plugin. After it merged, we could run these tests using this command line, which runs all the tests from any Tempest plugin matching the keystone keyword; we run this command from inside any Tempest folder.

After that, we created a non-voting job. Since we already had the basic structure, we could add a job that runs only the tests of this plugin. We first added it as non-voting, so we could check that it works properly. New jobs, in case you're not familiar, are added to the OpenStack CI via the project-config repo, under the OpenStack Infra umbrella. Here is the example of the job that we added, and it's really simple: you can see we have the "nv" there, and this "nv" suffix is enough to tell the OpenStack CI that the job is non-voting. And here is the job layout. As you can see, there are two important environment variables here: one enables the plugin, and the other says the job should run with this regex, so it will only run the Keystone Tempest plugin tests. After it merged, every change submitted to Keystone had these two new jobs, and as you can see, they are non-voting.

At this point we have a basic structure to write tests and a job to run them, so we can for sure start writing new tests. For the federated identity feature, we could first add API tests, which are really simple tests that just exercise the REST API of the component.
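To give a feel for the shape of such API tests, here is a minimal, self-contained sketch in the Tempest style. The client, method names, and response fields are invented stand-ins: a real test would subclass a Tempest base class and use a generated service client talking to Keystone's federation endpoints, but the pattern is the same — call one REST operation, assert on the response body.

```python
import unittest
from unittest import mock


class IdentityProvidersTest(unittest.TestCase):
    """Sketch of a Tempest-style API test (names are illustrative)."""

    def setUp(self):
        # In a real Tempest plugin this would be a service client issuing
        # HTTP requests to Keystone; a mock keeps the sketch self-contained.
        self.idp_client = mock.Mock()
        self.idp_client.create_identity_provider.return_value = {
            'identity_provider': {'id': 'myidp', 'enabled': True}
        }

    def test_create_identity_provider(self):
        # Exercise one REST operation and assert on the returned body.
        body = self.idp_client.create_identity_provider('myidp', enabled=True)
        idp = body['identity_provider']
        self.assertEqual('myidp', idp['id'])
        self.assertTrue(idp['enabled'])
```

In a real Tempest workspace, tests like these can be selected by name with something like `tempest run --regex keystone`, which is the kind of command line shown on the slide.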
With those, we can be sure that the API works properly, and we can then write the actual tests that we want: the tests for the authentication workflow. After that, we have real tests inside the Keystone Tempest plugin and a non-voting CI job. What we need next is to make the job voting, so that any new change arriving in Keystone that breaks this feature won't land anymore. This was done by just removing the "nv" suffix from the job, and after that we added it to the merge gate as well, so we run this job twice: once in the check gate and once in the merge gate.

So, right now we have the basic tests and we have the job; what we need next is to write the actual tests for the new functionality. But this functionality, as we said before, needs a custom environment, so it looks like it needs a DevStack plugin, right? With the custom environment in place, we can then write the new set of tests. The DevStack plugin is still under review, and it was written by Kristi, who is right here. It basically has this structure, with the necessary stuff to build an environment that is ready to run everything related to federation. And then there is the scenario test: using the environment created by the DevStack plugin (we already have the API tests to make sure everything works at the API level), this scenario test goes through the whole authentication process of the federation feature. It's also under review; it's still a work in progress, but yeah, we are going to get there.

So, suppose you have everything merged upstream: your tests for the feature are there, they're running, and there's a job for them. Am I done? I'm not sure if you saw this shot from a previous summit; it's a really cool one. Basically, the upstream CI runs against DevStack, so it's kind of not enough: it's good, it's necessary, but not enough. We also have the manual tests.
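Before going into manual testing, one note on the DevStack plugin just shown: such plugins follow a conventional layout, which DevStack looks for in the enabled repository. A hedged sketch (the `files/` directory is optional and illustrative; `plugin.sh` and `settings` are the names DevStack expects):

```
keystone/devstack/
├── plugin.sh   # called by stack.sh with a mode and phase, e.g.
│               #   "stack install", "stack post-config", "unstack"
├── settings    # default variables and enable_service lines
└── files/      # extra configuration shipped with the plugin
```

The phase hooks in `plugin.sh` are where the federation-specific setup (extra packages, Keystone and Apache configuration) gets done.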
For example, suppose you have a feature that can't be automated, so you need a human interacting with the environment in order to verify it. As of today, for instance, we can't automate tests for Horizon, so if we want to verify that part of the federated identity feature, we need to go ourselves, deploy the environment, and check manually that Horizon is working properly.

We also have some challenges related to custom environments. For example, when you are going to launch a product based on the upstream project, your product might have a different set of build tools; it can even run on a different operating system, have different package versions, and so on. What this means is that you need to rerun your upstream tests against your downstream product. You may also have tests that take too long to run. Imagine a test that takes, I don't know, a day to run: it doesn't scale to run it against every change submitted to Gerrit, because changes are submitted far more frequently than your job can give an answer. So basically it can't scale. This kind of test you can run against your downstream product, because usually it has fewer features than the upstream project itself. Also, you can have proprietary environments that are private, with clients who don't want to disclose them, and this might be another reason why you need to run a different set of tests downstream.

So, some final thoughts. We saw why we should submit our tests upstream; we saw some of the tools that OpenStack provides for that, and how we can use them in our favor, with a quick example that I think is very illustrative. And at the end, we saw that having tests only upstream is not enough: you also need to rerun them against your downstream product, add some different tests, and add some manual tests as well. So that's it. Thank you.
If you have questions, you can go to the mic, and I'll be here to try to answer them. OK, no questions? Thank you.