Hello, everybody. How are you doing? I'm here today to talk about automating your manual test suite with Tempest plugin scenario tests in Rally. This whole thing came about because we have a lot of legacy tests that are completely manual right now. I started talking to the QA guys about a month ago, trying to find a way to move away from this manual testing toward automation. That's where I made this prototype using Tempest scenario tests in Rally, and it was quite easy to do, actually. I'm going to take you through it in a moment. Presenting with me is Daniel Mellado from Red Hat. I'm a senior software engineer with Dell.

[A few minutes of technical difficulties with the presentation laptop.]

All right. So, background. We have an existing test framework with roughly 200 test cases. Some are relatively simple; some of them are huge, like taking a compute node down for maintenance and moving all the VMs off it, that kind of thing. The platform the QA team uses right now for manual testing is called TestRail. To me it's more of a project management tool than an actual test suite. You put your tests in there as a script, and you can track all kinds of information around them, which is useful. Some of the benefits: it manages the tests well.
It does really good reporting. And it's wide open: it's manual testing, so you can do whatever you want; you have a lot of freedom in that respect. The obvious problem with manual testing is that it doesn't scale. As we bring on more and more distributions, the test grid becomes huge. We're supporting more distributions and more hardware, and it becomes impossible for a QA team to handle. Especially in large scenarios with a lot of steps, there's a lot of room for error. And it's also just a waste: you have resources that should be fixing problems, not doing something that can be automated. This is just a screen capture from TestRail. It's got a variety of reports, and you can see the script in there that they would use for the manual test.

So after talking to them, I looked at what was out there for tools. I was already familiar with Tempest; I do a little bit of upstreaming there. I've worked with Rally, and I started looking at the scenario framework, which really offers a ton of power. It allows you to do many different things that cut across different APIs, and it looked like it fit the bill. So I went on from there and created a prototype.

Okay, let me give you a quick overview. Out of curiosity, how many of you know what Tempest is? Please raise your hands. Okay, quite a few. I'm impressed. I'd say that Tempest is one of the most widely used and least known OpenStack projects. It's being used in the gate for every patch: whoever sends a patch, it goes through a ton of different gates, and it gets to Tempest, which is testing all the time. So if you already knew that, congrats again. Tempest is meant to be used from the gate, from a DevStack, and ideally from a production cloud too. Basically, it starts with the API and does whatever the API allows you to do.
In addition, Tempest doesn't have branches like most of the OpenStack projects; we use tags instead, because the OpenStack APIs are backwards compatible. So we use one Tempest tag for about three different releases. Now, if you take into consideration that it runs for all the different projects and all the different gates, at some point it starts to get huge. Let's say I'm a great guy and I know Tempest, so I can review Tempest patches, and let's say I can do Neutron too. But what would happen if I also had to review Neutron, Sahara, Ceilometer, Ironic, Ironic Inspector, whatever? That would be impossible. So as Tempest started to grow huge, we decided to implement a plugin interface that allows tests to move back out to the individual projects, so people can review them with the right knowledge. If you check that link, there are about 50 plugins now. We decided to keep only the core projects in Tempest itself, mainly Nova, because as of now there are also plugins for Cinder and Glance. Basically, that's how we want people to handle it.

So that was a little introduction to the Tempest plugin, but I also want to explain how we handle this. When we release a Tempest tag, it's tied to the global requirements. What are the global requirements? Those are the requirements released by DevStack and by the OpenStack release process whenever we get an actual release; right now that's for Newton, and next it will be for Ocata. When we install a plugin, that means installing either another project or an out-of-tree plugin (I'll be covering that): basically, we are installing an entry point that allows Tempest to discover the new subset of tests. To do this, we usually use virtual environments, though you could use system site-packages too, with some drawbacks. When you are using DevStack and you want to use system site-packages, it's really recommended to use the tox environment.
So how many of you have been using the tox environment? Okay, not that many. I guess we need to advertise this a little more. It allows you, ideally, to discover all the plugins that are installed within your installation of OpenStack, whether DevStack or TripleO or whatever, and run them there.

Okay, so how do I create a Tempest plugin? First, let me briefly introduce what the stable APIs are. If I want to create a Tempest plugin, I only need to import tempest.lib, tempest.config, and the plugin discovery interface. What are those? tempest.lib holds the service clients and the stable interfaces, which won't change. So if you used to pin your plugin to a specific commit of Tempest, that shouldn't be needed anymore: you're using these stable interfaces, and they aren't going to change, so you shouldn't need to touch anything in your plugin. Then there's tempest.config, which is basically the Tempest configuration file, tempest.conf, and then the discovery interface for all the plugins.

Okay, let's say I want to create a plugin from scratch. I could either code it myself or use Cookiecutter. If you don't know Cookiecutter, it's basically a templating system: you just install Cookiecutter, point it at that repo, and it creates a skeleton for all your plugin needs. As I said before, we could choose to have our plugin in-tree, which means sharing the same repo as the project. Neutron, for instance, implements it this way: you have the project, and the Tempest tests and the Tempest plugin live within it. That's not the recommended way.
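To make the plugin interface concrete, here is a minimal sketch of the shape of a plugin class. The real base class is `tempest.test_discover.plugins.TempestPlugin`; to keep the sketch self-contained and runnable without Tempest installed, a local stand-in class takes its place, and the package name `my_tempest_plugin` is made up.

```python
import abc
import os


class TempestPlugin(abc.ABC):
    """Local stand-in for tempest.test_discover.plugins.TempestPlugin,
    so this sketch runs without Tempest installed."""

    @abc.abstractmethod
    def load_tests(self):
        """Return (full_test_dir, base_path) for test discovery."""

    def get_opt_lists(self):
        """Config options this plugin adds to tempest.conf (none here)."""
        return []


class MyScenarioPlugin(TempestPlugin):
    """Minimal plugin: point Tempest at the directory holding our tests."""

    def load_tests(self):
        # In a real plugin this path would be derived from __file__;
        # using the working directory keeps the sketch runnable anywhere.
        base_path = os.getcwd()
        full_test_dir = os.path.join(base_path, "my_tempest_plugin", "tests")
        return full_test_dir, base_path


plugin = MyScenarioPlugin()
test_dir, top_dir = plugin.load_tests()
print(os.path.basename(test_dir))  # → tests
```

Tempest calls `load_tests` on every registered plugin and adds the returned directory to its discovery path, which is how the plugin's tests show up alongside the in-tree ones.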
At the QA team, we think it's going to be much easier to have a dedicated repo, because that allows you to decouple the requirements of the project from those of the test project. Otherwise, say I'm a packager and I'm going to package everything into an RPM or DEB or whatever: if I just want to install Neutron or whatever project, I'm going to need to install a ton of test dependencies along with it. It also lets you decouple another thing: as I said, Tempest is branchless, and so are the tests, but the projects aren't necessarily branchless. So you'd be able to mix the best of both worlds. The only drawback of this out-of-tree approach is that it forces you to land two patches: one to the main project, while the tests would need to land in the separate test repo.

Okay, here comes the fun part. I told you that Tempest basically exercises all the APIs we have, and that we split repos out as plugins and are dropping tests into them, like Sahara, Neutron, and so forth. But what happens if I want a real-life example of a test? That's where we get to the scenario tests, which exercise what a real-life use case would be. Like: I want to deploy a VM, upload an image, create a network, create a router, have them ping each other, and so forth. Those are the tests that, for the most part, haven't yet been pushed out to the different repos. Those scenario managers also allow you to easily create whatever you need in your methods. But David will give you some examples with Rally, so I'll let him go. Thank you, guys.

Sure. So, the Tempest scenario framework. The key class there is tempest.scenario.manager.ScenarioTest. It has hooks into all kinds of things you can do in OpenStack: create an image, create an instance and wait for it to start.
Attach a volume to that instance. All kinds of operations there are basically macro functions that let you do this. The base class is ScenarioTest, but there are also a couple of subclasses specialized for specific scenarios and specific APIs: networks, bare metal, encryption, etc. Right now my prototype test just extends the base scenario test; I'll probably create my own scenario test base class and inherit from that, because we have some specialized things we'll probably need to add.

From there, we get into Rally. Are you familiar with Rally at all? There are three main pieces to Rally. There's a deploy engine; I've never worked with that piece, so I can't tell you much about it. There's a benchmark engine, which allows you to do performance testing and create scenarios. In some ways those scenarios are similar to what you can do with the Tempest scenarios, but on the benchmarking side the tests are designed for running concurrently and putting load on the system. My goal isn't to do that; it's just to validate that the features are working, so I stuck with using the Tempest framework. And there's verification. Verification in Rally just executes Tempest and stores the results in a database. At the end of the day, when you say `rally verify`, it's running Tempest. Here's the basic architecture of Rally; I stole this from their site, so you can see it anytime on their wiki landing page.

As I said, the prototype is only using Rally's verification features; I'm not touching any other parts of it right now. Basically, I'm just using it to execute Tempest, store the results in a database, and have a database I can report off of. Why not just use Tempest alone? You could: you could use this plugin in standalone Tempest. I like Rally because of persistence. It has a relational database that stores all your results for reporting.
There are some good reports in there, and there are easy hooks for creating new reports, which Tempest by itself doesn't necessarily have. It also supports multiple deployments, so you can do a hub-and-spoke kind of thing if you have several stamps running OpenStack: a separate deployment for each one, with all the results stored in a single repository. And as I said, you could use this Tempest plugin on its own, in just Tempest.

So, putting it all together: the first thing I needed to do was implement a Tempest plugin, the basic shell of the framework; learn about the scenario framework; write a test; install the plugin into the Rally deployment, which has its own copy of Tempest for that particular cloud you're talking to; run the test; report on the results; and maybe make tweaks to the test, reinstall the plugin, and repeat. Here's the basic flow of information. The call enters Rally, goes into the specific environment for your deployment, applies the regular expression, which locates the test that you installed with your plugin, and executes that test. Then it stores that data back in the Rally database, and then you can report on it.

Unlike Daniel, I didn't know about Cookiecutter, so I went to another existing plugin and plagiarized. This is the basic source tree. The main things to be concerned with are setup.cfg and how you lay out the tree. You can also put API tests in your plugin; I'm not doing any API tests, I'm going to do a pure scenario plugin. You can see my test in the tree there. setup.cfg is where you add your plugin's entry point; you can see the entry_points directive there, pointing to my plugin class. Then in your plugin class, you have a load_tests method, which you point to the directory where your tests are going to live.
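The entry-point directive in setup.cfg looks roughly like this. The package and class names (`my_tempest_plugin`, `MyScenarioPlugin`) are placeholders for your own, but `tempest.test_plugins` is the entry-point group Tempest scans for plugins:

```ini
[entry_points]
tempest.test_plugins =
    my_tempest_plugin = my_tempest_plugin.plugin:MyScenarioPlugin
```

Once the package is pip-installed into the same environment as Tempest, that entry point is enough for Tempest to discover and load the plugin.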
When the plugin is loaded, it searches that path and makes all those tests available. This is just a basic overview of my test. Most importantly, you need to document the test properly: you're basically copying, word for word, the script from the TestRail application into a docstring, so the test clearly states what it's trying to do. Then you add the decorator, give it a unique GUID for your idempotent ID, and write your method.

Oh, man. [The slide isn't displaying.] This is actually some pretty good stuff, so I'll walk through it verbally. This assumes you already have a deployment in Rally. The easiest way to create one, if you have an environment file with your OpenStack credentials, the Keystone URL and all that, is to source it and then say `rally deployment create --fromenv` and give it a name. It will pull everything it needs from your environment and create the deployment for you, and then you can install Tempest and immediately start testing.

So, the next steps: once you have a deployment, you install Tempest. You do `rally verify install`, which clones the Tempest repository into its own virtual environment. It'll have a funny path, a Tempest for-deployment directory named with your deployment GUID, and under there is the root directory of your Tempest install for that deployment. Once you have Tempest installed, you do `rally verify installplugin` and point it at the source path for your plugin, and it installs that plugin into Tempest for that deployment. Then it's as simple as calling `rally verify start`, passing in a regex for your tests, and it will run. You could have seen all that if that gray square weren't on the screen. And at the end of the day, you can do `rally verify results` with the GUID of your verification run.
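The test anatomy described above (a docstring copied from the TestRail script, an idempotent-ID decorator, then the method) can be sketched as follows. In a real plugin the class would inherit from the Tempest scenario manager and the decorator would be `tempest.lib.decorators.idempotent_id`; here a stdlib `unittest` case and a hand-rolled decorator stand in so the sketch runs anywhere, and the GUID, test name, and helper names are all hypothetical.

```python
import unittest
import uuid


def idempotent_id(test_id):
    """Stand-in for tempest.lib.decorators.idempotent_id: tags a test
    with a stable UUID so it can be tracked even if it gets renamed."""
    uuid.UUID(test_id)  # fail fast if the ID isn't a valid UUID

    def decorator(func):
        func.idempotent_id = test_id
        return func
    return decorator


class TestBootAndPing(unittest.TestCase):
    """Boot an instance and verify network connectivity.

    Steps (copied from the manual TestRail script):
      1. Upload an image.
      2. Boot an instance from it and wait for ACTIVE.
      3. Ping the instance's floating IP.
    """

    @idempotent_id("9b1a2f6e-3c44-4a8d-8f0e-5d2b7c9e1a33")  # made-up GUID
    def test_boot_and_ping(self):
        # A real scenario test would call manager helpers such as
        # create_server() and ping_ip_address(); asserting True keeps
        # this skeleton runnable.
        self.assertTrue(True)


# Run the case the way a test runner would discover it.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBootAndPing)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # → True
```

The regex you pass to `rally verify start` matches against names like `TestBootAndPing.test_boot_and_ping`, which is why naming the tests consistently matters.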
I added a few flags here so we get HTML output with a file name. There's only a single test right now, but as you can see, it gives you some useful information. There are also ways to compare results, so you can look at test runs over time to see if a test is getting slower, or if it's getting slower on a particular stamp, that kind of thing.

Say you create your test, you run it, and something's wrong: there's an error, or something just isn't right. All you have to do is modify the Python code in place and install the plugin again. `rally verify installplugin`, pointed at the same source, will overwrite the existing plugin, and then you can rerun the test and see the changes take effect. It's a pretty good workflow, pretty productive.

At this point I had a demo. It's on that machine, so I can't run it, but it wasn't very exciting anyway; it went over basically what I just talked about: creating a deployment, installing Tempest for the deployment, installing the plugin, and running it.

So there you have it. Conclusions. Implementing 200 test cases is going to take us a lot of time, because some of them are very complicated, and some of them might not be doable at all. There are tests that have to call out to the operating system, do things there, and look at logs, and that's out of scope for what we can do with the scenario framework. Rally isn't a requirement: if you make a plugin and open-source it, which is what we plan to do with ours once I hit critical mass, you can just run it through Tempest. Rally is a win for me, though. It's very easy to install; I didn't mention that, but you can get going with Rally in 15 minutes even if you've never run a Rally environment before. And my plan is to open-source this eventually.
I don't know where it's going to live, but I'd like to get maybe some kind of general scenarios for the community. It seems like something that would be useful across different organizations. There are some existing scenario tests in-tree in Tempest, but there's not a ton of them, and while they're good, they test very basic things. Maybe the more edge-case scenarios could go into this community plugin. And lastly, there are a few of our tests that only run through Horizon; they don't use the CLI. I did some initial looking at the Selenium WebDriver Python bindings, and it looks like it may be possible to do UI tests, but I can't guarantee it. That's something I'll be looking at in the near future. And that is it. Feel free to ask questions. We don't have a mic for questions, so I'll ask you to speak up so I can hear you.

Yes, sir? I'm sorry, I couldn't hear you. Say again? That's the goal, right: we have this whole manual test suite that does all kinds of things that don't have test coverage. Well, we don't have any APIs that are outside of OpenStack, so we're testing OpenStack APIs. I'm not quite sure I follow you. Yes? No, I'm just using what's in Tempest. Yes, sir? What do you mean, would you create a plugin and define its requirements file? Your deployment has to be configured already. Before all this happens, we have a deployment plan, a reference architecture; all the bits are there and running as we want to test them, so it's in a state where we're ready to test. My plugin isn't interacting on that level at all. Also, when you create a plugin, you could add additional configuration and additional stuff that your plugin would need. When you create a plugin, you implement a class; if you don't need anything, you just put `pass` and nothing happens.
So you could add whatever your plugin needs, like extra configs or something like that. Yeah, I didn't mention that; that's a good point. You can add your own config options inside your plugin to extend the existing tempest.conf. Yes? Well, with the regexes you can go by name, and you can also use tags, which I'm not super familiar with. Yeah? Well, you could add extra configuration and put decorators there; you can have a skip flag. Say there's a test that only runs on MariaDB: when you create your plugin, you could add extra configuration and maybe a skip decorator for that. You own your plugin, so you can do whatever you want: you can do a skip-if-whatever. You'd say, if this isn't MariaDB, skip, basically, and you can put that logic into a decorator, I believe. That's where you'd do it, yeah.

Yes, sir? No, I haven't heard of it, and that's something I've got to do some homework on, because I don't want to reinvent the wheel either. Exactly. Right. Yeah, I appreciate it. Thank you. Please? Oh, yes, sir. I'm not aware of a UI for Rally yet; I don't know if there's a project. I'm sorry. Thank you.

So, I wrote a report plugin for Rally which does comparisons. I noticed, just when I was putting this together, that compare has been deprecated, so I have to figure out why. But it did just that: you had two test runs and you could do a compare. You could set a threshold to say, if the difference is less than a second, call it not different; you want to find the things that are taking a second longer, things like that. But as far as a UI goes, as far as I know, there's still no UI for Rally. You could always wrap this in a UI, right? You could make calls and build your own UI, because it's all RESTful. I'm sorry? It works at the subunit level.
So you get your results, it processes the subunit test results, and it spits out a file. It just builds a dict and then transforms that. You can compare two test results, and it just renders them; you can export as HTML, JSON, or just tab-delimited as well. Yeah, I'm going to have to talk to Boris and find out why, because if you do `rally verify compare`, you can't: it says it's deprecated, and I don't know why. It says to use task results, but there's no compare for task results, and it's not a task, so I'm not really sure what's going on with it.

Okay. I apologize deeply for the technical difficulties. Oh, thank you. Hold on, we have a prize. Thank you for coming, guys.