How are you doing, guys? So I'm here today to talk about verifying your OpenStack cloud with Rally and Tempest. A little introduction: I'm Dave Patterson, senior software engineer with Dell. Originally, I started in OpenStack working on the Crowbar project. I don't know if you're familiar with that project; it was one of the first OpenStack deployers. We've been contributing to the Tempest and Rally projects since about Juno, so I'm still pretty new at actually submitting stuff upstream.

One of the keys to any OpenStack solution, including ours, is the ability to verify that the deployed solution's bits function properly on the defined reference architecture. Typically, this involves a manually scripted smoke test, running the Tempest suite and analyzing the results, and comparing the test results between different stamps and previous release results.

Originally, Tempest's primary use case was as a gating mechanism in the OpenStack CI. So when you try to submit code to an upstream repository, Tempest runs against your code, and if it fails, your change is rejected immediately and you need to figure out why it failed. The current trend is that Tempest is being used more and more out in the field as a verification tool: you have a reference architecture, you deploy OpenStack on it, run Tempest against it, and see what passes and what fails. One of my goals has been to improve the usability of Tempest and Rally so that the verification use case is simple and safe for your cloud. In the near future, automated performance and scalability testing will become more prominent in my team's objectives, which will leverage the benchmarking features in Rally. However, this talk is just about the verification side of Rally.

A little bit of background. Tempest is the OpenStack integration test suite. It's a battery of tests for OpenStack API validation, scenarios, and other specific tests useful in validating an OpenStack deployment. Tempest is an important part of the DevStack gate job in the OpenStack CI; you can check out this link here, which talks about the DevStack gate. Beyond verification, Tempest results are also part of the RefStack and DefCore certification process. So right now, if you want to get the OpenStack stamp of approval, you deploy RefStack, run it against your cloud, and submit the results to DefCore, and if you pass, you get the certification. Tempest is what's used for that right now. I heard today, actually, that there may be something else used for verification as well, but right now Tempest is the only verification tool they're using.

The keys to Tempest are that it can run against any cloud, be it DevStack or a 1,000-node cluster, for verifying functionality, and that it strives for coverage of all OpenStack features. Coverage is a constant work in progress: as new features are added, you constantly have to write new tests to provide coverage. Another goal is that Tempest attempts to clean up after itself, but that's easier said than done. And just a note, an important recent directive is that Tempest is trying to get every project to take over its own tests. So for instance, I believe Nova has completed this, where those tests are no longer in the Tempest tree, they're in the Nova tree. As time goes on, you're going to see that pattern continue, so if you're starting a new project, you're going to have to own your own Tempest integration tests.

Here is a basic high-level architecture of Tempest. This is a little stale.
I pulled this off the internet, and I think it's getting a little bit stale, but it's a basic overview of how Tempest works.

And now a little background on Rally. Rally is a tool with a central database that stores the various test results from the verification and benchmarking subsystems, and it can be configured to test any number of OpenStack deployments. Rally is made up of four main services around the central database repository. First is an OpenStack deployment engine, which leverages DevStack or Fuel to simplify deployment; I haven't used that functionality yet, since we already had a cloud, so I didn't have to worry about deploying one. The benchmarking and profiling engine lets you create parameterized load on the cloud based on a large repository of benchmarks. The verification engine uses Tempest as the verifier. It's possible there will be other OpenStack verification frameworks down the road, but right now Tempest is the only one; there is a layer of abstraction there, though, so somebody else could come up with another verification tool and it could plug into Rally. And lastly there is a reporting service for viewing and outputting everything you have run. Here is a basic high-level architecture of Rally; you can see the four main services and the Rally database.

So why use Rally to run Tempest? You can download the bits of Tempest, install it, and run it on your own. What you run into right away is some complications, especially if you're supporting more than one deployment, like we do at Dell. For every OpenStack deployment you wish to verify, you need to configure Tempest to test it, and out of the box Tempest only supports a single deployment at a time. To test a different deployment, you have to reconfigure Tempest and save off the configuration file somewhere, and there are currently 278 configuration parameters available in Tempest, so configuration can be a bear; it's kind of a daunting task. Tempest doesn't have a central repository to store results across multiple clouds: when you execute Tempest, all the test results go into its test repository, and that's pretty much the end of the story. If you want to compare them to other results, you have to get hold of the other repository and be able to parse the files in there, which are in subunit format, and again, that's not an easy thing. Tempest has no built-in functionality for comparing test results, and it doesn't have any reporting capabilities built into it.

What Rally brings to the table: it's very easy to deploy and configure, taking about 10 minutes to deploy, it can support any number of OpenStack deployments, and it stores the deployment information and verification test results in a central database repository. This is good because the test results are available forever in the database; the database can be backed up and merged with another database, so there are a lot of good reasons for having back-end persistence for your test results. The results from multiple deployments can be compared and analyzed. Rally has built-in reporting features for viewing and comparing results in a variety of formats: HTML, CSV, and I think there might be JSON as well. Plus, there's benchmarking and scalability testing, which I don't really touch in this talk, and I don't have a lot of experience there yet, but I will shortly.

So what I'm going to cover in my demonstration is installing Rally, configuring Rally to talk to my OpenStack deployments, and a little bit about my setup.
Let's see, where is VMware? So I have three instances, three Fedora instances, two running DevStack for my clouds, so I have two clouds, and then a Rally server running on another Fedora instance. That gives you the kind of architecture I was using for the demo.

Installing Rally is pretty simple. You just clone the repository, run sudo ./install_rally.sh, and it all happens. That's pretty simple. To configure Rally for an existing OpenStack deployment, the easiest way I found is to log into the cloud, and I have that up here. Yep. You log into the cloud, go to the admin project, then go to Access & Security, API Access, and download the RC file. Sorry, I don't know what happened there. That file has the administrative credentials you need, and it also provides the Keystone URL. So you download that file, and I create one of these RC files on the Rally node for each of the deployments I'm trying to handle. Then you source the file, which sets all those environment variables, and you simply run rally deployment create with the name you want to give your deployment, telling it to read from the environment, which pulls in all the stuff from the environment variables you just sourced from the RC file. Do that for each of your deployments, and they're loaded.

Next up, if you want to verify and run Tempest against a deployment, you just run rally verify start, specify the deployment, and off it goes.

Next, reporting. Once Tempest is complete, you can do a rally verify list, which will show you a list of all the verifications you've run. In this case, I ran two, one against each of my deployments. Then you can view them as an HTML file if you like: you run rally verify results with the ID of the result you want, specify HTML as the format and the file you want to pipe it to, and you'll end up with something like this. This is the report of the Tempest results. Failures are highlighted in red, and you get a stack trace. It's a pretty handy way of viewing your Tempest output.

Next is comparing. This is a feature I added to Rally a few months ago. Say you have run two different verifications against two different clouds, and you want to see what the delta is between them. I specify some similar parameters, HTML output and an output file, and this bit here is important: the threshold. If you don't set a high threshold, you'll get a lot of deltas, because timing is one of the deltas you'll get. So I set my threshold to 1,000 percent, meaning I only want drastically different timing results in my comparison. The result of that will give you something like this, which is the comparison report. It gives you the type of delta: for instance, this test was removed in the second run, and the value change here means it failed in the first verification attempt and was OK in the second. You can see if any new tests were added since the first verification. And then you can see here where the threshold came into play, a timing difference of 1,000 percent or more: it took hundredths of a second here and much longer there, because it ran OK in one case and was a failure in the other, and I'm guessing it was some kind of timeout if you were to explore it. That's what the comparison report looks like.
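To make the install and configuration steps concrete, here is a rough sketch of the shell commands, assuming a deployment I'm calling cloud1 and an RC file named cloud1-openrc.sh; those names are just examples, and exact flag spellings can vary between Rally releases.

    # Install Rally from source on the Rally node
    git clone https://github.com/openstack/rally.git
    cd rally
    sudo ./install_rally.sh

    # Load the admin credentials downloaded from Horizon (Access & Security -> API Access)
    source ~/cloud1-openrc.sh

    # Register the deployment in Rally using the environment variables just sourced
    rally deployment create --fromenv --name cloud1

    # Repeat the source and create steps for each additional cloud (e.g. cloud2)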
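Running a verification and pulling the report looks roughly like this; the UUID is a placeholder for whatever rally verify list prints, and the option names are from the Rally release I was using, so check rally verify --help on yours.

    # Kick off a Tempest run against one of the registered deployments
    rally verify start --deployment cloud1

    # List every verification stored in the Rally database
    rally verify list

    # Render a single run as an HTML report (failures show in red with stack traces)
    rally verify results --uuid <verification-uuid> --html --output-file cloud1-results.html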
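And the comparison step, again as a sketch from memory: the two UUIDs come from rally verify list, and the threshold is the percentage timing difference below which deltas are suppressed, so 1000 keeps only the drastic ones. The exact option names may differ in your Rally version.

    # Compare two verification runs, ignoring timing deltas smaller than 1000%
    rally verify compare --uuid-1 <first-uuid> --uuid-2 <second-uuid> \
        --html --output-file compare.html --threshold 1000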
So from there, another tool I created upstream in Tempest is the ability to clean up after a Tempest run. When you run Tempest, there's typically quite a bit of cruft left over, so by running this cleanup utility you can try to restore the deployment to where it was before you did the verification.

You can run the tool against a Rally-installed Tempest, but it takes a little bit of trickery to get there, because when Rally creates a deployment, it installs Tempest under .rally/tempest/for-deployment-<deployment UUID> in your home directory. Tempest lives in there; it's a full Tempest installation. To use the cleanup tool, you have to go in there, and the first thing you have to do is copy the tempest.conf into /etc/tempest. Then you need to install the requirements that the cleanup tool will need, and set your PYTHONPATH to the path Tempest is installed in. Once that's done, you're pretty much ready to run the cleanup tool: you cd to the cmd directory under there, and in there is the cleanup tool.

The very first thing you want to do is initialize the saved state, which takes a picture of your cloud before Tempest is run. In this case, we did the init saved state after the run, so there's a lot of stuff in there to clean up, but here's what that file looks like. It creates a file called saved_state.json, and what this basically is, is all the stuff you want to keep and not delete: a whitelist of objects and tenants that will not be touched by the cleanup tool. The next thing you want to do is a dry run to see what you're actually going to clean up; that's cleanup with the dry-run option, and it creates a file called dry_run.json. There are a lot of objects in here that it's going to clean up, so it's a pretty big JSON file, but if I go down to the bottom it's a bit more useful: you can see the users and tenants down here that it's going to blow away, and they have the test names in them. Most likely those tests failed, so the tests' own cleanup never ran, and this is a bunch of cruft that was actually left behind in this cloud. Once you're satisfied with the dry run and you're comfortable with deleting all of those objects, you just run cleanup without any arguments, it goes and blows them all away, and you're done.
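Pulled together, the cleanup workflow looks roughly like this. The deployment path, the location of tempest.conf inside it, and invoking cleanup.py directly are assumptions based on how my Rally and Tempest were laid out at the time, so adjust for your installation.

    # Work inside the Tempest tree Rally installed for this deployment
    cd ~/.rally/tempest/for-deployment-<deployment-uuid>

    # Make the generated configuration visible where Tempest expects it
    sudo mkdir -p /etc/tempest
    sudo cp tempest.conf /etc/tempest/

    # Install Tempest's requirements and point PYTHONPATH at this tree
    pip install -r requirements.txt
    export PYTHONPATH=$(pwd)

    cd tempest/cmd

    # 1. Snapshot the objects and tenants to keep (writes saved_state.json)
    python cleanup.py --init-saved-state

    # 2. Preview what would be deleted without touching anything (writes dry_run.json)
    python cleanup.py --dry-run

    # 3. Actually delete the leftover cruft
    python cleanup.py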
Now, some future stuff I would like to do around this. Currently, Rally has the responsibility of configuring and running the Tempest tests. Those concerns should really live down at the Tempest level: there should be a Tempest CLI that Rally just calls, with commands like tempest config create, tempest run with some arguments, and tempest cleanup with some arguments. Rally shouldn't have to worry about any of that; it should just be able to call those interfaces. It doesn't work that way right now; Rally is actually doing that work itself. Adding these features will not only improve Tempest in general, but will also provide much more loosely coupled integration points for other projects like Rally. Another feature I'm working on, which I didn't get to this cycle, is verify import: if you had Tempest results that were produced just by running Tempest directly out in the field, you could import them into Rally and still use all the Rally tooling for comparing the results.

So the things I'm currently working on are the Tempest CLI, which I got partway through this release and hope to spend some time on soon and get into Liberty, and Rally verification import. If anybody's interested in getting involved, please feel free to reach out to me; you can see these two blueprints, the Tempest CLI and Rally verification import, and I would love to get some assistance there.

Here's some more info on where you can find links to Tempest and the Tempest documentation, Rally, and the downstream Red Hat configuration script, which is very handy if you're going to deploy Tempest without Rally, because it greatly reduces the complexity of creating your tempest.conf. When I have to install Tempest without Rally, I always use this tool; even if I'm using Tempest from master, I use this config_tempest.py script. And that's pretty much it. Do you have any questions for me? If you do, please use the mic, because this is being recorded.

Are there any plans to plug this into something like Jenkins so that you can test continuously? Yeah, I'm not sure about that. I'd have to ask Boris; Boris is the lead of the Rally team, and I'm not sure whether that's on the roadmap. Anybody else? No? I guess that's it. You get 10 minutes of your life back. Thank you.