Good, pretty good. All right, my name's Dane Fichter. I'm a software developer at Johns Hopkins Applied Physics Lab. I've been working on OpenStack for about two years now, mostly upstream Nova stuff, a little bit of playing around with Barbican, some security features. But today I'm here to talk to you about something that's pretty different from all that. I'm gonna be talking to you about what I would call a sort of newer thing in OpenStack: it's called a Tempest plugin. Just for my own education, so we can perhaps move through the beginning of this presentation a little quicker: how many people know what Tempest is? Okay, so everybody. How many people know what a Tempest plugin is? Okay, a decent amount. All right, cool. So what we're gonna do today is walk through a little bit of background. I'm going to tell you how to get a Tempest plugin started from scratch. We're gonna talk about the anatomy of what's going on inside the Tempest plugin. I'm gonna impart some of my amazing wisdom to you, and then we're gonna run through a really quick demo to show how the testing workflow works once you have a Tempest plugin. So what is Tempest? Many of you raised your hand, so I assume you already know what it is, but basically Tempest is a project that exercises a cloud like an end user and allows you to replace manual tests with an automated testing framework. What can Tempest test? Tempest can exercise the APIs directly, interacting with them through a REST client, or it can do something a little bit more complicated called a scenario test, which is where it uses multiple interactions with services to test something more complicated than just whether the API works the way it should. A good example of this is what we have here, which is a boot-from-volume scenario test.
Basically what's going on is Tempest acts as an end user would if they were trying to boot from a volume, and that's a good way for you to make sure that the code path you would follow to boot from a volume is still functional. So what's a Tempest plugin? A Tempest plugin is essentially a little helper project for Tempest. If you have a project that's sort of outside of the big tent, and I use Barbican as an example here because I implemented the Barbican Tempest plugin, what you would do is implement one of these plugins to work alongside Tempest to test the functionality that's supported by your service. This is in response to the Tempest project essentially narrowing its scope to the core services. Why would you create one? I think we touched on this a little bit, and for many of you in the room this is pretty obvious, but essentially it's a good way for you to make sure your API still works, a good way to make sure your features still work, and a good way to make sure that no one merges code that would break either of those things. You make a plugin because your project isn't a core project and you can't merge your scenarios or your API testing directly into Tempest. In my case, I made the Barbican Tempest plugin because the Nova PTL told me to. So the first step is to create a new project. As of Pike, the official guidance from the QA community is that you should maintain your Tempest plugin as a separate project, not as part of the existing service that you're testing. Most of these steps are documented in the project creator's guide, so I'm not gonna spend too much time on them, but essentially you create the new Launchpad project, you make your Git repo, your mirror, you create your bug tracker, add core reviewers, create a release team, and then set up your CI stuff.
So your basic Jenkins jobs, your basic PyPI jobs, that sort of thing, and then the Tempest plugin cookiecutter provides the basic structure of how the plugin should look. The next step is to add an entry point to your setup.cfg file. This is basically how Tempest discovers your Tempest plugin: as Tempest is spinning up, it iterates through Python paths until it can find plugins, and then it reads from the plugins all the necessary testing. Okay, so now we're gonna get down into the guts of what's going on inside these Tempest plugins. I'm using the Barbican Tempest plugin as a case study, so think about that and triangulate to your own use case if you see some differences in how you might wanna do things. The first thing we're gonna talk about is the service clients. The Barbican API has several resources: it has resources to keep track of consumers, secrets, secret metadata, quotas, containers, and orders, and mirroring each of those resources we implement a service client in our Tempest plugin. The functionality of these service clients is essentially to use REST requests to talk to each of those API resources, and it's very simple: they format payloads as JSON, dump them into a REST request, and send them along to the right method of the right API resource, with very little error checking. They're basically just a pass-through. So here's an example of a method that you might find in a service client. Let's say you're talking to the secret client and you wanna create a secret. In the secret client class, you have a create secret method that, as I said, just passes along the payload essentially. Once you have the service clients implemented, you need to test them, so the next thing you wanna do is implement your API tests.
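For reference, the entry point described here lives in setup.cfg under the `tempest.test_plugins` group. The fragment below is roughly how the Barbican plugin registers itself; your plugin's name and module path will differ:

```ini
[entry_points]
tempest.test_plugins =
    barbican_tests = barbican_tempest_plugin.plugin:BarbicanTempestPlugin
```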
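Since a service client method is little more than a pass-through, the pattern can be sketched in a few lines. This is a self-contained illustration only: `FakeRestClient` is a stand-in for the HTTP layer (the real plugin builds on Tempest's REST client), and the URL and payload are made up.

```python
import json


class FakeRestClient:
    """Stand-in for Tempest's REST client, just for illustration."""

    def __init__(self):
        self.requests = []  # record (method, url, body) tuples

    def post(self, url, body):
        self.requests.append(("POST", url, body))
        # A real client would make the HTTP call and return (resp, body).
        return {"status": "201"}, json.dumps(
            {"secret_ref": "http://localhost/v1/secrets/1234"})


class SecretClient(FakeRestClient):
    """Pass-through client for a /v1/secrets API resource."""

    def create_secret(self, **kwargs):
        # Format the payload as JSON, dump it into a REST request, and
        # send it to the right method of the right resource -- no error
        # checking beyond what the base client does.
        resp, body = self.post("/v1/secrets", json.dumps(kwargs))
        return json.loads(body)


client = SecretClient()
secret = client.create_secret(name="my-secret", payload="s3cr3t")
```

The point is that the client does no validation of its own; it just serializes what it is given and hands back the parsed response.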
And as you can see, just as we had a mirroring of service clients to API resources, we have the same sort of thing going on with the API tests. You basically want one test suite corresponding to each resource of your service's API. And here's just another diagram to give you an idea of how these things inherit from Tempest: you're basically pulling in Tempest's test case interface to set up your API tests. The base API test is not required by any means, but we'll see in the next slide how it helps with the setup and cleanup of resources and makes the labor of writing API tests much easier. So here's an example of why you would want a base API test. You use the base API test essentially as a way to track resources that you're creating in the course of testing your API, and that means that during your API tests you don't really have to be as cognizant about cleaning up; it can sort of clean up for you. Using the example of the create secret method: let's say you have a secret API test that's testing this create secret method. The direct effect of calling create secret is that some blob of data is now stored in the key manager, right? So your secret API test calls the create secret method, and if you just exited the test or failed the test at that point, you would still be stuck with this blob stored in the Barbican key manager. And obviously, for CI that's running all the time, you don't necessarily want that. So what you do is overload the create secret method in the Barbican base API test class to keep track of the actual secret itself, and then when the base class goes through its life cycle of setup and teardown, during the teardown part you can explicitly delete that secret and clean up after yourself, basically. And here's a little code example to show you how you would do that.
This little convention of using a "creates" function decorator is everywhere in Tempest, so I leveraged it for the Tempest plugin. But it's essentially what I just said: you have a resource list, and you can tag methods to make sure that whatever resource is created by calling that method is stored, so that later you can clean it up. And the nice byproduct of this is that your API tests are very simple. You don't have to really worry about cleanup as much; it's just a matter of creating that unique ID for the test and then calling the method of the base class. So that's API testing, pretty straightforward. We're gonna get on to something a little more complicated now: the scenario tests. And I think this is the capability that people are more interested in with the Tempest plugin. This allows you to make sure that your features stay intact. Say someone merges a patch to Nova and breaks one of your features. If you have a gate running your Tempest plugin's scenario tests, testing these actual features in a live cloud, that failure will be exposed and it'll prevent that code from merging. So here's an example of a test that you might want to do with the Barbican Tempest plugin: a volume encryption test. The steps, as a user would do them, you can see on the side there: basically you would store the certificate, sign an image, store the image in Glance, have Nova boot the signed image, and then create an encrypted volume in Cinder and use Nova to attach the volume to the instance. If you were going to periodically test OpenStack manually, this would be kind of an annoying thing to have to do, and I know that because I've done it many, many times. The upshot of the Tempest plugin is that this becomes an automated process and you no longer have to do it by hand.
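The resource-tracking convention described above can be sketched like this. Everything here is a simplified, self-contained stand-in (the fake client, the class and resource names) rather than the plugin's actual code, but it shows the shape: a decorator records what a base-class method creates, and teardown deletes it.

```python
import functools
import unittest


def creates(resource):
    """Tag a method so whatever it creates is recorded for later cleanup."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            ref = fn(self, *args, **kwargs)
            self.created_resources.setdefault(resource, []).append(ref)
            return ref
        return wrapper
    return decorator


class FakeSecretClient:
    """In-memory stand-in for the real secret service client."""

    def __init__(self):
        self.store = {}
        self._next = 0

    def create_secret(self, **kwargs):
        self._next += 1
        ref = "secret-%d" % self._next
        self.store[ref] = kwargs
        return ref

    def delete_secret(self, ref):
        del self.store[ref]


class BaseKeyManagerTest(unittest.TestCase):
    """Base test that tracks created resources and deletes them on teardown."""

    def setUp(self):
        self.client = FakeSecretClient()
        self.created_resources = {}

    def tearDown(self):
        # Explicitly clean up everything the test created.
        for ref in self.created_resources.get("secrets", []):
            self.client.delete_secret(ref)

    @creates("secrets")
    def create_secret(self, **kwargs):
        return self.client.create_secret(**kwargs)


class SecretsTest(BaseKeyManagerTest):
    def test_create_secret(self):
        # The test itself stays simple: no cleanup code needed here.
        ref = self.create_secret(payload="s3cr3t")
        self.assertIn(ref, self.client.store)


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SecretsTest))
```

The payoff is exactly what the talk describes: the concrete test only creates the resource and asserts on it, while cleanup happens for free in the base class.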
Quite similar to the way the API test class inheritance works, it makes things a lot easier if you have a base scenario test that you inherit from. It can help with setup and teardown, and it can also help reduce code duplication if there are things that many of your scenario tests are going to do. So here's an example base scenario test. This isn't exactly perfectly accurate Python, but it gives you a notional idea of how you would want a common method in this base test that many of the scenario tests would use. And then here's an example scenario test. Again, because a lot of the heavy lifting is done behind the scenes by the Tempest framework itself, by the clients you've already implemented, and by your base tests, these scenario tests tend to be pretty short and sweet and readable, which is good. Okay, so some general tips and tricks to make this whole process a little less laborious. Implement your plugin in the order I've described here. The service clients are really the basic, easiest part, and the API tests test these clients. So a good way to make sure that everything is working properly with your service clients is, as you're implementing a service client, implement the parallel API tests at the same time, and continue to implement and test service clients alongside each other, so that you know your API tests and your service clients work as you expect them to. The service clients working as you expect is very important for the scenario tests: you don't want a situation where your service client has some small error and you're trying to debug a complicated scenario test at the same time. Following along with your project's API docs is a great way to get ideas for how to implement the API tests.
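The base-scenario pattern described above can be sketched as follows. As with the slide, this is notional rather than real plugin code: the clients are fakes, the signing is faked, and all the names are made up for illustration.

```python
class FakeImageClient:
    """Stand-in image-service client."""

    def create_image(self, name, data, signature=None):
        return {"id": "img-1", "name": name, "signature": signature}


class FakeServerClient:
    """Stand-in compute client."""

    def create_server(self, image_id):
        return {"id": "srv-1", "image_id": image_id, "status": "ACTIVE"}


class ScenarioTestBase:
    """Holds setup plus the common steps many scenarios share."""

    def setUp(self):
        self.images = FakeImageClient()
        self.servers = FakeServerClient()

    def create_signed_image(self, name, data):
        # A common step many scenarios need: "sign" an image and store
        # it in the image service.  Signing is faked here.
        signature = "sig(%s)" % name
        return self.images.create_image(name, data, signature=signature)


class SignedImageScenarioTest(ScenarioTestBase):
    def test_boot_signed_image(self):
        # The heavy lifting lives in the base class and the clients, so
        # the scenario itself stays short and readable.
        image = self.create_signed_image("cirros-signed", b"...")
        server = self.servers.create_server(image["id"])
        assert server["status"] == "ACTIVE"
        assert server["image_id"] == image["id"]


test = SignedImageScenarioTest()
test.setUp()
test.test_boot_signed_image()  # raises if the scenario's checks fail
```

Because the shared step lives in the base class, a second scenario (say, booting a signed image and then attaching an encrypted volume) would only add its own extra steps.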
Generally, with any of these OpenStack services, if you look at the API documentation there are pretty specific examples of how each API method is used, and I found that you can basically crib straight from those and write API tests very quickly. A key thing that's becoming more important as Tempest gets refactored is that you wanna make sure you stick to what Tempest advertises as its stable interfaces. You don't wanna be pulling some obscure thing down from Tempest, and then a week from now Tempest obliterates it or changes it or the Tempest community decides to do something else with it. Maintaining a test environment and testing as you go is key; that's pretty common sense. I always use DevStack precautions, which means I don't run any of this stuff directly on a machine that I can't get rid of easily. I think the key to working with DevStack and maintaining your sanity is deleting it often and restarting a VM. Okay, so now we're gonna walk through a little demo of how this testing workflow works. The first thing you need to do once you've set up DevStack is to configure it. So we're going to enable the Barbican plugin, and we're going to set the auth options so that the stacking process is a little easier. After we've done that, we're gonna run our stack command, and those of you who work with DevStack often can tell that this clip is sped up a significant amount. Okay, and there are our default users, admin and demo, and the password is what we configured in the local.conf. I've always found that Tempest is way easier to use in a virtual environment, so we're gonna go ahead and set that up now. Those of you who are familiar with virtual environments, this should look very standard: all we're doing is creating the virtual environment and activating it. The next step is installing Tempest in our virtual environment.
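For reference, the local.conf changes described in the demo above amount to something like the fragment below. The repo URL and passwords are illustrative, not the exact values from the clip:

```ini
[[local|localrc]]
# Enable the Barbican DevStack plugin
enable_plugin barbican https://opendev.org/openstack/barbican
# Set auth options up front so stack.sh doesn't prompt for them
ADMIN_PASSWORD=secretadmin
DATABASE_PASSWORD=secretadmin
RABBIT_PASSWORD=secretadmin
SERVICE_PASSWORD=secretadmin
```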
I always like to install this from source, just because if I wanna tinker with something in Tempest itself and see how it works, installing from source makes that process a lot easier. Then you need to do this tempest init phase. This creates this "cloud one" directory, which is basically Tempest's abstraction of the DevStack instance that you have running on this machine. The next thing you need to do is configure this cloud representation so that Tempest actually knows where the cloud is running. I'll actually run through this clip again because it's pretty quick, but basically we're editing tempest.conf, and the majority of what we're doing here is pointing Tempest at our Keystone install. This is how Tempest knows where to set up users, and from there it knows how to talk to Nova, Glance, et cetera. So once we have Tempest up, we need to incorporate our plugin with Tempest. And the fortunate part about the fact that Tempest finds plugins via Python paths is that this process of setting up the plugin is actually extremely easy now. It's as simple as cloning the source and running pip install, and Tempest will run your plugin's tests now. This next step is optional, but I always do it as a sanity check: if you list tests and grep for the name of your project, you can make sure that Tempest actually knows your tests are there and will run them. So now I'm gonna walk through sort of a cheesy example of how you could find a bug in an existing project with the Tempest plugin. What's going to happen here is I've put a bad line of code in the code path that follows from booting an instance in Nova. So I'm gonna try and run my Barbican Tempest plugin scenarios, and over the course of running these scenarios, Nova's going to try and boot an instance.
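The tempest.conf edits shown in the clip mostly point Tempest at the Keystone install; a minimal sketch looks something like this, with illustrative hostnames and passwords rather than the values from the demo:

```ini
[auth]
admin_username = admin
admin_project_name = admin
admin_password = secretadmin
admin_domain_name = Default

[identity]
# Where Tempest finds Keystone; Tempest sets up users here and from
# there learns how to talk to the other services
uri_v3 = http://192.168.0.10/identity/v3
auth_version = v3
```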
It's gonna fail, we're gonna correct it, and then we're gonna show that the Tempest scenarios succeed. So first you use this tempest run command; you can just pass it the name of your plugin and it'll run the tests for you. And as you can see from our output here, we've got a stack trace. Specifically, we're getting a "No valid host" error, which, if you've used Nova before, you know means that something went wrong, but it's not necessarily specific to a particular part of the code. So the next step is to fix this error. We'll use our trusty rejoin-stack script, which brings up all the services that support OpenStack. We're gonna kill our n-cpu (Nova compute) service, and then we're gonna look in glance.py for this obviously fraudulent line of code I've inserted. We're gonna get rid of it, restart the n-cpu service, and rerun our tests. And as we can see, this time our tests were successful. So that's basically the benefit of using a Tempest plugin. If there's some issue down some obscure code path that isn't necessarily tested by other things in Nova, you can run this Tempest plugin, and it'll exercise all your features for you and make sure that they still work. So here's some links that might be useful to you. Some of these are just things that I've referenced in this presentation, but in particular the etherpads have very specific procedures on them, which are really useful for setting up the plugin, running through the tests, all that good stuff. Okay, that's it for me. Any questions? If you could go to the microphone, that would be great. How do you not run the plugin? Say you've installed it, but you don't wanna run that plugin, you wanna run some other plugin. Okay, the tempest run command basically works, if you've ever used something like tox or testr, you can specify what you want to run. So if you wanted to run some other plugin, you can change the test runner command to run the other plugin.
So by default though, if you just run it, it's gonna run every plugin, right? You have to form a regex to exclude your stuff. Yes, that's correct. Thanks. Anybody else? Okay, oh. Earlier, when you tried to debug, you saw the Nova error saying "No valid host." How did you know that the problem was in Glance? I knew it was in Glance because I put a bad line of code there. I mean, generally there's a stack trace that pops up. That's what I wonder: is there a debug flag that you can put in so that it outputs a more detailed stack trace, so that I know that when you get to the image, then you fail? Is that an option? Let's go back to this clip. Yeah, I believe the stack trace, what happens when it fails, does include information about where the failure is from, but I could be wrong. Yeah, so one of the drawbacks of this framework, I guess, is that the... The failure in that case was at the signed image upload, so I guess you can kind of figure out it's in Glance. It wasn't, though; it was in compute, right? Yeah, so the error itself was in the compute service, but what you see from Tempest's perspective is which command failed, which is not always, I guess, necessarily the most helpful thing. But the Tempest framework doesn't have access to the code that's running in the cloud, so unless Nova is reporting that error in a verbose way, you're not going to see which inner machination of Nova broke and caused the Tempest test to fail. Yes, yeah, so what I would suggest if you were trying to debug this: look at the Tempest output and you'll see exactly which test broke; then you can go to the Tempest logs and see the sequence of REST commands which led to the failure; and then from there, if you're running the job in a gate, or you have access to the compute logs, you can look at the compute logs themselves and see where the exact failure originated from. Thank you.
You basically just have to debug it like you would any other failure in a live scenario. Correct, yeah. I mean, obviously a unit test is going to fail in a nicer way in terms of the console output, but yeah, you can just step back through the logs if you need to. So all those helpful links at the end, I didn't get a chance to copy them all down; where is this presentation available? Oh, sure, after the summit's done there's like a SlideShare thing, so I can put the slides up there, or if you just stop by when I'm done with this, I can email them directly to you. Let me show you the references again. Okay, all right, no worries then, we'll talk after this. Yeah, same, anybody else? Okay, great, thanks for your attention, guys.