OK, I guess we'll get started now. So I'm Matt Treinish. I'm the QA PTL. I work for HP on making OpenStack better for basically everyone. And I'm Andrea Frittoli. I'm on the QA team for OpenStack as well, and I work for HP. I'm responsible for QA for our Helion OpenStack distribution. And we're going to be talking today about the external plug-in interfaces we have in many of the OpenStack QA projects. So we'll just get into it. Yeah, so we are from the OpenStack QA team. So just quoting our mission statement, this is what our team is about: we care about OpenStack quality and stability throughout the cycle, at any point during the cycle. So we develop new tools and maintain existing tools that are used for that, basically, so that the entire ecosystem can ensure quality throughout the cycle. We have a number of different projects that we care about. A few of them are syntax check tools, like bashate, or hacking, or eslint-config-openstack. Yeah, for JavaScript. Yeah, for JavaScript. It's relatively new. Yeah, it's brand new. We have tools for setting up the test environment and deploying your cloud, like DevStack, or devstack-vagrant for two-node setups. That's relatively new as well. We have actual testing tools and frameworks, like Tempest; tempest-lib, a library for building your own tests; and Grenade for upgrade testing. And finally, we have tools that focus on the analysis, post-processing, and visualization of test results, such as StackViz, os-testr, and the new OpenStack Health dashboard. So before we go into what the plugins are, how they work, and how we use them, I thought it would be good to explain some of the rationale behind why we decided to go to a more modular approach for a lot of our QA projects. And to really talk about that, we have to talk about what QA was probably two years ago, a year ago even, when our scope was that we would directly support all incubated and integrated OpenStack projects.
And this worked great for five projects. When we got to 10, we started to see some things getting stretched thin. So just to show the number of OpenStack projects over time: this shows releases and which projects were incubated, integrated, or existed and were neither. And QA as a thing started around Essex and was formalized during Grizzly, I think. Maybe it was Havana, but whatever. But you can see that we started with a small group, and then things started to grow really fast. And the number of projects we had to directly support in tree was anything that was green or blue. And at Icehouse, that's quite a lot. And it's a relatively small team. And we started seeing we couldn't keep up with the pace of things coming in. It was too difficult as an open source project to basically have a traditional QA approach, where the QA team is dictating the final steps for every project. It's something that just doesn't scale in an open source community like OpenStack. Another way to illustrate this, I find, is looking at Tempest tests over time. This is a really ugly diagram that I put together. But it's just showing you the trend: there are some projects that have a lot of tests, and then at the bottom, there are a ton that have nothing. I find that's a good way to illustrate how we failed to scale after a certain point of doing everything by ourselves in one repo. And eagle-eyed people will notice that Grizzly is missing. That's because it was impossible to run. And then the next thing that happened was the big tent. And our previous scope completely went out the window. And the number of projects that came in grew very rapidly. I think we added about a dozen in one cycle. What we had before couldn't scale, and opening the floodgates to everyone just didn't work. So we had to come up with a new approach to doing the QA project in the open source community.
And what we decided to do was: QA will still support that direct base, those five core projects. Because that's what most of these OpenStack projects depend on, having that core infrastructure as a service with Nova, Keystone, Glance. And it's very important to ensure that that works, because nothing else in this large ecosystem under that flaming tent will work unless that core layer is there. But to support those other projects, to let them control their own destiny, do what they want, and ensure that they have the right level of quality they're looking for, we provide stable plug-in interfaces where it makes sense, so that they can expand the core tooling that we're using on that base layer to include anything else in the wider ecosystem. And we found that this fits the growth pattern in OpenStack, because it's exponential; I don't think another approach would fit that trend. It's a much better fit. And it's also a better fit for the open source philosophy, where people can do what they need to do. It's freedom. People have the ability to use these plug-in interfaces if they want to, but they don't have to. And we're upfront and honest about whether they're using them or not. And that's really how we ended up getting to these plug-in interfaces. Today, the current definition of that core set from the QA team perspective is Keystone, Nova, Glance, Cinder, Neutron, and Swift. These are the projects that we eventually want to be the only ones with in-tree direct support. So in Tempest, the only tests in the Tempest tree will be for these projects. For DevStack, the only things in the DevStack tree will be for these projects. Any other project that falls outside of this will be in a plug-in. And we advertise the plug-ins and make it as easy as possible to run them. And so that's a little bit about where we were and how we got to where we are today.
And granted, this is still new, so we still have things in tree that we need to decompose into plug-ins. That being said, I thought we should dig into the different plug-ins. So, Andrea? Yeah, so there are three different projects where we support a plug-in framework. The first one is DevStack. DevStack plug-ins actually serve three different functions. So there are three different types, if you will. One is: I have my project, it's not one of the core six, and I want it to actually be deployed in DevStack as part of the cloud that I'm deploying there for testing. And with the plug-in, I can hook my project into the deployment process. The other one is actually using a different configuration in DevStack for one of the existing services, like a custom driver, like a different hypervisor in Nova, or a different Cinder backend, and so forth. Or some networking plug-ins do that. Yeah. Yeah, there are quite a few of those. And there is a third use case, which is deploying some service which relies on the cloud that was deployed by DevStack, so something that runs on top of the cloud. One example for which we have a plug-in is Nodepool. Nodepool is a tool which is used by OpenStack Infra to allocate test nodes from clouds. So to have a development environment for Nodepool, you need to run Nodepool and connect it to some cloud. With the DevStack plug-in, you can set up DevStack with a running cloud and a Nodepool instance that is configured to work with the cloud that was deployed there. So what is a plug-in, in fact? It's several bits of bash code, and it lives outside of the DevStack tree. So it's maintained in a different project by the team responsible for it. It's called by DevStack through a strong contract. And yeah, so there is a registry of plug-ins which is maintained by the community. It's actually not so up-to-date. It's kind of stale. Right, and yeah, so you can see in the registry, there are probably 20 plug-ins or so.
But this graph shows the actual plug-ins implemented since the beginning of the year until now. So we have fairly linear growth. It's basically linear, except for at Summit. Yeah, exactly. We were looking at it, like, oh, what's that? Oh, it looks like Vancouver. Yeah, so we have 57 different plug-ins right now, which is quite impressive. Yeah, I think quite impressive growth. So if you want to write your own plug-in, how do you go about it? There is a cookiecutter project you can use to set up your basic template and files. You don't have to use it, but I think it's quite useful, so I would recommend it. And it makes it easier also for people looking at your code: they find things in the place they expect them to be, and so forth. And there are two main pieces that make up the plug-in. One is the settings file, where the global variables are defined. This is a file which is sourced pretty early in the stack process. So if you want to affect the variables that are used in the DevStack process before your code is actually executed, you can set variables there and change the behavior. And the other part is the plug-in file, plugin.sh. There you implement the hooks for the different phases of the stack process, as well as unstack and clean. If you use the cookiecutter, it actually creates a file for you in a lib folder where you have a template for all the functions that you need to implement to get your plug-in done, apart from the settings. So you have pre-install, install, configure, and init: the four phases where you can intervene during the stack process. So you can have installation of system-level packages. You can have installation of your service, if it is an OpenStack service that you're installing. Then configuration, and in init, or extra, if it's an OpenStack project that relies on a database, you typically do things like setting up the schema of your database.
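To make that concrete, here is a rough sketch of what a plugin.sh with those phase hooks might look like. This is an illustration, not DevStack's actual template: the service and function names are made up, the phase names spoken here (configure, init) correspond roughly to DevStack's post-config and extra phases, and in a real plugin the dispatch is written inline against the arguments DevStack passes when it sources the file.

```shell
# Sketch of a DevStack plugin's plugin.sh for a hypothetical service
# "myservice". DevStack sources this file once per phase, passing a mode
# ($1: stack, unstack, or clean) and, during stack, a phase name ($2).
# The function bodies here are placeholder echoes standing in for real work.

preinstall_myservice() { echo "install system-level packages"; }
install_myservice()    { echo "git clone and install the service"; }
configure_myservice()  { echo "write the service's config files"; }
init_myservice()       { echo "create db schema and start the service"; }
shutdown_myservice()   { echo "stop the service"; }
cleanup_myservice()    { echo "remove installed artifacts"; }

# In a real plugin this dispatch is written inline on "$1"/"$2"; it's
# wrapped in a function here so it is easy to exercise directly.
my_plugin_dispatch() {
    mode=$1; phase=$2
    case "$mode" in
        stack)
            case "$phase" in
                pre-install) preinstall_myservice ;;
                install)     install_myservice ;;
                post-config) configure_myservice ;;
                extra)       init_myservice ;;
            esac
            ;;
        unstack)
            shutdown_myservice
            ;;
        clean)
            shutdown_myservice
            cleanup_myservice
            ;;
    esac
}

my_plugin_dispatch "$@"
```

The point of the contract is just that DevStack calls into your code at each of those moments, so each hook stays small and focused.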
Or Keystone, you have to do that admin token song and dance, that kind of thing. Right. And start your service. Shutdown is for shutting down when you do unstack, and clean is for removing everything. Yeah. Uninstalling stuff and everything. So the second project where we implemented plug-ins is Tempest. Tempest plug-ins allow you to discover and run a set of tests that are maintained outside of the Tempest tree. And they allow you to combine the configuration options you get from core Tempest with configuration options that you may need for your plug-in, and get access to both of them in your test code. We use stevedore's extension manager for discovery. So you just need to install your plug-in and then install Tempest, and Tempest will discover your plug-in. And the test code that you write should be built on top of tempest-lib. It doesn't have to be, but it helps make things a lot easier if it's using the same testing framework underneath. And yeah. This is where we are in terms of Tempest plug-ins. So Manila opened the way; the Tempest plug-in for Manila was the first one to be developed. It was the first one, yeah. And then we had three more that were developed in the meantime: Sahara, Congress, and the Monasca one. It's not nearly as impressive as the DevStack graph yet, but it's much newer. And if you want to write your own Tempest plug-in and make that graph look nicer this cycle: there is a cookiecutter, again. It's nice to use it because you get all the folders, and you don't have to, but I recommend using it. You can host your plug-in code within the tree of your project if you want, or you can have a dedicated repository. You don't need to do one or the other; you can pick either. So how do you write your plug-in? You have to extend the Tempest plug-in class from Tempest, which, yeah, lives in Tempest at the moment.
We might move it to tempest-lib in future, probably. Probably, because it's a stable interface. That's what tempest-lib is for. Exactly. So when you go and implement your plug-in, just check whether it has moved to tempest-lib. Well, we'll have backwards compatibility if we ever do move it, to ensure that it works for legacy plugins. But it was just done in Tempest because we were evolving it organically, and that's how we do things in Tempest; tempest-lib is more for stable interfaces. Exactly. And then you have to implement the abstract methods for config option discovery and test discovery. If you do base your code on tempest-lib, you might find that not everything that you need is in tempest-lib yet. Well, feel free to contribute: file a bug and help us migrate the things that you need from Tempest into tempest-lib and make them stable interfaces, because that's what we're working on. But I mean, if you see something, yeah. It's a lot of work to stabilize the interfaces we have in Tempest, because a lot of things weren't written assuming external consumption. So it takes some effort. And there are still gaps for writing a good external test suite using tempest-lib, and we need to fill those. So if you're writing a plugin and you find gaps, file a bug, push a commit, help us out with the migration. Right, and of course, set up CI for it. I mean, you run your tests and keep them running. So this is how the Tempest plugin class looks in Tempest. There is a load_tests method that you need to implement, which is used by the plugin mechanism to discover the tests. So when Tempest actually runs the test discovery phase, it uses the information, the path that you provide via this method, to go and discover the tests that you provided in your plugin. And there are two more methods: register_opts, which is invoked when Tempest is setting up its own config, to make your options available to tests.
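Since the slide itself isn't in the transcript, here is an illustrative sketch of that class. Tempest isn't importable here, so an abc stand-in is defined for the base class that tempest.test_discover.plugins.TempestPlugin provides; the plugin, package, and option names are hypothetical, and a real register_opts would operate on an oslo.config object rather than the plain dict used in this sketch.

```python
import abc
import os


class TempestPlugin(abc.ABC):
    """Stand-in for the abstract base in tempest.test_discover.plugins."""

    @abc.abstractmethod
    def load_tests(self):
        """Return (full path to the plugin's tests, base import path)."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register the plugin's extra options on Tempest's config."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return [(group name, options)] for sample config generation."""


class MyPlugin(TempestPlugin):
    """Hypothetical plugin for an out-of-tree test suite."""

    def load_tests(self):
        # A real plugin usually derives this from __file__, so discovery
        # finds the tests wherever the package happens to be installed.
        base_path = os.getcwd()
        test_dir = os.path.join(base_path, "my_plugin", "tests")
        return test_dir, base_path

    def register_opts(self, conf):
        # Illustrative only: with oslo.config this would be
        # conf.register_group(...) / conf.register_opts(...).
        conf["my_plugin"] = {"service_available": True}

    def get_opt_lists(self):
        return [("my_plugin", ["service_available"])]
```

The three methods line up with the three jobs described here: where the tests live, how the extra options get onto the running config, and what the sample config generator should emit.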
So it will register the extra options that are defined by your plugin and make them available to the tests. And get_opt_lists: this is used when the Tempest conf is generated. So the mechanism that generates the Tempest configuration file on the fly requires a list of the options available. So if you install Tempest and do a tempest init in a folder to get your sample config there with some config pre-seeded, you need to implement this so that it will contain all the options that you need for your plugin configuration. And the last project that we have a plugin interface implemented in is Grenade. Grenade is the upgrade test suite, which, for those who don't know, takes a stable branch of DevStack, deploys a cloud with it, then shuts down that DevStack after running tests on it and creating some resources, and migrates to a newer version of all the projects that it just deployed, starts them up using the old configuration files, and checks that everything worked. So we have Grenade plugins to enable running upgrades on projects that aren't in the Grenade tree. It also enables adding additional services to the old DevStack: it gives you a phase where you can call plugins to add them to the old DevStack. And it also allows you to plug into the resource creation phase. There is a phase after the old DevStack is deployed that creates resources that are supposed to survive the migration. We use that to create a couple of servers, a Cinder volume, images. So we ensure that upgrading your cloud doesn't nuke all of your running instances, because that's kind of important. This is an even less impressive graph. We've only got three plugins right now for Grenade. Grenade is one of those projects that a lot of people don't really know what it does or how it works. And it only applies to services, because those are the things we do the upgrade testing for.
And so right now it's Heat, Ceilometer, and Sahara, which are actually things that were missing Grenade tests completely, because we didn't have the review bandwidth for those teams. There are three reviewers for Grenade. It used to be four; now there are three. And we didn't have the bandwidth to review every project's upgrade procedure. And these projects also didn't have interest in contributing. So now they can control their own upgrade story, which is really useful. The process for writing a Grenade plugin is a bit more involved than the other ones, because there are a lot of different stages that go on. And I just realized I put this in a bad order, but let's start with settings, which is in the middle. So settings is used to register any DevStack plugins that you want to deploy on the old side when you first spin up the cloud. The typical way this works for the other projects: Ceilometer has a DevStack plugin, they call that DevStack plugin to deploy Ceilometer on the old side, and settings just sets the localrc option to use that plugin. Then you also have to register the service to be upgrade-tested, which tells Grenade to call the hooks in the other scripts that are listed here. So upgrade.sh is exactly what it sounds like: it's used to do the migration of the code repos that were deployed on old, and also run SQL database migrations or any other mandatory steps that happen for every migration across every release. shutdown.sh does exactly what it sounds like: you write the bash code to shut down the service you've deployed, because you can't upgrade unless you shut it down. We don't do rolling upgrades there; that's a different story. resources.sh has the hooks to do the create resource, the check resource, and the tear down resource, which is the resources phase that I described a little while ago, where it will create a resource after old is deployed.
Then in the middle of the upgrade, it will do check resource, and then it will check again after the upgrade to ensure that during each stage of the upgrade, you're not losing your VMs, losing your alarms, or whatever resources your service's plug-in creates to ensure that it works. And then at the end, you want to make sure you don't leave anything dangling, so it deletes them. And there are hooks in all of these. And then the last thing is a little weird. There are certain cases, which should be rare, and they actually kind of are, which is good, where you need to do a manual step for migration. If there's a backwards-incompatible change in a configuration file, or a policy file, or something with the way the service is deployed, and you have to do an extra step as part of an upgrade, there is a provision in Grenade plugins to call that. You create a directory that's from-release, so from-kilo, from-icehouse. And you put your script in there, and that will be called as part of the upgrade when you're upgrading from one branch to the next. And in Grenade in-tree, we're very strict about allowing that. You're never supposed to do that. It's a very severe exception, because the theory of upgrade we have in OpenStack is that you can use the old configuration, and all you need to do is copy the new code in and run the database schema migrations to ensure your database is up to date, or whatever data store your random project is using. So anything besides that is not a good experience for operators, and we don't want to make that common practice. So when you write a plug-in for Grenade, you should never do this unless you're very careful about documenting it, because it's not good for our users. And with that, here's where to get some more information about things. There's Tempest's official documentation; that's up on docs.openstack.org. DevStack has a similar plug-in doc. Grenade has a similar plug-in doc.
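Putting those pieces together, the settings file of a Grenade plugin ends up being only a few lines. This is a hedged sketch: the project name and repo URL are invented, and the helper names (devstack_localrc, register_project_for_upgrade) are how Grenade's plugin contract is commonly described, so double-check them against the Grenade plugin doc before relying on them.

```shell
# Hypothetical Grenade plugin "settings" file for a service "myservice".
# Deploy the service on both the old and new sides via its DevStack plugin:
devstack_localrc base enable_plugin myservice https://example.org/myservice
devstack_localrc target enable_plugin myservice https://example.org/myservice
# Register the service so Grenade calls the upgrade.sh / shutdown.sh /
# resources.sh hooks shipped alongside this file:
register_project_for_upgrade myservice
```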
I just realized when I created these, I made them all plural except for Tempest, and I don't know why I did that. And then you can also reach out to the mailing list to ask questions, or if you have specific issues around these plug-in interfaces. And there's also the OpenStack QA channel on Freenode; we're always willing to help people with writing plug-ins, questions about plug-ins, or anything else related to OpenStack QA. And with that, we're going to go to questions. We left a lot of time for questions, I think. There's a mic here. It's on, so just pass it to him, please. Thanks. Could you talk about devstack-gate, and how the plug-in architecture enhances or changes the configuration that we need to make in OpenStack Infra's Jenkins jobs on devstack-gate? So to do that, you actually don't have to do anything in devstack-gate. In your job definition in Jenkins Job Builder, there is an option, I forget exactly what it's called, but it lets you pass through variables to your localrc when DevStack is eventually run as part of devstack-gate. You say enable_plugin and your Git repo for the plug-in, and that's it. It's a Jenkins Job Builder change. So as part of the self-service model and letting people do what they need to do to ensure quality of every project, it's self-contained when you add the job. When you push the change to project-config to add a job that uses your plug-in, you just add this one line to it and your plug-in will just work. And there are plenty of examples everywhere for it, because there are 57 plug-ins. Are there any other questions? There's one back there, if you can just pass the mic. So I have two questions. One is: any test that uses the Tempest plug-in as a base class is good to go as a Tempest plug-in, right? It will be discovered and... Yeah, the Tempest plug-in interface is just for anything external. It doesn't actually have to be a Tempest test. It's anything that complies with the Python unit test spec.
It will be loaded as part of Tempest, and the configuration will also be loaded as part of Tempest. So when you run Tempest and you have the Tempest plug-in installed, it will be seamless, except instead of tempest.something being run, you'll see pluginname.whatever. Okay, so one example: if we talk about the Trove project, Tempest tests for the Trove project would go in the Trove repository as plug-ins? They can go in the Trove repository, or they can go into a separate repository for Trove Tempest tests. There are actually a lot of advantages to doing that, because decoupling the tests from the code enables testing the API across releases much more easily. We do that in-tree in Tempest; it's called branchless Tempest, that's what we call it, and that enables us to ensure API consistency between releases. If you include the tests in your code, you have to make sure that you always check out master, even on stable branches, if you're testing interoperability between releases. So there are advantages to doing it in its own repo. There are also install-time benefits: you can have separate requirements and things like that, because it's all installed as Python packages. But you don't have to do it in a separate repo; you can pick either. Thanks. No problem. Are there any other questions? One right there, just pass the mic down. Configuration: so if we're adding a bunch of tests as a plugin that have extra configuration values that are outside of what Tempest has, is that part of the plugin architecture? Yes. To normalize the configurations? So, I might have just missed it, and if so I apologize. So there are these two options. register_opts gives you an oslo.config object, and you have whatever code you want elsewhere, and in this method you just make sure you register your opts on that object. That makes sense; you just extend it with your own configuration. And then get_opt_lists is for the oslo sample config generator.
It expects a list of tuples with the group name and the options list, and that gets returned. And then when you call the sample config generator, either independently or as part of tempest init, it will include all of your extra options. Awesome. And this works with an arbitrary number of plugins, too. And you can have your own dedicated group of options, or you can reuse existing groups. Like, if you want to add to service_available, say, your service. No, that makes sense. Thank you. There's a question in the back, if you can. I think it's for the video. So I'm just curious what was, or is, the use case for the per-release manual steps, since they're highly discouraged but they're still there for people to use? There are rare exceptions when there's a backwards-incompatible change that has to be done. I don't remember any examples off the top of my head, but I think there have been some Keystone examples in the past, related to how they do some initial configuration stuff, for example. But the rule we use for Grenade in-tree is that you have to have PTL sign-off from the project, and you have to have it mentioned in the release notes already. Because it's not friendly to operators to expand on just updating the database schema and the code. Thank you. Yeah. Okay, are there any other questions? There's one right here. If you can just pass it forward. So I remember that some time ago we had a conversation about keeping only scenario-like tests in the Tempest tree. And now you mentioned the six core projects that will continue to live in the Tempest tree. My question is: if I'd like to push a brand new test that is only probing one project, one of the core projects, is it better to push it to Tempest, or to push it as a functional test in the project itself? That's a tricky question, because there's no hard and fast rule.
There are advantages to doing it either way. Part of the reason we have those six core projects in Tempest is because Tempest is a self-contained test suite that can run against any cloud. And there are actually really strong advantages to that, because it's black box only, and it lets anyone validate any cloud anywhere. Functional tests don't give you that advantage, but they give you the advantage of being able to do more than black box: you can do some gray box or white box testing where you probe the internals. It also allows you to evolve things more rapidly at the same time you're adding features or fixing bugs in a project. And you have to weigh the weight of being in Tempest. It's a little bit more heavyweight, because it runs against all of the projects. And then there are also implications for people who run it outside of the gate, against their own clouds. You have to weigh that against how efficiently you can do it in tree and the advantages there. And some duplication is actually okay; it's actually expected for very important APIs and things like that. Does that answer your question? I know it's a bit nebulous and not exactly clear, but... Yeah, thank you. Are there any other questions? I don't know, how much time do we have left? 10 minutes? Okay, well, that went quite fast. Another question over here. RefStack. RefStack. Where does RefStack integration occur? I don't think there's a session on it, or I don't know if there's a session on it. RefStack, for those who don't know, is a wrapper around Tempest to run tests for DefCore, interop, and trademark. I have my personal opinion on it, which is part of the reason we have those six core projects in Tempest: it's partly because of DefCore. We want everything to be in one repo. We don't want people to have to install and worry about plugins and configure them. I totally agree with you. There's a lot of political pushback, because DefCore is a very political thing, you know?
And a lot of projects, I won't name any names, want to control their own tests. And they want to leverage plugins. I'm opposed to that, because there's value in keeping it separate, value in keeping it self-contained and keeping it simple for users, because it's complicated enough to run anything against a cloud, not just Tempest. And so that's my personal view. I don't know what the RefStack team wants to do. I don't know what the DefCore... Because RefStack's decision will be influenced by the DefCore committee, and the DefCore committee will do whatever they want, regardless of what I say. So... It's definitely not in this session, but I've expressed my opinion quite vocally about this. They don't like listening to me because of how vocal I am. Yeah. Yeah. And part of the reason, you know, I said six for now is because that definition of what we consider core will change over time. It might shrink, it might grow. I don't know, but it's not set in stone. That's part of... So we want to be able to support the DefCore use case and other external opinions as they change over time. Are there any other questions? Well, if there aren't any other questions, I guess I'll stand here for 10 minutes. We don't have to put it on video. I don't want to subject you to that. There's... Oh, the slides are already posted. I have a link. It's a big image. I have a link right there on the bottom. They're on my GitHub. They're LaTeX, for people who like LaTeX, but there's also a PDF package so you don't have to install TeX Live and learn how to compile LaTeX projects. I probably should have included a Makefile, but I can fix that. [Question from the audience about test case IDs.] In Tempest, it's not designed to be unique; it's designed to be idempotent. That's why it's called an idempotent ID. It's designed to track things across renames. It's not designed to provide uniqueness. Uniqueness is guaranteed by the test runner already. The test names are always unique.
The UUID on top is added to ensure that the test is recognizable if we move a file or rename a test. [Another audience question.] Well, if you want to be included as part of the Tempest suite, which is very strictly black-box API testing against an external cloud, it assumes there's a cloud deployed somewhere with your service and you hit it with API requests. If you want your in-tree functional tests to be included in that, you use a plugin. If they're lower level or something different, and the assumptions they make about the environment are different, or you don't see the value in being included in the wider Tempest test ecosystem, then there's no reason to add the plugin. It's there for projects that want to consume it, but there's no, you know, yeah. Which is the whole point about doing the plugin decomposition: to enable projects to succeed with the tools we use upstream, but they don't have to if they don't want to. Are there any other questions? Darryl. All right, so with Tempest plugins, everything just appears to be part of Tempest. Is it possible to run tests from a plugin at the same time you're running tests from, say, something core? Like, say you wanted to run the core set of Nova tests plus some extended set that's coming from a plugin? Yes, that's actually exactly how it works. The way we use stevedore is: when you install your Python package, you add an entry point to your setup.cfg if you're using PBR, or if you're not using PBR, you have to write a lot of setuptools code in your setup.py to add an entry point, with the tempest.test_plugins namespace, and that will be loaded when Tempest does test discovery, or the test runner does test discovery on Tempest. There's a load-tests hook which will look at all of the entry points, find all of the Tempest plugin entry points, and then call load_tests, that load_tests method, to return where the tests are and how to call them.
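For reference, the entry point registration being described looks roughly like this in a PBR-style setup.cfg; the package, module, and class names here are invented for illustration:

```ini
# Hypothetical setup.cfg fragment for a package shipping a Tempest plugin
[entry_points]
tempest.test_plugins =
    my-plugin = my_plugin.plugin:MyPlugin
```

Installing the package is all it takes after that: stevedore finds the entry point under the tempest.test_plugins namespace at discovery time.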
And then when Tempest is executed, the test runner will treat all of those external tests as if they were in-tree. And so you can have as many plugins installed as you want, and they'll all be run in parallel if you're running in parallel, or serially if you run serially. And your plugin may include a client for your own service. So you can install multiple clients, and you can even build scenario tests, if you will, which rely on multiple services. So if you want to have a Heat test which works with Manila or something else, you can do that. Yeah, and that's the advantage of using Python packages: we have the requirements file, and as messed up as Python packaging and dependency handling are, you can at least declare dependencies between plugins to stack them. Are there any other questions? Nope, there's one right there. I don't even know where the mic went. Yeah, for Tempest, you generate a test list and load a test list. Does it work the same way as it does right now? Like, I want to just generate a test list which includes both the in-tree Tempest test cases and the external test cases, and then run later, with load-test-list, a test list that includes both the in-tree and the external tests. Does it still work the same way? It works that way. So when you install the Tempest plugins and you list tests, it will list all the in-tree tests and all of the out-of-tree tests. And in your test runner, you can use whatever selection logic you want. You don't have to make two calls if you don't want to, but you can do that to run in-tree and out-of-tree separately; otherwise they'll all be treated as if they were in-tree. Does that make sense? Yeah, yeah. Another thing: for the test report, is it the same, where everything is generated in the tempest.log, and then we parse the log and get an HTML report? Yeah, so everything will be run as it is today with Tempest.
It will just be loading tests from another repository. So for test results, you'll still have a subunit stream with all of the test IDs and all of the data that's in subunit. You'll have the output filters we use; the console output will look the same; the logging will be the same, assuming the plugin does logging, things like that. It should all work seamlessly, so it looks like it's part of Tempest. All right, thank you. Well, I feel kind of funny asking if there are any more questions after every question, but are there any more questions? Well, I guess if there aren't, we'll end here. Thank you, everyone. Thank you.