Okay. Well, I guess we'll get started since it's, well, 10 more seconds, but whatever. So I'm Matt Treinish. I'm the QA PTL, so I work a lot on Tempest and on testing in OpenStack in general. And I'm here to talk to you today about Tempest: basically what it is, how it got where it is today, and some of the little quirks about how we do testing in Tempest.

So I'm sure everyone has seen this picture before. It's a bit out of date now, it's from Grizzly, but I wanted to show it just to outline that OpenStack is incredibly complicated. How on earth can we test something that has all of these different components talking to each other asynchronously? We have all these APIs, they're all talking to each other, and we needed a solution to test it all together. Before we had Tempest and integrated testing, we only had unit tests in each of the projects. Some of the projects did have functional testing where they'd bring up their entire stack and test against that, but there was nothing really out there to test the whole picture. There were some projects trying to do it, but there wasn't an official OpenStack integrated test suite to put everything together. And that's why Tempest was started. It's the official OpenStack integrated test suite. We only deal with black-box testing in Tempest, just the REST APIs; we don't deal with any of the internal interfaces. We assume an existing OpenStack deployment, we run API calls against it, we check the responses, and we make sure nothing breaks and that the response is what's expected based on the API guidelines and the expected behavior as dictated by the service.

We currently have about 2,300 tests. That number is a little fuzzy because we actually do some automated test generation, which I'll get into later. But we cover every current incubated and integrated project in OpenStack. All of the tests fall into a set number of categories, which I'll explain in a bit. We use Tempest as the gating tests on every single commit to an OpenStack project. We run Tempest hundreds to thousands of times a day to verify that our OpenStack patches work correctly and don't break expected behavior.

This is just a graph to outline how Tempest has grown since it started. Tempest originally started late in the Diablo cycle to solve the lack of integrated testing, as I explained earlier. It became a gating test suite in the Folsom release. But as you can see down there, there wasn't much back then; it was about 150 tests in the Folsom cycle. Since then, it has grown to be one of the five most active projects during the Icehouse cycle; I think it was number four by commits and reviews. It's also part of the TC's graduation requirements: if you have an incubated project, you cannot become an integrated project unless you have testing in Tempest as part of the gate. It's also part of the Board's DefCore initiative; RefStack is using Tempest to validate that a cloud conforms to OpenStack. This graph is actually pretty cool, I think, because it shows an almost exponential growth in the number of tests. That's a little bit higher. Can you guys hear me? I don't hear the mic anymore. Okay, good. This shows an exponential growth in the number of tests, and we're seeing the same in contributors and in the number of patches. It has become a very active project. When I started in the Folsom cycle it was kind of just an afterthought for most people, but since then it has grown to be very important and a key part of the OpenStack community.
With that growth, though, some issues have come up, and because of that we've had to define certain rules and behaviors in Tempest. The first one: as everyone has seen, the number of projects we have in OpenStack has grown, and because of that, the number of APIs we have to test as part of that whole picture has grown significantly. And that continues to grow; there are always new incubated projects, always new integrated projects, and we have to have a way to deal with that incoming growth. The way we've done that in Tempest has been by defining rules and guidelines for how things should be tested and how we should be doing things. The other part of that is the amount of common code we have in Tempest to test these APIs. We try to abstract a lot of that, but it has become increasingly complex as the number of projects we have to test increases; we're doing more and more. So that code has in itself become sort of a de facto unified client. There's an initiative starting up right now to have unified client libraries inside of OpenStack, and we've effectively been doing that in Tempest, and it has been getting more and more complex. Another thing is the sheer number of tests we have, about 2,300 currently: managing individual tests, or even small groups of tests, becomes almost unmanageable. How can you tell if one test is not being run properly when you're looking at the results of 2,300? Things like this have become increasingly difficult, especially from a gating perspective, because in the gate we run all of these tests all the time. How do we know if one stopped running? As part of that we also have runtime growth, because the number of tests keeps growing. And configuration in Tempest continues to grow; we have a lot of options and that number is just going to keep going up, so we need ways to manage that.

From all of those growing pains, we started by defining the design principles of Tempest. This started, I think, late in Grizzly or early in Havana, and these are basically the governing principles of the project, which are outlined somewhere in the repo. The first one is that Tempest should be able to run against any OpenStack cloud, whether it's DevStack or someone's public cloud deployment. Now, that's good in theory; in practice it's a little more complicated, but we try not to hard-code any assumptions into the tests. They should just work against any OpenStack deployment. Another key one is everything explicit. This comes back to what I was talking about on the previous slide, about how we have so many tests that it's impossible to tell if one doesn't work or if there's something wrong with it. By making everything explicit within the code base, we have basically no underlying assumptions: you have to tell Tempest your entire configuration up front. And by doing that, we can prevent things from disappearing. At one point it wasn't that way, and we tried to auto-discover what services and extensions were available. When we were doing that, we realized as part of gating that some tests would just not get run because the services weren't actually reporting what was available properly. And that's not really good testing: things aren't running because they're not working, and we should fail instead. So we came up with making sure everything is explicit.
So now everything is explicit, and that's why we have tons of config options: we have to account for basically every single possible configuration within Tempest. Another key thing is that we only use public interfaces, only the REST APIs. If you remember that first slide, the complicated diagram of all of the internal OpenStack services, testing the internals becomes too complicated, especially when you factor in the fact that they're all changing constantly. We can't keep track of that in an external repo for testing, so we only deal with the REST APIs. At one point this also wasn't true. Another thing: Tempest has to be self-cleaning. Whatever resources we create during testing, we have to tear down; we don't want to be leaving things behind. This comes back to the first point about testing against an existing deployment like a public cloud. And the last design principle, which is relatively new, is that Tempest has to be self-testing. This comes back to the growing pain on the last slide, where the common code has become increasingly complicated. We have to have a method of verifying that Tempest itself is working properly and that we don't introduce regressions when we add new functionality to the tests.

So this is a pretty cheap diagram that I drew pretty quickly to outline how all of the testing works within Tempest. We've got several classes of tests, and they all talk either to the Tempest clients, the official clients, or a third-party client, and they all talk through REST APIs to an existing OpenStack deployment. It's pretty straightforward. You can see here the service clients; these are inside the Tempest tree, and they talk to the unified REST client, which handles things like the actual HTTP traffic as well as checking response codes and making sure things like that are correct. Then there are the official Python clients, which we go through for certain classes of tests, but by default for the API testing we don't do that, because what we've found is that using the official clients can actually mask bugs occasionally. The Python clients tend to sometimes work around bugs in the service APIs, and we don't want that to happen in our verification testing, so we don't use them for the API testing; we use them for other classes of tests. And then there are also the third-party tests, which currently only cover EC2, the only in-tree third-party API that I believe we have currently in OpenStack.

So the API tests: we use these to test directly against the OpenStack REST APIs. They use a unified REST client that we maintain in-tree for Tempest, and they fall into two categories, positive API tests and negative API tests. Positive tests are the ones that we expect to pass. So for example, create a server: we send an API call to Nova to create a server, and then we check the response code (I think it's a 202, but I could be wrong), then we do a GET on it to make sure the server was created properly, and then we tear down and move on. Negative tests are the opposite case. That's where we do something that we expect to fail, and then we check the error response to make sure it's valid, because we have to test both sides of the API to make sure they're consistent between patches as well as inside someone's existing deployment. We want to make sure that the behavior is consistent across any deployment.
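For a rough idea of what a hand-written positive and negative API test pair looks like, here is a sketch in the Tempest style. The base class, client accessor, response codes, and exception names are simplified approximations of the Icehouse-era code, not an exact copy of anything in the tree:

```python
# Rough sketch of a positive and a negative compute API test in the Tempest
# style. Class names, client accessors, and return conventions are simplified
# approximations, not the exact Tempest internals.

from tempest.api.compute import base
from tempest import exceptions


class ServersSampleTest(base.BaseV2ComputeTest):

    def test_create_get_delete_server(self):
        # Positive path: Nova should accept the create with a 202.
        resp, server = self.servers_client.create_server(
            'tempest-demo', self.image_ref, self.flavor_ref)
        self.assertEqual(202, resp.status)

        # Verify the server really exists via a GET, then clean up after
        # ourselves (Tempest has to be self-cleaning).
        resp, fetched = self.servers_client.get_server(server['id'])
        self.assertEqual(200, resp.status)
        self.assertEqual(server['id'], fetched['id'])
        self.servers_client.delete_server(server['id'])

    def test_create_server_with_bad_flavor_fails(self):
        # Negative path: a nonexistent flavor should come back as a 4xx
        # error, and we verify that error response rather than the happy path.
        self.assertRaises(exceptions.BadRequest,
                          self.servers_client.create_server,
                          'tempest-demo', self.image_ref, 'no-such-flavor')
```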
The problem with negative testing, which we realized around the Icehouse Summit timeframe, is that the negative space is almost infinite, because once you've defined the good characteristics, you can throw anything else at the API and that will be bad. And what we were seeing was an influx of tons of new negative tests checking all sorts of bad input that we didn't have testing for. So what we've come up with is using some libraries to automatically generate those negative tests given an API schema. We have a definition of what a good API call is, and then we use that to generate things that are outside of that norm, and we send requests built from that. So we don't actually write individual test cases for all of those bad inputs; we generate them based on what we know is good (there's a rough sketch of the idea a little further down). We started doing that in Icehouse, and I think we're probably going to expand it significantly during Juno.

The next type of testing is scenario testing. These are the through-path tests for functionality; they're basically user stories, attempting to simulate an end-user workflow. So for example: create a server, attach a volume, do something with that volume, detach the volume, snapshot the server, delete the server, create a new server from that snapshot, attach the volume again, and make sure everything works the same, that kind of stuff. And for these, we specifically use the official Python clients. That's because it's an end-user workflow, and people aren't going to be writing their own clients or their own SDKs; they're going to be going through the existing ones as end users. So for these tests we use the official clients, because we're not actually trying to validate the API; we're trying to do functional validation of OpenStack as a deployment. And ideally, these tests exercise integration points between projects, because that's the advantage of Tempest: it's unified, integrated testing. We don't want to test just a Cinder workflow; we want to test Cinder with Nova, Cinder with Keystone. These tests are designed to cover those integration points.

Then we also have CLI tests, which kind of fly in the face of what I was describing previously about Tempest's mission, because Tempest is supposed to be about the APIs, and the CLI tests are actually testing the CLI interface: we're testing the CLI response formatting as well as the input. The reason for this is that it turns out we didn't have good CLI testing in OpenStack. The command-line interfaces were wildly inconsistent and often misbehaved. One example is the Nova client command line: when you ran --version while it wasn't configured properly, or was misconfigured, it would stack trace. No one wants that for just a version string. So what we ended up deciding to do was put these kinds of tests in a single location within the Tempest tree. It's mostly a matter of gating convenience, because to test with Tempest we have to bring up an entire OpenStack cloud using DevStack, and instead of duplicating all of that for each of the individual projects to do functional testing of the command line, we decided to put that in the Tempest tree to validate that the CLIs work properly. These tests only do read-only operations, which means they're not going to be doing things like server creates on the command line; it's basically just a check of the response formatting of the command line.
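Coming back to the auto-generated negative tests mentioned above, the sketch below shows the basic idea of deriving bad requests from a definition of a good one. It is a simplified, self-contained illustration of the concept, not the actual Tempest generator or its schema format:

```python
# Simplified illustration of schema-driven negative test generation: given a
# description of a valid request, derive inputs that fall outside it. This is
# not the real Tempest generator, just the idea behind it.

import copy

# A trimmed-down schema describing a valid "create server" request body.
CREATE_SERVER_SCHEMA = {
    'required': ['name', 'imageRef', 'flavorRef'],
    'properties': {
        'name': {'type': 'string'},
        'imageRef': {'type': 'string'},
        'flavorRef': {'type': 'string'},
        'min_count': {'type': 'integer', 'minimum': 1},
    },
}

VALID_BODY = {'name': 'demo', 'imageRef': 'img-1',
              'flavorRef': 'f-1', 'min_count': 1}


def generate_negative_cases(schema, valid_body):
    """Yield (description, body) pairs the API should reject."""
    # Drop each required field in turn.
    for field in schema['required']:
        body = copy.deepcopy(valid_body)
        del body[field]
        yield ('missing required field %s' % field, body)
    # Violate simple type and range constraints.
    for field, spec in schema['properties'].items():
        body = copy.deepcopy(valid_body)
        if spec['type'] == 'string':
            body[field] = 12345                  # wrong type
            yield ('%s has the wrong type' % field, body)
        elif spec['type'] == 'integer' and 'minimum' in spec:
            body[field] = spec['minimum'] - 1    # below the allowed range
            yield ('%s is below the minimum' % field, body)


if __name__ == '__main__':
    for description, body in generate_negative_cases(CREATE_SERVER_SCHEMA,
                                                     VALID_BODY):
        # A real test would POST each body and assert a 4xx error response.
        print('%s -> %s' % (description, body))
```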
We also have stress testing in Tempest. These tests are designed to generate load on a cloud. They fall into two categories. First, there are dedicated scenario tests that we write specifically for stressing the cloud; these are not realistic workloads, they're just loops creating servers or volumes and deleting them in quick succession. Second, we have the ability to decorate basically any existing Tempest test to be a stress generator, so the stress framework can run the existing tests in a loop to stress the cloud deployment. By default, all of the scenario tests fall under this category; they're all decorated to be stress generators. The stress tests give you some flexibility in how you're stressing the system: they can be run serially or in parallel, you can loop them for a set period of time, and there are other knobs for how you're going to be stressing your OpenStack deployment using the stress tests in Tempest. We don't use this for gating, for various reasons, but it's good for doing burn-in testing on an existing deployment.

Then there are two other classes of tests in the Tempest tree. There are the third-party tests, which are just the tests for in-tree, non-OpenStack APIs. The only one we have currently is the Nova EC2 implementation; that's the only in-tree, non-OpenStack API, so we have testing for that, and as one of the earlier slides showed, it just uses boto to do some basic verification. And we also have unit tests in Tempest. This is a new thing that started late in Havana, when I realized that some of the wrapper scripts I had written were "super-passing" the gate, which is a fun term we came up with for when they weren't returning the proper response code and everything passed, whether it actually passed or failed. When this happened, we realized things were reaching a level of complexity where we couldn't validate them by hand and by eye anymore. We needed some automated testing of the testing framework itself built in, so we decided to write unit tests at that point. During the Icehouse cycle we significantly expanded that to cover not just the wrapper scripts but also the basic common functionality we use in Tempest. So now things like the REST client, which I've mentioned a couple of times, have unit test coverage to ensure they are always functional and that when we introduce changes to the testing framework, we don't introduce regressions or break it.

So now I'm going to talk about some of the features we have in Tempest besides the classifications of testing. One of the coolest things I think we've done since Tempest started has been running the tests in parallel. We started doing this right before the Havana-3 milestone, or no, right before the Havana RC period started. We can basically run classes of tests in Tempest in multiple workers. The number doesn't matter; by default it's the number of CPUs on the system you're running Tempest on. The advantage of this is that it more closely simulates a real user workload on a cloud, because in reality you don't have one user at a time running serially against a cloud. You have tons of users hitting it all at the same time, running API commands at the same time, and you're going to have nondeterministic behavior because of that. By running Tempest in parallel, we can simulate that to a small degree.
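As a small illustration of that model (and of why the test class, not the individual method, is the unit handed to a worker, which matters for the tenant isolation described a little later), here is a self-contained sketch of farming whole test classes out to a pool of workers. In reality Tempest leans on testr for this scheduling, and the workers are separate processes; the dummy classes below just stand in for real test classes:

```python
# Sketch of class-level parallel execution: whole test classes (not individual
# methods) are handed to workers. Illustration only; the real scheduling is
# done by testr with separate worker processes.

import concurrent.futures
import multiprocessing
import unittest


class DummyServersTest(unittest.TestCase):
    # Stand-in for a real Tempest test class.
    def test_create(self):
        self.assertTrue(True)

    def test_list(self):
        self.assertTrue(True)


class DummyVolumesTest(unittest.TestCase):
    def test_create(self):
        self.assertTrue(True)


def run_test_class(test_class_name):
    """Run every test in one class inside a single worker."""
    suite = unittest.TestLoader().loadTestsFromName(test_class_name)
    result = unittest.TestResult()
    suite.run(result)
    return (test_class_name, result.testsRun, len(result.failures))


if __name__ == '__main__':
    # One entry per test class; each worker gets whole classes, which is what
    # lets every class keep its own isolated tenant and user.
    classes = ['__main__.DummyServersTest', '__main__.DummyVolumesTest']
    workers = multiprocessing.cpu_count()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for name, ran, failed in pool.map(run_test_class, classes):
            print('%s: ran %d, failed %d' % (name, ran, failed))
```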
The primary motivator for doing this, though, was runtime. Before we turned parallel on in the gate, we had a little less than 1,000 tests; it was high 800s, low 900s. And it would take over an hour to run a full Tempest run serially. When we turned on parallel with that number of tests, a run was around half an hour. Currently we have 2,300 gating tests, and those runs take 40 minutes to an hour. So the primary motivation for parallel Tempest was to decrease our runtime, but in actuality it helped us improve OpenStack from a quality perspective, because now we're more closely simulating a real cloud end-user workload.

One of the key issues with running test cases in parallel is that they'll be overlapping with each other. This is just a simple diagram I put together to outline a hypothetical problem you could have. We have three test classes, with two tests in each class, and they all do their own thing: this test creates servers and gets servers, they all do the same kind of thing, and then the last test creates three servers and does a list operation. If they were all run as one user in parallel, that list operation would show all of the servers that were created and not yet torn down, and the list response would be hard to verify, depending on what you're looking for in the list. Serially you don't have this issue, because you create a server, get a server, create a server, get a server, delete it, go to the next class, and so on and so forth. The way we deal with this in Tempest is a concept called tenant isolation: each test class creates its own user and tenant in Keystone before it runs any tests, and then that context is used for all of the testing within that class. So if we go back to the previous example, this class would have its own tenant and its own user for testing, and the same for this one and this one, so the list would only show the servers from this class, which is what we're trying to verify. It's just a neat little trick we have in Tempest to prevent issues with parallel testing. And the only way this works is that we parallelize Tempest at the class level: instead of each individual method running in parallel, we run classes of tests, so we can take advantage of tenant isolation and prevent conflicting tests.
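A minimal sketch of the tenant-isolation idea is below. The admin-client calls and class names are illustrative placeholders, not the actual Tempest isolated-credentials code; the point is just that each test class gets, and later cleans up, its own tenant and user:

```python
# Sketch of tenant isolation: every test class gets its own tenant and user
# before any of its tests run, and all tests in the class use those
# credentials. The admin-client calls here are illustrative placeholders.

import uuid


class IsolatedCreds(object):

    def __init__(self, admin_client, name_prefix):
        self.admin = admin_client
        self.prefix = name_prefix

    def create(self):
        suffix = uuid.uuid4().hex[:8]
        tenant = self.admin.create_tenant('%s-%s' % (self.prefix, suffix))
        user = self.admin.create_user(
            name='%s-user-%s' % (self.prefix, suffix),
            password=uuid.uuid4().hex,
            tenant_id=tenant['id'])
        return tenant, user

    def cleanup(self, tenant, user):
        # Self-cleaning: remove whatever the class created when it finishes.
        self.admin.delete_user(user['id'])
        self.admin.delete_tenant(tenant['id'])


# In a test class this would typically hang off the class-level fixture, so
# only the tests inside this one class share the tenant and user:
#
#     @classmethod
#     def setUpClass(cls):
#         cls.creds = IsolatedCreds(admin_client, cls.__name__)
#         cls.tenant, cls.user = cls.creds.create()
#
#     @classmethod
#     def tearDownClass(cls):
#         cls.creds.cleanup(cls.tenant, cls.user)
```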
A new thing that we're starting with Tempest from the Icehouse release onward is something called branchless Tempest. In the past, when there was an OpenStack release, we would branch Tempest like all the other projects, so we'd have a stable Havana branch, for example, or a stable Grizzly branch, and then we would backport fixes to it as we saw necessary, and the projects' stable branches would gate using those stable Tempest branches. However, what we've found is that we actually don't need to do that, because the APIs should be stable between releases, and Tempest is all about the API, so there should be nothing in Tempest that prevents us from running master against a stable branch. So what we've decided to do is eliminate stable branches in Tempest moving forward: there will not be a stable Icehouse Tempest branch; instead it will just be master. The hope is that this will help us improve API consistency between releases, because one of the issues we've seen is that sometimes things slip through and we get API changes between releases of projects. By removing the branches, we have to run the tests from the master branch against both the stable branches and the master branches. So any proposed commit to Tempest will have to work against previous branches as well as the current one, and the stable branches will have to work with the current tests. That way we remove that exposure for slipping an API change in between releases, because the tests for the master branch are also the tests for the stable branches.

Now I'm going to talk about Tempest configuration a little bit. Currently in Tempest we have about 200 config options; it's actually a little more. The sheer number of options comes from what I said was one of the design principles, everything explicit: we have to have a knob that says whether basically every optional feature of an OpenStack deployment is on or off, or something like that. Because of that we have tons of options, and that number is only going to continue to grow as we add new APIs and new services. The intent of the config file in Tempest is not to say what should be run, but what can be run. You should only put in the Tempest config file what is actually running in your OpenStack deployment, not what you want to be running. What you want to run should be specified in the test runner, which by default in Tempest is testr from the testrepository project, but you can use any test runner and specify which tests should be run that way. We also have some facilities within Tempest to select which services' tests run and to group things logically, so you can easily run the set of things you want to run. The sample config file, which is in-tree, currently documents each of these options: it lists all of them with a description of each, so if you're looking for a reference on how to configure Tempest, that would be the place to start. The intent is for it to cover any possible OpenStack configuration. We probably have work to do there, because we mostly run this against DevStack because of the gate; we run Tempest hundreds of times a day in the gate against DevStack, so we have some bias in how we write things because of that, and we don't have a lot of feedback from larger deployments running Tempest. But the intent is that it will work against anything. The other part of the config file is that it should have sane defaults. For most of the options you shouldn't have to do anything; they should match the defaults in your deployment configuration. If you've changed something in your deployment from a default option, then you probably have to change it in the Tempest config as well.

Now, at this step I do want to apologize: I did plan to do a live demo of how to do a configuration, but what I found out is that it's actually too hard to do in slides. There are too many options; you need about 30 to 35 options to do a Tempest configuration. So I outlined the steps instead. To start, you specify the auth information to talk to your Keystone endpoint, so you can get access to the catalog, as well as user credentials to do things: your user, your password, and the endpoint connection information. From there, you have to specify the flavors you're going to be using for creating servers, as well as the image IDs and information like that, which Tempest will then use to bring up servers and create resources. Tempest needs to know, out of the pool of resources in your deployment, which ones it should be using for creating servers, creating volumes, and so on. Following that, you need to list which services are available in the OpenStack deployment, and then from there you need to look at the sections within the configuration file for those services and fill out the required options in those sections.
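To make those steps a bit more concrete, a minimal tempest.conf along those lines might look roughly like the following. The section and option names are taken from the sample config of roughly that era, but the values are placeholders for whatever your deployment actually uses:

```ini
# Minimal illustrative tempest.conf; all values are placeholders.
[identity]
# Step 1: how to reach Keystone, plus credentials Tempest can use.
uri = http://10.0.0.10:5000/v2.0/
username = demo
password = secret
tenant_name = demo
admin_username = admin
admin_password = secret
admin_tenant_name = admin

[compute]
# Step 2: which images and flavors Tempest should use to build servers.
image_ref = 3f4e2c00-0000-0000-0000-000000000000
image_ref_alt = 7a9d1100-0000-0000-0000-000000000000
flavor_ref = 1
flavor_ref_alt = 2

[service_available]
# Step 3: which services are actually deployed, so only their tests run.
nova = true
glance = true
cinder = true
neutron = false
swift = false
heat = false
```

What actually gets executed is then up to the test runner; for example, something like `testr run --parallel tempest.api.compute` would run just the compute API tests against whatever this file describes.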
I understand it's still a bit nebulous, because there are far too many options to walk through here, and that is something we hope to address in the future; in fact, tooling to help automate the configuration of Tempest is one of the topics we're going to be discussing in the design summit sessions for the QA track.

These are some of the items that are going to be discussed in the QA design summit track around Tempest, as well as plans we have in the community. We're going to be expanding the auto-generated negative tests, because as I said, the negative space is almost infinite, so we need to do that automatically so we don't waste review resources and time writing negative tests by hand. We also plan to expand the unit test coverage to prevent more regressions in-tree. As part of the branchless Tempest effort, a side effect is that we need more feature flags, which means more config options, unfortunately. This is because if we're going to be running the master Tempest tree against stable branches, we need to be able to accurately differentiate between features that are available in the stable branches and features that are only available on the current master of all of the trees. So when we introduce new features in a project, we have to make sure we have a feature flag for them; otherwise we'll try to run those tests against the stable branches and they won't work (there's a small sketch of what that looks like below). Then, as I said, we're planning some new tooling in the Juno cycle to deal with the config sprawl, because 200 options is really not manageable by a person. This is done automatically by some of the deployment systems: for example, DevStack will configure Tempest as part of its setup, and I know the OpenStack Puppet modules in StackForge have a module for configuring Tempest based on your Puppet policy. But we also want tools to do this against existing deployments. And one of the other things we've seen is that a lot of operators and users don't like running Tempest; it has an almost developer mentality in how all the tests are written and how it's expected to be run. So one of the things we've been bouncing around in the community is the possibility of making Tempest a service, kind of like any of the other OpenStack services, with its own REST API, which you would then use to run tests against an existing deployment. You would give it a configuration with an endpoint, it would go run the tests against that, aggregate the results, and store them in a database, and then you could query the API to get the results. That's something that may come out of the Juno cycle to make Tempest easier to use.
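As a small illustration of what such a feature flag looks like in a test, here is a sketch. The flag name shiny_new_api is made up for illustration, but the [compute-feature-enabled] style of option group and the testtools skip pattern are the usual mechanism:

```python
# Sketch of guarding a test with a feature flag so the same branchless Tempest
# tree can run against both master and older stable branches. The flag name
# "shiny_new_api" is made up for illustration.

import testtools

from tempest.api.compute import base
from tempest import config

CONF = config.CONF


class ShinyNewApiTest(base.BaseV2ComputeTest):

    @testtools.skipUnless(CONF.compute_feature_enabled.shiny_new_api,
                          'shiny_new_api is not enabled in this deployment')
    def test_shiny_new_api(self):
        # Only runs when the deployment (or branch) actually has the feature,
        # so stable-branch runs simply skip it instead of failing.
        pass
```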
So, where can you get some more information about Tempest? These are hyperlinks, so when the slides get posted eventually you can click on them. The Tempest repository is probably the best source of information. We don't have the best documentation; it's mostly developer documentation right now, and we plan to expand that in Juno as well. There are official docs, but like I said, they're only development documentation. They outline the classifications of tests and certain methodologies we use within the Tempest tree, but there's not much on actually running Tempest against an existing deployment. And then we're also always available in the OpenStack QA IRC channel. If you want to get involved with Tempest, we always need more developers, and we always need more people helping us and giving us feedback. The IRC channel is the best place to get that real-time feedback, as well as the mailing lists, obviously.

So thank you all for your time. That's all I had for prepared material, so if anyone has any questions about what Tempest is, how Tempest works, or anything about Tempest in general, I'll take questions now. There are mics in the aisles, so feel free to walk up to one and ask a question.

Hi, my name is Sean Collins, from Comcast. This may not be the correct format for this question, but for some of the scenario tests that we're doing for Neutron, we're doing a lot of work with IPv6. And I guess I just need to put myself out there, for both the gate and maybe also for third-party CI for some of the complex deployments and configurations that we have for the IPv6 parts of Neutron. The question is who we would reach out to for building those types of scenario tests and bringing them into the gate, because we found a lot of bugs when we were working with IPv6, in portions of the code that aren't really run and tested, like creating an instance and trying to connect to it over SSH on a v6 network.

Yeah. So during the Icehouse cycle there was actually a lot of work on Neutron testing; prior to that there was almost no Neutron testing within Tempest. Part of that was IPv6 testing: we now have config options to say that IPv6 is available, as well as feature flags within that. The people you need to talk to are probably some of the Neutron developers working on that. I know his IRC handle, I just can't remember his name, but someone has been doing some of the IPv6 work. We all hang out in IRC, or send a mailing list post, and if you have actual tests it would be great if you could just push them up; we always welcome new contributions.

Yeah, I think at this point we just have tests at the API level to create subnets with v6 attributes, but we haven't done any of the scenario testing. OK. Yeah. Cool. Thanks.

Oh hi. Hi. So, quick question. It seems like the focus of Tempest is very strongly on the API. Yes. One of the things I have a challenge with is that a lot of the defects I run into have to do with the fact that the OpenStack APIs are very transformational: they create things like virtual machines and volumes. One example: we had a bug where create server, create volume, attach it, the API action succeeds, but the actual attachment did not occur. So does that fall within Tempest's mission within OpenStack QA, or where does that belong?

So that actually does belong in Tempest; it just doesn't necessarily belong in the API tests. That's actually the role of the scenario tests, where it's an end-user workflow, and there's actually a good example scenario test that I like. I think it's called test_stamp_pattern or something.
It creates a server, creates a Cinder volume, attaches that volume to the server, logs into the guest with SSH, goes to the directory the volume is attached at, puts a timestamp in there, then tears that all down, brings up a new server, attaches that old volume to it, and goes and checks the timestamp to make sure it's the same. So that kind of testing does belong in Tempest; it's just not in the API tests, it's in the scenario tests. Thanks. No problem.

Another question about testing that should, or maybe should not, be in Tempest: Horizon. OK. There aren't a lot of tests for Horizon in Tempest right now. There's actually one; it's a really cool test, it logs into Horizon. It's a small number and it covers a small part of Horizon. Yes. Do you think that additional Horizon tests are sensible? Is that a good fit for Tempest?

So this actually came up at the Icehouse Summit as well. The problem with that is that Horizon is a lot about the UI. The APIs which Horizon uses, where all the functionality happens, we have good test coverage for. The thing that Horizon does that's different is the UI, and the problem is the UI changes constantly between releases and all the time during development, while Tempest is more about stability verification. So if we were gating on UI changes using one of the web UI test toolkits that are out there for testing websites, whether in Tempest or a similar tool, what would happen is that every time you had a Horizon UI change, you would have to make a Tempest test change, or you wouldn't be able to merge your UI improvement. Because of that, Horizon tests like that don't really have a place in Tempest. But if there's other kind of verification you can do for things that should be constant, like logging in, that would have a place in Tempest. The problem is that those kinds of tests are very hard to write. Understood. Yep. Thanks. No problem.

Hey, you mentioned the scenario tests and how you're testing with DevStack. The question I have is: does Tempest today support something like multi-node DevStack for scenario testing?

So we don't actually gate on that, and because of that, we have a policy within Tempest that we can't add any new tests where we don't have functional verification on the commit, for example in the gate or from a third-party test suite. But in theory, nothing in Tempest prevents it from running against a multi-node deployment; we just don't actively test that as part of our CI system.

So in case we wanted that to be a mandatory option in Tempest, would we need to add additional tests in the gate to have that? Yeah, if you want to add tests that require multi-node environments, we have to have either a gating environment to do it within the OpenStack Infra system, or you can provide a third-party testing system, like some of the projects are requiring for drivers and other things. Yeah, the reason I'm asking is that I'm working on the distributed virtual router for Neutron, and for the scenario tests for the distributed virtual router, I need a multi-node setup in Tempest and also in the gate to make it work. Yeah, unfortunately, we've been burned in the past by loosening that requirement for having verification of tests; things bit-rot very quickly, just because of the sheer size of the project and how quickly it's growing. Okay, thanks. Thanks.

Could you expand a little bit on the third-party testing piece?
Is that basically where tests that don't belong in Tempest are supposed to go? No, that's the location for anything that's in-tree but is an external API. The only one we really have right now is Nova's EC2 implementation. But if there were, say, an in-tree Google Compute Engine API, or if Swift adopted their S3 implementation in-tree, we'd have to test it, but it's not an OpenStack API, so we separate it out. And because it's not an OpenStack API, it also gets a lesser priority, so we don't do direct API testing; we use a client for that API.

So, just a specific example: something like deep functional tests for the Cinder backends, the Cinder driver stuff. Obviously you can't gate on that, because you don't have, for instance, a NetApp backend to test on. Where would those tests live so that they're accessible to the community, or is that just not part of the code? So the problem is we don't actually test backends directly. The backend shouldn't matter from the API perspective, so we won't be testing that directly. From the Tempest perspective, the API should remain constant whether it's a NetApp backend or a Storwize backend or any of the other backends. But if you want to do functional validation of that, it would belong in the Cinder tree or outside the tree.

Okay, sort of related, I guess, to where stuff lives: is there any movement on making it possible to run or write Tempest tests that live outside of the main Tempest repo? Yes, I should have put that on the future work list. As part of making Tempest more accessible, we want to basically libify it (that's probably not a word), meaning make Tempest importable in other projects so they can use some of the common glue we have, like the unified REST client, which supports testing very well, and things like that. So it's a future item; unfortunately, it's kind of difficult to do that right now. Okay, thanks.

Hi, we are adding a mechanism driver for a particular vendor, so how will the Tempest testing cover that? So, as I said before, Tempest is all about the APIs, and a new driver shouldn't affect the API in any way. The API should be consistent, independent of what's underneath whatever OpenStack is running against. So if you have tests that are specific to your driver implementation, they belong in the project as unit or functional tests, but if you want to add increased API test coverage, then that belongs in Tempest, just to verify the API. Your driver-specific stuff does not belong in the Tempest tree. Okay, so if we do not introduce an API change, then these Tempest tests don't apply. Correct. Okay, thank you. No problem.

Well, I guess there's one more question. One more question: Timur, from Mirantis. So right now we use Tempest to test for regressions on each new commit, but do we plan to use the Tempest test suite to validate a production-ready deployment of OpenStack? So that is part of Tempest's job. We don't have a regular job to do that, because we don't actually have access to deployments. I'm sure public clouds and other deployments are using Tempest for validation, but we as a community don't actually do that, because we don't have our own deployments; we only have the community CI system, so that's where we use it. But by design, Tempest should work in that role, and it should perform well.

So no, but these are just simple tests, yes?
And if all the Tempest tests pass, it doesn't mean that the cloud will work without any problems in production. I mean, that's a test coverage issue; more tests are always better, and there's nothing we can do about that. We have scenario tests which are designed to simulate typical user workflows, and we use those to do some validation of a deployment. That's really the most bang for the buck for verifying a deployment. The API tests, not as much, because they're more about API consistency, but the scenario tests are really where you get the value for verification, and expanding those will help improve deployment verification. Okay, thank you. No problem.

Well, I guess that's it for questions. Thank you all for your time.