Oh, that's blinding. OK, I guess we are at time. So hello, everyone. Thanks for joining here today. My name is Andrea Frittoli. I work for Hewlett Packard Enterprise. I'm QA tech lead for the Helion distribution we make there, and I'm a member of the core QA team for OpenStack. So today I will be talking about stable interfaces in Tempest and how to use them for OpenStack integration testing. One logistic note: I may say Neutron or Newton, and you may understand either of them. There was an email in the mailing list, April 1st, about renaming Neutron back to Quantum. Maybe it wasn't such a bad idea. Does this work? Right.

So how many of you in the room are familiar with QA for OpenStack and Tempest? Well, quite a few, but probably half of you. OK, I'll provide some more details. Basically, what we do in the QA team, our official mission statement, is to provide tools and initiate projects and initiatives to ensure that OpenStack quality is maintained at any time during the release cycle. So our job is to make sure that you can go and take OpenStack at any point and find the quality you would expect. If I learn how to use this. Right, so we have a number of QA projects that we maintain to achieve that. I've kind of grouped them by topic. The first three are related to syntax checks that you may want to use in your project, for your JSON code, your Bash code, or your Python code, whatever technology you use in your project. Then Tempest, Tempest Lib and Grenade are really about tests and test frameworks: they contain the framework and the tests to verify your deployed cloud, and upgrades of the cloud in the case of Grenade. DevStack and related projects allow you to deploy a cloud; not a production cloud, maybe, but a cloud which is good enough for testing purposes.
Then down towards the bottom, Stackviz and the OpenStack Health dashboard are related to the collection and visualization of test results. Plus, we have other tools like os-testr, which is a wrapper around testr to make testr nicer for our needs, basically; a cookiecutter that you can use to start development of plugins; and the rather new os-performance-tools, which is a collection of Python tools to gather performance data from the cloud.

So what is Tempest? Tempest is the integration test framework for OpenStack. It's strictly black box testing, so API driven testing. And it includes the framework and the tests which are executed thousands of times every day in the gate. So whenever you submit a patch to some project in OpenStack, it's quite likely that some DevStack cloud will be started and then Tempest tests will be executed against it. When Tempest started, the scope was relatively small: a handful of projects, a little more, at the time. Then more and more projects started to be incubated and then integrated into OpenStack. So we started having more and more tests to include in Tempest, and more knowledge was required in the QA team to manage those tests. We already started to realize at that time that the model didn't scale very well. But then when the governance model changed to the big tent, around Liberty time I think, well, we realized it really didn't work like that. We couldn't provide enough QA coverage for all the projects that were coming into OpenStack at that point in time. So we had to change model. We decided that we should use a more distributed approach to QA and, as a QA team, only maintain the tests for a small core of services, and let all the other projects care for their own QA using the tools that we provide. The first initiative that we started was Tempest Lib. It was initially tempest_lib, Tempest underscore Lib, as Matt can tell you.
So the idea was to create a stable interface within Tempest: a part of the interface that we guarantee is backward compatible, that other projects can consume and use to create their own tests. And we wanted to do that within Tempest. However, the Tempest namespace was taken on PyPI; someone else had a Tempest project there. So we couldn't do that, and Tempest Lib as a separate repo was born at that time. Recently, Matt managed to get the Tempest namespace back, because it's not used anymore; the project that used that name is not alive anymore. So we moved back to a namespace within the Tempest repo. Later, towards the end of Liberty, the plugin interface was introduced in Tempest as well, which allows you to define a set of tests and configuration items that can then be loaded into Tempest.

So what are the projects that we support? Keystone, Nova, Glance, Cinder, Neutron, and Swift. These tests are part of the Tempest suite, and they're executed as part of the integrated gate, which means that every change which runs the integrated gate runs all the tests for all these projects. So where do the other tests live? They're a combination of things. There are several Tempest plugins out there in different repositories, there are CLI tests for several Python clients for the different services, and there is a combination of other functional and integration tests that use different parts of Tempest's stable APIs. Those tests are not executed against changes in Tempest; they are only executed against changes to the repositories that own those tests. And because of that, they should only use the stable interface of Tempest, because otherwise they have no guarantee that they won't break at some point. So six months ago, we presented Tempest plugins, and this was the situation in Tokyo. The interface was pretty new, so there were only four plugins out there. But now, it's a bit different.
So we are almost at 30 plugins available, which is a good number. I don't know if you can read the names there, maybe a bit small. But something that I wanted to note is that you see the names of the repositories there in the graph, or how does this work? Anyway, you see in the graph the names of the repositories where the Tempest plugin is actually hosted. And in probably 90% of the cases, the plugin is hosted within the repository of the service it wants to test. There are some exceptions: the Sahara tests and the Intel NFV CI tests. But those are probably not even plugins. Oh, no, they are plugins. But yeah, those are the only two exceptions that I can see.

While there is a certain degree of convenience in hosting the plugin in the same repository as the service, because it allows you to change the service API and the plugin, so the tests, at the same time with one change, there are also disadvantages in doing that. One is that in terms of dependencies, it is not possible, while the plugin is hosted in a repo for a service, to declare the right requirements for the plugin. It means that any time you want to go and install a plugin, you actually have to install all the dependencies of the service that the plugin is testing. This may not be a problem if you're just on a single node DevStack. But if you start to consider different deployments, where you have an actual cloud and a test driver, then you end up installing on your test driver a lot of dependencies that you don't need to install. The other thing is that Tempest is branchless. That means that we run the same version of Tempest against all stable branches that are supported in OpenStack. And if you write a Tempest plugin in the repository of a service, which is branched, it means that your plugin will be branched as well, which is not consistent with the model we have in Tempest. So you will run into problems eventually when the next release comes out.
Same type of graph for CLI tests. Over time, you see that different Python clients have started to introduce CLI tests based on the stable interface. Other tests that I could find: there are a couple of Tempest plugins which are almost ready, not used yet as far as I could tell, from Kingbird and Vitrage. There are several Tempest tests in different repositories which could become plugins eventually. I know, for instance, that the Designate team is already working on creating a proper Tempest plugin in a dedicated repo. Then other types of tests: there are some test suites which only use part of the available stable API, like the REST client and a little more, which is good to see, because the idea of having a stable interface was exactly that: to have something out there from which people can pick and choose and use what they want, what is actually best suited for the type of testing that they need to do. I've seen examples where the REST client is used and maybe other interfaces are mocked to provide more efficient testing for specific projects.

So what are these stable interfaces, in fact? This is the full list, I hope. There are clients and authentication providers and many more. Rather than going through all of them one by one in this list, I wanted to show how they are used across different repositories. What I'm showing here is the number of repositories that use a certain interface at least once via an explicit import. It was interesting to do this graph, because basically I could verify that pretty much most of the interfaces that we expose in Tempest Lib are being used by projects. Things like the REST client. It works somehow, I don't know how. Well, things like the REST client are used by a good number of projects. Decorators and exceptions are used a lot as well. The second most used, actually, interestingly, is utils.
It's a simple thing that we expose, basically a method to generate test data, things like a random name or a random password. It seems to be pretty popular. Down towards the bottom of the graph, I don't know if it's readable, but you can see for instance things like auth. You would expect auth to be used by most projects, but in fact it is not explicitly imported. It is used, but indirectly. There are some other interfaces within Tempest that basically allow you to hide that and avoid the need to call that interface directly. There are other interfaces that are not yet stable, but they are potential candidates to become so. Some of the service clients are not yet in the stable area; we're working on making them stable. Credential providers, those are tools that we'll describe more later, but basically tools for getting the credentials that you can use in your tests. Client managers, to collect all your clients in a single object. And actually the plugin interface itself, which is not stable yet. It's not in the stable area, but it is in fact stable, so it's pretty much just a matter of moving it over. And the attribute decorator is pretty much used everywhere, so it may be worth moving it over. It's very small, probably five or six lines of code, I don't know.

And the same graph here for internal APIs. There are a lot of them that are being used. The config one goes along with the plugins one. There is a little utility, again three or four lines of code, in the config module that allows you to register config options, and that is used by pretty much all plugins. That's why there's a spike on config. And then test is the attribute decorator again; it's creating that spike over there. There are other interfaces which are used that are not stable, like the manager for the scenario tests and so forth. So we may consider moving them to stable in the future.
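As a rough illustration of what that data-generation utility does, here is a minimal standard-library sketch. This is not Tempest Lib's actual implementation, which may differ in detail; the function simply mirrors the idea of a random-name helper that keeps parallel test runs from colliding on resource names.

```python
import uuid


def rand_name(name='', prefix=''):
    """Generate a quasi-unique name, in the spirit of Tempest Lib's
    data-generation utility (illustrative sketch, not the real code).

    A random hex suffix is appended so that tests running in parallel
    do not create resources with clashing names.
    """
    suffix = uuid.uuid4().hex[:8]
    parts = [p for p in (prefix, name) if p]
    parts.append(suffix)
    return '-'.join(parts)
```

Used as `rand_name('server', prefix='tempest')`, it yields something like `tempest-server-3fa81c02`, a fresh value on every call.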
And if you are writing integration tests and there is an area of Tempest that you're using that you think should be in the stable area, please let us know. You're also welcome to contribute to making that a stable interface. Basically, the requirements we have for an interface to be stable are, first of all, that it should not depend on configuration, because we don't want consumers of the stable interface to depend on Tempest configuration. And secondly, once we move an interface into the stable area, we guarantee backward compatibility, so we should provide a decent level of documentation and a good interface that will not change in the future.

To provide some more details about the different interfaces, I wanted to go through an example of how to write a Tempest plugin and where these different interfaces come into play. The first is the Tempest plugin interface itself. That's an interface that allows you to specify to Tempest where to find a group of tests that are external to Tempest, and to provide a set of configuration items which are specific to those tests and not native to Tempest. We plan in the future to do some extension to this which will allow you to integrate custom service clients. So if your service comes with its own specific service client, then you can integrate it via the plugin interface. Plugins are discovered, oops. Plugins are discovered via stevedore, which means that as long as they are installed and visible to Tempest, they will be discovered and the tests and the configuration options will be loaded and become available. Normally we install Tempest in a dedicated virtual environment, and there is one specific environment in Tempest's tox.ini which creates a virtual environment with site-packages enabled, which allows Tempest to discover plugins which are installed system wide, so externally from the virtual environment where Tempest is installed.
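For reference, the stevedore discovery mentioned above works through a Python entry point: a plugin repository registers its plugin class in its own setup.cfg under the tempest.test_plugins namespace. The module, package, and class names below are made-up placeholders; only the entry-point namespace comes from Tempest.

```ini
# setup.cfg of a hypothetical "my-service" Tempest plugin repository
[entry_points]
tempest.test_plugins =
    my_service_tests = my_service_tempest_plugin.plugin:MyServiceTempestPlugin
```

Once the package is installed somewhere visible to the Tempest virtual environment, stevedore finds this entry point and Tempest loads the plugin's tests and config options automatically.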
There is also something that I wanted to note: once the plugin is installed, it will automatically become visible to Tempest. So if you don't want to run its tests, you should not install the plugin, because otherwise there is no way that you can install the plugin and configure Tempest not to run it, unless you then specify a regular expression to filter those tests out. This is an example from Manila of how it may look. Basically, in the top line you can see that all you need to import is plugins from tempest.test_discover. If you want to use the register_opt_group method, then you can import it from tempest.config as well.

OK, once you have your plugin structure in place, one thing that you may need to do is to write your own service client. That's not necessarily the case for all plugins; you may want to write a plugin where you just bundle together a bunch of Neutron tests that you don't want to run in the integrated gate. In that case you don't need an extra service client. But in most cases the plugin will be associated with a dedicated service, and then you will need an extra service client. To do that you can use the REST client interface, which provides methods to run all the different HTTP methods and to decorate requests using the auth providers. It provides you with validation of HTTP status codes and also with handling of non-2xx return codes. In terms of service clients, as you've seen before, a number of the service clients for the core services are already part of the stable interface. The remaining ones are being moved over. They provide a single method for each specific API: for each API available from each of the core services you have a dedicated method, and they normally allow you to pass any parameter to the API call, so in case you want to do fuzz testing or negative testing, or play with the parameters that you pass into the API, you can do that.
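To illustrate the pattern, here is a toy, self-contained stand-in for that REST client idea: one method per API call, auth decoration applied in a single place, and validation of the returned status code. This is not Tempest's actual RestClient; the transport is injected so the sketch runs without a real cloud, and every name in it is illustrative.

```python
import json


class UnexpectedResponse(Exception):
    """Raised when the server returns a status outside the expected set."""


class MinimalRestClient:
    """Toy sketch of the REST client pattern (not Tempest's real class).

    transport is any callable(method, url, headers) -> (status, body),
    which stands in for the real HTTP layer.
    """

    def __init__(self, auth_token, transport):
        self.auth_token = auth_token
        self.transport = transport

    def _request(self, method, url, expected_status):
        # The "auth decoration" step: every request gets the token header.
        headers = {'X-Auth-Token': self.auth_token}
        status, body = self.transport(method, url, headers)
        # Status-code validation: anything unexpected raises immediately.
        if status not in expected_status:
            raise UnexpectedResponse(
                'got %d, expected one of %s' % (status, sorted(expected_status)))
        return json.loads(body)

    # One method per API; in the real clients extra kwargs typically pass
    # straight through, which is what enables negative and fuzz testing.
    def show_server(self, server_id):
        return self._request('GET', '/servers/%s' % server_id, {200})
```

A test can then plug in a fake transport that returns canned `(status, body)` pairs, which is exactly why injecting the transport keeps the sketch self-contained.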
This is an example, the telemetry client that is now in a plugin. It shows you that, basically, importing the REST client from tempest.lib, you can define your own REST client. What they do in this specific plugin is also to define an extra manager, and you can see below that in the manager object the telemetry client is instantiated by passing in the auth provider plus a set of parameters. So where do you get the auth provider from? The auth provider comes from the authentication layer in Tempest. The authentication layer provides you tools to basically encapsulate the credentials for your test accounts into credential objects. It provides facilities to select the endpoints from the catalog based on different filters, like region and endpoint type and so forth. It allows you to decorate requests with auth information, for Keystone V2 or V3, and also to do things like injecting alternative data, if you want to play in your tests with the auth data and mix proper tokens with invalid data, to verify things like a tenant, project I should say, cannot access other projects' resources. This is an example of using the auth layer. An interface in tempest.lib which is not yet part of the stable interface, but which we are working on making stable, is the client managers. A client manager provides you one object which is bound to a set of credentials, and from that object you can get access to all the service clients which are registered. It's nice to have, in the sense that it hides the complexity of the auth layer: if you have a client manager object, you can initialize it with a set of credentials and it will provide you with all the clients, initialized with the right auth provider and everything that you need, so you don't have to worry about anything else.
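The client-manager idea can be sketched in a few lines: one object bound to one set of credentials, handing out service clients built with those credentials. This is a toy illustration, not Tempest's real manager API; all names are made up, and clients here are built lazily on first access, in the spirit of the lazy loading the team was working on.

```python
class MinimalClientManager:
    """Toy sketch of a client manager (names are illustrative only).

    client_factories maps a service name to a callable that builds a
    client from a set of credentials, standing in for the registered
    service clients in the real manager.
    """

    def __init__(self, credentials, client_factories):
        self._credentials = credentials
        self._factories = client_factories
        self._cache = {}

    def get_client(self, name):
        # Lazy loading: each client is built only on first access, bound
        # to this manager's credentials, then cached for reuse.
        if name not in self._cache:
            self._cache[name] = self._factories[name](self._credentials)
        return self._cache[name]
```

The caching means that a manager for, say, a Keystone-only test never pays the cost of configuring or initializing clients for services the test never touches.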
What we're working on is a way to register service clients from plugins into the existing managers from Tempest, so you can get them from there, and also lazy loading of clients, which means that if you get a client manager object, not all the clients will be initialized. That makes sense, because in some cases you only want to test your own service, maybe your service plus Keystone or plus Nova, and you don't want to have all the config parameters that are required to initialize all of the other clients. Again, an example: the managers are in tempest.manager, and in tempest.clients as well there is a more sophisticated version.

And finally, credential providers. Credential providers are also not yet in the stable area of Tempest, but we are working on moving them there. They allow you to get sets of credentials for your tests. When you want to run multiple tests in parallel, then if you want to keep them isolated, you may need several credentials, maybe one or two sets for each parallel stream that you're running. These credential providers give you a way to have access to those sets of credentials. They also manage network resources associated with those credentials. There are two of them. One is the dynamic credential provider, formerly known as tenant isolation, but now the word tenant is forbidden. As long as you have access to admin credentials, and you are allowed to put your admin credentials into your Tempest config file, then you can use the dynamic credential provider, which will create credentials on the fly for your tests. So each test class, when it starts, will get one or two sets of credentials. The other alternative, if you don't have access to admin credentials at test runtime, or simply if you just want to reuse test accounts multiple times, which is a valid use case as well, is to use the pre-provisioned credentials provider. In that case you create the credentials that you want to use for testing before running the tests.
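Such a pre-provisioned accounts file is plain YAML, one entry per set of credentials. This is a minimal sketch only: all user, project, and network names are hypothetical, and the exact set of supported keys is the one documented by Tempest for its accounts file.

```yaml
# Hypothetical accounts file for the pre-provisioned credentials provider.
- username: 'tempest-user-1'
  tenant_name: 'tempest-project-1'
  password: 'secret-1'
- username: 'tempest-user-2'
  tenant_name: 'tempest-project-2'
  password: 'secret-2'
  resources:
    network: 'tempest-net-2'
```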
You store them in a YAML file, you feed the YAML file to Tempest, and the pre-provisioned credential provider will allocate them, taking care of the locking to make sure the credentials are not used by two different tests at the same time.

Finally, if you're working on a new service or an existing service, it may well be that your API supports microversions, and so if you're writing tests for microversions you can use a set of utilities from Tempest Lib to deal with that. They allow you basically to define the acceptable range of microversions for the test that you're running, so you can say this test is only valid between versions X and Y. They allow you to match that against a configurable range of microversions as well, and they select accordingly the microversion to be sent via the API when you make requests. This is an example, actually from Tempest's Nova tests, of using the microversion framework.

So yeah, there are several other miscellaneous utilities in the lib area. There is one to generate random test data, which is pretty popular. You have an SSH client you can use if you want to do validation on VMs. We have skip decorators that you can use to skip tests based on different conditions, like a service not being available, and test attributes, well, that's not yet stable, but as I was saying it's just a few lines of code and it's widely used. The same for CLI tests. So what kind of stable interfaces are available to you if you want to write tests for your Python clients? There are four pieces that you can use. One is the execute command, which gives you access to running an external command, like nova or neutron. You have a CLI client class, which is a wrapper around that; it wraps around the execute method and basically gives you a nicer interface.
We have an output parser, which allows you to parse the console output of clients, output in tabular format for instance, and the client test base, which is the base test class that you need to implement to run the tests, and that provides you with the former three pieces already wired in. This is an example from the Mistral CLI tests. The only thing they need to import in there is actually base, because once you get the base class, then you get everything else from it.

Right, and that's all, basically. This is a list of more references: the developer documentation of Tempest, the plugin documentation, the link to the source code for this presentation. And yeah, you can find us in #openstack-qa on Freenode. I don't know what the time is. There is, yeah, there is still some time for questions. So if you have any questions, you can ask them here now or just find me later as well. There are two microphones in the two aisles, so if you want to ask any question, please use the microphones.

Just one quick question. Are there minimum requirements for the cloud that you're running Tempest against, as far as memory and CPU, and for where you're running Tempest from as well, for it to operate properly?

Right, so in terms of sizing of your cloud, it depends very much on the level of concurrency that you are using in your test driver. If you want to run multiple tests in parallel, you may need more resources in your cloud, of course. But we found that if you go beyond, I think, four parallel streams, you don't gain very much in terms of the time it takes to run your tests. You only gain in the amount of stress you may put on your cloud, just because some of the tests are long running, so the rest of the tests take less time to execute than that single test. In terms of connectivity, it really depends on how your cloud is configured.
And so the minimum requirement is that you need connectivity to the endpoints as they are specified in your catalog. As long as you have access to the endpoint for authentication, then you can download the catalog, and if the URLs in there are reachable by Tempest, then you should be able to test. There are some other requirements that you may run into if you want to do things like validating your VMs. If you, for instance, start up servers and you attach floating IPs to them, then you need to make sure that your test driver is somehow connected to the network which is providing floating or public IPs to your VMs. But otherwise that's all you need. The tests are strictly black box, so you only need access to the APIs, and optionally to the VMs.

How does this compare against Rally, which is another popular testing framework for OpenStack? Well, they're separate, so to say. Rally is part of its own program; it's not part of the QA program in terms of governance. There are some parts that overlap between what we provide here and what is provided in Rally. I'm not an expert in Rally, to be honest, so I don't want to comment too much on that. I know Rally provides tools for benchmarking that are not covered in the current Tempest tree.

And yeah, so if there are no more questions, I think it was a long day, our first day of the summit. So thanks everyone.