Okay, I think we can start, punctually. This is a session about OpenStack quality assurance, a beginner session, so I will talk about the very basics of OpenStack quality assurance. My name is Mark, I've been working for Deutsche Telekom, and I'm a core team member of OpenStack Tempest, which is a project inside the QA program. The agenda for today: we will talk about the program, what kinds of projects are in it, and why they are in the QA program. We will talk about the continuous integration system and what kind of system triggers the quality assurance process, and then we will do a deep dive into one particular project, where I show what actual tests look like and how to write them. And at the end we can talk about where we need help from contributors in the QA program.

First I will start with motivation, my personal motivation to contribute to OpenStack QA. On the one hand, there are a lot of discussions about the quality of OpenStack. When I started with OpenStack during the Grizzly release cycle, we had a lot of discussions about whether OpenStack is enterprise ready. So the question is how somebody can solve this, and I think one possibility is to contribute to OpenStack QA and make OpenStack a more robust and reliable product. Currently we have a lot of discussions about carrier grade. At least from my view as an employee of a telco, I hear every day: is OpenStack carrier grade? And I think one of the solutions would be for a lot of people to contribute to the QA program and bring their feedback into the community. So this is one motivation. Another motivation I have is the team of the QA program. Usually, as an OpenStack or open source developer, you choose the program where you would like to contribute; you have a lot of freedom. I think we in the QA program have a really good team spirit.
You can join our IRC channel, just write a question, and you will see there are a lot of people involved who will help you set up your quality assurance process. And the team is spread around the whole globe, so we can cover all the time zones from Japan to Europe to the US.

So let's go a bit deeper into the QA program. The main thing to keep in mind is that the QA program doesn't sit inside one of the OpenStack core projects. It's usually external: we have a running cloud system somewhere. It could be your production cloud, it could be your continuous integration cloud, whatever. You have one cloud and you want to validate whether this cloud works or not. There's one core project called Tempest. Tempest holds all the tests that are run, and we have a variety of tests. We have, for instance, API tests; they are usually quite simple. They go to a certain API of OpenStack and check whether everything works. We have more complex scenario tests that try to validate real-life scenarios, like: I want to boot two VMs with a network and check whether they can really connect to each other. That would be a scenario test, and it will validate this end to end. We have other, more non-functional tests like stress tests. They run every night and check whether your OpenStack cloud is really robust enough to handle a lot of stress. Something that is new in Tempest during this cycle is that we are now separating our test cases from the actual framework. Tempest-lib was created during this cycle and we will enhance it during the next cycle.

One important thing in cloud is always automation. To really test a given state of OpenStack, we have to have a tool that sets up a cloud. In our program we have a tool called DevStack, which is mainly a shell script that sets up an OpenStack cloud.
So we use this in our continuous integration system to build our OpenStack installation and then verify whether the state of the OpenStack that was built is really workable. Then we have another project called Grenade. During the keynotes we heard that OpenStack needs processes to upgrade the cloud, and Grenade does that. It takes a stable branch, sets it up with DevStack, verifies with Tempest that the OpenStack installation works, then upgrades the cloud to the new state and verifies that everything still works. What's important is that during the upgrade process certain artifacts are created, so that something is really in the system that can be upgraded, and afterwards we verify that the upgrade was successful. This is, I think, one important piece for reaching the goal that OpenStack is upgradable, or easily upgradable.

So these are the main projects we have in the QA program: Tempest for tests; DevStack to build a cloud, but only for development or a continuous integration system, not for production; and Grenade to upgrade a cloud and test it during upgrades. It's important to understand that for our systems we build an OpenStack and verify through the APIs whether it really works. Inside the projects there are other quality assurance efforts like unit tests; Keystone, for instance, has unit tests inside, and the projects usually also have things like code style checks. We take the other view: we want to have a running cloud and verify against it.

Another thing which is quite important for QA, because this is why QA was started, I would say, is the gate process. The gate process: if you are a developer and have a certain patch or delta that you want to contribute to OpenStack, you have to go through a process.
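Before going into the gate process, the Grenade flow described above (deploy the stable branch, verify, create artifacts, upgrade in place, re-verify) can be sketched roughly like this. This is a simulation of the idea, not real Grenade code: the "cloud" is a plain dict, and every function is a hypothetical stand-in.

```python
# Rough sketch of the Grenade idea: verify the old release, leave
# artifacts behind, upgrade in place, then verify the artifacts
# survived. The "cloud" here is a dict standing in for a deployment.

def deploy_stable():
    """Stand-in for DevStack deploying the stable branch."""
    return {"release": "stable", "resources": []}

def smoke_test(cloud):
    """Stand-in for a Tempest run against the cloud."""
    assert cloud["release"] in ("stable", "master")

def create_artifacts(cloud):
    """Create resources that must survive the upgrade."""
    cloud["resources"].append("server-1")

def upgrade(cloud):
    """Stand-in for the in-place upgrade to the new release."""
    cloud["release"] = "master"

def grenade_run():
    cloud = deploy_stable()
    smoke_test(cloud)           # old release works
    create_artifacts(cloud)
    upgrade(cloud)
    smoke_test(cloud)           # new release still works
    # The crucial Grenade check: resources created before the
    # upgrade are still there afterwards.
    assert "server-1" in cloud["resources"]
    return cloud
```

The interesting design point is the artifact step: without resources created before the upgrade, a successful re-verification would only prove that a fresh install of the new release works, not that an upgrade preserves state.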
And after many days and many reviews and comments, you will reach your goal: the patch is merged into the master of OpenStack. This is the picture of the review process; it is shown quite often. On the right side you see your local environment: you, as the developer, develop your code and have it locally. In the middle we have the review system. We do several things there: on the one hand our automated testing, and on the other hand the manual code reviews from the core reviewers of the projects and from the community as a whole. If you upload your patch to the Gerrit system, a Zuul job is automatically triggered, and this is our check queue. Here Tempest will run for some time and test whether your patch really works or not. After that, if your patch works and the core reviewers, or at least two core reviewers, agree to the patch, it will go to the gate queue and actually be merged into the master of the project.

The basic process is that the patch comes in, a VM is built, and DevStack creates an OpenStack installation from that state with your patch included; then Tempest runs all the functional tests against the VM and exercises the patch. The result will be a plus one or a minus one in the review system. Plus one means: yes, your patch works, everything is fine. Minus one means: something went wrong, please have a look.

So let's go a bit deeper into Tempest. As I said, Tempest is the framework for the actual tests, and we have a variety of tests. We have API tests, which go directly to the API of an OpenStack server and try a certain test case.
As I already mentioned, scenario tests are more real-life behavior tests. We also have command line tests, and it's important to understand the difference: nearly all the tests have their own REST client, so we connect directly to an OpenStack API and do not use the command line tools; but the CLI tests do use the command line tools directly and exercise them. We have some third-party tests for libraries that are important, and we have negative tests: they try to do something wrong, and we want to test that behavior, whether the right error code comes back and so on.

Usually these test cases are quite simple to read, and I think also to write, because we try to reduce the complexity by using a class model. We usually put all the common functions and functionality that we need into our base classes. Here, as an example, a flavor test, creating an OpenStack flavor, inherits from a base compute test, so a lot of functionality is already implemented in the base compute test, and some functionality is implemented in the base test case. In the end we have a quite short test that possibly just creates a flavor, and it's just a few lines, because we can reuse a lot of functionality in our framework.

Around these test cases we have the Tempest framework, which we are currently trying to move into the Tempest library. As I already mentioned, we have our own REST clients, and they use inheritance as well: we have a base class called RestClient, and for each client, for flavors, volumes, and a lot of other things, we have our own API clients that connect directly to the API servers. We do a lot of other things, like checking the response automatically. This is schema validation: we get our results, we have a definition of how the result should look, it's a JSON definition, and we check whether what comes back really matches the schema we want.
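The schema validation idea can be shown with a minimal hand-rolled check. Real Tempest uses full JSON Schema documents per API response; the `FLAVOR_SCHEMA` here, its fields, and the `validate` helper are simplified illustrations of the concept only.

```python
# Minimal sketch of response schema validation: declare what a reply
# must look like, then check every response against that declaration.
# Real Tempest uses full JSON Schema definitions; this is just the idea.

FLAVOR_SCHEMA = {
    "id": str,
    "name": str,
    "ram": int,
    "vcpus": int,
}

def validate(body, schema):
    """Check required keys and their types; return a list of problems."""
    errors = []
    for key, expected_type in schema.items():
        if key not in body:
            errors.append("missing key: %s" % key)
        elif not isinstance(body[key], expected_type):
            errors.append("wrong type for: %s" % key)
    return errors

good = {"id": "42", "name": "m1.small", "ram": 2048, "vcpus": 1}
bad = {"id": "42", "name": "m1.small", "ram": "lots"}  # bad type, missing key
```

The same declarations are what make the automatic negative tests possible: once the framework knows a field must be an integer, it can generate a request with a string there and assert that the API rejects it.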
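The class model for tests can be sketched with plain `unittest`. This is a stand-in, not real Tempest code: the class names echo the talk's description, but `FakeFlavorsClient`, the config attribute, and the method bodies are hypothetical simplifications.

```python
# Sketch of the Tempest class model: common plumbing lives in base
# classes, so the actual test is only a few lines. All names here are
# simplified stand-ins for the real Tempest base classes and clients.
import unittest

class FakeFlavorsClient:
    """Stand-in for a flavors REST client."""
    def get_flavor_details(self, flavor_id):
        # Tempest REST clients return (HTTP status, body) pairs.
        return 200, {"id": flavor_id, "name": "m1.small", "ram": 2048}

class BaseTestCase(unittest.TestCase):
    """Common plumbing (auth, config, cleanup) would live here."""

class BaseComputeTest(BaseTestCase):
    """Compute-specific setup: shared clients, configured flavor, etc."""
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.client = FakeFlavorsClient()
        cls.flavor_ref = "1"   # normally read from the Tempest config

class FlavorsTest(BaseComputeTest):
    def test_get_flavor(self):
        # Only two active lines: everything else is inherited.
        resp, flavor = self.client.get_flavor_details(self.flavor_ref)
        self.assertEqual(200, resp)
        self.assertEqual(self.flavor_ref, flavor["id"])
```

The design choice is that a new test only describes *what* to exercise and *what* to assert; client setup and authentication come for free through inheritance, which keeps the test bodies short and uniform.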
We have automatic negative tests, and this is the same approach: we have a definition of how a request is built, and with this information we can automatically generate negative tests. For example, if a parameter has to be an integer, say for memory, you can send a string to the API instead; it makes no sense and you expect an error. This is something you can create automatically, the auto negative tests. Another quite important thing is authentication: you usually don't have to care much about authentication, because everything is already handled in the base classes of our framework, so you just describe what you want to test and the authentication is handled under the hood. We also have stress tests. A special case is the CLI framework: since we execute shell commands, we have to parse the output and so on, and this is done by the CLI framework. An important point is that Tempest itself has unit tests, like all the other libraries and projects inside OpenStack. And Javelin is a process for Grenade to create artifacts: Grenade is an upgrade framework, and we want to have artifacts that we can test on.

So here is a quite simple test case that I want to show. The test case just gets a flavor and tries to validate that everything is fine with it. We see only two active code lines here. It executes self.client.get_flavor_details; this is a function that is already predefined in the client class, and we rely on an already defined flavor that is configured in Tempest. What we get as a result is a response code and a flavor ID. After that, and this is quite important, if you have a result in the test case you have to verify that this result is actually what you want. The self.assertEqual is the actual check of the test case, whether it really works. So this is basically a quite simple test, but if you browse through the API tests in Tempest you will
see that a lot of those tests are quite simple and look like this.

Now some guidance on how to write good tests. The test name is quite important, because if the test fails you will see it in the logs, and other developers who have issues with this test have to identify what is going on: is it something that I broke, or is it something that is broken in the system? So a test name is quite important for us. If you see that in the test file you are in, a lot of functionality is copied and pasted through all the files, you should think about reusing functionality: put it into a base class or into helper classes or helper functions and reuse it. Also quite important: if you create something, check that the creation worked, and check it right away on the line where you created it, because you want to fail fast. We have thousands of tests that run every day; a run takes time, I think 45 minutes or so, and it is important that it fails fast. It makes no sense to create all the resources, which takes two minutes, and only then check the results; check them right away. Another important point is to clean up the resources that you create. If a test creates a certain flavor, it is important to clean up the things you created, because if you do not, Tempest won't do it for you, and you will have resources left over after the test run, which is quite problematic for production environments.

So the question now for the project: where do we need help? As I already mentioned, we are still quite focused on quality assurance that runs in our continuous integration system, because this is mainly what runs, I think, 99% of the test cases, but we are quite interested in feedback from operators who run our test framework in their production cloud or in their test
cloud. This is something we have already spent a lot of effort on, but we need help and feedback from operators about how they use it and what is and isn't usable in our framework. For operators it is always interesting to add new scenario tests, because if you contribute a scenario test, you are sure that the scenario you put into our code will be checked for every single new patch that comes in. So if you contribute a new test, and especially a scenario test, you are sure that your scenarios really work in the source tree of OpenStack.

Another thing: documentation and reviews. Everybody is invited to help with our reviews. Documentation is something where we always have some issues, I would say; in many areas we think, yes, we have to improve that, and if you have feedback, just go to our IRC channel or create bugs and we will fix them. Another quite important topic is third-party vendors. A group was formed called the third-party testing group. I think it's quite important for customers to be sure that if they want to use OpenStack with a third-party vendor component, like a driver or a plugin, a Neutron plugin or something, it really works with OpenStack and is not only a marketing slide. With third-party vendor tests, you could, as a vendor, create your test environment, run a certain set of test cases that you choose, and contribute the results back to the review system. This group has weekly meetings; if you are a vendor, think about joining. I think it is a great effort and we really have to have that in place.

For new contributors we have a lot of areas; on the next slide I have a list of blueprints, and we have a lot of low-hanging fruit that can be fixed quite easily. I think it is really a nice field for new contributors. As I think I mentioned at some point, we have
our IRC channel, where you can just ask whether there is something to work on. We have a spec repo with all the features we are working on; you can have a look whether something interests you, subscribe to a blueprint, or ask the person responsible what has to be done there, and for sure there will be something to do. We have a weekly QA meeting where you can raise questions. If you are already an active contributor, maybe in another project, think about the features you implemented recently and have a look in Tempest at whether the test coverage is there for your feature, for your area. Check whether it is really tested, and if it isn't, please have a look at how to add that, or raise it in a QA meeting that something is missing. We are also searching for volunteers to be QA liaisons: if you are active in one particular project, you are, let's say, the contact person for the QA program, so everything new that comes up can be discussed there. We will upload this presentation with a lot of links, so you can just click on them; if you are active in a project, think about becoming its QA liaison.

Now, things we need help on, regarding special cases or blueprints and specs that are open. Our design sessions start on Wednesday; you can simply join our sessions, have a look at sched.org, see what interests you, and join us. You can discuss with us, and if you want to contribute something, just ask and we will find things for you to do. There are a lot of things currently in our pipeline. One thing I already mentioned is the Tempest library: we are separating our functional test cases from our framework, which is, I would say, quite a low-hanging fruit, because it is just moving code from A to B and adapting it. Another thing: we need more test coverage, so if you are interested in a certain area, you can just have a
look at whether everything you expect is tested there; try to eliminate the blank fields and add new tests. Unit tests are something we have worked on for, I think, two cycles, and the framework is still not completely covered, so there are a lot of places that need more test coverage, and I think that is quite easy work. Negative auto testing and schema validation in general: everything related to schema is maybe quite easy, because you just transform an existing API schema and bring it into the Tempest tree. These are things where it is quite easy to help us.

And yes, as I already mentioned, it is easy to get in contact with us: come to our IRC channels and chat with us. We have a mailing list; we use the usual openstack-dev mailing list with the QA prefix, so if you want to write something there, you are welcome to. The QA meeting is usually Thursday evenings; we rotate the time zone, so we have one more focused on the Asian time zone and one more for the US. You can also raise bugs: if you see something missing, just raise a bug, and somebody will take care of it. And you can create your own specs, your own feature requests: if you have an idea for a new feature, if you see something missing in Tempest, you can describe what you want. There is a template for that; you describe it in some detail, upload it to our review system, and you will get our feedback. So this is, in a nutshell, what Tempest is, and I think we should take some time for questions. I think there is a mic.

Question: What are your thoughts on a situation where you need to test something deployed outside of OpenStack using Tempest? Suppose you have a DHCP server which is external to OpenStack and you want to have interoperability between OpenStack and the DHCP server. Okay, so you mean you have your OpenStack running, you deploy an application, you want to use Tempest... Suppose you
want to deploy OpenStack on one node and an external DHCP server, and you want interoperability between them. So is your approach to run some additional, say, fabric tasks from Tempest, or to run something external to Tempest? I mean, with our new approach we put everything into a library, so you could use our framework to verify it. It depends what you choose: if you like, you could use the Tempest framework to verify that the DHCP server is running and have an end-to-end test. The thing is that you cannot contribute such a test, because you are relying on your own setup: you have your DHCP server, which is not in our continuous integration system or in DevStack. So this is something that you will develop on your own, but you could use the Tempest framework to create such tests. And maybe, if you create them, you will find some general approaches that you want to put into our framework and our test cases.

Question, Harsha Abdi from Microsoft: Building on the question that was just asked, is it possible to contribute tests back into the Tempest collection? For example, we have some drivers that we test internally on OpenStack; how do we do that, and what is the process? And also I wanted to find out what the minimum, absolutely essential Tempest set is that needs to be executed; in other words, what are the possible exclusions if you are a voting CI? Okay, so if you have a driver, as you mentioned, you should usually have the standard OpenStack API to rely on. If you have a Neutron driver or a Nova driver, whatever, you will have the usual Nova or Neutron API that you can put your own configuration into, and you can use Tempest to verify it; it should work out of the box. As for the other question, about third-party testing: I'm not quite sure how well defined it is what kind of tests you have to execute, but we had some test groups like smoke tests
and gate tests and so on, but these are currently not really used; maybe we could use them in that way, I don't know. Matthew, maybe you have some ideas. But for essential tests, you can look at the defined tests that run on our continuous integration system for every single patch, and see what kinds of tests they execute.

Question: Hi, you mentioned being interested in feedback on running Tempest in production. I'm a bit confused here: do you recommend running Tempest on production, or is there some subset of tests that could be run? Because I don't feel comfortable running Tempest myself, since it's quite an extensive test run; I'm not sure it's too healthy, and maybe I just don't know about people doing it without any issues. I understand running it on a staging or QA platform, that's a very convenient use, but for production I'm a bit surprised. I mean, it depends what kind of workflow you have. If you have a test system, a reference system, and a production system, you could use Tempest to validate just the reference system, because you're somewhat afraid to run Tempest in the production environment. I think there are some ways to make sure that nothing goes wrong; you could use some isolation mechanisms, and we're spending effort on that. But no, I cannot recommend running Tempest in a production environment; I think we need more feedback on that. At the beginning, as you already mentioned, we can use it for test systems and for reference systems, and then see whether we have solved our issues. I mean, one issue that we have is resource leakage in some places: after a Tempest run you will have some resources that are still left over, and so on; those are things we have to solve. But I think in the end we can think about some kind of smoke tests for production as well.

The issue with the gate is that we have limited resources: if you want to deploy a multi-cluster with a lot of
things, you will need a lot of resources, and we have a lot of patches every minute and hour, so that's what limits everything. In my personal opinion, I would say we have to rely on some third-party help for that, so that vendors or companies give us more resources to have real-life deployments and test on those machines. Yes, we have nightly tests, but nobody really looks at them; for instance, a stress test is something that runs every night. This is one possibility, with pros and cons, because the question is: who looks at these tests? What happens if they fail, and if they fail every time? Sure, this is one approach to solve the issue, but somebody has to care about it.

Question, Olivier Jacques from HP: Just one question about the tests that you review and accept: how do you define the criteria for accepting tests, good versus bad tests? I would say we accept nearly all tests that are capable of running. There are for sure some criteria; I don't know off-hand which areas we don't accept. I mean, the first criterion is that the tests have to really run: you cannot contribute tests that are skipped by default. Sometimes people want to contribute tests because they are currently writing something, or there is a bug in it; they raise the bug and want the test included in Tempest while it is always skipped, and that is something we do not accept. I think this is one of the areas where we have some debates about accepting tests or not, but in the end I would say we accept tests across the board, for all OpenStack projects.

Question: Do you have tests for negative scenarios, like a VM going into an error state, so you build it in differently; what about that? We have negative tests that usually rely on a certain API, so they are quite basic; we don't have negative scenario tests. Just in case: I'm going to be giving a talk about Mimic, a mock service that allows you to do such testing; maybe you could
come. Sure, it's on Wednesday. So, I think the time is nearly over. Thank you for coming.