Thanks. I'm here today to talk about QA in the open. A little about myself: I work for HPE, and they pay me to work in the upstream OpenStack community, making it better for everyone. For the past two years I've been the maintainer of all of the QA efforts in the community, which is why I thought I'd give a talk about it. So when I talk about open source QA, what do I mean? I am talking about doing QA for an open source project, but more specifically I want to talk about doing QA in an open source manner: adhering to the four ideals of free and open source software while doing your testing as well. That includes running the tests and hosting the results in public. Just having test suites available doesn't necessarily meet what I view open source QA as being, and I see a lot of value in doing the testing in the open. Basically: treat your QA like any other open source project. Now, this slide, well, it's a little cut off on the left, but this was my typical interaction with the QA folks when I was doing corporate software development. And I don't mean any offense to anyone who works in corporate QA; they do a good job. But in my experience, QA for software in big companies works like a traditional engineering department: you make something, you pass it off to another department, and they have a box they have to test. Software development, as we've seen over time, doesn't really adhere to that model, especially with open source, where we don't have a black box we're handing to someone. The insides are available for everyone to work on at the same time. A lot of the issues I've had with enterprise QA come from the assumption that QA is external. They don't talk to the developers.
They come up with these gigantic test plans, and they find all of these really esoteric bugs because someone is sitting in front of a computer following a test plan and running tests manually. You can do things a lot better and a lot smarter than that. So now I'm going to talk a bit about how we do QA in OpenStack, how it's evolved over time, and some of the advantages we've gained by having a dedicated QA effort on a large open source project. This is the official mission statement for the OpenStack QA program: to develop, maintain, and initiate tools and plans to ensure the upstream stability and quality of OpenStack and its release readiness at any point during the release cycle. So basically we're tasked with developing tools and ensuring that good practices are being adhered to across a lot of other projects. Right now the QA program covers, I think, 15 code repos. It might be 14; I'm bad at counting. And we run the gamut of stuff. bashate, hacking, and eslint-config-openstack are style rule checkers; they check your whitespace and so on to make sure code is consistent. DevStack is a dev/test deployment system, so developers can deploy OpenStack quickly and efficiently and mess with it in real time; we also use it for CI testing. Grenade does upgrade testing between two releases using DevStack. Tempest is a black-box, API-driven integration test suite. We've also got tools for visualization, test analysis, test runners, all sorts of stuff. That's how the community has evolved over time: filling these needs, because there are people dedicated to working on these things so the community can test its code more efficiently. So how did the project get started? OpenStack isn't really an old project; it got started in 2010.
In the beginning, projects had unit tests; that was about it. A couple of projects had functional testing, where they would spin up part of themselves and test that it worked. And testing was central to the OpenStack culture; it was ingrained that when you push a patch, it has to have unit tests, and those tests would be run. But there was no dedicated effort on QA. It was every project for itself, so coverage between projects in the OpenStack community varied significantly. The way you ran tests was different, the way the tests were written was different; there just wasn't any cohesion. Then in the second half of 2011, around December, a project called Tempest, actually called Kong back then, was started. It was an integrated test suite that would take all of the components of OpenStack, assume they were running, make API requests against them, and verify the responses. That was a dedicated testing project in the community. Two years later, that project had continued to gain traction and more work had gone into it. It was renamed from Kong to Tempest, and it started being used for CI gating, pre-merge gating: when you push a patch, Tempest is run, and the patch can't land unless Tempest passes. Two years after that, the community decided we needed to create a QA governance group to own all of these projects, be responsible for them in the community, and get recognition for that work. Over time that group has slowly grown into the 15 projects I showed before, where we own everything in the community involved with fulfilling that mission statement. Now, while the QA community was growing, OpenStack was growing, and growing really rapidly. You can see there, it started in 2010 with two projects.
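To make that black-box, API-driven style concrete, here is a minimal sketch of the idea in Python. This is not Tempest or OpenStack code; the toy HTTP "service" and its `/v1/servers` endpoint are invented for illustration. The point is that the tests know nothing about the service's internals: they assume a deployment is running, make API requests, and check the responses.

```python
# Black-box API testing sketch: the test talks to the service only
# over HTTP, the way Tempest talks to a running OpenStack deployment.
# The ToyAPIHandler below stands in for the deployed service.
import json
import threading
import unittest
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class ToyAPIHandler(BaseHTTPRequestHandler):
    """A stand-in service exposing one read-only JSON endpoint."""

    def do_GET(self):
        if self.path == "/v1/servers":
            body = json.dumps({"servers": [{"id": "1", "status": "ACTIVE"}]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet


class ServersAPITest(unittest.TestCase):
    """Black-box tests: the deployment is assumed to be up already."""

    @classmethod
    def setUpClass(cls):
        # In real integration testing the service is already deployed;
        # here we start the toy server so the example is self-contained.
        cls.server = HTTPServer(("127.0.0.1", 0), ToyAPIHandler)
        threading.Thread(target=cls.server.serve_forever, daemon=True).start()
        cls.base_url = "http://127.0.0.1:%d" % cls.server.server_address[1]

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_list_servers(self):
        # Only the public API is exercised: request, then assert on
        # status code and response body.
        with urllib.request.urlopen(self.base_url + "/v1/servers") as resp:
            self.assertEqual(200, resp.status)
            data = json.load(resp)
        self.assertEqual("ACTIVE", data["servers"][0]["status"])
```

Because the suite only depends on the API, the same tests can run against any deployment of the service, which is exactly what made Tempest usable for pre-merge gating.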
By 2016 there are a lot more; even by Kilo in 2015 there were a lot more projects, and you can see how many we were adding over time. In 2011 alone we added four. And what we were seeing with the QA effort, because we were a centralized effort with only centralized projects, like Tempest and DevStack, which deploy and test everything together, was that we couldn't keep up with this growth. A good way to visualize that is the number of Tempest tests per project. I'm not sure how easy this is to read from the audience, but you can see Nova, the big spiky line, and a couple of other projects hovering around 400; most of them, though, are just down there between zero and 50. That was really an indication that doing QA in a monolithic approach, the way corporate QA is done, where it's a separate effort and all of the testing is done separately and after the fact, doesn't scale, especially for something that evolves as rapidly as open source software does. And then the Big Tent happened. For those not really involved in OpenStack: in 2015 the community decided to change the governance and open up the floodgates, so any project that was related to OpenStack and fulfilled the community guidelines could be considered OpenStack. That completely threw out what we were doing in the QA community, because if something was an official OpenStack project, we would support it with all of those centralized testing efforts. When the floodgates opened, we couldn't keep up the way we could when we were adding three or four projects a year. Since the Big Tent, we went from about a dozen or 15 projects in OpenStack to about 130.
It's closer to the Apache Foundation now: all of these projects working together to make a cloud ecosystem. And we just couldn't keep up. So what we decided to do in the QA community was move to a more self-service model. We wanted to facilitate testing for the wider community, but we couldn't own the individual testing for each project, because that's just impossible. We're a small team, maybe 300 committers a cycle compared to the 3,000-plus in OpenStack; it just doesn't scale. So we decided we would still directly support the core group of five projects that have been there since 2012 and make up the base layer of OpenStack, infrastructure as a service. But for the rest, we'd provide stable interfaces and plug-in interfaces so other projects could use all of the tooling we use to ensure that stable base, and do it themselves. This better fits the growth of the project, and I think it better conforms to the ideals of free and open source software: you have the freedom to use these test frameworks and tools, but you're not required to. It's good if you do, and it makes things more consistent across the community, but we're not forcing anything on anyone. I feel like that's a better fit for open source software, and I think it shows. This graph is something I actually put together this morning; it shows the growth of plug-ins for DevStack, Tempest, and Grenade since we introduced the interfaces. DevStack was the first project to get a plug-in interface. So if you add a new project to OpenStack and you want to deploy it, you write a DevStack plug-in now, instead of trying to land a patch in DevStack itself, where we have a core team of five people. And you can see how quickly the plug-ins have grown.
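For those who haven't seen one, the rough shape of a DevStack plug-in looks something like the configuration-style sketch below. The project name, repo URL, and the `configure_myservice`/`start_myservice`/`stop_myservice` helpers are invented for illustration; `is_service_enabled` and `setup_develop` are DevStack library functions.

```shell
# In your local.conf, one line pulls in the plug-in's repo:
#
#   [[local|localrc]]
#   enable_plugin myservice https://git.example.org/myservice
#
# DevStack then sources devstack/plugin.sh from that repo and calls it
# with a phase argument at each step of the deployment:

if is_service_enabled myservice; then
    if [[ "$1" == "stack" && "$2" == "install" ]]; then
        # install the service and its dependencies
        setup_develop $DEST/myservice
    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
        # write config files once the base services are configured
        configure_myservice
    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
        # start the service last, after everything it depends on is up
        start_myservice
    fi
    if [[ "$1" == "unstack" ]]; then
        stop_myservice
    fi
fi
```

The key point is that all of this lives in the project's own repo, so the project team can evolve its deployment without waiting on the DevStack core team.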
It's been about a year, and we have almost a hundred. Tempest and Grenade plug-in interfaces were added a little later. Apparently people don't really care about upgrade testing their project, because there are three Grenade plug-ins. But Tempest is off to a pretty good start, and I feel like this really shows that monolithic, separate testing and quality assurance doesn't scale well for a large community. Another thing we saw was that keeping things separate increased the friction for contribution. People have their domains of expertise; they want to be working in a way they're comfortable, where they're productive. If you tell them, "if you want to add tests, you have to go over to this other repo, and it uses a different coding style and different naming conventions," that adds a lot of friction. You also have to coordinate commits between projects to land a feature patch together with its new testing. The plug-in interfaces, and doing things in a more distributed, self-service model, let people adapt and grow very rapidly. Now, some of the advantages I've seen from doing QA in the open, with a dedicated effort: it enables external audit of testing. On a lot of projects, the Linux kernel, for example, most of the testing is done by vendors and individuals after a release, or at certain points during the cycle, and it's done independently. Some people publish results and methodologies, but there's no unified location. Having a dedicated QA effort means people can look at the testing, see where the gaps are, add additional testing, and find where there are bugs in the tests. The advantages of free and open source software hold true if you do your testing that way.
Another thing I've seen is user confidence in the project. How many times have you looked on GitHub and found a random project that does something you're looking for? You have no idea if it works or not; you pull it down, and it turns out it doesn't work because it's two years old. Having testing, and publicly available test results specifically, enables a certain level of confidence from users. They can see that we're deploying OpenStack on every single commit and making sure everything works the way the tests say it does. That really goes pretty far. I was at a code sprint a couple of weeks ago and we were talking about a networking project; I don't remember which one, it was OVS-related or something like that. We weren't sure how well it worked, because we'd never used it. So we went and looked to see if there was any testing, and we couldn't find anything. We still don't know if it works; we'd have to deploy it and test it by hand. That kind of visibility goes a long way toward growing a community. It also enables independently repeatable testing. As a user, you see all of these public test results; you can pull down the results, and you can also pull down the tests themselves and run them against your own deployment and your own use case. That goes a long way, and we've been using it in the community. The OpenStack Foundation runs the same tests we run in the community for trademark enforcement: to verify that someone selling a product as OpenStack conforms to what OpenStack is before granting them the trademark. And then there are reusable components, the standard open source advantage: everything's open, so you can build off of it and use it in other places. One of the potential issues with running QA in the open for a large open source project like OpenStack is lack of corporate contribution.
Like I was saying about the Linux kernel on the last slide, where most of the testing is done after the fact by vendors and individuals: that's how a lot of companies market a product based on an open source software package. I don't want to call out any names, but take Red Hat, for example; RHEL's whole pitch is "we burn this in, we test it, we make sure it's really good." Doing that in the open kind of competes with their business model. If you test things really well in the open, why would you buy the product? What are they adding? And we've seen that a lot of the QA folks who work on corporate products don't want to contribute upstream, because they're being told to work on the internal test suites and internal testing. That's been an issue for growth, I think, for the QA community in OpenStack in particular, but I wonder how much of it explains other projects not having the same kind of dedicated effort. Another issue is limited free resources for running tests. We're very lucky in the OpenStack community: we're making clouds, and we have public cloud vendors that donate a lot of free resources to the community to run tests on, plus an infrastructure team that keeps on top of making sure we can keep using all of those resources all the time. We have a pool of about 800 virtual machines to run tests on, and we're constantly exhausting it. That costs a lot of money, and it's donated to the community for free. That's an issue for a lot of projects; they don't necessarily have the financial backing for something like this. Larger, more popular projects, OpenStack, Linux, and I'm sure Docker has a lot of money, could probably get around it, but smaller ones, not so much. It's also difficult to get community buy-in.
If you go back to my slide with the chimpanzee: a lot of software developers in open source have had experiences with corporate QA similar to mine, and there's a certain stigma attached to that. They don't want to work with QA. The number of times I've tried to help someone with a bug and they've assumed I know nothing because I'm the QA PTL in the community! It's hard sometimes to convince people of the value of doing this. But these potential issues are all social, and I feel like through communication, and through working with people over time so they see the benefits and the results, we can overcome them. There are a lot of benefits to doing QA in the open, especially for a large open source project. A lot of open source projects carry the stigma of "oh, they're not tested, the quality's low," and I feel like doing things in the open can really overcome that over time. Some places to get more information about QA in OpenStack: we use the dev mailing list, we've got an IRC channel on Freenode, and there's a wiki page. All these slides are on GitHub; I see people taking pictures of them. The source for the slides is also on GitHub, because I wrote them in LaTeX, and all of the graphs are there too. I don't know what license; I didn't pick one. I should set a license to make that easier, but they're free to download. And with that, are there any questions? I don't know how much time I have left. We have enough time for a few questions? Yes? [Audience question, partially inaudible] Shouldn't quality assurance be the responsibility of gatekeepers within projects, with the status QA gets depending on how much importance those gatekeepers give it? For OpenStack it's a huge responsibility, but for small community projects, how do you convey that the output is quality as well? How would you anticipate that working?
So for smaller projects, it's more difficult. You really have to try to ingrain it into the community culture; I don't have a simple answer. But if you come at the project with a test-first strategy, I feel like that's the best way. The gatekeepers are the ones in charge of picking or merging commits, and they can say, "no, you can't merge this commit unless you add testing for it." And you have a way of running the tests, even if it's not with publicly available test results like I was proposing; smaller communities might have more difficulty doing that. But even then, you have tests beforehand, and the gatekeepers check that the tests pass before they merge anything. For smaller projects it has to be about community culture and putting testing first; a lot of the techniques I was describing for OpenStack won't really scale down that far. And GitHub does have Travis, and there are free resources available. They're a bit more limited, but you can leverage them to a certain extent: do some pre-merge testing, which is something we do very extensively in OpenStack and which has proven incredibly useful, testing that the tests pass before you merge a commit. Are there any other questions? Okay. I'll put the link back up for the slides if people want them.
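[As an aside on the Travis point above: a minimal Travis CI configuration for that kind of free pre-merge testing looks roughly like this. The Python versions and test command are just an illustration; adapt them to your project.]

```yaml
# .travis.yml -- run the test suite on every push and pull request,
# so broken changes are caught before they are merged
language: python
python:
  - "2.7"
  - "3.5"
install:
  - pip install -r requirements.txt
script:
  - python -m unittest discover
```

With this in the repo, the gatekeepers get a pass/fail result on each pull request and can refuse to merge anything that breaks the tests.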