I guess it's time to get started. So I'm Matt Treinish and I'm here to talk about the OpenStack Health dashboard. And Thierry wanted me to welcome everyone to the first talk in the new upstream development summit track. It's the first time we've done this, and this is the first talk as part of it. So I'm just curious, based on the people here: how many of you have pushed a patch to OpenStack and know what the test result system looks like in that whole workflow? Okay, that's about half the room, maybe a little less, so I'll spend a little more time on the first couple of slides.

So this picture, which I'm sure everyone has seen before, is the basic workflow for code submission, code review, and getting changes merged in OpenStack. You clone a repository, you make your local change, you push it up to Gerrit. The test system runs in that little corner on the bottom left, and you iterate on this process until the change gets approved by core reviewers and the testing says it all works, and then the change can merge. It's all automated.

The thing that people often don't recognize is that when you push a change to OpenStack, a lot of work gets spun off as part of this. The little chef there, who is the typical OpenStack contributor, pushes a simple change, and it could be a one-line change, and it spins up all of these jobs in all of these different environments, and they're running a lot of tests. As a developer you think, okay, this is my one change, I can see the test results, this makes sense to me. The thing you have to remember about OpenStack is there are a lot of developers, a lot of projects, a lot of things going on at any given time. I think there were something like 3,000 developers this last cycle; that's a lot of people pushing changes.

So when you look at it as a whole, you get something like this. This is the web view for Zuul, which is the system that manages running jobs, and this is the state last week when I made this picture, when everyone wasn't really working because the summit was coming up and people were traveling. You can see that this is one little piece of it, and there are 268 jobs in the queue on the left and 26 in the middle one. There's a lot going on at any given time. And an issue we have, an issue I have as someone who thinks about this space a lot and wants to see what the state of OpenStack testing is, the state of OpenStack, and whether things are getting better or worse, is: how do you rationalize this? How do you keep track of thousands and thousands of jobs running? We have some systems in place that we've grown organically over time.

Oh, I guess I should talk more about the scale and remember my own slides. So when you push a change to OpenStack, a lot of work gets triggered. You get five to 25 DevStacks depending on the project, which will spin up about 10,000 integration tests in total, depending on the configuration of those DevStacks and how many of them there are. There are about 1,500 Tempest tests, which is the integration test suite, and those run against DevStack. Each one of those test suites will start up about 150 guests in that DevStack cloud. So you've got a cloud that runs in a virtual machine on a public cloud with DevStack, and then Tempest will launch about 150 second-level guests inside of that DevStack cloud. And we also have about a gigabyte of uncompressed log data that gets generated from each of these test runs. It might have gone up; I generated that number quite some time ago.
When you look at that in aggregate, across both the check and the gate queues, we have about 12,500 jobs running every day. That's a gigantic number. And then you look at our failure rates. They're really small percentages, but when you have 12,500 jobs, failures become really noticeable to people, especially to an individual pushing a change who thinks that 0.01% can't possibly be relevant to them. The thing is, when you run at that scale, 12,500 multiplied by even 0.01% is a noticeable number, and it's always going to affect someone, because someone is always pushing that patch. There's a quick sketch of that arithmetic at the end of this section.

So what we really want is a way for people to know the status of things, so they can get a better idea of how all of these systems work together. And we had some systems in place for doing this; they've grown organically over time. The log servers, where all of the artifacts from all of the test runs get stored, are just a big directory tree with the output from all the logs. We used to store up to four months of logs. I don't know if that's gone up or not; it's probably gone down, more realistically. On top of that, we've implemented an ELK stack, things like Elasticsearch and Kibana, so you can do document queries and search through log results to find trends. And out of that, we built something called elastic-recheck to do failure analysis, so we can figure out fingerprints for specific kinds of failures and track them over time, and have a better idea of the kinds of failures we're hitting. We also have systems like Graphite and Grafana, which I'm sure people in the operator community are familiar with. All of the test jobs we run and all of the systems we run in infrastructure emit events into the StatsD daemon, and Graphite and Grafana can then visualize them. But the data in there is pretty coarse and high level. It doesn't really delve into what's being run: it'll tell you that something ran, but it won't tell you what that was or go into the individual specifics. And that's where subunit2sql came in, which was a project I started to expose test-level information, where you can track the result of an individual test over time across multiple runs, filter it, and so on.

And that leads me into what OpenStack Health is. I'm not a big Lord of the Rings nerd, but I remember the one ring to rule them all. That was the goal we came in with when we started the OpenStack Health project. We have all of these different data sources, we have all of this data out there, and it's very confusing for developers, very confusing for new contributors, to figure out what's going on in the system. It'd be great if we had a single view, a single dashboard, where they could go to see the state of the gate, see the state of the system, dive in, and find more information about what's going on. We're still not quite there yet, to be honest, because this is the architecture for OpenStack Health. Very simple. It doesn't really use multiple data sources, doesn't pull in all of those things I was just talking about. That's the goal, but right now it's mostly just a dashboard view for test results in the gate. And the way it works is super simple. We've got a MySQL database with the subunit2sql schema, hosted in Rackspace Trove as part of the OpenStack infrastructure project. We have an API server that sits on top of that and queries it on request, and then we have a JavaScript front end that users interact with. It's pretty straightforward.
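To put the failure-rate point from the top of this section in concrete terms, here's a back-of-the-envelope sketch in Python. The job count is the one quoted in the talk; the rates are purely illustrative:

```python
# Why "tiny" failure rates still matter at OpenStack's scale: with
# ~12,500 jobs a day, a rate that rounds to zero for any one developer
# still lands on somebody every day. Rates here are illustrative.
jobs_per_day = 12500

for rate in (0.0001, 0.001, 0.01):  # 0.01%, 0.1%, 1%
    expected = jobs_per_day * rate
    print("{:.2%} failure rate -> ~{:.1f} failed jobs per day".format(
        rate, expected))
```

Even the smallest rate works out to more than one failed job per day, which is why "it's only 0.01%" never feels that way to the person whose patch it hits.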
The API server is written using Flask and the subunit2sql database API. It runs in OpenStack infrastructure at health.openstack.org. Anyone can hit it and write queries against it, but if you write tooling that uses this API, be warned: we're not developing it for external consumption. It's designed for that JavaScript front end, so things will change on you. We change the API quite frequently. So if you do plan to write your own tooling on top of this API, which everyone does with APIs, especially in the developer community, it will change on you. This API server is continuously deployed on every commit to OpenStack Health. So as developers of OpenStack Health, when we make a change, within the next Puppet refresh it will be automatically applied and live for use. And it was built with the intent to pull in all of those additional data sources. Right now it doesn't do that, but one of the future steps is to pull in data from Logstash, from Grafana, from all of these other tools, and make it the true one place to go for tracking all of these things.

But to talk about the API server, we really need to talk about subunit2sql a little, because right now the API server is just a wrapper around subunit2sql. All of the data for the OpenStack Health API server is coming from this database. subunit2sql is just a set of utilities for storing test result data in a SQL database. It's pretty straightforward. It creates a database schema using Alembic against any SQL database. We test it against SQLite, Postgres and MySQL, but it'll probably run against anything SQLAlchemy supports. It provides a database API in Python for interacting with that data as well. And then there are some CLI utilities that interact with the subunit stream format, which is something we use in OpenStack testing because we use the testrepository project for running tests, and it emits subunit automatically; other test suites also emit subunit. The CLI utilities in subunit2sql take that subunit output, which is machine-parsable test result output, and convert it into the database inserts that are needed. And then we also run a public database which anyone can access, which is just a terrible, terrible idea. Anyone who has run MySQL in production knows: don't give the world access to your server. But we do, and anyone can log in and query it if they want to. For a long time, that's how I was doing some of this result tracking, because I would just interact with the database before we had OpenStack Health. And I still do from time to time, because it's easier for me.
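To make the shape of that service concrete, here is a minimal sketch: Flask sitting in front of the subunit2sql database API, returning JSON for the JavaScript front end. The route, the `get_all_runs()` call, and the serialized fields are illustrative assumptions, not the real openstack-health code:

```python
# A minimal sketch of the API server's shape: Flask wrapping the
# subunit2sql DB API and serializing rows to JSON for the front end.
# The /runs route and the exact db_api calls are assumptions.
import flask
from subunit2sql.db import api as db_api  # real package; calls may differ

app = flask.Flask(__name__)


@app.route('/runs')
def list_runs():
    # Query the subunit2sql database on each request; get_all_runs()
    # stands in for whatever query the real dashboard view needs.
    runs = db_api.get_all_runs()
    return flask.jsonify(runs=[
        {'id': run.id, 'passes': run.passes, 'fails': run.fails,
         'run_at': str(run.run_at)} for run in runs])


if __name__ == '__main__':
    app.run()
```

The real server has many more routes and caching concerns, but the pattern is the same: stateless queries against the subunit2sql database, serialized on request.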
The way we get data into subunit2sql is also kind of important to understand if you want to know how the system works today. So this is a big, complicated diagram, and I guess it's kind of legible. I was worried about that because the font's a little small; it's an old diagram and I didn't have the source anymore. In the upper right-hand corner, where it says Jenkins slaves, that's where the tests are run. These are single-use VMs in public clouds, donated by a bunch of different providers now. Those get triggered by multiple Jenkins masters, and the results get uploaded at the end of a test run to the log server, that log archive, with SCP. The Jenkins masters all run with a ZeroMQ plugin enabled, which emits an event whenever a job finishes. Then there's a daemon that sits on that ZeroMQ queue, listens for those events, and emits a Gearman event. That gets pushed to the Gearman server. Then we've got a bunch of Gearman subunit workers which sit there and listen on the Gearman server for events that say: we have subunit data. And they take that subunit data in and write it to the database; there's a sketch of that below. This machinery is very complicated. It goes wrong often. It's something we actually need a lot of help maintaining sometimes; I see Spencer staring at me because he's helped me a lot with it. We have to do this for various security reasons and other steps, but understanding this will help you understand a little bit more about where the data in the system is coming from when you look at it graphically.

Let's talk about the front end a little bit. The front end has got all of the hipster words in it. It's JavaScript written in AngularJS. We've got D3; we're using NVD3 right now, which is a reusable wrapper library on top of D3.js, but we're going to have to go to straight D3 eventually, because we're pushing the limits of it. The JavaScript is pretty straightforward. It just calls the API server to make the request, gets a JSON blob back, and renders the data in the browser. Just like the API server, it's continuously deployed on every commit. So whenever you push a change to OpenStack Health, it automatically goes to the production server, which runs at status.openstack.org/openstack-health.
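For a feel of what those Gearman subunit workers in the pipeline above are doing, here's a rough sketch: block on the Gearman server, and for each job announcing new subunit data, parse the stream and insert it with subunit2sql. The server address, job name, payload layout, and the exact subunit2sql entry points are all assumptions for illustration, not the infra workers' actual code:

```python
# A rough sketch of a Gearman subunit worker: wait for jobs carrying a
# subunit stream, parse it into per-test results, insert into the DB.
# Server address, job name, and payload layout are hypothetical.
import io
import json

import gear                               # OpenStack's Gearman library
from subunit2sql import read_subunit
from subunit2sql import shell as s2s_shell

worker = gear.Worker('subunit-worker')
worker.addServer('gearman.example.org')   # hypothetical server
worker.registerFunction('push-subunit')   # hypothetical job name

while True:
    job = worker.getJob()                 # blocks until work arrives
    payload = json.loads(job.arguments)   # assumed JSON payload
    stream = io.BytesIO(payload['subunit'].encode('utf-8'))
    # Parse the subunit stream and write the results to the database.
    results = read_subunit.ReadSubunit(stream).get_results()
    s2s_shell.process_results(results)
    job.sendWorkComplete()
```

The fragility he mentions lives exactly in this loop: if a worker dies or a payload is malformed, data silently never reaches the database.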
So now here comes the really bad part of my talk, which is going to be a live demo, which never works, but hopefully it'll work. So I've got OpenStack Health running here, and this is the homepage everyone's greeted with when I go to status.openstack.org/openstack-health, and I've got a couple of graphs right off the top. I've got the total jobs. This is all of the jobs that have been run in the gate and periodic queues. That's a limitation of the way we're collecting data right now: we only collect subunit data from the gate queue and the periodic queue, which means we can assume the changes are going to be good, because they're either running with the current state of master on all of the repositories, or something that's already passed tests once and has been approved by core reviewers on the project. So these are all of the test jobs that run, unit tests, Tempest tests, functional tests, anything that emits subunit in those queues. It's grouped hourly, showing the past month's worth of data, and you can see this nice little gap here, which is when that complicated machinery had crashed and no one noticed. Then you can also see the failure rate, which is using the same set of data, right above there. You can also see the failed tests in the last 10 test runs. It shows the last 10 runs, and you can expand it and see specifically which tests failed in them and when they ran. And then it also shows you grouping by project.

But the thing is, this is all interactive and live, so you can change the period; let's change it to 12 hours of data. Not too much; it's more than I was expecting, considering everyone's here. You can also change what we group by, if you're not interested in project but in branches, for example. You go to the bottom: we've only run changes on master in the past 12 hours, which makes sense, because no one's going to be backporting changes at the summit. You can change the grouping resolution, all of this stuff, but let's choose something a little interesting. So we'll go to project. Who's got a favorite project? Someone screamed something out. Puppet Nova: in 12 hours no one has run anything on Puppet Nova. What was that? I can't hear, I'm sorry. But Nova's right there and it's got a failure rate, so maybe there's something interesting in there. So, Nova jobs, that's something, I guess. We've run a couple of jobs. Eight with no failures, except there's one there. But you can see how we can dive down into more details as we go on. So we've got that high-level view, which was the same kind of thing that Graphite gives us, but we can dive down into more fine-grained information, and we can see recent runs, whether they were successful, what job they were, and when they ran. It says no Tempest jobs; that's really bizarre. I probably should expand it. Let's see, here we go.

Now we'll go to a single job type. So this is generating data from all of the gate-tempest-dsvm-postgres-full jobs that run. As we dive in, it ignores the previous filter; we lose the previous filter, which is a bug, or as designed, depending on your point of view. And we're seeing the number of tests we've run in this particular job. Since it's only run once in the last 12 hours, I'm actually not convinced this number is right. But we can see it ran 588 successful tests, skipped 65, and none failed, so there's a 0% failure rate. And then we can see logs from the past, I think this is the past 10, runs of this test suite.

But the really interesting thing is we can go into the per-test view. So this is a list of all of the tests that run as part of this job, and we can see individual information about them, and we can click on them just like everything else. And this is me deliberately picking one, because I know this one is very interesting, because it does a lot of work. We can see the run time of this test over time, and we can see its success and failure rate over time. Looking at it at 12 hours is not very interesting, but when we look at it at, let's say, three months, and see how long this takes to load, because the Wi-Fi here is a little slow today, we can see a lot more data. We can see a terrible performance regression. I don't know what was going on there, but we fixed it very quickly; it took less than a day. You can also see the inherent noise in the system. It comes from running on public clouds, and second-level virtualization in public clouds makes it even worse. It's unavoidable. You can't really do real performance testing in this kind of environment; it's inconsistent beyond all imagination. But when you look at this data trended, you can see a lot more at the higher level. So with three months you can see this test averages about 150 seconds to execute. You can see that noise boundary is pretty thick; it's about 100 seconds, actually. And then when you have a real performance regression, like this spike right here, which I didn't even realize was there and I don't know what caused it, you can see that something caused it, and obviously someone identified it and fixed it very quickly. You can also see how many of these tests were running. I mean, this is grouped hourly; it's always running. And then you can also see past failure rates. It gets a little more interesting when you turn passes off, because then you can actually see the failures. Failures tend to happen at a very low percentage. And that's basically what we have right now for OpenStack Health. The plan for the future is to integrate more data sources into it.
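Everything the demo just walked through is rendered from plain JSON endpoints, so you can poke at the same data yourself. Keeping in mind the earlier warning that this API changes frequently and isn't built for external consumption, here is a hedged sketch; the endpoint path and parameters are assumptions, so check the current code before relying on them:

```python
# A sketch of pulling the dashboard's data directly from the API
# server. The route and query parameters are illustrative assumptions;
# the openstack-health API is unstable and not meant for external use.
import requests

BASE = 'http://health.openstack.org'

# Hypothetical endpoint: recent runs grouped by project.
resp = requests.get(BASE + '/runs/group_by/project',
                    params={'datetime_resolution': 'hour'})
resp.raise_for_status()

for key, value in sorted(resp.json().items()):
    print(key, value)
```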
But for right now, this is how it works and how you interact with it. The whole point is that all of these systems are interactive, so as an end user or as a developer, you can dig down to exactly what you're trying to find in the test results. Because coming at it looking at just one patch, you're missing the bigger picture, and without something like this, we don't have a good view of everything.

So, talking about the current limitations of OpenStack Health. We only have data from the gate and the periodic queues. We do that on purpose, because the data from those queues is considered clean. We only catch failures that have subunit data; that's a limitation of using subunit2sql as the data store for this. The only source of data is subunit2sql, so if no subunit data is emitted by a test run, we don't know about it. Something I added last cycle is that DevStack, which is not a test runner, it's a deployment system for dev and test environments, emits subunit data now. So when it successfully runs, or fails to run, it emits a subunit stream that says: I ran one test, DevStack, and it passed or it failed, and here's how long it ran; there's a sketch of that trick below. So for DevStack runs we catch all of the failures from DevStack through the end of the run. But for any other run, for Tempest runs, if the job fails before DevStack even starts, we don't account for it, because we don't have any data for it. That's one of the future things about pulling in other data sources: that information is available from systems like Zuul, which actually manages the running of the jobs, or from somewhere else, but we don't have it in this one data store. And did I really put three bullet points for the same thing? Okay.
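The DevStack trick he describes, reporting a whole deployment as a single pass/fail "test", works because any process can write a v2 subunit stream. Here is a minimal sketch using python-subunit; the test id and the overall shape are illustrative, not DevStack's actual output:

```python
# A minimal sketch of reporting a non-test-runner process (like a
# DevStack deployment) as a single "test" in a v2 subunit stream,
# which the existing ingestion pipeline then consumes like any suite.
import datetime
import sys

from subunit import v2

out = v2.StreamResultToBytes(sys.stdout.buffer)
start = datetime.datetime.now(datetime.timezone.utc)
out.status(test_id='devstack', test_status='inprogress', timestamp=start)

# ... the actual deployment work would run here ...

stop = datetime.datetime.now(datetime.timezone.utc)
out.status(test_id='devstack', test_status='success', timestamp=stop)
```

The two timestamps are what give the dashboard a run time for the "test", and swapping 'success' for 'fail' is how a broken deployment shows up as a failure.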
The next steps for the project: include other data sources. We want to use Zuul for that job-level information. We want to integrate elastic-recheck, which is an amazing data source we have for tracking test failures across time. It'd be really nice, when we go to this view and it says these ran and these failed, if it also said: elastic-recheck thinks it's this bug. And it's a fairly easy thing to do. We're also thinking about ways to include data from the check and experimental queues. We want to make sure that for things like that run-time analysis I was showing you, we keep it specific to gate and periodic, because we don't want it corrupted by random patches that people are pushing. The issue with the check and experimental queues is that they run on every proposed commit, not when it's ready to merge and not when it's already merged. So someone could throw up a garbage commit that adds a sleep 100 to an operation, and that would throw off the run-time graph, because it adds 100 seconds of execution time. And then UI improvements. I'm not a UI guy. I taught myself JavaScript to work on this project; I am terrible at anything on the web. Full disclosure, I don't think this UI is perfect.

I've talked to many developers about this project, because we're really developing it as a community, for the community, so developers can be more productive in interacting with test results. To that end, I've talked to many developers about the system and why they're not using it, because a lot of people in the community don't seem to be using this as much as I would have hoped. And there are still some gaps. Like, we had a mailing list thread about periodic results recently, and we were talking about a gap, which was notification. Some developers who work on periodic test results, for things like stable branches, like to get an email when something fails. So we added RSS support to provide a notification feed for those people; there's a sketch of consuming it below. And that's the thing: we have to collaborate on this to make it better, so it suits all of our needs. It's not just the things I'm interested in, like that run-time graph. I'm really interested in things like that and doing some statistical analysis, but that's not for everyone. And that's something we need help with from the community: people getting more involved. You don't have to be a JavaScript expert to work on this. You can just file a bug and say, I think it should be doing it this way instead of that way, or this is missing. That's how we can grow this project so it can become the one ring, the one system, the go-to place for interacting with test results from the gate.

And some places to get more information about the project. I realize I did not put the bug tracker link in there either, but the openstack-dev mailing list and #openstack-qa on Freenode are where all of the development for this is being worked on in the community. There are the two git repositories, and then the bug tracker is bugs.launchpad.net/openstack-health; I will add that link to the slides before they're uploaded, and it's all open source, on my GitHub account anyway.

With that, are there any questions? I realize I'm probably running a little short, so please have questions, or if people would like to see some more of the dashboard interaction, I went a little quicker on that than I was expecting to. I think there are microphones up in the middle, if you don't mind.

Are there any plans to enable this for third-party CI, such as in Cinder? We have a lot of drivers that run CI, and I would love to see all of those results put in there too for our tracking purposes. So, for the infrastructure-run database, I don't think that's likely to happen, because it requires credentials and passing data in. But there is nothing stopping anyone from running the system themselves. Every aspect of it is open source; it's maybe not the best documented, but it's not that difficult to set up. I know Andrea right there, who works at HP, set this up internally for their test system. I know Rackspace has done something similar, or is trying to. I run it at home for some personal things, because I develop on it, so it makes sense to do that. So there's nothing stopping third-party CI from running this themselves. I think there's going to be an issue with credential sharing and getting them integrated into the one system, but there's nothing stopping them from running it and providing a link in their Gerrit review comment. I think that's a good way to do it, actually.

Are there any other questions? I can fool around some more with the dashboard, show some more things, show some other projects that people are interested in. Oh, Thierry. Yes. Yeah, so the first one is going to be the elastic-recheck data, because that's really simple to integrate. We have an ID for each failure fingerprint, and we just need to query Logstash to get that information. Then, when we see failures on the dashboard, it will say: elastic-recheck thinks it's this bug. That's probably a week or two weeks of work to get that integrated.
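As an aside on the RSS notification feed mentioned above: consuming it only takes a few lines. A sketch using the feedparser library; the feed URL here is a made-up placeholder, so check the dashboard for the real location:

```python
# A sketch of polling the openstack-health RSS notification feed for
# failing periodic jobs. The URL below is a placeholder, not the real
# feed location.
import feedparser

FEED_URL = 'http://status.openstack.org/openstack-health/example-feed.xml'

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each entry describes a job event; available fields depend on
    # what the feed actually publishes.
    print(entry.get('published'), entry.get('title'))
```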
The other one that we really want is the Zuul data. There's an issue there with querying the Zuul data, because we don't actually store it anywhere. So we need to add a MySQL data store, or some other type of data store, to Zuul, so it will report its results somewhere we can query dynamically for the web page. That's been in progress; there has been a SQL Zuul reporter in progress for, I think, the better part of a year and a half at this point, and it's still not there. So Zuul is reporting, but it's not reporting things to Graphite in a granular manner. It reports that this number of jobs ran in this time bucket, because of the way StatsD's data is formatted. We can't pull out the information we need, like: this was on this project. We can't cross-correlate. It's a bunch of time-series points with numbers; it's not what we really need for this. Okay, well, that sounds like something we could work on, integrating this instead of waiting for a SQL reporter. So maybe that'll happen sooner than I'm expecting. Yeah, and it hasn't moved at all. Let's, ah, there you go.

Quick question. So I know back in Tokyo, one of the things we were talking about with OpenStack Health was the size of the database and its performance as it grew over time, and I know there had been a lot of work in this last cycle to keep the performance up. Do you see any type of limit, given the number of test jobs that we have, where performance may max out? So, we're actually kind of seeing that. We did a lot of performance tuning this past cycle on subunit2sql, and it is way faster than it was six months ago. And we also started pruning the data, instead of keeping all test results for all time, which is what we'd been doing since we started subunit2sql back in, I can't even remember the cycle name, before Paris. That helped a lot too, to limit the data set. But right now, if we go to an individual job view at a month, you can see the query is still running. And that's because this is returning a lot of data, and it's a very complicated query involving two or three joins. Yeah, the metadata is a little bit tricky. So we are seeing some limits there. I think we might be able to do some more performance tuning, and we might be able to do some more data pruning; I think we can get a little bit more clever about that, but we are seeing some limits. There are always going to be limits, especially as a website. I know there's some UI rule of thumb for how quickly something has to respond or change before people rage quit and start throwing things. Thanks, I'll ping you more offline about that.
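For a sense of why that month-long job view is slow, here is the shape of the query involved: per-test results joined to their runs through the subunit2sql link tables. The connection URL is a placeholder, and the table and column names follow the subunit2sql schema as best I can reconstruct it, so treat the details as assumptions:

```python
# A sketch of the kind of multi-join query behind the job views:
# individual test results joined to runs via the test_runs link table.
# Connection URL is a placeholder; schema names are assumptions.
import sqlalchemy

engine = sqlalchemy.create_engine(
    'mysql+pymysql://user:password@example.org/subunit2sql')

QUERY = sqlalchemy.text("""
    SELECT tests.test_id, test_runs.status,
           test_runs.start_time, test_runs.stop_time
    FROM tests
    JOIN test_runs ON test_runs.test_id = tests.id
    JOIN runs ON runs.id = test_runs.run_id
    WHERE runs.run_at > :since
""")

with engine.connect() as conn:
    for row in conn.execute(QUERY, {'since': '2016-03-25'}):
        print(row)
```

A month of data means every test of every run in the window passes through those joins, which is why pruning old rows helped so much.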
Okay. Hi. Thanks for working on this, Matt. You're welcome. You showed the architecture, and you had some editorialization about how it was maybe fragile, the subunit2sql one. Yeah. And of course a big part of that is Jenkins, which is slated for departure in the near future. And I apologize for making it so you can't watch your logs in real time. Sorry about that. That's not your fault; that's the Jenkins upstream, for the security bugs. So with Jenkins leaving, and Zuul 2.5 with Ansible coming out, how do you see this architecture changing? Where would you like to see it go? I really would like it if it were synchronous, because the biggest issue with this right now is that it's asynchronous, and that's where things fall apart. Things don't actually fall apart for us on the Jenkins side; they fall apart on the Gearman side, where it's actually pulling the logs, processing them, and writing them. The worker that I wrote was really buggy and had a lot of issues, which is a big part of the problem. I was glossing over that bit to make myself look a little better. But part of it was there was nothing end-user facing to see when something went wrong. People would only notice when I looked, or, as Dims was in the room before, when he was looking for test results and things were missing. And it's like, well, things were running over here, why aren't they on this job that I care about? That's how we found these issues. And I feel like if we can address that somehow, which could be an async thing, could just be a reporting thing, that's really where the gaps lie in this.

Are there any other questions? Does anyone else want to see anything on the live demo? Which is surprisingly working. I can try to pull one up. I don't think I can make the font bigger, unfortunately, because of the terminal I'm using. So this one is to find the last 10 tests that ran with oslo master at the end of their job name. That's a simple one. I'm trying to find a really ugly one somewhere. There we go. Now, what is this one doing? This is one that comes from SQLAlchemy, so I didn't write it; I was just testing it for performance reasons. And it is getting all of the test runs. I'm not sure; I think it's getting all of the individual times a single test ran, across all jobs, with a given key/value metadata pair. So, for example, that query I was looking at for the boot-pattern Tempest test, that was one test. If, let's say, I wanted to look at that across all runs that were using Postgres, I would use this query, instead of 'a_key' and 'a_value', which I guess was test data I was using in a test database. And then here was some performance testing. So there are some very long queries that I had to figure out, and having a web interface is much nicer. There's also a Python CLI for subunit2sql, which is how I generated some of the graphs for this slide deck.

Does anyone else have anything that they'd like to see, like to ask, at this microphone? About the point you mentioned about losing the filter when you move across. Yes. Do you have any plan on making that optional or configurable? I have a plan for it. It's very tricky. You remember that big query I just showed: to make the filters persist, that query has to get more complicated, involving things like Cartesian joins to keep multiple sets of metadata across the join. Very complicated queries. It hurts my head thinking about them sometimes, because of the way the schema is constructed to be portable, versus our needs for the dashboard. I have an idea how to do it. I'm afraid of the performance of doing it, and it's also very tricky to get right, but it's something that's on my radar for this cycle. And if someone is better at SQL than me, please feel free to come and help contribute on this, because I am not so great at SQL either.
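To illustrate the metadata filtering he's describing, here is the rough shape of restricting runs by one key/value pair. Persisting a second filter means another join against the same metadata table, which is where the Cartesian-join pain he mentions comes from. Again, the connection URL is a placeholder and the schema names are assumptions based on subunit2sql:

```python
# A sketch of filtering runs by a single metadata key/value pair via
# the run_metadata link table. Each additional persisted filter needs
# another join against run_metadata, which is where the complexity
# (and the performance worry) comes from. Names are assumptions.
import sqlalchemy

engine = sqlalchemy.create_engine(
    'mysql+pymysql://user:password@example.org/subunit2sql')

QUERY = sqlalchemy.text("""
    SELECT runs.uuid, runs.passes, runs.fails, runs.run_at
    FROM runs
    JOIN run_metadata ON run_metadata.run_id = runs.id
    WHERE run_metadata.key = :key
      AND run_metadata.value = :value
    ORDER BY runs.run_at DESC
    LIMIT 10
""")

with engine.connect() as conn:
    rows = conn.execute(QUERY, {'key': 'build_queue', 'value': 'gate'})
    for row in rows:
        print(row)
```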
If no one has anything else they'd like to see or ask... The check queue? Yes, that's something I want to address this cycle. There's going to be an OpenStack Health QA design summit session later this week; I don't know exactly when it is, but that's one of the things we're going to talk about. I have an idea where we have separate databases, to maintain data purity between the gate and periodic queues and the check queue, which is potentially polluted with bad patches. That way we can selectively filter, instead of adding a metadata filter to every single query, which is how we would have to do it with a single database. But that's something we will be discussing during the design summit this week. If there's nothing else, I guess I will...

So, looking at that dashboard, there's a lot of information. I'm just wondering, why would someone go into that dashboard? What would be the number one question they're asking themselves? Because at HP, the thing our developers wanted to know most was: okay, my test failed; was it my test, or was it the infrastructure my test was running on? And I couldn't get a clear picture; I wouldn't be able to answer that question by looking at that dashboard. Yeah. And I think that's where the elastic-recheck data is going to have to come in, because this data doesn't show that. What they could tell from the dashboard is that a test is failing at a rate that's higher than just their job. They can see that the failure rate is trending across multiple tests over time; it's failing 0.01% of the time, for example. They could see that, but they wouldn't see whether it's caused by an infrastructure failure, versus an inherent race condition in what they're testing, versus a bug in the test. That's where the elastic-recheck data comes in, which is querying Logstash, which actually has searchable logs that you can introspect to see what's going on. We track that with elastic-recheck, and the data sets are separate right now. Something I want to do this cycle is integrate them, so we'll at least see when there's a known bug and we can correlate. Cool, thanks. No problem.

Do you mind going to the microphone? Sorry. So I'm kind of a novice at OpenStack, but I just wanted to add to the gentleman's question about being able to determine whether it's the infrastructure that's breaking down or the test, because we're having a similar challenge. Just a thought: would it be easy to look at the distribution of failures across tenant spaces? If you have a dedicated tenant space for your testing, and failures are more rampant elsewhere, you'll see that it's the infrastructure at fault, right? Yeah. So we actually do have that view already integrated in here. Let me go back, stop using my broken mouse pointer, and use the separate one. We can go to the node provider view, which shows the rates per cloud region that we're running in. You can see that that's a bug, that little pass/failure thing that didn't go away, but you can see the failure rate for the individual cloud regions we're running tests in, and you can see things like tests failing a little more frequently in OVH and never failing in Vexxhost, because Vexxhost is turned off. Things like that.

If there's nothing else, I guess that's all I had to discuss today. Please come and help us develop this project. There's Python, SQL, JavaScript; we've got all the buzzwords. No containers, no Mongo, but if that's your thing, I'm sure we'll figure out a way to make you happy.