Oh, here we go. All right, so we begin with introductions. Please. We'll start on your end there.

My name is Christopher Aedo. I work for IBM. I'm super lucky, I get to work from home almost all the time; I live in Portland. I'm PTL of the Community App Catalog project, so I spend a lot of time with that, and DefCore, and RefStack, and some other things. And then every chance I get, I try to help out or learn a little bit more about OpenStack Infra. So I was really glad that I had the chance to do this with these guys.

So my name is Paul Belanger. I talk really loud. And I work for Red Hat. And like Chris, I'm lucky enough to work from home, in Ottawa, Ontario, Canada. I recently became an infra-core member for the infrastructure project. And like Chris says, it's just an awesome project to work with and to help OpenStack developers make OpenStack better. Thanks.

My name is Elizabeth K. Joseph. I work for Hewlett Packard Enterprise. Just like the two of them, I work from home; I'm based out of San Francisco, California. I'm also working upstream on the OpenStack infrastructure project as a core member. And I'm happy to be here today to talk about some of the internals of how we make everything work with the OpenStack development workflow.

So we're going to get started. I probably don't need to explain what OpenStack is to this audience, so I'll just move on to quickly talk about some of the projects that OpenStack encompasses. And again, you probably know them. In the OpenStack infrastructure team, it's our job to support all of the projects that come into OpenStack. These are some of the early projects; we've since moved into the Big Tent, so we're supporting a lot more projects now. That link there points to the official listing of projects that have project teams.

The infrastructure team is also responsible for supporting other parts of the project, not just the code teams, but also things like the documentation team, the quality assurance team, release management, and translations and internationalization. Pretty much anything that has to do with the components that ship with OpenStack, or the surrounding supporting documents in the OpenStack project, is something we support.

You may be familiar with the OpenStack release model. It releases every six months, much like Ubuntu and Fedora, which is sort of what it's based off of. Development is all done on the master branch of the OpenStack projects. Sometimes projects will have feature branches, but we won't really talk about that very much. And then after each stable release, there are stable branches that are maintained, which we have to pay attention to in our infrastructure.

Contributors to OpenStack come from a bunch of different companies. This is a screenshot from Stackalytics of the contributors during the Mitaka cycle. We work with companies, organizations, non-profits, and local and national governments, all kinds of different types of contributors. So we need to support a unified workflow for them inside of the OpenStack infrastructure. We also have to be able to scale the infrastructure, because the number of contributors to OpenStack has been growing very fast. In order to do this, we provide consistent tooling for all the OpenStack projects. That means we don't want people using different code review systems, different version control, different tooling.
We tell everyone to use Gerrit, which is our code review system. We want them all to use Git. And we want all of these changes to go into the same place and be centralized. We don't want any project-specific weirdness popping up because of differences in our tooling.

So I'm not going to go through everything on this slide; we'll dive into some pieces of it as we go on. But the developer infrastructure for OpenStack pretty much has all of these components. You interface with the code review system using git-review, which is how you submit your changes, and then you review through the web interface. We'll dig really deep into the test and build automation here. But we also do things like host a bunch of repository mirrors for various things that we need to test against. We have massive log servers to host both static logs and searchable logs. And then we run all the other things that developers need: we run the bots on chat, we run the mailing lists, the pastebin for the OpenStack project, all the etherpads, things that developers use for collaboration.

As a team, we also do all of our systems administration in Git, so the OpenStack infrastructure team itself uses the code review system. We use Puppet, and we put the Puppet config files into Git. We do code review, and we run tests against those changes. So all of our changes are also online and open source, just like the rest of the OpenStack project. Just like anything else, anyone can sign up for a code review account, anyone can look at our changes, and anyone can propose changes to our team's projects. And there's a link right here to the system-config repository, which is sort of the central repository of the infrastructure team; it defines all of the servers that we've launched in the project, as well as a bunch of configuration.
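As a concrete sketch of that workflow, assuming you already have a Gerrit account and git-review set up, proposing a change to our own configuration looks something like this (the branch name and the edit itself are placeholders):

```sh
# Minimal sketch: proposing a change to the infra team's own config.
# Assumes a Gerrit account and git-review are already set up; the branch
# name and the edit itself are placeholders.
git clone https://git.openstack.org/openstack-infra/system-config
cd system-config

git checkout -b my-infra-fix     # local topic branch for the change
# ...edit a Puppet manifest or a doc...
git commit -a                    # the commit hook adds a Gerrit Change-Id
git review                       # push the change to Gerrit for review
```

From there, the change goes through exactly the same review and testing process as any other OpenStack patch.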
The development environment for OpenStack itself, as you might be aware, is Python, so we're supporting the testing of Python software. Developers get in touch with each other on IRC: there's an OpenStack dev channel, there are several meeting channels, and projects tend to have their own channels. Developers often use DevStack to test their changes against, so, along with the QA team, we make sure that works inside our infrastructure.

As we'll talk about, the code people submit to the OpenStack project is all tested when it is submitted, not after it's merged, not at any other time. It's when people submit changes, and again when they're approved, that the code is tested. We make sure all code is tested before it hits the repository because, again, we have this diverse group of organizations and companies and governments and other people using it. We want anyone to be able to pull down the master branch of any OpenStack project and be assured that it's been tested. We believe this ensures code quality. It protects developers from each other: no one can break the master branch, really, because every change has to pass testing first. And the process is the same for everyone; there's no one person in the project who can bypass testing and land their code. Everything here is automated. I can scroll through this slide a little bit. Changes come into our check queue when the patch is uploaded. After a change has been approved by reviewers, it goes into the gate queue, which is the one in the middle here. And then after it's passed its tests and been merged, there's a post queue for any tasks that need to happen afterwards; in the case of documentation, that may mean updating the documentation on the website. All of this is automated: as soon as you upload a patch, it just happens. The infrastructure team's job is to make sure this automation doesn't break, and to add more of it as time goes on.

And then, to set the stage for the rest of the presentation, this is our CI workflow, the continuous integration workflow, from my perspective as someone who works on the project. This will be important as we discuss the components. You have your little computer down there; that's where you write your patch. You submit it up to Gerrit, our code review system. From code review it gets sent over to Zuul, which manages making sure that all patches are appropriately tested against each other, so you don't have one patch coming in and another one that conflicts with it causing trouble in the master branch when it lands. Zuul makes sure all the dependencies are properly tested against each other. Zuul passes things off to something called Gearman, which will then pass them off to one of the Jenkins masters we have. Jenkins tests all the code by passing it off to a fleet of servers managed by Nodepool, which is a bunch of donated instances from various OpenStack providers. Once the tests pass or fail, everything gets sent back down the pipeline and you end up back in Gerrit. That's when a comment is left in Gerrit explaining what happened with the tests, and it'll leave a little check mark or a minus one depending on whether they passed or failed. At that point, human reviewers go into Gerrit and do their code reviews. Once they approve it, the patch gets sent back up this whole pipeline for testing one last time, because the patch could have been sitting there for two days or two months, and we want to make sure it's still passing all the tests that much later. Once it passes, it comes back down, gets merged into the Git repository, and you have your code landed. Just keep this picture in mind as we go through the different components, so you can think about where you are. I'm going to pass this on to Chris now to talk about this from a developer perspective.

Thank you very much. That was very informative. Question: how many people, can I get a show of hands, have landed a patch in OpenStack in the room? Okay, not too many, and that's cool; this is probably the exact audience we were expecting. So this process flow is what you will have seen if you've looked at the developer workflow document, which is where we point everyone who wants to start contributing to OpenStack, so that they have an overview of the process from the developer side of things, where you probably don't need to know as much about what's happening on the back end. Like Elizabeth said, everyone starts their patch by cloning from master. Then you'll see the box for your local environment: you've now got a clone of whatever project you're planning to submit a patch against. You create a local branch, you make your changes, and you should be running your unit tests locally, using tox. Assuming those pass, you hit git commit and git review, and it sends the change up to Gerrit, the review server, where it goes through the first pipeline Elizabeth was talking about, the check tests. If it fails, and I think in a later slide I'll show you what that looks like, you get immediate feedback that something's wrong, or semi-immediate, depending on how backlogged the gate is. You then have an opportunity to make more changes, amend your original commit, and keep working on it until it passes and it's ready for review by other people. And once it gets approved, it goes through the last level of testing before it gets merged; what's the last pipeline? Check and then gate. Okay, so it goes through check and then gate, and then it gets merged back into master.
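Condensed into commands, that whole loop is only a handful of lines. Here's a hedged sketch, using Nova as the example project and a made-up branch name:

```sh
# The developer loop described above, sketched with Nova as the example.
git clone https://git.openstack.org/openstack/nova
cd nova
git checkout -b my-bugfix        # hypothetical local topic branch

# ...make your changes...
tox -e pep8,py27                 # run linting and unit tests locally first

git commit -a                    # rework later with: git commit -a --amend
git review                       # send the patch up to Gerrit
```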
So Gerrit, like Elizabeth was saying, is one of the common tools that we require everyone to use, for the sake of sanity, and in order to allow OpenStack to continue taking as many patches and contributors as it does. It was originally developed by Google for Android. It's a standalone patch review system. It has lots of different integration points, you can categorize the reviews that are going in there, and everything can be accessed via a REST API. I'm not actually sure what the numbers are, but I think most people use Gerrit through the web UI. There's also a program called Gertty, written by James Blair, who's in the audience today, which lets you do all of this on the command line in the terminal, but we won't talk about that right now.

So this is what a patch under review looks like. How many people have reviewed patches? Sure, a few hands, okay. So this is what you see. You've got the commit message and some extra information like Closes-Bug, and we'll talk about how that is integrated with Launchpad, so Gerrit can also update blueprints and bugs in Launchpad. It shows you who authored it, and on the right-hand side you'll see the responses from the different CIs. This particular change has two patch sets, and all the way down you'll see detailed responses and code review comments from different people, as well as the responses from the different CI systems. And, sorry, back here, if we clicked on api.py, we could actually see the code, a side-by-side diff of the code itself. You can add line-by-line comments, so if you were reviewing the code and you wanted to point out either a mistake or a way something could be done better, this is where you'd add comments as a reviewer, and obviously you'd see those comments if it's your code.

This is integrated with Launchpad. Like I was saying, in your commit message you can say Closes-Bug, you can say Implements: blueprint, you can say Partial-Bug. So if you specify in the commit message that your patch relates to a specific bug on Launchpad, the tools are integrated: you'll end up seeing a link to the related patch on the bug, and eventually, if the patch actually closes the bug and gets merged, Launchpad will be updated to show that the bug has been fixed because the code was released.

If you're just looking through the review list, you can see, to the right, it doesn't show the last column. Maybe you can't see this, it's terrible, no? You can imagine to the right of this, oh wait, yeah, perfect, thank you. So there are extra columns at the end. Those three columns indicate whether a patch has been verified by the automated testing in Jenkins, whether it's been reviewed positively, shown with a check mark, and, in the last column, whether it's been approved for workflow. So the patch two below the one that's highlighted is a patch that has been reviewed and approved for workflow, but hasn't actually passed the gate tests yet. The first column is where you'll see the plus one, or, for changes that failed, a minus one.

One important tool that everyone uses is git-review. Was this written by someone in OpenStack? I think it was, yeah. So this is another OpenStack tool, and it allows you to very easily submit changes through the Gerrit workflow. You can use git-review to fetch other people's changes too. So you can go into a project and, for instance, this one here has change number 308264; if you run git review -d with that number, it checks out that particular patch as a branch, so you can very quickly and easily get their code, to try to replicate and test whatever they're working on. git-review is really easy to use; there's not too much more to say about it. And obviously all the usage details are in the developer workflow documentation.
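As a quick sketch of that, using the change number from the slide:

```sh
# Pulling down someone else's change to test it locally, using the
# change number shown in the Gerrit UI (308264 is the one on the slide).
git review -d 308264             # fetches the patch as a local branch
tox -e py27                      # now run its tests or poke at the code
```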
So the two types of tests that we rely on are unit tests and integration tests. The unit tests are what I was talking about earlier, the ones you run with tox in a virtual environment, and you probably should be running them before you submit your patch for review; make sure everything passes rather than making the gate give you that feedback. The integration tests get much more complicated. This is where it's touching, sometimes, third-party CI or lots of other really complicated bits, and testing your patch in one project against all of the other interrelated projects that it touches. The intention here is to test the effect of merging that change, and in a minute, maybe one or two more slides, Paul is going to talk in much more detail about that. I'm trying to talk kind of quick so we'll have time for some questions and answers too. And I think I'll hand it over to you for the Git Prep and Zuul Cloner stuff. Sure, you're much better at it than I am. Thank you.

So ultimately this is the tooling that we use in Jenkins to get the right branch of Git into the CI pipeline. Initially we had a Bash script called Git Prep, which did some clever things with Git to grab the target branch from Gerrit, make sure it was cleanly checked out onto the remote node, and then merge that branch against master. If any of those steps failed, the patch got a minus one, and it was the responsibility of the developer to rebase, and so on and so forth. The newer approach uses Zuul Cloner, and we'll get into that a little further down the road, but Zuul, which is the heart of everything, has a little command line client that still leverages Git; a lot of this functionality of testing a merge before it gets pushed to the CI node is now handled by Zuul locally. Ultimately it's just a faster way to scale, because we found that we were running into bottlenecks in the gate around this process. I think we're up to about seven or eight Zuul merger servers now that just deal with merging code, because there's so much code going through the pipeline at any given time.
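For a flavor of it, here's roughly how a job invoked zuul-cloner in the Zuul v2 era. Zuul itself exports the ZUUL_* variables into the job environment, so the values and the workspace path below are illustrative placeholders:

```sh
# Sketch of a CI job calling zuul-cloner (Zuul v2 era). Zuul exports
# ZUUL_URL / ZUUL_BRANCH / ZUUL_REF into the job environment; the values
# and the workspace path here are illustrative placeholders.
export ZUUL_URL=http://zuul.openstack.org/p    # where the mergers serve Git
export ZUUL_BRANCH=master
export ZUUL_REF=refs/zuul/master/Zexample      # placeholder; set by Zuul

zuul-cloner --workspace /opt/workspace \
    git://git.openstack.org \
    openstack/nova openstack/requirements
```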
Okay, so to expand on our integration testing: in the beginning we had DevStack and devstack-gate. DevStack is a project in OpenStack, primarily driven by Bash, that deploys the OpenStack projects onto a fresh node, and devstack-gate is the job responsible for that. Like I said, Nodepool will boot a pristine image, a fresh server; DevStack runs to install the OpenStack components that are needed; and then we run integration tests on top of that, such as Tempest, to validate that Nova is working appropriately. I wanted to add in Puppet here: another team, the Puppet OpenStack project, works almost exactly like DevStack, except it's leveraging the OpenStack Puppet modules to do all of the deploying. It just adds another layer of real-world testing in the pipeline, because the Puppet modules are quite well maintained and that's what operators tend to use. Like I said, it adds an additional level of real-world testing.
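If you've never run DevStack, a minimal local run looks roughly like this; it's approximately what devstack-gate automates on a fresh Nodepool node, and the local.conf values are placeholders (see the DevStack docs for real settings):

```sh
# Minimal local DevStack run, roughly what devstack-gate automates on a
# fresh node. The local.conf values below are placeholders.
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

./stack.sh    # deploys the OpenStack services onto this machine
```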
Okay, so Zuul. I mean, we could do a whole presentation just on Zuul, but unfortunately we only have about 19 minutes to get through it. Zuul is another project that was created by James Blair; if you get a chance to talk to him, he's in the back, very smart man. Zuul gets its name from Ghostbusters: basically, it's the gatekeeper, and you can't get through the gate until your code is properly tested. It interfaces with Gerrit and Jenkins, and it allows for very flexible pipelines through your CI workflow. We, being OpenStack, really have a three-phase pipeline. There's check, which runs a first series of tests when a patch is uploaded. Once the patch is approved, it goes into the gate pipeline, where it may run the same testing, or maybe a thinned-down version. And then we have a post pipeline: once the code is actually committed, like Elizabeth was saying, maybe you need to generate tarballs to be uploaded somewhere, because you want a tarball of the contents of the Git repo.

Bottlenecks, we're not really going to dive into this, because a lot of the bottlenecks in Zuul are being dealt with. Zuul is a project in its own right; we're working on a version 2.5 and even a version 3, which, like I said, is a fantastic way we're enhancing Zuul to deal with all these projects. And I don't think Zuul would exist today if it weren't for all the CI scaling problems we've had in OpenStack. A lot of it comes down to the fact that we couldn't use Gerrit to merge all the code, because with so many projects we were pinning the CPU on the Gerrit server. Zuul just externalizes that, and makes it fantastic.

Okay, so this is a simulation of Zuul, and hopefully it'll work; I just have to click, I think. So in this scenario we have a handful of patches that have been uploaded to Gerrit, and Zuul receives them. Zuul then initiates the process to build those patches on the remote CI, by telling Jenkins to launch them on remote nodes managed by Nodepool, across multiple clouds. In this example, patches one and two worked great; patch three failed, and patch four, which was being tested on top of it, failed as well. So one of the cool things Zuul does is recognize that patch three is the problem. Zuul takes it out of the path and re-attempts patch four, and if patch four succeeds, the pipeline is now patches one, two, and four, with three rejected. Patch one goes green and merges into Nova; patch two goes green and merges into Nova; patch three failed and is returned to the developer to fix, maybe they have a formatting issue; patch four succeeds and lands in Nova. You can see this process really allows developers to almost work in silos and not be dependent on other developers' problems. Zuul takes a lot of that headache away, and lets the patches in our pipeline move forward at a rapid pace.

So this is an example of a Zuul pipeline definition. This is our gate, and I won't get into all the sub-triggers and so on, but what it means is that among our pipelines, the gate pipeline has the highest precedence: when test resources come online from Nodepool, this pipeline gets access to those nodes first. It listens for a trigger from the Gerrit event stream, and if the change passes, Zuul leaves a plus two, which tells Gerrit to go ahead and merge the code. If it fails for whatever reason, it leaves a minus two, and the code is not merged. And in the case of a failure, once the developer re-uploads their change, it doesn't go straight back into the gate; it has to start the whole process again. It hits the check pipeline, goes through that, gets promoted into the gate pipeline if it's successful, and if it passes there, it's allowed to land.
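To make those pipeline definitions concrete, here's a sketch in the style of the Zuul v2-era layout.yaml; it's paraphrased, not a verbatim copy of our production config:

```yaml
# Sketch of check/gate/post pipelines in the style of Zuul v2's
# layout.yaml; paraphrased, not a verbatim copy of production config.
pipelines:
  - name: check
    manager: IndependentPipelineManager   # changes are tested independently
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

  - name: gate
    manager: DependentPipelineManager     # changes are tested in sequence
    precedence: high                      # gets Nodepool resources first
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
    success:
      gerrit:
        verified: 2
        submit: true                      # tells Gerrit to merge the change
    failure:
      gerrit:
        verified: -2

  - name: post
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: ref-updated              # fires once a change has merged
          ref: ^(?!refs/).*$
```

The DependentPipelineManager is what gives the gate the speculative, test-changes-in-sequence behavior from the simulation earlier.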
And this is what a project's Zuul configuration looks like. This is Nova, and it's showing four pipelines. In the check pipeline, which runs when somebody uploads a patch, we do pep8, which is basically formatting and linting; we make sure the code follows one common style. This one shows that we're testing against Python 2.6 and Python 2.7, and we're also testing against Tempest on DevStack. And you can see that check and gate run the exact same tests in this example. Most projects don't run identical sets in both: if a job is non-voting, meaning it's running some experimental testing that isn't stable yet, it still runs in the check queue to give the developer feedback, but you would not run a non-voting job in the gate, simply because it wastes resources from infrastructure's point of view. Then the post pipeline is what happens once the code has actually merged: you can see we generate some tarballs, which get uploaded to tarballs.openstack.org; we update documentation, which goes to docs.openstack.org; and we do some translation updates. And finally, when a release happens, which is the tagging process in Git, we build a tarball and update documentation; the tarballs generated in this pipeline are what operators and end users end up consuming for final releases. So, templates: ultimately there are tons of templates, and they're very interchangeable among projects. Everything is managed through Git.
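Wiring a project like Nova into those pipelines, and applying a template, looked something like the sketch below in the v2-era layout; the job and template names are paraphrased, not an exact copy of project-config:

```yaml
# Sketch of a project entry in the Zuul v2-era layout; job and template
# names are paraphrased, not copied from project-config.
projects:
  - name: openstack/nova
    template:
      - name: python-jobs            # pulls in the shared Python job set
    check:
      - gate-nova-pep8
      - gate-nova-python27
      - gate-tempest-dsvm-full       # DevStack + Tempest integration test
    gate:
      - gate-nova-pep8
      - gate-nova-python27
      - gate-tempest-dsvm-full       # same set as check, in this example
    post:
      - nova-tarball                 # ends up on tarballs.openstack.org
      - nova-docs                    # ends up on docs.openstack.org
```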
So like Elizabeth and Chris were saying, the infrastructure team heavily relies on this process to manage all of our tooling, and a lot of time is spent going through the patches that projects submit into infrastructure to change job definitions and so on. From infrastructure's point of view, we don't tell projects what to run, and we don't tell them how to run it; we expose the tooling, and they have a lot of flexibility in how their jobs work. We have another tool called Jenkins Job Builder, which takes YAML files and turns them into Jenkins job definitions that get pushed into each Jenkins master. So again, there's no manual touching of the Jenkins interface to configure all of these jobs; it's all automated in a pipeline.

So here's an example; I'm going to try to go through this a little bit quickly. This is the template for our pep8 job, which is our linting process. If anybody's familiar with Jenkins, these are the kinds of things Jenkins expects. In Jenkins, you have a builder process. The first builder is just a very simple no-op that tells us which template was used, more for debugging purposes. zuul-git-prep-upper-constraints is related to our requirements upper constraints, which limit which versions of Python libraries a job installs. Then we install distro packages, using another project of ours called Bindep, which lets projects declare which OS packages need to be installed outside of Python; say you need Firefox, for example, for whatever reason. Then we revoke sudo, because the job doesn't need root credentials from that point on. And then we run pep8, which is basically a tox macro: it runs tox with the pep8 environment. When it's finished, the publisher takes the console logs out of Jenkins, pushes them up to logs.openstack.org, and lets developers review them. And we run all of this on an Ubuntu Trusty node.

From templates you can make a group of templates, and you can see there's a handful in this one. All of the projects that subscribe to our python-jobs template run this series of tests: pep8, Python 2.7, 3.4, PyPI publishing, docs, requirements, and so on and so forth. And this is how you apply a template; you can see this is for Nova. A lot of things happen in this patch, and we're not going to go into all of it, but in here somewhere you'll see, I think it's in here, python, well, maybe it's not python-jobs. Oh, there it is, sorry, it's python-db-jobs; this is an old snapshot. But you can see that in the gate, the Nova project runs a whole series of jobs, and it's one of the projects that uses the most resources.
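Here's a hedged sketch of what that pep8 template and a job group look like in Jenkins Job Builder YAML; the macro and job names are paraphrased from the talk rather than copied from project-config:

```yaml
# Sketch of a Jenkins Job Builder template and job group; macro and job
# names are paraphrased from the talk, not copied from project-config.
- job-template:
    name: 'gate-{name}-pep8'
    node: ubuntu-trusty              # Nodepool label the job runs on

    builders:
      - print-template-name          # no-op recording which template ran
      - install-distro-packages      # Bindep-driven OS package install
      - revoke-sudo                  # drop root before running the tests
      - shell: |
          # run the pep8 tox environment (linting)
          tox -e pep8

    publishers:
      - console-log                  # ship the console log off the node

- job-group:
    name: python-jobs
    jobs:
      - 'gate-{name}-pep8'
      - 'gate-{name}-python27'
      - 'gate-{name}-docs'
```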
We've got less than 10 minutes, so I'll kind of skip over this. This is the process we use for archiving our logs off of the Jenkins slaves. Because of Nodepool, all of our Jenkins slaves are short-lived instances, usually alive for about an hour: Nodepool builds an image from scratch, boots a fresh node from it, we run all of our testing, and then the node gets deleted and destroyed. So we need a process to archive that information off. On the left-hand side we have a very simple SCP step, where we take static text files and move them off somewhere. In the middle there's a more complex process, the one for Logstash: you can see we have a Gearman client and server, and we move the logs off to some log workers, which convert them and upload them into Elasticsearch. And then finally, on the right-hand side, is just the process of getting them off the servers: logs come in from the slave back to the master, and are then pushed up to logs.openstack.org, or to Swift.

Okay, so scaling hardware. Basically, I think we're up to about six or seven cloud providers now, and we run all of our jobs across that donated hardware; we're very thankful for it. It allows us to deploy a lot of test runs across a lot of clouds, it lets us scale up and down at will, and, more specifically, it means people's code runs against different clouds, which may not all behave the same way. If that doesn't cover your needs, though, say you have specific hard drives that you want to test a storage backend against, or something along those lines, you would set up a third-party CI system. There's a process that lets you set up a mirror of the infrastructure CI and have your Zuul and our Gerrit talk to each other. So when a patch hits OpenStack CI, it may also get pushed down to, say, an IBM CI, so that IBM can run specific, maybe customer, scenarios. And as long as IBM pushes those logs up somewhere public, so the open source developers can review them, that's basically the minimum criteria. Anybody can have their own CI in the gate; you just need to make sure the project you're gating against allows that kind of two-way communication.

Very simply, we have multi-master Jenkins; I believe we're up to eight now. So we have eight Jenkins masters that we just use to trigger jobs, and they're all exactly the same, provisioned through Git and YAML files and so on. The reason is that the number of jobs we were pushing overloaded a single master, so we added a second, then we overloaded that one, and now we're up to eight. We use a Gearman plugin, which I believe was written by Clark, as the glue between Zuul and Jenkins. You can see our masters are called jenkins01.openstack.org, jenkins02, and so on. Right now we're at a capacity of about 750 test nodes that can be launched at any given time. So yeah, it's a lot of nodes. And I believe they have four cores, eight, no, what is it, 32 gigs of RAM? It's some big hardware.

You can see, over a given timeframe, these graphs show how many nodes we're launching; at one point we're peaking at about 615 nodes. And all of that, like I said, is controlled by Nodepool. It's a very elastic concept: it automates all the creation of the Jenkins slaves, takes care of registering a slave node with a master, and manages multiple OpenStack clouds. And here you can see more detail: yellow means we're building a node, green means it's available to be consumed, blue means it's in use, and magenta or purple means it's being deleted. So you can see, in this case, we're up to 825 nodes in use at one point in time, on a Monday morning.
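For a flavor of how those clouds and node types were described to Nodepool, here's a stripped-down sketch in the style of the old nodepool.yaml; the provider name, image details, and all the numbers are placeholders:

```yaml
# Stripped-down sketch in the style of the old nodepool.yaml; provider
# names, image details, and numbers are placeholders.
labels:
  - name: devstack-trusty          # the label jobs request via "node:"
    image: devstack-trusty
    min-ready: 10                  # keep this many nodes booted and waiting
    providers:
      - name: some-donated-cloud

providers:
  - name: some-donated-cloud
    cloud: some-donated-cloud      # credentials live in a clouds.yaml
    max-servers: 100               # cap on concurrent nodes in this cloud
    images:
      - name: devstack-trusty
        base-image: 'Ubuntu 14.04 LTS'
        min-ram: 8192
        setup: prepare_node_devstack.sh   # pre-caches dependencies
```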
And I think we're at the end of our slides. Yay, so we have exactly, I think, five minutes for questions. There's a microphone over there; if you could line up to ask questions, because there must be questions, there have to be. Success, guys, I think we answered every question anyone could have. Not me, we're good.

All right, so the question was about Logstash and the logs from your builds: how do you link the log output from the test run back to the code change that was submitted? How do you associate those? So, we mentioned there are two places: the stuff we upload to logs.openstack.org, or into Swift. How do we link that, exactly? It used to be that you could just figure it out on logs.openstack.org by appending the patch number to a certain path. With Swift it's slightly different. We do send a link back, and the developer just clicks on the link; I'm not sure exactly which piece figures out what that link is. I think it's a hash from Zuul or something like that. Yeah, I think it's still something you can work out; I bet Jeremy knows. There are fantastic people in infrastructure who know all of that, but we didn't really dive into it because it's such a complex process. Ultimately, Zuul knows about the change, it's at the heart of it, it's the core, and we propagate that information through to Jenkins and Logstash and so on. And it goes back into Gerrit, so it's tied to the patch: you can click directly in the comments and get straight to that log. Yeah, okay, got it. All right. Any other questions?

Okay. I have a question about monitoring in the OpenStack infrastructure. I mean, how are you monitoring whether, say, a Jenkins master is down? So, there are two answers. Well, first, we don't, really; we don't run an alerting system or anything. We probably should. We always need more people coming onto our team and helping us, and now that you know how it all works, you can help contribute. We do use Cacti to track server usage over time. Not everyone on the team can log into the servers, but they can all look at cacti.openstack.org and see, hey, this one ran out of disk, someone should fix that; or they can debug and watch how RAM and CPU and network usage tracked, all in the public Cacti instance we have. That's really pretty much what we have for monitoring. That actually speaks to one thing I kind of regret we didn't mention: the OpenStack Infra project is entirely volunteer-based. Think about that: there isn't a dedicated, paid-for team of people whose only job is to maintain this infrastructure; it's just a bunch of volunteers doing awesome work. It's kind of volunteer; like, I'm paid by HP, and this is actually my job, but HP has volunteered me to do it. Yeah, and to add to what Elizabeth and Chris said, we also have statsd running in the back end, but we don't have an active process to let us know there's a problem. Nine times out of ten it's a developer that tells us there's a problem. So we need to do a better job of having some sort of automated notification, but at the scale we're talking about, nobody wants to get email-bombed when something breaks. Yeah, exactly right. People just hop into IRC and say it's broken.

Okay, do we have time for another question? Yeah, I think we have another minute. How about security? I mean, who can access the servers, and how do you keep the passwords and the public keys across all the servers? So we have a core group of people, I think there are eight or nine of us now, who have SSH keys deployed to all the servers. Those are trusted members of the community. It takes a year or two of being around, being part of the team, reviewing patches, and pretty much working full time on it, to be entrusted with that core privilege and root access to all the machines. Beyond that, we allow a couple of extra people here and there who understand our workflow, if they're subject matter experts in a certain area; so our Asterisk server, for example, may have a couple of other admins on it who know specifically about Asterisk. As far as passwords and things go, of course, that is the one part of our project that is not open source. We keep them in a Git repository on one of our protected nodes, and we share the details that way, in a Git repository or a GPG-encrypted file, depending on what the password is and what it's for. And just to add to that, there's very little that you can't do without root access. Usually it's when you're trying to troubleshoot a problem, or something is down and you need to look at sensitive logs; that's really the only need I find for root access. Everything else is all out there. It requires some creative thinking sometimes, but if something is failing in the gate, you can take our tools, download them locally, and you should be able to reproduce the same problem, if you take the time and effort to download it all and launch it in a local Jenkins or a local VM and so on. Okay, okay, thanks a lot.
Hey, I think we're done. Wait, wait, there's one more, please.

Hi. I want to start contributing, right? In which part of the OpenStack project do you recommend I start? Oh, that's such a loaded question. So... Infrastructure. I mean, what I usually recommend people start with is the docs project, because it's the easiest to test locally; it has the fewest moving pieces. There's a really big difference, right, between testing a patch to Nova and testing a doc update. But doing doc updates gets you really familiar with the Gerrit workflow, and used to that process, in a pretty easy-to-consume way. Yeah, and to add to that: if there's something in infrastructure that you personally find lacking and you want to drive it, that's also a great way to get involved with the OpenStack Infra team. But if you want some low-hanging fruit, by all means just ask on IRC. We also have a specs process, which explains what we have in the pipeline for the coming months and years. There's a lot of stuff that just needs help, and it's just a matter of saying, hey, what can I help with? It's that simple. Yeah, and the specs are up at specs.openstack.org; that'll give you a nice overview of all the specifications that different projects are looking at, and if you're particularly interested in something, you might look at one of the specifications that's already up there. You're awesome. Thank you. Thank you. Thanks, everybody. Thanks.

Oh, wait, we have one more, please. Just a really practical question about DevStack. When you bring DevStack up and get it running, if you then shut down the server, you can't just start it again; you have to unstack and restack, right? Is there a way to just bring the services back up without running stack.sh again? I just wanted to clarify that that's the reality, right? Yep. I mean, in theory there are ways, but anyone on the DevStack project will tell you flat out: no, we don't ever want to support that. Got it. Hey, can I just get the mic? It's mainly because DevStack isn't launching things as integrated services, like you'd expect from systemd or something. All right. Yeah, I just wanted to clarify. Thanks.