So, requirements as a project started out of necessity. There were many OpenStack projects that were each managing their own dependencies and libraries, and we still expected them all to work together. Things started occurring and being noticed really during the Havana timeframe, when we got into eventual-consistency types of issues: you'd have something like Keystone Client, or any other library, get installed by one OpenStack package, say Keystone itself, have the tests run for Keystone, then go to the next one, Glance, and have the Glance tests run, but a different version of Keystone Client was installed by Glance, and you don't quite know where you are.

So right around then is when we decided that we had to fix this and get ahead of it a little bit, and the way we did that is we set up a bunch of rules for the different libraries that we had available and were already using. We were vetting them for quality, and vetting them for license compatibility, because unfortunately we can't use all the different types of licenses within OpenStack. And the big thing more recently is we're trying to narrow down library purpose, where you have five different libraries providing the same functionality. That becomes overly complicated, more of that graph.

At the start we also found that the way we were tracking the libraries was not quite correct either. They had caps on the allowable version that you could install, and that meant that we couldn't consume a bug fix, just a revision of a certain library that fixed a critical bug in one part of OpenStack, because another part of OpenStack was holding it back. So we've worked on clearing those caps and consuming those bug fixes, and security fixes too, since those were blocked as well. And we got a little bit better.

I like to think of requirements as a waterfall on fast forward, where we consume stuff from the community and try to have it power OpenStack. So you're going to see a lot of pictures of waterfalls and dams. What we've done is construct a gate in front of all the upstream releases of the different libraries, where we control exactly what happens, when it happens, and how it happens. We can stop that process, we can pause it, and we generally do for the release time frame, which is called the freeze period. And it's actually worked pretty well over the last few years. But it doesn't always work well. Typically what happens in those situations is an upstream library will pull a release. So a release that we've been relying on and pinning to the exact version, saying this is the one version that gets installed everywhere within OpenStack, they will pull that version down and break something, or sometimes they will add a cap to something, which we went over earlier, and that prevents consumption of bug fixes and security fixes. So we work with upstream as much as possible to get them to be good stewards of their own libraries.

So how did then become now? We use Python, of course, within OpenStack, and we gathered all the different libraries and all the different dependencies and froze the output of installing them all together. They all have to be co-installable, and that's one of the big things, and one of the big troubleshooting things, within OpenStack library-wise and dependency-wise: getting everything co-installable. We have validations in place for formatting and generating this list of requirements and dependencies, and we've gotten more flexible over time as well.
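To make that concrete, here is a rough sketch of what those two levels look like; the package names are real OpenStack dependencies, but the version numbers are purely illustrative, not the actual current pins. global-requirements.txt carries the reviewed, vetted ranges (a cap like <2.0 is exactly the kind of thing described above that blocked bug-fix releases), while the generated upper-constraints.txt is the frozen, co-installable set:

    # global-requirements.txt -- the reviewed, vetted ranges
    python-keystoneclient>=3.8.0    # Apache-2.0
    requests>=2.14.2                # Apache-2.0

    # upper-constraints.txt -- the frozen output of installing everything together
    python-keystoneclient===3.17.0
    requests===2.22.0

A consuming project then installs its own requirements with those constraints applied, for example:

    pip install -c upper-constraints.txt -r requirements.txt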
We've specified different Python versions, so if a library stops supporting a certain version of Python but continues forward, we have to define it multiple times, and yeah, the co-installability testing is kind of our big thing right now.

The best thing that we found within the requirements process over the last few years, and I believe this was spearheaded by Tony Breeds, if he's in here, was cross-testing, and that's where we take a proposed update and test it against various OpenStack projects. So we'll have a proposed update to, say, Requests or some other library, and we'll take that, generate the constraints for it, then run those constraints with the Nova tests, and if the Nova tests pass, that's great, but if the Cinder tests don't pass, then we've got to fix that. We don't want to break the gate, so we'll work with those teams and figure out how their testing needs to change, and that's typically what it is, overly specific testing, and then move forward and get it fixed before we break the gate. That's probably the hardest thing that we've learned.

We started out with just using upper constraints, which is one block on everything, but we've moved to also freezing lower constraints. What lower constraints are: every library in OpenStack is required to have a lower bound, so "at least this version works", and lower constraints are the lowest co-installable versions of all the different libraries that you depend on, and that also includes the libraries that they depend on, and on down, so it's the full list. Each project can have a different version of their lower constraints, because for co-installability we only care about upper constraints, which are OpenStack-wide. Lower constraints testing ensures that you don't start using a feature of a library that came out in version two while your lower constraint is version one, which isn't going to work, so it gives better confidence in your lower bounds, for packages and on down.

Let's go to the next one, and this is the big slide, where we're going to go over all the tools that we're using in OpenStack. We have generate-constraints, which will take a requirements file, pip install it, and generate a freeze output of it. We have an option there to add a blacklist, and the blacklist is useful specifically for things like lint tests, so project A will have a lint tool version, pylint or something, that they install, and project B will have a totally different version, but pinned to something exact, and we allow for some variance in libraries like that, as long as it doesn't get overly egregious. Next, and that's just how we generate the upper constraints file. We also have a version map, where we run against Python 3.6 and just map the version output from the freeze to Python 3.5 and 3.4. Beyond that, edit-constraints is actually very useful for projects further on down the chain, so a project that is consuming the constraints that you've generated will want to install the master version of itself, and with that tool you just run the command and it will edit the constraints to include whatever you tell it. It could be master, like the foobar part there is the version, it could be a SHA, whatever works, it could be a tag. We also have validate-constraints and normalize-requirements, which just generate and format the various text files that we're saving there.
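As a rough illustration of how lower constraints relate to lower bounds (again with made-up versions), a project's requirements declare the minimums it claims to support, and its lower-constraints file pins the full transitive set to exactly those minimums so a test job can prove they still work:

    # requirements.txt -- the lower bounds the project claims to support
    oslo.config>=5.2.0
    requests>=2.14.2

    # lower-constraints.txt -- the full transitive set pinned at the minimums
    oslo.config==5.2.0
    requests==2.14.2
    urllib3==1.21.1

    # the lower-constraints job installs with these pins
    pip install -c lower-constraints.txt -r requirements.txt

If a change starts using a feature that only exists in a newer release of one of those libraries, this job fails, instead of someone finding out after deploying with the old version.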
We also have tools for xfails, which we aren't actually using anymore, we haven't used that for quite a while, luckily, where we generated a list of known incompatible libraries and dealt with that. But the last two tools are check-constraints, which validates that a project does not add a version or add a library that has not been reviewed, because we want to make sure that projects don't add libraries that have bad code or incompatible licenses, things like that; it's just a kind of gate check on the various projects. And build-lower-constraints is the tool that you can use to build your lower constraints file, so it will take your constraints for various projects, and you can build it globally. The way that we like to put it is the highest minimum, where it will search for the highest minimum version that is co-installable amongst the various projects that you're tracking lower constraints from: it takes the lower constraints for project one, the lower constraints for project two, and just combines all those.

And for the future, what we want to do is we're looking at Pipfile and Pipenv to track constraints. They have their own dependency solver now in pip, finally; it took them many years, it feels like, but they have it. That's actually how I generated that graph, with one of the tools embedded within there. The would-be-nice would be if Python eventually supported something like the Rust dependency model, where all the dependencies are scoped to only what imports them, but that would be a Python 4 thing, at the very least. And I'll just leave it there.

So if you have any questions, let me know. I posted a version of this that unfortunately doesn't include the contact information when I uploaded it, but all the other slides are the same. And our documentation I think is pretty good; that's one of the things that we're focusing on this cycle, much better documentation and centralizing it all into one place. It may not seem as important to some people, but they tend to realize the importance of it over time, over the years. If you have any questions, just let me know. Feel free to ask. Sure. I don't know if the mics are working or how that works, but yeah, I can hear you fine; it's just for the recording and whatnot. So the tooling is kind of made for Python, but the best practices, I think, would apply to anything with the same kind of dependency model that Python has. In Python, when you install a library, you only have one version of that library, whereas in Ruby, for instance, you can have ten different versions of a library, and different parent libraries will depend on different versions of each of them. Rust, like I said earlier, has a different dependency model again, where you can have ten different versions installed, but each is just locally scoped. So it depends. I think maybe npm could be used with the same kind of practices. The reasoning that we have for doing this, and our practices, are in the documentation. The updates that we're doing to them are more for the tooling and such, saying this is how you use it and whatnot. We haven't yet. We're keeping an eye on it, because every now and then it's proposed to start using other languages within OpenStack, but it hasn't happened yet, so we haven't gone all in on that. Any other questions? There's the dependency graph again, just to show you how simple this is. Thank you very much.
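As a closing illustration of the "highest minimum" idea behind build-lower-constraints, here is a minimal sketch of that merge; it is a hypothetical reimplementation for explanation only, not the actual tool, and it ignores details like environment markers:

    # Combine several projects' lower-constraints files, keeping the highest
    # minimum version seen for each package, so the merged set stays
    # co-installable across all of the projects being tracked.
    from packaging.version import Version

    def merge_lower_constraints(*constraint_files):
        merged = {}
        for path in constraint_files:
            with open(path) as handle:
                for line in handle:
                    line = line.split('#', 1)[0].strip()
                    if not line:
                        continue
                    name, version = line.split('==', 1)
                    name, version = name.strip().lower(), version.strip()
                    # keep whichever minimum is higher
                    if name not in merged or Version(version) > Version(merged[name]):
                        merged[name] = version
        return merged

    if __name__ == '__main__':
        combined = merge_lower_constraints('project1/lower-constraints.txt',
                                           'project2/lower-constraints.txt')
        for name in sorted(combined):
            print(name + '==' + combined[name])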