Hi, everyone. This is a really big room, so if you want to come up front, please do. It's a little intimidating to see a room this big. My name is Dims, and my co-conspirator is Swapnil. I work on things like Nova and Oslo. I was a PTL for Oslo, I'm on the technical committee, and this time I mainly help out with the release management team, trying to figure out how to automate the releases. If you see all those hundred emails coming to openstack-dev and openstack-announce, that's us. As part of the release team, we were also taking care of the requirements changes happening across all the projects. So how many of you have had trouble installing Python packages? Okay. One, two, three. If you're thinking this is about getting your requirements into OpenStack, wrong session. This is about Python packages, versioning, and the problems with co-installability and things like that. So heads up before we start. Okay. So we're going to go through... See? I got you. Okay. We're going to talk about what the typical problems are in an OpenStack environment: how do we end up with version conflicts, and how do we get out of them? How do we make sure that, over time, all of us are not struggling to merge code here and there? Especially people like zigo and Cori, who have to package Debian packages, and others doing RPM packages. How do we make their lives a little simpler by coming up with strategies to manage the versions that need to be installed together? So we're going to go through what global requirements are, what we mean by upper constraints, and a few other things that you see on the slide. Okay.
So when we identified that the release team was taking care of a large set of tasks with respect to requirements, we said we need to split out a separate team, consisting mainly of people who work on packaging, but also anybody else who is interested in taking care of this hard problem. And the main thing they work with is what we call the global-requirements.txt file, which lives in a repository called openstack/requirements. Before we go into what global-requirements.txt is: typically, if you open up a Python package, you would see something like this. You can see, for eventlet, what versions can be used and what versions are not good, right? If you look at line eight, you can see there is one version of eventlet which we don't like, and then we accept anything above that, right? Then lxml, which is BSD-licensed: we can work with 2.3 and above. What this means is we may use a feature of lxml that is only available in 2.3 and above; that's why we need a minimum of 2.3. And for SQLAlchemy, we set both an upper and a lower bound, because anything above that upper bound we know is broken; there's an upstream bug waiting to be fixed. So things have to be tracked. This gives you a sense of: okay, if my operating system has Python packages that fit within these ranges, then I'm okay. Otherwise you have to go back to PyPI and figure out a set of packages that will install together, right? That is essentially what we end up doing for all OpenStack projects. Typically this lives in requirements.txt and test-requirements.txt, and sometimes in setup.cfg, depending on which Python package you're looking at. Okay. So what are the files that we are taking care of in requirements? It seems really simple.
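The version-range idea described here can be sketched in a few lines of Python. This is a deliberately simplified checker, not what pip actually does (pip implements the full PEP 440 rules); the package names and version numbers below are illustrative, not the real ranges from the slide:

```python
# Minimal sketch of how version specifiers in a requirements.txt line
# are evaluated. Real tooling (pip / the `packaging` library) handles
# many more cases (pre-releases, epochs, wildcards); versions here
# are illustrative.

def _v(version):
    """Turn '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, spec):
    """Check a version against a comma-separated specifier string."""
    for clause in spec.split(","):
        clause = clause.strip()
        for op in (">=", "<=", "==", "!=", ">", "<"):
            if clause.startswith(op):
                bound = _v(clause[len(op):])
                ok = {
                    ">=": _v(version) >= bound,
                    "<=": _v(version) <= bound,
                    "==": _v(version) == bound,
                    "!=": _v(version) != bound,
                    ">":  _v(version) >  bound,
                    "<":  _v(version) <  bound,
                }[op]
                if not ok:
                    return False
                break
    return True

# eventlet-style: exclude one known-bad release, keep a floor
print(satisfies("0.17.4", "!=0.17.3,>=0.16.1"))  # True
print(satisfies("0.17.3", "!=0.17.3,>=0.16.1"))  # False
# SQLAlchemy-style: both a lower and an upper bound
print(satisfies("1.0.9", ">=1.0.10,<1.1.0"))     # False
```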
Every project has a requirements.txt, so how do we come up with a combined requirements.txt, so that all the projects in OpenStack can be installed together? That's what we call global-requirements.txt, and we have to enforce what is in global-requirements.txt in each of the individual projects. That's the huge task we are faced with. And one thing that helps the packagers is that we calculate upper-constraints.txt based on what we can download from PyPI, what is in our mirrors, and so we come up with a list. If you look at the upper constraints, each entry uses a triple-equals sign (===), which means it's a pinned, fixed version. So for CouchDB, in the CI tests we are always using 1.1. For MySQL-python, on Python 2.7, we are using 1.2.5. We are essentially publishing the exact set of packages that we install in our CI. And if you go back to the global requirements, each of these pins fits exactly into the global requirements ranges. So global requirements can give you many answers, but this is the one answer that we are actually testing. That is exactly what we are doing. So now we have these two files. We don't want to manage them manually; it's a huge task. For example, upper-constraints.txt we can calculate by running jobs, and for global-requirements.txt we can run jobs that publish changes as reviews to each of the projects that want to adopt the global requirements. Anybody confused yet? It will get even more confusing. Just raise your hand if you have a question and we can talk it through. So there are two files, we need to manage the two files, and we have to distribute the information in the two files to all the projects. That's the task at hand. And the team that does this in an automated fashion and deals with the exceptions is the requirements team. That's basically what we are talking about here. So different projects may have different version ranges, and the order of the installs in the requirements files might break some stuff, too.
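The upper-constraints file described here is just a list of exact pins, optionally qualified by an environment marker. A toy parser gives a feel for the shape of the data; the package names, versions, and the much-simplified marker handling are illustrative, not the real syntax rules:

```python
# Toy parser for upper-constraints.txt-style lines: exact ("===")
# pins, optionally qualified by an environment marker. Only a crude
# python_version marker check is done here; real tooling evaluates
# full PEP 508 markers.

def parse_constraints(text, python_version="2.7"):
    pins = {}
    for line in text.strip().splitlines():
        req, _, marker = line.partition(";")
        name, _, version = req.partition("===")
        # Skip pins whose marker doesn't mention our interpreter version.
        if marker and python_version not in marker:
            continue
        pins[name.strip()] = version.strip()
    return pins

constraints = """
CouchDB===1.1
MySQL-python===1.2.5;python_version=='2.7'
"""
print(parse_constraints(constraints))
# {'CouchDB': '1.1', 'MySQL-python': '1.2.5'}
```

In CI these pins are applied with pip's constraints mechanism, roughly `pip install -c upper-constraints.txt -r requirements.txt`, so every job installs the same versions.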
So what used to happen previously: take, say, the Keystone client library; it was referenced by many projects, and each of them had its own version range. So there was a random order in which DevStack would end up installing stuff, and the same Python library might be installed five times. That used to happen in our CI. We could never figure out exactly what we tested unless we printed it out at the end of the CI run, but then every CI run would have a totally different set because of the random ordering of what got pulled down. So we could not publish that information, and it was not useful to the packagers. That's one problem with the CI. Then also, when new projects come in, or projects are trying to do new stuff — somebody wanted to add ZooKeeper, somebody wanted a different Python library for memcached, and things like that — somebody has to ask the hard questions: is the license compatible? Does it work on Python 3? Because we have a long-term goal of moving to Python 3 as a whole community. Are there other libraries that other projects are already using? We have to ask these questions before we can let new libraries in, because we don't want this set of libraries to explode. And the team that vets these questions is, again, the requirements team. New requirements are added by people like Nova, Neutron, Congress — anybody can add new things. They are welcome to open reviews in the openstack/requirements repository, come up with answers to all the questions we ask, work with us, and try to get stuff in. Then, when things break, we add things like pins and exclusions.
And we have a process where, every night, we install stuff from PyPI and calculate the changes that are needed in upper constraints — because somebody published a library on, say, a Friday night, which typically happens. So on Saturday morning we'll have a review that says, look, MySQL-python put out a new release, or eventlet had a minor update. A review gets logged, the requirements team approves it, and it gets merged. So there is a cyclical process that we have set up and that we end up monitoring all the time. Sometimes on a Sunday night or Monday morning, we'll see that all the CI jobs are broken, and we go figure out that somebody made a release over the weekend and something is broken. So we add a pin or an exclusion by hand, and we move on. The other good thing about this — one big thing protecting us from breakage like that — is the upper constraint. If there's a new version of eventlet that breaks, say, Neutron, then we fix the upper constraint. We don't fix the range; we fix the upper constraint, let the CI go, and then we work with the eventlet team to figure out whether it's something that can be fixed easily. If it can't be done in a few days — if we realize it's going to take a really long time — then we go add a block on that specific version: we tell them, look, we're not going to pick up any new versions, or we're going to block on this one specific version. Recently what happened was a library had already been published, but then they published a wheel for it, and the wheel was breaking our CI jobs. So we did an exclusion and cleaned up our mirrors, they deleted the wheel from their index, and everybody was happy.
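The pin-and-exclude reaction described here can be sketched like this: the nightly constraints job effectively pins the newest release the range allows, and adding an exclusion rolls the effective pin back to the last good release. All version numbers below are made up:

```python
# Sketch of the "pin / exclude" reaction to a bad release: given the
# versions available on the index, the newest non-excluded version
# becomes the effective upper-constraint pin. Versions are illustrative.

def _v(version):
    return tuple(int(p) for p in version.split("."))

def newest_allowed(available, excluded=()):
    """Pick the highest available version that isn't excluded."""
    candidates = [v for v in available if v not in excluded]
    return max(candidates, key=_v)

available = ["0.18.4", "0.19.0", "0.20.0"]

# Before: the constraints job pins the newest release.
print(newest_allowed(available))                       # 0.20.0
# 0.20.0 breaks a project's CI, so an exclusion is added; the
# effective upper constraint falls back to the last good release.
print(newest_allowed(available, excluded=["0.20.0"]))  # 0.19.0
```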
So sometimes it's hard to do a global requirements change, because if we do that, we end up changing everybody. If we make a change in the upper constraints instead, only the CI jobs are affected, and we don't push out a change to everybody when we know it's a temporary problem. These are the kinds of issues we deal with on a daily basis, and we need your help — if you are interested in this kind of stuff, please talk to us. So the bot proposes updates to the subscribed projects. Okay. Also, in this whole thing, it's not enough to keep adding stuff; we have to take stuff off, too. For example, zigo has been trying to get the boto library removed for a really long time. Yeah. No? Well, everyone has a set of libraries they don't like that are not well maintained. There you go. So we have a process for pruning stuff from requirements as well. It involves identifying which libraries need to be removed, going and talking to the teams that are responsible for the projects using them, and making sure they take them out of their requirements and test-requirements. Then, when we know it's gone from all the projects, we can take it out of global requirements and upper constraints. Also, one more thing which you may or may not have appreciated: if something in the global requirements adds new dependencies — for example, a library has a new release that pulls in something else, some more plugins or something like that — then the upper constraints file is going to grow as well. So upper constraints grows based on the global requirements, and shrinks as well. Okay. So this is just an example. How does a project get into the global requirements process, right? The first thing is the check-requirements job. The check-requirements job is a well-known template in the project-config repository.
And you have to add check-requirements at the bottom; if you don't have it, then you are not running the check-requirements job. What the check-requirements job does is this: if a random developer who has no idea about the global requirements process wants to suggest a change to add a new library, or change a version or a range, and they don't know what they are doing, it stops them with a -1, saying: you are subscribed to global requirements updates, but you're doing something that you should not be doing. So they get to know, and the cores will say: look, check-requirements is broken, so what you're doing is wrong; go talk to the requirements people. It gives you an alert when things like that happen. Also, when the Oslo team releases, say, oslo.utils 3.17.0, the global requirements may be updated, if somebody feels they need them to be updated. Typically what happens is, say, Glance or Neutron or someone else needs a feature in oslo.utils that was not there in the previous version; only then do we let them update the lower bound of the library. Typically we say: do not raise the lower bound unless you absolutely need it. But what typically happens is, when we release the library and Glance starts using it, because we are using upper constraints, they fetch the new library as soon as it hits the upper constraints, and they don't realize that the global requirements still allow the old lower bound. So that is one of the holes that we have right now. It's a really complicated process, and we would like your help. Okay, go ahead. Okay, now I'll let Swapnil talk a little bit about himself and continue the rest of it. Thanks, Dims. Hello everyone. I'm also part of the requirements team that maintains the libraries, and I work at Red Hat and contribute to Kolla. So that's all about me.
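The lower-bound hole Dims described can be shown with a small sketch (the library name, version numbers, and feature are hypothetical): CI installs the pinned upper constraint, so the gate stays green even though the declared minimum in global requirements can no longer actually run the project:

```python
# Illustration (with made-up version numbers) of the "lower-bound
# hole": CI installs the pinned upper constraint, so a project can
# silently start depending on a feature that the declared minimum
# version does not have.

FEATURE_ADDED_IN = (3, 17, 0)   # hypothetical: feature landed in 3.17.0

def _v(version):
    return tuple(int(p) for p in version.split("."))

def project_works(installed_version):
    """The project calls an API that only exists from 3.17.0 on."""
    return _v(installed_version) >= FEATURE_ADDED_IN

global_lower_bound = "3.16.0"   # what global-requirements still declares
upper_constraint = "3.17.0"     # what CI actually installs

print(project_works(upper_constraint))     # True  -- CI is green
print(project_works(global_lower_bound))   # False -- but the declared
                                           # minimum is broken
```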
So Dims gave us a very informative explanation of how we do requirements — how we manage global requirements and upper constraints. And as part of the requirements team, we need to maintain global requirements and upper constraints. So how do we do that? Maintenance basically happens in two ways: there are bot-proposed updates, and there are human interventions. The bot-proposed updates go to the requirements project itself and to the other projects that are managed by requirements. For the requirements project, we update the upper constraints automatically whenever a new library is released to PyPI. Infra has mirrors of PyPI which pick up the latest releases, and the bot updates the upper constraints and the dependencies. Once they are updated, the core review team of requirements checks whether the change passes the cross-project integration tests that we have — so, can you just... okay. It will check whether it passes the cross-project integration tests for the projects that are using upper constraints, as a sanity check, and then they approve the upper constraints change. And like Dims said, global requirements changes are checked very strictly, because they update a whole lot of projects in the OpenStack ecosystem. So we check whether a change is absolutely necessary, and then we update it. The bot then proposes a change to all the projects that are using that global requirement to update it. And the management of actually updating the test-requirements and requirements files is up to the core review team of the particular project. So, can you just move back. So we have taken care of both things — bot updates and human interventions — and we have some constraints on global requirements as well as on upper constraints. For upper constraints, we only accept bot-proposed changes; we do not entertain human intervention unless it's absolutely necessary.
That is, only when something is breaking — like Dims said, some projects pick a new release up in their gates directly when it is updated from PyPI, and if it breaks stuff, then we need to go and change the upper constraint for it by hand. So, how do I subscribe to the global requirements process? One of the things you need to do, whenever you have a new project, is check which requirements you need and which of them are already in global requirements. If all your requirements are there, you can directly submit a check-requirements job change to project-config, and sync the versions in your requirements files to see whether they work correctly. Once the project-config change has merged, you can submit a change to projects.txt to add your project, and the bot will then propose requirements changes to your project. You just need to monitor the bot-proposed changes to your project and approve them. So, adding and upgrading requirements. We get a lot of trouble when people want to add new libraries. We have a strict process that adheres to a set of questions that need to be answered, and the core team needs to be in agreement that we need the new library. Because whenever a new project starts, or people see new libraries, they tend to pick them up, implement new features, and then want them to be part of global requirements. But we might already have a library that does the same thing, or it might not have the license compatibility we expect, or it might not be Python 3 compatible, or distros may have alternative options for it. So we need to take all these things into consideration before we approve any new library into the requirements. We are very strict about this, because every addition creates a new maintenance job for the requirements team, as well as for the release team when we are releasing new things for OpenStack. The same goes for upgrading a requirement's version.
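A toy version of what a check-requirements-style gate does might look like the following: compare a project's requirement lines against the global list and flag anything that differs. The real job is far more thorough and more lenient about formatting; the requirement lines below are illustrative:

```python
# Toy check-requirements-style gate: flag project requirements that
# are missing from, or disagree with, the global list. The real job
# is much more sophisticated; these lines are illustrative.
import re

def parse(text):
    reqs = {}
    for line in text.strip().splitlines():
        name, spec = re.match(r"([A-Za-z0-9._-]+)(.*)", line.strip()).groups()
        reqs[name] = spec
    return reqs

def check(project_text, global_text):
    project, glob = parse(project_text), parse(global_text)
    problems = []
    for name, spec in project.items():
        if name not in glob:
            problems.append(f"{name}: not in global-requirements")
        elif spec != glob[name]:
            problems.append(f"{name}: {spec!r} != global {glob[name]!r}")
    return problems

global_reqs = """
eventlet!=0.18.3,>=0.18.2
lxml>=2.3
"""
project_reqs = """
eventlet>=0.17.0
simplejson>=3.5.1
"""
# One stale range and one library the global list doesn't know about:
for problem in check(project_reqs, global_reqs):
    print(problem)
```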
So we are very strict about upgrading global requirements, because any change to them affects the whole OpenStack ecosystem, and you need to be very sure that you need the update. The update process is followed in multiple forms. For OpenStack libraries, the release team takes care of updating the versions in global requirements. For dependencies that are managed by the projects themselves — so they know they need a new dependency for a particular feature — they submit a change to the requirements repo to update that global requirement. And sometimes there is a distro-specific library that is added as part of dependency management into upper constraints; if they need any change to it, and need to add or update it in global requirements, they submit a change to requirements. This is how we manage upgrading the requirements in the requirements repo, and like we discussed, upper constraint changes are very specific: 90 to 100 percent of them, we can say, are managed only by bot-proposed changes, with no human intervention unless it's necessary. Okay. So as part of this presentation, I want to ask as many people as possible to review and help the requirements team with reviewing new requirements and upgrades. Because whenever we break any requirement, we need early feedback from as many users as we can get, and that helps reduce the CI effort we lose to breakages, by knowing about those things early. Then, as part of adding new requirements — why do we need all those questions? Because we need to know who is maintaining that requirement. If nobody is maintaining it, then we get into trouble too many times, rather than recovering from it.
So that's why we need to know who the maintainer is; when we know, they can come around and help us pin it to a minimum version or block it, so that we know about the damage early rather than later. Whenever you are making any changes to requirements, we appreciate as detailed a commit message as possible, so that we can evaluate the change appropriately. And for requirements that are very volatile or very project-specific, we have blacklisted requirements — we don't recommend adding to that list lightly, but things which are very project-specific, like pep8 and flake8, where every project has its own rules, are added to the blacklist in the requirements repo and are ignored when we are updating the constraints. As part of Newton, we have accomplished a very good review record — it's very diverse: more than 2,600 reviews in Newton and more than 570 commits from the whole community, and I really thank everyone for that. Some of the accomplishments in Newton: for the first time we formed a formal team, which is continuously running meetings and working together as a group. Tony Breeds is our first elected PTL, and we have an active core review team which works every day to manage things.
One of the major things we achieved in this cycle is cross-project integration testing. Prior to Newton, Dims used to propose patches that we would use to check all the projects managing upper constraints for any change; now, thanks to infra-supported facilities, we have gates that check every upper constraint change against the projects that are using it. And we now have specific stakeholders who look into issues related to requirements updates, so we can ask specific people — for some projects the stakeholder is right here in the room — do you want to add this requirement to global requirements? And they are very much there to help us at that point. We have better coordination with release engineering, and one of the things we did was remove most of the cruft requirements that were not being used, or were duplicates, that kind of stuff. Some of the pitfalls we have observed during this release: like Dims explained, people start using a feature that has just been added to UC — the upper constraints — but it is not reflected in the global requirements, so a project that depends on the project using that feature fails because of it. We don't deal well with forks: some libraries have forks with different features, and different projects want to use them differently, and then it becomes difficult for the requirements and release teams to manage. We don't test the lower bounds: similar to the upper bounds, we need to check whether the lower bound of a requirement works correctly when installing or using a project. Then there is a very common problem we dealt with a couple of times in this release: whenever we update a global requirement, it triggers a change for all the projects, and if we need to revert it, then we need a lot more CI cycles to get the revert into all the projects that were affected. Then, a larger set of jobs to prevent bad things going in: we ask more projects to start using upper constraints, so that we know what is failing and have better integration with the projects. If a project starts using upper constraints, then we can add a job — in requirements or in the project itself — to check with upper constraints and get the results early rather than late. Some of the current challenges we have identified at the end of the release: remove duplicate requirements, and remove requirements without active maintenance, finding replacements where possible. One thing we have done as a community, I think in the last couple of releases, is that for important requirements that were not maintained, the OpenStack community has taken responsibility for maintaining them, and as the requirements team we are going to see how we can help with that. Then, like I said, advocate for more project teams to use upper constraints, since that helps with sanity checking, and optimize the proposed changes — the way we propose changes to all the projects, and how the bot-proposed changes can be better optimized. Some of the priorities we have for Ocata: introduce lower bounds — or, as we call them, divergent constraints — test those in CI, and reduce the proposed changes. We have a design summit session on Thursday, I think at 3:30, for requirements, where we discuss all of this, including divergent constraints.
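The lower-bound testing idea mentioned here can be sketched by deriving an exact pin from each requirement's `>=` floor and running the tests against those pins instead of the newest releases. The parsing below is deliberately naive and the requirement lines are illustrative:

```python
# Sketch of lower-bound testing: turn each requirement's ">=" floor
# into an exact "===" pin, so CI can install and test the declared
# minimum versions. Naive parsing; requirement lines are illustrative.
import re

def lower_constraints(text):
    pins = []
    for line in text.strip().splitlines():
        name = re.match(r"[A-Za-z0-9._-]+", line).group(0)
        floors = re.findall(r">=([0-9.]+)", line)
        if floors:
            pins.append(f"{name}==={floors[0]}")
    return pins

requirements = """
eventlet!=0.18.3,>=0.18.2
SQLAlchemy<1.1.0,>=1.0.10
"""
print(lower_constraints(requirements))
# ['eventlet===0.18.2', 'SQLAlchemy===1.0.10']
```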
Another thing we are looking forward to is Python 3 compatibility. There is a separate working group for Python 3, and at the same time we are looking at which existing requirements are moving to Python 3, and how we can find replacements for those that aren't. And there is always the point of better communication of failures and impact analysis, so that we can protect the environment from complex binary changes — like he said with eventlet, and we had some from Oslo at the time. So, how you can contribute: we want as many people as possible to contribute to requirements, and we have a lot of contributors, thankfully. You can manage the global requirements for your own projects, you can contribute reviews — anyone who wants to contribute is always welcome — and you can help with more project integrations for upper constraints. Some resources for requirements that you might want to look at: we are on GitHub, and you can contribute and get reviews; if you want to see what we are doing over the cycle, we have a to-do list. We meet weekly on IRC on Wednesdays — the timing will change a bit this cycle, but it's around 11:00 UTC — and we are always available on the #openstack-requirements IRC channel, so if you have any queries, feel free to drop by. Thanks a lot. Any questions, anyone? Any questions? Nope. No. Okay. Thank you. Thank you.