All right, this is the project update for Oslo. I'm Ben Nemec, I work for Red Hat, and I'm the current PTL. We also have Moises, Doug, and Ken, who are Oslo cores, in the room. So we'll go ahead and get started.

Just in case you accidentally walked all the way down here and don't know what Oslo is: it's where all of the common libraries for OpenStack live. If we have multiple projects trying to do basically the same thing, we try to factor that code out, put it in Oslo, and then it can all be maintained in one place and everybody benefits. As a result of that philosophy, our team is a combination of general code reviewers, people who have demonstrated that they know how to design an API and are good at reviewing Python code, and specialist API maintainers, who are responsible for maybe just one or two libraries and have a lot of specific knowledge for those libraries, so they can help us with the domain-specific things.

One of the things I learned the last time I did one of these project updates is that independent contributors are actually one of the biggest groups in Oslo, which is kind of cool for a corporate environment like OpenStack usually is; we have a lot of people who are not corporate contributors. We still have around 40 projects, and you can see a few of the things we cover there. We added a new one this cycle and might add more. If you want more details, the wiki page is there, and I believe these slides will be published on the foundation site, so you should be able to find them afterwards if you want to. It should also be recorded.

So, things that we've gotten done.
This was the first step towards getting secrets out of the configuration files. I know that's been a long-standing pain point in OpenStack: we have passwords and tokens and whatnot that go in plain-text files on the hard disk, and a lot of organizations really, really don't like that. So we got started this cycle with a driver framework that allows us to pull configuration data from other locations. For this cycle we got a remote file driver done. Basically, it goes out to a web server, pulls down a configuration file, and merges those values in with the ones in the local file. You can see an example there of how you would configure that in your local file: there's some local configuration, but none of the secrets go in there. So it's an improvement, but it's only the first step. I'll be talking a little later about the next step in getting rid of secrets in configuration files, and we'll get to that in a bit.

Also on the config front, we added a validator, which is another thing that has been requested for quite a long time. What it does is look at the sample configuration data that all of the projects are already publishing for generating their sample config files, go through the list of options, and compare them against the configuration file that you pass it. If there's anything in your configuration file that doesn't show up in the project's options, it will error at you, and if you're using deprecated options, that will be a warning. So it's handy for that. The one big limitation I know of right now is that it doesn't handle dynamic groups. So for Cinder, if you're validating a configuration file that has one of those dynamic groups in it, it's probably going to error at you. For now you'll just have to look at the options and decide whether it's a legitimate failure or just a dynamic group.
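Going back to the remote file driver for a second: the slide example isn't captured in this transcript, but the local file that points at a remote source would look roughly like this. This is a sketch from memory of the oslo.config remote_file driver; the group name `remote_config` and the file paths are made up, and the option names should be verified against the current oslo.config documentation:

```ini
# /etc/myservice/myservice.conf -- local file, no secrets in here
[DEFAULT]
# Names an extra configuration source group to pull options from
config_source = remote_config

[remote_config]
driver = remote_file
uri = https://config.example.com/myservice.conf
# Optional TLS material for talking to the config web server
ca_path = /etc/myservice/ca.pem
client_cert = /etc/myservice/cert.pem
client_key = /etc/myservice/key.pem
```

Values pulled from the remote file get merged with whatever is defined locally, as described above.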
We could probably fix the dynamic-group limitation; we just haven't yet. And then you can see the example. There are actually two different ways you can call it. You can pass it the project's sample config file directly, which is the first one there. Or you can use the config generator to generate a machine-readable file that contains all of the option data and pass that in, which is handy on a production system where you may not have the project's source code available.

The new library we added this cycle was oslo.upgradecheck, which supports the community goal for upgrade checkers. Basically, that's where we put the common processing code for the checks and some CLI helper code. The example down there is just the output you get if you run the documented sample code for the project. So if you're writing upgrade checks, hopefully that's useful.

We also added fair locks to oslo.concurrency this cycle. That's another thing that has periodically come up over the years. For the moment, we're only guaranteeing fairness for the internal locks, meaning within one process. For the external locks that you might be using for inter-process locking, it depends on how the operating system implements its file locks, so we can't really guarantee it. We spent a lot of time looking at the Linux kernel code trying to figure out whether it's fair or not, and we ran across a comment that basically said: it makes sense to implement this in a fair way, but we don't document it. So you probably shouldn't count on that, but it probably works. I have no idea what it does on other operating systems; for Windows, you'll have to go talk to Microsoft.

Some other noteworthy changes that aren't features: we removed some things and deprecated some things in PBR. We removed the Python-version-specific requirements files.
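As an aside on what "fair" means here: oslo.concurrency's actual implementation isn't shown in the talk, but the idea of a fair (FIFO) lock can be sketched in pure Python with a ticket queue. Everything below is illustrative, not the real oslo.concurrency code:

```python
import threading
import time
from collections import deque

class FairLock:
    """Minimal FIFO ("ticket") lock sketch.

    Waiters are served strictly in arrival order, so no thread can
    starve. This is NOT oslo.concurrency's implementation, just an
    illustration of the fairness property.
    """

    def __init__(self):
        self._mutex = threading.Lock()   # protects the waiter queue
        self._waiters = deque()

    def acquire(self):
        ticket = threading.Event()
        with self._mutex:
            self._waiters.append(ticket)
            if len(self._waiters) == 1:
                return                   # queue was empty: the lock is ours
        ticket.wait()                    # sleep until we reach the front

    def release(self):
        with self._mutex:
            self._waiters.popleft()      # drop our own ticket
            if self._waiters:
                self._waiters[0].set()   # wake exactly the next waiter

# Demo: queue three workers while the lock is held, then let them run.
lock = FairLock()
order = []

def worker(ident):
    lock.acquire()
    order.append(ident)
    lock.release()

lock.acquire()                           # main thread holds the lock
threads = []
for ident in range(3):
    t = threading.Thread(target=worker, args=(ident,))
    t.start()
    threads.append(t)
    while len(lock._waiters) < ident + 2:    # wait until this worker queued
        time.sleep(0.001)
lock.release()                           # hand the lock to the first waiter
for t in threads:
    t.join()
print(order)                             # served in arrival order: [0, 1, 2]
```

An unfair lock makes no ordering promise at all, which is exactly the situation described for the OS-level file locks.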
The version-specific requirements files shouldn't be needed anymore, because pip and setuptools and friends have grown support for specifying a Python version directly in your requirements files. So you don't need those anymore; hopefully nobody's using them, and they're gone.

We also deprecated a couple of the setup.py commands. For testing, you should be running stestr directly instead of calling setup.py test. These are deprecated, so they're still there, but you should really be trying to migrate off of them. build_sphinx was also deprecated, and there again, just use sphinx-build directly. It's better and less fragile. We had a lot of breakages because of the PBR integration with Sphinx, and if you're calling sphinx-build directly, hopefully that won't happen anymore. The functionality that was PBR-specific has moved to either openstackdocstheme or the sphinxcontrib-apidoc project, and there's one small Sphinx extension left in PBR for non-OpenStack projects that aren't using openstackdocstheme, so they have a migration path as well.

The other big removal this cycle was the ZeroMQ driver. It hasn't been maintained for years, and we're pretty sure it doesn't work with current versions of OpenStack, because it didn't implement some newer features that projects are requiring. So it's gone. If you want it back, talk to Ken.

On to things that are upcoming in the near future, hopefully. As I mentioned, we have another config driver in progress, and that one stores secrets using Castellan. That gives you some flexibility on what your backend is, and it's a lot more secure than just having a file out on a web server somewhere that basically anybody can get to. This one should be a lot better for security and hopefully addresses all of the concerns people have had with secrets in config files.
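For a rough idea of what the Castellan-backed source might look like once it lands: this is a hedged sketch, with the group name and paths made up and the option names recalled from the in-progress driver's documentation, so verify everything against the released docs:

```ini
[DEFAULT]
config_source = secrets

[secrets]
driver = castellan
# Castellan's own configuration: which key manager backend to use,
# how to authenticate to it, and so on
config_file = /etc/myservice/castellan.conf
# Maps (group, option) pairs to secret IDs stored in the backend
mapping_file = /etc/myservice/secrets_mapping.conf
```

The point of the design is that the service config on disk only names the backend; the actual passwords and tokens live in the key manager.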
Another config driver that we added, actually, this one has just merged, and apparently we released it already too, because somebody told me they started using it today. So that was good. This one reads values from the environment: we map the group and the option name to an environment variable name, and the driver will look for that environment variable and, if it finds it, use the value from there. The reason we added this one is that with containers you theoretically no longer need a configuration file baked into the image. You can pass everything in through the environment, so it fits the container deployment workflow a little better.

We also have a config migrator that is getting pretty close to merging, I think. What it does is take a configuration file from one version and basically prepare it for the next version of OpenStack. So if an option was renamed, it will automatically move the value from the old name to the new name. It has support for custom functions within the services, so if there's a complex change that needs to happen in the migration, you can do that through a callback. It doesn't handle everything; for example, it wouldn't yet handle the transport URL change we made in oslo.messaging. We have some plans to deal with that in the future, but the first release probably won't. Still, hopefully this will be helpful and give people a start on their upgrades.

And then this next one is Ken's topic, so I don't know if you want to come up and talk about it. This is actually Andy's topic; I think he couldn't make it here today. We had an internal goal of getting the Kafka notification driver, that's Kafka support for notifications, done in Rocky, and we heard the sound of that milestone whizzing past us.
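Back to the environment variable driver for a moment: the group-and-option-to-variable mapping can be sketched in a few lines of plain Python. The OS_<GROUP>__<OPTION> naming convention here is my recollection of the driver's behavior, not the driver's actual code, so check it against the oslo.config documentation:

```python
import os

def env_var_name(group, option):
    # Naming convention as I recall it from the oslo.config docs:
    # OS_<GROUP>__<OPTION>, upper-cased, with a double underscore
    # between the group and the option name.
    return "OS_%s__%s" % (group.upper(), option.upper())

def lookup(group, option, default=None):
    # The driver consults the environment before falling back to the
    # value in the configuration file.
    return os.environ.get(env_var_name(group, option), default)

os.environ["OS_DATABASE__CONNECTION"] = "mysql+pymysql://nova@db/nova"
print(env_var_name("database", "connection"))  # OS_DATABASE__CONNECTION
print(lookup("database", "connection"))
```

So a container orchestrator can inject `OS_DATABASE__CONNECTION` and friends at launch time instead of baking a config file into the image.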
We missed it because we had a lot of problems with the implementation due to threading issues with the Python library we were using, the original kafka-python library. It had problems with eventlet; I think it was blocking at the operating system level, with a select or something like that. So we moved to confluent-kafka and integrated eventlet tpools for the eventlet case, and that seems to be working, so we've got that review up. The thread-safety work is what remained; the driver was feature complete in Rocky and is still experimental. So it should be usable, at least feature complete.

We also have a change, I think it's still in progress but once again getting close, for oslo.privsep to allow it to process requests in parallel. Previously, it could only do one thing at a time. Again, Cinder keeps coming up: they have some privileged calls that can take a little bit of time, and they can't have those be serialized and just keep stacking up. So we're working on adding a thread pool to oslo.privsep so that it can process multiple calls at once, and that should get past that blocking issue.

We also have some work in progress to enable plug-ins for oslo.policy, which will allow people to use external policy engines. We've actually done some work on this in the past, but it was just to enable an HTTP check that calls out to an external service. That works, but it requires a fair amount of work when you're deploying it, and it required a proxy between oslo.policy and the actual policy engine. So we're doing this as the next step to improve that support. The design is still in progress, so if you have interest in this, go take a look at the spec that's linked there and provide your input.

As for cross-project work: like I said, all of it. It's the nature of Oslo that everything we work on touches multiple projects.
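For reference, the existing HTTP check support mentioned a moment ago looks roughly like this in a policy file. The endpoint and rule names are made up, and the exact check-string form is from memory of the oslo.policy docs (the external server answers "True" to allow), so treat this as a sketch:

```yaml
# policy.yaml -- delegating decisions to an external policy service.
# oslo.policy POSTs the credentials and target to the URL; the service
# replies "True" to authorize the request.
"compute:resize": "http://127.0.0.1:8888/authorize"
# HTTP checks can be combined with ordinary rules:
"compute:delete": "rule:admin_api or http://127.0.0.1:8888/authorize"
```

The plug-in work described above aims to make this kind of delegation first-class, without needing a proxy between oslo.policy and the policy engine.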
So, yeah. Specific things that we'll be working on: oslo.limit is something we actually introduced last cycle, and we're working closely with some of the project teams to figure out exactly how we're going to integrate it with the services. We're still refining the API, because as we started doing some of the more cross-project work, we discovered some limitations there, and it needed a little bit of rethinking. But good progress there.

The oslo.config drivers are probably going to be mostly deployment tool work, because obviously if you're storing some of your configuration options in an external location, or eventually in Castellan, or whatever the backend is, that changes your deployment workflow quite a bit. So there will most likely be some effort needed there. As I mentioned, the config migrator has support for custom code in projects, so if projects get into scenarios where they need that, we obviously need to get those things implemented in the projects, and we'll need some coordination there. And in general, every cycle we're updating projects to remove dependencies on deprecated or removed features. Obviously that's necessary before we do the removals, because we don't really want to break anybody if we can help it.

So that was it for the work we're doing. If you're interested in giving feedback, you can see the Launchpad link there, and we've started looking at the StoryBoard migration, so hopefully that's coming soon. I don't know exactly when, but keep an eye out for it. We're in #openstack-oslo on Freenode, and we use the [oslo] tag on openstack-dev, and soon openstack-discuss. And if you're interested in contributing, be aware that you don't need to be full time on Oslo. I don't think any of us are full time on Oslo; we pretty much all have other projects that we work on, so we're totally cool with that.
A good way to start is just to pick a project that sounds interesting, review the code, and fix some bugs if there are any. Well, I think we have at least one bug for every project, so you shouldn't have trouble finding one. And you certainly don't have to know all of Oslo to contribute to one specific library. That's one of the nice things: a lot of these libraries are small and self-contained, so it's a good way to get involved. We're particularly interested in some new owners for oslo.service, oslo.privsep, and maybe taskflow, although there's been some discussion about moving that one out of the Oslo project and making it independent. But even if that happens, I'm sure they'd still like more contributors. Those are some important projects that a lot of OpenStack projects depend on, and we're kind of light on maintainers for them, so we're looking for help. And I think that was it for the presentation. Any questions before we close? All right, I think we're right on time, so...