Hey, good afternoon, everyone. We're here to talk about the OpenStack Client project update. My name is Dean Troyer. I am the OpenStack Client PTL. Basically what we're going to do is a real quick overview of where we've been recently, what's going on right now, and where we're headed.

We had a release. The most recent release of OSC was last week, and it includes quite a bit of new stuff; I'll get into that a little bit. We've got quite a few new networking commands, some volume stuff, and a little bit of recovery from the impact of novaclient 8.0's release. OSC's dependencies really come down to a collection of five things. osc-lib is pretty tightly tied to us. os-client-config is used by shade, which takes it into the Ansible stuff and quite a few other libraries. All of these have had releases recently; the oldest here is about three weeks old, and cliff was actually released this week. So at the moment we're on some pretty current packages if you install from pip. I think the current version of OSC itself is 3.11: we had a bug fix, and it had to be a minor release because we updated the requirements.

The 3.10 release of OSC included quite a few new commands. These are the new resources and some of the additional commands we added to existing resources: the consistency group stuff for volumes, some changes to floating IPs, and a whole lot of network things around networking flavors, network agents, and some of the effects of taking out nova-network; those last ones wouldn't actually have been new commands.

The stuff that's in progress right now is mostly focused on microversion support. OpenStack Client doesn't have any real microversion support in it. You can give it a specific microversion in the API version configuration that we have, but that's the extent of it, and it requires the user to know which version to ask for.
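Concretely, pinning a version by hand looks roughly like this today. This is a sketch: the key names follow os-client-config conventions, and the cloud name and version value are illustrative assumptions.

```yaml
# clouds.yaml: pinning a compute API version by hand, the only option today.
# The user has to know that "2.15" is the version their command needs.
clouds:
  mycloud:
    auth:
      auth_url: https://cloud.example.com:5000/v3
      username: demo
      project_name: demo
    compute_api_version: '2.15'
```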
We're doing a lot of work on functional tests right now. One of the reasons we've had delays and haven't been able to release very frequently is that our functional test job, which runs against the master tip of all of those listed projects, was broken, so we've spent quite a bit of time working on that lately. And unfortunately, one of the things we've talked about a lot at previous summits is fixing the help system, and we haven't touched it. Help is still a mess.

Where we're headed: the immediate plans are to do a major release this summer and get into real microversion support. We also need to add some plugin capabilities. We've talked about this a couple of times as well. It's one of the things that never seems to float to the top, but we've got plugins that want to be able to add functionality to common commands like quotas, and they can't do that today. We already have the ability to add commands via plugins; there are 22 of them, I think. But take the quota show command as an example. You can give it an option of network, compute, or volume, the three services that support quotas. Any other plugin that has quota support can't add its option to that existing command right now. So that's one of the things we have to try to fix. There are some problems with allowing a plugin to modify an existing command, because then you could install something and it would change a command you're not aware of, and now things break. We learned this lesson with the Compute API a few years ago, when extensions would do exactly that. So we've got to work through that, and this is one of the things that's not completed and not ready yet.

The microversion support is going to be pretty interesting. It's a little complicated in that it's not going to be simply picking a version. Microversions themselves are just numbers.
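Before moving on to microversions, here is the plugin limitation just described in concrete terms. A plugin registers new commands through setuptools entry points, but there is no hook for extending an existing command. Everything below (the "mysvc" service, module paths, class name) is hypothetical, for illustration only.

```ini
# setup.cfg of a hypothetical "mysvc" OSC plugin.
# Entry points like these can ADD brand-new commands, but there is no
# mechanism for a plugin to add, say, a --mysvc option to "quota show".
[entry_points]
openstack.cli.extension =
    mysvc = mysvcclient.osc.plugin
openstack.mysvc.v1 =
    mysvc_widget_list = mysvcclient.osc.v1.widget:ListWidget
```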
Microversions are just numbers that, when they change, indicate a change in the API. You can go look up a number and find out what changed in the API at that point. They're not something we have to use cumulatively: if there is a change to a resource in microversion 2.15, for example, I don't have to implement everything in 2.1 through 2.14 in order to use it. An OSC command that deals with that resource can request that microversion for that one call and stick with the default the rest of the time. The part that's missing is the ability to switch automatically, the logic to know when. And then there's all of the supporting work: sometimes a microversion adds a new option to a command, and we don't necessarily have that.

The second point here: do users care? It's my belief that users should not have to care about what microversion they're talking to. But sometimes you might, so we definitely want to give you the ability to find out what you're using, and even to influence it. For the most part, though, if you don't have to care, we've done our job. The client should negotiate the highest microversion it can with the server when necessary and just use it. If you pick a feature that requires a specific version, it will try to use it, and if that fails, you get an error. But other than that, you shouldn't have to know: if the feature is available, you should be able to use it.

One of the things I do intend to add with this is some commands that will make debugging and understanding microversions a little easier, something like a "show versions" for compute. It'll list you the discovery documents, everything: the major and minor, sorry, the maximum and minimum versions that the endpoint supports, and so on. Some of this already exists; at least the code to do the querying is already in the os-client-config library that Monty Taylor has written to gather some of this information.
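To make the discovery and negotiation ideas above concrete, here is a small sketch. The document shape mimics Nova's version discovery response, but field names vary by service, and the negotiation function is an illustration of the logic described, not OSC's actual implementation.

```python
import json

# A version discovery document of the kind a "show versions" debugging
# command would surface (shape mimics Nova's "GET /" response).
DISCOVERY_DOC = json.loads("""
{"versions": [{"id": "v2.1", "status": "CURRENT",
               "min_version": "2.1", "version": "2.53"}]}
""")

def parse_version(text):
    """'2.15' -> (2, 15), so versions compare correctly as tuples."""
    major, minor = text.split(".")
    return int(major), int(minor)

def negotiate(client_max, server_min, server_max, required=None):
    """Pick the highest microversion both sides support, or honor a pin."""
    if required is not None:
        if server_min <= required <= server_max:
            return required
        raise ValueError("server does not support %s.%s" % required)
    best = min(client_max, server_max)
    if best < server_min:
        raise ValueError("no overlapping microversion range")
    return best

current = DISCOVERY_DOC["versions"][0]
lo = parse_version(current["min_version"])   # (2, 1)
hi = parse_version(current["version"])       # (2, 53)

print(negotiate((2, 60), lo, hi))            # client is newer: (2, 53)
print(negotiate((2, 60), lo, hi, (2, 15)))   # pinned for one call: (2, 15)
```

The second call shows the non-cumulative property: one command can pin 2.15 for one request while everything else keeps using the negotiated default.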
We're going to make use of that code, but I want to make it a top-level command in OSC, so it's useful for debugging. If you're working on a problem and something doesn't quite work, you can get all of that information easily and quickly and provide it to whoever is asking for it.

At a code level, most of the work for actually dealing with microversions is going to go into keystoneauth, the module most of our client libraries use today for the authentication bits. I should clarify: most of our client libraries in Python use it today. This is something else we're working on documenting: not just the code, but the process of doing version discovery and all of the other little things we've developed over the years, so that clients in other languages can use them and have the opportunity to operate the same way, and we don't get divergence. It doesn't help anybody if something written in Go that uses Gophercloud operates differently from the Python clients, or from the Java clients. At most, we want the application developer to have that choice, to opt in or not, without having to totally reinvent the wheel, or at least without having to rediscover the process to get something usable.

We're going to do a major release of OSC this summer. I don't have an exact date in mind yet, but 4.0 is coming, and it's going to happen because we're going to break some things. I don't want to break too much, and it's not going to be anything major; these are things you're going to notice in corner cases. If you've got a script that does something, especially around floating IPs, for example, you might see some breakage, so we need to signal that with a major version bump.
Last year we did a major release in order to do the osc-lib extraction, and that took longer than expected; we ended up going almost two and a half months between releases. We can't do that again. Too many bug fixes got delayed because of that. So we're going to do this in a feature branch. The good thing about doing it that way is that as we go through removing code and making the breaking changes, it will let you easily test that work in a controlled environment without impacting the released stuff, and we can continue to put bug fixes out.

So, the kinds of things that are going to break. We've never had an explicitly strong contract with regard to output formatting, and looking back, some of the formatting changes we made were probably bad ones. For example, the properties field on a volume. If you let the pretty-table output come out, you get a string formatted basically as shown here. If you ask for JSON output, you want a JSON structure there, but today what you get is that same string from the pretty-table output, even though the underlying data is actually a dictionary; it gets stripped down to a string representation. If you're outputting JSON, it's presumably because you want to use the data without having to parse it. We're going to fix that, but it's a breaking change: anybody who has a script that depends on the current behavior is going to break. It's going to make a lot of stuff easier. And the reason I'm finding all these little niggly things is that we're changing the functional tests to actually use JSON output instead of parsing the pretty table, which is what's driving the functional test changes I talked about.

Another thing that's going to change is the way some of the global options work.
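Before getting to the global options, here is the volume properties problem in miniature. The property names below are made up for illustration; the point is the difference between emitting the flattened string versus the real structure.

```python
import json

# The volume "properties" field is really a dictionary. Today, JSON output
# still emits the flattened pretty-table string; after the 4.0 fix, JSON
# output keeps the structure so consumers don't have to re-parse it.
properties = {"attached_mode": "rw", "readonly": "False"}

flattened = ", ".join("%s='%s'" % item for item in sorted(properties.items()))
today = json.dumps({"properties": flattened})    # a string inside JSON
fixed = json.dumps({"properties": properties})   # a real JSON object

print(today)   # {"properties": "attached_mode='rw', readonly='False'"}
print(fixed)   # {"properties": {"attached_mode": "rw", "readonly": "False"}}
```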
These are things like the authentication options and some of the other configuration, such as the API version options. What's going to change with them is potentially the order of precedence. For example, you can set an API version in an environment variable, on the command line, and in clouds.yaml. If you happen to have all three set with different values, which one wins? Right now, we have a precedence order that is different from the default coded in the os-client-config module, and that has caused us a lot of work to try to maintain compatibility. So what I'm going to do is punt all of the global option handling down to os-client-config, which will make OSC behave like everything else that uses that module. Shade isn't a command-line tool, but things depend on shade, and Ansible is one of them. I think it's best if that behavior is the same for everyone, and at this point, trying to maintain the existing OSC behavior is more work than it's worth. I haven't gotten feedback from the folks I've asked so far that this is a big deal. The biggest problem I see coming is where you've got something in your clouds.yaml file and you want to temporarily override it, whether with a command-line option or an environment variable; both of those will still work. The question is between the environment variable and the command-line option: which of those wins is probably what's going to change, and right now I don't think we're totally consistent there either. So not all of them are necessarily going to change.

Another thing we're going to do is remove some commands that we deprecated a while back. These are actually just command renames. Take backup: back in the day, there was only one backup, and that was a volume backup.
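Stepping back to the option precedence for a moment, the resolution order being discussed can be sketched like this. The real handling lives in os-client-config; this only illustrates the ordering (command line, then environment, then clouds.yaml), and the option name is an example.

```python
# Sketch of global option precedence: a command-line option beats an
# environment variable, which beats a clouds.yaml value.
def resolve_option(name, cli_args, environ, cloud_config):
    if cli_args.get(name) is not None:
        return cli_args[name]
    env_name = "OS_" + name.upper().replace("-", "_")
    if env_name in environ:
        return environ[env_name]
    return cloud_config.get(name)

print(resolve_option("region-name",
                     {"region-name": "from-cli"},
                     {"OS_REGION_NAME": "from-env"},
                     {"region-name": "from-clouds-yaml"}))   # from-cli
print(resolve_option("region-name",
                     {},
                     {"OS_REGION_NAME": "from-env"},
                     {"region-name": "from-clouds-yaml"}))   # from-env
```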
Now we have server backups, and we have backups in plugins. So we've renamed it to volume backup, and both names have been supported for about a year now. We're going to remove the old backup name, and we're also going to remove ip fixed, ip floating, and snapshot; snapshot, just like backup, is a volume thing. For the ip commands, we're not removing the functionality. We did change some command-line options when we renamed them, but really we're just fixing the name of the resource. I don't know where I was when I named them ip fixed; I might have been on an airplane at midnight. That's just wrong. We're encouraging plugins, and one of the hardest things about writing a plugin for OSC is naming the resources properly. You can't just use a word like port anymore, or like backup; there are too many different kinds of things. So we have to get a little more verbose with the names, and keeping old names like these that don't fit the pattern is just not a good idea.

It's no surprise to anybody that there have been a lot of staffing changes in a lot of projects recently. In the last year, we've had three people leave the core team, and a good chunk of the contributions, especially the new commands I listed earlier, were done by people who are no longer employed to work on OpenStack. The ones I've talked to are not coming back; they've found other jobs and gone on to something else. So right now it's essentially a core team of three, and none of us is working on it full time. I'm probably spending the most time on it. So the rate of progress is going down. Like everybody else is saying, we're looking for people to help, and it doesn't take a lot of expertise. Those who are developing on other projects can come and contribute. The networking commands, for example, were all done by people working on Neutron. They understand the domain; they understand what it needed to do.
They understand the API, so they can come and help, and we help make it fit into the OSC way of doing things.

The other thing: at the last two summits we did usability studies, and I think I learned more during those two studies, from talking to eight to ten people each time, than I've learned cumulatively the rest of the time I've been working on this project. So, user feedback: this is clumsy, this is hard. Sometimes it's just watching how people use it; I could not have imagined the ways I've seen some people use OSC. It's very educational, it has influenced some of the things we're going to do, and it will continue to do so. That's all I have prepared. Do we have any questions? Yes, come to the mic.

We recently did a transition from the native clients to OpenStack Client, and the main thing we've noticed is a significant slowdown. I ran a trace on a typical OpenStack Client call and there were about two megabytes of output and three forks in the process. I haven't dug in deeply, but it has slowed down to the point where it takes two seconds to run a token issue. When I looked at the trace there were endless imports; it's reading license files again and again for modules, a kind of recurrent importing. Any ideas how that could be shrunk down?

We've looked at that, though it hasn't been something I've looked at this cycle. We actually spent a lot of time in Austin last year talking about it. We used a tool called OSProfiler to look at static loading times, focused on that first, and we were able to pull about 400 milliseconds of module loading out of it. One of our problems is that we have 30 dependencies, a lot of those dependencies are client libraries, and some of those client libraries bring in other dependencies. We've also spent a lot of time deferring things.
We don't make an authentication call until we know we need it; we're not going to do it every time automatically. I have not looked at strace-level output, and I'm not surprised that it's massive, because just the debug output is massive. What is forking, though? That I don't know. There are three forks? I'm interested in that; I'd like to get more information on specifically which command it is, or whether it's every command. Token issue? Token issue has three forks. Yes, that was kind of the moment when I started thinking maybe something's not right. No, I do not know what that is, and I would like to, because it really shouldn't do that.

But yeah, we know it's slow. At one point we had a patch to DevStack that was counting just the time it spent in OSC, and it was embarrassingly high: something like six minutes that DevStack, in just the default gate run, spent running OSC. To be fair, that included the time it took to do the REST calls, and sometimes we'll do five or six REST calls depending on the operation, and a DevStack cloud is not the fastest cloud in the world. But a good chunk of it is load time, and it's one of the things I hate the most about what we have right now. So we are thinking about it. It's not the highest thing on the list, but the fork thing worries me; I want to get into that.

You're using UUID tokens? Yes. The question was what kind of tokens it was using: UUID. That's what I use in my testing, just because it's the dead simplest thing in the world, so I don't think that would affect it directly. Any other questions? Okay. Well then, thank you.
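As an editorial aside on the startup-time discussion above: a quick way to see how much of a CLI's latency is cold-import cost is to time the import directly. The stdlib "json" module is used here as a stand-in; pointing this at openstackclient.shell (if installed) would measure OSC's own import tree.

```python
import importlib
import sys
import time

# Time a cold import of a module tree by evicting it from sys.modules first.
def time_import(name):
    for mod in [m for m in list(sys.modules)
                if m == name or m.startswith(name + ".")]:
        del sys.modules[mod]          # evict so the next import is cold
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

print("import json took %.4fs" % time_import("json"))
```

On Python 3.7 and later, `python -X importtime -c "import <module>"` gives a per-module breakdown of the same cost without any custom code.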