Good afternoon. Good morning. Hopefully you're all awake, more than I am. My name's Dean Troyer. I'm a senior cloud software developer at Intel, and I'm responsible for this mess we call OpenStack Client. I'm Steve Martinelli. I'm currently the Keystone PTL and an OpenStack Client core. One of those things happened before the other. I started to work on OpenStack Client in my spare time, and I've been working on it ever since 2012 or so.

So a little bit more about us. I'm a senior software developer at IBM, Keystone PTL, and I've been working on this since 2012, circa Grizzly. And Dean, same thing. Like I said, I started this. One of your predecessors at Keystone is actually, I've told this story before, responsible for me pursuing this thing in the first place. I get to blame Dolph for making me take the plunge. And you're being a bit humble, because you've been working on OpenStack since day zero, before it was called OpenStack. That's pretty cool. Nova. Yeah. Nova, like version zero.

So the agenda for today: we're going to give a quick overview of OpenStack Client and the scope of what we handle, what's new in the new 3.0 release, a few UX Commandments that we like to follow, the parity OSC has with the other legacy clients, as I call them, who is using it, and how you can contribute, plus future plans. Do you want to do the quick show-of-hands experiment in this room? There's not many hands, we can count those. Sure. All right, who's actually heard of OSC or uses it? Oh, we have a lot of experience in the room. Good deal. A lot of people who hate us. No. But we don't have to explain what it is. That's true. So you all know that it is a command-line client for OpenStack that provides a uniform set of commands for compute, identity, image, network, object storage, and block storage. Read the silly thing. Yeah.
Basically, in simpler terms: there was a whole bunch of fragmented legacy CLIs, and we just wanted to create a single one. So, the scope of it, Dean? The stuff that's in the OSC repo is essentially what existed at the time I started, and then we added networking, Quantum, Neutron, when that became a thing. Although it's only been with the last release, and we'll talk about this a little later, that we've got any substantial support for the network API. I'm trying to think of how long it's been. Almost two years ago, maybe, we added support for plug-ins to allow the other clients to tap into the OSC shell and be a part of it. All you really need to do is install a plug-in, and OSC will pick it up and include it as if it were built in.

So when we talk about our ecosystem, you can picture it like this. The blue box, no relation to the IBM product: the blue box is what you get when you install OSC. So pip install python-openstackclient, and you're going to get compute, identity, volume, image, network, and object. And if you want some of the other stuff, you can install the plug-ins. This is just the way we decided to architect it, because the OSC team doesn't scale. There's only a few of us, and we can't add support for all the existing commands ourselves. So we allowed each project team to maintain their own commands in their own repos.

That's actually a pretty good point, because there's a little bit of history there too. One of the reasons this didn't become an official thing a lot sooner was that, at the time, the projects weren't willing to let go of control of their clients. And they were all doing things differently, which of course is part of the reason we're here. So maintaining the plug-ins this way, in addition to the scaling, lets each team keep control.
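To make the plug-in mechanism concrete: OSC discovers installed plug-ins at startup by scanning Python entry points, and each plug-in package advertises itself so its commands get merged into the shell. Here is a minimal sketch of that discovery step. It assumes the `openstack.cli.extension` entry-point group that OSC plug-ins register under; `discover_plugins` itself is a hypothetical helper for illustration, not OSC's actual code.

```python
from importlib.metadata import entry_points

# Entry-point group that OSC plug-ins (e.g. python-heatclient) register under.
PLUGIN_GROUP = "openstack.cli.extension"

def discover_plugins(group=PLUGIN_GROUP):
    """Return the names of installed plug-in modules for the given group."""
    try:
        eps = entry_points(group=group)          # Python 3.10+ keyword form
    except TypeError:
        eps = entry_points().get(group, [])      # Python 3.8/3.9 dict form
    return sorted({ep.name for ep in eps})

# With no plug-ins installed this is just an empty list; after installing a
# plug-in package, its advertised name would appear here automatically.
print(discover_plugins())
```

This is why "install the plug-in and OSC picks it up" works with no configuration: the package's own metadata is the registration.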
That also leads to some of the plug-in clients having command structures that differ in subtle ways from the core projects. If you want to install them, it was actually indicated on the previous slide: you install OSC first, and then you install the other client. But we got some feedback on that, and we're thinking about changing the way they're installed, so look for that in the near future.

So, what's new in 3.0? We did the 3.0 release in August. I spent the summer rewriting the authentication flow. Is os-client-config on here? It is. There's a thing that exists now called os-client-config that manages a clouds.yaml file for common clients. The Shade SDK uses it, and I know some of the infra tools use it; I don't know who else has picked it up. It puts all of the cloud authentication configuration into a single file. The addition of that led to some nasty duplication and timing problems, so we've totally rewritten that section. That was in combination with, I'm going to skip down a little bit, pulling a bunch of the things in OSC out into a new thing called osc-lib, partially to support the plugins, because the plugins were using OSC as a library to avoid having to rewrite some things. Which totally makes sense, so we pulled it out. Those two things together led to some significant internal restructuring. Hopefully none of it is user-visible; if it is, it's a bug.

So the second and third bullets are not really user-visible, but they were a large restructuring of things under the hood that needed to happen, to make auth easier and to let the plugins scale. The first and last points are the big user-facing items. The first one is networking commands. We never really had many networking commands, I think just, generically, create network, which would create a Nova network for you.
But Richard Theis over at IBM and a few other folks, along with the Neutron team, did a lot of work for this release. And there's a ton of fantastic networking support now: agents, network RBAC, ports, subnets, fixed and floating IPs, and routers. Really great stuff. And it's actually really slick: it will still work whether you have Neutron or nova-net. It's agnostic; it figures out which one you're running and creates things accordingly, Neutron or nova-net, under the hood. For the things that both of them can do. Yeah, for the things that both of them can do. It'll obviously fail on things nova-net doesn't do, but with a proper error message saying, hey, you're running nova-net, you can't do this. And for a bunch of the commands we now support bulk deletion; we'll get more into that in a minute.

So, on to the UX Commandments. I thought this was a good one. I like that you added this. This is pretty slick. We need to formalize it a little more. Go for it.

Number one: I shall always provide a multi-delete option. Whenever a new command comes in now, if it's a delete command, we require the contributor to provide a multi-delete option. If you think of all the legacy clients, they always delete a single resource. But I don't know, I'm kind of lazy. I don't want to type that many commands over and over. So just keep appending names, and it'll delete them all. We should note that this does not do, and I don't think we want to do, things like wildcards, which are serious footgun sorts of things. We're trying to be a little bit safe about it. Right. And obviously for some things, it's a little bit different.
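The multi-delete commandment, together with the meaningful-errors one that comes up shortly, boils down to a simple loop: try every name the user gave, report each failure individually, and summarize at the end instead of bailing on the first error. Here is a rough sketch of that pattern, with a stub standing in for a real API client; `delete_all`, `find`, and the stub volume store are all hypothetical names, not OSC internals.

```python
def delete_all(find, delete, names):
    """Delete each named resource; report per-item failures and a total."""
    failures = 0
    for name in names:
        try:
            delete(find(name))
        except Exception as exc:
            failures += 1
            print(f"Failed to delete '{name}': {exc}")
    if failures:
        print(f"{failures} of {len(names)} resources failed to delete.")
    return failures

# Stub client standing in for a real service.
volumes = {"foo": "id-1", "bar": "id-2"}

def find(name):
    if name not in volumes:
        raise LookupError(f"No volume with a name or ID of '{name}' exists.")
    return name

def delete(name):
    del volumes[name]

# `openstack volume delete foo bar baz` style: baz fails, foo and bar go.
failed = delete_all(find, delete, ["foo", "bar", "baz"])
print(failed)  # 1
```

Note the deliberate absence of wildcard expansion: the user names every resource explicitly.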
I mean, for deleting objects in a container, I think we have some particulars around there, and even for some of the identity stuff: it's going to assume you stay in the same domain, it's not going to delete objects across different domains. But yeah, we thought it was pretty good, and we added it to a few commands, and then I saw someone on Twitter say, hey, why is this not in all the commands? And I'm like, oh well, I didn't know you liked it, but thanks, we'll add it now. So we did. Yeah, Twitter, great feedback mechanism, rather than filing bugs, of course.

Number two: thou shalt group operations logically, not by API. Just because a certain API set gives you five different APIs doesn't mean you need to present it to the user as five different things. In this example over here, you'll see you can update a volume: you give the volume you want to update by name or ID, and from there you can update the name, the description, whether it's bootable or not, a bunch of the metadata properties, the state, and the size. The Cinder CLI follows the logical developer mindset of one CLI command for every API. I'm going to set bootable on my volume, I'm going to set the metadata on my volume, cinder extend, cinder reset-state, cinder, whatever the rename one was, I actually couldn't find it. But, you know, why not one command?

Well, and this is a little bit more general than that: the idea is that we're trying to serve the user, to make things easy for users to do, not do what the REST API did. And it's been amazingly hard, hard to get developers writing commands here to buy into that. Once they see it and see how it works out in the end, they tend to come around.
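Commandment two in code: a single `set`-style command collects whatever attributes the user passed and applies them as one logical update, even if the server side is several REST calls. This is a sketch with hypothetical names (`volume_set`, `api_update`); the real client would dispatch, say, a size change to a separate extend call, hidden from the user in exactly the same way.

```python
def volume_set(api_update, volume, name=None, description=None,
               bootable=None, size=None):
    """One logical update command instead of one command per API call."""
    changes = {k: v for k, v in {
        "name": name,
        "description": description,
        "bootable": bootable,
        "size": size,
    }.items() if v is not None}
    if not changes:
        raise ValueError("Nothing to set")
    api_update(volume, changes)
    return changes

applied = {}
def api_update(volume, changes):
    applied.setdefault(volume, {}).update(changes)

# Equivalent of: openstack volume set --name backups --size 20 myvol
volume_set(api_update, "myvol", name="backups", size=20)
print(applied)  # {'myvol': {'name': 'backups', 'size': 20}}
```

The user thinks "change these things about this volume"; how many server round trips that takes is the client's problem.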
We still get a little bit of pushback here and there, because they feel like we should be representing the REST API directly. This is also probably the biggest point of... Contention? Not contention, but difficulty in migrating. When we migrated DevStack to use OSC, we had to actually think through some changes; you can't just write a tool to do a search and replace. That's true. And this isn't picking on the Cinder team. I just use them as the example here because we're literally migrating some of the Cinder commands over. Yeah, this is a recent example. What's next?

Commandment number three: provide meaningful error messages. I've got a few there. Take the set option: when you're doing a lot of set operations, like the previous one, you want to report which one actually failed, which operation failed, and how many of them failed. A lot of things just pass along the text of the REST error message, the body, or a generic exception. And this is still a work in progress. We're not done with this by any means, and a lot of it has to do with translating what we get back from the Python libraries. But this is important, and that's why it's up here: to put it in front of everybody, developer-wise, and remind them this matters.

Number four: down with PrettyTable, long live cliff. Sounds kind of weird. Well, PrettyTable was the tool that prints out the nice little well-structured table when you do a user list or a user show or a user create. But oftentimes, and you can see this in the old DevStack setup scripts there, what folks were doing was... Oh yeah, the parsing. Trying to parse PrettyTable output.
You would do a grep, and then a get_field number two, and get_field was actually a function in DevStack that did a whole lot more funkiness. Or you can see there was awk, a slash, then the name, another slash, print dollar-two, and it kept going like that. Instead, we built native support for this into cliff. I think Doug Hellmann did that one. Yeah, that's right. So all you have to do is give the column name: in the first example there, say, hey, give me the ID, and if you do -f value, it just prints out the bare value. So when you're running that command, it's literally just user ID equals that command, and that's how you get your value, instead of that mess down at the bottom.

This is one of the newer formats of machine-parsable output that we have. We've had CSV from the start. We do JSON now. I don't know if it does YAML yet. It's actually excellent. The value formatter is just for getting a single value. Yeah, so you were talking about JSON. Yes. So, rather than just PrettyTable, you can print things out with various formatters. If you specify -f json, and that should all be on one line, you'll see that it gets printed out as a list: each element has an ID and a name. You get the column headers as the keys, and then the content as values. And you can do it as YAML or CSV.

I want to jump in here because we have a problem that we can't handle well with PrettyTable; we do it badly right now. It's with highly structured output. You do a show, I don't know, let's pick show server, which has multiple address values, or where you have a list nested inside a list, things like that. Doing that in CSV, or pretty much anything other than JSON and YAML, is terrible. As Sean would say, it leads to a trail of tears.
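The formatter idea fits in a few lines. In OSC the real work is done by cliff, so this is just an illustration of what `-f value`, `-f csv`, and `-f json` give you for the same rows; the function name `format_rows` and the sample data are made up.

```python
import csv
import io
import json

def format_rows(columns, rows, fmt):
    """Render list-command output in one of cliff's machine-parsable styles."""
    if fmt == "value":
        # Bare values, one row per line: trivial to capture in shell.
        return "\n".join(" ".join(str(v) for v in row) for row in rows)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(columns)
        writer.writerows(rows)
        return buf.getvalue().strip()
    if fmt == "json":
        # Column headers become keys, so nothing is parsed by position.
        return json.dumps([dict(zip(columns, row)) for row in rows])
    raise ValueError(f"unknown format: {fmt}")

rows = [["c5a6", "demo"], ["91f0", "admin"]]
print(format_rows(["ID", "Name"], rows, "value"))
print(format_rows(["ID", "Name"], rows, "json"))
```

So instead of grep and awk against a table, a script can do something like `USER_ID=$(openstack user show demo -f value -c id)` and get the field directly.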
And so we haven't fully finished doing this with JSON and YAML yet, to let you actually get the underlying data structure, the full thing, and be able to use it. Right now we're doing some basic list formatting, and in some cases I don't think we even try yet. This is one area where I'd like some feedback: should the PrettyTable output and the JSON output be allowed to differ in that regard? Not just formatted differently, but actually giving you different information, because of the limitations of the other formats. So maybe later today in our session I want to raise that. If people think the output should always be the same, then we need to figure out how to render the nested structures.

Commandment number six: support both ID and name. With a lot of the old legacy clients, you'll see they always want an ID, or for the most part the ID is preferred. But in OSC, one of the first fundamental principles was to always support both ID and name where possible. We have a generic find-resource method, which we call every time we try to find any resource, and it tries to look it up by name, by ID, by something else, because each API is different. But generally, when you're implementing a new command in OSC, it should support both name and ID lookup where it can. As you said, it breaks down when names are not unique. Yeah, for instance, in Cinder you can have a volume foo and another volume foo, and Cinder doesn't care, because the IDs are different. And we throw up our hands; we're not going to guess in that case. Yeah, we don't bother guessing. I think we'll just return whatever the API returns, and if the API says, you know, be more specific, we'll return that. Oh no, the API sometimes will give you more than one. Yeah, it depends, and then we do throw up our hands.
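The name-or-ID lookup described here, ID first, then name, refusing to guess on duplicates, can be sketched like this. It's a simplified stand-in for OSC's real find-resource helper, which also copes with per-API quirks this toy version ignores.

```python
def find_resource(resources, name_or_id):
    """Find one resource by ID, then by name; refuse to guess on duplicates."""
    for res in resources:
        if res["id"] == name_or_id:
            return res
    matches = [r for r in resources if r.get("name") == name_or_id]
    if not matches:
        raise LookupError(
            f"No resource with a name or ID of '{name_or_id}' exists.")
    if len(matches) > 1:
        raise ValueError(
            f"More than one resource exists with the name '{name_or_id}'.")
    return matches[0]

# The Cinder situation from the talk: two volumes both named "foo".
vols = [{"id": "a1", "name": "foo"},
        {"id": "b2", "name": "foo"},
        {"id": "c3", "name": "bar"}]
print(find_resource(vols, "c3")["name"])   # bar  (ID lookup wins first)
print(find_resource(vols, "bar")["id"])    # c3  (unique name is fine)
# find_resource(vols, "foo") raises: two volumes share that name.
```

The point of the final branch is exactly the "we throw up our hands" behavior: an ambiguous name is an error for the user to resolve, never a silent guess.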
So if the service returns one, or the first one it finds, we'll return that. If it returns an error message, we'll return that too.

And number seven: above all, be consistent with your terminology. I say this every time: that's what the C in OSC stands for, consistent. So again, use the term project, not tenant, since that's where all of OpenStack is going, because Keystone went that way initially. Use real words for resources. Don't abbreviate. Don't put hyphens or underscores in there. Don't put cg-group or cg-snapshot. We have some places where we still use IP for address, which I think is acceptable. And there are places, network is using QoS rather than spelling it out. We've made exceptions in places where it's very clear, where I think everybody would be mad if we didn't. But otherwise, take security group: we don't use SG, for example. Yeah, you make a good point there. I don't really want to type out openstack internet protocol address create. No, I don't. Although we did put the words in the right order now: it's fixed IP instead of IP fixed. And use properties, don't use metadata. That's a long history I don't want to go into, but some services call them properties and some call them metadata. We stuck with one, properties, and we're sticking to that. And we're still getting grief over it. Yeah.

There's one other thing about consistency that's not on here, probably because it doesn't boil down to a bullet point very well: consistency in command structure, and in what actually is a top-level resource and what isn't. This also fits with not mirroring the APIs directly. Some APIs present something as a resource, as an object, when a user thinks of it as an attribute of another resource. And so we're actually breaking with the API to that extent at times, to enforce that idea.
If it's really just a state of a resource, then we're going to treat it that way rather than do exactly what the API does. They built the API that way because it has its own database table; it's a row in a database. Users don't care about that. Do you have an example? Volume transfer. I don't think it's been released yet, but it's in master: volume transfer request is a top-level thing. I haven't told you this yet, but I had a discussion with somebody about that, and he gave me the argument for why that's not consistent: that a transfer, like migrate, is really the status of a volume, and should be treated as the status of a volume. So anyway, I think that's going to change. But that's the current example. That's actually the example we used in the user interface study that we did on Tuesday. I think that's it for the commandments.

Okay, so parity with the legacy CLIs. We get this question a lot: hey, I hear from the users, I want to use OSC, but is everything there? For the most part; we'll try and recap that here. For compute, we're pretty good. There are some contrib items in novaclient that we don't include yet, just because of how those extensions are handled. But otherwise, I think for compute we're good; a lot of the server actions are there, and various other things. For identity, we have to be there, because there's no more keystone CLI. I removed it. For image, we're pretty good for v1; for v2 we're still lacking. We don't do a v2 upload yet, a v2 create. Right. It's there, but Glance is rewriting all of that, and we don't implement any of it. This is partially because the Glance team is going through their own issues, and if we tried to implement everything in a random order, it wouldn't work out well. So we always go after the big fish first, the high-value targets that a lot of people are going to use.
The commands that you use 80 or 90 percent of the time, not the other 10 percent. We just haven't had feedback saying, hey, we have to have this image command. People really wanted networking commands, so that's what we focused on. No one's been screaming for v2 image CLI support yet. And sometimes that's a measure of how people are using it: even though image v2 has been around for a couple of years already, and I know something about changing versions, a lot of people aren't using it. And that's right, changing versions is hard, and OpenStack as a whole is learning that. I mean, Nova learned that lesson a while back and went the microversion route after trying to do v3. Keystone's done v3, and that's taken a long time too. So I think that's part of the same problem with Glance: we're going to be playing catch-up.

The other thing about catch-up: even in the case of plugins, where the project team maintains the command set and implements things at roughly the same time, new feature in the server, new feature in the CLI, the CLI is still going to lag a bit just because of the release cycle. Yeah, although actually that's not necessarily true. We may get it before a release: if something's implemented by milestone 2 and it's in our master... and you know what, we need to keep mentioning this too. We don't release on the primary six-month release cycle. We release whenever we feel like it. We do have to pick a release to be the Newton release, which is, what, 3.2.0? I don't remember. I think it's 3.2.0; it didn't match up exactly. But if we need to do a version 4 in December, we will. We're not planning that. Yeah, but otherwise we try to release one new version a month. I think we're already up to 3.3.0. Yes. We're trying to get back to that; we did it for a while, and then the rewrite over the summer really, really messed that up.
I don't see a reason why we wouldn't go back to that, though. So look for 3.4.0 soon. Anyway, back to here.

So, networking commands. Kudos to Richard Theis and the Neutron team for adding a whole bunch of those. He gave me some numbers: we're about 56% transitioned, with support for about 77 or so of the 128 core commands, and work-in-progress patches for another 30. And of course we're missing support for some of the more advanced stuff: VPN-as-a-service, load-balancer-as-a-service, firewall-as-a-service. We should mention those are going to be done as plugins. Yes. With Neutron, the line is roughly: what is core Neutron API goes into the box in OSC; the advanced services, everything else, will be done as plugins.

And object support: this is basic support, again based on user feedback. No one's been hammering us to add support for large objects, for example; in general, the user feedback just isn't there, so we're concentrating on other things. One of the other things was, hey, you need to add volume support, and we've actually done a pretty good job of that. Credit to the two guys in the front row who've been doing the bulk of the volume work. I think we're about 80% of the way there for v1, and somewhat less for v2, but as soon as we fill the gaps for v1, we'll be up over 80% for v2 as well. By the time they deprecate v1.

All right, so who's using OSC? Aside from users, and we hope users are using it: Puppet uses OSC when bootstrapping and doing a whole bunch of other things. A lot of the folks at Blue Box were using it; they drove some of my requirements that I then pushed upstream. We migrated a whole bunch of DevStack commands over; they were previously using the old legacy CLIs.
I think over a year ago we moved them all over. And TripleO uses Puppet, so as a result TripleO also uses OSC. So there are some actual real projects out there using it, and some users using it now.

Here are the glowing reviews. This one is from zigo, Thomas Goirand; I hope I'm saying that last name correctly. For those of you who know zigo, for him to give a glowing review like this is pretty high praise. You can read it there; there's a little link at the bottom. Basically, he really loves the CSV output, and it just makes life a whole lot easier. Another one was boris-42, another guy who's very hard to please. And he left a very cryptic subject line, just "openstack client," and then: I tried it and, amazing UX, great job everyone involved, keep it up. This is the kind of stuff the folks implementing it want to see. You want to see the bug reports, and you want to see this too.

How can you contribute to OpenStack Client? I've been talking a lot. Well, talk to us. It's easy to say that the way to get involved is to start implementing commands, but there's more to it. Like most projects, we also have the documentation problem. For everything new, we're trying to require at least minimal reference documentation: here's what the options are and here's what they do. And that's still very cryptic; it's not very user-helpful. In the study that we did this week, again, the most typing in the comments from the participants was about help. Some of it was: there's too much help, I can't find what I want in that fire hose. And: there's not enough help, because I couldn't find what I was looking for. That's always going to happen. But the point is that some of the stuff just isn't there. We need to get a little bit more verbose, and we need more things like use cases and examples.
And having real-world material is definitely more useful than things we'd make up. I mean, I've spent more years than I care to admit as an SA, and that's one of the reasons this is a big deal for me: I have seen far too many bad CLIs. But that still doesn't mean I know even a third of the use cases. We sit in those studies and I watch people do something, and I never would have thought of doing it that way. It's always very eye-opening. So for those who don't want to code, don't care about it, but still want to do something: that kind of stuff, even if it's not finished-product level. "Man, this example would have really helped me." Send it to us; we'll massage it and get it in. I think right now that's one of the biggest areas where we want help and haven't been getting it. We've known help was a problem for a long time, and it's not just the built-in help, it's the documentation that goes with it. And the dev docs that we have, especially with the networking commands: if you're not a networking person, you're not going to know that to get a server on the internet, you have to have a port, you have to have a network, you have to have a subnet. It's Monty's rant about "get me a network." We can build that together, but there are still pieces to plug together, and we can fill in some of that gap by at least helping you connect those pieces.

And one point I wanted to put out there: if you're a PTL of a project, especially one of the core projects, Nova, Neutron, Cinder, Glance, enforce the rule that if someone's adding a new command to the legacy CLI, the contributor also has to propose it to OSC. This will keep us from perpetually playing catch-up. Even if they just propose the initial patch, it at least gives us a heads-up: hey, there's this new thing in Cinder and you guys need to do it too.
Here's the initial patch, and then we can run with it. We can massage it as necessary, re-implement if we need to, and get it in. But if we don't even get that first patch, or even a heads-up about a new command, we're seeing it for the first time when we type cinder help and, hey, there's a new command there. Ah man, another one to catch up on. Yeah, there are far too many repos for us to go cruising through looking for new things, and I'm not going to follow all of that. If we could just get a heads-up, and as a PTL, announce it. And I've got to say, this was more of a problem a year or two ago, even that recently, than it is now. We're getting the involvement; you saw the list of plug-ins. People are aware of it, and I think teams are getting that kind of feedback: how come this isn't supported in your plug-in yet? And that's always helpful. To one extent, the users have actually been going to the legacy CLI folks saying, hey, why isn't this in OSC? I want to use that one only; why do I have to go back to the old one? Yeah, we do hear that a lot. That's how we find our gaps. Matt Riedemann, the Nova PTL, was actually the first one to enforce that rule: if you're doing a new Nova CLI command, it has to be done in both spots, novaclient and OSC. If we could just have that everywhere, it would help us out so much.

Future plans. Yeah, future plans. One of the big things that's come up with the plug-ins is their inability to, I hesitate to use the word extend, but their inability to have any involvement with built-in commands. The example Steve's got up here is quota; that one's gotten the most press. The quota command, or the set of commands like quota show, is not API-specific. It looks at the three that are built in that have quota support: compute, volume, and network. But if I've got, I don't remember who it was that brought this up. Barbican? Barbican, maybe?
No, it wasn't Barbican. Anyway, for any plugin that has a quota, to be able to hook into that like it should and make a consistent command set: we don't have that support yet. And so one of the things high on my list, anyway, is to extend the plugin support to add hooks, so a plugin can add quota support for another API and make it seamless to the user.

The other one we've got is handling more business logic. If you're familiar, there's an OpenStack operators group, mailing list, channel, all these things. Someone from the ops group actually made a tool that does just one thing. It's called ospurge. You specify a tenant, a project, and it deletes everything about that project: all the networks related to it, all the volumes related to it, all the servers related to it. It recursively goes through looking for things and deleting them. Yeah, it nukes everything. And they actually came to us and said, hey, can you put this in OpenStack Client? We're tired of maintaining our own thing; we just want to use OpenStack Client. And we said, sure, sounds like a good idea. Well, I said sure. I don't think you were sure. Oh no, I'm sure. I didn't know it existed until, I think, a couple of months ago. I'd been familiar with the idea; I didn't know somebody had done it. So that was good news, and it's a nice contribution. We do have to talk about what the command's going to look like, though, because there's no resource. "Project purge," I think. That's a good one.

I've mentioned help, needing help. It's been on our list since at least Tokyo to restructure how the help works. When you type openstack --help, that's a fire hose of stuff, and the commands listed may or may not be applicable to you, because it doesn't authenticate to get a service catalog to find out whether you're on nova-net or Neutron, or what you have permissions to do.
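What an ospurge-style "project purge" has to do, walk the services in dependency order and delete everything the project owns, might look roughly like this. The `clients` mapping and `StubClient` here are stand-ins for illustration, not the real ospurge or OSC API.

```python
def purge_project(clients, project_id):
    """Delete every resource a project owns, dependents before dependencies."""
    removed = []
    # Servers hold ports and attached volumes, so they have to go first.
    for service in ("servers", "volumes", "networks"):
        client = clients[service]
        for res in list(client.list(project_id)):
            client.delete(res)
            removed.append((service, res))
    return removed

class StubClient:
    """Fake per-service client tracking a set of resources."""
    def __init__(self, items):
        self.items = set(items)
    def list(self, project_id):
        return self.items
    def delete(self, res):
        self.items.discard(res)

clients = {"servers": StubClient({"web1"}),
           "volumes": StubClient({"vol1", "vol2"}),
           "networks": StubClient({"net1"})}
removed = purge_project(clients, "demo")
print(sorted(removed))
```

The "it nukes everything" warning in the talk is the whole design: there is deliberately no per-resource confirmation, which is why the command naming discussion matters.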
Or the versions of the API supported; you're just going to get the default versions. What we want is for --help to provide a summary of how to get to what you're looking for: a little index of, oh, you want command help, it's openstack help and then the command you're looking for, or even a partial command. And so on for the global options and the rest. We need to break that down and make it a little bit more accessible. That's finally floated near the top of the priority chart. And I should also say that we're going to be talking about these priorities and such at our session at, what is it, 11:15? Yeah, the design sessions today and tomorrow.

You want to talk about the tokens? Yeah, just local caching of tokens. We can't really use session caching because it's a command, right? At the beginning of the command, we authenticate you and we start the session, and once the command exits, that's it; the context is lost. So every time you run another command, you get a new token. There's no real need for that sort of repetition: you can cache the tokens. Yeah, for some amount of time. I mean, the token has an expiration time. It does. We know how long it's going to live. And there's also keyring integration, which Jamie and you keep bringing up. Those kind of fit together. Yeah, exactly: that's how you'd store the tokens, in the keyring.

The osc-lib stuff: we did a first cut of what went into it, and I think we're going to think about some additional things to go in, moving up the stack a little bit. The stuff that went in there is the base level: the client manager and some of the common functions that all commands, or most commands, want available. It's the stuff the plug-ins were really looking to use. Now we're thinking about things a little bit higher up in the command stack.
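Local token caching is mostly bookkeeping: keep the token and its expiry somewhere that survives process exit (a keyring, say), and only re-authenticate when it's about to lapse. Here's a sketch with an injected clock so the behavior is visible; none of these names are OSC's actual API, and a real version would persist the token rather than hold it in memory.

```python
import time

class TokenCache:
    """Reuse a token across commands until shortly before it expires."""

    def __init__(self, authenticate, slack=60, clock=time.time):
        self._authenticate = authenticate  # returns (token, expires_at_epoch)
        self._slack = slack                # refresh this many seconds early
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or self._clock() >= self._expires_at - self._slack:
            self._token, self._expires_at = self._authenticate()
        return self._token

# Fake auth and clock to show the effect: three calls, one authentication.
calls = []
now = 0.0
def fake_auth():
    calls.append(1)
    return f"token-{len(calls)}", now + 3600

cache = TokenCache(fake_auth, clock=lambda: now)
print(cache.get(), cache.get(), cache.get(), "auths:", len(calls))
now = 3600.0  # past expiry: the next get() re-authenticates
print(cache.get(), "auths:", len(calls))
```

The `slack` margin is the important design choice: refreshing a bit early avoids handing a command a token that expires mid-request.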
Some of the business logic — maybe some of the pieces of purge — will go into osc-lib, if it turns out to be something like looping over a series of things. Region name has been a topic in the last couple of weeks: I need to loop over all of the regions that I have, and there are client caching issues with that right now. Some of that stuff may move into osc-lib, along with things a little bit higher up in the command structure.

The output column naming is also relatively new. We found the problem in the gate in DevStack, where the column names you get in a list command — I think it was one of the IP commands — are different between nova-network and Neutron. We've never had a really hard contract with output; we've been kind of loose with it, and the column names may or may not be the same thing. And of course we were bragging about our JSON and CSV support, and you need to be able to name those fields reliably to do that. So this has now become another one of the things we're going to talk about and flesh out a little more, because I don't think you and I have talked about it much. We're going to need to think about how to map column names to keep them consistent as APIs change, and also do the same thing we did with tenant and project. The same thing is named different things in different places — some resources have a display name and don't have a name, things like that. So that's what that's all about.

So I guess the main presentation is over, and we'll take some questions and hopefully provide some answers if possible. Yeah, sure. Does anybody have a question? Anybody need some caffeine? Yes. Okay.

So for me as a user, it's always difficult to tell which CLI version matches best the OpenStack version I'm using. You're versioning differently. I want to make sure I understand: our releases versus the OpenStack release? Those versions, okay. Or is it API versions?
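The column-name mapping problem described above — the same concept coming back under different field names from different services — could be handled with a normalization table. The mapping entries below are illustrative examples, not OSC's actual table:

```python
# Illustrative sketch of normalizing field names so that list/show
# output is stable for -f json / -f csv consumers, regardless of which
# backend service produced the data. The map entries are examples only.

COLUMN_MAP = {
    "tenant_id": "project_id",    # the tenant -> project rename
    "display_name": "name",       # e.g. some resources expose display_name
}


def normalize_columns(row):
    """Return a row dict with consistent, documented column names."""
    return {COLUMN_MAP.get(key, key): value for key, value in row.items()}
```

This is the "hard contract with output" idea: machine-readable formats like JSON and CSV are only useful if consumers can rely on the field names.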
You're at the current release? I go to the GitHub releases — it was released in Mitaka, so I'm picking the one before that.

Related to that, also concerning versioning: I think you switched the default for Glance from v1 to v2 in a micro-version release, which for me as a user is very difficult, right? I mean, you're switching a major API version in a micro version. Yeah. Did that really happen? I don't remember when the default switched. OCC. Oh, okay — that one happened because os-client-config (OCC) is where that default lived. Did I take the first part of the question? Yeah, go ahead.

Okay, so there are two parts to this question; for those of you who didn't hear it, I'll just deal with the versioning part. The first part: just because the OpenStack cloud you're using is at the Mitaka release and we released version 3.3.0 in Newton, you don't have to be limited to the OSC that was released in Mitaka to use against that cloud. You can use OSC now, the latest Newton release, 3.3.0, and it'll work all the way back to probably a Kilo cloud. Well, Kilo's still supported — our intent is to stay backwards compatible with every officially supported OpenStack release. You don't have to install this on the same system as the cloud, right? It's for remote management, so you can install it anywhere — in your own virtual environment if you're using Python — and it should work all the way back to Liberty or Kilo, hopefully even earlier if possible. Do you have a specific case that doesn't work? We test some of it, yes. Well, we can't test every combination, but much of it gets tested via the testing in the gate. And if you find a case that doesn't work, report it — it's a bug. Because yes, even master should work all the way back, and I suspect we'd still work with most of Juno. It really is going to depend on how the services have changed their APIs.
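Pinning API versions explicitly in a clouds.yaml file is one way to avoid surprises like the Glance v1-to-v2 default switch discussed above. A minimal illustrative entry might look like this — the cloud name, URL, and credentials are placeholders:

```yaml
# Illustrative ~/.config/openstack/clouds.yaml entry; all values
# here are placeholders for your own cloud.
clouds:
  mycloud:
    auth:
      auth_url: https://example.com:5000/v3
      username: demo
      project_name: demo
    identity_api_version: 3
    image_api_version: 2
    volume_api_version: 2
```

With this in place, `openstack --os-cloud mycloud image list` uses the pinned Image API version rather than whatever default the client release ships with.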
Yeah, if a service like Keystone or Nova changes their API, there's not much we can do about that. The default thing bothers me, though; we should override that. And yeah, the other one about API versioning — that probably should have been a major release. If changing a default causes user trouble like that, then that's good feedback: if we're changing the default, we should make it a major version release. So instead of 2.3, it should have been a 3.0. Good feedback, I like it. Also, one way to deal with that is a clouds.yaml file — you can set the API versions in there. Any other questions? We've got a minute, I think.

Yes — how do we determine, let me rephrase this, how do we determine whether something lives in a plug-in or lives in the repo? Yeah? It's in the project repo; they fall under the project's governance. Most of them put the plug-in in their client, so python-<project>client is where the plug-ins live. I think there are one or two that have a separate repo for the plug-in. Yeah, for the most part — heat client is a plug-in for OSC, for example, so you'll see the code for the heat client plug-in in the python-heatclient repo. Was that your question? Okay, good. It's hard to hear. I think we're over. I think we're at time. All right, well, thank you everyone. All right, cool.