That must, excuse me, that must be my cue to go. Thank you all for coming. We're going to talk about OpenStack Client and a little bit about the OpenStack SDK and the relationship between the two. My name, let's not, there we go, let's not breathe on the mic. My name is Dean Troyer. I have been around this OpenStack thing for a little while. Tomorrow marks six years since I first went to NASA Ames to start working on what became Nova at the Nebula project there. I think my resume is on the back of the laptop here, with the exception of Mozilla: I've been through NASA, Rackspace, Nebula, and I'm now at Intel, working on DevStack and OpenStack Client for much of that time, in addition to other projects. A brief history of why OpenStack Client exists. When I was at Rackspace, we spent a couple of months trying to get the original four authentication options added as environment variables and as command line options, so that we could use the same configuration, the same authentication, across the then five command line clients: nova, cinder, swift, my mind has gone blank, anyway, keystone and glance. The frustration of doing that... I guess we hear a lot of talk now about how long it takes things to get merged. That's not new. The reasons for that, I'm not totally sure. Part of it is ownership. Part of it is not-invented-here. But at the end, I was at the Rackspace office in San Francisco one week towards the end of that process, and I shared a cab with Dolph back to SFO at the end of that week, and just made a comment: what if we started over, forgot about backwards compatibility and trying to fix things, and just did it right from the beginning? And he gave me just enough of a receptive response that I spent three and a half hours on the flights home that night mapping out the command set and the rules for how to build a command for what became OpenStack Client.
I built a proof of concept that's actually still in my GitHub repo. I'm not going to tell you the name because I'm embarrassed that it's still there, frankly. At the Folsom Summit in San Francisco, we did a short session towards the end of the week where I tried to poll the folks that were there, pretty much all developers at that point, about the idea. I had a proof of concept, and a lot of hands went up when I asked, does this sound like a good thing to do? There was a lot of support there. And then there was, of course, not a lot of support when it came time to getting help. But that is where I met Doug Hellmann, and we talked about this on and off all week. And then he went off and wrote cliff, which is the library that we use, am I on the right slide? That doesn't matter. It's the library we use for handling the global command line options and the command structure: parsing the commands and the actual class structure for building them within the client. That thing has turned out to be incredibly useful, and I'll get to later why part of that is. But that was the kick that I needed, and a few other folks needed, to get started with actually getting something implemented that wasn't a proof of concept. In Portland, a year later, yes, it took us that long, we changed the command structure. The change was to switch the position of object and action; we had action first at one point. And the following December, we did our first usable release, usable in the sense that I was able to get real work done with it, and it was dependable. We weren't ready to commit to the command format yet, but December of 2013 is when we produced something that we were ready, not just to say, yeah, we're doing this, but to tell people to go try it, give us feedback, see what's going on. A year after that, we released 1.0.
And at that point, we committed to not changing the API, the API being the command line itself. Incompatible changes? Well, we weren't going to make any. We were going to commit to backwards compatibility. Part of that was because a year ago in Paris, we found out a couple of projects were using it. Was it Chef? I don't remember. Puppet? And I know there are at least one or two others that are using it now as an API to OpenStack. So that's why we went ahead and did the 1.0 release, and we're committed to it. That way we're not going to break those sorts of upstream, I guess that would be downstream, things. One of the questions I always got, at the time anyway, not anymore, was: why are we starting over? One of the problems, and this is the sense I got out of that original effort, was that the projects didn't want to let go of control. We've always had a bit of a culture where the projects are, to some extent, autonomous. They get to make technical decisions about how they're going to implement code. The reality is, at the client level, all but two or three of the standalone clients, even now, were essentially forks of nova client. And even though they were forks, they diverged and went off and did things differently. So that was a big part of why we started over: to get things into one interface and get rid of the backwards compatibility problems. There was no way we were going to fix the existing clients without making a lot of people mad in the process. Keystone was the first client to fully commit to this. I don't remember if it was Dolph, I think it was, that made the decision that the Identity v3 API was not going to have a CLI in the python-keystoneclient package. The keystone command never implemented it.
It's in the library, it's in the Python API, but it's not in the CLI. Which is a good place to say what we consume: from the beginning, I didn't feel like we could take on the effort of replacing anything more than the CLI. That's essentially shell.py in most of the clients, one file that we've replaced. We consume the Python layer so far; we'll get to that. And why did it take so long? The gap between San Francisco and the 0.3 release that we did after Portland was a year, and a lot of it is me. This was an evenings project for me for two years. A couple of other folks helped, but we didn't really have a lot of regulars committing to it, and I didn't have the time to drive it. And it was an evenings project because, well, these are just three of the reasons that I've heard, some of them as recently as April, when I had the opportunity to talk to an awful lot of companies who were interested in having me work for them after Nebula closed. They didn't feel like a command line was a good place to spend time, and I didn't realize how widespread that was. I knew that I had that problem inside Nebula; one of those quotes came from a Nebula executive who was not Chris. And I think that explains why a lot of things take so long. It doesn't close sales. It's not on anybody's roadmap, and it comes down to resource allocation, getting people to allocate resources to work on things. And honestly, we could do a global search and replace in this little bit of the story and say rolling upgrades, or pick your feature in OpenStack. This is a repeating scenario for us, one that things like the Product Working Group are working to try and address. It's not isolated, and there are areas that are actively working on it. But again, command lines are boring. Who cares? At some point, and I guess I did that too soon, I did finally get support internally and was able to make this part of my job.
We got Steve Martinelli, Terry Howe came along, you know, the three of us, and we've had a few other people join us since then who are regular contributors. We added Lynn as a core not too long ago. We've got a small group of people who are working at bringing this along, and we've done an incredible amount in the last year. A little bit about the philosophy, and this may also have something to do with the way people see this in the community. I like to tell people that the C in OSC stands for consistent. That is the driving word for the command set, and this is all about the command set. Everything we do here is in service of providing a consistent command set that is hopefully not going to drive a user nuts. Maybe you could even predict what a command might be. If I need a new resource, what am I going to do? I'm going to create it. It's always going to be create. It's not going to be make. It's not going to be build. It's always going to be create. And those are contrived examples; we did have the real example of get and show both being used in the other clients for looking up a single resource. So that's the terminology. Well, I'll come back to that in a little bit, but calling something by the same name everywhere is one of the things, along with a very well-defined command structure. And other than the change we made in Portland, the fundamental structure has not changed since those flights almost four years ago. API independence refers to something that's coming up again as a problem now. We have a lot of projects in OpenStack, and a lot of projects want to use OSC, which is awesome, but we're having collisions with object names. If I talk about a policy, what kind of policy am I talking about? If I talk about a server, everybody pretty much knows that that's compute.
If I talk about an image, is that compute, because the REST API can do some images and that's left over from the old days, or is that the Image API, the glance command? Users should not have to know; users really don't care which API implements something. If I want to create a new image, I want to create a new image. I don't care how it gets done, just go do it. And again, we'll talk about this a little more later, but for the most part, for the five APIs that we handle in the repo today, a user does not need to know which API implements a command. We're lacking network right at the moment, and I'll talk about why in a little bit, but there are a number of functions that Nova implements for network that Neutron also implements, and I think we've worked out how we're going to auto-detect that so the command won't change. The security group commands will work the same whether Neutron is installed or not. Again, because a user shouldn't need to know or care. If you're doing something that is Neutron-only, that nova-network can't do, like create a network, create a port, whatever, you'll need to know that, but that's not something a client can hide. One of my default answers to the question of how should we do this, what should this look like, is to be very Unix-like. A Unix command doesn't tell you anything more than it needs to. If you delete a file and all went well, you just get your prompt back. If there was no error and you need to verify that, you check the return code. If there was a problem, you get an error. And we do the same thing. If you delete a server and all went well, you don't get any sort of feedback. There's some tension in the community about whether we should be more verbose about that, and at this point we haven't totally resolved it. We haven't changed it yet either, but I still like the idea of not saying any more than you have to.
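The auto-detection idea mentioned above can be sketched roughly like this: check the service catalog that comes back with the token for a network service, and route the security group commands accordingly. This is an illustrative sketch with hypothetical names, not the actual OSC implementation:

```python
# Hypothetical sketch: pick a backend for network-ish commands based on
# whether the cloud's service catalog advertises a network service.
def pick_network_backend(service_catalog):
    """Return 'neutron' if a network service is in the catalog, else 'nova-net'."""
    for entry in service_catalog:
        if entry.get("type") == "network":
            return "neutron"
    return "nova-net"

# Example catalog entries as a list of dicts (shape is illustrative only).
catalog = [
    {"type": "compute", "name": "nova"},
    {"type": "network", "name": "neutron"},
]
print(pick_network_backend(catalog))      # neutron
print(pick_network_backend(catalog[:1]))  # nova-net
```

The point is that the user runs the same `security group list` command either way; the dispatch decision stays inside the client.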
And possibly if you need a verbose mode, we've thought about that. Certainly we can turn on some more debugging and verbose output as is. Another big problem that we've had: the output of all of the clients, and we picked this up by default just because it was going to be the least painful for users, is in prettytable format. prettytable is a Python module that takes a table of stuff and draws lines around it and makes it pretty, amazingly enough. It's a mess to try and parse that. If you're using this in a script and you need to get the user ID from a username, and you do a user list and search on the name, you have to parse out the pipes to get the ID. So one of our goals from the beginning was to have an option for totally, easily machine-parsable output. This is one of the things that cliff gives us. Today, cliff does shell format, which is basically attribute=value, like a variable assignment in a shell script. It'll do the bare value; we use this in DevStack a lot for exactly that: I'm going to look up a user, and it returns nothing but the user ID, which you can then drop in without having to do any parsing. For list-formatted commands it will do CSV, and actually it'll do JSON; we can get everything out in JSON now. So anything that we add to cliff, if somebody decides we want YAML, we add it to cliff and we get it for free. And extensible here is talking about the plugins. We built in a plugin architecture that allows additional commands to be added. It doesn't have to be a new API, though so far all of the plugins that are publicly available are there to support other APIs. The five that are in the repo now are built as plugins; they just happen to be in the repo. They use the same mechanism for registering the commands. Part of this is what leads us to the problem of making the commands look alike. Plugins are free to do what they want. If plugins want to do object first, they can do anything.
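To make the machine-parsable formats concrete, here is the same record rendered the way those cliff-style formatters render it. These few functions are simplified illustrative stand-ins, not cliff's actual code:

```python
import csv
import io
import json

record = {"id": "a1b2c3", "name": "demo"}

def fmt_shell(d):
    # Like -f shell: attribute="value" lines, sourceable from a shell script.
    return "\n".join('%s="%s"' % (k, v) for k, v in d.items())

def fmt_value(d):
    # Like -f value: bare values only, trivial for scripts to consume.
    return "\n".join(str(v) for v in d.values())

def fmt_json(d):
    # Like -f json: hand the whole structure over at once.
    return json.dumps(d)

def fmt_csv(rows):
    # Like -f csv for list commands: header row plus one line per record.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(fmt_shell(record))  # id="a1b2c3" / name="demo" on two lines
print(fmt_value(record))  # a1b2c3 / demo on two lines
```

The "pipe-parsing" pain the talk describes disappears with the value form: a script asks for exactly one column and gets exactly the bare value back.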
We don't have a way of enforcing the command structure other than social pressure and user feedback saying, why did you do this differently? Occasionally I will go and look at something and either file a bug or put something on a review of something that I think is inconsistent. And I see that as much an education thing as anything else. I do have the rules as they exist outlined in the documentation. Actually, I think that's what I had up here: in the user documentation, the second document there is the command structure, which talks about all of the rules, how you decide what a command should look like, the list of the objects, all kinds of craziness. As far as terminology goes, these are the two where we've had the most feedback on being consistent. In the beginning, Nova used the word project to talk about the owner of resources. With the Rackspace deal, the combination that led to OpenStack, Rackspace used tenant, so Nova changed. And somewhere along the line Keystone, in the Identity v3 API, went back to project. And so we made the decision to go ahead and use project again. We translate all of the Identity v2 commands to use the word project instead of the word tenant. Hopefully we've got it in all of the output; a user should never see the word tenant from OpenStack Client. If they do, it's a bug, and feel free to file a bug against it. I know there are places where we haven't caught the translation. But again, the idea is that any given thing should have one name. There are other things in the OpenStack APIs where this is also true and we have not necessarily addressed it, but project is the best example. Property is another one. Can anybody tell me the difference between property and metadata? Yes. Okay. You want to use the word metadata to describe it, or you want to use that field in the API? Okay. But my point with the difference here is: does a user care? Does a user know?
Because there are APIs that use both metadata and property, and there may be a technical or a back-end reason why they're separate. But in the case that I'm most familiar with, it was actually rather arbitrary, and it was done that way to not break compatibility. Users generally don't care what it's called. As far as your specific thing, I'm not sure that I have the answer for you there. But in places where the nova command or any other command uses metadata to attach a value to a resource, we call that property. And again, we're trying very hard to always use that same word and do it the same way in every command. The command structure: it should be pretty clear that I'm the person to blame for what it looks like, because I've done it and I've been pretty forceful about enforcing it. Digital gets credit, because it's based upon the VMS command line. DCL used verb noun, or action object, but it was very rigid and regular in its structure, and you could predict it. What little experience I had on VMS came after I'd learned Unix, and it was like an incredible breath of fresh air. I mean, VMS has its problems, and it has its other fun things in scripting, but that command structure was predictable. If you wanted to do something with an object, you knew how to do it, because the verbs were always going to be the same. That structure, as it's wound up, and this is a slightly simplified version of what's in the documentation: an object is a resource, like a server or an image. The action is what you're going to do to it: create, list, delete, show. The options here are the command-specific options. In between the word openstack, the name of the binary, and the object is where the global options live, things like the authentication variables. Then come the command-specific options, and most commands have one positional argument; some have none, some have two.
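That position-dependent structure, openstack [global options] object action [command options] [name], can be sketched with a minimal splitter, assuming for simplicity that global options are written in --opt=value form. The parser here is hypothetical, not OSC's actual cliff-based parsing:

```python
# Hypothetical sketch of the position-dependent parse:
#   openstack [global options] <object> <action> [command options] [<name>]
# Global options appear before the object; everything after the
# object/action pair belongs to that specific command.
def split_command(argv):
    """Split argv into (global_opts, object_action, command_args)."""
    i = 0
    global_opts = []
    # Consume leading --opt=value globals until the object name appears.
    while i < len(argv) and argv[i].startswith("--"):
        global_opts.append(argv[i])
        i += 1
    object_action = argv[i:i + 2]   # e.g. ["server", "create"]
    command_args = argv[i + 2:]     # command options plus positional name/ID
    return global_opts, object_action, command_args

g, oa, rest = split_command(
    ["--os-cloud=devstack", "server", "create", "--image=cirros", "vm1"])
# g   -> ["--os-cloud=devstack"]
# oa  -> ["server", "create"]
# rest-> ["--image=cirros", "vm1"]
```

This is why the positions matter: the same --something token means a global option before the object and a command option after it.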
The positional argument, I believe I can say always, is the name of the resource. It can be an ID; on a create command you don't have the ID yet, so it must be the name. But it's the name or the ID of the resource that you're operating on. The options use the GNU-style long format: dash-dash-something. And this was probably one of the hardest things to get across to people: if the option has two words and has a separator in it, it will always be a dash, not an underscore. That's almost a religious issue with some people. It's not quite as bad as vi versus Emacs, but it's close, at least for a few people. And that's actually something that has filtered its way back. In at least the original clients that I still pay attention to, I see a lot of options being put in, most of them with just dashes, some of them with both. I'm not sure why new options would need an underscore, but it happens. Yes, those are global options, and those are mostly handled by cliff. The output formatting ones are all handled by cliff. The authentication options, oh, thank you, the authentication options we handle, but they're all global. So yes, we do have options that apply everywhere, and those appear ahead of the object name. So the options are position-dependent. I think the only one that is not: you can put help anywhere in the command line and you will get help. So, the OpenStack SDK. It's in the name of the talk. I wound up not having near as much to say about it as I had planned, partially because they're suffering from the same sort of problem that OpenStack Client suffers from: there's a small group of people doing almost all the work. And in most cases, I don't think any of you are doing it full time, are you? No. Terry most recently has been doing almost none of it because of other commitments. I don't feel like we can include it in a release.
Oh, here's why. Let's do it in this order then, follow the slides. The reason for the SDK is to get rid of the dependency on the project clients. This is replacing the Python APIs, which, even though they may have forked from nova client, are all different. And they're different in different ways, and you can see in our code base some of the tricks we've had to resort to just to use a common authentication for all of those clients. We've had to do nasty things like patching private structures, because they clearly weren't designed for this sort of use. And some of the clients bring in dependencies that make life miserable. I'll talk about Windows after a while, but the quick example is that glance uses OpenSSL, and have fun doing that on Windows without downloading a prebuilt binary from somebody who's not an official source. I last tried to do this about a year and a half ago, so things may have changed since then, but just glance's use of OpenSSL was a problem on a lot of platforms. The other reason for the SDK is to clean up the internal interfaces, to get rid of all of those weirdnesses we had to do between the different clients. This is the right thing to do for a lot of reasons; it's the right thing to have for developers to use. There's an object inside of OSC called ClientManager, which is the piece that abstracts away a lot of the differences between the Python client libraries. It almost could be used as an SDK, but it doesn't do enough, it doesn't do nearly enough. So the SDK, when we bring it in, will not replace it, but it will sure clean out a lot of the cruft that's in the ClientManager. The problem that we're having with the SDK is that I don't feel like I can commit to including it in OSC in a release until they're at a 1.0 release and can commit to not needing to modify the API.
Terry and I haven't talked about this for a little while, but he told me a little bit ago that they're closer to the 1.0 than I was afraid they were. So this still may be coming. He was actually hoping to have a demonstration of it. It's going to look like the same thing, but it's always nice to show that the code is running. The alternative to using an SDK is to stay where we are; we can do nothing. We've had suggestions of using tempest-lib, and for those of you who don't know what tempest-lib is: Tempest is our major testing harness, and it has its own set of REST API wrappers for each of the projects. It's very low level, really little more than a wrapper around: here's the route, here's some values, go do the REST call. Interestingly enough, for probably 80 percent, maybe more, of what OpenStack Client does, that's enough. That's all we need, because basically we are a pipe. We're sending values down, and we're simply returning them to the user, possibly reformatting them. There is something similar to tempest-lib, a little bit higher level, already inside OpenStack Client. We never used the swift client, because when we started, python-swiftclient didn't exist. To get the Swift client, you had to install Swift, and that was a non-starter. So I extracted the interesting bits from the swift command, dropped them into OSC directly, and reworked them enough that it felt like calling other OSC-style functions. The structure that I stuffed those Swift bits into, along with some other API bits, was something I was hoping we could put into the SDK as a low-level API, and that didn't happen. So right now we're still doing all of our object store work using that. It exists for list commands; I think I have list commands for three or four APIs. There's one that does multiple versions, to show how we would handle multiple versions of APIs. So those are other possibilities.
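That "basically a pipe" point can be illustrated with a minimal low-level wrapper in the spirit of those tempest-lib-style REST wrappers. Everything here, the class name, endpoint, and token, is hypothetical, and no real HTTP call is made; the sketch only builds the pieces a request would need:

```python
import urllib.parse

class BaseAPI:
    """Hypothetical low-level wrapper: endpoint + token in, requests out."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint.rstrip("/")
        self.token = token

    def build_request(self, method, path, **params):
        """Return the pieces a REST call needs: method, URL, headers."""
        url = "%s/%s" % (self.endpoint, path.lstrip("/"))
        if params:
            # Query parameters become the ?key=value part of the URL.
            url += "?" + urllib.parse.urlencode(params)
        headers = {
            "X-Auth-Token": self.token,
            "Accept": "application/json",
        }
        return method, url, headers

api = BaseAPI("https://image.example.com/v2", "secret-token")
method, url, headers = api.build_request("GET", "/images", limit=5)
# url -> "https://image.example.com/v2/images?limit=5"
```

For the 80-percent case the client really is just this: route plus values down, JSON back up to the output formatter.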
I don't think these are off the table necessarily, but this is more to describe the other things that were considered. The plan is still to go with the SDK. We're going to use it. Hopefully that means some day, to install OSC, you will have fewer than 10 dependencies to install instead of 32, which takes me exactly to the next point. Part of our problem, as I've already talked about, is the whole problem with dependencies. We've got too many. I alluded to Windows a little bit; that turns out to be 90 percent of the problem of doing anything with Python on Windows. You've got to install an interpreter, and if you need to build modules, you have to make them match the installation of the interpreter you used. If you downloaded it from python.org, you go one way. If you used, I don't even remember what all they are; the only one that I thought actually worked smoothly was Cygwin, because that's essentially the port of the Unix code, and that actually worked very well if you install all the right bits. The SDK timeline I've talked about. Too many commands here is referring to the namespacing problem with the plugins and the duplication; I've actually already covered this. So what we're looking at doing, what's on our roadmap for the immediate future, is using something called keystoneauth, which is an extraction of the authentication plugins from the keystone client into their own standalone thing. That's also the major bit that's still left to do with the SDK, that's holding them up. We have some issues with load times: it takes about 350 milliseconds just to do the imports required for most OSC commands. In real time, my usual benchmark is about four seconds to run an image list, and one and a half of that is the REST calls; the rest of it is all overhead. The one last thing I want to say about the SDK is network support. We don't use neutron client, we don't have anything; we've got the network command, and I think we've got a token one-command setup.
The rest of that is basically waiting on the SDK. And rather than go through a bunch of examples, I'm just going to point to where you can go look. In DevStack we use it quite a bit; we still have some work to do to replace the other CLIs, but there are two series of functions in DevStack, get-or-add and get-or-create, for user, project, whatever, and the get-or-create functions show a couple of interesting things you can do with OSC, including parsing output, examples of the machine-parsable formatting. Go ahead, I've got only examples left and we're short on time, so go ahead. So it's off, okay. So I wanted to ask if anybody has looked at building RAML or Swagger specs for the OpenStack APIs, so that other people could build dynamic clients and dynamic libraries? Right, that gets talked about a lot; in the API working group it gets brought up regularly. I don't know if they're planning to talk about that again. Somebody's got to do that, and generally it's not going to be the client people, so until an API provides us that, it's not an option. Even if we had it, I'm of the opinion that I still don't want to do it. Partially it's because of the speed, the load times; Glance uses JSON Schema now and something called warlock to do a lot of that stuff, which is causing problems. For the most part, I don't think our APIs are complicated enough to warrant going to that overhead. Doing the Swagger docs, or doing any of that to document the API, is still an excellent thing to do, and having it available for clients who want to use it, I'm not against; I'm against doing it in OpenStack Client. I'm trying to reduce overhead, not increase it, and that would increase overhead in terms of running a command. We've totaled it up: I think it was about four or five minutes of a CI job that typically takes about an hour, and that four or five minutes is running the openstack command. We run it that many times.
If I can cut that time in half, it only saves a couple of minutes, but when we're spending effort trying to save 10 minutes elsewhere, that's significant. So, I do know, I'm trying to remember what project it is, and I'm not sure that it's Nova. I know Nova has something; they used to have WADL files. I don't know what the state of doing that is. If there's any sort of session with the API working group around, that'd be a great place to ask, because they will know who's doing it, and Anne Gentle is keeping track of that. That's definitely something she wants for the documentation part of it; she wants to be able to take those and write docs. Thank you. Any other questions? Yes? We have huge JSON output on the machine, unstructured, non-structured data, and we wanted OSC to display it to the user. Our first take was just to print it, which works, but it's not really in the spirit of a tool like this. For example, people wanted to use the formatting options, and those would just be disabled in this case. But the problem is, the table formatting we use now doesn't wrap lines, so it ends up with tables that aren't readable just because one line was so long. It's so long, right. And the second problem: when we use the JSON formatting, it doesn't give us the same result, because it returns something of the format key: this, value: this, not the original JSON, which was key: value. Right? Okay. Any piece of advice? On the second one, just pick one of the other display commands to look at, all of the show commands; in cliff the class that does this, that implements the formatter, is called display one. If you just get the data structures right to return to cliff, it will handle all of that for you. So key value is pretty straightforward.
If you need to wrap lines, or if you need to do some sub-formatting, for example if one of those values is a Python list, it will print it Python-style, as that list, which is pretty ugly. So you have to go in and do all of that ahead of time, before you return to cliff. For the first part, we're struggling with some of that too. We've got places where the output just doesn't fit well into the table formatter, or into CSV, and I don't have good answers for that yet. Output, I should say too, is one of the things that we do not commit to not changing between releases. It's not that we want to go and break people, but if you have something critical, you have the option to say, I want these fields and I want them formatted like this. That won't break; it's just that the defaults may change, and the formatting of those defaults is something I don't have a really great solution for. There was another question? Yes. We have an option called wait for creating a resource, which will poll, and it works, if I remember right, at least it used to, the same way that the nova command does. It just polls once a second: are you done? It looks for the status, and when the status returns, yes, so we do have that in some of them. We don't have it everywhere, but we have it in a lot of places. Yes, it's there. I don't know if everything has it. It turns out that's the reason we swapped the order of the arguments; it made it a whole lot easier to do that. I can't tell you which ones are missing at this point, but the idea is they should be there. Oh right, cliff does it, so it just works. Yes, yes, yes, --debug will do that. It will show you the request and the response, and if you get a whole big stack of JSON in a response, it'll show you the raw response. We've been talking about refining that and making an option that makes that a little easier to use as a REST debugger.
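The "get the data structures right" advice, sketched: a cliff show command ultimately hands back a (columns, data) pair, and values that are Python lists should be flattened to strings first. This stub mimics only the shape of that contract, without depending on cliff itself; the function names are hypothetical:

```python
# Illustrative only: mimics the (columns, data) pair a cliff-style show
# command returns, with list values flattened ahead of time so the
# formatter never has to print a raw Python list.
def format_value(value):
    """Flatten lists/tuples into a readable string before handing off."""
    if isinstance(value, (list, tuple)):
        return ", ".join(str(v) for v in value)
    return value

def take_action_result(resource):
    """Build the (columns, data) pair from a resource dict."""
    columns = tuple(sorted(resource))
    data = tuple(format_value(resource[c]) for c in columns)
    return columns, data

cols, data = take_action_result(
    {"name": "vm1", "networks": ["private", "public"]})
# cols -> ('name', 'networks')
# data -> ('vm1', 'private, public')
```

Once the pair has that shape, every output format the formatter knows, table, shell, value, JSON, comes for free, which is the answer given above.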
Yes, our intent is that it should work with every supported version, meaning if the release is not end of life, it should work. The only one where we have control of that is Swift, and the Swift API, they add things occasionally, but other than that it's rock solid, and that's our code. For right now, everything else is the project's Python API; if that works, we're good. When we go to the SDK, it'll be an SDK issue, but our intention is that it should work, and honestly, as long as the API hasn't changed, it'll go back as far as the API works. I think you might be able to boot a server off of Diablo still. I wouldn't promise you that it would work, but okay, I think that's the end of our time. Are there any more questions or anything? All right, thank you.