All right, it's 4:20, so time to get started. I'm Brian Curtin. This is Terry Howe. I work for Rackspace; he's from HP. And we're here to talk about the OpenStack SDK: what it is, how it came to be, what's going on with it. So I first wanted to start by saying, what actually is this? The OpenStack SDK is a project that we're working on to consolidate efforts on developer experience across OpenStack, across all the clients, all the tools, all the libraries. It's a step towards having a one-stop shop to work with OpenStack. Right now, as you know, there are a lot of different clients and different ways to do things, and we're trying to make one unified solution that hopefully gives people a better experience working with OpenStack, whether they're vendors, customers of vendors, or any type of developer, whether you're a core developer of one of the projects or an end user building applications on OpenStack, all across the board. So why do we need this? As OpenStack has grown in terms of adoption and in terms of the feature offering, literally everything about it has grown; look at the conference, look at the marketplace. Everything is significantly larger. If you compare the Austin release to Icehouse, the release notes are eight times larger. The feature offering is significantly larger. Stackforge is growing all the time. And what this has led to is that, if you count both the OpenStack and Stackforge repos, there are now up to thirty-something projects. Each of them has implemented its own client, most of which have copied things from other places, and across them there's a lot of duplication and fragmentation.
The three main things you see as the project has grown are that there's a lot of fragmentation, it has led to a lot of duplication, and it's led to a lot of inconsistency in interfaces and usage. So if we look at the fragmentation across OpenStack, we see there are 30 different clients, all written by different people for the most part. They're all different packages, and I don't think anyone's building stuff that uses all 30 of these projects, but even for a small number of different services, you're consuming a bunch of different packages with varying levels of documentation, each used even the slightest bit differently, and those differences really do add up. Certain contributors work on different projects, and that's all fine and good; people have different things they want to work on. But that leads to different design ideas. People have different ideas about how things should be used, so even a service that works with another service could be implemented by a different person with different ideas on how things should work, and so things don't work as uniformly across the board as you would hope. And some of that fragmentation has led to a lot of duplication. Looking into a lot of the client implementations, or any of the tools, one common thread is that everyone has mostly written their own HTTP client. With a lot of them you can tell they were copied and pasted. Some of the more recent projects have inherited, for lack of a better term, although not actually inheriting in the OO sense, the HTTP class from someone else; a lot of them come from Swift or Nova or something like that. And if you look at how the big three, the originals, Nova, Glance, and Swift, have built their HTTP class, two of them are built on requests.Session, which is the fairly common way. Nova has written its own connection pool that under the hood can use sessions, and that's done, I'm sure, for a perfectly good reason.
I'm not trying to slam that; I'm just saying that there are differences immediately. I'm looking at three things and seeing two different implementations, and then you step into just the names: they're named differently. One of them is HTTPConnection, one of them is something else. They also stretch beyond just doing HTTP things. I believe it's the Swift one; one of them ends up doing authentication for you, while the others have authentication classes that work alongside them. So just looking at three projects, the first ones I looked at, we have three different implementations of roughly the same thing. That fragmentation has led to this duplication, which in the end leads to inconsistencies across all these projects. So look at listing resources, one of the simple first things you might do when you import any of these. In Swift you have containers, in Nova you have servers, in Glance you have images. To list the containers in Swift, you call get_account, because you do a GET request on the account endpoint in the REST API. And that gives you a tuple of some metadata and then the container information. If you do servers.list in Nova, that gives you a list of servers. That's pretty straightforward. In Glance, if you do images.list, that returns a generator of images. I prefer generators myself, but look at that: once again, three things doing three fairly simple operations in three entirely different ways and named two different ways. So this all adds up as you start to build applications that use multiple OpenStack services. The differences are mounting, and no matter what type of user you are, if you're someone in this room who's flying to Paris to go to a conference about OpenStack, you're probably on the higher end of technical ability, and some of you can deal with this. We have customers who are not even sure what Nova means.
They just want servers. And then they look at this and see all these differences, different ways to work with things, all these different dependencies. And from multiple angles in the OpenStack community, this is not really a great experience. So this project came around to focus on having one place to consume OpenStack, whether you're an end user, a developer of one of the services that consumes another service, an operator, or anyone who's doing anything with OpenStack. It's the one-stop shop idea: this is ideally something you install once, and it runs wherever OpenStack is found. So we're calling this the OpenStack SDK. SDK is a fairly standard term, software development kit, and what it entails is roughly the same across the board, but just to put it out there: it's a set of libraries, tools, documentation, and examples that allow you to work with OpenStack. It's going to be a set of libraries that works with the services; on the command line side, OpenStack Client is an existing tool that actually wants to consume the libraries that we're working on. And documentation: not just API documentation, which we're working on, but equally good documentation across the board. If you look at the documentation of a lot of these other projects, you'll see a wide range of completeness, not just in API docs but in user documentation, which is the prose, saying: how do I build against this service? How do I use this service? Telling people what functions to call and what parameters to pass them is great, and that's a requirement; you have to have that. But we actually need more about how to build things and how to use these services, with solid examples. And again, this is install once, good anywhere. So the target audience, as I said, is really everybody, anyone who's using OpenStack from any angle for any purpose. But in order to get started, obviously, we don't want to boil the ocean.
We're working smaller, with end users who are building applications on OpenStack and with developers of OpenStack projects, the people working on one service that consumes another service. And so some of the goals are, again, to produce a quality set of libraries, tools, and documentation that provide a more uniform experience across all of those services and across the boundaries between a library and a command line tool or any other tools we might be producing. Again, the documentation is the full spectrum of what you would want to know when working with this. In terms of library goals: my earlier example of everyone having their own HTTP layer is gone. You only need one, one base class that covers as much as we can. Obviously services may have things they need beyond that as they customize, or as things just happen to work differently for other services, but having one solid base class, with all these projects working together in the same namespace, makes it so you can actually subclass our base thing instead of copying and pasting into your own project. Consistent interfaces are another key thing. The way we've built this, on top of all the lower-level communications stuff, there's a resource layer that everything is built on that represents the server-side resource, and we're creating consistent interfaces at that resource layer as well as at the higher level where most consumers will work with this, which is something we're actively working on right now. We're trying out a couple of different ways of working with this, and as we show in some of these examples, hopefully you can help us as users and let us know which way you want to work with this stuff, and we'll go that way and make sure we have consistency across the board, so we're not doing get_account to receive your containers and we're not perpetuating a lot of the old naming.
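The inconsistency described earlier, get_account versus servers.list versus images.list, can be made concrete with small stand-in stubs. These are not the real swiftclient, novaclient, or glanceclient classes, just a sketch of the three different return shapes the talk describes:

```python
class FakeSwiftConnection:
    """swiftclient-style: get_account() returns (headers, [containers])."""
    def get_account(self):
        return ({"x-account-container-count": "2"},
                [{"name": "photos"}, {"name": "backups"}])

class FakeServerManager:
    """novaclient-style: servers.list() returns a plain list."""
    def list(self):
        return ["web-1", "web-2"]

class FakeImageManager:
    """glanceclient-style: images.list() returns a generator."""
    def list(self):
        yield "cirros"
        yield "ubuntu"

headers, containers = FakeSwiftConnection().get_account()  # tuple of metadata + containers
servers = FakeServerManager().list()                       # a list
images = FakeImageManager().list()                         # a generator; must be iterated
```

Three listing operations, three shapes: a tuple you have to unpack, a list, and a generator. That is exactly the kind of difference the SDK's resource layer is meant to smooth over.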
Coming to naming: clear naming. We're trying to get rid of the whole Nova, Swift, Glance thing, a lot of the project names that we all know internally. Those are not really useful to a lot of end users. Compute, object storage, and images are more useful names. It's very clear when you do from openstack import compute that you're working with compute. Import Nova, and we kind of have to tell a story: why that is, where the name came from, why you have to import Nova to work with servers. Clear names like that are helpful across the board, especially as this grows outside of the core contributors, because everyone here probably knows what Nova is and everyone would happily import Nova and work with it, but the people who are customers of any of the vendors in here, people who are running this stuff at universities, don't really want to have to know the backstory and the history of names. And as you know, names change as trademark issues come up. I don't want to have to import Sahara, which is the renamed Savanna, whatever that story is. We want to make sure that the names this project uses are not going to change, so we use those service names as much as possible. The tool goals are very similar: tools built on top of the libraries, with the same nice, consistent interfaces across the board. OpenStack Client is run by the famous Dean Troyer. He works on it; a lot of people do. It's a command line tool that is looking at actually consuming the libraries from this SDK eventually to produce the command line tools for each of the services, versus how it is now, importing a lot of those different clients and making things work across the board. And then documentation. Again, this is composed of any type of documentation we can come up with, to make sure our users are well informed of what's going on and don't have to know the inside baseball of how the services work.
They don't have to follow openstack-dev to know this stuff. We should be able to put it out there nicely and plainly, in plain English, without requiring prior knowledge of a lot of this, and say: to work with this project, you do this. And here's how you build these examples, how you solve these problems that you might have solved elsewhere or on other platforms; here's how OpenStack does it, here's how this project does it. Ideally, we're going to have a ton of documentation. That's something we're going to start working on. We're going to lead off with API docs, but we're aiming for a really good, complete set of documentation that allows people to come to one spot, get their answers, and not have to resort to going to Stack Overflow and the mailing lists the first time. Ideally, we can solve a lot of that up front with really solid and complete documentation. I'm going to pass it off to Terry, who's going to run through some code examples we have. All right. Step back here. I'm live. Yeah, I wanted to start out talking about what it's like to work with the SDK from the user persona, and then we'll get a little bit more into what it's like from the developer persona. There are two main user-facing classes, the connection class and the user preference class. And ideally these would be the only classes you'd really be interacting with as a user. The connection class you basically set up, and there are several arguments you could pass to it, but at a minimum you need to pass in some authentication information, and then you would create your connection. I like to think of the connection as the glue that holds together everything, and I'll get a little bit more into that later, when we get into more detail on the connection. But as an example of using the connection: once you've created one, you typically can access a service by just specifying the service name, and it dynamically loads these service names.
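That dynamic loading of service names can be sketched with Python's __getattr__ hook. This is a hypothetical miniature of the idea, not the SDK's actual internals:

```python
class IdentityProxy:
    def find_project(self, name):
        # A real proxy would call the identity service over HTTP;
        # here we just fake a result for illustration.
        return {"name": name}

PROXIES = {"identity": IdentityProxy}  # service name -> proxy class

class Connection:
    def __getattr__(self, service_name):
        # Only called when normal attribute lookup fails, i.e. the
        # first time a given service name is accessed.
        try:
            proxy = PROXIES[service_name]()
        except KeyError:
            raise AttributeError(service_name)
        setattr(self, service_name, proxy)  # cache it on the instance
        return proxy

conn = Connection()
project = conn.identity.find_project("demo")
```

The nice property of this pattern is that conn.identity costs nothing until you actually use identity, and after the first access it's an ordinary cached attribute.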
And you would say, for instance, connection.identity.find_project, and you would get a particular project. Once you have a resource from the API, you can act on it without specifying the service name, because the resource itself knows what service it's related to. So you could say conn.update(project) after you've made your changes. And the same interface is available the other way: you could still take that project object and pass it into the identity update_project method if you wanted to. Then the user preference class; I had to liven this slide up a little bit, so I added a picture of Neo as a Matrix example. The user preference class is where you basically store all your wants: which service you want to talk to, where you want to talk to it, what version, and also potentially the visibility of the URL that you want to talk to, if you want to talk to an admin or a public interface. Some services are available through multiple visibilities. So that's it. And as an example, maybe I should try and see if I can get this demo going. I have this example; it's in Gerrit right now as a review, but this is an example of running it, and there it started. That's going to take a little while, because it's actually created a server already and it's waiting for the server to come up. But while that's working, and we'll see if it does work, who knows, this is what the example is doing. And this is somewhat simplified; the example itself is designed to be idempotent, so you can run it over and over again and hopefully get the same result. In order to create a network, you would just say network.create. We're going to go through the steps.
We're going to create a network, we're going to create a subnet, and we're going to create a router. Here we're just creating our subnet, and that's about it. Next, we're creating a router, and we're using the external network ID that we extracted earlier. And we go through and create security groups and a key pair for our server. And then finally we get down to, and this is greatly simplified from the example that's in the code, creating a server. Here we're using cloud-init to apt-get install Jenkins. In this case, in order to install Jenkins, you're going to need to add a repo to apt-get and do apt-get update to refresh what the repo knows about, and I also added a basic security thing so you at least have a login and your Jenkins server isn't just accessible by anyone. That's what I did in the full example, but here in this simple example we're passing in our user data for cloud-init. We've associated our network and our key and our image and flavor and our name, and it actually calls create. After the create is done, it's going to create an IP, and then this is the part that was running when we were last looking at the example: it's calling this wait_for_status. By default it waits for an ACTIVE status, but you could pass a different status if you wanted to wait for that. And then finally we're going to add the IP to the port that is associated with the server. This first line is getting the port; we have the server ID over there, and it's using the device ID as a filter. And then we assign that floating IP to our port. And this is. I'll take this one. Yeah. So, to contrast the way a lot of that was done, with passing in dictionaries for keyword arguments and then operating on the resource that was returned.
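The wait_for_status call Terry mentioned can be sketched as a generic polling loop. This is an illustration of the idea, with the status lookup injected as a function so it runs without a cloud; the real SDK method polls the server through the session:

```python
import time

def wait_for_status(fetch_status, status="ACTIVE", interval=0.01, attempts=50):
    """Poll fetch_status() until it returns the desired status."""
    for _ in range(attempts):
        current = fetch_status()
        if current == status:
            return current
        if current == "ERROR":
            # Bail out early rather than polling a failed resource.
            raise RuntimeError("resource went to ERROR")
        time.sleep(interval)
    raise TimeoutError("gave up waiting for %s" % status)

# Fake server that becomes ACTIVE on the third poll.
states = iter(["BUILD", "BUILD", "ACTIVE"])
result = wait_for_status(lambda: next(states))
```

The interval and attempt-count defaults here are made up; the point is just the shape of the loop: poll, compare, sleep, give up eventually.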
I wrote up a little example; most of Swift is implemented in our high level, in that connection class. The simple example is: point it at a directory, recursively walk that directory, and upload the contents of that directory to object storage, to Swift, simply using os.walk. Awesome function. Just point it at that, and then very simply take the container name from the directory name, and createContainer: just give it a string, and it'll create that. Whereas the other way, and you'll see this if you look at some of these other examples, you build things up with keyword arguments, or you get back a resource and pass the resource into createContainer. That nice string name I thought was a different way to do it; it makes it pretty nice as one simple line. Walk through that directory, give it a little pattern, and just find the files that match; so if you do star dot JPEG, we'll just run through your directory and upload everything. Simply say createContainer, then createObject in that container, giving it the data and the name of the file, and very simply, in a handful of lines, most of which are just Python standard library stuff, upload that whole directory. I think it's pretty easy. Hopefully you'll take a look at some of these examples; we'll probably put together a blog post where we go through some of these examples and contrast the differences, and hopefully have people check it out, see how you want to interact with this stuff, and then hopefully we'll make some awesome APIs. Back to you.
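Brian's directory-upload example can be sketched like this. The upload call is injected as a plain function so the walking and matching logic, the standard-library part he described, stands on its own; in the real example it would be the object store's create-object call:

```python
import fnmatch
import os
import tempfile

def upload_matching(directory, pattern, upload):
    # Walk the tree, keep files matching the pattern, and hand
    # each one's name and bytes to the upload callable.
    for root, _dirs, files in os.walk(directory):
        for name in fnmatch.filter(files, pattern):
            with open(os.path.join(root, name), "rb") as f:
                upload(name, f.read())

# Demo against a throwaway directory.
uploaded = {}
with tempfile.TemporaryDirectory() as d:
    for fname in ("a.jpg", "b.jpg", "notes.txt"):
        with open(os.path.join(d, fname), "w") as f:
            f.write(fname)
    upload_matching(d, "*.jpg", lambda name, data: uploaded.update({name: data}))
```

With a real connection, the lambda would instead call something like create_object on the container, which is why the whole thing fits in a handful of lines.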
Okay, that was kind of a discussion from the user persona, and now I wanted to get a little bit more into the developer persona: what does this look like at a high level? You've seen the connection class, and on the far right you've seen the user preference class, and within that, the guy that actually holds these things together, and why I call the connection the glue, is the session object. The session is built on a quasi OSI model type of thinking, in that it holds the authentication information, it will make sure that your authentication is there and up to date, and it supports the full set of HTTP verbs: get, put, et cetera. Let's go on to the next slide; we're going to get into some more detail. So here's the session object. And the reason I call it the glue, well, no, actually, take that back: the connection is the glue; the session is the context, because it holds the transport and the authentication information and the user preferences. Now, I made this example way before I knew anything about Paris, and I happened to pick this famous French pilot. What's interesting about Roland Garros is that he was one of the guys involved in getting the forward-facing machine gun to not hit the propeller, which I thought was really interesting. So, the session object: you create your session with your transport and your authentication and your preferences, and from there you can use it as a basic HTTP object, but it's going to do your authentication for you. If your token is about to expire, it's going to get a new one; if you don't have a token yet, it's going to get one. It just acts on the authenticator in a known way; it's going to try to authenticate with that.
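That get-a-token-and-refresh-it behavior can be sketched with a fake authenticator. These classes are stand-ins for illustration, not the SDK's real session, transport, or auth plugins:

```python
import time

class FakeAuthenticator:
    """Stand-in for an auth plugin; real ones would POST to Keystone."""
    def __init__(self):
        self.authentications = 0
    def authenticate(self):
        self.authentications += 1
        # Fake a short-lived token so expiry is observable in the demo.
        return {"token": "tok-%d" % self.authentications,
                "expires_at": time.time() + 0.2}

class Session:
    def __init__(self, authenticator):
        self.authenticator = authenticator
        self.auth = None
    def _ensure_token(self):
        # Authenticate on first use, or when the token has expired.
        if self.auth is None or self.auth["expires_at"] <= time.time():
            self.auth = self.authenticator.authenticate()
        return self.auth["token"]
    def get(self, url):
        token = self._ensure_token()
        # A real session would send X-Auth-Token over HTTP via the transport.
        return {"url": url, "token": token}

auth = FakeAuthenticator()
session = Session(auth)
first = session.get("/servers")
second = session.get("/servers")  # token still valid: no re-auth
time.sleep(0.25)
third = session.get("/servers")   # token expired: re-authenticates
```

The caller never touches tokens at all; that is the "it just acts on the authenticator in a known way" part.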
The transport object, which Dean actually put together, and Dean is here somewhere, I saw him earlier, was one of the first classes that was put out there. It does your basic transport-layer type HTTP work, and it also adds in some JSON handling for us. This is the guy that's derived from the requests object, and I think the requests class is actually called Session, which is a little confusing, but it does basic gets and puts. The auth plugins: Jamie Lennox originally put the auth plugins together over in the Keystone client, and we brought those over, took out the material that was there for backwards compatibility with the Keystone client, and simplified them a little bit as well. Right now you can auth with v3 auth or v2 auth; those are the two that we support. It's a plugin architecture, which I get into later, so different authentication plugins can be added, but in this example we're just doing user/password authentication. We pass in our arguments, and then we give it a transport and let it authorize, so the authentication plugin is using a transport as well. So yeah, here I'm getting a little bit more into the auth plugins. We're using stevedore to load the authentication plugins, and that uses entry points, so other people can add their own plugin if they wish. Currently, as I mentioned before, we support identity v2 and identity v3, token and user/password, and people can add their own. Now a little bit more about the internals. Within the connection class, it's adding a bunch of these proxy objects, and for each service there's going to be a proxy that defines which methods are available; for example, in the earlier one there was a find_project. So the identity proxy is going to have a find_project.
That proxy itself is going to access a resource implementation; in the example I just gave, that would be a project resource. It's derived from this resource object, which does what we would expect for basic CRUD operations. This guy, as I recall, and I'm not sure exactly, I think was derived from a collections MutableMapping. Do you remember? It's a MutableMapping, I think. So it acts like a dictionary. So here's our proxy object and an example of that. It holds a copy of the session, so it has a context to work with, and it has a bunch of methods like this that are very simple: I want to create an IP, so it's going to take the floating IP resource with whatever data you may have passed, and call create, passing in the session. The resource itself: this is an example of how a resource is implemented, and it's fairly simple to add a new resource. Your key things are: you add a plural and a non-plural resource key, which you would expect to come back in your responses, so it can parse the response; and you add a base path, which is whatever gets added onto your URL to access that resource. These base paths can also contain other keys, of course, and it will expect to extract those other keys from within the resource. For example, with something like a security group rule, it would add the security group that the rule is associated with, potentially, in the path. The resource also has the service that it's associated with, so it knows how to get an endpoint, and then it goes on with these resource properties. These are optional; the properties don't necessarily need to be here, but they're there for convenience. You can do some basic type checking, as you see here: a couple of these properties are integers, so if you try to set one of those to something else, it will complain that it's not the correct type.
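Those type-checked resource properties can be sketched as a small descriptor. The class and property names here are illustrative, not the SDK's actual code:

```python
class prop:
    """Map a Python attribute onto a server-side key, with optional type check."""
    def __init__(self, server_name, type=None):
        self.server_name = server_name
        self.type = type
    def __get__(self, obj, owner):
        if obj is None:
            return self  # accessed on the class, not an instance
        return obj.attrs.get(self.server_name)
    def __set__(self, obj, value):
        if self.type is not None and not isinstance(value, self.type):
            raise TypeError("%s must be %s"
                            % (self.server_name, self.type.__name__))
        obj.attrs[self.server_name] = value

class Flavor:
    base_path = "/flavors"
    def __init__(self, **attrs):
        self.attrs = dict(attrs)
    # "os-flavor-access:is_public" is not a legal Python attribute name,
    # so the descriptor gives it a PEP 8 one.
    is_public = prop("os-flavor-access:is_public")
    ram = prop("ram", type=int)

f = Flavor(**{"os-flavor-access:is_public": True})
f.ram = 512
```

Setting f.ram to a non-integer raises TypeError, which is the light validation being described; the hyphenated server-side name never leaks into the Python interface.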
And the nice thing about these properties is that this is one of the places where we'll smooth out some of the inconsistencies with the naming of things that come back. You have a lot of names that come back in camel case or all caps or different stuff like that, and if we're going to work with those from within Python, this is where we normalize them to the PEP 8 style with underscore-separated names, like is_public. Obviously, if we even tried to set that os-flavor-whatever name as a property on a Python object, it's not even syntactically correct because of the hyphens, so we make it a nicer name to work with; is_public is on the end of it anyway, but it makes some corrections at this level, so we smooth things out for users. Now, as far as performance considerations, we haven't done any performance testing with this, but the SDK itself is a fairly thin layer. You could argue that it could be thinner, but it's fairly thin; it shouldn't have a huge impact. The transport class has the potential to be reused among multiple connections, so you could potentially get some performance help there, because you could have a connection associated with, say, a different project. The performance you get is probably going to be more reliant on the Python requests package; whatever performance you would get out of that is what you would get with this. Installation of the SDK: right now we have an alpha release out, and we had a dev release out before this, but it's a basic pip install python-openstacksdk, and it's a 0.1.0 release right now, which has pretty much what you've seen here. So it's highly subject to change, and we'd actually prefer change. We'd prefer people take a look at it and let us know what they think: try to build out some trivial little examples, spin up a Jenkins server, write files to Swift, stuff like that. Right now it's Swift, Nova, parts of Neutron.
Yeah, I think that actually the Neutron support is pretty good. Compute is pretty good as well, and I think Steve Lewis put together most of telemetry, right? Yeah. What else is there? There's Trove; I think the Trove support is half decent, and we have orchestration, but it's almost nothing. So there's enough to toy around with and build some things, but don't put this in production, by the way. Take a look at it, let us know what you think of the interfaces and how it is to work with, and yeah, pip install is pretty easy. Here are some links that might be useful: our GitHub repo and our wiki, and of course our reviews, and we have a ton of reviews out there right now. And we also have regular meetings, Tuesdays at 1900 UTC in the #openstack-meeting-3 channel. And I think that's about it, right? That's it. Any questions? Yeah. I guess come up to the microphone, I think. Yeah, good. Yeah, so the question was: we have a lot of scripts right now that use the older clients, and if we move to the newer ones, is there going to be support for the older ones? How do we transition to, or just start using, the newer ones? Like you mentioned, instead of novaclient and stuff, you'll have compute; how is somebody supposed to do that? So we haven't really talked about a migration plan or how to move from this stuff. I imagine this will be available, and those things will fade over a long period of time, given the way deprecation works and the way these releases work. Even if this stuff were ready today, novaclient and those types of things will probably be around for a couple more releases. I think they will probably just live in parallel. I don't know of any plan to, say, make novaclient work with this under the hood, or any of those other clients. I imagine it will just be: this is here and that is there, and one of them eventually goes away and, if it works out, the other one takes over.
I mean, one thing, and I don't know, because I'm fairly new, maybe six months into this, but one thing that I always found, because I use the APIs a lot through the SDKs: unless you go into the code itself, there's not a whole lot of information out there. It'd be good to have solid SDK documentation going forward, at least with the new APIs, that really helps folks to actually use them. Yeah, hopefully we'll cover the whole documentation issue, and obviously the libraries out there are really designed more to be a CLI than a library, so we're trying to focus on the library as our first use case for the SDK. And since we were trying to get this code working for these examples, documentation is slightly lagging, at least on my side; in the next week or so I'm going to hammer on docs and get it up to speed, so it should be better. And obviously, as we build this out, we'll keep going with that. Okay, great. In regards to the discrepancies you noticed in some of the APIs, are you intending to push changes upstream to the actual projects to normalize the APIs as well? Is there a sort of overriding goal to make the RESTful interfaces look cleaner amongst the projects as well? So we haven't looked into that yet. We could. Right now we're focusing on getting rough compatibility with what's available on the client side in terms of the breadth of features that are there. Pushing that influence to the server side and what the REST API gives back hasn't really come up, as far as I know about right now. It's something to think about; that's probably more of a long-term goal, though, I think. And right now, also, we have to support the current API, so there are some cases already where, oh, filtering doesn't work on this API, so we've written a different find method for particular objects so that filtering works on that object. You had a question? Yeah, you said this is, you know, not production, alpha.
Should we expect some of the APIs, or the library functions, to actually change, or if we write a script, is it going to keep functioning going forward? What is it about this that makes it not production, alpha? Is it just not fully implemented, or can things change so that we have to go back and change our scripts? So it's a little bit of both. I was telling this story earlier: the session and transport and HTTP stuff, I think, has a pretty solid core. It's round, it's spherical. Then the resource layer, which is the one that interacts directly with the REST APIs, is not fully spherical. We haven't built everything out, and for every API we've added support for at the resource layer, we haven't gone all out on all of them, because we don't want to build everything and then realize we have to change it and then have to change everything. So we've built out a good amount of that, and then on top of that, we've built out some of that higher level, the proxies that go into the connection. So really this looks more like a football than anything. Before we really fill everything in, I want to make sure that the ends of that high level are good and solid. I don't suspect there will be a huge amount of change, and the functions are going to stay the same. What you pass to them may or may not change as we figure out what's the best way, because my Swift example with create container, where you just give it a string: none of the other ones support that right now. They're more about keyword arguments that correspond directly to what's on the resource, or you give it a resource. So it's a little bit of a different angle. There are going to be changes, but I don't think it's going to be anything super drastic.
Potentially backwards-incompatible ones, you know; now is the time to do that. But we're not at the point where I would build things that are seriously going to depend on it. I would expect small changes, and then once we get that figured out, we'll fill in the entire sphere. The current client CLIs do some client-side validation. Does this SDK do any validation, or does it just pass the data in, for configuration, say? Well, if you have the particular properties, when you're putting them into a resource it will validate those properties as you set them, but that's all; we try to keep it pretty light on the validation, because it's hard for the SDK to keep up with changes within the API. We don't want to get to where people can't use it because it's too rigid. Right, thanks. Yeah, this is less a question than a suggestion. It seems that oftentimes with open source projects the documentation lags way behind, and finding an advocate who specifically focuses on the documentation sometimes works better, because oftentimes the people who are doing the development don't have the time, and their real focus is on actually getting the code developed. So it's nice to have the documentation move ahead separately from the developers. If I could, I would have Gerrit gate on proper documentation being in there, if that were even a possibility. What about error checking? Are there any plans to coordinate with the sub-projects to have some kind of error handling that passes the information up the call stack, or something like that? Right now we're trying to grab as much information as we can out of an error response and populate it into our exception. And that's pretty much the best we can do. We're trying to be a fairly thin layer on that, I guess, and it's hard to code to all the different possibilities. Is this included in DevStack? So if I just install Icehouse, does this automatically get installed so I can use it?
No, it's completely separate; this is a Stackforge project, and you would have to manually install it separately. Separately. So, second question: how do you handle the different versions of the same API, for instance with Keystone? That's a good question. There's a whole versioning problem, and it's not actually complete in the SDK, but it handles some of the situations, because you have what the user wants, what the SDK provides, what's in your service catalog, and potentially what's in the versions API on the particular REST interface. Right now we match up basically with user preference: we'll use whatever version they request. If they requested a specific version, we'll just try to use that; if they said they wanted 2.1, the SDK would try to use that. Where we're lacking, I guess, is actually querying the versions interface and seeing what it really supports, potentially, if the user selects a version that's not in the service catalog. Any other questions? I guess that's it. Thanks a lot for coming, and check out the project and let us know what you think. Thanks. Thanks.