All right, well, my name is Colleen. I've been working on the OpenStack Puppet modules for about nine or so months, and I wanted to come talk to you about what our experience has been trying to develop these Puppet modules against the OpenStack APIs when we can't use Python. In very general terms, the problem we're trying to solve is that we need to manage OpenStack resources with Puppet, using the Puppet DSL somehow. So the Puppet modules are going to deploy OpenStack packages and configure the services. And then as part of that automation, they also need to set things up within OpenStack: they need to set up Keystone users and endpoints, and we might want to upload Glance images or set up Neutron networks, that kind of thing. We want to do it with Puppet. Since we're coming at this from a Puppet perspective, I want to give a couple of vocab words, just so we're all on the same page. When we talk about a resource in Puppet, it might be a little bit obvious: it's just something that's being managed on a system by Puppet. Then a type in Puppet is the interface that Puppet gives you to that resource. That's the thing that lets us specify the properties of the resource, the name of the resource, that kind of thing. And then the provider is the back-end implementation; that's how we get the resource to come into existence. Types and providers are both written in Ruby. They're plugins for Puppet, and they must be written in Ruby. So as an example of this, the file resource is a built-in resource in Puppet. The type for the file resource is what lets us specify things like the content or the permissions or the name of the file, the file path, that kind of thing. The back-end implementation here, the provider, is pretty basic: it just uses basic POSIX utilities built into Ruby to manage the file.
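As a concrete illustration (the path and content here are just examples, not from the talk), a file resource declared in the Puppet DSL looks like this:

```puppet
file { '/etc/motd':
  ensure  => file,
  owner   => 'root',
  mode    => '0644',
  content => "Managed by Puppet\n",
}
```

The type defines those parameters (`owner`, `mode`, `content`); the provider is what actually writes the file to disk.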
But then if we want to take that a step further, if we want to manage something a little more complicated, say a MySQL database, that's something that's not built into Puppet; that's something we write as a plugin to Puppet. The type for the MySQL database, again, is going to be the thing that lets us specify the character set and the collation, the name of the MySQL database, and lets us use the Puppet DSL to describe that instead of something like SQL. So then how would this be implemented on the back-end? Well, we have a couple of options. We could shell out to the MySQL command line client and implement it there. Or we could use some sort of Ruby library that provides MySQL bindings internally. What we end up doing in the case of the Puppet Labs MySQL module is it actually does shell out to the MySQL command line client, and you can see in the debug output what it's actually doing. It shells out to MySQL. It does a show databases via the MySQL command line, and then it does a create database with all the properties that we wanted. But as a Puppet user, I don't really care about the back-end implementation. I just care that I've gotten this resource to come into existence and I didn't have to write any SQL; it just happened. So then we take it a little bit further. Similar to the MySQL database resource, we might want to manage a Keystone tenant or Keystone project using the Puppet DSL. The type, again, is going to be the way we specify all these properties. And then the question is, how do we do the back-end implementation? That's what this talk is about. So as software developers, we have a few requirements that are going to constrain how we choose our back-end implementation. They sort of fall into these categories. We need certain features: we need to be able to manage Keystone users, for example, so whatever implementation we use has to be capable of that. And then we have other restrictions.
The most important restriction is that we have to be using Ruby for this. And there are other general restrictions involved in that as well, which we'll get to later. And the Puppet people aren't alone in this. There are other applications that are also trying to do similar things. Terraform is an example: it's a Go utility, it's got to manage OpenStack resources somehow, and it needs to figure out how to implement its OpenStack provider. All the other config management tools are gonna run into this as well. Or if you're an operator and none of these tools are working for you and you develop something in-house, you're gonna be writing shell scripts or some sort of in-house tool that's gonna face the same problems we are. So I'm coming at this from a Puppet perspective, but I'm hoping this applies generally to everybody else as well. And so we went through three stages when trying to solve this problem. The first stage is how these providers were implemented from the beginning, when they were originally written in the olden days. This is kind of what we used to do. It's written in Ruby. Puppet provides a way to translate command line commands into Ruby functions, so we could turn this keystone command into a keystone function and pass in all the flags and arguments that we wanted into the function as if we were just shelling out. So we'd set some environment variables to do authentication and we'd call this function, and that's how we would get Keystone resources. And you can see in the debug output what this looks like. This really is essentially just shelling out to the command line and getting things done that way. So what was great about this approach was that it's actually very idiomatic to Puppet.
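To give a flavor of what that shelling out involved on the parsing side, here's a simplified Ruby sketch, not the actual provider code, of turning the kind of human-oriented ASCII table the old keystone client printed into Ruby data (the table contents are made up):

```ruby
# Simplified sketch: the old providers had to scrape table output like
# what `keystone tenant-list` printed. Assumes no empty cells.
def parse_table(output)
  rows = output.lines.reject { |line| line.start_with?('+') }
  header, *body = rows.map do |line|
    line.split('|').map(&:strip).reject(&:empty?)
  end
  body.map { |row| header.zip(row).to_h }
end

sample = <<~TABLE
  +--------------+----------+---------+
  |      id      |   name   | enabled |
  +--------------+----------+---------+
  | 4a5b6c7d8e9f | services |   True  |
  +--------------+----------+---------+
TABLE

parse_table(sample)
# => [{"id"=>"4a5b6c7d8e9f", "name"=>"services", "enabled"=>"True"}]
```

Any change in the client's formatting, an extra warning line or a shifted column, breaks scraping like this, which is exactly why the instability of these clients was so costly.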
This idea of shelling out to the command line client maybe seems a little strange or a little gross if you're a real application developer, but in Puppet, the only thing we can necessarily depend on is that the client binary was installed, because we installed it in the Puppet module. So we know we can depend on it, we know we can use it, and we can take advantage of it. It also makes it pretty debuggable, because we have that debug output, we can see exactly what it's doing, and that means we can run all those commands and see what's actually going on if something goes wrong. What we didn't like so much was that these individual command line clients for each service were very unstable. They would print out warnings unexpectedly, they would change their error reporting format, they would change all the time, and that gave them a very high cost of upkeep; it was very hard to maintain these things. Another issue we wanted to solve was that a lot of these providers were doing roughly the same thing, but in slightly different ways, and we really wanted to centralize that into some sort of common library that everyone could use. And the reason we decided to switch away from these command line clients was the instability. So that brings us to stage two, which was: well, if we don't want to use command line clients, what do we get to use? We use the APIs somehow; we bypass the command line clients and use the APIs. Well, what does that actually mean? A lot of people ask me first, why don't you just use curl or the native Ruby HTTP library and use the APIs directly? That's the most direct, it doesn't have any dependencies; isn't that the easiest thing to do? Well, it's actually kind of a giant pain in the ass. The first thing you'd have to do is manage the tokens yourself, which is not terribly easy, and it complicates the code a bit.
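To illustrate what "managing the tokens yourself" would mean, here's a sketch using Ruby's standard Net::HTTP against the Keystone v2.0 token call; the credentials are placeholders, and the request is only built here, never sent:

```ruby
require 'net/http'
require 'json'

# Build (but don't send) a Keystone v2.0 token request.
token_request = Net::HTTP::Post.new('/v2.0/tokens',
                                    'Content-Type' => 'application/json')
token_request.body = {
  auth: {
    passwordCredentials: { username: 'admin', password: 'secret' },
    tenantName: 'admin'
  }
}.to_json

# A real provider would send this to the Keystone endpoint, pull the token
# id out of the JSON response, attach it as X-Auth-Token on every
# subsequent request, and track the expiry itself so it knows when to
# re-authenticate.
```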
Then the next thing is you have to do some sort of action, and I'm gonna use curl commands to show how this would look if we were using the HTTP REST calls directly. So if we wanted to, for example, update a project or a tenant, that's a POST request with some data in the body. Simple enough. If we wanted to update a network, though, that becomes a PUT request, with data in the body again. There are differences there, which is a little bit annoying to account for, but that's not terrible. Well, then what if we wanted to update an image? If we're using the Glance v1 API, that's a PUT request with the data in the headers. Now we're losing some consistency. If we wanted to use v2, that's a PATCH request; the data is in the body, but it's a completely different JSON format that we have to deal with. And this is why we didn't want to use the REST endpoints raw like that: the complexity of managing this kind of thing just keeps growing, and we didn't want to re-implement everything like that. We knew there were already other utilities accomplishing this for us, so we didn't really need to reinvent a framework to do it; we could try to find something else. So that's why we tried to use an SDK: a set of language bindings that provides a language-level API for accessing OpenStack in a manner consistent with language standards. So we could use a Ruby library to do OpenStack things, and that sounded really great. These are the choices we had, the sort of recommended SDKs available to us. The problem is they're not actually official in any sense. There are various levels of upkeep, they're not maintained consistently, and they don't have every feature that we need. For example, with the Java SDK, there's a pull request that's been open since last year to implement token-based authentication.
So that's something that we would've needed if we were using Java that's not there yet, and Keystone v3 is just completely off the table; no one's even thought of trying to add that yet. The Node.js library doesn't have any support for Keystone, probably because it's meant to be run against public clouds. The PHP SDK is the only one that lives on Stackforge, so it's more or less official in some sense, but it hasn't had a commit since October; no one is working on it. So this is what we have to work with. Compare that to what Amazon makes available to you: they have official SDKs in every language you would want. They even have them for Android and iOS; I'm not really sure how you would use those, but these are all guaranteed to stay up to date and keep up with all the new features, and they are official, they are reliable. We don't have that in OpenStack. And this kind of sums up our feelings here: "Currently the OpenStack user story for both command line and application developer consumers of OpenStack-based clouds is confusing, fractured and inconsistent." That's from the Python OpenStack SDK wiki, and it sums up our feelings about the SDKs. So they're trying to solve this problem, but of course they're still using Python, which is still a restriction for us. But we're at least kind of on the same wavelength there. One of the SDKs that was available to us was a library called Fog. It's a very general purpose library that abstracts all the different clouds. We quickly decided not to go with that one because it was too big, it was too general purpose. And one of our restrictions was we didn't want to install gems on the system we were managing, so we'd have to vendor the whole thing as a module. And because Fog was so big and so complex, and the parts we didn't care about were always going to be changing, we didn't want to go with that.
We settled on a library called Aviator, which was a much more lightweight framework just for OpenStack. So we were able to vendor it, and it wasn't so much of a problem. How this worked in the provider was it would create a session object, authenticate the session, and then you could start making requests and parsing your responses. And this was okay, this was nice. What was great about this was, again, it was OpenStack focused; we didn't have to worry about other clouds here. The maintainer of this library was an individual maintainer; he was very nice, he was very responsive, he was very helpful, it was great to work with him. What we didn't like so much was that session management again became kind of a pain. I showed you a snippet of what was going on, but the complexity of managing that actually grew a lot, and we started getting a lot of spaghetti code because of the different ways one might manage a session. Again, vendoring the gem was something we had to do, but it was kind of an ugly solution; it wasn't perfect. It was something we were willing to do, but we didn't like it a lot. The bigger thing was this question of sustainability: there's one person managing this library, and OpenStack is a giant project, so how can one person keep up, and are we, the puppet module maintainers, willing to help with that effort? What shifted our focus completely was that we decided we needed Keystone v3. That became an immediate concern for a variety of reasons; there were lots of pressures from different organizations. They said they needed Keystone v3, and Aviator didn't have it. And so we had a choice: we could try to implement that in Aviator ourselves, but then the next time something like this came up, the next time we needed a new feature, we would have to do that ourselves again, and we foresaw that this was gonna continue to be a problem. So we moved away from Aviator, which brings us to stage three. We went back to a command line client.
We started using OpenStack Client, and this is what the debug output looks like. It's still a command line client, so we still face some of the issues we had before. The big win was that this supports Keystone v3. Another good thing is that since it's consistent, there's one command, and all the ways you call this command are pretty consistent. We can still abstract this out into a library module, which is what we wanted to do originally with Aviator. So it was great. We had Keystone v3 support. The client comes installed with distro packages, so we didn't have to vendor the thing ourselves. It's very well supported; there's a team of developers actively working on it. They're keeping up with new changes. They're responsive to our complaints and our needs. And it was a way to provide consistency across these different puppet modules that needed to do close to the same thing, accessing these APIs. What we're finding out now that's not so great is that we're dependent on the distros; we're actually at the mercy of the distros to release these packages on time. So if we need a new feature in OpenStack Client, we have to wait for the distros to supply it for us. And the stability problem, which we were having originally, we're still evaluating whether that's actually still gonna be a problem or if that's gonna be better in this common client. We're hoping it's gonna be better. Sure, we can take questions now. [Audience question about how the old clients were unstable.] So they would print errors out, and they would change the way they formatted the errors, for one thing. When we do this in Puppet, the function Puppet gives us is not good at distinguishing between standard out and standard error; it just grabs the whole thing. So we have to parse the whole output and figure out what to do with it. And if they're gonna change the output on us all the time, that makes our job harder. Another question? Great.
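Picking the thread back up: one concrete payoff of the consistent client is machine-readable output. OpenStack Client's list commands can emit CSV with `-f csv` (JSON via `-f json` is also available), which is much easier to consume from Ruby than the old clients' tables. A sketch, with the string standing in for real `openstack project list -f csv` output:

```ruby
require 'csv'

# Stand-in for the output of `openstack project list -f csv`;
# the IDs and names are made up.
output = <<~CSV
  "ID","Name"
  "1b2c3d4e5f","services"
  "6a7b8c9d0e","demo"
CSV

projects = CSV.parse(output, headers: true).map { |row| row['Name'] }
# => ["services", "demo"]
```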
And so the status of this project, where we've been trying to reimplement our puppet providers, is: incomplete. When I submitted this talk, I thought we'd be all done and this would be a great celebration. It's actually become a giant task, and we're not done yet. We've created this library module, and we have some of the functionality in there. A lot of the Keystone stuff has been moved over to use that instead of the Keystone client, because, again, the Keystone client doesn't support v3; we needed OpenStack client in order to get Keystone v3. We're starting to move Glance. We're starting to implement that Keystone v3 functionality. But we're not done yet, and we're not really super close to done either. So the point of this talk was not that we have all the answers and we figured it out and you should do it the way we did it. The point was more that, as a developer who's not using Python, the way forward is not very clear. We've tried three things. We're hoping this last thing works out. We're not really sure. We hope it does. So these things that we want in an SDK, sustainability, keeping up with new features, but also being reasonably stable, are things that we're hoping to see in the future, but that we haven't really reached yet. And I can open up now for more questions if you have any, or if you have similar experiences that you wanna talk about. Otherwise, you can find me at these various places. I'm crinkle, with a C, on Freenode; you can talk to me in the Puppet OpenStack channel. Does anyone have any questions or anything you wanna discuss? [Audience question about what the first stage was.] The first thing was using the individual command line clients. So instead of using the OpenStack client, we used the Keystone client, the Nova client, the Neutron client. And now we're trying to centralize on one client so they're all consistent. Yeah.
So, yeah, that's part of the reason it's not entirely solving the original problem we were having: a lot of that output and those errors are coming from the client libraries, not the binary itself. [Audience question about whether they had a relationship with the client developers.] Not exactly, not up until this point, not until we started to discover OpenStack client. At that point, we started to develop a relationship with the developers, and they started to hear our concerns. On the OpenStack client team, Steve Martinelli and Dean Troyer were the most helpful. We can do that. Okay. [Audience question, partly inaudible, about Rackspace's work on Fog and jclouds.] Yes. And I certainly take your criticism of the complexity and size of something like Fog. I take that to heart; we know that's an issue. We know it's not optimal that these are multi-cloud libraries when really all you want to do is OpenStack. They're serving a different audience than what you might need specifically. [Audience comment expressing appreciation for walking through the different options considered.] Great. Any other questions? All right, that is all I had. It was kind of a short talk, but... I'm so sorry. Oh, it's fine. [Audience question:] Did you try hitting the API directly? We didn't try it. We ruled it out early on because of the problems we thought we would have. The complexity of managing the sessions ourselves, we didn't really want to deal with that. And the problem where we can't really abstract the API calls: they're all going to be different even if we're trying to do similar things, like updating an object. It's a POST request or it's a PUT request or it's a PATCH request, and it's too hard to abstract. And we didn't want to re-implement something that someone else had already done. Exactly, yeah. Because there are already others doing that. Yeah. We've found that, too.
[Audience comment, partly inaudible.] Putting together these slides was kind of a pain, because I had to guess at some things to get them to work; the documentation wasn't really complete enough in some cases. Thanks. [Audience comment, partly inaudible, describing an in-house approach:] So we have a generic Ruby provider, and that gets translated; we actually use the Python libraries. It sends a JSON payload, and that basically lets us use those libraries on the Python side without writing the whole thing ourselves. That sounds like it could have been useful if we'd known about that. All right. Thank you so much.