It's exciting. Welcome to the first talk after the keynotes. As always, there's fun to be had for everybody. What did I call this darn thing? Sweet. Resolution-wise, how is that? Probably a little bit small, right? Let's make that a little bigger. Cool. Let me get set up in the other one, and let's make this dark for you; that will make the switching less panic-inducing. Cool. So welcome, and thank you. Sorry for the delay; that will make the fact that there's no way I can get through this content in the allotted time even more fun. I hope nobody has anywhere to be, because this is going to take me at least a year anyway. So this is going to be a talk about consuming multiple OpenStack clouds easily, because I do it all the time and I find it very easy. But people tell me that, wow, that seems complicated. It turns out it's not. So I'm going to hopefully tell you everything you've ever wanted to know. I will, in fact, tell you more than you've ever wanted to know, and after this you'll probably want to kill me. But that's just how life is. Quick introduction: for those of you who don't know me, my name is Monty. I work on the OpenStack infra core team. I also work for the fine folks at Red Hat. I keep tripping over saying that at OpenStack Summits; you'd think I would know the name of my own company, but clearly I don't know how to talk. For the purposes of this talk, the important piece is the infra core part, because that's where most of this comes from. I am mordred on IRC on Freenode, and you can also tweet angrily at me anytime on Twitter, and I will gladly ignore you, because nobody needs that kind of anger in their life. So today we're going to talk about the shade library, which is a library that I wrote along with some other people.
It's a task- and end-user-oriented Python library. It is not a Python library that exposes the OpenStack REST APIs; if you would like a library that does that, the OpenStack SDK project is an excellent project oriented around that concept. That is not what this does. Shade abstracts deployer differences. We've granted our deployer community an immense amount of flexibility in how to deploy their clouds, and that's wonderful: it gives them the ability to express themselves and to try to meet their customers' needs. And then, unfortunately, we've done that in such a way in some places that it makes it hard for users to consume those clouds. Silly us. So anyway, shade is designed to abstract those problems and work around them for you. It's designed to be multi-cloud from the beginning. When we started writing this, we were already consuming, I think, either three or four different clouds simultaneously in our automation, so that was a really important feature for us. It would not be useful if it hadn't been designed with that in mind. Hopefully it's simple to use. I think it is, but I also wrote it, so I might be missing something; I might have too much context. It's also designed for massive scale. We use it inside of infra's nodepool system, which is the thing that manages all of the test nodes for the OpenStack CI infrastructure. This is different from the pool of Kubernetes nodes that was talked about in the keynote this morning; it turns out there are only so many names for a thing that creates pools of nodes. So, you know, yay. But in our particular instance, we use this library behind a system that spins up and destroys around 20,000 VMs a day, so we're pretty confident that it can handle the scale. There are some things I will not talk about in this talk. There's another shade talk tomorrow that's a little bit more about theory and design, where I will go into some of the optional advanced features for high-scale stuff.
They're weird and you shouldn't think about them unless you actually need to, but they do exist. And as I mentioned, it came out of nodepool. Then we turned it into a library, because we were also starting to work on the OpenStack support in Ansible and realized that we needed the same things. We'd already written nodepool and thought, wow, this clearly needs to be a library so that we don't have to duplicate all of this logic again. And thus shade was born. So it is free software. You can get it from Git in the OpenStack Git repositories. We can talk about it on the openstack-dev list. We also have a channel on IRC, #openstack-shade, as you might expect. It came out of the infra group, but it is now actually an official OpenStack project of its own, because that's really important to everybody. Incidentally, this talk is also free software. It was written for a piece of software called presentty, which is console-based presentation software that takes reStructuredText files as input. So the source for this talk is actually in the shade documentation directory, as part of the shade documentation, which I realized while working on this was a possibility, and I'm really excited about that. It's a really cool option to be able to do that, and we've come up with a couple of improvements in that workflow that we'll make. So there may be a meta-talk at some point about giving talks that are also part of your documentation, because that is definitely a path we should all go down. I'm mentioning some local paths to example files and stuff like that in the talk. Please don't consider those to be written in stone; go look for them if you're going to check this out later on. Since this is the first presentation that I've put into shade's source tree, I may need to reorganize it as we figure out what we do with maybe five or six presentations. So the things will be here and this will stick around.
But, you know, I might move it. So that's a slide that's a response to a slide that isn't there anymore. This is a complete example of creating a Devuan Jessie server on three different clouds. This is all you have to do. It's the whole thing, and it's completely functional. Because I'm stupid, I'm going to run a bunch of scripts during this talk and show them to you, and they all talk to live public clouds, which of course always works perfectly, especially on conference Wi-Fi. I'm not going to run this one, because one of the things it does in the middle is upload an image to the cloud, and I don't think that trying to upload an entire operating system image to a public cloud over conference Wi-Fi is a good idea. So this is basically here as a simple script that does all of the things across multiple clouds, without a whole heck of a lot of divergent logic other than the list of clouds and regions. And that'll work. I will talk about that in more detail in a second. Before I talk about the specifics of that script, I want to run through some terminology. I'm going to try to do it quickly. If you can't follow it, that's fine; for the most part, you don't need to know any of it. But a quick overview, I think, is really helpful, especially since there are some terms in here that people in the community in general get confused about, especially as they relate to auth. So I'm going to do those. If you're a Keystone dev in here and I get a couple of these things wrong, just let it go. It's fine. So, some terminology. First: clouds. That can get questionable in and of itself; we could probably go to a bar and talk for two hours about the definition. For these purposes, we're essentially talking about an installation of OpenStack. It will have one or more regions, but it's a logical construct run by some people. It probably has a name.
You could point your finger at it and be like, I'm going to use that cloud over there. That's a cloud. Inside of that cloud, there may be one or more regions. In OpenStack, a region is essentially a completely independent installation of OpenStack that only shares being registered into the same Keystone service catalog. Other than Keystone, OpenStack regions share absolutely nothing and are not aware that they're even in a region; it's not a construct below the Keystone level. There's a word that doesn't really come up, but I want to point out that there's a thing we're missing a word for. I'm going to call it a patron for right now. That's the human who may have an account on a cloud. We have users, but a user is an object inside of Keystone that describes an account that can connect and do things over the API. A human may manage one or more of those things and may have one or more projects. Somewhere in here, there's a human. Humans are people that get billed; users are things that interact with clouds. In a lot of cases, it's a one-to-one relationship, but it's possible for it not to be exactly the same. A user is an account, as I mentioned, in a cloud. A project is a collection of cloud resources. All of your cloud resources, literally all of them, go into a project, and they are owned by it. It gets lumped in with our authentication, but it's the container for resources. And then a domain, which not everybody interacts with all the time, is a collection of users and projects for namespacing purposes; a cloud can have more than one. A cloud can have one or more regions. A patron can have one or more users. A patron can have one or more projects. And it turns out a cloud has one or more domains. All clouds have at least one domain. It's called 'default' if the cloud only has one and the cloud operator has not decided to do anything with domains. In some clouds, each patron has their own domain.
And in this context, giving a patron of a cloud the ability to create their own users and projects is empowered by handing them a domain with domain-admin credentials on it. I recommend to all of you who are deploying clouds: please deploy your cloud in that manner, because it exposes some features that people keep asking us to add to OpenStack. They're already there. It would be great. That support was added three or four years ago, and people keep asking us to add it. So please start giving people their own domains and let them create users and projects as a normal user of the cloud. For that matter, while I'm on this rant: especially in organizations where authentication is federated in some way, where you're federated with the corporate LDAP system or some sort of external system, those external systems are describing the patron. They're describing the human who has a relationship with the surrounding entity. That human may still want to create some API-only user accounts that they use, because you don't want to do a single sign-on dance every time you want to make an API call; that would make automation scripts impossible. So these are great things. Every user is in a domain, every project is in a domain, and a user has one or more roles on a project. This is not a talk about Keystone. We talk to the cloud over HTTP, and those HTTP interactions are authenticated by Keystone. Authentication returns a token, and essentially that authenticated HTTP session is shared across the region: you can use that same authenticated session to talk to all of the services within a region. You're not guaranteed to be able to do that in a different region. I'm saying all of this because essentially a cloud region, the combination of a cloud and a region, is the fundamental unit of talking to any OpenStack cloud. Even if the cloud only has one region, you're talking to that region. You must have a cloud and a region to talk to anything.
And so this is sort of the unit that we instantiate objects at: you will be creating a connection object for a region and then doing things on it. A cloud has a service catalog. That service catalog should, unless you're a naughty, naughty person, contain all of the endpoints of all of the services in all of the regions for the cloud. And if you get the service catalog in any region, it should show you all of the services in the other regions as well. If you're not doing yours that way, please change your cloud to do it that way, because that is the way that lets everybody do things well. And as I said, a region is completely autonomous. So, just a little bit more. I mentioned this already, so I'm not going to do all of the bullet points here, but essentially, if you have multiple domains, project and user names are only unique within a domain. And so as you're expressing your authentication in your config file, which I'll show you in just a second, this is one of the main points that people get confused on. If you are on a Keystone v3 domain-enabled cloud and you want to express the config of your user and your project by their names, so username and project name, you must include domain information. Otherwise, it has no idea in which domain you're asking for a named user or project. If, on the other hand, you express your user and project information by ID, because every user and project has both a name and an ID, IDs are unique, because they're a hash, a SHA, whatever; they're a UUID. So those are actually unique, and you do not need to express domain information, whether it's v2 or v3, if you're doing it by ID. So IDs are actually the easier way to deal with it, no matter which auth version you're on. But also, IDs are extremely difficult to think about when you're looking at a config file and you're like, am I connecting to the right project?
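To make the names-versus-IDs point concrete, here is a hedged sketch of the two equivalent ways of expressing the same auth in clouds.yaml. All of the names, URLs, and ID values are made up for illustration:

```yaml
clouds:
  # With names, you must say which domain those names live in:
  example-by-name:
    auth:
      auth_url: https://identity.example.com/v3
      username: mordred
      password: sekrit
      project_name: demo-project
      user_domain_name: default
      project_domain_name: default
  # With IDs, no domain information is needed -- IDs are globally unique:
  example-by-id:
    auth:
      auth_url: https://identity.example.com/v3
      user_id: 1df49f0ab48e4861b70507d191b42f7b
      password: sekrit
      project_id: 3f6e1a8fcbf24b2c9f04b2745d1cde2d
```

Both entries would select the same user and project; the first is readable by a human, the second needs no domain qualifiers.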
Am I going to create these resources and delete these resources in the correct bucket? Oh, nope, because I don't know how to memorize UUIDs. So I do recommend using names and just adding the domain information. Anyway, if that's all a confusing mess, that's fine. We're not going to talk about it much more, and hopefully we'll take care of guiding you in your use of things with appropriate error messages and whatnot. But a little bit of background on why you need to provide certain pieces of information in certain places, I thought, would be a little bit helpful. So there's essentially authentication per cloud. You're saying, hey, here's my user account, here's my authentication information, this is how I authenticate to the cloud. And then that cloud may have one or more regions, and before you do some operations, you need to tell the library: I want to authenticate to this cloud and then operate in this region over here. So they're selectors, in a lot of ways. You configure authentication per cloud, and then you select the config that you want to use by cloud and region. You do all of this in a file called clouds.yaml. Some of you may have, in the past, used an openrc file, which sets some environment variables. Great, you can still do that. I recommend not, because the whole topic here is dealing with multiple clouds, and if you're going to do openrc files, then you've got to have a directory of openrc files and source one. And you really need to make sure that each file has some unset lines in it that unset the variables from previous ones, because otherwise you'll persist settings for one cloud over into another cloud and things will get confused. And it'll be like, you can't log in, and you're like, but I logged in with that yesterday, why is this broken? Just stop using the openrc files. They were great in their day, but you should use the clouds.yaml file. It is supported by shade, which means it's supported by the OpenStack Ansible modules.
The Salt people are also working on adopting shade as a back end, so the clouds.yaml file should be supported there. It's also supported by python-openstackclient and the OpenStack SDK. So, all of the things that you should be using on a general basis. We can talk later about other language ecosystems; that's a to-do list item that I hope to have fixed in the next month or so, but I don't want to say that it works today, because it doesn't. In general, though, this is the path forward. We also just added a patch to Horizon in this cycle to provide you a clouds.yaml file out of Horizon, just like it can provide you an openrc file today. So we're trying to get the tooling out so that it's really easy for you to ingest cloud configs. You can put it in your home directory, in ~/.config/openstack/clouds.yaml. You can also install it system-wide, depending on what the use of a particular system is: you can stick a config file in /etc/openstack/clouds.yaml. Be careful if you're sticking passwords in there, obviously, to protect it so that the wrong people don't get it. But those are two choices, and it's your call where you want to stick your config file. If there is information both in your home directory and in /etc, your home directory wins, because that's the more specific of the two. You can read full documentation on clouds.yaml. I say full; I'm writing a whole bunch more documentation right now, because the last time I talked to the other language ecosystems, they were like, what the hell is this file doing? And I'm like, well, I know what it's doing. So we're trying to document that. The os-client-config library is the one that implements all of the clouds.yaml config processing, and so that's where the documentation for it lives. If you're on Mac or Windows, ~/.config/openstack and /etc/openstack aren't actually, strictly, the correct places to stick config files.
So shade will look for files in the appropriate location for that operating system. On OS X, people tell me, that's ~/Library/Application Support/OpenStack. And on Windows, it's C:\Users\<username>\AppData\Local\OpenStack\OpenStack. Don't ask me why there are two OpenStacks; I think it has something to do with vendor and then project. But anyway, those locations are there and are supported. We actually use a library that knows how to find these things, which is the reason I did not pick those locations. Those are the right locations; if you don't like them, sorry. So, inside of the config, there are kind of two different types of config. Again, a lot of this is things that in the general case you don't have to worry about, but in the slightly more expanded case you do. There are things we call profiles, which are descriptions of a cloud. A profile describes inherent qualities. So, for example, Rackspace's public cloud has a profile that's built into os-client-config, and it has some information about the cloud which is true for everyone, right? That information does not change regardless of your user preferences. And then there is your configuration for using that cloud, which is your authentication information and potentially some other preferences you have about how you want to consume that cloud, and things like that. I apologize for the fact that the cloud config is known by the word 'cloud'. I'm just out of words, I'm sorry. In general it doesn't wind up being terribly confusing; other than when you're describing the concept, you're overloading the terms, but in general usage the things do what you expect them to do. I mentioned environment variables. You can use environment variables; they pass through appropriately. So if you have a bunch of OS_ environment variables, they will be processed and slurped up and put into a cloud definition that is called envvars.
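The "home directory wins over /etc" rule above is just a first-match-wins search over an ordered candidate list. Here is a small pure-Python sketch of that precedence logic; the candidate list is a simplification (os-client-config checks more locations than these two, including the OS-specific paths just mentioned):

```python
import os

def find_clouds_yaml(candidates, exists=os.path.exists):
    """Return the first config file in the ordered candidate list that
    exists, or None if no candidate is present."""
    for path in candidates:
        if exists(path):
            return path
    return None

# Most specific location first: the user's own config beats the system-wide one.
search_order = [
    os.path.expanduser("~/.config/openstack/clouds.yaml"),  # user config wins
    "/etc/openstack/clouds.yaml",                            # system-wide fallback
]

# Simulate a machine that has both files on disk: the home-directory copy wins.
fake_fs = {search_order[0], search_order[1]}
print(find_clouds_yaml(search_order, exists=fake_fs.__contains__))
```

The `exists` parameter is injected only so the behavior is demonstrable without touching the real filesystem.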
So when you refer to a cloud by name, you can refer to the envvars cloud if you had set some environment variables, and that will be that cloud. We do not overlay environment variables on top of a given config, because that, it turns out, is always confusing and nobody can ever predict what the right thing is going to be. So we decided to stick them in their own cloud. You can set the environment variables OS_CLOUD and OS_REGION_NAME to be default values for selecting the cloud and region. If you set those two environment variables, shade will use them as defaults and you don't have to specify those in your script or your code or whatever. Okay, so that's way too much talking. Sorry, there's no code on the screen; there are no examples that are clearly going to fail because of conference Wi-Fi. So let's show a little bit of that. This is a basic clouds.yaml for the example code that I've got. Here's the first piece of it; it's going to be on three slides. This is a config for a cloud that I have named my-citycloud. It references a well-known named profile that refers to a cloud called citycloud. Actually, in my real clouds.yaml, both of these are just citycloud: cloud name citycloud, profile citycloud. But I've renamed them here to make it clear that one is the name you're going to refer to this configuration by, and the other one is the name of the cloud that you're talking to. Then there's my authentication. The authentication goes into a dictionary that is not at the top level. This is on purpose, and it is actually an implementation detail from the Ansible modules; there's a little bit of designing this based on what was needed to be able to pass things around appropriately. But it turns out that, even though a lot of clients don't implement this, Keystone authentication is fully pluggable, and the parameters that you pass into Keystone authentication are completely variable based on the plug-in.
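The slide being described isn't in this transcript, but the config in question looks roughly like this. This is a hedged reconstruction: the region, username, and project values are illustrative, not the speaker's real ones:

```yaml
clouds:
  my-citycloud:
    profile: citycloud      # well-known profile shipped with os-client-config
    region_name: Lon1       # illustrative region name
    auth:                   # the auth dict: parameters for the Keystone auth plugin
      username: mordred
      project_name: demo
      user_domain_id: default
      project_domain_id: default
      # note: no password here -- in this example it lives in secure.yaml
```

The key point mirrored from the talk: everything under `auth:` is passed opaquely to the auth plugin, while keys like `profile` and `region_name` sit at the cloud level.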
So you can't do parameter validation in a general way on things that we could otherwise validate. So we keep auth as an opaque dictionary, because the only thing that knows how to validate it is the authentication plug-in you're going to pass it to in the first place. That's too much explanation for why that is, but I bring it up because we get problems from people who will stick things up at the top level, or stick other pieces of config information into the auth dict. The auth dict is just the parameters to the auth plug-in: username, password, auth URL, et cetera. You may have noticed that there is no password in this auth information, which makes it very bad auth information; it's not going to authenticate very well without a password. I extracted that to point out that there is also an optional feature. In my actual clouds.yaml, my passwords are all in there; it's fine. But if you wanted to be a little bit more squirrely about things, you can stick anything you want into another file called secure.yaml, which will also get read and overlaid on top of the settings found in the clouds.yaml. The most sane reason you would ever want to do that is that you want to have one file with just your passwords in it that's protected more strictly, and then another file that's readable, that more people can see, so they can understand what's going on. That's why I added that feature, and I don't use it myself. But it's there if you want to use it. So here's an example secure.yaml. You'll notice this cloud here is named my-citycloud, so in the secure.yaml the cloud is also named my-citycloud and has an auth dict with a password in it. Those will get combined together and everything will just work. You can provide additional information in this config beyond just your authentication information. In this particular case, this is a definition of a configuration for VEXXHOST, which is another public cloud.
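The secure.yaml overlay described above is, in shape, an ordinary recursive dict merge where the overlay's values win. This is not os-client-config's actual code, just a plain-Python sketch of the behavior, using the my-citycloud example as data:

```python
def deep_merge(base, overlay):
    """Recursively merge overlay into base; overlay values win.
    Returns a new dict, leaving both inputs untouched."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# What clouds.yaml contributes: everything except the password.
clouds_yaml = {"clouds": {"my-citycloud": {"profile": "citycloud",
                                           "auth": {"username": "mordred"}}}}
# What secure.yaml contributes: just the password, under the same cloud name.
secure_yaml = {"clouds": {"my-citycloud": {"auth": {"password": "sekrit"}}}}

merged = deep_merge(clouds_yaml, secure_yaml)
print(merged["clouds"]["my-citycloud"]["auth"])
```

Because the cloud is named my-citycloud in both files, the two auth dicts get combined into one, which is exactly the "those will get combined together and everything will just work" behavior from the talk.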
In this one, I'm telling it that I want to use version 3 of the identity API, regardless of what detection finds, and that I want to use a particular endpoint for image operations: I want to ignore whatever's in the service catalog and use that one. It's actually not strictly necessary on VEXXHOST; the code will find v2 of the image API on VEXXHOST fine, but for the sake of pointing out when you need that kind of escape hatch, you can do it this way. And then standard auth things. You see here that I've expressed that the domain ID for my user and my project are both 'default'. Yes, default is both the name and the ID of the default domain. So this is a much more complicated example, for the third cloud that's going to be in the demo, and that is my connection to Internap, which is yet another public cloud. You'll hear me say the words 'yet another public cloud' a little bit. In this one, I'm actually not using a profile. There is an internap profile defined in the library, but I decided, for the sake of example, to show that you do not have to reference a pre-existing cloud profile; you can put all of the information directly in the config. One of the reasons for the profiles is that that information doesn't change and it's very repetitive, so there's no need for you to have to manage it. I can manage it fine for you and cut new releases of os-client-config, and it's all up to date. But if you have things, or maybe you're using a private cloud that I don't publish because it's a private cloud, you may need to put in some of this information. Incidentally, and it's not included in this talk because we'd start to get into the weeds, not that we aren't already there, you could actually, if you wanted to, make a profile definition today and distribute it to your users to install on their systems in well-known locations, and it will find that.
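A hedged reconstruction of the VEXXHOST entry being described. The endpoint URL is made up, and the exact override key name is my assumption, not something quoted from the slide:

```yaml
clouds:
  my-vexxhost:
    profile: vexxhost
    identity_api_version: 3          # force Keystone v3 rather than autodetecting
    # escape hatch: ignore the service catalog for image operations and use
    # this endpoint instead (URL and key name are illustrative assumptions)
    image_endpoint_override: https://image.example.vexxhost.net/v2/
    auth:
      username: mordred
      project_name: demo
      user_domain_id: default        # 'default' is both the name and the ID
      project_domain_id: default     # of the default domain
```

As the talk notes, the override isn't actually required on VEXXHOST; it's here purely to show where that kind of knob lives when you do need it.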
So you could have a locally defined profile for your private cloud, hand that to your users, and then it would have all of the appropriate information. We also have a proposal, which I will be talking about tomorrow afternoon in a session called Exposing Deployer Differences Without Death, to start having the clouds be able to provide the profile directly from an endpoint. So all of this, where there are magic files that Monty is maintaining in the corner, can stop being magic that I'm maintaining and just be a standard thing that we know is documented and people can count on. But, again, that's a talk tomorrow afternoon in the forum. So again, I'm expressing an identity API version. I'm telling it that this cloud doesn't have floating IPs, because I know that right now. Shade will figure that out of its own free will, but it takes a few additional API calls, and in this case I know this cloud doesn't have floating IPs, so I'm just telling it: this doesn't have floating IPs, don't bother trying to look for them, please. That saves some introspection in some of the interactions. Also, you're seeing that we've got a list of regions that have some values. You can do this for any values that exist: any of the values can be per region, or they can be global for the cloud. In this particular case, there's an interesting characteristic at Internap: when you as a patron create a new account, they spin up a user and a project in the cloud for you, and they provision your very own public network and private network and give them to you. Sadly, there is no way in the Neutron API to tell that that public network is a public network. It is impossible. There is a property on Neutron networks called router:external. That does not mean 'this routes things externally', as we found out with the fine folks at Internap. I was like, hey, so I've got this public network and it doesn't have router:external set to true; could you set that?
It turns out that makes the network visible to all of the other users of the cloud, although they still can't connect to it. So it broke a bunch of their users who weren't me, and they had to revert that real quickly, but I do appreciate them helping us learn this. It's physically impossible to know this from the Neutron API at the moment. So we've just asserted it here; we've got a couple of pieces of metadata. If you find yourself in a similar weird hole, our general philosophy is that we don't think you should have to configure anything, really. You should just be able to give it auth and everything should go. In cases like this, where it's just not possible to figure it out, we wanna make sure that there's a flag somewhere where you can go and say, no, this is really what's happening, and be able to move on with your life. So in this case, we're essentially annotating these networks. We're saying this network routes packets externally; this other network does not. And then finally, there's the default_interface: true there. There are two networks that show up in my project, so when I go to create a server, Nova is going to expect me, in this case, to tell it which network I want to create the server on. That's fair. I know, based on my usage patterns, that I'm always gonna want to create servers on that public one, just because of the way that I do things. So I've got a flag in here that I can set. It's optional. If you don't set it, then whenever you're doing a create server on this cloud, you'll have to say, hey, I want to use this network, or this other network, or both networks, or whatever, which are all fine things. So, blah, blah, blah, config, config, config. If you have a clouds.yaml that's lovingly set up in such a way, you can actually run the script, which I told you I wasn't gonna run, but I'll run a different version of it in a second. I'm gonna walk through it real quick.
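Pulling the Internap pieces together, the entry being described looks roughly like the following. This is a hedged sketch: the WAN network name echoes the one mentioned later in the talk, the LAN name and region are made up, and the exact key spellings are assumptions based on shade's network config options:

```yaml
clouds:
  my-internap:
    # no profile: all cloud information is given inline for this one
    auth:
      auth_url: https://identity.example.internap.com/v3   # illustrative URL
      username: mordred
      password: sekrit
      project_name: demo
      user_domain_id: default
      project_domain_id: default
    identity_api_version: 3
    floating_ip_source: None     # this cloud has no floating IPs; skip looking
    regions:                     # values can be set per region or cloud-wide
      - name: ams01              # illustrative region name
        values:
          networks:
            - name: inap17037-wan1654     # the provisioned public network
              routes_externally: true     # annotation Neutron can't express
              default_interface: true     # use this network in create_server
            - name: inap17037-lan0001     # hypothetical private network name
              routes_externally: false
```

The two annotations do exactly what the talk says: they assert which network routes packets externally, and which one create_server should attach to when you don't say otherwise.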
I think it's fairly self-explanatory, but real quickly: we're going to import the library. Step one, import library. Step two is initialize logging. Shade uses Python logging for things; it's a full, normal Python logging setup. So as a person writing an application, you can configure that logging to do whatever it is you want to do. It's also possible you don't want to configure the Python logging system to do specific things; you would like to just kind of have it do whatever. So for that case, we've got a simple helper method called simple_logging. It's not very flexible; it has three options. It will do some amount of logging setup. The basics of it are that it squelches some meaningless warnings from sub-libraries that are annoying to you and that you can't do anything about. So we make them go away, because you can't action them, and I don't believe that warnings the user can't do anything about are useful. Thank you, Subject Alt Name warning in requests: I can't fix the cloud I'm connecting to; it doesn't have a Subject Alt Name; that's just what it is. Sorry, rant. So it does some easy defaults like that. You can also pass it debug=True or http_debug=True; these are usually how I'm doing things. These will either print out some amount of debug information about what's going on, or the full-on HTTP interaction tracing. So if you really wanna see what's going on, it'll do that. As a quick example, oh, I should probably make this bigger too. So I've got this. Oh, I can't type in the name. This is great, you guys get to watch me. So running a script with debug logging on will show you things like this. That went and did an image list on the cloud. There are sort of two different pieces in here, but it did the one operation; it's telling you that it's gonna run it, and then that it ran it and how long it took.
It's also, in this case, because we've configured the request-IDs logger to log request IDs, actually logging the request ID that that particular interaction took. And it actually did it twice. Oh, because there's pagination in the Glance API: if we're gonna get the image list, it turns out sometimes that takes more than one call. It shows you all those things. So, great, and that's very exciting; just everything you've always wanted. That's the script it was just running: we want to get that one image from VEXXHOST. If I did the other thing, which is HTTP debug logging, it will do the same thing, except it's going to spit out a bunch of the HTTP traffic. So you can see here's the actual payload that was returned from the get-image call, and there are the URLs that it's calling. This is the sort of low-level HTTP library interaction. Obviously, this is not a logging level that you want turned on all the time, because you will not be able to read your logs; it would be completely useless for you. So, there's a set of cloud regions. As I mentioned earlier, you've got to have both a cloud and a region to be able to do any operations. So in this case, in order to do a multi-cloud thing, I've got a list of tuples containing both a cloud name and a region name, and I instantiate a cloud object, using that helper method there, that is connected to that cloud and that region. Then we're going to upload an image. If any of you have ever tried uploading an image to OpenStack across multiple clouds, you will know that it is very hard, because there are at least three different ways to do it, one of which has parts of the API that aren't fully documented, and there's no way to know which one of them a given cloud needs. We've done all that work for you. We will figure it out to the best of our ability and do the right thing.
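The cloud-region loop just described has this shape. In real code the connection comes from shade's openstack_cloud helper; here it is stubbed out with a local function so the structure runs anywhere, and the cloud names and region names are illustrative:

```python
# In real code you would do something like:
#     import shade
#     cloud = shade.openstack_cloud(cloud=name, region_name=region)
# Here a stub stands in for that helper so the loop itself is visible.
def openstack_cloud_stub(cloud, region_name):
    """Stand-in for shade.openstack_cloud: returns a label, not a connection."""
    return "connection to {}/{}".format(cloud, region_name)

# You must have both a cloud and a region to do any operation,
# hence a list of (cloud, region) tuples.
cloud_regions = [
    ("my-vexxhost", "ca-ymq-1"),
    ("my-citycloud", "Lon1"),
    ("my-internap", "ams01"),
]

connections = [openstack_cloud_stub(cloud=name, region_name=region)
               for name, region in cloud_regions]
for conn in connections:
    print(conn)
```

From here, every later operation (upload an image, create a server) is just a method call on each per-region connection object, which is what makes the multi-cloud script so short.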
I'm going to make a suggestion here that's probably controversial to a lot of people: if you want to do multi-cloud things, always build and upload your own base images. You can download them from a vendor that's making operating systems and just upload exactly what they've got, but the main thing is that otherwise you have absolutely no way of knowing what the images are, and that breaks down in a couple of ways. You have to go through machinations to find what the image name is for the thing that you want to run on that cloud -- if you look at the example in a second, we'll see that. Images with the same content are named differently on different clouds. Images with the same name on different clouds can have different content, which is a lot of fun. So if you upload your base image yourself, then you know what the base image content is and you can manage it from there on forward. Flavors are also named differently across clouds. I don't really have any good way around this; I'm sorry. We do have a method called get_flavor_by_ram, so rather than using a flavor name you can search for things. This doesn't always work, because you can have four different flavors with four gigs of RAM -- so if you say "I just want a thing with this much RAM", which one are you going to get? I don't know. So that gets tricky. There's not really a great solution, because there are valid reasons for there to be different flavor names. You just kind of have to deal with that, and I can't fix it for you. And so finally, we're going to create a server. This does -- oh, I formatted that for you, I apologize -- this is going to do three different sets of actions on those different clouds because of the configuration in the clouds.yaml file.
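The get_flavor_by_ram idea can be sketched in a few lines of plain Python. This is a toy reimplementation over made-up flavor dicts, not shade's actual code; the `include` substring filter mirrors the real method's way of disambiguating when several flavors share a RAM size:

```python
def get_flavor_by_ram(flavors, ram, include=None):
    # Smallest flavor with at least `ram` MB whose name contains `include`.
    candidates = [f for f in flavors
                  if f['ram'] >= ram
                  and (include is None or include in f['name'])]
    if not candidates:
        raise RuntimeError('no flavor with at least %d MB RAM' % ram)
    return min(candidates, key=lambda f: f['ram'])


flavors = [
    {'name': 'm1.small', 'ram': 2048},
    {'name': 'm1.medium', 'ram': 4096},
    {'name': 'highcpu.4G', 'ram': 4096},
    {'name': 'm1.large', 'ram': 8192},
]
```

Note the ambiguity the talk describes: asking for 4000 MB here matches two flavors, and without `include` you simply get whichever sorts first.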
On vexxhost, it's going to boot the server and wait for it to be active, because that's all it has to do on vexxhost. On Internap, it's going to boot the server, but it's going to give the boot call the network parameter inapp17037-wan1654, because that's what we said was the default network in that clouds.yaml file, and then it's going to wait for the status to be active. On Citycloud, it's going to boot the server and wait for the status to be active; then it's going to find the Neutron port for the fixed IP of the server; then it's going to create a floating IP on that port; then it's going to wait for the floating IP to attach; and then it's going to tell you that the server has been booted. Those things all happen because auto_ip is true. If you want to manage your IP address allocation and attachment yourself, pass auto_ip=False -- it defaults to true, because a server you can't reach could be a very confusing experience for somebody. If the thing that you want is "I want this server to exist on the internet" -- which is probably one of the more common things if you're trying to run a workload across multiple different clouds, because probably at least some of them are public -- then this helps immensely. I don't recommend trying to manage that yourself; it sucks. And look at that, it's a demo and we didn't even deploy WordPress. I'm very confused. So this is a different version of this that I can run to show you the whole thing in action -- rather than uploading an image and rather than requesting a flavor by RAM, it has both of those things listed out. You can see that the Ubuntu Xenial image on these three different clouds is named three different things. All of those are reasonable names; there's nothing wrong with any of them -- they're all clear to a human. They're not clear to a script. They're not clear to automation, which can't tell that all of those are the same image. So, you know, that's life. We'll talk about that later.
So we've just listed these out -- it also turns out that just having a list of images and flavors isn't the world's worst thing; there are worse problems to have in life. So we're going to create the cloud, and then we're going to create the server. And if I do that, you'll see a bunch of lovely things going on. We're going to go check out some things about networks. We're going to get lists of images to look through. We're going to get lists of flavors. We're going to create a server. And then we're going to sit here polling the server to see when it's done and active, because this is vexxhost, and that's what we need to do there. We're waiting between polls, because just polling with no waits would suck. Oh, and there's the server. And then we're going to delete it, because this is a demo and I don't really want to just leave the server around. In your orchestration scripts, deleting the server immediately after creating it probably isn't the best strategy for a working workload, but you can do that if you want to -- no reason why not. Incidentally, if you look, some of these say things like "task network get subnets" and some say things like "task server create". We are in the midst of a process in shade of removing the use of all of the OpenStack Python client libraries and switching to making direct REST calls. You can tell which of the calls has been migrated and which hasn't by how it logs to the debug log, just because of code structure. We didn't do that so that you could tell; you shouldn't need to know the difference. If you need to know the difference, then we've done something horribly wrong with the transition, but it's a thing to point out. So we've now successfully created the same server on three different clouds, live, in a demo, at the OpenStack Summit, on conference Wi-Fi even. It is worth noting two things -- oh, not in that one.
The next slide should be after the next slide. There's a thing in here where you see we're passing a name -- an image name and a flavor name -- and we're just passing them to the create_server call. All of the places inside of shade where you say "hey, I want to do this by a human-readable string" will match against both name and ID correctly. It's all name-or-ID, and it'll figure it out. Except on the one cloud that this guy told us about a couple of days ago, where in the flavor list there are four flavors whose names are the IDs of four of the other flavors. Please, God, don't do that. That's a terrible idea. There is a mitigation for it, however. So, I mentioned delete_server; you can do that. It's worth noting that on the delete_server call there's a delete_ips flag. If you were having shade auto-manage IPs attached to your server, you can pass delete_ips, which will delete any IPs that happened to be attached to the server. Again, if you're reusing a floating IP across multiple servers or whatever, you probably don't want to do that. But if what you're doing is creating and destroying servers, and you don't care whether it's a fixed or a floating IP you're getting -- you just want a darned IP address however that cloud decides to provide it -- then delete_ips is essentially the inverse of auto_ip. You can also -- if you're in the situation our friend with the weird flavor list was in, or you just happen to know that you've got an ID and you know it's an ID (a lot of times in a config you don't know whether you got an ID or a name, which is weird, but sometimes you know) -- pass a dictionary. It can have anything in it you want, but if it has an id field with something in it, shade will know you have an ID, and it will not try to figure out the ID of the resource for you, because you've said: I'm giving you an object that has an id field in it; use that.
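The name-or-ID-or-dict resolution behavior described above can be sketched like this. It's a toy version, not shade's implementation -- the resource dicts are made up:

```python
def resolve_id(ref, resources):
    # A dict that already carries an 'id' field is trusted as-is.
    if isinstance(ref, dict):
        if 'id' in ref:
            return ref['id']
        raise ValueError("dict reference must contain an 'id' field")
    # Otherwise match the string against both name and ID.
    matches = [r for r in resources if ref in (r['id'], r['name'])]
    if len(matches) != 1:
        raise ValueError('expected exactly one match, got %d' % len(matches))
    return matches[0]['id']


images = [
    {'id': 'a1b2', 'name': 'ubuntu-xenial'},
    {'id': 'c3d4', 'name': 'debian-9'},
]
```

This also shows why the pathological cloud above breaks things: if a flavor's name equals another flavor's ID, the string lookup matches two resources, and handing in a dict with an explicit id field is the escape hatch.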
So shade will do the right thing. You can pass a name, an ID, or an object -- a dictionary -- to most of the things in shade that take a reference to some other thing, and it will do the right thing for you. Related to that, it's worth pointing out that we don't return dictionaries -- actually, I lied. We return munch objects, which are basically almost exactly like dictionaries, except you can also use attribute notation on them; they're kind of like objects in JavaScript. I'm not going to actually run that demo -- let's assume you can read from the code that I'm going to get an image and print its name using both attribute notation and dictionary notation, and it's going to work. I would like to point out that this is pointing at a cloud called Zetta, which is yet another public cloud, in Norway. In fact, all the rest of the examples are on additional clouds, so by the end of me running through the examples you will have seen probably twelve different OpenStack public clouds that all work. But I'm running out of time -- in fact, I'm over time -- so I'm going to get to the interesting things. Other things to be aware of: every resource has, probably, all of these equivalent methods. There's a list_servers, which will get you the entire list of all of the servers in the cloud. There's a search_servers, which is, as you might imagine, a way to get a subset of those servers matching some criteria. There's an insane number of ways in which you can filter that, including dictionary matching, fnmatch wildcards, and JMESPath expressions if you know how to work JMESPath. I think that's really cool, and I still haven't been able to use it in anger, but it's kind of neat. And there's get: get will fail if you tell it to get a server and it matches more than one server -- it will throw an exception. If you think you might get one or more servers, use search; search is you saying it's okay for there to be more than one thing that matches.
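Those get/search semantics, and the munch-style results, can be mimicked in plain Python. AttrDict here is a tiny stand-in for the real munch class, and the search/get functions are toy versions of the pattern, not shade's code:

```python
class AttrDict(dict):
    # Stand-in for munch: a dict that also allows attribute access.
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)


def search_servers(servers, name_or_id=None):
    # No filter means "everything"; otherwise match name or ID.
    return [s for s in servers
            if name_or_id is None or name_or_id in (s['name'], s['id'])]


def get_server(servers, name_or_id):
    # get is strict: more than one match is an error, zero matches is None.
    matches = search_servers(servers, name_or_id)
    if len(matches) > 1:
        raise RuntimeError('more than one server matched %r' % name_or_id)
    return matches[0] if matches else None


servers = [AttrDict(id='1', name='web'),
           AttrDict(id='2', name='web'),
           AttrDict(id='3', name='db')]
```

Calling get_server with 'web' here raises, exactly the situation where the talk says you should have used search instead.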
There's create_server, which creates things; there's delete; and there's update, for updating things. Pretty much every resource in the system supports those. If you're a normal user, some of them may not make sense for you -- you can't create flavors -- so those will be hidden. The helper method for creating a cloud is openstack_cloud. There's also a helper method called operator_cloud, which gets you a subclass of OpenStackCloud that has more methods on it -- things that we know are only relevant to operators. This was one of the other annoyances we had in using our OpenStack clouds early on: you'd do a help listing on a command-line tool and it would show you all of these neat things, and you'd go "I'd like to do that -- oh, that's an operator-only command, crap." Sometimes it's not possible to differentiate those things precisely, but we try our best to stick operator-related things into their own bucket so that you have a better chance of not attempting something you're not allowed to do. Other things in shade are all named verb_noun: there's attach_volume, wait_for_server, add_auto_ip, et cetera. I've got a cleanup script in here you can read that just cleans things up. Another thing that's really important to note is that we do this thing called normalization. Depending on what version of OpenStack you're connecting to, payloads are different. Things get renamed -- because reasons -- or just rearranged, as in the case of Glance v1 to v2. And so we have a data model that we commit to from shade. If it's listed in that documentation, we commit that we will always return the values that are in the documentation. Even if a subsequent version of an OpenStack service stops returning something, we will at least put a None in there, or something like that -- that is an absolute contract for us.
And so on the Fuga cloud, if you look at normalization -- this is going to get a server dict and show you an image dict. So this is a normalized image dictionary. You can see we've added a field called location, which has information about what cloud this came from. So if you were, say, doing a loop across all of your clouds, doing a list_servers, and amalgamating them, then for each individual object you would know how to trace it back to the cloud that it came from, or to the project, or whatever. There are a few other things; you can read the documentation on that. You can also pass the cloud constructor a flag called strict -- this was inspired by Perl, for those of you who have any background in that. Strict will return to you only the things that are in the data model. By default we pass through all the attributes we don't know about, because you might be depending on them and we don't want to accidentally break you in a fit of purity. But if you say strict, we will return only the things that we know about, and we'll stick the rest into a properties field -- so if there's additional stuff, it shows up as properties. That's useful for making sure your script isn't accidentally depending on something that's not in the contract. So I've got another thing, but I'm massively over time, so you can look at the examples here. There's a utility script -- this is how I found the image names for the other examples, where I just pasted image names into a list: I ran a simple thing that listed images and looked for a name in them. Servers are different, and this will probably be one of the last things I can say before I run out of here on a rail. Servers are tricky, right? Servers are one of the fundamental pieces you're interacting with as a consumer of the cloud, and they have some especially difficult problems, because there are things like: well, you've got a server -- how do you connect to it?
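A toy sketch of that normalization-plus-strict behavior: the KNOWN_KEYS tuple is a made-up stand-in for shade's documented data model, and the function is illustrative, not shade's implementation:

```python
KNOWN_KEYS = ('id', 'name', 'status')  # toy stand-in for the data-model contract


def normalize(raw, cloud, region, strict=False):
    # Contract keys are always present, even if the cloud omitted them.
    out = {k: raw.get(k) for k in KNOWN_KEYS}
    out['location'] = {'cloud': cloud, 'region_name': region}
    extras = {k: v for k, v in raw.items() if k not in KNOWN_KEYS}
    if strict:
        out['properties'] = extras   # quarantine unknown keys
    else:
        out.update(extras)           # pass through, to avoid breaking callers
    return out


raw = {'id': 'x1', 'status': 'ACTIVE', 'vendor_weirdness': 42}
```

The two modes are the two promises described above: non-strict never drops anything a cloud gave you, strict never gives you anything outside the contract except under properties.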
Well, good luck with that. Finding the server's public IP -- or private IP, if you're in a private environment -- is an exercise in madness, and if you want to see how mad, go read the code that's in shade/meta.py; after that you'll probably gouge out your eyes, because it's some bad stuff. So we do a few things. We add some additional information. We add a field called interface_ip, which is, as best we can tell, the best IP you should use to connect to the server. There are ways to influence how it picks that in config, but it will do things like auto-detecting whether your current execution context has routable IPv6 connectivity, and whether the server has IPv6 -- and if so, it will put an IPv6 address into that field. Look at this: we're re-implementing DNS, except client-side, in Python, because we don't have DNS as a first-class citizen in our compute service. But anyway, we do our best to do all of those sorts of things. Also, Nova's address metadata can get out of sync, especially in several of the releases that are out there in the wild. It goes stale, and so you get failures spinning up and waiting for a server -- because part of waiting for a server is: does the server actually have an IP address? Did this thing boot with any connectivity at all? You get these servers back with an empty addresses dictionary, but it turns out the server has IP addresses and they're fully functional -- it's just that Neutron is managing them and Nova is behind. So we give up on Nova's addresses dictionary: we query the addresses directly from Neutron, and we overlay them onto the Nova server record for you. There are a few other things we do. Those do result in extra API calls, so if you're particularly sensitive to that for whatever reason, there are a couple of flags to turn them off. There are basically three levels. There's regular, which is detailed.
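The interface_ip selection might be sketched roughly like this -- a heavy simplification of what shade/meta.py actually does, using the public_v4/public_v6/private_v4 field names that shade's normalized servers carry; the selection order here is an assumption, not the real algorithm:

```python
def pick_interface_ip(server, local_ipv6=False):
    # Prefer a global v6 address when this client can route v6,
    # then public v4, then fall back to a private address.
    if local_ipv6 and server.get('public_v6'):
        return server['public_v6']
    return server.get('public_v4') or server.get('private_v4')


server = {'public_v6': '2001:db8::1',
          'public_v4': '203.0.113.5',
          'private_v4': '10.0.0.5'}
```

Even this toy version shows the shape of the problem: which address is "the" address depends on where the client is sitting, not just on the server.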
You can turn detailed off, which will not add additional information, but will still fix stuff that we know is broken, to the best of our ability -- so we minimize things. Or you can say bare, which says: please, for the love of God, don't make the other calls, I don't care. We use this a lot inside of shade itself, in places where we're just grabbing a server to do something else with it and we know we're not going to be looking at any of that extra information -- like polling whether a server is ready. Guess what? You don't also need to ask Neutron what its addresses are if you're only going to look at the status field in the poll loop. So that's a thing. Those are some examples; you can look at that. We throw exceptions; they're all subclasses of an exception called OpenStackCloudException, so you're always safe to catch OpenStackCloudException -- it will catch any exception that we throw. The one caveat is that it is part of our API that we consume keystoneauth for the HTTP interactions, so there are some possibilities that keystoneauth could throw an exception, and we don't hide those, because we're okay with them being part of our public interface as well. Those are mostly authentication related -- like, you didn't provide a password, you ninny. Those sorts of things might be keystoneauth, but for the most part OpenStackCloudException will cover you. For REST calls that we make directly, there's a subclass of OpenStackCloudException called OpenStackCloudHTTPError, which also subclasses the requests library's HTTPError. So it has all of the features and functionality of the requests exception stack, but if you wrote code for shade before we were doing REST calls and you were catching OpenStackCloudExceptions, you will still catch them -- which we thought was pretty good.
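The exception hierarchy can be illustrated with a dependency-free sketch. These are toy stand-ins -- the real classes live in shade.exc, and the real HTTP error also mixes in requests' HTTPError, which is omitted here to keep the sketch self-contained:

```python
class OpenStackCloudException(Exception):
    """Toy stand-in for shade's base exception."""


class OpenStackCloudHTTPError(OpenStackCloudException):
    # The real class additionally inherits from requests' HTTPError.
    def __init__(self, message, status_code):
        super(OpenStackCloudHTTPError, self).__init__(message)
        self.status_code = status_code


def fail():
    # Simulate a REST call that came back with a 409 Conflict.
    raise OpenStackCloudHTTPError('Conflict', 409)
```

The property being demonstrated is the compatibility promise: code that only knows about the base class still catches the newer, more specific HTTP error.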
You can inject user-agent information, so if you're writing an application and you want it to show up in the user-agent string -- this example works with DataCentred, which is based out of Manchester. It's a simple thing that was added recently to keystoneauth: if you do it and look at the HTTP interaction, you can see "my amazing app" showing up in the user-agent string, and the os-client-config version, and shade, and keystoneauth, and python-requests, and CPython all get put in there too. If you don't add anything to the user agent, all of those other things will still be in there, but amazing-app/1.0 will not be in the user-agent string -- in case that's important to you. Uploading large objects: if you're uploading objects to Swift, Swift has a max file size. It's expressed in the Swift capabilities that you can get from the Swift capabilities URL, and if you want to upload a file that is larger than the Swift max file size, you need to split it into chunks and upload it as a Swift large object. There are two different ways to do that. We hide them all behind the create_object call: if the file is too big, we will automatically split it into chunks for you, and we will upload them in a multi-threaded fashion. As far as a demo goes, I can't really show you that, because the default max file size is five gigs, and there's no way I'm going to upload something larger than five gigs. You can also explicitly specify the segment size to the create_object call, and that is the size it will chunk whatever you're uploading into. There are also flags for static large objects and dynamic large objects. If you know enough about Swift to care about which one of those you're getting, you can request one. If you don't do anything, it will just make a static large object for you.
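The chunking decision boils down to arithmetic, and can be sketched like this -- a toy version of the split logic, not shade's actual uploader:

```python
def plan_segments(total_size, max_file_size, segment_size=None):
    # Returns None when a single PUT is fine, otherwise a list of
    # (offset, length) pairs describing the large-object segments.
    if total_size <= max_file_size and segment_size is None:
        return None
    size = segment_size or max_file_size
    segments = []
    offset = 0
    while offset < total_size:
        segments.append((offset, min(size, total_size - offset)))
        offset += size
    return segments
```

Each (offset, length) pair in the plan would then become one segment upload, and those uploads can proceed in parallel threads since the segments are independent.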
One of the features of a static large object is that when you go to delete the logical object referred to by the static large object manifest, it will delete all of the segments, whereas with a dynamic large object you kind of have to clean up after yourself. So we default to the thing that will delete all of the pieces when you think it's going to delete all of the pieces -- since we magically made a large object without telling you, that seemed like the right call. On KisCloud I can show you that the cloud has the network service -- so KisCloud runs Neutron, and the has_service call in shade will tell you that. For the most part we feel that you shouldn't need to make those types of conditionals in your code; if you do, we're not doing something fully right. But if you're doing some analysis of whatever, and knowing beforehand whether the cloud has Magnum or not matters -- because more clouds have Neutron than Magnum these days -- there's an API call that will do that, and it both looks in the service catalog and honors any override you've done in your config. So you can override those. In this example I'm picking on Rackspace: they put Neutron into their service catalog, but you as a user can't actually talk to that API. So service discovery will let you believe that there is a network service there, but it is in fact lying. That was actually the reason we added that feature -- so that a user could put in their config: nope, nope, I promise this cloud doesn't have Neutron; I know it says it does; it doesn't. So that is a way to override that. And wow, I only went over by 20 minutes -- that's exciting; I'm sure the next speaker loves me. So these are the sort of coming-soon things. We're almost done with the restification process, which incidentally has taught me that the OpenStack REST APIs are actually way better than they get credit for. The Python client libraries make everything harder.
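The has_service-plus-override behavior reduces to something like this sketch -- a toy catalog check, not shade's real config plumbing; the catalog and override shapes are invented for illustration:

```python
def has_service(catalog, service_type, overrides=None):
    # A user's config override always wins over what the catalog claims,
    # which handles clouds whose catalog advertises services users
    # cannot actually reach.
    overrides = overrides or {}
    if service_type in overrides:
        return overrides[service_type]
    return service_type in catalog


catalog = ['compute', 'image', 'network']
```

The second assertion below is the Rackspace scenario from the talk: the catalog says "network", the user's config says no, and the config wins.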
Most of the code has gotten much cleaner since we ditched the Python client libraries. I highly recommend that if you're not going to use shade, and you're not going to use the OpenStack SDK, just make REST calls. There are a couple of places where it's a little bit squirrely, but in general there are some really rich features available there that get hidden by the client libraries, which is pretty terrible. We're currently working on specking out, for all of OpenStack, the right way to do version discovery. We do full version discovery for images and volumes because we have to -- we have to do special things for those -- but that has led us to realize that there is absolutely no documentation for API consumers on how version discovery works in OpenStack, and certainly no documentation on how it should work. So we're working on a document right now that says: this is how it works; anybody implementing a thing in any language can implement this and you'll get it right. And then hopefully we'll move that forward to something that works less complicatedly. If you want to read something that's really complicated, go read that spec in the API working group -- actually, I don't recommend reading it; it's pretty wild. But we're hopefully going to get that implemented very soon, because I need it: it's a precursor for being able to consume microversions in the services that provide them. You have to be able to do version discovery to figure out what microversions are available so that you can use them. Anyway, we'll get microversion support in soon. There's a talk tomorrow on shade that will cover that. There's also a caching tier in here, which is opt-in -- it's off by default, but you can configure it to cache a whole bunch of things using dogpile.cache, so you can use memcached or Redis or whatever you want. We need to map that to more things.
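As a sketch of what that opt-in cache configuration might look like in clouds.yaml -- the key names here follow my reading of os-client-config's cache support at the time and should be verified against its documentation before use:

```yaml
# Hypothetical cache section for clouds.yaml; values are illustrative.
cache:
  class: dogpile.cache.memcached   # any dogpile.cache backend
  arguments:
    url: 127.0.0.1:11211
  expiration_time: 3600            # default TTL in seconds
  expiration:
    server: 5                      # servers change fast; cache them briefly
```

The idea is per-resource TTLs: slow-changing things like flavors and images can be cached for a long time, while fast-changing things like servers get a very short TTL or none at all.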
There's a multi-cloud facade layer coming, so that you can just get a single object and say "hey, multi-cloud object, list servers", and it will, in threads, go fetch the server lists from every cloud in your list in parallel, stitch them into a single list, and give it to you. We just need to write that -- it's not that hard, but it'll be a thing we're going to do soon. And also, we're very friendly, and we could use some more developers' help. So if you like hacking on client consumption libraries and solving interoperability problems by working around them, we would love to have you come hack with us. And thank you for listening to me babble entirely too quickly. I didn't show you enough examples, and I blame my laptop and something in the world of projectors for the delay. Anyway, thank you very much.