Oh, so now I'm nice and loud and booming, which, for those of you who know me, is completely unnecessary; I'm always loud and booming. We are going to be talking about shade here. If you are here because the sign outside said something about IoT, this is not what we're going to talk about. You're still welcome; I'd love to talk to you about shade. It probably won't teach you anything about IoT. It may not teach you anything about anything, but it definitely will not teach you anything about IoT.

These slides are online if you want to get at the content; that's the URL. You can also reach me on Twitter, with all the Twittering, if you're into that sort of thing. You can also grab the source code; it's the source code for my website, so you get the talks and also the rest of my personal website. Great, and good for you. I'm not sure exactly what you'll do with that, but it's available. Yay, open source. So go nuts as you feel like doing.

For those of you who don't know me, I work for a company called Red Hat. Something with Linux; I'm not really sure, you may have heard of them. We have a lovely logo there, and it has a name: Shadowman. So we all have to like him. I believe I have the appropriate amount of white space around the logo; there are guidelines for that, but I'm not a graphic designer, so I might have gotten that wrong. I work in the Office of Technology, which is sort of like the office of the CTO, but we don't have a CTO.
So you can't really have an office of the CTO; it's the Office of Technology. I work on a project called Zuul, doing exciting things in the CI/CD space, and I'm also closely associated with our fine friends in the Ansible organization; they were involved in getting me to Red Hat in the first place, so we like them. They're great. All that type of stuff, from an OpenStack context, because, hey, this is the OpenStack Summit; we're all OpenStacking all the time. I sit on the Technical Committee of OpenStack, and I am also on the Infrastructure core team. And I realize, actually, right at this moment, there's a bug in these slides: I probably should list on the slide that I am the PTL of shade. Given that's a thing, and this is a talk about shade, it's possibly relevant. So you'll just have to get that from the audio version. That is how that works.

So we're going to talk about a few things. I gave a talk yesterday that you can't now attend, because it was yesterday, unless you can travel backwards in time; and if you can, please see me after the talk, because I'd really like to learn how to do that. The talk yesterday was about how to use shade; it was a bunch more example-type material. Today we're going to talk a little bit more about what shade is and why it is: the motivation. There will be some examples of stuff, but hopefully more focused on what problem is being solved and why we have a library at all; because talks without examples are boring. We will also get into some of the more advanced things towards the end: caching, the task management interface, and a few other bits. So that's the general overview. I will also warn you that I have a strong tendency to ramble and go over time. I will, as always, attempt to not do that.
I will almost certainly fail, so sorry in advance if, like yesterday, I go over by 20 minutes. But I started late yesterday, so hopefully today I'll only go over by five.

So what is shade? Shade is a Python library. It wraps business logic around OpenStack resources and operations. It has a few design principles. It exposes a single API that works on all clouds: as a cloud user, I find it very frustrating when I have to know the deployer choices that were made and have if-conditions in my code so that my code works differently on the different clouds. So it hides all of the vendor and deployer differences that it possibly can. It is explicitly written to support multi-cloud use: sort of write once, run anywhere, but I'd like to do that better than Java did. It is hopefully simple to use, with sane defaults; I will talk about some things you can do that are more complicated. It does not have plugins. There are no shade plugins, and there never will be shade plugins. If there's an OpenStack service, it is welcome to add its code to shade. It turns out clouds have services, and in general it is a fair thing to ask a user to check whether a service exists on a cloud; so at the service level, I'm fine with that. It is provably efficient at scale. And we make some claims about the API always being backwards compatible. As with every human endeavor, humans make mistakes; it is our intent not to ever break the API, but I'm not perfect. If we do break shade's API, and it breaks you, please let us know. We will consider that a massive, massive, massive problem, and we will fix it. Breaking the API intentionally is not okay. If we find the need to rename things or whatever, we'll just keep aliases in the code. That's our cost to bear; it shouldn't be your cost. I actually should put another bullet point on here.
I think it's encompassed in this, but for what it's worth: as you might be able to tell from that description, shade does not aim to expose a direct, well-crafted interface around the OpenStack REST APIs themselves. If that is a thing you're interested in, you should check out the OpenStack SDK project, which is tied to exposing those APIs to you well. Those are two different consumption models, depending on what you're wanting. We make some different choices than the OpenStack APIs do: if the OpenStack API has a thing that's named something, or has a behavior, and we don't like it, we don't do it. That is a freedom that we've decided we have. So if that kind of mismatch with the published REST APIs bothers you, we are not the library for you, because that mismatch is important to us.

The source code is in OpenStack infrastructure, as you might imagine. You can also get it on PyPI, although I hear I'm supposed to pronounce that either "Pie-Pee-Eye" or "cheese shop," but I can't get myself to do that. So you can pip install it, and it should work pretty well. It's used behind Ansible, so if you use OpenStack from Ansible, you are using shade. It's used in infra's nodepool, which is a different nodepool than eBay's, just as infra's Zuul is different from Netflix's Zuul. We seem to have projects named the same things as large Bay Area companies' projects; I'm not really sure what's up with that. But it is used in infra's nodepool, which is why I can assert that it works very well at massive scale. It is now its own official OpenStack project.
It has always been worked on in OpenStack, because it was birthed in the infra team, but we recently, this cycle, moved it into its own governance entity, which is why I forget to mention that I'm the PTL of it. I still think of it in my own brain as being a smaller part of a bigger effort, which in many ways it is; the infra cores all still have core on shade, and we will keep that. In terms of current status, we've been working on converting from using the OpenStack Python client libraries to making direct REST calls. That's almost done; I will talk about that more in a little bit.

So why did we write shade? There are other libraries; why did we write another one? I dropped this line in a talk in Tokyo, and I decided that I would just reuse it and keep it here, because we all love talking about branding at tech conferences. The idea is that everybody likes to go be unique, but actually the profit really comes from things being consistent. I want to use lots of different OpenStack clouds; I think that's the value of OpenStack, and I want to be able to use them all the same way. Unfortunately for all of us, and there are historical reasons for this, which, if you find me in a bar, it will be very difficult for you to prevent me from telling you historical stories about; I am in many ways the old grandpa of OpenStack, sitting around by the fire telling yarns of back in the day. But: we leak abstraction layers. There are things that we expose where you have to know something about the deployment, and that's a bit bad. We also break APIs from time to time.
We're getting better about this. We're getting better about versioning. I think the microversions work that the projects are rolling out is actually fantastic. I promised some folks back in Atlanta that I would write a blog post about exactly why I think it's so fantastic, and I have written exactly none of that blog post yet, so suffice it to say I think there's a lot of progress being made there. There are some basic concepts that are needlessly complex; Clark Boylan has a session later this afternoon in the Forum about some of this. He and I can trade off complaining about this particular topic, so that'll be a fun session.

The client libraries: one of the things that's become really clear to me from this work is that they look like things that were written for you to use. They are not. They are very clearly written with the primary purpose of the services talking to each other, and they're reasonably good at that; but the design assumptions made in them are very much tied to Nova talking to Glance, not me talking to Glance, and they work pretty well for that.

In our world in infra, and I'm stretching a little bit, we run across a lot of clouds, and at pretty massive scale. We spin up and tear down around 20,000 servers a day in service of the OpenStack development effort, and it turns out that takes a lot of effort, and we learned some things about using the OpenStack APIs. We thought that sharing that with the rest of the world, and not just keeping it to ourselves in our nodepool project, would be a nice thing. Also, I started hacking on the OpenStack support for Ansible, and I was like: wow, I'm just re-implementing all of this logic that we have in nodepool. That's terrible; other people should be able to use that. So this is a very simple example of
using shade. It uploads an image to Vexxhost and then boots a server on it, with a public IP, and waits for it to be done. That's it; that does all of the things, and it works. I'm not going to run it for you, because uploading an image to a public cloud over conference Wi-Fi is a really terrible idea, no matter how good the conference Wi-Fi is, or the cloud; that's not going to be very enjoyable for anybody to watch. But it's nice to be able to say that if you have a Vexxhost account, that script should work just fine for you.

We said originally that the existence of shade is a bug, which is a slightly flippant way of saying that it would be really great if the problems shade is working around were fixed in the OpenStack APIs themselves. I still agree with that. We've got several sessions this week where we're talking with deployers and other people about solutions that could move the state of the art forward. But I actually think we've changed our minds a little over the last couple of years. It may be that we have some issues with the APIs, but if we can provide our users with a way to consume things, then great; that's at least a step. And maybe over time we can get to the point where shade doesn't have a bunch of logic in it that other things don't also have, because we've added discoverability features to OpenStack, and we've made sure we're exposing things so that other language ecosystems and other libraries can all share the same definition of what "right" is, clearly and concisely. That's the effort of a lifetime; we're going to be working on that forever. But we can definitely get better at it, and in the meantime, we don't have to sit around waiting for all of that work to be done.
We can move things forward. We can provide some help to people today. And if that works for you, great; and if it doesn't, you know, great too. It doesn't subsume any of the other work.

So, you've decided that sounds great to you and you would like to use this. Neat. Step one in using anything is configuration. I'm personally a fan of things that require no configuration, and if you have to have some configuration, I prefer the minimum a user has to do. It turns out it's impossible to have no configuration, because at some point you've got a username and a password, and a URL: what's the auth URL for your cloud? So there's a bare minimum. But there's a whole bunch of other things you might need to express about your cloud. So we've got a project called os-client-config. It's actually part of the python-openstackclient project; the command-line client is the governance home of os-client-config as well. It's a library to handle configuration information for OpenStack clients. Among the things it does, it also tracks differences in vendor deployments. These are mostly public cloud vendor deployments; it's kind of weird to put information about somebody's private cloud deployment into a public git repository. We have some thoughts on how to improve that in the future. So it keeps a bunch of defaults in-tree about things that we've discovered. Because Ansible uses shade, os-client-config is used in shade. python-openstackclient, as you might imagine, given that it's the governance home, uses it, as does the OpenStack SDK, and some other things. Those are some of the main ones.
It reads a config file called clouds.yaml. It can also process environment variables and argparse arguments, should you wish it to do so. Here is an example clouds.yaml file. This is for my City Cloud account. City Cloud is a lovely public cloud running OpenStack, based out of Sweden; you should check them out, they're great. My password, for what it's worth, is not a bunch of X's; that is redacted. But essentially, here I'm naming a cloud "mycitycloud" so that I can refer to it by name in other places. It refers to a known vendor profile, and that vendor is named "citycloud". In my actual clouds.yaml file I've named this cloud "citycloud" and it uses the profile "citycloud", but that's pretty bad for a demo and is confusing, so I renamed it for the purposes of the slide. So this is basically saying: use the settings that we know of for the City Cloud vendor. There are some defaults there, like the auth URL for City Cloud; it turns out that's the same no matter who you are, so you can just refer to them by name. And then here's the actual authentication information. It's worth pointing out, actually; I think I point it out seven slides later, so I won't point it out now.

Here's a slightly more complicated one. I added some settings here that are not needed for Vexxhost, just to show them. Vexxhost is a wonderful cloud run by Mohammed out of Montreal. In this one I'm saying: this uses the vendor profile for Vexxhost. I'm listing the regions in here, just so you can see that you can list which regions are available, which also means you can restrict which regions are available to you, if you want some validation.
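Since the slides themselves aren't captured in this transcript, here is a sketch of what the two clouds.yaml entries just described might look like. The key names follow os-client-config conventions; all credential values, project names, and the region name are placeholders:

```yaml
# ~/.config/openstack/clouds.yaml (all values are placeholders)
clouds:
  mycitycloud:
    # Pull auth_url and other defaults from the built-in vendor profile
    profile: citycloud
    auth:
      username: mordred
      password: XXXXXXXX
      project_name: my-project
  my-vexxhost:
    profile: vexxhost
    auth:
      username: mordred
      password: XXXXXXXX
      project_name: my-project
    # Optional: listing regions also restricts/validates which you may use
    regions:
      - ca-ymq-1
```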
It's not really a big win in doing that, but you can. And I've overridden a couple of things: I've told it that I want to use version 3 of the volume API, and which endpoint I'd like to use for images. You do not need to do either of those things on Vexxhost, but they are things you can express, and this is how you would express them in your clouds.yaml file, so that other things just work and do what you want them to do.

Here is an even more complex set of things you can do, and hopefully you don't have to do all of them. Internap also runs a public cloud. They have a vendor profile, but I did not use that vendor profile here, just to show you that you do not have to use pre-existing vendors; it's merely a convenience. So in this one I have listed Internap's auth URL directly in my config file, and I've got my auth information in there. I've also told it that I would prefer to use version 3 of the identity API, which is a bit weird, because I don't have any domain information in my authentication, so that's not going to do exactly what I think it's going to do; but I can express it. I've also told it that this cloud does not have floating IPs. Shade will figure out whether a cloud has floating IPs, and whether or not you need a floating IP, for you; but that does involve making several inquisitive calls to the neutron API to sort out what's going on. If you know that the cloud doesn't have them, you can say: nope, this one doesn't have that, please skip all of those attempts to investigate whether floating IP support is here; and then those calls will get skipped.

Internap does an interesting thing. When you create an account and get enabled in a region there, they create you your very own public and private network. Because of some things about Neutron, it is impossible for you as a user, without pre-existing knowledge, to tell purely from the neutron API which of these networks is the public network and which is the private network. As a human, I can tell, because one of them says WAN in the name and one of them says LAN; so as a human, it's not particularly confusing. But from a general API-consumption standpoint, if I want to tell the API "please boot me a server on the public network," it is impossible to know which one that is.

For what it's worth, there is a flag on neutron networks called router:external. You might think that communicates that the network routes packets externally. That's not what that flag does. That flag tells you that you can attach a neutron router to the network and fetch floating IPs from it. It also, incidentally, implies public. So when I asked a contact at Internap to please flip router:external to true on my public network, because that would allow me to discover that it was a public network, and he did, all of the rest of the tenants at Internap saw my network; but they couldn't use it, because it wasn't actually a shared network. So that broke some people. We reverted it really quickly, so I appreciate him doing that, and also that we all learned from that experience.

So in this case, we've added the ability to express in the config some additional information that can't otherwise be gleaned just from API investigation. There's a flag you can add that says whether the network routes externally. In this case, I have said that the WAN network has routes-externally true, and the LAN network has routes-externally false. In general, the intent behind that terminology is that "public" and "private" are vague if you're in a private cloud scenario. So it's about: does this network route packets off of this cloud, or only within this cloud? And yes, there is a question. [Audience question.] It is not the same; this is additional information that you are supplementing on top of the information that is going to be found from the API.
Yeah, it's fair that router:external is the existing flag; you're right, that is a fair point. Sorry about that. Yeah, it's terrible; I'm a very bad person. All right, well, we'll think about fixing that for next time.

Anyway. We've also indicated something else here. Both of those networks are going to show up in the network list for my account, which means that if I want to boot a server, in every case Nova is going to want me to tell it which network I want to boot on. And that's cool; if that works for your workflow, that's awesome, it's not hard to do. But in my case, I happen to know about my usage that, unless I say something else, I always want to boot on that network. So I have labeled it as the default interface, and if I do not give the shade create-server call a network list on this cloud, it will pick that network. This also points out that any of the settings that are in the file can be set on a per-region basis, inside of the regions list. I could also set those globally, but that wouldn't make sense, since each of those networks is only in the AMS-01 region. Hopefully, in most cases, you don't have to do things like that.

Also, we will process environment variables that start with OS_. We put them into a cloud named "envvars". It is in fact possible to override the name of the cloud that it generates for that, but don't bother; just know that they're in a cloud called envvars. We do not overlay them on top of other config, because in our very early days we attempted that, and it confused all of us. No one could ever correctly predict what the result would be, and weird, weird, bad things would happen.
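Putting those pieces together, the per-region network hints just described might be expressed like this; routes_externally and default_interface are the os-client-config network options being discussed, while the auth URL, credentials, and network names are placeholders:

```yaml
clouds:
  my-internap:
    auth:
      auth_url: https://identity.example.com/v3   # placeholder URL
      username: mordred
      password: XXXXXXXX
      project_name: my-project
    identity_api_version: '3'
    regions:
      - name: AMS-01
        values:
          networks:
            # WAN routes packets off the cloud; boot here by default
            - name: WAN
              routes_externally: true
              default_interface: true
            # LAN only routes packets within the cloud
            - name: LAN
              routes_externally: false
```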
So those go in there. So if you want to use a combination of environment variables and a config file in your life, that is the method available to you.

There's a thing that has confused people, so I thought I'd call it out here real quick: keystone has pluggable authentication. This may be a thing people weren't aware of, because most of the clouds just use the normal password auth, but there are other plugins, like OpenID Connect and SAML and all sorts of other great things, mostly related to federation. If you don't set an auth type, it's going to default to "password", which will do keystoneauth's best to auto-detect which backend it should use, based on the parameters that you have provided in the auth dict. The contents of the auth dict itself are essentially opaque. We're actually validating it probably more than we should, which is causing Dean no end of problems, and we are working on unwinding that so that it happens later, and when it should; so I apologize to everybody that may have caused problems for. But in general, the contents of that auth dict are completely dependent on what auth type you have. It's just a thing you have to know, and it's one of my issues with API usability that we haven't fully solved yet: you have to have a priori knowledge of which auth plugins your cloud uses so that you can connect to the cloud, which is a bit of a chicken-and-egg problem. So, yeah, we'll talk later in the day about some options to fix that.

It is important to point out that auth_type is not a member of the auth dict. It may seem like a thing that should go into the dictionary called auth.
It is not. It is a top-level setting that tells us how to process the auth dict. If that's confusing, I am sorry, but that is the explanation, and that is where it goes. Oftentimes, if you can't connect and you're getting authentication problems, it is entirely probable that you either have the auth settings up one level, or you have other settings in the auth dict that you thought you were using elsewhere. So be aware of that.

You can use this through python-openstackclient, via the config stuff. So you can do "openstack --os-cloud=my-vexxhost server list"; that's the name that I listed in the clouds.yaml. Or you can do the same thing by using the OS_CLOUD environment variable. There are two environment variables, OS_CLOUD and OS_REGION_NAME, that are selectors: they do not cause the creation of an envvars config entity; they help select which of the existing configured clouds you would like to use. So they may be useful to you, if you're running a bunch of openstack client commands one after another, to just set an environment variable to tell it which of the clouds you're wanting to use at the moment. That is available.

There's a command that shade ships called shade-inventory. There's a dynamic Ansible OpenStack inventory plugin that we ship with Ansible, and the code implementing it is actually all in shade, so we wrapped a really quick command-line client around it, basically just in case it's useful. You can use this to browse your things and see what's there. So this is the list of servers that I have. I think this is my IRC bouncer.
That's right here. And you can see a few of the things that we do in shade's data normalization. One of them, worth pointing out because I don't have a slide on it anywhere else: we add this location stanza to basically every object that we return from shade. It lists the cloud and the project information, and any region and zone information associated with that thing, so that if you do something like this inventory list, which is a single list of my servers across three different clouds in this particular case, you can know at any given point which cloud, which region of that cloud, and which project that object came from. It does make this a little more verbose, as we have some nested objects in here, like these security groups, which you'll notice also have locations in them; and the security group rules in the list also have locations in them. But that's just life; you're going to have a bunch of repeated location information. You can just ignore it if it doesn't really work for you; that's fine. But it's there.

So essentially, each OpenStackCloud object, which is the main object in shade, represents a region of a cloud; that's the essential unit of operation. You can share authenticated HTTP sessions amongst services in a single region of a cloud; you cannot do that across regions of a cloud. A neat offshoot of thinking about it that way is that, from a user perspective, there isn't really much difference between two regions at City Cloud, or a region at City Cloud and a region at Vexxhost; they're all just cloud regions. Which is kind of neat. It means that you ultimately can have a "cloud" with 30 regions in it, even if nobody is running one of them all by themselves. So it's kind of cool; I'm pretty into that.
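To make the location stanza concrete, here is a hypothetical, heavily abbreviated server record of the shape described above. The field names follow the talk, but the exact schema and all the values are illustrative, not copied from shade's actual output:

```python
# Hypothetical, abbreviated record of the kind shade-inventory might emit.
server = {
    "name": "irc-bouncer",
    "status": "ACTIVE",
    # Every object shade returns carries a location stanza like this,
    # so a merged multi-cloud inventory stays unambiguous.
    "location": {
        "cloud": "my-vexxhost",
        "region_name": "ca-ymq-1",
        "zone": "nova",
        "project": {"id": "abc123", "name": "my-project"},
    },
    # Nested objects (e.g. security groups) repeat their own locations.
    "security_groups": [
        {"name": "default",
         "location": {"cloud": "my-vexxhost", "region_name": "ca-ymq-1"}},
    ],
}

def cloud_of(obj):
    """Answer 'which cloud/region did this come from?' for any such object."""
    loc = obj["location"]
    return loc["cloud"], loc["region_name"]
```

With records shaped like this, cloud_of works identically on a server, a security group, or a rule, which is the point of repeating the stanza everywhere.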
So this is the simplest way to get a cloud object. We have a helper method inside of the shade package called openstack_cloud that'll do all the things that you need. If you only have one cloud in your configuration, this will just work; it will do the right thing. It doesn't require you to specify which cloud you want if it is obvious, because you only have one. If you have more than one, but you use one of them a lot, and you're tired and just trying to do a few things, you can add a section to clouds.yaml that sets a default cloud, and that will cause this to pick the right one: simple cloud construction, from shade's perspective. If you have more than one and you want to select between them, you just tell it which one. So this is selecting the Beijing region of UnitedStack's public cloud, which, by the way, is a lovely public cloud run in China. Because it turns out we have public clouds all over the world, which is really cool, run by people in those locations, which is even cooler.

You can get more complicated if you want to. It's possible that you may want to use more configuration facilities, for reasons; we do this in a couple of places ourselves. So you don't have to use that helper factory function. If you don't, it's a little bit more work to get things up and going, which is why there's a helper factory function. But you can directly grab a config object from os-client-config, manipulate it, and pass it directly to shade's constructor, and you'll be good to go.

We use Python logging. It's a Python library, so yay, we use the standard logging stuff. We have another helper function called simple_logging. Since shade is a library,
it would be very inappropriate for it to set up loggers and whatnot at constructor time, since those are things that applications are supposed to do. If you're setting up logging that way in the constructors of your libraries, please stop, because it makes it harder for people who are using your libraries from their applications to set up logging the way that they want in the application. But that said, setting up Python logging can be complicated and annoying if you basically just want some logging real quick. So we made a helper function that applications, or scripts, or whatever you're writing, can use to just turn it on and set it up.

One of the things that function does is turn off a bunch of annoying warnings from some of the libraries that decide to warn you about things that you as a user cannot do anything about. One of my personal philosophies, shared by at least some of my colleagues, is that warnings given to a user should be things the user can do something about; not warnings about things completely outside the user's control, which are thus just going to be displayed every single time they do anything. Those are useless warnings. And I am, in fact, looking at you, Python requests library, for warning me about the certificate at Rackspace. The certificate at Rackspace is fine. The fact that it does not have a subjectAltName, although that is a recommended thing to put on it, is not anything I can do anything about. I can't fix it; I don't work at Rackspace, and if I did, they wouldn't let me fix it. It's their cert. So don't show that warning. So we fix that for you: it goes away, and you will not get that warning when using shade. In fact, for that particular warning, we have a library called requestsexceptions, and you can use that library to turn off those annoying warnings from requests in your own application, should you be doing something else and they annoy you.
It has one function in it, which is: turn off the warnings. Anyway, that's me ranting about that topic.

There are basically two options to this simple_logging helper function. One of them is debug=True. This does exactly what you think it would do: it turns on debug logging. I hope that's not shocking to anybody. The thing that may be interesting to point out about it is that we have a separate logger that we configure inside of shade for logging information about request IDs. simple_logging with debug=True also turns on that logger, which is separate. So if you're writing a larger application and you want to log that sort of thing separately, or you want debug logging but aren't really interested in OpenStack REST request IDs, you can squelch them; but if you're just using simple_logging, you're going to get them. http_debug=True implies debug=True; it will set it for you, and there's no way to get http_debug without debug. It basically just adds request tracing at the HTTP level, so if you want to see exactly what's going on at the REST interface, it will set that up for you appropriately.

A quick note on exceptions. We haven't gotten rid of all of the Python client libraries inside of shade yet. Where we are still using them and they raise exceptions, we catch those exceptions and raise different ones. This is an evil practice; I'm a very bad person for having participated in it for a period of time. But we've known pretty much since the beginning that we were going to migrate off of the Python client libraries at some point, and since exceptions are part of a library's API, I did not want people to start depending on a novaclient exception, and then have us switch to using REST calls and have their applications break. So we chose to do the evil thing and wrap exceptions for a period of time. Now, in all the places where we're making direct REST calls,
We are we are throwing the original exception correctly So we are we are in addition to getting off those libraries We're getting out of the wrapping exceptions game We do include the wrapped exception in the exception that is wrapping it There is a way to get the entire trace back so that you don't lose the context information But it is worth knowing that about that Also other things there we'll talk about that in restification The exception stack is very easy in shade there are all subclasses of open-stack cloud exception So catching that will catch pretty much anything other than keystone off Authentication errors we do admit that keystone off is part of our part of our key stack and so its exceptions are Valid we're not going to get rid of keystone off It's it's part of the API Because it processes the off plug-ins for us So so if we weren't admitting that that was how those were working then everything would be horribly wrong Our direct rest calls throw opensack cloud HTP error, which is a subclass of opensack cloud exception It also subclasses requests exceptions HTP error, so you can catch either one of them That was basically because we'd been throwing opensack cloud exception the whole time So we needed to We needed to continue throwing that but there's some really good information inside of the request exception stuff And people know how to work with that exception So you get both both things with all of our rest things. We have two Specific exceptions that we we haven't really expanded this past those two. They're just there for historical reasons So a 404 will get you your I not found and 400 will get you bad request Those are also just subclasses of HTP error So you can which has status codes in it so you can catch HTP error and just deal with the status code Which is probably what you want most of the time I wouldn't be fully opposed to throwing more specific exceptions But I also haven't found a specific use for it yet. Nobody's rusted it. 
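The hierarchy is easy to picture as a toy sketch. The class names match what the talk describes shade throwing, but the bodies here are illustrative stand-ins, not shade's actual code, and the `HTTPError` base is a stdlib stand-in for `requests.exceptions.HTTPError`:

```python
class HTTPError(Exception):
    """Stand-in for requests.exceptions.HTTPError."""

class OpenStackCloudException(Exception):
    pass

class OpenStackCloudHTTPError(OpenStackCloudException, HTTPError):
    # Subclasses both bases, so callers can catch either one.
    def __init__(self, message, status_code):
        super().__init__(message)
        self.status_code = status_code

class OpenStackCloudURINotFound(OpenStackCloudHTTPError):
    pass

class OpenStackCloudBadRequest(OpenStackCloudHTTPError):
    pass

def handle(exc):
    # Catching the HTTP base and switching on the status code is
    # usually all you need.
    try:
        raise exc
    except OpenStackCloudHTTPError as e:
        return e.status_code
    except OpenStackCloudException:
        return "other cloud error"

print(handle(OpenStackCloudURINotFound("no such image", 404)))  # 404
print(handle(OpenStackCloudBadRequest("bad flavor", 400)))      # 400
```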
So that's the thing with that stack. Having said all of that, here's the basic example again: uploading to Vexxhost with debug logging, and creating a server.

So let's talk about some problems. That was a whole bunch of preamble. By the way, how am I doing on time? This ends at twelve? Seven minutes, then.

Image API versions: one of my favorites. Depending on the cloud, you either need to use the v1 PUT interface to upload an image, the v2 PUT interface, or the v2 tasks interface, which involves uploading the image to Swift and then requesting that Glance import it from Swift. This is different from the v1 import-from-URL, which is not a feature in v2, and which shade does not expose, because it is not possible to expose it for both API versions. As a user, you have to know those things, and the way you discover them is by trial and error. Which is one of the reasons it's a piece of information that we include in the vendor profiles of os-client-config.

So here is some image upload code. We've seen a chunk of this in, like, five slides now, so I'm apparently really into uploading images to Vexxhost, but these two lines are the lines you need to upload images. What it's going to do is first calculate some hashes for that image, because uploading images is a really long and expensive process, and we add some hash metadata so that when you ask us to upload an image, we can detect whether we have already uploaded that exact same image, and if it already exists, appropriately no-op. So we get an MD5 and a SHA for it; it turns out you can get both from the same loop through the file, so why not? We then check to see if it's there, which we do twice for some reason. Oh, sorry, this tripped me up yesterday.
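A quick aside on that both-hashes-from-one-read trick: it's a single pass over the file, feeding each chunk to both hashers. A stdlib sketch (the exact algorithms and chunk size here are illustrative, not lifted from shade's source):

```python
import hashlib
import io

def checksums(fileobj, chunk_size=64 * 1024):
    """Compute an MD5 and a SHA-256 in one pass over a file object."""
    md5 = hashlib.md5()
    sha256 = hashlib.sha256()
    for chunk in iter(lambda: fileobj.read(chunk_size), b""):
        # Same chunk, both hashers -- one read through the file, two digests.
        md5.update(chunk)
        sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

md5sum, sha256sum = checksums(io.BytesIO(b"fake image bytes"))
print(md5sum, sha256sum)
```

Those digests are what would get attached as metadata so a later upload of the same bytes can be detected without re-uploading.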
We make two calls because the Glance image API is paginated, so you have to make sure that you've got all of the images. Then we post the image content and, excuse me, then we post the image metadata: we create the image object, and then we PUT the actual data of the image itself to the image's file endpoint. And you can see we've got the debug logging turned on, so we're seeing the request IDs that were used in those calls. So that's pretty straightforward. That's not terrible, if that was what the interface was for everybody.

But it's possible that your cloud might implement the v2 tasks API, like our fine friends at Rackspace. Oh, and look at that, there's a span. I was trying to highlight, apparently I'm bad at HTML, trying to highlight the fact that VHD is the format there. Amongst the problems: we talked with the fine folks in the Glance operator session yesterday about the fact that it's not really easily possible, as a user, to know what image format you need when you're uploading an image. But anyway, this is essentially the exact same code, except with cloud being rackspace and region being DFW. Otherwise it's the same, well, it's got a span in the file name, which is probably not the name of the file, but I don't know, maybe I got really clever. Other than that, it's the same code.

So this is what that does. We start off with the same thing: we calculate hashes for the image. Then we do some API discovery, which you didn't see in the other one because I had an image endpoint override in the Vexxhost entry. So we do a discovery thing, and Glance version discovery failed, because Glance version discovery doesn't work at the Rackspace clouds.
We have to fall back to the endpoint in the catalog; that's a whole other topic. Then we get the images, and we paginate twice because there are a lot of images there, and we check to see if we've already got the image that we're about to upload. Then we go to the object store, because this is a tasks cloud: we check a container, we make sure the container is there, and we push an object into the images container. This bit right here, where we're putting to the object store: if the image is big enough, which it was not when I created this example log output, because I didn't want to wait an hour for a very large image to upload, we actually run that in multiple threads in the background, because you're uploading it in large-object chunks. That also happens transparently for you. You don't have to know anything about that, especially not if you just want to upload an image; you do not have to know about chunked large-object uploads in Swift. Then we do the final tasks: we create the task, we track the task, and then the task is done, and then we have an image. Which is very exciting.

I mentioned version discovery in that. All the OpenStack services have this great version discovery endpoint that returns version discovery documents. Except the thing that we put into the service catalog is the versioned endpoint, which is not the endpoint that carries the really lovely version discovery document. Which means that although version discovery is a concept in OpenStack, it is not very accessible to anybody consuming the REST API, because you wind up having to do URL manipulation to find the actual endpoint. Which is kind of bad. In addition to that, some of the services, for historical reasons, have promulgated an idea of versioned service types onto the world, because we'd been sticking versioned endpoints into the catalog and version discovery wasn't available. Then how can you roll out a new version while the old version is still there, even though there was a mechanism for that to exist? We just started adding new entries to the service catalog. So from a user's perspective, where what you want to say is "please use version 2 of the volume service," it gets really complicated depending on what the version is. And this is actually a general problem, not really a deployer-specific problem.

We work around this in shade for Glance and Cinder today, because we have to; it is not possible to not work around it. We've been working on some documents describing the process for doing this completely. They're long and evil to read, so I don't recommend reading them, but we've got some plans: document that; get people onto the OpenStack Service Types Authority, so that there are consistent names that people can count on; and get the other languages to implement version discovery correctly, so that we can then start to talk about changing how people register things in the catalog. It's probably going to be a couple-year process, but we are trying to work on that as a solution for more than just shade. It may take a little while; we will be discussing steps for that in room 102 at 4:40 later today, if you'd like to talk about that and other sorts of things. But ultimately we'd like for users to be able to say things like: I want version 2 of the image API. Or: I'd like the latest workflow endpoint. Or: I'd just like the compute endpoint and I don't care what version, so give me whatever you think is the right one. Or: I'd like either version 2 or 3 of the volume service; either one of them is fine for me.

So, network choices are also fun. Your cloud can provide you externally routable IPs directly attached from Neutron, like OVH does. Your cloud can do that and also support optional private tenant networks, like Vexxhost does. Your cloud can have private tenant networks provided by Neutron and require you to use a floating IP to get public IPs, like Citycloud does. You can have private tenant networks provided by nova-network and require floating IPs for external routing, like Auro does. And your cloud can have externally routable IPs from Neutron, but no working Neutron, like Rackspace does. So there are a couple of different cases.

This is an example of creating a server on Citycloud, which is a floating-IP-required cloud. You'll notice that there is a flag, auto_ip=True. This says: please do what you can to get me an externally routable IP, however that works on this cloud. I am running out of time, so I won't completely narrate this log, but you might can tell that it does a lot of API calls to accomplish that task. If we look instead at Internap, which gives you directly attached IPs when you request them, and look at the log, it is quite a bit shorter, because you don't have to do all of those additional calls. But the shade API for that is the same.

So that's probably pretty clear. There's another problem here: how do you find the image you want to boot? These are the names of the latest Ubuntu Xenial image on Vexxhost, Citycloud and Internap. As a human, that's really easy to deal with; as a computer, not so much. There's not a good solution for that. I'm sorry.
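Back on the network differences for a second: shade knows those per-cloud traits from the vendor profiles, which is how auto_ip can mean "do the right thing here." A toy sketch of that decision, with invented profile keys purely to illustrate the taxonomy above (shade's real vendor profiles are richer than this):

```python
def public_ip_steps(profile):
    """Return the extra steps needed to get an externally routable IP.

    `profile` uses made-up keys for illustration only.
    """
    if profile.get("directly_attached"):
        # OVH / Internap style: the fixed IP is already public,
        # so no extra API calls are needed.
        return []
    steps = ["find external network", "allocate floating ip",
             "attach floating ip"]
    if not profile.get("has_neutron"):
        # nova-network clouds do floating IPs via the compute API instead.
        steps.insert(0, "use nova-network floating ip calls")
    return steps

print(public_ip_steps({"directly_attached": True}))  # [] -- nothing extra
print(public_ip_steps({"has_neutron": True}))        # the Neutron dance
```

The point is only that the caller never sees this branching: the same `auto_ip=True` request produces a short log on one cloud and a long one on another.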
I can't solve that for you today. We had some conversations about it in the Glance deployer session yesterday. There's an action item to actually collect everybody's user stories, because we're pretty sure that everybody has an idea of a subset of what people are doing in this particular case, and I'm hoping that we'll be able to get to a point where we can define some common metadata that we expect deployers to put on the images in their clouds. My solution for this today is: always upload your own images. It is the only way you can be sure that you know what the image you want to use is called on all of the clouds, that you know what content it has in it, and that it will, hopefully, have a correct life cycle.

There's also this crazy dependencies problem. The CentOS folks came up to me and complained about this, and I think a couple of other people came up to me and complained about this, and then more people came up and complained about this. The transitive dependency stack that comes with depending on a large set of python-*client libraries is pretty intense and insane, and it causes packaging problems for people. Because you should basically always use the latest shade: it works with the old clouds as well as the new clouds, and we learn new things about old clouds that we take advantage of in shade. So you shouldn't think to yourself, "I'm running a Mitaka cloud, I should use the version of shade from Mitaka to talk to my Mitaka cloud." You should think, "I should use the latest version." Except that if you've deployed a Mitaka cloud and you're in that environment, the fact that shade pulls in a bunch of Python client libraries, which are also included in Mitaka, means that you've got a really weird dependency hell you've just stepped into. So this is amongst the reasons that we're undergoing the restification process I mentioned, which we've been hard at work on. We're done with Heat, Magnum, Swift, Glance and Trove. Cinder and Neutron are basically done: we just landed the final patch for Neutron yesterday, and I believe the next Cinder patch just removes cinderclient from the dependencies list. That leaves us with Nova, Ironic and Designate left to do. And a big shout-out to both Rosario and Slawek, who showed up after I sent out a call to the mailing list saying we could use some more help on this task. They showed up and did a really great job.
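For flavor, here's roughly what one of those direct REST calls looks like at the HTTP level. This stdlib sketch only builds the request, it doesn't send it; the endpoint and token are placeholders, and shade itself does this through keystoneauth sessions rather than urllib:

```python
import urllib.request

# Placeholder values -- a real call gets these from the service catalog
# and from keystoneauth after authenticating.
endpoint = "https://image.example.com/v2/images"
token = "gAAAA-example-token"

req = urllib.request.Request(
    endpoint,
    headers={"X-Auth-Token": token, "Accept": "application/json"},
    method="GET",
)
# We stop short of urllib.request.urlopen(req) -- there is no real
# cloud behind example.com.
print(req.full_url, req.get_method())
```

Once you see it that way, each service is just authenticated JSON over HTTP, which is a lot of why dropping the client libraries also drops the dependency pile.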
By the way, the code is much easier with REST, just so much easier, and I discovered that the REST APIs are actually way better than my impression of them was from having them exposed to me through the Python client libraries. It turns out they're actually not evil to work with, if you work with them. So I can't recommend that highly enough.

Advanced things. As usual, I'm way over time, so I will fly through these. shade should do the right thing 95% of the time, but there are times when you need to be more advanced than that, and we want to support that too; we just want to make sure that normal users don't have to interact with that kind of construct. So we have this thing inside called the task manager, which came out of nodepool originally. It encapsulates every API operation we do inside of a task that's run by a task manager. In the normal case, when you get a shade cloud, there's a pass-through task manager, so the fact that it's running in a task manager is completely transparent to you; you should never notice it. But if you need to do crazy things: nodepool passes in a custom threaded task manager that implements API throttling and rate limiting, amongst other things, and that's essential to nodepool's ability to do the number of API calls that it does, at the scale that it does. So that's a thing you can look into if you have to do really crazy stuff, especially higher-scale things.

On our get calls: all of our resources have a list, a get, a create, an update and a delete. Get is a wrapper around list, and we do client-side filtering. There are times when we can push filter conditions down to the cloud, and we do that in some places, but mostly it's list plus client-side filtering.

Caching we originally did in support of scaling, which may sound weird. But when you have a hundred threads creating servers on a cloud and you want to check their status, listing them all and checking the status that way, rather than doing a hundred get calls, kills the cloud way less, I promise you. We have also taken down public clouds with similar things. So there are both concepts of caching and rate limiting. This is done in two different ways inside of shade, and once we're done with restification we can do some consolidation of the approach there. But you can put some caching config into your clouds.yaml file and express expiration times on a per-resource basis. I wouldn't recommend diving into that very deeply right now, largely because three of the resources, servers, ports and floating IPs, are special-cased. We tried to unspecial-case them, got it wrong, and had to revert that, so we will try again to unspecial-case them and make sure we get it right. Luckily, nodepool will tell us immediately if we get it wrong, so testing that is not really a problem. We'll work on that next.

If you're not a Python developer, we do love you: you can use Ansible. It's the same stuff; it uses the same library underneath, so you can just make simple calls like that. And as a final thing, just to continue to push home my point about multiple clouds that all look like one thing: this is a working Ansible task that will upload my key pair to 30 regions across 13 clouds. That's all of the ones there are, and I could probably implement the Ansible in a more clever way that didn't just involve listing them all. But the fact that that's run by, you know, 13 different providers doesn't prevent me from using them as one really big public cloud. So basically: keep trying to tell me that the OpenStack cloud isn't a thing, and I'm going to keep telling you you're wrong. So anyway, that's my thing there.
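Before the wrap-up, the task-manager idea from a moment ago can be sketched as a toy. Every API operation becomes a callable submitted to one object, so that one object is the single place where throttling can happen. This is just the shape of the idea, not shade's or nodepool's actual TaskManager:

```python
import time

class RateLimitingTaskManager:
    """Toy task manager: space out submitted calls to a maximum rate."""

    def __init__(self, rate_per_second):
        self.delay = 1.0 / rate_per_second
        self._last = 0.0
        self.submitted = 0

    def submit_task(self, task):
        # Enforce a minimum spacing between API calls, then run the task.
        wait = self._last + self.delay - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        self._last = time.monotonic()
        self.submitted += 1
        return task()

manager = RateLimitingTaskManager(rate_per_second=100)
results = [manager.submit_task(lambda i=i: i * 2) for i in range(3)]
print(results)  # [0, 2, 4]
```

A pass-through manager is the same interface with `submit_task` just calling `task()`, which is why normal users never notice the machinery is there.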
We are trying to push all of this back into OpenStack as we can. There are a bunch of discussions later today on service discovery, version discovery, all of this; Clark's got a thing on user API improvements; and we're working up documents with the API working group. So this work should work for you today, but we don't want to rest on our laurels; we want to make the situation better for everybody. I'd ask if you have questions, but I'm probably 20 minutes over time, so I'm going to stop talking now. Thank you very much for listening to me ramble.