Hi everyone, I am Mike Perez, and you are here, hopefully, to hear about keeping calm and using OpenStack consistently. In this talk, I am going to stand before you as a member of the OpenStack Foundation team and tell you how inconsistent OpenStack is. But while I am going to be explaining all the bad parts about OpenStack, hopefully towards the end we will have solutions to some of the problems that we have today. The reason why I am probably like this at this point is because I have been contributing to this project since 2010, when it wasn't exactly what it is today; things have definitely grown quite a bit. For the foundation, I am a developer coordinator, mainly working on a variety of cross-project initiatives inside of OpenStack. I am a member of the OpenStack Technical Committee, I am a core developer of Cinder, and I am also the former PTL for OpenStack Cinder. In OpenStack, there are things from the mission statement that I still feel we are striving for. In particular, we are trying to have interoperable deployments. We want these deployments to be able to scale, and we want them to be easy to use and so on, not just for the users but also for the operators who actually deploy them. I think we are still a little bit far off from that mission. Any OpenStack presentation always has to show this image. I love this image because it actually shows what I would want people to see. You have this idea of just compute and networking and storage. What do you have there? What are those things? Those are resources. That is all that I, as a person who is using OpenStack, really care about: I just want to consume resources. I actually don't really care what sort of technology you are using behind your cloud. I just want resources readily available to me, and along with them being readily available, I also have a certain set of requirements that I want with that.
However, anybody here who is an actual OpenStack operator knows that this is not the case. It actually looks something like this. This is also another classic image: a diagram of a whole variety of services. And this is actually a simpler version, with just some of the core projects and how they all map to each other, how they communicate through different message queues and whatnot. And then things got a little bit more complicated, because we started talking about this thing called the big tent. And boy, did this confuse people. It still confuses people today. And I'm going to tell you today that this is not a scary thing. There have been so many presentations on this, and I'm going to be another one of those presentations telling you that it's not scary. We wanted to be more inclusive. In the technical committee, we used to have a set of ways that we would evaluate projects that was just not really useful. I mean, we would look at the technologies that projects were using. A lot of us weren't really experts in those fields, but we still had comments and suggestions for those projects, and then they would just never get past the phase where they would be accepted as being an OpenStack project, because of what we were evaluating them on. So instead, with the big tent, we wanted to be more inclusive. Do you follow our four opens, for example? If you're not familiar with our four opens, I would suggest you go to governance.openstack.org, which lists these out. Are you an open project? Do you perform open design, open communication, those sorts of things? Those are really easy things that we could look at for projects, looking at their history and how open they have been, to actually see whether or not they would be a fit in OpenStack. But in addition to doing this, it created 50-plus more projects.
And so this is a little bit daunting, especially for somebody new who is coming to OpenStack. Some of you may be new, and you're hearing a variety of code names, and it's just confusing. Some of you might just want to use an OpenStack cloud, and you don't really care what a Nova is, or what a Cinder is, or what a CloudKitty is; you just want to use an OpenStack cloud. So it created some confusion, but at the same time, it was a great thing, right? Because it allowed people who had different ideas of how they wanted to contribute to OpenStack to do that by having other services that they could provide, whether it's DNS as a service, whether you want to provide policies through Congress, or you want to do orchestration, so you do that through Heat. This allowed other people to contribute to OpenStack in other ways. And so I'm telling you, focusing first on the operators: you don't need to deploy the world to use OpenStack. Just use the minimum, based on what exactly your use case is for your cloud. And the way that I like to show people is actually through the openstack.org website. There's a marketplace, and specifically there's a project navigator that lists off a variety of projects, and it lists them by showing information about their maturity. The maturity is weighed off of community-decided tags, and these tags are based on a variety of different things. Does a project make upgrades easy? Do they provide rolling upgrades? Do they provide support in different SDKs? Are they covered in the docs at docs.openstack.org? These are different things that operators can use when they're trying to choose how exactly they want to build their cloud. And so having a list of projects is great, and having a way to evaluate them is great. But that still doesn't solve the problem of what exactly you need for your cloud.
And so we have this other idea with sample configurations, which is: what is your use case for OpenStack? Let me give you a set of projects that could help fulfill that use case. So once again, we go to the marketplace; this is on openstack.org, by the way. You can see, for example, a use case for high-throughput computing. And here we have a list of projects that are recommended, as well as optional projects. In addition, we could go look at, say, video processing, and you'll get another set of projects that fit that use case. And we can also see a real organization that's deploying OpenStack for this particular use case. We can actually see a demo, and we can see what sorts of technologies they're using in their cloud. This is all information that's available to you if that's a use case that interests you. So for example, with the video processing use case, this is Digital Film Tree. I'm sure a variety of you, if you've attended other summits, have seen them in different keynotes. So as a checkpoint, I've explained to you that the big tent is not really that big of a deal. I would like more than anything for people to not really worry about it, unless you're doing development, or if you're really interested in bleeding-edge projects. For the most part, leave it to things like the project navigator to guide you to the things that are ready for you. And then if you're really interested in trying out different projects, a variety of which are just starting, you should try them out and give them feedback; that's what's going to allow them to get better. But really, only care about what exactly you want to do with the OpenStack cloud that you're deploying.
And use these different resources that are available to you to navigate through the different projects, as well as the different use cases and which projects fit them. It makes the whole thing a little less daunting, I think. So now I want to talk about where I think things actually went wrong. This picture is actually from the Vancouver Summit, from the keynote presentation, in which we wanted to reach the goal of having federated clouds: just having the ability to authenticate and use resources across different clouds. That was the main goal. But actually, we never reached that goal. We're actually in a very bad state of inconsistency with authentication. It's getting better, but federation never really became a thing. It's still a goal that's being strived for today, and there are clouds that implement it that are now challenging other clouds to do it, which I think is a pretty healthy thing. However, some of these clouds weren't even exposing the same interfaces for authenticating. Some people weren't even using the Keystone project for a while. I was like, how can you even be an OpenStack cloud if you don't even have Keystone? Nobody can use it. And back in the day, there was really weird stuff in the client code where it was looking for specific things for certain clouds. It was not very good. And right here is another example of inconsistencies across public clouds, and I actually list the public clouds here. Just to bring up an instance and assign it an external IP, there are so many inconsistencies and different things that you have to do to deal with them. This is horrible. Right here, this is actually a comment in the code base of a library called Shade, which I'll be talking about later, that deals with these inconsistencies. And this is just directly pulled out of it.
So let's talk about how we're working on this. We have a team called DefCore, and they're specifically looking at and testing these different clouds based off of tests, based off of capabilities. Can they perform a set of capabilities? Do they actually pass these tests? This right here is exactly what's going to set the consistency across what we expect from public clouds. The tests for these capabilities come from upstream, from a project called Tempest, which allows us to do these different functional tests. So where can you find these clouds? Again, you can go to the openstack.org site and take a look at what different public clouds are available. What's great is, for example, we go to public clouds and you can see a whole mapping of different clouds; for example, here's Europe. And we can even scroll down to the different clouds as well, and we can see how well they're tested in these terms, and which version of OpenStack they are running. That's the other thing: different clouds are running different versions of OpenStack. You can see right here, for example, that this cloud is running Juno, and you can see which APIs are available to you. And then you can sign up for one if you like the looks of it. So this is a way to actually know which public clouds are available to you. So once again, as I mentioned earlier in my talk, consumers of an OpenStack cloud just want resources readily available to them, and they want that based off of a set of requirements. They don't care that you're running some form of the latest hot open-source block storage solution or whatever. They don't really care where the resource comes from. They want something presented to them that meets what they need at that point in time. And I stress that because that is exactly what OpenStack is: providing an abstraction layer over these different technologies.
So I like to go back to my friend Flanders, who works at the foundation. He helps with the different app developers in the community. And he said something to me that resonated, something I wasn't really realizing before about the community. I was always angry at people complaining, I felt, about things that weren't right, that weren't working. And he told me that while it's great that this is all open source, don't expect users and operators to roll up their sleeves. I have always viewed open source as a wonderful thing because of the fact that I could fix the problems myself, but not everybody comes from that point of view. So the reason why I'm showing you this silly picture of us along with that quote is basically this: I just showed you all the things about why I think OpenStack is bad with these inconsistencies, but now I want to show you what we're doing as a community to fix these problems, with both short-term and long-term solutions. It first starts off with OpenStack Client. OpenStack Client is a great user experience for consuming OpenStack clouds. If you want to use a variety of public clouds through some sort of command-line interface, this is the way to do it. There are different things in it that make using different clouds easy. The first bullet point talks about this tool called os-client-config; we'll talk about that later. Then there's the fact that you don't have to use code names to use the different projects. You don't care what the projects are. You know what the resources are that you want to bring up in a cloud, and that's it. So it allows you to say resources instead of saying things like Cinder or Nova. And along with that, it's actually predictable. You can actually figure out what the set of verbs is for interacting with these different resources. So, I'll give you an example: instead of saying we boot a server, let's just say we create a server.
This is an example of using the OpenStack Client just to create a server. You say openstack server create. You give it a flavor as an option, you give it an image, and then you give it a name for that server to identify it. Instead of saying cinder create, which would require some mapping knowledge of knowing that Cinder means volumes, you can say openstack volume create, and give it a size and a name to identify it by. And then instead of saying we're going to attach a volume, how about we add a volume to a server? So you say openstack server add volume with the name of the server and the name of the volume, and there you go, you've attached the volume to a server. If you want to create a volume off of an image, you say openstack volume create and you can specify the size as well as the image to go with it. And for things like quotas, instead of having to set a nova quota and a cinder quota as separate requests with different options, you can put that all together. You can just say openstack quota set, then whatever tenant that maps to, and then the options that you want to specify. So you can specify the instances, RAM, volumes, and snapshots just like before. So here's a little example. One of the things I really like about OpenStack Client is that you can go into an interactive mode. Now, I passed the --os-cloud dreamcompute option; we'll talk about that later, but just assume right now that I'm using the DreamCompute public cloud, and I can do an image list. And then from there, I can take one of those images and, as I showed before, I can do a volume create off of that image. So we'll use Ubuntu 15.10, and I specify a size and something to identify it by. That will go ahead and create the volume, and then if we want to attach it, well, first we need to actually bring up a server.
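Put together, the commands described above look roughly like this as a session sketch (the flavor, image, quota values, and resource names here are illustrative, not from any particular cloud):

```shell
# Create a server from a flavor and an image
$ openstack server create --flavor subsonic --image ubuntu-15.10 test

# Create a volume by size, and another one backed by an image
$ openstack volume create --size 6 myvolume
$ openstack volume create --size 6 --image ubuntu-15.10 bootvol

# Add a volume to a server, instead of "attaching" it
$ openstack server add volume test myvolume

# Set several quotas in one command instead of separate per-project tools
$ openstack quota set --instances 10 --ram 51200 --volumes 10 --snapshots 10 mytenant
```

Notice the pattern: resource first, then a predictable verb, so you never need to know which project owns which resource.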
And in order to bring up the server, we have to get the flavor listing. So we'll ask DreamCompute for the flavor listing. They have things like subsonic, supersonic, lightspeed, and warp speed; kind of fun names. Then you can do a server create, and I can specify that volume that I created earlier. So I give it the name that I identified it by, and then I give it a flavor. I'll give it subsonic, and I give it a public key that I want injected into it, which I created ahead of time, and then a way to identify the server. So it's just a server called test. In the end, we should have a server booted up, running Ubuntu 15.10 off of that volume. And here we're just looking at the results of what happened on the DreamCompute dashboard. We should have a server, of course, that's called test. So we'll filter on test, and there it is, and it's booting up. It takes a little bit of time to boot up, so I'm just scrolling with my cursor for a little bit. In the end, we have an active server that should actually be running Ubuntu 15.10 now. So we could go ahead and leave interactive mode. Oh, first, we'll do the thing from those bullet points I showed you earlier, of just trying to assign an external IP. We'll get one of the floating IPs that are available in our pool, and we'll add that floating IP to the server. All you say is ip floating add, then the IP address and the server. That's going to assign it an external IP so that we can actually SSH into it. Then we'll just verify real fast that the floating IP has been assigned to our instance. Now we can get out of interactive mode, and then we can SSH into that server. And that was it. I think that's a really wonderful experience through your command-line interface to bring up a server in a cloud.
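As a sketch, the interactive-mode steps just described would look something like this (the key name, server name, and IP address are illustrative placeholders):

```shell
$ openstack --os-cloud dreamcompute
# Inside interactive mode, commands are entered without the "openstack" prefix
(openstack) flavor list
(openstack) server create --volume myvolume --flavor subsonic --key-name mykey test
# Grab a floating IP from the pool and add it to the server
(openstack) ip floating list
(openstack) ip floating add 203.0.113.25 test
(openstack) exit
$ ssh ubuntu@203.0.113.25
```

The ip floating commands reflect the OpenStack Client of this era; later releases renamed them, so check your client's help output.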
And notice how I didn't have to enter credentials, which we'll talk about after this little demo. So there we go, we're in the instance, and what can I say, it worked. Cool. And it was so easy that I did it on the flight over here while holding a glass of wine, on the airplane's Wi-Fi, too. So in review, we went into interactive mode with OpenStack Client and we did an image list. We did a volume create, taking the image that was given to us and specifying it as an option. Then we got a flavor listing and specified that in the server create command. And in the end, we were able to get a listing of the floating IPs and assign one of those external IPs to the instance. So now let's talk about authenticating. I just ranted earlier about how federation is not really a common thing at this point, how some people in the earlier days were not using Keystone, and how there are different versions of Keystone out there. It makes it all a little bit annoying to use. Typically today, when you use an OpenStack cloud, you can usually download an OpenRC file from the Horizon dashboard, and you source that OpenRC file, which injects environment variables that the different clients read to authenticate. It looks something like this: I source it, and then I have these environment variables injected in. There's also a password prompt that comes from this OpenRC file, unless you hard-code your password into it, so you have to enter that password every time. The OpenRC file looks something like this. It has some sort of auth URL endpoint, and it'll have a tenant ID specified, as well as a tenant name and a username, and then here's that password prompt I mentioned earlier. And you can also set a region, too, if you need to for your cloud.
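An OpenRC file of the kind described is roughly this shape (the endpoint, tenant ID, and names are placeholders, not any real cloud's values):

```shell
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
export OS_TENANT_ID=1234567890abcdef1234567890abcdef
export OS_TENANT_NAME="myproject"
export OS_USERNAME="myuser"
# The generated file prompts rather than hard-coding the password
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# Optional, if your cloud has multiple regions
export OS_REGION_NAME="RegionOne"
```

Because these are plain environment variables, switching clouds means re-sourcing a different file and re-entering a password every time, which is exactly the pain the next section addresses.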
So as an example, if I wanted to use DreamCompute, I would source its file and have to enter a password, and then I'd use the individual clients, like nova for example, to do a flavor list, as opposed to just using the OpenStack client. Then I could run a nova image list and so on. But if I want to use a different cloud, I have to switch over. So I'm going to use Ultimum now, and I have to source that file and go through another password prompt, and then I can start doing image lists and flavor lists and so on off of that. However, I think this whole idea is, like, so six years ago. We don't have to do these things anymore. In fact, we have this new thing, os-client-config, which allows you to have a clouds.yaml file in your ~/.config/openstack directory, and in this clouds.yaml file, you can specify a variety of different clouds. So in this example right here, we have DreamCompute listed, and you can see it has an auth URL, my username and password, and my project name; then I have Rackspace as well, with my project ID, username, and password; and I have Internap and Ultimum. I even have Nectar, so I now have clouds all across the world that I can launch off of, and I don't have to authenticate with each of them, because it basically looks like this: if I want to do a flavor listing off of each of these clouds, I just specify whatever I identified the cloud as with the --os-cloud option. So for that same exact example, I can specify --os-cloud dreamcompute, do a flavor list, and there are subsonic and supersonic again; then I can switch over to Ultimum, do a flavor listing, and get their list of flavors. So that allows me to authenticate and keep using different clouds without having to re-authenticate myself and re-inject environment variables. So now let's talk about the inconsistencies that exist in the APIs themselves. There's a Python library for this, and it's called Shade.
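The clouds.yaml described above looks roughly like this (the URLs and credentials are placeholders; each cloud's profile name is whatever you choose to call it):

```yaml
# ~/.config/openstack/clouds.yaml
clouds:
  dreamcompute:
    auth:
      auth_url: https://keystone.example.com:5000/v2.0
      username: myuser
      password: sekrit
      project_name: myproject
  ultimum:
    auth:
      auth_url: https://cloud.example.cz:5000/v2.0
      username: myuser
      password: sekrit
      project_name: myproject
```

With that in place, `openstack --os-cloud dreamcompute flavor list` and `openstack --os-cloud ultimum flavor list` each authenticate against the right cloud, with no sourcing of files and no password prompts.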
Shade deals with the inconsistencies that exist today inside of OpenStack. It hides everything, and it figures things out for you. For example, even if clouds don't all deal with external IPs the same way, it gives you the same interface, so that you can say, give me an external IP for this instance, and there you go, it's returned; but under the hood it had to make a variety of different API calls that just aren't consistent if you're doing raw requests to those different clouds. So here's an example, and here's a little bit of Python code, but even if you don't do any Python, I think this is pretty readable: at the beginning we just print out "dreamcompute, make it so." What I really like about the Shade library is that it follows the same idea as OpenStack Client by specifying everything by resources. At the very beginning, we instantiate an object called OpenStackCloud, and you can see right here that it's passing cloud='dreamcompute'. If you remember from when I was talking about os-client-config, that's the profile I have for DreamCompute, and that authenticates me. So I'm already authenticated in my script, and from there I can get an object for the image, which is Ubuntu 14.04, and then I can get a flavor, warp speed, which comes off of that flavor list. From there, with that DreamCompute cloud object, I can say create a server, and I can give it a name, that image, and the flavor. And here is a really killer feature right here: auto_ip=True. That will automatically get you an external IP, however that cloud does it, and when the object is returned, there will be a public_v4 attribute that you can get that IP from, no matter what the cloud is.
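Shade's real normalization code is far more involved, but the idea it implements can be sketched locally like this. The function and the sample server payloads below are invented for illustration; only the public_v4 concept comes from Shade itself:

```python
# Toy sketch (not Shade's actual code) of the normalization idea:
# different clouds report addresses differently, and the library's job
# is to hand back one consistent public IPv4 regardless of the source.

def public_v4(server):
    """Return a public IPv4 for a server dict, however the cloud phrased it."""
    addresses = server.get("addresses", {})
    for network, entries in addresses.items():
        for entry in entries:
            if entry.get("version") != 4:
                continue
            # Some clouds use a network literally named "public";
            # others tag entries with an "OS-EXT-IPS:type" of "floating".
            if network == "public" or entry.get("OS-EXT-IPS:type") == "floating":
                return entry["addr"]
    return None

# One hypothetical cloud exposes a "public" network directly ...
cloud_a = {"addresses": {"public": [{"version": 4, "addr": "203.0.113.10"}]}}
# ... another attaches a floating IP alongside a fixed private address.
cloud_b = {"addresses": {"private": [
    {"version": 4, "addr": "10.0.0.5", "OS-EXT-IPS:type": "fixed"},
    {"version": 4, "addr": "198.51.100.7", "OS-EXT-IPS:type": "floating"},
]}}

print(public_v4(cloud_a))  # 203.0.113.10
print(public_v4(cloud_b))  # 198.51.100.7
```

The point is the interface: callers ask one question and get one answer, while all the per-cloud quirks stay hidden behind it.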
So as an example, we'll go ahead and we'll test Shade. We're going to run that same code snippet that I showed you across a variety of different clouds. We have DreamCompute and we have Rackspace, and you can notice that there are different images, but the thing that determines which cloud it's going to is that cloud= argument, which maps to whatever the profile is, again, from os-client-config in that clouds.yaml file. And in the end, we're going to print all the public_v4 addresses. So we go ahead and we run the test, and it goes: "dreamcompute, make it so." So it's running, and I can switch over to the dashboard and see that the instance is being created, and once it becomes active, the dashboard should show me an external IP. And there we go, we have an external IP address listed right here. So now we're on to Rackspace: "rackspace, make it so." We can switch over to the Rackspace dashboard, same idea; it's going to go ahead and create it, and we're going to get this IP, which it looks like has already been assigned as an external IP, and it'll become active. So there we go, we now have an active instance. And Internap was so fast that it was already done by the time I switched over, so if we go ahead and do a refresh of the Internap dashboard, we'll have an instance with an external IP. And then with Ultimum, we'll switch over to its dashboard, and it should also have an instance that is still building, but it will also become active with an external IP. And the great thing is that I used the same interface, which is what it should have always been: the same interface to interact with all these different clouds and get the public IPs from each one of them. And that's using Shade with os-client-config. The last part I want to talk about is orchestration. Ansible with OpenStack uses all these different tools I just showed you. It's using Shade under the hood, so it's dealing with the inconsistencies that we were talking about earlier,
but it's also using the features of os-client-config as well. So here's an example. We have a playbook, and we have an instance, and right here you see that we specify that dreamcompute profile, so it knows which cloud we want to interact with, and it can use that for authenticating. We specify an image and a flavor, and then we say we want to boot from volume, so it knows to put the image on a volume. It's basically doing the same demo that I did earlier with OpenStack Client, and again it has that really nice feature of just saying auto_ip: true; whatever you have to do, get an external IP assigned to this instance. I can specify a key and so on. So we'll go ahead, and this will be the last demo. Basically, this is going to bring up an instance on DreamCompute, and then we're going to wait for port 22 to be available so that we can actually do something with it. Ansible has this idea of inventories for being able to specify instances, so I'm going to add it to a group, and then it's going to install a very important package that I want on this instance. So we're going to go ahead and run ansible-playbook with that YAML file I showed you. And here we go: start the test virtual machine on DreamCompute. Just like last time, we'll go to the dashboard, and we should see a test instance coming up. OK, so we have an instance; it's being built; we're very excited. I'm pretty sure at this point I'd had about three glasses of wine. And we'll wait for SSH to be available. We can see that there's an external IP assigned to it, it's active, and now the instance has been added to our inventory, and now it's going to install that very important package. So we're going to go ahead and take its IP, SSH into that VM, and use that very important package. So, how many people remember cowsay?
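The playbook being described is roughly this shape (the profile, image, flavor, key, and volume size are from the demo or invented; the module parameters are from the Ansible OpenStack modules of that era, so treat this as a sketch rather than the exact file shown on the slide):

```yaml
- hosts: localhost
  tasks:
    - name: Start the test virtual machine on DreamCompute
      os_server:
        cloud: dreamcompute        # profile from clouds.yaml, via os-client-config
        name: test
        image: Ubuntu-14.04
        flavor: subsonic
        key_name: mykey
        boot_from_volume: yes      # put the image on a volume
        volume_size: 10
        auto_ip: yes               # get an external IP, however this cloud does it
      register: vm

    - name: Wait for SSH to come up on the new instance
      wait_for: host={{ vm.server.public_v4 }} port=22

    - name: Add the instance to an inventory group
      add_host: name={{ vm.server.public_v4 }} groups=vms

- hosts: vms
  remote_user: ubuntu
  become: yes
  tasks:
    - name: Install a very important package
      apt: name=cowsay state=present
```

Note how the registered result exposes the same public_v4 attribute that Shade provides, so the rest of the playbook is cloud-agnostic.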
All right, I'm bringing it back: cowsay loves OpenStack. So that's my silly little demo of Ansible. Of course, you can do a lot more powerful things with it than this bit of orchestration, but that gives you an idea of using a cloud with Ansible, using all those different tools like Shade and os-client-config, but now with the power of Ansible on top. So that's pretty cool. I started you off on a downer note with all the things that I think are wrong with OpenStack, but at the same time, I showed you some of the things that we're doing as a community to make it better, and we're always looking for feedback. And this right here is a demo that was happening at Austin; this is pretty cool, that guy playing the guitar. It's measuring all of his muscle movement, and this is all running off of OpenStack. Seeing all the different demos and seeing what all of you are coming up with using OpenStack, I think that's super cool, and I think that's what really keeps us all going and wanting to do better. And so, yeah, that's pretty much my talk. I don't know how much time I have, but I am willing to take questions. If you're shy, I am very friendly, so feel free to come up to me and ask me questions; I'm pretty easy to spot. So, cool, thank you.