All right, all right. Well, first off, good morning. Thank you for coming. I appreciate it. This is going to be a little bit of a different type of workshop, in the sense that if you've looked at my Twitter name, you realize that I have a night job as well. I'm a DJ. So I like to incite people to have a good time. So the idea here is for us to be loose, have a good time, and not be so uptight. We're IT people, and we're learning some great new stuff about Cinder. And can I get a show of hands of how many of you either have used Cinder or are using Cinder within your company or organization? All right, so you guys are veterans. So I don't have to start at the bottom with explaining what Cinder is and all that fun stuff. So first off, my name is Walter Bentley. I am an employee of Rackspace. I am a cloud solution architect. And basically my job is to build private clouds that are built on OpenStack for our customers, whether it be design, migration, or hybrid clouds with our public and our private cloud offering. But pretty much I deal with OpenStack every single day of my life. So yeah. Any of us who deals with OpenStack knows that it comes with a lot of power and a lot of pain at times. But that's pretty much my role and what I do. So who am I in a little bit more detail? Basically, I've been in IT 17-plus years. I started out as an ASP developer, not ASPX, but an ASP developer. So that just shows how far back we're going. Pretty quickly I realized that I spent a lot of time and invested a lot of my energy in things, and when someone tells me that they've decided to buy a product instead of the nice, fancy code that I created for them, I realized that development was not for me. So I then transitioned into production support. So pretty much I spent my entire career supporting applications that other people wrote.
And wishing that there was a way that I could give them the infrastructure and the ability to do things faster without me having to do all of the work all the time. So I wish OpenStack had existed 10 years ago. That would have been great for me. I probably would have gotten a lot more sleep and had a lot less gray hair in my beard, but so be it. So these are just some of the companies I've worked for throughout my life. Nothing too huge, but I've been around the block a little bit. So I found this image online, and I thought it was actually pretty cool, because to me it explains quality of service around Cinder in a really great way. You can have three different kinds of service: good, cheap, or fast, but not all at the same time. And if you mix them up in different ways, you can see that sometimes you end up with the results you want, and sometimes you don't. So I just thought that this was a really good icebreaker to explain some of the power around what OpenStack brings, as well as some of the power around what the quality of service offering is. So everyone here loves good service. No one wants bad service in their life. And the same thing with your cloud consumers. They want to be able to have the best experience possible, as well as have options. And that's what quality of service brings to the table: it gives you the ability to bring options to your cloud consumers. And that's really what we're all about here. If you guys are cloud admins, you want to make sure you give your consumers as many options as possible, OK? So before we get started with the lab, because this is going to be a hands-on lab, I'm just going to lay some high-level foundation. Then we're going to jump right in. So these are some of the ground rules. Again, I'm not going to tell you to turn your phone off, but if it rings, it's mine.
And I've been looking for a new iPhone, so those of you who have iPhone 6s, particularly the bigger one, just make sure you leave it out so it rings. One requirement: ask questions, all right? During the lab, please ask me questions. There are also other Rackers here that can help answer your questions. Don't hesitate. Don't beat your head against the wall trying to figure it out on your own, all right? And if you have to have a conversation about something, because we all do... what kind of phone do you have? What kind of phone do you have? Oh, all right. No, no, no, no, no. I'm sorry. I gave up IBMs a long time ago. If you have to have a conversation, we're all IT professionals, we all have jobs to do, just please take it outside if you don't mind. As I mentioned before, I was going to do groups of five, but I broke it down into groups of two. So any of you who do have lab sheets and are willing to take another person into your group, look, you read my mind. Raise your hand. Folks, take a moment to relocate. Trust me, I'm not going to get into anything good yet. It doesn't have to be just two. It can be groups of more than two, right, if anyone else wants to participate. And last but not least, the materials that we're going to review are actually at this link here. Don't worry about it. It's going to be on the next slide again in a much bigger font size so you can write it down there as well. So again, what I did is I gave out cards that basically have information on them. Take note of your student ID. Take note of your tenant name. You're going to need to know that information. Make sure you get connected into the remote environment. So basically what you're connecting into is a fully functioning OpenStack cloud that's built on Rackspace's public cloud. So I'm actually doing quadruple-O right now with this environment. And believe it or not, it actually works.
So it consists of a deployment node, a controller node, a Cinder node, and four compute nodes. And I'm actually doing two back-to-back labs. So it's this lab and then this one right after. So the compute nodes are not so much what you care about; the Cinder node is really what we're going to focus on today. And again, we will be working with the Python CLI. So if you like Horizon, you've got to get that out of your head. Any true cloud administrator uses the CLI. Just saying. And just keep in mind, no funny business while you're in the cloud. Of course, I know you guys are going to try and hack it and take it apart, and that's fine. It will be destroyed once the lab is over. So good luck once you walk out the door. I just figured I'd put that out there. It's not going to be a persistent environment. So it is going to go away. So you guys kind of ready? Sort of ready? Really ready? All right. So the lab overview, these are just really quick bullet points that we're going to go through. And after sitting through all the presentations I've been doing since I've been at the summit, I hate bullet points at this point. So I'm sorry, you have to endure bullet points a little bit more. But we're going to configure multiple back ends on a Cinder node. We're going to create some new volume types. We're going to create and associate a quality of service definition for those volume types. We're going to attach a volume to an instance. And then we're going to connect to that instance, and we're going to do a quick IO throughput check. And again, this lab is intended to just give you one idea of how you can do this. There are many ways of doing it in OpenStack. And that's the power and the pain of it. So this was just intended to give you one idea and one approach. I'm more than willing to hear feedback from you guys if you have some other ways you've tried that you've been more successful with as well.
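For reference, here is a minimal sketch of what those bullet points look like at the Cinder CLI of that era. All of the names here (the volume type, QoS spec, back-end name, instance, and sizes) are made-up placeholders, not the lab's exact values, and the commands assume admin credentials against a live cloud:

```shell
# 1. Create a volume type and pin it to one of the configured back ends
#    (type and back-end names are hypothetical)
cinder type-create student01_gold
cinder type-key student01_gold set volume_backend_name=LVM_iSCSI_FAST

# 2. Create a QoS spec with IOPS limits and associate it with the type;
#    look the IDs up first with `cinder qos-list` / `cinder type-list`
cinder qos-create student01_gold_qos consumer=back-end \
    read_iops_sec=2000 write_iops_sec=2000
cinder qos-associate <qos-spec-id> <volume-type-id>

# 3. Create a 10 GB volume of that type and attach it to an instance
cinder create --volume-type student01_gold --display-name student01_vol 10
nova volume-attach <instance-id> <volume-id> /dev/vdb

# 4. Inside the instance, run a quick IO throughput check against the
#    attached disk (this writes over the volume's contents)
dd if=/dev/zero of=/dev/vdb bs=1M count=100 oflag=direct
```

The `dd` run at the end is what makes the QoS limits visible: with and without the spec associated, the reported throughput should differ.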
All right, so now that you've connected to your environment, I need you to go to this link in your browser. This link is going to give you all the instructions you need to do the lab. And it's basically going to take you to GitHub. And within that GitHub repo, there are also some other interesting things. There's actually a white paper that explains this quality of service setup as well, so it's something you can print out and reference later if you want to. This whole presentation is in there as a PowerPoint as well. So if you miss anything, or if you want some of the info, you don't necessarily have to take pictures with your phone, even though I love to see it because I like being in pictures. But the information is all there, as well as the lab. It will be there. So let me know. I'm going to give you a few seconds here to make sure you get it pulled up and get connected to your environment so that we can get rolling. What's that, sir? Yep, thank you for reminding me about that. So part of the lab is you're going to deal with your tenant, as well as with one of your neighbors' tenants. So it doesn't matter. We're basically student01 to student15. So you can pick any one of those. It doesn't have to be your neighbor. In the instructions, when I say another tenant, I mean some tenant other than yours. So again, the tenant names are student01 all the way up to student15. So you can pick any one in that spectrum. It doesn't really matter. OK. So everybody connected to the lab environment? All right. Everybody got the instructions up? All right. OK. So at this point, we're going to step through the lab. I'm basically going to give everybody three minutes to complete the lab. Is that all right? I'm sorry, 30 minutes. Just messing with you. Come on, I know you guys can do it in three minutes. So I'm going to give you about 30 minutes to step through the lab.
I'm going to actually step through it myself and kind of talk through it, so that if you get hung up on something, hopefully I'll be able to clearly articulate what I wanted you guys to do. And truthfully speaking, whoever finishes the lab first does get a prize. Not to say this is a competition. Quality is not always rushed, but just saying, if you want a prize, you've got to finish first. All right. Don't worry about that, man. You can't focus on that part of it, man. Just know it's Rackspace swag. It's not an Apple Watch or an iPad or anything. So don't expect anything too, too fancy. All right. So I'm going to switch over here and walk through the instructions just like you guys would. Now, my screen is probably extremely small. So let me see if I can make that a little bit bigger. Is that quasi-readable? No, that just got worse. Yeah. Sorry, this is always the dynamic of being on stage. And the screen is never big enough to see. No, you don't have to edit an openrc file. The lab instructions are exactly as written. You just basically follow them, and you should be good to go. So at the very beginning of the lab, the first thing we're doing is connecting into a container. I tell you, I'm going to be collecting some phones, man. They're not even paying attention to me. That's fine. Yes. So you're not able to connect to 172.29.236.255? Now, let me come around and see. So you connected to the environment first? All right, let me come over. I'll come over. Thanks. They have to connect into the deployment node first before getting to that. Yeah, it should be. Hopefully he has the information. What's that? You don't know what the other groups can be? You can pick one. So pick student05. Student05. It can be anyone. All right, so who used student01 to create the volume type, the original one? Yeah, I guess I created a bit of a disastrous moment. So I'll reset that. I'll take away the volume type for student01.
And whoever did it wrong has to do it with their student ID. And then you guys can do what you need to do. The other group can be any other group in the room as long as it's not your student ID. So it can be student02, student15, student10. It doesn't matter. Just pick another student ID. Because I want to be able to show you the fact that you can't deal with someone else's volume type. Yes. It's just showing you that there's nothing there. Yep. Some of you guys are really moving fast, huh? You've got things to do today, huh? I see, I see. Yeah, truth be told, the student ID really doesn't matter. It's just a way of keeping you guys from overwriting each other's stuff. But that's happening anyway. So that was really what it was meant to do. But it doesn't really matter. You could call it Bob if you wanted to. He said reading is hard. It can be. So I can see somebody in the audience, when they created their QoS spec, they did not give it a student ID in front of it. So they're going to have problems later in the lab. Yeah, no worries. I mean, I'm just letting you know that later on you may have to adjust some of the commands to match that. That's all. It's all good. So the thing about dealing with a lot of things in OpenStack is that it doesn't really use the names. Everything in OpenStack, even though you give it a name, really behind the scenes it creates this funky ID. And it really wants you to use those IDs for most things. So the idea is you can't really use the name when referencing it. It wants the IDs. Now, the idea is that when you logged into your tenant, you were logged in as an admin. So I basically back you out of that. And then when you source that next file, the openrc-<your tenant name> file, you're logging in as a user at that point. So I'm basically bringing down your credentials so that you can do things that a normal user would do, not as an admin. So that's really the only reason. Again, you could do it as an admin.
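As an illustration of the two points above (IDs over names, and sourcing tenant credentials), the flow looks roughly like this; the object names and credentials file name are placeholders following the lab's convention, not its exact values:

```shell
# Most cinder commands want UUIDs, not display names. List objects to
# grab the IDs from the first column of the output:
cinder type-list       # volume types and their IDs
cinder qos-list        # QoS specs and their IDs
cinder qos-associate <qos-spec-id> <volume-type-id>

# Switching from admin down to a tenant user is just sourcing a
# different credentials file:
source openrc-student01
```

After sourcing the tenant file, the same CLI commands run with that user's (non-admin) credentials, which is what makes the "you can't touch someone else's volume type" demonstration work.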
It wouldn't make a difference. Oh, you never do anything else as admin? Right. We're not limiting a tenant. What we're doing is creating a volume type that has a certain limit to it. And based off of that limit, based off of the metadata that you associated with it, you can tell a person, hey, when you spin up an instance, choose that volume type, and you'll get this type of drive and this type of performance. But you're limiting what that performance is so that everybody doesn't spin up and just totally crash your environment. So you're really adding limits to the volume type is what you're doing. But you're not keeping people out; you're not segregating yourself. As long as Cinder can connect to that back end, you can create volume types for that back end. And Cinder will manage all the stuff behind the scenes. So I guess with Ceph, you would carve out just a big block and you would let Cinder manage it. And when you create a volume, Cinder will manage that big block. And based off of the volume types, you can point it to different Ceph clusters or different types of backing drives that are in your Ceph clusters. It all depends, right? So you're just giving people different paths. Yep, and that's easy to do, because basically when you go to create the volume, you tell it what volume type. So you distinctly say it. Yes, yes, you're creating a volume type that you associate the volumes to. That's the key. What is the back end? Like, what is it running? Yep, so this is actually all OpenStack. What this is, is the RPC install. So this is a Rackspace private cloud that's running in our public cloud. So if you go to Stackforge, os-ansible-deployment, basically the way we build our private clouds is up there. And that's what I used to build this. And if you pay me enough, I'll give you the exact instructions for how to stand up RPC in a public cloud. Everybody OK? Questions, questions?
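A sketch of the "different paths" idea described here: two volume types pointed at two different back ends. The back-end names are hypothetical; whatever you use has to match a `volume_backend_name` value actually configured on the Cinder node:

```shell
# A "standard" type routed to one back end...
cinder type-create standard
cinder type-key standard set volume_backend_name=CEPH_SATA

# ...and a "performance" type routed to another
cinder type-create performance
cinder type-key performance set volume_backend_name=CEPH_SSD

# The consumer just picks a type at volume-creation time; Cinder's
# scheduler does the routing behind the scenes
cinder create --volume-type performance --display-name db_vol 50
```

From the consumer's perspective the type name is the whole interface; which cluster or drive class sits behind it is the cloud admin's business.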
Everybody's progressing? All right, nobody's yelling at me yet. So I guess that's a good thing. Well, see, the thing is that the utility container gives you access to all the OpenStack APIs. Now, even though you can do it directly from the deployment node, the reason why we created that container is so that a cloud admin can log in there and not really affect anything; you're not in a container that's running an OpenStack service. So you feel a little bit less stressed, because if somebody messes up something in that container, they're not going to mess up Nova or Cinder or Horizon or anything. But you could technically, from the deployment node, connect to everything. But I'm kind of educating you a little bit about our private cloud, how we do things. Because we deploy our services inside containers, which is kind of a unique thing. So I'm just kind of showing some of that stuff off a little bit, yeah. You have questions, questions? Yes? Do a nova list and look for the name of the instance you created. It should be your student number underscore my_first_instance, or something of that nature. What's your student number? I'll take a look. Yeah, you don't have an instance created yet. It was successful? Oh, there you are. Sorry, I'm lying to you. So your IP is there. It's on the screen: 10.1.100.7. Yeah, if you source the openrc file while you're in the Neutron container, you'll actually be able to do a nova list. You should not need a password if you're on the deployment node connecting to it. So make sure you're on the deployment node and then you're trying to SSH into that container. What's that? Yes, just the deployment node. Yes? Yeah, it shouldn't. Let me see. Yes, you are. You know what may have happened is... yeah, for some reason... no, no, you did everything right. It's just that, for some reason, the authorized key didn't come into your local profile on that server. Because right now, you jumped into a container, right? So you're in a virtual environment.
When you get done, you're going to exit and you're going to jump right back down to where you were. Yes? No, QoS was actually added back in Icehouse. Well, when they first introduced it, volume types came kind of before, or essentially at the same time. But it wasn't until now that they're all matured and working together. But it was released back in Icehouse. As long as it can be a back end to Cinder, all of this applies. That's the key. It has to be a back end to Cinder. Well, it's up to you if you want to do both, right? But you really don't need to do both. You don't need QoS twice, right? So if you want to do it with the SAN, that's cool. If you want to do it through OpenStack, you can. And then you can just let the SAN be wide open and let OpenStack control how much bandwidth a person gets or how much a person can use of it. Personally, I would let OpenStack do it, because it's easier, and that's something you can kind of give to a consumer. Yeah. Yes. Yes. Yes, I understand what you mean. Yeah, no, you still have to step through it the same way you would do normal shared storage. Yes. Yes. Yeah, so the way that Cinder works is it talks to its back ends using iSCSI. So no matter what shared storage you're using, whether it's connected over FC or it's NFS, it doesn't really matter. From Cinder's perspective, it connects to its back ends over iSCSI. So sometimes that causes heartburn, right? Because people think, oh, I have really fast shared storage, but now I have to deal with iSCSI. But the reality is that it's all over the network, right? And you don't see the difference in performance as much as you would imagine you would. But it is what it is. There's no way of getting around it. You have to deal with iSCSI if you're going to deal with Cinder and Cinder volumes. You did something wrong. No, I was kidding. No valid host found.
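The SAN-versus-OpenStack choice discussed above maps onto the `consumer` key of a Cinder QoS spec. A hedged example with made-up names and limits:

```shell
# consumer=front-end  -> enforced at the hypervisor (e.g. libvirt I/O caps)
# consumer=back-end   -> handed off to the storage driver/array
# consumer=both       -> enforced in both places
cinder qos-create capped_qos consumer=front-end \
    read_iops_sec=1000 write_iops_sec=1000 \
    read_bytes_sec=10485760 write_bytes_sec=10485760
```

Letting OpenStack enforce the cap (front-end) is the "easier" path the speaker recommends: the array stays open, and the limit travels with the volume type rather than with per-array configuration.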
All right, so can anyone who is an OpenStack professional tell me what that error message means? Nope, it's something with the scheduler, meaning I'm probably out of resources, is probably what it means. But let me go and take a look. I'm not out of resources yet. Yeah, but you guys are just booting a normal machine, so it shouldn't really care where it puts it. All right, so why are you guys not telling me what's going on? No, so just add -l root if you're having a problem connecting in. So I see that multiple people are starting to fail now. Yeah, no, it should be. I believe it's something to do with the fact that I'm running in a virtualized environment, so I have to find out. Because I should be able to spin up an instance still. I have plenty of resources left. That's what I'm going to try right now. You read my mind. Well, you know what? I'll do it this way. No, there's no router. It's just a VXLAN. OK, we do have a problem. Houston, we have a problem. So you connected into the Neutron container, and you looked up the qdhcp namespace, and you connected to that, and then it times out. Yeah, I'm starting to think that something is falling down in the cloud, so give me a second. I ran out of memory. What's this, no disk space? Yeah, we don't, as a practice, overcommit on memory. The host has less disk space than it expected. They're not that huge, but I may have to expand that a little bit. All right, so let me ask this question. Who has completed the lab? All right, so let's do it this way. Let's get some people completed, and then that way I can get rid of that stuff. Hey, now. Actually, it was running, but now I see it. Yeah, it's not. Yeah. I guess the memory ran out. Yeah. Basically, it was running; we can't say the exact cause. OK, you're all at the last step. OK, of course you know this works, but it doesn't work, because you guys are here. Let me come around this way, that way I don't have to reach over this gentleman.
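Two of the failure modes being debugged here can be sketched like this; the network ID and instance IP are placeholders (the IP follows the 10.1.100.7 example mentioned earlier), and both commands assume admin access on the right containers:

```shell
# "No valid host was found" is a scheduler error: no compute node passed
# the filters (RAM, disk, and so on). Check remaining capacity as admin:
nova hypervisor-stats

# With no router in the topology (just a VXLAN), reaching an instance
# means going through the qdhcp namespace on the Neutron agent container:
ip netns                                   # find qdhcp-<network-id>
ip netns exec qdhcp-<network-id> ssh root@10.1.100.7
```

In this lab the second path is exactly what started timing out once everyone piled onto the Neutron agent at the same time.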
Just trying to see what you guys are seeing. I'm like all over your screen right now. So I'm able to get into it. It's just me and me. Yeah, I think we've overwhelmed the environment. That's the problem. What's up? All right, I have to look at that. Yeah, it's not going to let him in. It should have connected immediately. Sorry, guys. So clearly, my cloud can't withstand 15 people or 30 people connected to it trying to connect into the Neutron agent. So I know, everybody. Yeah, no load tests, yeah. So I apologize for that. If you cannot get into that instance, then you have to take my word that it does work. Yeah, right, literally. Sorry about that, yeah. So anybody having problems connecting to the instance? Everybody is. In a normal world, in your cloud, if you go and try it, it does work. You're connecting through the namespace. Apparently it's not working right now. Sorry, of course it worked 15 minutes ago when I tested it all, but such is life. Anybody else hung up any other place in the lab right now, other than the mismatch? So tell me a little bit more about my mismatch on my back-end name. So somewhere else in these instructions, I give a different back-end name, huh? So those back ends don't exist in the student environment. But that's a Cinder thing. That's defined on the Cinder node. It's not actually in the... yeah, that's interesting, because I'm a bonehead. No, it's right. Because that's the name it wants to use. It's not the name that you define it as, but the volume_backend_name. So there, there. So that command is a valid command, and it should work. This is what it wants to use when you create that metadata tag, when you're setting that filter. Not that name, not the name here. This is cinder.conf. It's in the Cinder container that's on the Cinder node, which you don't know where it is. Yeah, so that's another thing about having everything in containers: you can kind of keep folks away from your environment so they don't tear it up.
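The mismatch being discussed lives in cinder.conf on the Cinder node: the volume type's extra spec has to match the `volume_backend_name` value, not the config section name. An illustrative (not the lab's actual) multi-back-end fragment:

```ini
[DEFAULT]
enabled_backends = lvm_standard,lvm_fast

[lvm_standard]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes-standard
# This value, not the section name "lvm_standard", is what the volume
# type's volume_backend_name extra spec must match:
volume_backend_name = LVM_iSCSI
```

So `cinder type-key <type> set volume_backend_name=LVM_iSCSI` would route to this back end, while using the section name would fail the scheduler's filter.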
One person got it. I don't know how, by the grace of God, but I'm sure with the help of Matt there, you guys got through it. So anyone else, I guess, finished, other than the last step of actually checking the... what's your student number? 10? All right, so we have to try and figure out who finished first. So who thinks they finished first? What's that? That's a good way of looking at it. Well, the good news is I have a lot of gifts, so everybody's going to get something, if you didn't leave already. See, that's what happens when you leave early. You've got to ask... see, that's what happens when you leave early. I know it didn't exactly work out, but you don't have to leave early on me. So I'm going to try and spin some instances down just to kind of give some other folks a chance. So give me some student numbers of folks who are done with their lab. 13, 10, 13, thank you. Thank you. 9, thank you. Thank you. All right, so there should be some more room left in the environment now. So 10, right? 10, 14, you guys are done. 13, I thought. Yeah? Does everybody agree with that? OK. It wouldn't let me delete them the first time, but everybody OK? OK. OK. I like to hear that. It makes me happy. At least you can see it does work, just not for everybody. At least I'm able to prove that to you. Yeah. Make sure when you call support about that, you tell them Walter said you can have that. They'll totally do that for you. Sorry about all the complications, guys. Yes, it definitely happens when it comes to OpenStack. Yes. Yeah, so I'm kind of the guy that customers come to when they want to really specialize their cloud or do things that are not normal out-of-the-box stuff, such as doing this QoS, or doing multi-tenant isolation, which is the lab right after this one, setting up availability zones, doing multi-region across data centers.
Really, part of my job is to sit down with the customer for two, three days and literally design their private cloud for them. And I give them back a document that actually says, OK, this is what your private cloud will look like. These are all the details: tenants, AZs, users, how we're going to set up the partitions on the servers. I mean, it goes down to the bare-metal detail. Yes. Yes. You can change it in the Horizon dashboard, or you can rerun it through the CLI to make a change to it. Those specs can be adjusted. You just have to set them again and they'll reset. But yeah, no. Because it's at the time that you create the volume that it pulls in those variables. And as long as that volume type is associated with the QoS spec that you made the change to, it will apply to everything going forward. But this is where you have to get down to granularity. If you're going to use different volume types for different people, when you have different tenants and different purposes, you have to create different volume types for them. Because you can't really share them. Because when you make that QoS change, it changes it for everything that's associated with it. And I'll actually go more into that in my next lab after this one, talking about how, when you're doing multi-tenant isolation, you kind of have to do redundant work to really accomplish it. But it is what it is for now. Are folks progressing? Yes, a little bit? Yeah, so the last step of connecting to that instance through the Neutron agent, it may or may not connect. I'll say it that way. Well, I'm going to allow you to ask me some questions. And I'm going to give you some marketing spiel. And then, yeah, that's it.
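The update path described above might look like this at the CLI; the spec ID and values are placeholders. Per the discussion, the new limits only take effect for volumes created after the change (the values are pulled in at volume-creation time), and they apply to every volume type associated with that spec:

```shell
# Adjust an existing QoS spec in place...
cinder qos-key <qos-spec-id> set read_iops_sec=500 write_iops_sec=500

# ...and confirm the new values
cinder qos-list
```

This shared-spec behavior is exactly why per-tenant isolation ends up needing its own (redundant) volume types and QoS specs per tenant.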