OK, should we get started? OK, let's go. OK, everyone, welcome to Metadata Do's and Don'ts. This is the hands-on exploration of the OpenStack metadata service. So I need a show of hands to find out where you guys are at, where you're coming from. We all know that with the Summit, the experience levels are all over the map. So just a show of hands, how many of you currently use VMware? I know it's kind of a generic thing to say, use VMware: Workstation, Fusion, vCenter. One more time? OK. All right, cool. Awesome. How many of you use Amazon Web Services? AWS? OK. How many of you use OpenStack? OK, all right, so a lot of OpenStack people in here. This is a beginner workshop, so there might be some things you already know, but you shouldn't have any issues with the labs or the exercises as far as going through them. I'm assuming that most of you feel comfortable on the command line, right? Pretty good? OK. And how many of you have ever taken a Rackspace OpenStack training class? Anyone? A few people? Did I teach your class? No? OK, it was Phil in the back there that taught it, maybe. So my name is Matt Dorn. I'm an OpenStack contributor and technical trainer. I'm on LinkedIn as Matthew Dorn, and on GitHub and IRC as madorn. How many people here are on IRC? All right, cool. So just direct message me if you have any questions. I'm always on IRC; it's always on. And matt.dorn at rackspace.com is my email. You can email me anytime you want. I always respond to everything, so just shoot me an email. OK, so first of all, I buried a secret message into this workshop, OK? It could be in the slides, it could be in the exercise link that I'm going to give you in just a moment, but there's a secret message buried in the workshop. If you find it, you get a prize. The secret message will tell you what to do, OK? So that's how you're going to claim your prize: do what the secret message tells you to do.
All right, so put that in the back of your head. Hope someone finds it, OK? So OpenStack and metadata: what are we talking about here? There's a lot of metadata stuff in OpenStack, right? There's a Glance metadata API, there's Cinder's LVM metadata; metadata pops up all the time. But what really are we talking about in this class? Well, we're talking about instance personalization. But we're also talking about instance initialization, which is kind of the same thing: instance customization, parameterized launches, instance metadata, the Nova metadata service. Sometimes people call it the Neutron metadata service, the Neutron metadata proxy, the OpenStack metadata API. User data, network data, vendor data, cloud-init, Nova key-value pairs, ec2-init, cloud-config. What's going on here? What are people talking about when they throw these terms around all the time? Because I'm getting lost. Well, the story of OpenStack metadata actually starts way before OpenStack. Ten years ago, 2006. It seems weird, right? It doesn't seem like 2006 was ten years ago. But something very special happened, something very important. This is an actual press release from March 2006. Michael Arrington of TechCrunch says: Amazon Web Services is launching a new web service tonight called S3, which stands for Simple Storage Service. It is a storage service backend for developers that offers a highly scalable, reliable, and low-latency data storage infrastructure at very low cost. This is game-changing. It's March 2006. Anyone here ever use S3? Amazon S3, AWS S3, anyone ever use it? OK, so you know how powerful this was, right? Think about this. I know it was ten years ago, but you could actually put your credit card on file, and if you were a developer at that time, you could store as many static objects as you wanted into what? We call it a bucket, right?
You could store as much stuff as you wanted in there via that RESTful API, or the SOAP API at that time. Really, really powerful. Just being charged for whatever you consume, all your GETs and POSTs and all that good stuff. And this was pretty incredible at the time. But then the game actually changed forever in August of that year, when EC2 dropped. Huge deal, OK? This is the actual press release here. Amazon EC2 gives you access to a virtual computing environment. Your applications run on a virtual CPU, the equivalent of a 1.7 gigahertz Xeon processor; you get 1.75 gigabytes of RAM, 160 gigabytes of local disk. You get the idea. You pay just $0.10 per instance-hour. You get billed as you use it, and you can get as many virtual CPUs as you need. Just take your mind back ten years. Think about this. This was the moment. And this was actually in beta when it first got released. Did anybody sign up for the beta when it first came out in August 2006? Anybody use it then? So why was this so cool? It gives power to developers, right? Developers don't manage a data center. They don't want to worry about the hardware. They can quickly scale their application during their time of success. Go ahead and spin up some instances, and if one of them fails or the configuration gets messed up, who cares, right? Spin up another one. When you signed up for that beta in August of 2006, you were presented with AMIs, Amazon Machine Images, right? These were like snapshots, so to speak, or hard drives on ice. No more going through the Fedora installer, the CentOS installer, the Ubuntu installer, the blue screen. Not to say there weren't things before that that would let you bypass and automate that, but you didn't go through it. You spun up the instance; your OS was there. But there was an issue. Think about this.
Maybe before Amazon released this to the public, they were thinking to themselves: well, if we have these Amazon Machine Images, and someone wants to boot three or four copies of one image, we have to make sure of something here, right? First of all, same IP address. Maybe a static IP configured in that AMI; that wouldn't make sense. Same hostname. The goal is to make these things unique and secure. So how are we going to do that? We want a unique IP. Well, I can tell you a way you can solve that. How can we get a unique IP? What's a way in our environment that we could get a unique IP? Exactly right. We could do DHCP. But what about a unique hostname, or injecting a public key? Why do I care about the key? I want to be able to securely connect to that thing, right? And without being prompted for a password. That would be nice. Have every instance be unique, with a unique hostname, the public key inserted into the SSH authorized_keys file, a unique IP. Well, Amazon said: this is how we're going to do it. Let's bake some initialization scripts into our AMIs, and as that instance is spinning up, have it actually curl. You guys know curl, right? A utility for talking to a server over a variety of protocols, but commonly over HTTP. Curl 169.254.169.254, and get back a unique hostname, and a public key, perhaps passed in when the person booted the instance. And then when I boot that other instance, guess what? When that instance curls 169.254.169.254, it's going to get back a unique hostname and a unique public key, and so forth and so on. So once I actually SSH into that instance, back when I signed up for the Amazon beta, I'm good to go. I could manually go and curl that IP address, just like this, with that 1.0, which means the API version. And literally, this is August 2006, right? So 1.0 is a reference to the fact that this was the first version of that metadata service.
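The curls the speaker is describing look roughly like this. This is a sketch: these commands only work from inside a running instance, and the exact paths under the 1.0 tree may differ slightly from what's shown here.

```shell
# Ask the metadata service about ourselves (only reachable from inside the instance)
curl http://169.254.169.254/1.0/meta-data/hostname
curl http://169.254.169.254/1.0/meta-data/local-ipv4
curl http://169.254.169.254/1.0/meta-data/public-keys/0/openssh-key
```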
And I would get back my public key here. I could also curl for a hostname and get back my hostname. I could curl for the local IP if I wanted to, if I didn't want to do an ifconfig or ip a, and I'd get back the local IP address. Now, what people started doing when they realized there was this little metadata service thing inside the instance is this: they would spin up an instance, SSH into it, maybe create some kind of customized configuration file or a script that would use some data that lived at the metadata service, and have that information output into a configuration file. Maybe that would be used for something, who knows. Then they would take a snapshot or a backup of that image or instance they spun up. We call it bundling. Anyone here ever do a bundle? Anyone ever do a bundle on Amazon Web Services before? Right? So they would run this bundle command on the actual instance and back up a snapshot of that AMI to S3. And now they can boot as many of those AMIs as they want. But this was a real problem, OK? Because at first it was really awesome, but after people used this for a couple of months, what happened was they ended up with too many AMI bundles. If they were to list all the bundles they currently had, they'd have one for every time they had to make a change: an sshd_config tweak, installing Apache. Every time they want to do something, they go into the instance, they run the bundle command, they back it up, and they put it in there. And this is annoying, OK? Pretty awful, right? And then something happened in late 2006. You guys want to know what it was? Anyone know? No one knows? The Amazon Zune came out. I'm sorry, the Microsoft Zune. Anyone here know what the Zune is? Anyone here own a Zune? Person in the front, he's got a Zune. Revolutionized the portable music industry forever, OK? Something else happened in 2006 too. That was the biggest deal, OK?
But after that, something else happened. This is actually December 5th, 2006. So OK, we have the launch of EC2 in August 2006, and then in December 2006, Amazon says: guess what? In addition to that metadata you can get out of the metadata service, you can actually supply additional data to instances at launch time. This blog post, by the way, is still up, and you can see it. What this enabled you to do, this introduced the concept of user data. It introduced the concept of being able to boot an instance and specify some data. You're probably looking at this and thinking, how the heck would I use this? In other words, I can specify some data, or I can specify a file with data in it, and then if I SSH into that instance after booting it, I can curl this link right here, and guess what? I get back the data that I specified on boot. Again, you may be looking at this thinking, how would I use this? But Amazon EC2 users were very smart. They were using cutting-edge technology at the time. What they started doing was this: they'd spin up an instance, SSH into that instance, and then create a bash script that curled that user data, and it said: if you see this in the user data, do all these things; if you see that in the user data, do all this. So, based off a certain keyword. For a web server, for example, let me ask you, what might we have it do if it's a web server? Install what? Apache, nginx, something like that, exactly. If it's a database, maybe Postgres or MySQL. Then we'll take a snapshot of that with our bundle, put it into S3, and then go ahead and boot it with the web server keyword, and it will become a web server, because we'll say: if it's a web server, install Apache and configure it. Or hey, here's a database. Or here's some kind of service, right? So this was actually a pretty neat way of taking advantage of that user data at the time.
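A minimal sketch of the kind of keyword-dispatch script the speaker describes. In a real instance the role would come from curling the user-data URL; here a function argument stands in for it, and the package names and role keywords are purely illustrative.

```shell
# Hypothetical user-data dispatcher: branch on a keyword the booter supplied.
# On a real instance the role would be fetched with something like:
#   ROLE=$(curl -s http://169.254.169.254/1.0/user-data)
dispatch() {
  case "$1" in
    webserver) echo "apt-get install -y apache2" ;;      # would really run this
    database)  echo "apt-get install -y mysql-server" ;; # or postgresql
    *)         echo "no role matched" ;;
  esac
}
dispatch webserver
```

Bundle an image with a script like this baked in, and the same AMI becomes a web server or a database depending on nothing but the user data passed at boot.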
And 2007 was the year of Amazon metadata. Curling that 169.254.169.254 address in 2007 (January 19th, that's the actual date this API version came out) could get you a public IP. Super powerful, right? Your external IP address. So you could, at boot, have a script go out, get the external IP address, and talk to some sort of dynamic DNS service, right, to resolve a name to an IP address. You could also get a block device mapping. If there was an EBS, or Elastic Block Store, volume attached to that instance, you could find out that it was /dev/vdb or /dev/vdc, and then what? Maybe your script might mount it, or something, right? So this was pretty cool stuff. Then in May 2008, somebody took advantage of that user data feature and came out with this ec2-run-user-data init script. This was a special init script that you could download. If you baked your own AMIs, you could bake this into your AMI, or you could download an AMI that someone baked that already had it in there. What this allowed you to do was pass an actual shell script, and this init script would go and run it: it would make sure to put execute permissions on it and run it. And guess what? If you rebooted your instance, it would not run again. It knew when it had run. So think about how this is a game changer. And you can look at this script: it's installing a LAMP stack, something you probably would have done back in the day. Maybe not too popular now with people on the cutting edge, but you get the idea. And this is where we get this philosophy of instance personalization. This is where we start treating our instances like they're robots reporting for duty, so to speak. We have an instance spin up. It asks its name: who am I?
It gets a name, gets an IP address, grabs an SSH key, updates DNS, mounts a specific EBS (Elastic Block Store) volume, installs Apache, opens up port 80, copies some files it may need from S3, starts the services, and the instance is ready to go. This is where people really started taking advantage of this. And in 2009, something called ec2-init came out, which became the cloud-init project, basically the application we find installed on almost every single cloud instance we spin up, whether we're on Google Cloud or Azure, Rackspace, AWS. 2009. And this really changed the game, because it introduced the concept of cloud-config. You can see that this is in YAML format, right? I'm sure most of us are pretty familiar with YAML. The reason this is really amazing is that it doesn't matter if I spin up an Ubuntu instance, a Fedora instance, a SUSE instance, right? A package update is going to be a package update. I mean, think about it: what am I probably going to use to update the packages if I'm on Ubuntu? Yeah, APT, right? If I'm on SUSE, it's going to use zypper. If I'm on CentOS, maybe yum; Fedora, maybe DNF; whatever you have. It doesn't matter, it's an abstraction. It doesn't matter what type of instance or flavor I spin up, OK? And you can see the packages here: I'm installing Vim and Git there. And here I'm writing a file, the /etc/hosts file. So, pretty powerful stuff. Now, in 2009, 2010, this is kind of when the open source cloud wars started, right? People wanted to get away from that vendor lock-in with Amazon. We want to start creating our own cloud software, Eucalyptus being one of the first, and one of the ones that started creating their very own metadata service. They actually used the same address, 169.254.169.254.
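A quick aside: the cloud-config on the slide the speaker is describing is along these lines. The directives are standard cloud-init ones; the file contents are illustrative.

```yaml
#cloud-config
package_update: true   # apt, zypper, yum, or dnf -- cloud-init picks the right one
packages:
  - vim
  - git
write_files:
  - path: /etc/hosts
    content: |
      127.0.0.1 localhost
```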
So if you spun up instances in Eucalyptus, you could get metadata at that same address. And of course there were OpenNebula, Apache CloudStack. Look, we all know the one that's really relevant now, OK? So how does OpenStack's metadata service work? Well, let's talk about how it works in general if you're using the traditional OpenStack metadata service. You spin up an instance, and we want to make sure that instance has cloud-init installed on it. It sends a GET request to 169.254.169.254. The DHCP server injected a static route into the instance, which directs that 169.254.169.254 traffic to the DHCP namespace. Inside there is a proxy bound to port 80. It's listening, waiting for that request. It gets the request and, over an IPC socket, talks to the Neutron metadata agent. The Neutron metadata agent does a lookup on the IP address the request is coming from, so it can find the instance ID and the tenant ID. Then it uses this thing called a signature, which is like a secret code that you set in the conf file mixed with the instance ID, it's like a hash, and it sends that to the Nova API. The Nova API can then look into the database and fetch the metadata or user data, and the instance, or individual, or script, or whatever requested it, gets the response back. And you can see that cloud-init will actually write that to /var/lib/cloud/instances/, under the instance UUID, OK? But guess what? OpenStack comes out with this thing that eliminates all this complexity, to say: hey, you can do this if you want, but if you don't want to, that's fine. Config drive. Why don't we just bring that data closer to the instance by attaching a virtual CD-ROM, an ISO, to that instance? That information gets written to the CD-ROM, and during boot, cloud-init will actually mount the CD-ROM.
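Back on the metadata path for a second: that "signature" step can be sketched like this. The agent computes an HMAC of the instance ID using the shared secret from its config file, and nova-api recomputes the same HMAC to verify the request really came through the proxy. A rough illustration, assuming SHA-256 and with a made-up secret and UUID:

```shell
# Recreate the metadata proxy's instance-ID signature by hand (illustrative values).
SECRET="sekrit"                                      # the shared secret from the conf file
INSTANCE_ID="3b8f7de2-0c44-4a3e-9a7f-2f6c1d0e5a11"   # made-up instance UUID
SIG=$(printf '%s' "$INSTANCE_ID" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "$SIG"   # a hex digest; nova-api recomputes this and compares
```

Because both sides share the secret, nobody can ask nova-api for some other tenant's metadata just by guessing an instance ID: they'd also need a valid signature.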
The config drive shows up as /dev/sr0; cloud-init mounts it to a temp folder, copies the contents, that metadata, that user data, onto the instance, and then unmounts it. It's all during the initialization stage, OK? Simple as that, very quick. Now, every single cloud you hop on is going to have some sort of data source. If you spin up instances on Google Cloud Platform, you've got the Google Compute metadata service, which is very similar to the 169.254.169.254 thing: something you request metadata from over HTTP. Microsoft Azure has its own config-drive-style CD-ROM data source, and DigitalOcean has the droplet metadata service, OK? So everybody has these data sources. Now look, before we hop into the workshop, there are four things you want to understand. If you get this, you're good to go. You don't even need to understand how the metadata service works or any of that other stuff. Here we go: we've got metadata, we have user data, we have vendor data, and we have network data. Now, vendor data and network data I haven't mentioned yet; that's newer stuff. We're going to talk about it, though. Metadata is stuff like an instance ID, a random seed, a private IP address, or a public key. User data is stuff like cloud-config, or a script in any language, as long as you have the interpreter installed on the image, right? That could be a Ruby script, a Perl script, a Python script, whatever. Or you can do a #include, the pound include. Anyone familiar with that? Right: you can send a link, and it will actually go grab and pull down that user data. Other than that, user data is usually restricted to 16K, just so you know. If you have some kind of huge script bigger than that, you'd use an include and fetch it from somewhere, whether that's Swift or S3 or wherever you want. Vendor data. Vendor data is vendor-provided information. Think about this: if you run a cloud, let's say you want to get rich off this OpenStack thing, right?
Every time someone spins up an instance, regardless of what project they're in (and by project I mean an account or tenant, whatever word you want to use there), you can have it provide maybe a registration code, or information on local source mirrors, or maybe just a greeting, whatever. Everybody gets the vendor data, right? And then you have network data. Network data is network information applicable to the instance. It actually grabs this from Neutron. So for example, let's say you decided to disable DHCP in your environment. Guess what? You could have the config drive mounted, and cloud-init will grab the network data. Maybe you have some ports where you defined static IPs, and it will set those IP addresses. You don't even need DHCP running. Or let's say you do have DHCP, but you have a couple of interfaces, say eth1 and eth2, and you want those to be static. You can use network data for that. So this is relatively new stuff, but it's awesome because it gives you some really cool control. Now, are we ready for the hands-on? OK, so this is how we're going to do it. As you guys know, there are three ways we can interact with OpenStack, so we could technically run the workshop in three ways. We could do it from the Horizon dashboard, right? What are the other two ways? You guys know? API or CLI, right? So we could do the Horizon dashboard, the CLI, or the API. They all use the API. But in here, we're just going to focus on the CLI, OK? So with that said, you're all going to get one of these sheets. This sheet has an IP address on it. It also has a username and password. Go ahead and SSH into that instance. If you have Windows, you can use PuTTY. If you have a Mac or Linux, you know you have a terminal built in, OK? And with that said, in case you're curious about where the heck you're actually SSHing into: you're all going into a console VM. I'll call it the console VM.
The reality is that for this workshop, you don't necessarily have to SSH into anything. That's an important thing I want to stress. You could just install the clients on your own machines. If you're running Python on Windows, you can do pip install python-openstackclient. If you're on a Mac, you can just install the clients on there; it's fine. But we don't trust that your systems are set up properly, so what you're doing is SSHing into these little environments that are clean, OK? You're going to install the python-openstackclient. That's all it's used for: interacting with OpenStack. Nothing more, nothing less. What's the point? You don't need to SSH into anything to interact with OpenStack. I think that's the biggest point I want to stress there, OK? A lot of people are under that impression. So all of you are going to go into this little console VM, and from that console VM you're going to interact with an OpenStack environment, OK? Now, if you could open up your web browsers to summit.cloudtrain.me, I will be handing out the credentials, OK? Are we good to go? So let me just show you the website real quick. On the website, I have a link to download PuTTY, in case you don't know how to Google it or something. You can download it right there, and the slides are there if you want to grab those, and then I have a little zip file for you. If you want to download everything, you can just click that button and you get a zip file. The exercises that we're going to go through are one through six. After those exercises, I have a ton of examples that you can take home with you. These are good examples of cloud-config. I have links to the Python modules that each cloud-config script actually references, and I'm actually working on a blog post right now that will outline every single cloud-config Python module, which we'll talk about a little later.
It describes what directives you can use and all that kind of good stuff, so it'll be like a mega-post. But in the meantime, feel free to grab all this stuff, and also look for the secret. Remember I told you about the shared secret, OK? The shared secret can be in the slides, in the exercises, or in the examples, and once you find the secret, it will tell you what to do. So I want to throw that out there to you guys again. So let's go ahead. Can you guys help me hand these out, Rackers? I forgot to give them to you before, I'm sorry. Look, if you want to go ahead and start going through stuff, that's fine. Feel free. But I would probably recommend that you wait: I'm going to go through everything and walk you through it, so if you could wait, that would be great. I'm not going to stop you from going ahead if you want to mess around with it, but I'm going to be describing everything, and we can go through it together. OK. Again, if you can hold off on doing anything, just make sure you get a connection and get a prompt. If you cannot connect, you can't get in, it's not working, just raise your hand and one of the Rackspace people in the back will definitely help you out. We've got a gentleman in the front who's having a little bit of an issue. Remember, if you have PuTTY, don't forget to put the username in. I think PuTTY will actually prompt you, right? It will say login as. Remember, the username is openstack and the password is openstack. OK. Are you guys connected? Everyone's connected, OK? Yes? OK, good, awesome. All right. Now check it out. The first thing we're going to do is flip over to number one, which is setup. You may be wondering, what the heck do we have to set up here? Well, I want you to remember where we are. Remember where we're logged into right now. We're logged into a console.
That's right. It's just a little box that we're using to connect to OpenStack. It's not OpenStack. But let me ask you this. We're going to install the python-openstackclient. Does anyone here know what comes with the python-openstackclient? What comes with this thing? What's that? Sorry, I can't hear you; it's hard to hear back here. It's a CLI, right? What comes with it? What would be the commands that come with it when you install it? Yeah, nova. So basically, you guys remember back in the day, if you're not totally new to OpenStack, you have what? python-keystoneclient, python-glanceclient, right? python-neutronclient. All of those are going to come along with the python-openstackclient. But we also get the ability to type openstack, right? That openstack CLI, where we can do a lot of those same things just using the openstack command. But before we actually install that thing, we need to set up the Ubuntu Cloud Archive repository for Liberty. You may be wondering why we have to do that. Look, we could of course install this python-openstackclient with pip or something like that, but what we want to do is make sure we get the right version of the python-openstackclient. Remember, there's a relationship, don't forget this, between the version of Ubuntu you install and the OpenStack-related packages you get when you do apt-get install. So we're just going to make sure that we set our repository here to the Ubuntu Cloud Archive. And the first thing we need to do is install the Ubuntu Cloud keyring for those GPG keys, so we bring down the packages securely.
We also want to make sure the software-properties-common package is installed, which basically gives us the ability to run add-apt-repository. Yes. So go ahead and install ubuntu-cloud-keyring and software-properties-common. And now we're going to add the Cloud Archive for Liberty, which will give us the appropriate python-openstackclient that I want to use in this workshop. And of course we want to do apt-get update to update the package index after setting our new repo. How's everybody doing, OK? You guys all right? Good. All right, now we're going to install python-openstackclient. These things are pretty slow; there's only 512 megabytes of RAM on these boxes. Now we're going to create our credentials file. You guys know we call it an openrc file, right? It doesn't have to be called openrc; it can be called whatever. But you realize why we're doing this. We're doing this because we have the CLI installed. What happens if we don't have credentials or anything set in the variables and we try to use the CLI? What happens? It asks for it, right? It says: I have no way to authenticate. I need information to send to Keystone, right? So I'll get a token back and send that off to the respective service. So we're going to set this file up, and just copy and paste it. And you might be thinking, what's this my-node-IP thing? What's this my-public-IP thing? What is all this stuff? Every one of you is in your own project, and you're in different groups that have different OpenStack environments. But you're all going to be in your own project, with your own myuser and the respective number. And I've set these variables for you, my node IP and my public IP. I've set them up for you already, so you don't need to worry about it. Just go ahead and copy that in. Create the credentials file there, or directory, just like that. And then go ahead and source this user file that you just created.
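The credentials file being pasted in looks roughly like this. Everything here is a placeholder sketch: the exercise pre-fills the node IP and your user number for you, and the exact variable values depend on your environment.

```shell
# openrc -- export these so the CLI knows how to authenticate to Keystone
export OS_AUTH_URL=http://MY_NODE_IP:5000    # placeholder; pre-set in the lab
export OS_PROJECT_NAME=myproject1            # your project, with your number
export OS_USERNAME=myuser1                   # your user, with your number
export OS_PASSWORD=mypassword                # placeholder
export OS_REGION_NAME=RegionOne
```

Source it with `source openrc` (or whatever you named the file), and every openstack/nova/neutron command in the session picks the credentials up from the environment.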
And now go ahead and do an openstack token issue. We're using the openstack command, right? The newer one. And everybody should get back a token. Give it a second, but we should get back a token. OK, I got mine. You guys get your tokens? OK, that shows us that obviously we're able to communicate with Keystone, right? Which is good. Now we want to do a glance image-list to get an idea of what images we have in our environment. And we have two images: CirrOS and Ubuntu 16.04. Anybody here used CirrOS before? OK, so you guys know the whole CirrOS idea, right? Would you use CirrOS in production? No, obviously it's just for testing stuff. But we're going to use it to explore metadata at first, and then we're going to boot an Ubuntu instance. It's important that you understand that CirrOS does not have cloud-init installed on it. It has something called cirros-init, which is a cooked-up version especially for CirrOS. It does not have cloud-init on there. Now, the Ubuntu 16.04 image that we're going to run does have cloud-init. But we're going to explore the metadata service via CirrOS first. It's lightweight, it comes up quickly, and it's good for just checking out the environment and looking at what we have set. So go ahead and do a nova flavor-list, and we should have one flavor that I set up: 256 megabytes. That's it for the Summit. Now do a nova list. You should have no instances at all set up. All right, now let's do a neutron net-list, and guess what you should see? Does everybody have the public network? Can everybody see that? OK, good. Now check this out, just so you know what the heck you're looking at. Guess what that is? That's your ticket out of this world: that public net, right? That public network you see sitting there was created to talk out. It's a provider network, right? Can someone tell me what's the opposite of a provider network?
That's exactly right: a tenant network. Something we're going to create that's owned by our individual project. So what we're going to do here is a neutron net-create private, to create a private tenant network, OK? We should all have the private network. Now what's the next thing we have to do? Create a subnet. So we're going to create the subnet. Notice that we're specifying a DNS name server so that /etc/resolv.conf inside the instance can resolve names to IP addresses, right? We're setting it to 8.8.8.8, Google DNS. So I'm going to go ahead and do that. How are you guys doing so far, everyone OK? There's our network, there's our subnet. When we create that subnet, can someone tell me what gets created on that subnet? You can see it right there. DHCP, right? 192.168.1.2. Who's going to get 192.168.1.1? The router, right? It's going to be the gateway IP address. Once I create the router, it's going to get 192.168.1.1. So we do neutron router-create myrouter to create your router there. If you look at the next line of the exercises, I'm just going to show you this, OK, and then I'm going to run through it, so don't feel like I'm going ahead. Neutron router-interface-add myrouter privatesubnet: we're going to add that private subnet to the router. And what happens as soon as we do that? An interface, right? It appears inside the router: 192.168.1.1. Then we're going to set the gateway of the router to what? The public network. Bam, just like that. And depending on which instance you are, it's probably going to get 10.0.1.-whatever: four, five, six, seven. That doesn't really matter, though. Just so you know what's going on behind the scenes. And then finally, in the next section, we're going to boot an instance connected to that subnet, and it will get 192.168.1.3, OK? Just a visual to show you what's going on here. I'm going to flip back and just run my commands, OK? Everyone OK?
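The networking steps just narrated, collected in one place. This is a sketch: the names (private, privatesubnet, myrouter, public) follow the exercise, and exact option spellings may vary with the client version.

```shell
# Tenant network, subnet with Google DNS, router, and uplink -- as walked through above
neutron net-create private
neutron subnet-create --name privatesubnet --dns-nameserver 8.8.8.8 private 192.168.1.0/24
neutron router-create myrouter
neutron router-interface-add myrouter privatesubnet   # router takes 192.168.1.1
neutron router-gateway-set myrouter public            # uplink to the provider network
```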
Everyone have this configuration right here? Yes? We're good? Awesome. Okay. We're going to boot the instance now. So if you flip over to number two, exploration, we'll now be able to boot a CirrOS instance. So go ahead and copy this. We're giving an image, CirrOS, the flavor, summit, and the name is my-first-meta-instance. Is this anybody's first time booting an OpenStack instance? Anyone's first time? One person in the back? Congratulations. Come on, it's pretty awesome. I remember my first instance. It failed. No valid host? No valid host? Okay. So if I do a nova list, I should be active, and I can run nova console-log just to see what's happening here and that everything booted up okay. And there's my CirrOS login screen, but obviously I can't log in just from looking at the logs. I'm going to go ahead and get the VNC console. So nova get-vnc-console my-first-meta-instance novnc, and that's going to give me a URL, and I'm going to paste that URL into Internet Explorer. I can only use Internet Explorer, right? I hope you guys have it installed. Okay, whatever. So go ahead and copy the link into your browser there. That's pretty small. Just like that. The username is cirros. The password is cubswin:) — with a smiley face. Now don't forget this. Look, it's really frustrating to work with noVNC in the web console. It's super frustrating. You can't copy and paste, and going up and down is painful. Look, there's a little tidbit there. If you're on Windows or Linux, you can do Shift+Page Up and Shift+Page Down. If you're on a MacBook, it's Fn+Shift+Up and Down to scroll in the console, if you want to do that. Now here's the deal. Config drive is typically enabled by default with Nova, right? And I'm going to prove to you that it is. When we boot an instance, we can go ahead and do a sudo blkid. Oh, I don't have it in mind. Let's see.
Oh, I knew I was going to do that in the wrong session here. sudo blkid. And we should see /dev/sr0, right? So you'll see /dev/sr0, which is your optical drive — your config drive. And we're going to go ahead and mount it and see what's on it. So we're going to do sudo mount /dev/sr0 and mount it to this mount directory here. Yes? It's like an optical disk. The reality behind the scenes is that it's just a file, but it appears to the VM as an optical disk. By default, the nova.conf configuration file will enable config drive. We can disable it by setting the config drive option to false, but by default it's going to attach the config drive there. Having some issues here with the caps. sudo mount /dev/sr0, just like that. And now if I actually go and look — yes, it might be the caps lock, the forward slash thing. If you do a forward slash and it becomes a question mark, sometimes it's because of the caps lock, but I think it takes a second to register with the console, it seems like. Anybody here having an issue typing into the noVNC console? A few people, okay. Is this caps lock on? Okay. Try hitting caps lock a couple of times with the VNC session open. Hit it, toggle it a couple of times. And look, if that doesn't work, just hit the Ctrl+Alt+Del button on the top right and reboot the instance, and it should refresh everything. I don't like working with the noVNC console here, but that's our option when we have this many people. Sure. Is there any way you can get a mic or something? A mic? I can't hear you, I'm sorry. Is there any way to open this CirrOS VM's CLI — to get into the VM we were working on before through SSH? No, but you could hop onto the actual OpenStack environment and mount it in there.
But you wouldn't be able to mount this from the VM that you're actually SSHed into right now. You're SSHed into a — was that the question? No, yeah, I understand that, but I'm saying right now we are opening this VNC viewer, right? Through the browser, and then going there and logging into the CLI, right? I'm just saying, to log in to this VM, can we get to it from our first VM, where we SSHed before? No, the first VM that you SSH into is not in the OpenStack environment. Okay. Yeah, remember, it's called the console. I got it. Right, yeah, it's totally different — it's not in the OpenStack environment at all. Okay, okay, all right, thank you. That would be easier though, right? Yeah, yeah. I agree, I thought about that. That would be easier than using noVNC, so it's a good point that you brought up. All right, thank you. Yeah, sure. So what I can now do is cd into the mount point, and you see how you have two directories here on config drive: you have ec2 and you have openstack. What do you think the ec2 stuff is for? Any guesses? Why would you want to have EC2 metadata? I mean, look what happens if I actually go in there and do an ls. What the heck is that? Any guesses as to why you would want this? What's that? Yeah, it's compatibility with Amazon. If I have scripts that used to talk to Amazon's EC2 — like I used to have images on Amazon that have baked-in scripts — I could actually use those if I wanted to. Now, it doesn't go too far back here, but if I wanted to, I could, right? If it was actually compatible. Now notice — this is another thing to take note of — you have 2009-04-04 and latest. Let's go into the openstack one. If we go into the openstack directory here, you can actually see that we have all these API versions of the metadata service API, and then we have latest. Well, here's your first don't in terms of meta-dos and don'ts.
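To make that layout concrete, here's a local mock-up of the directory tree you see after mounting the config drive. The version names here are examples from this walkthrough; a real drive lists whatever versions your OpenStack release supports, and of course none of this is the real drive — just directories created for illustration.

```shell
# Mock up the config drive tree: an ec2 tree for Amazon compatibility,
# and an openstack tree for the native metadata format.
mkdir -p configdrive-demo/ec2/2009-04-04 configdrive-demo/ec2/latest
mkdir -p configdrive-demo/openstack/2015-10-15 configdrive-demo/openstack/latest

# List the trees, the same way you'd poke around after mounting /dev/sr0.
ls configdrive-demo
ls configdrive-demo/openstack
```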
Your first don't is: if you're going to make a script that takes advantage of data that lives in here, you don't want to use latest, because guess what's going to happen? OpenStack is going to come out with another version that has a whole bunch of new features, or maybe they write the key-value pairs a little differently or something, right? And then your script will no longer understand what latest returns. So you want to make sure you use a specific date. In this case, 2015-10-15 is what you would have your script talk to, and the contents of 2015-10-15 should be the same as latest. The latest directory just shows you what the newest version of the metadata API contains, but those two should be the same right now. So when you make your scripts, make them against a specific date. Now, if we go into latest, we have metadata, network data, and vendor data. How come we don't have user data in here? Any guesses? Yeah, we didn't pass any user data on boot. By default, remember what we specified when we booted the instance: we just specified the image and the flavor. That's it. We didn't even have to give a network — we only own one network in our project, right? So we didn't have to provide it. It assumed: oh, okay, you have one network, that's the one you want to go on. So by default, you get metadata, network data, and vendor data. Now, let's go ahead and take a look at the metadata. Notice all this stuff we have in here. First of all, you have an admin password. That's a random password. It does nothing when you're using a hypervisor like KVM with libvirt and you don't have any kind of admin password injection set up — that's all extra stuff you have to configure. A vanilla OpenStack environment will not use that random admin pass at all. You can set it up to use it if you want, and set your root password to whatever that random admin pass is.
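Here's a small local demonstration of why pinning to a dated version is safer than latest. This is a sketch: on a real config drive these are directories maintained by Nova, not symlinks you manage yourself, and the version strings are just examples.

```shell
# A script pinned to 2015-10-15 keeps working even after a newer
# metadata version ships and "latest" moves on to it.
mkdir -p meta-demo/2015-10-15
echo '{"uuid": "example"}' > meta-demo/2015-10-15/meta_data.json
ln -sfn 2015-10-15 meta-demo/latest     # today, latest == 2015-10-15

# A new release adds a new version with a possibly different layout...
mkdir -p meta-demo/2017-02-22
ln -sfn 2017-02-22 meta-demo/latest     # ...and "latest" now points there.

# The pinned path is untouched; anything reading "latest" sees the new layout.
cat meta-demo/2015-10-15/meta_data.json
```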
You can configure it to do that, but it's not going to use it in a regular vanilla OpenStack environment. The random seed — anybody know what the random seed is? Any experienced people know about the random seed? Any guesses? Where do you think it came from? SSH? Not SSH. Look at the random seed. It's a random string of characters there. You guys know /dev/random and /dev/urandom, right? This comes from /dev/urandom on the actual compute node where that VM resides. Nova pulls some randomness out of that and uses it to seed /dev/urandom on the instance, because we know that entropy — randomness — is kind of hard to come by on VMs, right? There's no physical hardware generating entropy in there. So again, that random seed comes from /dev/urandom on the compute node, and it's used at boot to seed /dev/urandom on the VM itself. Whoa — who dropped the mic? Come on. The use for this? Well, the first use case is seeding /dev/urandom on your VM, which just makes it more random. The second one is that if you're an app developer and you need some randomness, rather than pull from /dev/urandom or whatever, you could just pull from that if you wanted to. What else do we have in here? We have a uuid — that's the UUID of the instance. We have the availability zone; nova is the default availability zone. We have the hostname and the launch index. Anybody know what the launch index is? Any guesses? Anyone ever used launch index before? No one's used launch index? It's not a popular feature, but anyway. If you were to boot five instances — if you were to say nova boot with a max count of five — you know what it does? Every instance gets a number. The first one gets zero, the second one gets one, then two, three, four. What's the point of that?
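You can see the shape of that trick locally: pull some bytes out of /dev/urandom and base64 them, which is roughly what the random_seed value in the metadata looks like. This is a sketch of the idea, not Nova's actual code path.

```shell
# Grab 32 bytes of kernel randomness and base64-encode them, the way
# the metadata random_seed field is delivered. Run it twice: the two
# values differ, because each read pulls fresh randomness.
seed1=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
seed2=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
echo "$seed1"
echo "$seed2"
```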
Well, you can create a bash script that branches on it. You can say: okay, if it's number three, do this; if it's number four, do this. So it's just a nice little convenience: if you boot a batch of VMs, you can use the launch index. Then you have a project ID — that's the project you belong to, otherwise known as your account — and then you have the name of your instance, okay? So that's config drive. But guess what? If you have the Neutron metadata service — or Nova metadata service — properly installed in your environment, guess what? It's going to work as well. That 169.254.169.254 address is also going to work, in addition to config drive. Now, the number one question in your head after realizing that should be: well, which one is cloud-init — or cirros-init in this case — going to use? Is it going to use 169.254.169.254, or is it going to use config drive? Well, that's what we call data source priority. And cirros-init, by default, uses config drive first. And if config drive's not there, what do you think it grabs the data from? EC2 metadata. That priority is actually outlined in the configuration file — for cirros-init, in this case. Is it going to be the same? Well, not always. Pretty much it is, let's say. Okay, let's just say it is. But I'll tell you what's not the same: you cannot store files with the EC2 metadata service. It won't store files — and you can actually inject files on boot, which is one of our exercises. And what else? It doesn't know how to do network data yet. With the metadata service at 169.254.169.254, you can't get network data. Only config drive has network data. They'll probably fix that, but most people just rely on config drive. Always go config drive, I think. It's more convenient, obviously. How much time do we have left? What are we working with here? What time does the session go till? 6:10? Yeah, okay.
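Going back to the launch index for a second: the bash-script idea above might look like this. It's a hypothetical sketch — LAUNCH_INDEX is passed in as an argument here, standing in for the value you'd actually read out of meta_data.json on the instance.

```shell
# Branch on the launch index of a batch boot: give each member of the
# batch a different role. In a real instance you'd parse the value
# out of meta_data.json instead of passing it as an argument.
configure_node() {
  case "$1" in
    0)   echo "launch_index 0: configure as the database node" ;;
    1|2) echo "launch_index $1: configure as an app server" ;;
    *)   echo "launch_index $1: configure as a worker" ;;
  esac
}

configure_node 0 >  node-roles.txt
configure_node 3 >> node-roles.txt
cat node-roles.txt
```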
What's that? Okay, six maybe, yeah. Okay. Well, we might not get to everything, but let's see — we've got vendor data. What's the vendor data? Remember, I told you what vendor data is. Vendor data is the data that you, if you wanted to get rich off the cloud, could provide to everybody in your environment when they boot an instance. The last people I saw taking advantage of vendor data: Rackspace takes advantage of it. They have a whole bunch of networking information that they put into their vendor data, because I don't think they have network data enabled yet. DigitalOcean, when you spin up an instance, they have vendor data, which is pretty cool. And again, why would you use vendor data? You might want to give a special message to every single instance. It doesn't matter if you're hosting Starbucks, McDonald's, Burger King, whoever you have spinning up instances, right? They're all going to get the same vendor data. Again, it might be local source mirrors, it might be a registration code, it might be "thank you for being a loyal customer, here's a coupon code" — doesn't matter, exactly right. It's controlled by the cloud operator, the person who's trying to get rich. Well, everyone's trying to get rich, but you know what I mean: the person who's trying to make money off OpenStack, right? The cloud operator. Okay, so check this out — do an ip route inside the instance. Check out the 169.254.169.254 static route that got injected here. This was injected by the DHCP server. And where's it going? Where's it headed? To the DHCP namespace. What lives in the DHCP namespace? The metadata proxy. So we can actually curl 169.254.169.254/openstack/latest. And just like I mentioned before, network data is not appearing in there, because they haven't put it in there yet. Maybe they never will, who knows? Everyone uses config drive.
But I can see that I have meta_data.json. Check this out — there's a big difference between EC2 and config drive. One of the other differences is this: look what happens every time I curl 169.254.169.254. Do you notice something changing? The random seed. Every time you do that, it's fetching some randomness from /dev/urandom on the compute node. Okay. That's one pretty significant difference. The other one is what we're about to show you. Think of the metadata service as real time — you can actually do real-time manipulation. What do I mean by that? If you flip back to the exercises here and click on nova meta — I know I skipped over a line here; it's not a big deal, it's not going to hurt anything if you don't do that — but look at this line: nova meta my-first-meta-instance set openstacksummit=awesome. What I'm saying here is: can you please add this key-value pair to the instance? And look what happens when you actually do it. Again, you're not doing this from inside the instance. You're doing it from your shell, like this. And now look what happens after I do that. I come over here, fetch meta_data.json again, and do you see that? My key-value pair then appears inside meta_data.json. So it's actually updated in real time. Do you think config drive does that? No, config drive cannot do that. Even if you reboot the instance, it will never appear in config drive. Once that config drive is written, it's written for the life of that VM. You can also remove a key by doing nova meta my-first-meta-instance delete with the key name, just like that, and then you can see over here that it goes away, which is kind of cool. Any questions on that at all? Okay, so let's move on to the next section. Anybody find the secret key? Did anyone find the secret? You did? Okay, don't say anything. You did what you're supposed to do? Okay, right before we end, we're going to do it.
That's awesome that you did. Okay, okay. So let's go ahead and exit out of our instance. Just type exit, although you don't have to do that. And then we're going to delete the instance from your SSH session, and it should be gone. Okay, that's a basic understanding of how EC2 metadata and config drive work. If you flip back to the main summit.cloudtrain.me and click on key pairs, we can now actually inject a key pair. Let me ask you a question. First of all, with injecting a key pair, understand that there are two things we can do. Key pairs are stored in Nova, and we have two options with key pairs when it comes to Nova in general. Two options. One: I can have Nova create a public and private key, and then it's going to hold on to which one? Which one is it going to hold on to? The public key. It's going to hold on to the public one, and it's going to give you what? The private key. And then you're not going to show that private key to anyone, right? Secret — don't show it to anyone. Or, two: if you have your own key pair generated on your machine, or a trusty public key that you've been using for months, you can give that to Nova and say, here's my public key. It gets stored in Nova, and then when you boot an instance, you can say, hey, can you use this key? You can refer to them by name if you want to. So what we're going to do here, via the CLI — which is basically the API — is nova keypair-add my-keypair, and then a greater-than to redirect the private key out to a file. Again, Nova's going to hold on to the public one, and it's going to spit out the private one. So let's go ahead and run that command. Now we've got to change the permissions on the key, because if we don't, SSH won't like it, right? SSH is going to freak out and be like, you'd better set this to owner-only.
Only the owner should have rights to that thing, so we're going to do a chmod 600 my-keypair.pem so SSH doesn't yell at us. And now we're going to look at the flag --key-name and the name of the key pair. Again, the name I'm specifying there is the name I specified right here. So go ahead, copy that, and do a nova console-log. Okay, this shows us everything is spun up here, and then we'll do a get-vnc-console. We've got that link. We're going to paste it into our web browser and log in with cirros and cubswin:). And once we're inside, we're going to curl the metadata service — 169.254.169.254/openstack/latest — and we can even do meta_data.json. And guess what's in there now? Our key pair — I'm sorry, not our key pair, our public key. Now I can easily SSH into that server: I only need to provide my private key and I'm getting in, right, without a password. And that's where it actually sits. Any questions on that at all? Pretty basic stuff, all right, but at least you're able to go through the steps and understand exactly what's going on. If you actually look in the directory /home/cirros/.ssh/authorized_keys, you can see the public key has actually been inserted into the authorized_keys file. You may be wondering, well, how did it get there? cirros-init. Remember, cirros-init is the rip-off — not rip-off, that sounds negative — the neat cloud-init-like utility program inside CirrOS that actually copies that over. Now, if you're booting an Ubuntu instance, cloud-init will do that for you. One of the questions is: why is it doing cirros versus root? Well, there's a default user set inside cirros-init that says: if you've got a public key, put it into the cirros user's authorized_keys file. Okay, and that's it — pretty basic stuff.
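The chmod step matters because SSH refuses to use a private key that's group- or world-readable. Here's a quick local demo with a stand-in key file — my-keypair.pem here contains placeholder text, not a real key handed back by Nova.

```shell
# Create a stand-in for the private key file, then lock the permissions
# down to owner-only so ssh would accept it with -i.
echo "FAKE PRIVATE KEY MATERIAL" > my-keypair.pem
chmod 600 my-keypair.pem

# -rw------- is what ssh wants to see on a private key.
ls -l my-keypair.pem
```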
You can go ahead and exit out, delete the key pair instance from your SSH session, and go back to exercise number four, shell scripts. Now, here's the big thing, just so you guys know: you probably shouldn't be doing this. You probably shouldn't be injecting shell scripts. It's fine for simple stuff, but the real power, if you want to run scripts or whatever, is really in cloud config, and I provided a slew of them with tons of examples inside that main summit.cloudtrain.me if you click on those. It'd be better to run a script with cloud config, because then you can put tons of other options in there. Remember, for the most part it won't care about what distro you're using, right? So you don't have to say, if it's SUSE do it this way, if it's Ubuntu do it that way. But this at least shows you that scripts are pretty easy to execute. So what we're going to do is create a shell script called user-data.txt — and we're doing this from our command line here — and you can see what it says. It says hello world, the time is now, whatever — print out the time — and then, with the tee command, it outputs that to output.txt inside the cirros user's home directory. You can do an ls and see the file's been created there, user-data.txt. And now look at this flag we're going to use: --user-data user-data.txt. We're actually saying, hey, can you please run this script on boot? This is our user data, just like that. Now the really cool thing you can do here is a nova console-log: scroll up a little bit and see exactly where the script executed. The script executed slightly after getting an IP address and generating the SSH host keys. So after generating the host keys for SSH, it then runs the script. So that's what cirros-init basically does.
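The user-data script itself is tiny. Here's the same idea run locally — writing to the current directory, which stands in for /home/cirros, and running the script with sh instead of having cirros-init fetch and execute it at boot.

```shell
# Build the user-data script the same way the exercise does, then run it
# locally to see what cirros-init would execute inside the instance.
cat > user-data.txt <<'EOF'
#!/bin/sh
echo "Hello World. The time is now $(date)" | tee output.txt
EOF

sh user-data.txt
cat output.txt
```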
Now, with cloud config I can choose at what stage I want a script to run. Do I want it to run before it gets an IP address, or after it gets an IP address? Maybe a little later — if you're on systemd, after journald is up and running. Do I want to do it after rc.local? You have complete control over when that script gets executed. But with cirros-init, this is just how it is: slightly after generating the SSH host keys. Now let's go ahead and do a nova get-vnc-console and get that link. And once I get the link and do an ls, I should see — there is my output. The script actually wrote it to /home/cirros, and there it is right there. And if I do an ls -alth, to get the details and put them in a particular order, I can actually see the permissions on that file when it spit it out. Now check this out. If I curl 169.254.169.254 — because it's more convenient than mounting config drive at this point for an exercise — /openstack/latest, I should be able to see my user_data, and there's my script sitting in user_data right there. So cirros-init went and fetched user_data and ran that script, just like that. Now again, I don't really recommend you do this. It's much better to use cloud config, because I'm assuming that in production you guys are not going to be using CirrOS instances, right? You're going to be booting Ubuntu in production. So go ahead and exit out of your instance just by typing exit, and then delete the user-data instance from the shell session. We have about ten minutes left. So what I'm going to do is take the next five minutes, and we're going to skip the file exercise, because, again, that shows you a really cool thing you can do — injecting files with OpenStack — but guess what? It's very limited in the size of the files that you can inject.
There's a --file flag on the CLI that you can use. But guess what? It's very, very limited. I think you can only do — I think it's like 85K. You can do multiple files, right? But the total size is something like 85K. So not really big. It might be smarter to just have a URL that goes and fetches that data from somewhere else. So since we only have a few minutes left here, why don't we jump over to cloud config, because this is where the cool stuff happens. Okay, number six. If you want to click on number six and follow along with me, I'm going to create a file. The file starts with #cloud-config, and it's called password-user-data. It doesn't matter what it's called, but it starts with #cloud-config. We're going to put ssh_pwauth inside there. Look, let me scratch all this — I shouldn't even have been talking about this yet. Here's the deal. You guys know about cloud images? Look, any time you just want to grab an image and put it into the cloud, you're going to grab a cloud image, right? Ubuntu cloud image, Fedora cloud image, CentOS cloud image. What are cloud images? They're images that were baked by somebody over at whatever distro, whoever works over there, and they are made for the cloud. That means they have — sorry, they have cloud-init installed on them. It also means they're set up to be secure, because you're spinning up instances in the cloud. So by default, if you grab one of these images, chances are you will not be able to get in with a password, right? Because that wouldn't be secure. So what do you think you'd have to do? Guesses? Keys — you have to use keys every time. Unless you push some cloud config, because remember, all these cloud images have cloud-init installed on them.
If you set up some cloud config and inject that user data on boot, cloud-init will read it, and it will override the default behavior of cloud-init. The default behavior of cloud-init, for example on an Ubuntu instance, is to completely shut down password authentication via SSH. Completely. We are not going to allow you to SSH into this thing with a password, no way. In other words, there is a module that sits inside cloud-init — a Python module — that goes into sshd_config and disables password authentication via SSH, completely. So to override that, we say ssh_pwauth: true. We say password: password — it's a generic password we want to set. And not for root — for the default user, ubuntu. And again, who specifies the default user? Cloud-init. Every single cloud image not only typically requires you to use SSH key pairs, it also has a default user. It's not going to let you log in with root — by default, anyway. It wants you to use a special user: I think for CentOS it's cloud-user; for Ubuntu it's ubuntu, that's the username. And then guess what happens if you don't do this change-password piece here? If you don't do this change-password piece, it's going to force you to change your password as soon as you log in. I put in the ubuntu username and the password, and if I don't put that in there, it's going to say, you'd better change your password. But if I say chpasswd, colon, and then do this nested dictionary thing here with expire: false, then I can keep the password and it won't bother me. And again, all of these directives here, these key-values — these are all references to that link right there: that cloud-init cc_set_passwords Python file, that module. These all refer to things that are contained inside that module. That's looking at the upstream — that's actually the dev code for cloud-init. You can see it right there.
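Putting those pieces together, the cloud-config file from this exercise comes out looking roughly like this. Here it's written locally with a heredoc just to show the shape; the keys are the ones described in the talk (ssh_pwauth, password, chpasswd/expire), read by cloud-init's set-passwords module.

```shell
# Write the cloud-config user data that re-enables password SSH logins,
# sets the default user's password, and skips the forced password change.
cat > password-user-data <<'EOF'
#cloud-config
ssh_pwauth: true
password: password
chpasswd:
  expire: false
EOF

cat password-user-data
```

You'd then hand this file to the boot command with --user-data password-user-data, as in the exercise.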
It's in Bazaar, for cloud-init development, or whatever. Okay, so let's go ahead. We created the file. Sure, yeah — yes, this is a plain-text file, right? I put that in there. The reason I put it in there is to show you that the cloud-init program installed on Ubuntu is actually running that module right there. And when it runs that module, the module looks for those keys in the user data. And if they're there, it's going to override the default behavior of the module. So I'm going to create my file — you can see I have the file right there, password-user-data — and now I'm going to go ahead and boot my Ubuntu instance. And notice all I'm saying is --user-data. That's all I'm saying there. The real power here is in cloud-init and what cloud-init's going to do. So if we boot that — give it a second to boot up — go ahead and grab our nova get-vnc-console link. We're going to log in with the default user that cloud-init wants you to use, which is ubuntu. And the password is going to be what? password, right? The word that we set in our user data. And there we go. It set it just like that. Now look, we could write an entire book on cloud-init and how it works; it's a huge program. I've got a lot of good data in there inside your exercises, under the actual examples. There's tons of good stuff in there: the different stages it runs at, how you modify it, how you override it. Okay, take it with you, take a look at it. It's actually some really good stuff — some of it's not even available out there. I didn't rip it off the web, okay? A lot of it's my research. So feel free to use it. And hopefully you guys will start using a lot of the cloud config when you're booting instances in the cloud. Were there any — did you have a question? Yeah, I'll keep it up. But click that zip — there's a link up there, you just hit the button and it will download the zip.
Now, the person who found the secret — we're leaving at 6:10, right? We're leaving out of here at 6:10? Okay, we've got a couple of minutes. Can the person who found the secret tell me where it was? Do you mind coming up to the mic? If you don't want to, it's okay. "So one of the example files contains some base64 data that, when decoded, said to email you." Yeah, that's exactly right. So one of the awesome things that cloud config can do is — and actually, wait, you can sit down, but I'm going to give you your prize, and we're going to have to show everyone what the prize is. It's pretty cool that you found that. If we actually go through here, these are all the examples I supplied. One of the examples is that you can write files with cloud config. And notice that here we're writing a file using the write_files directive, which refers to the cc_write_files Python module that lives in cloud-init. And you can see that little pipe right there — it just means, take everything I'm presenting to you here. It's good for multi-line stuff; essentially, it's telling cloud-init how to treat this text. So I'm saying: here's my new test file. But look at the base64 text here. That means I can send something that's base64-encoded, and cloud-init will decode it — run a base64 decode. I had a secret message inside here, inside that string. And if you actually copy it like that, you could put it into, let's say, my-file, paste it in, and then do a base64 --decode my-file. And it says: you discovered the secret, congratulations. So come up and get your prize, please. A round of applause for this gentleman. Okay, we've got to come up on stage here. Don't be shy. I am presenting you with the product that revolutionized the industry. It's not the zone either. It's yours. 16 gigs, have fun with that. "I will."
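The hide-and-reveal trick behind the secret message is plain base64. Here's the round trip locally — the message text below is just an example, not the actual secret from the workshop materials.

```shell
# Encode a message the way the write_files example carried the secret,
# then decode it back, just like `base64 --decode my-file` in the demo.
echo -n "You discovered the secret, congratulations" | base64 > my-file
base64 --decode my-file
```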
Okay, so, hey everybody, I really appreciate you coming to the workshop. We have the cantina going on with Rackspace here. It's right down the street, you can't miss it. Lots of drinks, lots of food, a lot of fun. Please check it out when you get a chance. And that's basically it. Thank you.