Just so you all know, we have some lovely assistants here who were kind enough to help me out, so they'll be running around answering questions when things inevitably go wrong, despite the fact that we've prepared and it should just be a few simple commands. Ping should work against this address. We put this together mostly for transferring stuff around: there's a proxy set up, and everything is configured to do the package installs through that proxy. There's some risk associated with this, and I'm okay with that. Another thought was that people could try to connect through the other network, but no, never mind, that's not going to work. We could just throw the Vagrant box on it if someone's going to use it.

Great, so I think I'm going to kick this thing off. Welcome, everyone, to my grand experiment. When you submit talks to conferences, I'm pretty sure they'll always accept hands-on labs, because you have to be crazy to actually try to get a large group of people in the same room and make something work. We've taken various precautions here, bringing our own devices and router and creating our own network, to try to alleviate some concerns, and we tried to pick an approach that wouldn't take forever but also felt like it would probably work. Especially with everyone fighting for bandwidth through our router, it'll be interesting. We might have to retry some things if we hit spurious failures, but fortunately Puppet knows how to deal with partial failure states, so you can just keep re-running it.

So I'm going to walk everyone through the instructions. It was listed as a requirement that people have VirtualBox and Vagrant installed before arriving. Just out of curiosity, show of hands, who read that? You folks rock; you didn't even have to do all this setup. I've brought my own copies, so I'll walk through the steps for the things I actually have. And Cody, if you can log on to that server... actually, putting the base box on a USB key might not be the worst thing, if that's what's taking up the bandwidth.

So let me walk through the steps. Just so everyone knows, I put something together and I'm not exactly sure why it isn't showing up on my lovely web server, but the server up here with most of the prerequisites on it can be reached once you've connected to our network, which is puppet_openstack, same password as the SSID. There's a README up there that has everything you should need, including the things that need to be downloaded. I'm going to walk through all of it, but since people are going to be at different stages as I go through the presentation, everything is in that README on the web server, which people are free to grab, and it should be fairly easy to download.

We're going to do this in two parts. First, let's actually get this thing working.
And then, once people have it working, we're going to dissect it, especially the Puppet part, so that people can get a better understanding of what's going on under the covers.

The first thing is that we've brought our own network. The password is the same as the SSID. We're bridging to the outside world from here, but we're bridging through a single patch cable, so please respect our pipes. There are a couple of dependencies, and it looks like most people either got here early or already have VirtualBox installed, which is the heaviest of the dependencies. We're also going to be using Vagrant, which is a fairly light dependency. If you look on our web server, you'll see all of these things. Probably the biggest bandwidth constraint for getting started is the base image that we're going to install OpenStack on. We also have the actual Puppet modules and all the content on the server. I can update this, because we didn't know whether we'd have a predictable IP address, but everything you need is up there. The instructions assume a Mac (sorry, I had to assume something), but the only Mac-specific things up there are the VirtualBox DMG and the Vagrant DMG. Everything else is platform-independent: the precise64 base box doesn't care what platform you're on, and neither does the tarball that contains all the Puppet-related content.

So looking at the server here, at 10.0.1.2 in the share directory, you can see everything you need to get started. The biggest of those is the precise64 base box, and one of my fantastic assistants is going to put that on a USB key in case people are having bandwidth issues getting the box down. It's definitely the biggest of the requirements, and possibly why you gentlemen in the front row were trying to hand me a USB key a second ago. That should just take a few minutes; it's on here as well.

I'm going to give people a little bit of time here. Are people currently downloading? What do the bandwidth and latency feel like right now, slow? Twenty-five minutes on which artifact, the base box? Let's go ahead and put that on a USB key and pass it around. It's a plain box; there's really nothing special about it, as long as the box already has a reasonable version of Puppet installed. I know that specific base box was recently updated, and both the old version and the new version should work just fine.

So the question was: for people using the Fusion provider for Vagrant, as opposed to the VirtualBox provider, will it work? The issue you're going to run into is that the Vagrantfile uses the old format and hasn't been updated, and it has VirtualBox-specific configuration in it. Cody right there has a version of the Vagrantfile that works on Fusion, so using Fusion should be possible.
You'll just have to update the Vagrantfile; Cody's done it before. It's a little bit risky, but for the sake of bandwidth it's probably better to have as many people as possible not re-downloading things. It's going to be easier if we're all using Vagrant for this; it's just a couple of commands. Trying to catch up manually is potentially going to be painful, because there's quite a lot to recreate. I'll walk through and dissect everything. It's also on here, guys; they're going to grab it and pass it around. The Vagrant provider for Fusion costs money, by the way.

The base box should be in, what is it, the .vagrant.d/boxes directory? Is that right? Can you help me out? Look in Downloads. For now, if people are trying to get the base box, I think we're going to switch to a USB solution and pass it around, just to save on bandwidth, because it looks like it's crawling and it's maybe even unrealistic that people's downloads are going to finish.

Yeah, it should just be vagrant box add: you do vagrant box add, the name, and then the location of the box file. You guys are getting ahead. The proxy just needs to be this address and then colon 3128, but that's already in the config we published. You're all at different paces, but we've already hard-coded all the proxy settings, so it should just work. And you'll know when you run vagrant up openstack_controller: if the proxy isn't set up correctly, things are just going to blow up. Everything goes through the apt proxy; I don't think there's any direct traffic. The only plain HTTP call would be to get the base image, which is on the Apache server. We'll get there. No, I think the default right now is not HTTPS for the actual API endpoints. I know people are doing it; people are definitely using HTTPS endpoints with the public modules, but I don't have my head wrapped around the exact configuration for that right now.

So I wouldn't worry too much about the precise64 box requirement yet. We're putting that on USB, and we'll only need it when we actually fire things up. Once people have downloaded our tarball, which is all the Puppet content, you'll need to unpack it, and that directory is where we're going to run the lab from. Again, these things are just on the web server.

Good, people are adding boxes. Are people still trying to download the base box, with mixed success? If you already have an existing base box called precise64, you're fine. Are you trying to add a box when you already have that box, or is it a bandwidth problem? No, don't mess with that right now. Once you have the content from the .tgz, it's actually two commands, and it's definitely going to be staggered, but if people are ready to go now with the published content: you just run vagrant up openstack_controller and then vagrant up compute1. You'll need to run these serially, because Vagrant doesn't support bringing them up in parallel, so first vagrant up openstack_controller and then vagrant up compute1.
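Just to make that concrete, here's roughly the flow once you have the tarball unpacked and the box file locally. The paths and the directory name here are illustrative, not exact; check vagrant status for the real machine names.

    # add the base box under the name the Vagrantfile expects
    vagrant box add precise64 /path/to/precise64.box

    # from inside whatever directory the tarball unpacks to
    cd openstack-lab
    vagrant status                    # lists the machine names you can boot
    vagrant up openstack_controller
    vagrant up compute1               # run serially, after the controller finishes

    # the proxy is already hard-coded in the distributed config (10.0.1.2:3128),
    # so there should be nothing to change there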
Yeah, and definitely, for people who are ready, thank you; we should start now, because I know the main bottleneck is going to be the package installs running through our proxy. The README file, sorry, for some reason you can't browse to it on the server, it's just a misconfiguration, but it's right there, and feel free to ask questions. I actually put the README together pretty quickly last night, but it should have everything you need. The one thing I'd warn people about is the command for once we have OpenStack operational: hold off on that, because it will start trying to download the CirrOS image from the internet. We actually have that available on the Apache server, so we don't have to go out to the internet; it's the one thing that isn't covered by the squid proxy, just a misconfiguration. The README is at 10.0.1.2/share/readme, and that's assuming you're connected to the puppet_openstack network that we brought.

It's just going to run for a while. You're running it from the terminal, so it'll exit, preferably with zero. It sometimes takes a few minutes to set up the network interfaces; it's actually creating four sets of virtual interfaces on those machines. No, the box image, yeah, I would stop that, or it's going to clog the bandwidth. You just need to run this command, vagrant box add: you specify the name of the box, which needs to be precise64, and then the actual file path to the box. We were previously supplying the box from the Apache server we're running, but because of its size we're quickly throwing it onto some USB keys and passing it around. As for Ruby, whatever, 1.9.3, 1.8.7, I'm not too fussed about specific Ruby versions for this.

Just out of curiosity, has someone taken the base box from the USB key and added it to Vagrant successfully? Not yet, but it's in progress? Good, so it looks possible. And again, once you have all the artifacts, it's really just that vagrant box add, then vagrant up openstack_controller and vagrant up compute1. Yeah, that's good; that's Puppet, you're doing Puppet stuff now.

So setup/precise64.pp is the manifest that sets up the apt proxy; that's one of the modifications, and it does it before apt-get update runs, because apt-get update is part of the provisioning. You shouldn't have to modify the proxy at all; the version we distributed from the web server has all the proxy settings hard-coded.

So, just to repeat what Cody said: there are a couple of USB keys with the initial dependencies going around, so just keep passing those along. They have the Puppet content and the actual image that we're going to use for the virtual machines we're going to spin up, theoretically. And then we're going to get to the interesting part. What's that? I mean, the real constraint is the connection between everyone's laptops and our tiny little router. The README is available on the web server we're hosting, at 10.0.1.2/share/readme, and it has the instructions. Pretty soon, a lot of what people are going to be doing is just waiting. Oh, this is the USB key line; yeah, I think they're very slowly making their way out through the audience.
So if people are looking at the documentation, don't worry about the step that talks about setting up proxy stuff; all the proxy configuration is already set up, so you can ignore that step in the README. Sorry about that. There's also a typo in a file name, but you don't have to do anything about it. All the proxy settings have been hard-coded. All you should have to do is the vagrant up in the, oh, really? Well, there are probably two questions. One is whether the proxy is actually getting hits, and the second is that there's some reality that we're just saturating the hell out of the link; the connection between here and that tiny little router just isn't going to keep up. You'd figure it could at least handle, say, ten downloads at a time, and it should only take a couple of minutes, but it's the base box, it's just too big. We've got tons of time, we've got so much time still. It's all math, right: time and bandwidth. What? No, that part's done. You don't have to change anything; it's fine. It's just taking forever to install packages because the network is totally saturated.

And I would say, for the sake of the network: if you're actively trying to download the precise64 base box, please stop. It's just too big. It was probably a mistake for us not to start with USB, and we're in the process of fixing that. So if you're trying to download the precise64 base box, just stop.

Who's still waiting for USB sticks? Especially people in the back. And this is only part one. I think we're getting pretty close to a reality check here, which is that despite all the wonderful effort people are putting into getting the initial requirements down, that's only part one. And I'd guess the poor little disk on our single server just can't process this many file requests, squid proxy or not. Even for people who are at the next step, either they're still trying to download those giant images, or, well, I've yet to see a single person actually install a package, and those runs have been going for at least five minutes. Has anyone actually seen Puppet indicate that it successfully installed a package through the proxy? How many? You can download what in ten seconds? Oh, really? Can you see what the disk I/O looks like? Yeah, we'll have to look at the configuration. Now is when the problem begins. No, actually, the size of the package downloads is the big one.

All right, so our first round of people are actually installing OpenStack now, and it looks like people have given up on downloading the stuff we're distributing via USB. Amazing. I think as soon as one person's done, we're out of the woods. No, I don't have the right one on here; they're all just on there. So maybe you could pass this one back.
If they just go to this server, 10.0.1.2, it has the instructions for what needs to be downloaded and what content they need: they untar this, and then this is the vagrant box add command. Cool. Yeah, what took forever is the installation of the MySQL server package. That's the biggest package to install; it has tons of binaries. And that's on top of the apt update you have to do before the package installs, which is what's really causing it; it's going to be disk latency. It's installing OpenStack; I'm having a blast.

So I'm going to trudge on and talk a little bit about what's going on, because I think everyone, for the most part, knows what needs to be done. Some people are going to finish; most of you probably aren't. But feel free to download the README; I'm sorry about that instruction. I think we're pretty much past the point where everyone has their requirements, and anyone who's going to try has what they need to get started.

So I want to start with an explanation of what we're attempting here, which may or may not be crazy: it's just your laptop. We're going to start with installing, for some people, Fusion, though what I'm using here is VirtualBox, so that we can use Vagrant. I'm not sure who has some familiarity with Vagrant, or uses it; it's a pretty nifty tool. If we look at the OpenStack landscape, things like TripleO plus Heat are heading toward doing the same kind of thing Vagrant does now. There have been a lot of sessions about TripleO for actually building machines, and Heat for specifying the multiple machines you'd want to build as part of something. In Vagrant, we're defining two virtual machines, which is why we run vagrant up for the controller and vagrant up for the compute node. So it's Vagrant that's responsible for specifying how we use that precise64 base box to create those two virtual machines, which are the common roles for deploying a multi-node OpenStack instance. And the last part is that, as part of the vagrant up command, we also run Puppet, which converts those virtual machines from the base precise64 image, the one we've been distributing via USB key since the Apache server fell over, into the roles of one OpenStack controller and one OpenStack compute node.

So I want to talk a little bit about some of the technology bits, and maybe I'll make it through some of these and then do a status update to see where people are. The first thing I want to talk about is Vagrant, and Vagrant is really driven by the Vagrantfile. All of these files you can find inside the OpenStack dev env folder that the tarball unpacks to.
So, for example, the Vagrantfile: I just wanted to walk through it. The basis is that it's a somewhat simplified DSL for specifying how to create virtual machines and, more specifically, how to then run Puppet on those virtual machines to turn them into machines with an actual role. Looking at my Vagrantfile, and I'm going to bump up the font a little bit, I've maybe gone a little crazy with embedding Ruby code in it, but the main point is that you can see a specification of the kinds of machines I want to boot, and I want to boot those machines into a specific role. Sometimes I have DevStack machines that I want to boot up with DevStack fully installed; the machines we're going to be looking at are the OpenStack controller machines. What you can see here is the actual IP address we're going to assign to those machines and how much memory they're going to use. So if you feel resource-constrained on your laptop, you may want to tune down the amount of memory those machines use a little bit. And I know for sure that nobody wants to run a compute2 instance, for example; the compute2 instance is what I use to run Tempest, so it has tons of memory, because Tempest doesn't always clean up all of its VMs as it runs, so it's good to have lots of headroom. You can see that for testing I have more specific roles, all these various machines that I bring up for testing.

The reality is that for each of those machines, and this is really the meat of Vagrant, we specify what starting disk image we use to boot them. Here I'm basically doing CentOS or precise64 right now; I know that eventually we'll add Debian. This is what I use to run all the actual continuous integration for the public modules, this exact setup, and I'm running that right now both on Red Hat, or specifically CentOS 6.3, as well as precise64. In this file you can really see everything I'm doing, including creating the various interfaces that each machine will have. Then you see that we're basically running an apt-get update, or the yum equivalent of a cache update. And the last thing we're doing is integrating Vagrant with Puppet for a few different runs. One of those runs is just to set up the base environment, and we'll look at that, setup/host.pp. Then we have another Puppet run, keyed on the operating system name, for operating-system-specific setup; the main point of the OS-specific setup is to configure whatever Cloud Archive or EPEL repositories we need in order to install the correct packages for OpenStack, and we'll look at those in a second. And the very last part is that we just do a basic puppet apply run on the site manifest. We'll be looking at each of these files in a second.
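Just to give a feel for it, the Vagrantfile is doing something like the following. This is a sketch in the old Vagrant format, with the box name matching what we're using, but the address, memory, and manifest layout are invented for illustration rather than copied from the real file.

    # sketch of a Vagrantfile in the old (pre-1.1) format, as described above
    Vagrant::Config.run do |config|
      config.vm.define :openstack_controller do |controller|
        controller.vm.box = 'precise64'                         # the base image we distributed
        controller.vm.network :hostonly, '172.16.0.3'           # fixed IP for the role
        controller.vm.customize ['modifyvm', :id, '--memory', 2048]

        # provisioning happens in stages: base environment, OS-specific repos,
        # and finally the site manifest that assigns the OpenStack role
        controller.vm.provision :puppet do |puppet|
          puppet.manifests_path = 'manifests'
          puppet.manifest_file  = 'site.pp'
          puppet.module_path    = 'modules'
        end
      end
    end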
So we're actually running Puppet three times: once for what we'll call environment-specific setup, once for package-repository-specific setup, and then the last run actually installs these machines with the correct OpenStack roles. That's the Vagrantfile, and again, it's really for specifying the information related to the virtual machines that I use for testing. I think other people find it useful as well, just as a really simple way to bring up the exact same OpenStack environment that I use, which is also what I use for testing.

So the question was: you see things for running apply and running agent. There are two ways to run Puppet. One of them is called puppet apply, where you're assuming the actual Puppet content is on your local machine and you're just running Puppet against that content. With puppet agent, you're assuming there's a server somewhere. I'm not getting into Swift now, but because of the multi-node orchestration for Swift, it requires a Puppet master: the machines need to place information about themselves in a central database that they can read from each other in order to understand how to build the ring. So the agent-based setup, which requires a Puppet master, is only needed for Swift, and for Swift it actually requires that you boot a Puppet master, which is one of the roles specified in the Vagrantfile. There's a quick sketch of the two invocations coming up in a second.

Yes, so the question is where Vagrant sits. It is analogous to bare metal, but it's more specifically about setting up virtual machines, creating virtual interfaces for those machines, and getting a base image onto them; and then the next thing Vagrant does is call Puppet. The vagrant up command sets all of that up, where the Vagrantfile is all about specifying those virtual machines, which is analogous to bare-metal provisioning except that we're also creating the hardware, not just installing the OS on it. Running Puppet is the last thing it does.
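To make that apply-versus-agent distinction concrete, the two invocations look roughly like this; the server name is made up, and the paths just follow the layout of this project.

    # masterless: compile and enforce the content sitting on the local machine
    puppet apply manifests/site.pp --modulepath=modules

    # agent: ask a puppet master for a catalog (only needed here for the Swift
    # ring orchestration, where nodes share data through the master)
    puppet agent --test --server puppetmaster.example.com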
So the next thing I wanted to talk about is Librarian Puppet, but more specifically the concept of the Puppetfile, because I have a feeling that librarian-puppet will be deprecated eventually; it's the file format that's interesting and important here. In the same directory, if you have a look at the Puppetfile, and I'd encourage you to open it, you'll see that this is exactly what I use. These are all the modules I use, and these are the external Git repositories they're retrieved from. Most of the OpenStack-specific content is coming off of StackForge; there are a few things, like Quantum, that are coming off of my Git repository but will soon move to StackForge. And then there are a lot of utilities, things like rsync and xinetd and MySQL and Git setup, that aren't specific to OpenStack at all; those are just Puppet content for configuration that we're getting from somewhere else. So it's roughly broken up into the Puppet content that's OpenStack-specific at the top, the more generalized middleware in the middle, and then at the bottom various other components for things like configuring xinetd and SSH.

It's compute1, vagrant up compute1. So it looks like someone has a controller that installed successfully. Who's installed a controller? Keep going; it's happening, things are happening. This is actually going to work; I wish I could do a heel click right now. Yes, it's going to exit with zero. It'll do its thing until it's done, and you'll get your shell prompt back. Sorry. No one's maintaining librarian-puppet right now, and it is a massive pain, but nobody's actually maintaining it. Given how useful it is as a utility, I think everyone's kind of waiting for someone to pick it up and do something with it. There are a couple of projects out there; r10k is one of them. Also, the person who originally wrote Librarian is working on something called Henson, which is maybe not in a releasable state yet; last time I tried it, it only supported Ruby 1.9.3, which I can't use, since I have to do things on 1.8.7. It's compute1, and if you do a vagrant status you can see all the machines you can boot, but please just boot compute1. Please don't boot compute2; it uses 12 gigs of RAM. I'm going to get there; I'm going to explain all this stuff.

So this is the next thing: these are the files that I use, and a pretty reasonable setup would be to start here. If you need to fork things, you can specify the forked versions here. The thing I really like about the Puppetfile is that it gives you this really nice to-do list: if my upstream repository name is here, it means I've forked that thing, and these are the places where I should be submitting changes back upstream. It also gives you a really easy way to say that some of these things might be local, if you've had to fix something for your environment. And this is the file that's actually used to install all the content; of course, everything was pre-populated in the Puppet tarballs we handed out, and that was just done by running librarian-puppet. Oh, does it? Yeah, it's compute1, sorry; it's vagrant up compute1. Sorry about that; I wrote the README to help everyone, but very fast and very late last night. If you run vagrant status, you can see the names of the possible machines you can boot; it's vagrant up compute1. And you don't have to change the proxy; the proxy settings have all been hard-coded.

So, moving on: we've seen the Vagrantfile, and we've seen the Puppetfile, which specifies all the content that gets populated into the modules directory. The main difference between the project as you'd find it online and as we distributed it is that we pre-populated the modules directory; when you go to StackForge, it's not there, and you need to run librarian-puppet against the Puppetfile for that to happen. And again, what follows is just an example of the Puppetfile, like the one I already showed everyone.
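The Puppetfile format itself is pretty small; a sketch of the shape is below, where the repositories are examples rather than the exact pins from the lab's Puppetfile.

    # OpenStack-specific modules, mostly coming from StackForge
    mod 'keystone', :git => 'git://github.com/stackforge/puppet-keystone'
    mod 'nova',     :git => 'git://github.com/stackforge/puppet-nova'

    # general-purpose middleware and utility modules
    mod 'mysql',    :git => 'git://github.com/puppetlabs/puppetlabs-mysql'
    mod 'rabbitmq', :git => 'git://github.com/puppetlabs/puppetlabs-rabbitmq'

    # running `librarian-puppet install` against this file is what populates
    # the modules directory that we pre-populated in the tarball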
And now we're going to start to get a little bit into the things that are Puppet-specific. The peripheral tooling around Puppet is the Puppetfile, for knowing what content to install, and the Vagrantfile, for specifying the virtual machines to bring up as the base images we're going to install OpenStack on. Then there are the site manifests: each of these is run on the actual nodes in order to bring them into the proper state. The site manifests are interesting because they perform different kinds of actions. You guys actually have a pre-manifest that was added just for the purposes of this lab, to make sure everything goes through the proxy. But specifically, if you look at the hosts manifest, it just does some very basic setup that's specific to the environment. One of those things is to set up host entries for all the machines, using the same IP addresses that were specified in Vagrant. So this is now getting into actual Puppet: what we're looking at here is a Puppet specification of the entries that should exist in /etc/hosts. It's also doing some basic group setup for Puppet, and it lays down a simple script, so that beyond the Vagrant provisioning, which does a lot more than just install OpenStack, we have something we can run if we need to re-run things. This is Puppet syntax; Puppet has its own language, and that's what's being used to specify how to configure these individual roles.

I'm going to talk a little bit about Hiera in a moment, but the actual Hiera configuration, the data store lookup, is also specified in this file. Hiera allows us to look up data from external sources. In this case we have all the common data, which we can override, in an external file; I have a specific Jenkins file that I use to override things for continuous integration; and then there are node-specific configuration files so that individual nodes can also override data. We'll talk about Hiera and look at the Hiera data store that comes with this project in just a second.

Also in this manifests directory is all of the initially loaded code. You can think of these manifests as the main of your program; this is where Puppet starts processing to figure out how to configure things. Inside the setup manifests, we're specifically looking at the precise one, but there's also one specific to Red Hat that sets up EPEL. These are mainly just configuring the repositories we're going to download the content from: this one sets up the Cloud Archive, and the Red Hat one points EPEL at the correct locations. It's also looking up data so you can externally specify the OpenStack version, and you may note that's done with a Hiera lookup; we're looking externally for data to tell us what to install. Right now you should all be installing Grizzly, but you can also install Folsom just by updating that openstack_version variable in Hiera. You can see the actual Hiera lookup it does for that right here. But this manifest is basically just setup; for example, it's setting up the proxy.
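Roughly, the two kinds of things in those setup manifests look like this; the address and the default value are invented for illustration.

    # a host entry, so the nodes can find each other by name
    host { 'openstack-controller':
      ip => '172.16.0.3',
    }

    # an explicit Hiera lookup, with a fallback default
    $openstack_version = hiera('openstack_version', 'grizzly')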
In this case the proxy setting targets my local laptop, but you guys are targeting the lovely server I have set up here. The external repositories you need are set up in setup/precise64.pp, which is the second puppet apply run from the Vagrantfile. The real meat of what actually installs OpenStack is the site manifest. The first thing you see there is that we're making a lot of external Hiera calls in order to retrieve the data you'd actually want to configure. That information lets you toggle, if you're crazy enough (the Quantum stuff is almost ready), between Nova Network and Quantum, where Nova Network assumes the flat DHCP network. This is manifests/site.pp. Inside the manifests directory you really see all the main configuration code; this is the code that's actually specifying the roles. If we look at a node here, you see node openstack-controller, and this is all the code that's run to set up the actual controller. For the most part there's a bunch of Red Hat-specific stuff, especially around setting up firewall rules, that probably needs to be pushed further down. Otherwise, for people who know Grizzly somewhat, there's one Grizzly-specific line: if it's Grizzly, then install Nova conductor on the controller, and you can see class nova::conductor with enabled set to true. Beyond that, it's mostly just calling this single openstack::controller class, and again, this is Puppet syntax, and passing in all the configuration required to set this machine up as the OpenStack controller, which all goes through that class. Classes in Puppet are an abstraction layer you can use to specify configuration interfaces, and in this case they're simplified configuration interfaces: this is how you configure a controller, this is how you configure a compute node.

Someone had the question of how realistically this maps to a production environment. The further we drill down, the more it starts to map to what you would do in production. What I'm providing here is a simple framework that just works, and for those of you still having bandwidth issues, just trust me, it just works. But in reality, what's important is understanding how to drill down so you can reach the abstraction layer that's flexible enough for your use case. For some people, this may be the layer they care about: there's this thing called openstack::controller, there's this thing called openstack::compute, and that's enough. We're going to drill down a little into those configuration interfaces to show how they're actually just flexibly composed from a set of core modules that provide all of the possible OpenStack services.

Again, the important thing to note here is just that we have nodes, and in this case a node is matched based on the certificate name of the machine. Here, anything whose name matches compute should be installed as a compute node. There's some QEMU-specific logic up here and some Red Hat-specific things, but for the most part it's just setting up this openstack::compute node, and also setting up some volumes for Cinder in this example.
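Stripped way down, the node blocks in site.pp have this shape; the parameter list is heavily trimmed and the lookup keys are illustrative rather than the exact names in the project.

    node /openstack-controller/ {
      class { 'openstack::controller':
        public_address      => hiera('controller_public_address'),
        mysql_root_password => hiera('mysql_root_password'),
        admin_password      => hiera('admin_password'),
        # ...the rest of the parameters are resolved from Hiera the same way
      }
    }

    node /compute/ {
      class { 'openstack::compute':
        internal_address => $ipaddress_eth1,
        libvirt_type     => hiera('libvirt_type', 'qemu'),
      }
    }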
But it's really just this kind of unified interface: this is how you configure OpenStack Compute. What's that, a permission denied? Yeah, you may need to, I would say for now, just sudo bash and switch to root before you run it. And if people are getting to the point of running that script, one thing I'd encourage you to do, sorry, I'm going to break for a moment since people are asking about it, is to open up the test_nova.sh script. This is a sort of half-configured instance. The first call in there is a wget: if you could put that image in place yourself instead of doing the wget, that would help, because we'll hit saturation if people start downloading that CirrOS image. The CirrOS image is supplied on the Apache server, but you'll need to put it in one of the directories that's NFS-mounted, which is the modules directory or the Hiera data directory. I may have to say that again; I'm going to keep walking through this and then we'll come back around to it. But we're definitely going to be bandwidth-constrained on pulling down the CirrOS image. It is available online, but you may have to modify that script slightly.

So again, at a high level, I showed those code examples from what we call, in Puppet language, the site manifest. You can think of the site manifest, in programming terms, as being like your main: this is where Puppet starts executing when it tries to configure the actual role of a machine. Specifically, it's this syntax of node blocks that's used to say: for nodes that have been identified like this, these are the rules, this is the specification, for how those nodes should be configured. In the example of our site manifest for this project, we saw that that's typically done with the classes openstack::controller and openstack::compute.

So the next thing I want to talk about is Hiera, and Hiera is embedded in the site manifest; that's how we're resolving all the data. Hiera is a hierarchical external data lookup system. What this means is that data often needs to come from some external source, and there are certain factors that determine what that data actually is. As an example of how we're using Hiera in this environment: you have common data, which is all the defaults, which you can adjust in an external config file. I have a specific Hiera file that I can insert from continuous-integration runs in order to update things; Jenkins may want to set what network mode we're running or what version of OpenStack we're testing, and I have various build matrices that modify the data at that CI layer, which then overrides data from the common section. And finally, we have node-specific data. A more general use case is that you might have defaults, then data specific to what country your controller lives in, then data specific to a region or a zone. So you can build this override hierarchy, so you can edit individual files and know they take precedence over some other file. If we go back to the hosts setup, we can see that I'm actually configuring this hiera.yaml configuration file, which is where I specify what the lookup precedence is.
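That hiera.yaml is roughly this shape; the data directory path is illustrative, and keep in mind that in hiera.yaml the first entry that matches wins.

    ---
    :backends:
      - yaml
    :yaml:
      :datadir: /etc/puppet/hiera_data
    :hierarchy:
      - "%{hostname}"     # node-specific overrides win
      - jenkins           # CI overrides
      - common            # the defaults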
In this case we have common, which is overridden by a configuration file called jenkins, which is then overridden by per-hostname configuration files. If we want to look at those configuration files, they're all in the Hiera data directory of the project. If we cd into that directory, you'll see a couple of things: there's the common data, and then there are the node-specific files for swift-storage-1, swift-storage-2, and swift-storage-3. If we look inside common, we just see all this externalized data that we can set: what are the database passwords for the MySQL database we're configuring, what are all the Keystone settings for the service users this creates, what are the settings for connecting to Rabbit, and then a bunch of Swift-specific stuff as well. The knob in here that I wind up flipping the most is the one that switches between Nova Network and Quantum. The Quantum stuff isn't 100% perfect, but if you want to try it and see where we're at, just change network_type from nova, which is what it's set to now, to quantum, and then you can try to build out a Quantum environment just by changing this external configuration file.

We also have configuration files per node, and for people who understand Swift pretty well, it makes sense that nodes may want to override which zone they live in for Swift. So we can see that swift-storage-1, swift-storage-2, and swift-storage-3 are just overriding their zone, which is specific to each node, and that lives outside the actual Puppet code, in this configuration data. So again, we're just in the Hiera data directory of the project, looking at the YAML files. The last thing I have in here is a simple jenkins.yaml where, for example, my Jenkins job matrices set things like this, and that will override what's specified in common.yaml, but it would itself be overridden by anything node-specific. So I just have this simple three-layer external data hierarchy where I can specify the data that's really driving the configuration of all of this.
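And what's in those data files is just flat YAML; a sketch, with the values invented, would be something like the following.

    # common.yaml -- the defaults everyone gets
    openstack_version: grizzly
    network_type: nova          # flip to quantum to try the Quantum path
    mysql_root_password: changeme

    # swift-storage-1.yaml -- a node-specific override
    swift_zone: 1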
Any questions about Hiera before we move on? Yeah, it's just an external data hierarchy. Yes, sorry? So, if we look at the site manifest, which is driving the main configuration, and go to the very top of it, you see lots of calls that look like this: in the context of the thing that's really driving the configuration, we're making external calls to Hiera, and Hiera is resolving the value of that data through the external lookup. That, again, is in the site manifest. Prior to Puppet 3.0 we need to make these explicit Hiera calls, but once you're on Puppet 3.0 or later, Hiera is automatically embedded and automatically consulted for all class parameters, so all class parameters can automatically be overridden by external data. That's a Puppet 3.0 thing; right now I'm actually supporting Puppet all the way from 2.6 to 3.1 with these modules, 2.6 because of Red Hat. Yes, Hiera is totally configurable. Hiera has what are called backends, and the default backend is YAML. You can have multiple backends: you can have a Puppet backend and a YAML backend and specify both as part of the same hierarchy. There are JSON backends, there are also SQL backends, and there are NoSQL database backends. So it's pretty easy to configure the Hiera backends, and I think these days, in the Puppet training class, or the Puppet developer class, we're teaching people how to create custom backends. I'm really interested in the NoSQL backends, because you get an automatic programmatic API that you can use to set these data hierarchies. And there's a backend out there, I think it's an HTTP one, I forget, by crayfishx; if you search for Hiera crayfishx, he has backends for, I think, CouchDB and at least one other NoSQL database.

Yes, another question? Absolutely. If you're deploying multiple clouds, your Hiera lookup might look something like this, where you have different data stores that indicate what the addresses of services are. It's possible to do automated lookup of things with Puppet, but I would probably start with hard-coding addresses; once you get into things like PuppetDB, you could start to say that anything on a certain subnet should connect to a certain master or a certain controller. So if you were deploying multiple clouds, this is really what you would do: you would specify the data that differs between those OpenStack instances and those data centers in Hiera. In this example, those go into the defaults, the common Hiera data. The precedence here is lowest at the top and highest at the bottom, so the defaults in this example are all in common at the top, then you might have, say, data-center-specific overrides, and then node-specific overrides always win. So, to the question of whether this is how I would deploy OpenStack across multiple data centers and multiple countries: yes, the easiest way to do that is this hierarchical data lookup. We already looked at what exists in my hierarchy, which is pretty small, mainly because I just use it for continuous integration.

So, to drill down a little more: we've looked at the Vagrantfile, which I use primarily for testing purposes to get the OSes up and going. We've looked at the site manifests, which I use for environment-specific things, for setting up the external repositories, and for configuring OpenStack. Now we're going to finish the time talking about the components that are actually used to configure OpenStack. So now we have pretty colors. These are, more or less, the very high-level but also constrained configuration interfaces that are possible through the openstack module. And when I say module: if you want to find the actual source code for these things, you should cd into modules, and you'll see a directory called, surprisingly enough, openstack. If you cd into modules/openstack, you can see that we have these things defined: classes for all-in-one installations, for the simple case of splitting compute from controller, for setting up all of the databases that need to live on a central MySQL server, and also for setting up all of the required Keystone endpoints. The openstack module in general tends to do things for every service, so openstack::keystone is going to set up Keystone endpoints for all the related services:
Glance and Nova and Swift, and Keystone even needs its own endpoint. These are the things we're actually calling from those site manifests. So if we look at something like openstack::all, we declare it as a class and specify all these parameters, where the variables we're referring to may themselves be resolved through Hiera, or could even be direct Hiera calls, when we declare this configuration interface that we use to convert machines into functional all-in-one OpenStack instances. And just to go from there into the source code: I'm changing my directory to modules, and we can see the modules that were installed via librarian-puppet right here in the modules directory. In this case we're talking about the openstack module, and for that module the actual content is stored inside its manifests directory. So we can pretty easily see all the things that are specific to this constrained way of deploying OpenStack. You can see the all-in-one; I think the Cinder-specific stuff doesn't quite work.

Let's drill into something that's fairly obvious, like Keystone. This is the other end of things: previously we saw examples of how we declare a class, which is how we feed a specific configuration into that class; additionally, the class has to be defined somewhere. So here's an example: this will set up a Keystone server with all the endpoints you need for all of your services. It has a bit of conditional logic just to set defaults, but the gist of it is that we're actually building out a Keystone server, and then, when things are enabled, we're building out the administrative and member roles for Keystone. Then, for each of the services, there's configuration we can specify in the interface to say whether we should actually set up the Keystone endpoints and authorized users for those services. For people who are familiar with Keystone endpoints, it's really the things you would expect to see: there's a password, there's a public, admin, and internal URL, and also a region. You can also override the tenant, but again, when we're looking at the openstack module we're looking at a very constrained view, so in this case we're just assuming the default tenant of services, I believe it is.

Just to look at one more example: if we look at the Nova controller class, it again has a lot of specific configuration, but it deploys something called nova, which is all the configuration shared by all the Nova components. It deploys the Nova API service, and then it has conditional logic: if quantum was set to false, we deploy Nova Network; otherwise we deploy a Quantum server, a Quantum OVS plugin, an OVS agent, a DHCP agent, and an L3 agent. Then various other services get deployed on this constrained view of a Nova controller, including a scheduler, an objectstore, a cert service, consoleauth, and, if we're enabling VNC, also a VNC proxy. I think this is a pretty good view of how these openstack classes themselves are a fairly constrained way of doing things, but they rely on individual configurable components that are very, very flexible.
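Just to show the shape of the defining side of one of these interfaces, here's a heavily simplified sketch of the kind of thing openstack::keystone does. This is not the real class, just the composition pattern, with most parameters omitted.

    class openstack::keystone (
      $admin_token,
      $public_address,
      $glance_user_password,
      $nova_user_password,
      $enabled = true,
    ) {
      # the core keystone module does the actual server install/config
      class { '::keystone':
        admin_token => $admin_token,
        enabled     => $enabled,
      }

      if $enabled {
        # endpoints and service users for keystone itself and the other services
        class { 'keystone::endpoint':
          public_address => $public_address,
        }
        class { 'glance::keystone::auth':
          password => $glance_user_password,
        }
        class { 'nova::keystone::auth':
          password => $nova_user_password,
        }
      }
    }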
So again, to answer the question of how you do this in production: one way is, if you're happy with the openstack module, to start with the openstack classes; but if you're not, have a look inside and use them as examples of how you'd want to compose your own OpenStack services. Some people are happy to use these as-is; other folks need to crack them open and use them as an example for building their own more customized versions of OpenStack. Any questions about the openstack module before we drill down and talk more about the core modules?

Yeah, so when we think about the openstack module, it's presenting a more constrained way of doing things. For the example of Keystone, think of the openstack module as being related to all the modules. From the perspective of the keystone module, if I'm thinking about Nova, that means I need a Nova endpoint and a Nova user and a Nova role and a Nova service. But if we're thinking about Keystone from the perspective of the openstack module, I need endpoints for Nova, for Glance, for Keystone, for Swift, and those rely on the individual components from the other modules. Those are the two main purposes of the openstack module: one is to have this whole-of-OpenStack view that utilizes individual components from the core modules; the other is to have a very constrained way of installing OpenStack, just for simplicity, so that all I want to do is an all-in-one installation, or all I want to do is a controller with compute nodes.

Yes, there's a Cinder module. I'd have to actually look at the Cinder module; the question is how flexible the Cinder stuff is, and the reality is that it's as flexible as the community around the Puppet modules has made it. If we look at the Cinder module, in its volume directory I can see that the things supported right now are NetApp and iSCSI, and I know for a fact that there are patches for Nexenta. Those are the things that people using these modules have added. So if something isn't there, you'd want to look at the Cinder module and see how it's laid out: okay, there's a volume directory, that's where the volume extensions go, so add your own extension using the existing ones as an example. Content that just adds a new plugin is pretty easy to merge; we're a lot more critical of things that touch existing components, because they may break backwards compatibility, but for new components we're pretty okay with merging things in and trusting that the person who added the plugin knows what they're doing. So these are the plugins currently supported, and that's just because that's what people using this module are using: NetApp and iSCSI, with Nexenta coming soon.

So this is drilling down to the core modules. The core modules are things like Nova, Swift, Glance, Keystone, Horizon, OpenStack (which, as we talked about, probably shouldn't be in this list), and Cinder. Each of those has its own module, which specifies all the various services that can be configured as part of that component of OpenStack. I have Quantum and Ceilometer in bold here because of where those modules live: the modules live on StackForge. Who knows what StackForge is? Just a few people.
So StackForge is where the OpenStack infra team is creating what we'll call a continuous-integration-as-a-service platform for OpenStack: it uses the Gerrit system, uses the same coding processes as the other projects, and has gating capabilities. All the modules live there, or are migrating there, except that Quantum and Ceilometer aren't there yet; they're coming soon. Again, if we look at the Puppetfile I talked about with Librarian Puppet, you can see which things are on StackForge and which things are pointing at my modules on GitHub, for example. Quantum will be pushed to StackForge in the next couple of days. For Ceilometer, right now everyone's using the one from eNovance, eNovance's Ceilometer module on GitHub, and the eNovance module will be the one that moves to StackForge.

And just to drill down here, for people who have some familiarity with OpenStack, I think the thing to note is just how flexible these individual components are. In this example I'll do a quick pwd, mostly to show that we're in the nova directory of the modules directory, so we're actually looking at the Nova module here, and you can see that it has just tons of configuration for the individual components. The reason is that configuration at this level really thinks about what the services are and what configuration abstractions exist. So people who have more complicated use cases for how they want to deploy OpenStack can use the openstack module as an example of how to compose these things into their own individual roles.

Most of these are pretty simple. If we look at the nova::conductor class, for example, it just uses this nova::generic_service, which sets up the service and the actual package. One of the interesting things here is params: everything that's specific to which operating system we're using and supporting, all of that data, is split out into the params file. You can see that most of the differences between operating systems come down to the fact that packages all have different names, and different packages are supported for different pieces of the components. All of that is abstracted away, and you'll see that these params classes exist for all the modules.
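The pattern looks roughly like this; the package and service names here are illustrative rather than the exact strings from the module.

    # params: all the OS-specific data in one place
    class nova::params {
      case $::osfamily {
        'Debian': {
          $conductor_package_name = 'nova-conductor'
          $conductor_service_name = 'nova-conductor'
        }
        'RedHat': {
          $conductor_package_name = 'openstack-nova-conductor'
          $conductor_service_name = 'openstack-nova-conductor'
        }
        default: {
          fail("Unsupported osfamily: ${::osfamily}")
        }
      }
    }

    # a tiny service class that just wires that data into the generic service
    class nova::conductor ($enabled = false) {
      include nova::params
      nova::generic_service { 'conductor':
        enabled      => $enabled,
        package_name => $nova::params::conductor_package_name,
        service_name => $nova::params::conductor_service_name,
      }
    }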
Right now what's on the Forge is Folsom. The Grizzly modules are working and functional; the lab we did today, if you want to take it home and finish it, does use the Grizzly modules, they just haven't been pushed to the Forge yet. The Forge is generally where the more stable releases of the modules live. There's even a StackTach module up there, and a simple search for OpenStack shows all the OpenStack-related modules on the Forge. Yeah, librarian-puppet supports both downloading content from the Forge and pulling content directly from GitHub.

The last thing I want to talk about is the move to StackForge for these modules. But first, just curious, who actually finished? Who has a functional OpenStack? That is not bad; you guys are rocking, and thanks especially to my helpers for that. For the people who didn't get there, I can clean up that readme a little and make sure it's fixed. The main difference when you take this thing home is that you're going to need to change the proxy, probably to a local squid proxy, or eliminate the proxy configuration altogether.

Oh, sorry, the credentials for Horizon. Yay, people are making it to Horizon. The user is admin and the password is changeme with a capital C and a capital M, so ChangeMe. But again, if we want to know where the credentials for signing into Horizon live in the code, where do we find them? Where does that point to? Hiera: they're in the Hiera data store. All the credentials, all the passwords, everything that is data is going to be in that Hiera data store.

So the question is: that's great, you showed us a single compute, but in reality we want multiple computes with a single controller. For that use case, look at the site manifest, which is at manifests/site.pp in the root of the repository. Sorry, I know I go way too fast. The important part of the site manifest is the node definitions, and you'll see two of them: one node definition for the OpenStack controller and one for compute. The thing to notice is that these node definitions are enclosed in those super cool forward slashes. What does it usually mean when something is enclosed in slashes? It's a regular expression. That regular expression is matched against the identifier for the machine, which by default is the hostname but can be overridden with --certname on the command line. So any machine that applies this site manifest and has a certname matching compute will itself become a Nova compute node, and it will be configured to talk to the same control node, because it does the same Hiera lookups and gets the same data values. To add another compute node, you just bring up another machine and give it a matching --certname.

In the Vagrant environment specifically, there are two compute nodes defined, but please be warned: I use the second compute node for Tempest testing and it's given 12 gigs of RAM. You may want to go into the Vagrantfile and lower the RAM for that compute node, because it uses tons of RAM; it has to for Tempest. That may only be true for the Folsom Tempest; we'll see whether it needs that much RAM going forward. So again, it's a regular expression: any identifier that matches compute becomes a compute node, and you can spin up as many as you like.
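To make that concrete, here is a hedged sketch of what those two node definitions look like. The regular-expression node matching is the real idea from site.pp, but the included class names are placeholders rather than the repository's actual role classes.

# Sketch of manifests/site.pp: two regex node definitions.
# The included class names below are placeholders; the real file wires up
# the controller and compute roles and pulls its data from Hiera.
node /controller/ {
  include role::openstack_controller   # placeholder for the controller role
}

node /compute/ {
  include role::openstack_compute      # placeholder for the compute role
}

Any machine whose certname matches the second regex, whether that is compute1, compute2, or something you set explicitly with --certname, picks up the compute role and the same Hiera data, which is all it takes to add more compute nodes.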
Oh, it's on StackForge; it's stackforge/puppet- and then the module name. That's actually a good question for people who want to look at these things: go to github.com/stackforge. Sorry, who set it up? I would guess Monty, but it was either Monty or mordred or someone who works for him. And no, it does not mean that these modules are incubated; it means the OpenStack infra team has what I would call continuous integration as a service, and StackForge is the platform for it. These modules are leveraging OpenStack's infrastructure for continuous integration. Like I said, Monty and his team have built a pretty interesting continuous-integration-as-a-service tool, and it's all based on StackForge. It also means the modules follow the same development practices: their ticketing is in Launchpad, and the code sits in Gerrit. What you see on GitHub is actually a mirror of review.openstack.org, the Gerrit instance; the code really lives and breathes and accepts patches through Gerrit.

So that's in Hiera. The question is, where can you specify the version? In Hiera, you can specify it in one of two places, and common is probably the easiest. If you look in your common data, just specify an OpenStack version; it accepts Folsom or Grizzly at the moment. By default it's Nova Network. Assuming Quantum by default, I don't think we're there yet; the Quantum stuff isn't perfect right now, but it's getting better every day. There is a variable called network type. For example, on my laptop I'm doing all Quantum testing right now, but what you deployed today was Nova Network, so network type was set to nova. If you want to see the latest status of the Quantum work, change that to quantum. Quantum only works on Ubuntu right now; it'll probably be working with Red Hat next week [see the sketch below].

So two things: the latest version of the code is always going to be on StackForge, and if you want to follow the project, go to review.openstack.org, where each of these is a separate project. For example, if you want to see what patches have been submitted, what's been approved, and what has failed unit tests: are people pretty familiar with this view of the OpenStack development tooling? These are all of the open patches, and there are a lot of them because we're all here this week. You can see that each patch belongs to a project, and all the projects are under StackForge, for example stackforge/puppet-keystone. This is really where the code lives and breathes, and you can see all the patches that are in flight right now. A lot of these patches are targeting Havana, and within a week or so the modules will be targeting Havana and we'll just be backporting critical bug fixes for the Grizzly stuff. Right now we haven't actually cut the Grizzly branches yet; that's going to happen soon, because everything works for Grizzly at the moment.

I'm sorry? Sure. For my purposes, the setup I demonstrated today is what I use for continuous integration. I prefer not to use a Puppet server for CI, because it's an extra machine I have to spin up and it costs me extra cycles on individual test runs.
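Here is a small, hedged sketch of how those Hiera keys might be consumed on the Puppet side; the key names and defaults are approximations from the talk rather than the exact ones in the repository.

# Illustrative only: reading the release and network type out of Hiera.
$openstack_release = hiera('openstack_release', 'grizzly')   # 'folsom' or 'grizzly'
$network_type      = hiera('network_type', 'nova')           # 'nova' or 'quantum'

if $network_type == 'quantum' {
  notice('Quantum networking selected; currently Ubuntu only.')
}

In the lab, the network type key was left at nova, which is why everyone deployed Nova Network; flipping that one key in your common data is how you would try the Quantum work instead.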
For the Swift stuff, it only works with a Puppet master, because for Swift I have to do all this dynamic data lookup to figure out how to build the ring. But the Nova stuff can run either with a Puppet master or masterless. Yeah, I don't really have much for bare metal. There's stuff out there, but most of the examples you're going to find are cobbler-based; Cisco Systems, and I think Cybera, have something cobbler-based. There's a lot of cobbler-based bare-metal tooling out there, but it's not directly associated with this project. The project basically assumes you've figured out how to get the OS onto the machine, and it starts from there.

So, all the OpenStack stuff on the Puppet Labs GitHub is being deprecated; it has all been moved to StackForge, because we had been doing the plain GitHub workflow before. So if the question is which one you should use, the answer is the StackForge repositories, and the GitHub copies are just mirrors of those. Everything under the Puppet Labs namespace is going to be deprecated. StackForge will still be feeding into the Puppet Forge for stable releases, but the Puppet Labs GitHub repos are deprecated. The rationale is that we wanted these things to live as close as possible to OpenStack and to follow the OpenStack development process. Best case, we get more OpenStack core contributors working on this stuff. Another advantage is that it would be nice for the operators working on deployment tooling to learn the actual practices for getting code submitted to OpenStack; as a ramp-up process, we can get more operators contributing.

Any other questions? So that's a great question: OK, you've given us a slightly modified version because we're proxying off your server, so what should I use for real? If you want to recreate this and you just get the stuff off StackForge, it assumes you've installed a squid proxy on your Mac. So if you install your own local squid proxy and then grab this stuff from StackForge, it will work, and the readme explains what those requirements are. It has all been designed assuming a locally running squid proxy. If you don't want a proxy, if you actually want to make direct connections to the outside world, which I don't know that I recommend because it makes things way slower if you're going to do this more than once, then you want to look at the site manifest and specifically the things in the setup directory. If you go to the manifests/setup directory, those files are where the various proxies are configured, and if you just remove that code, it will do direct connections through the NATed network on VirtualBox. The proxy configuration is totally decoupled from the image, so the image itself doesn't assume any proxy. Yeah, that's exactly what I just said: if you don't want to use a proxy, you need to remove stuff like this, and the only things you have to modify are in the manifests/setup directory. Look for things that say proxy and comment them out if you don't want to use a proxy. And yes, if you're going to be doing stuff like this on your laptop, you should use a proxy; that's why it's the default, because otherwise that's a lot of network traffic. If for some reason you don't want to start with a proxy, comment that stuff out, but honestly it's just easier to install a squid proxy. It assumes that on its local 172.16.0.1 network, port 3128 is going to be running the proxy. So I think it's fine.
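For reference, the kind of thing you would find, and would comment out, under the manifests/setup directory is roughly this. It is a hedged sketch rather than the actual code from the repository, and the file path and the 172.16.0.1:3128 address are assumptions based on what was said above.

# Illustrative sketch of an apt proxy snippet like the ones in manifests/setup.
# The file path and proxy address are assumptions, not copied from the repo.
file { '/etc/apt/apt.conf.d/01proxy':
  ensure  => file,
  content => "Acquire::http::Proxy \"http://172.16.0.1:3128\";\n",
}

Commenting out a resource like that (or removing the file it manages) is all it takes to fall back to direct connections through the VirtualBox NAT network.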
I mean, honestly, I have 500 gigs of SSD and 16 gigs of RAM, and I can spin up about six or seven boxes before it falls over. Previously I had no SSD and eight gigs of RAM, and it was around four or five boxes before things started to fall over. The main consumer is memory. I like at least a gig on the controller, because without enough memory you miss a lot of messages and get missed connections on RabbitMQ. For the compute host it really depends on how big the image is that you're using for testing. My stuff uses a CirrOS image with the tiny flavor, and tiny is still 500 megs of RAM, even for the tiny images in OpenStack. So either create your own custom flavors, which I don't know how well supported it is in the APIs but I know you can do from the command line, or assume a gig and a half to two gigs so you can spin up a couple of VMs on the compute host. For a basic three-node environment, eight gigs should be fine.

Any other questions? Disk? I don't know, 20 gigs? It's not that much; memory is a way bigger requirement than disk. This stuff just doesn't take up much disk. My CI boxes churn through tons of disk, but that's because they're creating everything over and over again and installing all the components each time. How big is the precise base box, about 300 megs? So the whole thing is probably, I don't know, 10 gigs; it just doesn't require that much disk. Memory is what it requires.

Honestly, I'm watching all that stuff, but someone has already asked, why aren't you using Heat? Because this works, and Heat is still some circles and arrows on a whiteboard. But honestly, if someone wants to show me how to do this in Heat, and it works today, and I can do it in a few hours, I'm down. If someone wants to show me how to do this with TripleO and Heat, I'm down; I'd guess you're probably better off showing me in six months or a year. I don't really follow the Chef stuff very much. It's similar in intent but has a fundamentally different configuration model, and in terms of what support they have for OpenStack, no idea. You should ask them; they'll probably tell you something.

Other questions? Yeah, so look on StackForge for the Puppet OpenStack dev environment, and the main requirement for running it is going to be a squid proxy on your local machine, or at least something listening on port 3128 that can do some kind of proxying. Yes? The answer is that it kind of depends on when you interrupt it, but if it's done building the box and setting up the network interfaces, you can run vagrant provision to redo just the Puppet part. So vagrant provision with the node name will do the Puppet stuff. And it looks like I'm totally out of time. Thank you, everyone.