Okay. So hi, I'm Matt Treinish, and I'm here today to talk about, well, the title of the talk is Dirty Clouds Done Dirt Cheap. I probably can't say the name of the band the title's from without getting sued, but it got you in here, so that counts for something. Today I'm going to be talking about building clouds. Specifically, building a small cloud.

Part of the OpenStack mission statement says that it will work at all scales. During the big keynotes and all of the talks we hear about the really large deployments, because that is exciting. But as OpenStack developers in the community, it's not always clear how well we do for the small deployers. When I was in college, I was a sysadmin working by myself in a lab, and I sometimes think to myself: could I have used OpenStack in that role? Would I have been successful if I had tried to deploy OpenStack myself by hand? So I thought it would be a good exercise to go and try to do it, as if I were back in college as a sysadmin.

The scope of the project was to pretend I was a sysadmin with no prior knowledge of OpenStack, and just go about installing it from the install docs and Google searches, like everyone does when they're trying to figure out how to use software. And I set myself a very modest budget of 1,500 US dollars. That was also the cost of my first desktop computer, which I bought with my bar mitzvah money a long time ago, and I thought it would be a great price point to see whether you could actually build a cloud with a modest amount of money.

I scoped myself to just building a compute cloud: the ability to provision VMs that I could SSH into and do work on. That involves Keystone, Glance, Nova, and Neutron, the base pieces you need just for that. It also gives me a base to upgrade to a bigger feature set in the future if I need it. I decided to use the Ocata tarballs, because those were the most recent stable release. Tarballs kind of go against pretending I'm a sysadmin with no prior OpenStack knowledge, but I chose them specifically as an OpenStack developer. So I kind of broke my thought exercise almost immediately. But I did that to see how OpenStack as a development community releases software that people can use, and to see how well you can use OpenStack with just the code, no packaging and no assistance from anything else. As part of that, I said no automation, no pre-existing install scripts, just running commands in a shell and writing config files.

The first thing you need to build a cloud is hardware. Since I set myself a budget of $1,500, I needed criteria for buying it. Since I'm building a compute cloud, the most important thing was the number of cores, so maximizing core count per US dollar was critical. The second priority was RAM, because if you don't have enough RAM you can't really do a lot even if you have a lot of cores. And the machines don't have to be fast, because fast costs money. I thought about whether I could build small desktops and put them together. That didn't work, because the cheapest desktop you can build is about $500 and only has four cores. A lot of people do similar experiments with Intel NUCs (I forget what the acronym stands for), the little computers that use laptop processors and look like small cubes.
And I didn't use those, because they're laptop processors with two to four cores, and they're also a few hundred dollars each. So what did I do? I turned to eBay. Everyone who has a data center and upgrades throws the old servers away, and there are companies out there that buy the used servers, refurbish them, and sell them on eBay. So I started looking around eBay to see what I could find, and it turns out there's a lot of reasonable stuff from eight years ago that's dirt cheap.

I bought some Dell PowerEdge R610s, which have Nehalem processors; I'm probably not pronouncing that right, but it's the first microarchitecture after Core 2. Each server came with two of them, and each of those has four cores. They came out around 2008 or 2009, so they're quite slow. They also came with 32 gigs of RAM and 250 gig drives, more or less. And they were only $215 each, so I bought five. This is the day they were delivered: the FedEx guy came and dropped the boxes outside in the rain, and I got a lot of stares from my neighbors because these giant boxes were blocking my door. But yeah, I got five of them.

The next problem was: okay, I've got five servers in my tiny one-bedroom apartment. Where am I going to put them? How am I going to mount them? I looked at all of the racking solutions out there, and all of them cost $100 or more on eBay, and I had just spent $1,200 on computers; that was kind of pushing my budget. So I turned to the LackRack. For people who are not familiar with it, this is an IKEA side table that happens to have a 19-inch width between the legs, which is the width of a rack-mount server. It comes in tons of colors, and it's only $10. It worked very well. So I racked the servers. I also bought some casters, because servers are heavy and the casters made them easy to move around; the casters actually cost twice as much as the table. Then I found a place for them in my closet. That is my bedroom closet; you can see some of my formal wear and suitcases in the background, and boxes for TVs and stuff. So that was my solution for mounting everything.

Once everything was wired and powered on, I was able to turn on the servers, and I found a ton of quirks with them, because anyone who's bought hardware on eBay knows you never get exactly what's described. The first thing was that they're super stripped down. This company takes everything out of the servers and ships you the bare minimum working unit. That meant no management interface and no redundant power supply, which were standard from Dell at the time; I had very similar servers back in college, and none of that was there. The description said there were eight 4-gigabyte sticks of RAM. It turned out to be four 8-gigabyte sticks, so a win for me. The memory was installed in the wrong slots, so it didn't all show up properly: there's a specific channel configuration for memory on the motherboard, they screwed it up, and it only reported half as much memory. One of the RAID controller batteries was dead, so a light comes up orange on power-on, and I still haven't figured out where that battery is on the motherboard. Another win for me: it was listed with 10K RPM drives and came with 15K, so a little bit faster, but also louder. And my favorite thing was that it came pre-installed with Windows Server 2012 and a default password of Apple123. I don't know why Windows Server 2012.
I mean, they do put an OS on there to make sure their refurbishment works, but I would have just put Linux on it instead of Windows. Whatever. It was fun playing with all of this hardware.

So now we've got a working rack of five servers. How am I going to set up OpenStack on it? I'd already defined the set of services I was going to install, so I needed to figure out how to lay it out. After reading the install guide, I figured: I've got limited capacity, eight cores per machine and five machines, so 40 cores, 80 virtual cores if you count hyperthreading. It's not a lot. So I'd do one controller node that is also a compute node, basically an all-in-one, and not load it down too much; RabbitMQ and the database don't have to scale like crazy because there are only five nodes. And then I had four dedicated compute nodes. That felt like a fair balance for building a small cloud.

Then I figured out the steps for installing from tarballs, and this was the first issue I hit. We don't document anything between steps one and six; they're not documented anywhere. All of the install guides say to install the package: apt-get install nova, or whatever the SUSE equivalent is. So you have to download the tarball, create the service users, install all of the binary requirements (which may or may not be documented in the project, often not), then create the service's /etc and /var directories so it can read config files from the default location and write any local state, then copy the data files from the tarball into the /etc directory you created, and then you can pip install the tarball. After you've done all of that, you can follow the install guide from the point where it says apt install. These are all things that packages do for you; that's why you use packages. But as a development community we don't actually tell anyone about these steps. It's assumed knowledge, and I guess the packagers figured it out on their own.

With the basic tarball steps outlined, I went about starting each service, the first being Keystone. The install guide says to install Keystone first, and that makes sense. Keystone, honestly, was a very pleasant experience; I was surprised, especially based on some of my past experience with it in the developer community. It took two config options to get Keystone working. One was optional, something the install guide recommended: set the token provider to Fernet. The other was the database connection. That's all it needed. The install guide did not document how to deploy Keystone as an Apache WSGI app using mod_wsgi (Keystone only ships a WSGI script, and you have to deploy it on your web server yourself), but a Google search quickly found an Apache guide for Keystone that documented exactly how to do it, and where all the config files it ships in the repo live. It was honestly the most pleasant experience of the whole exercise: it just worked and only took two config options. But it was not without issue. I hit a traceback as soon as I started it up. It turns out Python does not have a dependency solver in its installer and packaging system.
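Before getting to that, here is roughly what those undocumented prep steps look like in practice, using Keystone as the example. This is a sketch: the paths, user name, tarball version, and database URL are illustrative assumptions, not something the install guide shows.

```bash
# Create the service user and the state/log/config directories that a
# distro package would normally create for you (names are assumptions).
useradd --system --home-dir /var/lib/keystone --create-home keystone
mkdir -p /etc/keystone /var/log/keystone
chown -R keystone:keystone /var/lib/keystone /var/log/keystone

# Unpack the Ocata tarball and copy the sample config/data files it ships
# in its etc/ directory into the real /etc/keystone.
tar xzf keystone-11.0.0.tar.gz && cd keystone-11.0.0
cp -r etc/* /etc/keystone/

# Install the service itself; binary dependencies (libssl-dev, libffi-dev,
# a database client library, and so on) still have to be found and
# apt-get installed separately.
pip install .

# The two settings that were actually needed in /etc/keystone/keystone.conf:
#   [token]    provider = fernet
#   [database] connection = mysql+pymysql://keystone:SECRET@controller/keystone
```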
So pip installing Keystone from the tarball installed all of the packages and all of their dependencies, and somewhere along that path something got installed in the wrong order, leaving a version of requests that was incompatible with oslo.policy. That caused the traceback. It was simple to fix: I just installed a version that oslo.policy accepts. There's a laser pointer right there in the requirement it's parsing: oslo.policy doesn't want requests 2.13, and 2.13 was what got installed, so I just downgraded to 2.10. I guess it just doesn't want 2.13. And life was good. Keystone worked. I was able to run all of the commands I needed, and the install guide told me how to set up Fernet, the key rotation, and all of the service users.

After Keystone, the install guide tells you to install Glance. The Glance configuration was pretty easy as well. It's just basic information: how to configure your store (where it writes images), how to set up auth, the directories where everything is stored, and the database connection, obviously. Everything went pretty well, but I forgot to create my image directory. Simple mistake. It turns out the Ubuntu package does this for you. But a helpful log message said configuration for store failed, adding images to the store is disabled, and that was the hint that I had forgotten to create the directory. So I went back and made it.

The next one after Glance is Nova. Nova was a bit more of a beast. It wasn't too bad, but the database migrations were slower, because it's doing a lot more in the database. Nova, as of Ocata (and maybe the release before), has a separate API service for the placement engine used by the scheduler, and the install guide tells you to install the placement API. But it only ships as a WSGI application, just like Keystone, and this time a Google search didn't turn up the same helpful guide that Keystone had. How to deploy the placement API is completely undocumented in the Nova docs. So I copied what I did for Keystone, but it would have been helpful if the Nova docs had told me how to do it.

The first problem I couldn't solve easily, and gave up on, was noVNC. noVNC is a proxy for VNC connections to all of the guests you launch; it lets you funnel them all through a single endpoint. It turns out the Ubuntu package did not work for this. It's a binary dependency, but apt-get install novnc tries to pull in Ubuntu's Python Nova package, which is the dependency pointing the wrong way. So I couldn't install it from there, and when I looked at how hard it was to build from source, it was too involved, so I gave up. It turns out it's not actually required, but things complain about it. One other thing that will come up later in the presentation: I had to set force_config_drive to true, which is not documented in the install guide, but it is documented in other places.

I also hit another traceback: another requirements issue, this time related to PBR, which had been installed by some transitive dependency. At this point I realized I had forgotten to do something pretty important: use a concept that was invented for OpenStack and pushed back into pip, called constraints. The OpenStack community ships a list of all of the packages pinned at specific versions that all work together, which is what we test with, and that's the constraints file.
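In practice that is just an extra -c flag on pip. A minimal sketch, where the URL and the version number are from-memory assumptions (the stable/ocata branch has since gone end of life, so the exact path may no longer resolve):

```bash
# Grab the tested "upper constraints" pins for the Ocata series from the
# openstack/requirements repo (URL is an assumption from memory).
curl -o /tmp/upper-constraints.txt \
  https://opendev.org/openstack/requirements/raw/branch/stable/ocata/upper-constraints.txt

# Pass the pins to every pip install so each dependency lands at exactly
# the version the community tested with.
pip install -c /tmp/upper-constraints.txt ./nova-15.0.0.tar.gz
```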
So when you pip install and specify a constraints file, it always installs those exact versions, and that's guaranteed to work because that's what we develop against in the community and everything is tested with it. It turns out the requirements files we ship, on their own, do not work. So when I saw a traceback like this for the second time, I thought: oh, I forgot the constraints file. The problem is this isn't documented anywhere; it's not listed in any of the docs on how to build or install OpenStack from source. But when I saw it, I remembered, and moved on. I pip installed Nova again, specifying the constraints file, and it all worked.

The next thing I needed was a sudoers file. Nova performs some privileged operations on the local host it's running on, and it uses a system called rootwrap to escalate privileges from the service user to run them. But rootwrap needs sudo access, so you need to write a sudoers file for it. There's no documentation on this. It's in the packages: when you apt-get install Nova, it creates the sudoers file, because the packagers realized you need it. This is the role that packaging plays; installing from source, you hit all of these problems yourself. The next thing I needed to fix was the same problem I had with Glance: I forgot to create the state directories. Simple mistake.

Now we get to networking, everyone's favorite topic. I read through all of the install and networking guides that the OpenStack docs team produce and decided I was going to use provider networks, because they say it's simple, just an IP on a flat L2. I didn't want to deal with self-service networking, where I'd have to create a network, a subnet, and a router to get external traffic to my guests, because I'm building a compute cloud; I wanted the guests to come up on my home network so I could just log into them. And provider networks, at least according to the docs, seemed like the best way to do that. So I copied this diagram and modified it a little bit, for reasons that will become obvious in a bit, to show how the networking is set up on the nodes. You can see the controller node is on one interface to my home network on my unmanaged switch, and the compute nodes split out a second interface with Linux bridge, with the provider network connected to the same network, so it's just bridging.

So that's how I was going to configure Neutron, and I started reading the install guides on how to do it. The problem is, it is really complicated. I've configured Neutron before; I'm also a DevStack core contributor and I've maintained that code in DevStack. The problem is that Neutron has about four or five configuration files, each holding little bits of information, and it's not entirely clear how they all come together, which makes it really confusing to figure out what you're doing. The install guide says: copy this chunk of config into neutron.conf, copy this little chunk into ml2_conf.ini, copy this little chunk into plugins/ml2/linuxbridge_agent.ini. It's not clear how all of these pieces fit together or what exactly you're doing. And it turns out you don't even need the split, because underneath it all oslo.config just concatenates them and treats them as one file. So why is it spread across multiple config files?
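To give a flavor of what those fragments look like for the Linux bridge provider-network setup, here is a rough sketch. The option names are recalled from the Ocata install guide and the interface name eno2 is just a placeholder, so treat it as illustrative rather than a working config.

```bash
# Fragment 1: core service settings.
cat >> /etc/neutron/neutron.conf <<'EOF'
[DEFAULT]
core_plugin = ml2
service_plugins =
EOF

# Fragment 2: what the ML2 plugin allows.
cat >> /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
[ml2]
type_drivers = flat,vlan
mechanism_drivers = linuxbridge
[ml2_type_flat]
flat_networks = provider
EOF

# Fragment 3: how this particular host is wired.
cat >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini <<'EOF'
[linux_bridge]
physical_interface_mappings = provider:eno2
EOF

# oslo.config simply concatenates whatever --config-file arguments a
# service is started with, so all of this could live in a single file.
```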
And the really complicated bit with the multiple config files is that when you launch a service, you need to specify all of the config files it uses. Neutron has about three to five different processes that have to run on a machine to make it work, and you don't know which config files go with which process. For example, for the Linux bridge agent you have to specify neutron.conf, ml2_conf.ini, and linuxbridge_agent.ini. But how am I supposed to know that? This was the first time I actually had to give up my premise of not looking beyond the docs and Google searches, and go look at packages and DevStack code to figure out how the bits fit together, which was disappointing: I couldn't get everything running without cheating.

The other thing I hit was that the rootwrap and sudo configuration are completely undocumented, from what I could find. To get rootwrap working in Neutron you need to set two config options to tell it which commands to run, basically the path to rootwrap and where its config file is. I hit an issue booting it up and had to Google to find that. And then, when I started everything, I hit this. I don't think anyone can read it, but at the bottom it says: error, ValueError, I/O operation on closed file, unserializable message. Anyone have a clue what that means? I was completely lost. It turns out this is how Neutron tells you ipset is not installed on the machine. The hint for me (it took me about three hours to decode this) was right here, where it says it's executing the rootwrap daemon, which told me it was trying to run an external command with privilege escalation: basically doing sudo on some command. The only way I could figure out which command was to read the Neutron code to see how it runs commands through rootwrap, and then discover that it runs them in a separate daemon process with its own config file, which I had to go into and manually set to debug mode. If I didn't do that, all I got was this. Once I set debug mode in rootwrap.conf, it printed out: oh, ipset is missing. It would have been nice to know that, or to get a helpful message. I filed two or three bugs about this in Neutron, and it's sort of fixed: they now log the command they're trying to run before it errors, but you still get the same error message, which I guess is enough of a hint for most people.

After setting up the base services on the controller node, it's time to move to the compute nodes. It's basically the same thing: you run fewer services, but it's rinse and repeat over all four nodes. The one thing you don't want to forget is that in a cells v2 world you have to run nova-manage cell_v2 discover_hosts, so Nova knows about the compute hosts as you add them. The other thing I hit was that you want to turn off AppArmor, or, if you're smart enough to write AppArmor or SELinux rules, write a rule to allow things, because by default I was getting libvirt blocked on the compute hosts by the default security settings in Ubuntu, which is the host OS I used. I'm not smart enough to figure out AppArmor, so I just turned it off, which is what I think most people do. So after I got all of that set up, I thought: okay, great, I've got all the software installed on my cloud.
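For reference, the tail end of that per-node setup boils down to something like the sketch below. It is from memory, using the usual default paths and service names rather than anything copied out of my notes, and in real life the agent would live in a systemd unit instead of being backgrounded by hand.

```bash
# Which config files go with which process: the Linux bridge agent needs
# all three of the fragments above.
neutron-linuxbridge-agent \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini &

# The lazy route around the libvirt denials: turn AppArmor off entirely.
systemctl disable --now apparmor

# After each new compute node is up, map it into the cell so Nova can
# schedule to it.
nova-manage cell_v2 discover_hosts --verbose
```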
It was time to test it with a first boot. I downloaded a CirrOS image, which is what the install guide tells you to use, uploaded it to Glance, and ran nova boot. That's what I got. Well, to be fair, Nova did say the instance went active, but when I tried to SSH into it, nothing worked. I did a console log get, and the console log was completely blank. So I sat there and scratched my head for a bit: why would the console log be blank? I started tracing through all of the logs to follow the boot process, and I found this. It's a debug-level log (I had to turn debug on in Glance), and it says: wrote zero bytes to /var/lib/glance/images with checksum whatever. That checksum actually corresponds to zero bytes. I had definitely uploaded the image; I was confident I did. It's a CirrOS image, so the upload was fast, but it was all on a local network, so that seemed plausible; it felt like it was sending the data. But Glance said zero.

So what did I do? I spun up tcpdump on both my desktop, where I was running the Glance upload command, and on the server Glance was running on, to see whether the data was actually sent from one and received on the other. It turns out, yeah, the data was definitely going over the wire. So now what? I had no clue, and Glance only logged that one line. I ended up having to instrument the Glance code: I went into the code of the running service and printed out the size of the image object as it passed through the WSGI service, to figure out where the data was disappearing, because clearly it was getting to the machine and the process was receiving it. It was there at the beginning, and by the time it reached the code inside Glance that actually processes the image data, it was zero bytes. That meant somewhere in the middle, outside of Glance's code, the data was getting dropped. And then I thought: oh man, I remembered that I had forgotten constraints before. So I reinstalled Glance with the constraints file, tried it again, and it worked. So there was some kind of requirements incompatibility. I was so happy that it worked that I forgot to take notes, so I don't actually have what I'd need to file a bug anymore. But there was some kind of requirements issue that failed like this, and I had to instrument the code to figure it out. If I didn't understand how it worked internally, I would never have been able to figure out what was going on. Granted, no one would ever hit this issue if they just used a package, so it's my own fault.

Anyway, after I got that working and uploaded the image, everything was great. I started an instance, got a console log this time, tried to SSH in, couldn't, and looked at the console log a bit more closely: it was doing DHCP and timing out. And that's where we come to what I mentioned before about forcing config drive. This is my home network, a bit more annotated. I took that previous diagram, and I'm really bad at art and the docs people aren't, so you can see where I screwed up. That's my home router; that's my physical, unmanaged switch, all on a single L2; those are all my other computers on the network; and that's the internet exploding. The issue I had was that running everything on one L2 means the DHCP servers conflict with each other, because they're in the same broadcast domain. The DHCP request would go out from the guest to Neutron's DHCP agent as well as to my home DHCP server. And when I saw that, I thought: well, that's wrong.
So I turned off DHCP in Neutron. When you do that, Nova puts the static network configuration into the config drive, but it turns out cloud-init does not understand static network information in every case, and CirrOS is one of those cases. So the guest sat there waiting for DHCP, because nothing was configured, timed out, and I couldn't SSH into it. I had to know the entire workflow and handshaking, which I knew from OpenStack development and from debugging this exact issue in the gate, to figure this out. My solution was a project from the infra team called Glean, which is their answer to this exact problem: when you boot an image, you need something to read the config drive and write the network information into the interfaces file. This would all have been avoided if I'd used self-service networking, because Neutron would have created its own L2 and there wouldn't have been a broadcast conflict. But I didn't, because the docs made it seem like provider networks were what I wanted, since all I wanted was for guests to come up on my home network without dealing with self-service networking. After I figured that out and rebuilt all of my images with Glean using diskimage-builder (more on that in a second), I was finally able to SSH into my guests, and everything was happy from that point on. It's not the best solution, but since I'm the primary user, I can remember to build images with Glean. I might someday get a managed switch so I can isolate things at layer 2, or switch to self-service networking to avoid the issue entirely, but at least I have a workaround in the short term.

So now that everything was working, I had a bit of a crisis for this talk: okay, great, I spent two or three days getting everything set up and wired, so what am I going to do with it now? Why did I build this? I just spent a lot of money. The first thing I came up with is OpenStack development. I'm an OpenStack developer, I contribute a lot to the upstream project, and this cloud has a ton of capacity for DevStack. It's pretty good: the machines are slow, but they're good enough for development and testing. The other thing I found is that it's really good for developing applications on top of OpenStack. I've got a cloud at home with really low latency that I can hit and experiment with without worrying about the bill. Mohammed has left the room, but I was going to make a joke about VEXXHOST here, since I have an account on his public cloud. I can experiment without paying money and just play around, and I've actually developed a couple of applications that sit on top of the OpenStack APIs, which I wouldn't have done with a public cloud account that costs me money. I'm also a Tempest developer, and Tempest runs against clouds, so I ran Tempest against it and found four bugs in Tempest. So it was a useful experience just from playing around with OpenStack.

The other thing it's good at is cloud-native compute workloads. It's really good for running embarrassingly parallel jobs, because I've got 80 cores sitting there: as long as the jobs don't need to communicate with each other, and I don't care about the throughput of a single node, I can just do things in bulk. For my example case I picked transcoding. I'm not going to say exactly why, but I do a lot of transcoding at home.
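A quick aside on that image rebuild: diskimage-builder's simple-init element is the piece that bakes Glean into an image, as far as I recall. A rough sketch, with the element names and image name recalled from memory rather than taken from my actual build:

```bash
# Build an Ubuntu image with Glean included, so the guest reads its static
# network configuration from the config drive at boot instead of DHCP.
pip install diskimage-builder
disk-image-create ubuntu-minimal vm simple-init -o ubuntu-glean

# Upload it to Glance in place of the stock cloud image.
openstack image create --disk-format qcow2 --container-format bare \
  --file ubuntu-glean.qcow2 ubuntu-glean
```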
So, back to the transcoding: I wrote a little script that maintains a pool of nodes. It launches a guest, uses Ansible to SSH into it and run HandBrake to do the transcode, and then writes the result to an NFS mount I have elsewhere on my home network, unrelated to the cloud. It turns out these CPUs are really slow: it takes about 95 minutes for a 45-minute 1080p video file, versus about 10 to 15 minutes on my desktop, which is a high-end eight-core i7. But the advantage is that I've got 80 vCPUs: I can spin up 20 of these in parallel, where on my desktop I can do one. The overall throughput is faster using the whole cloud at once than using my desktop, and it means my desktop doesn't bog down in Firefox while it's transcoding.

The other thing, which I'm not actually doing (which is why this slide is so blank), is virtualized home infrastructure, which was actually my biggest motivation. I have a lot of servers at home, and when the power or the internet goes out, everything goes down and I lose email, my IRC bouncer, whatever. But if I ran it all on the cloud, it's virtualized, the cloud is on a UPS, and when my power goes out I could just burst to a public cloud until the power comes back, and it would hopefully be seamless. This is all theory; I know the devil will be in the details and it won't actually work the way I hope, but that was the plan. For reasons I'll get to in a couple of slides (I can't remember my own slide order), I'm not running this cloud all the time.

So, installation pain points, to come full circle. Python packaging is not suited at all for system services. Clark Boylan, who's sitting right there, had a talk on how Python packaging is a bag of worms, or some other euphemism; that's not the right one. And there are all of these issues I hit because I used tarballs: all of those requirements problems. For the binary dependencies, we have bindep, a project the infra team started to document system package dependencies for Python projects, but most projects don't use it, which was an issue, so I had to figure out what to install on my own. Data files and etc files are in the Git repositories, but we have no mechanism for installing them automatically with pip. And then there's the missing dependency solver.

The other thing is debugging OpenStack. If all of my experience instrumenting the Glance code, or understanding the workflow for how metadata about the IP address gets into the guest, wasn't enough of an indication: you need to really understand the systems involved, not just OpenStack but the systems OpenStack is touching, to figure out what's going wrong when something goes wrong. I feel that's something we as a community should work on: reducing that burden and that assumption of knowledge, so that when Nova fails with a libvirt error, the error is descriptive enough, because not everyone is intimately familiar with libvirt's operation. But really, it's actually not that bad. It only took me about two or three days. I remember deploying a rack of machines for an MPI workload, setting up the scheduler and all of the machine management; that took me weeks. And over 90% of the issues I had were because of tarballs. If you follow the install guides like a smart person and use apt-get install python-nova, you're fine.
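One more aside, on the bindep point from a minute ago: a project that uses it just ships a bindep.txt listing its distro-level dependencies, and the tool reports what is missing on the current host. A small illustrative sketch; the package names here are generic examples, not Nova's actual list.

```bash
pip install bindep

# A project using bindep ships a bindep.txt at the root of its repo, e.g.:
#   libssl-dev [platform:dpkg]
#   openssl-devel [platform:rpm]
#   libvirt-dev [platform:dpkg]

# Run it in the source tree to list the system packages still missing:
bindep -b
```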
The only issue I hit that was not caused by my stupid decision to use tarballs was the cloud-init thing, and that was also just me misreading the install guide, plus my bias against self-service networks. Which brings me to the next point: networking, and Neutron, are still too confusing. That was the only part where I was completely flummoxed and had to cheat and look at how other people were doing it. We've since had some good discussions in the Neutron community about how to improve this, so hopefully in future releases it will be easier and better documented. And then, working on improving the logging and error reporting would make life great. On the Neutron side they're making progress; for the Glance thing, I wish I'd taken better notes so I could figure out how to report it. Improvements in that area mean that debugging, when things go wrong, doesn't require being an expert in three very divergent systems at once.

To pull it all together: the reason I'm not running the cloud in my closet is that it's not pleasant to have five 1U servers in your bedroom closet. Most people assume it's the noise. It's actually not the noise: at idle it's no louder than the air conditioner in my window. It's the heat. With the window closed, it raises the temperature in my bedroom by five to seven degrees Celsius; with the window open, three to five degrees. It's very unpleasant to sit in my bedroom while these machines are running, and if I'm trying to sleep it's very uncomfortable. There's also the power bill. I haven't measured it super thoroughly, but it peaks at about 1.1 kilowatts under high load, and idle draw is between 500 and 600 watts. That adds up on your power bill if you're running 24/7. And I didn't get to spend $1,300 on a vacation. There's the opportunity cost: I spent $1,300 on five nine-year-old 1U rack servers and an IKEA side table and put them in my closet. It might have been better money spent to go on vacation for a weekend.

With that, I have some links to very useful docs, and the dev mailing list for asking more questions. I was also going to plug my friend Elizabeth Joseph's book, which I haven't read, but it covers very similar steps and pain points for setting this up. So with that, I'll open the floor to questions. I don't know how I'm doing on time, because I'm bad at that, but if anyone has questions there are mics you can go to, and if not, I'll be around afterwards.

Hi Chris. So, the LackRack: if it comes in ten different colors, why did you pick yellow? Two reasons. One, Georgia Tech alum, so it's close to the school colors. And two, it's bright and stands out, so when I go into my closet to get my suit I won't kick it with the lights off.

Hey, so I'm a VMware guy and I built a lab in my house. One of the things I did was put ESXi on there and then bootstrap the control plane right on top of that, so the control plane ran inside the cloud itself and I didn't need a dedicated controller node for it. In the interest of budget, and not reliability, for a home cloud, can you do that with OpenStack? I mean, yeah, I could have easily done that. The reason I didn't is that the install guide doesn't say to do it that way. Okay. And I was pretending to be dumb. Thank you. Yeah, there we go.
After doing it with the tarballs, did you consider going back and trying the regular way, to see if you ran into anything weird there? I decided not to, because when the Pike release comes out I plan to do an upgrade from tarballs, and then I'll have a talk that will let me go to Sydney.

So I actually have a question for the audience: has anybody here ever installed OpenStack successfully in an environment similar to that, by yourself, without a package? Oh yeah, without a commercial package, directly from trunk. Cool. All right, awesome. I'm not alone. Great. Are there any other questions?

Okay, well, no, there was one question. I was just wondering: you did that for $1,500. Do you reckon you could do it for less? Were you pushing the servers that you bought, or could you use cheaper ones? A couple of Raspberry Pis? So, on eBay, this was about as cheap as the servers I could find. I found a couple for around $180, but they were single socket as opposed to dual socket, and it became a matter of capacity. I actually did the math and started graphing, and found the sweet spot for cores per dollar. You could do it for cheaper if you bought fewer than five. I set the $1,500 price point for the reasons I mentioned, it was my first desktop's price, but I could have done three servers and saved about $500. Okay, thank you. I guess that was the last question. Thank you.