All right, well, welcome everybody. This is another fine day in Vancouver. We're here to talk about a fun presentation and kind of a fun series in general, which is Couch to OpenStack. We're very happy because we've got a great room of folks here today, and hopefully you get some good enjoyment out of this. We definitely encourage people to ask questions along the way. The one thing we do ask, just for the purposes of recording everything, is that there's a microphone in the middle aisle down there. No pressure, but your questions will be recorded, which is good because it'll go up in the video. Otherwise, if you ask a question from your seat, we can repeat it, but it's ideal if you come up to the middle there. Again, we encourage questions along the way. The good news is we won't get through our slides; the bad news is we won't get through our slides. Either way, everybody wins, I think. So to get things started: the idea of Couch to OpenStack is the same idea as a couch-to-5K. How do we get started? We're at the OpenStack Summit, and a lot of folks here have probably been deep in the ecosystem for a while, but there's always a freshman class. There's always folks that need to get rolling, maybe bringing comparative learning from wherever they were in the ecosystem before. So, to give you a sense of who's up here today: my name is Eric Wright. I'm a blogger and a few other things. I'm a technology evangelist for a company called VMTurbo, and I blog at discoposse.com. I'm pretty easy to find that way, and I'm @discoposse on Twitter. And if you had any doubt, our pictures are on the slides too. My name is Melissa Palmer. You can find me on Twitter at @vmiss33, and I also blog at vmiss.net. If you wanna take pictures along the way, don't be afraid to tag us, because we love pictures of ourselves while we're on stage.
It's a rare thing because we can't take them ourselves while we're up here. Okay, so let's go over what we're gonna talk about today in a little more detail. First, we're gonna talk about getting started with OpenStack and some challenges people face at the beginning. We're gonna talk a little bit about OpenStack distributions. We're gonna briefly cover some of the key project topologies, go over something called the OpenStack cookbook lab, focus a little bit on Nova and Neutron, which are two of the key components of OpenStack, and then go over some online resources to help you after you leave the room. Our goal today is to coach you from zero to hero on OpenStack. Outside of here, there are a lot of other resources. One that I shamelessly like to promote is a course I've done for Pluralsight, if anybody is already familiar with them. If you're a vExpert or Cisco Champion, there are a lot of community groups that give out free access to Pluralsight, or they have a trial available. I did an Introduction to OpenStack course there, which is about a two-and-a-half-hour walkthrough, a little deeper dive than, unfortunately, we can get into in 40 minutes. It was for the Havana release, but it's still valid, because a lot of it covers the general projects and the concepts behind them. The reason we're talking about Couch to OpenStack is that even though we're all smart folks, we've maybe worked in virtualization, in networking, in all these areas for a long time, and we think, oh, this is gonna be easy, just like when I learned VMware vSphere, or learned Citrix, or learned other technologies. I'll just go read a blog and figure out how to do it. So you go online and you wanna learn about OpenStack. But OpenStack is a lot of things; it's even hard to describe what it is sometimes.
So quite often the journey you get brought into by a quick little article on how to learn OpenStack looks something like this. Very simple two-step process: step one, draw some circles; step two, draw the rest of the owl. We've all been through this. I wanted to get into deep Docker networking, so I found an article, and it skipped about 85 middle steps and got right to "hey, it works," and I never got there; it took a while of practice. OpenStack was the same thing for me. I went through the install guides repeatedly and kept finding errors. It was a real challenge. We're not alone in this journey. I like to think I'm a smart person, but at the same time you can only be as good as the documentation and the guides that bring you through the process. This is a common experience you'll find on Twitter, and it's a fun interaction. Just imagine, as a teacher or an OpenStack advocate, seeing someone say something like "perhaps the most hideous installation procedure known to man, thanks OpenStack, I've made it." You finally get to the point where you see the Horizon dashboard login screen, and it's like crossing the finish line at a marathon. It doesn't necessarily need to be that way, and that's why there are a lot of different ways we can deal with this. So let's talk a little bit about how to make this easier for you. There's something called an OpenStack distribution, and if you've been around the marketplace today, you've seen many, many options for this. Some are from vendors already in your data center, vendors you're familiar with, and some other vendors specialize just in OpenStack. What they all have in common is that they're much easier to deploy, much easier to upgrade, and they also add some secret sauce that vanilla OpenStack, when you install it yourself, doesn't quite have. So distributions are a great place to get started.
So, common free platforms. Let's say your budget is zero, like mine was when I started with OpenStack. What's your favorite flavor of free Linux, Ubuntu or CentOS? Each of them has its own free distribution: Canonical's OpenStack on Ubuntu, or RDO on CentOS. If you look at some blogs, yes, they're a little difficult sometimes, but there are a lot of really good walkthroughs on how to get started with either of these flavors of OpenStack distribution. So, how many of you use VMware? All right, lots of hands. How many of you have Enterprise Plus licensing? Lots more hands. The cool thing is that there's something called VMware Integrated OpenStack, which is a new product from VMware; it's their OpenStack distribution. One of the great things about it is that, A, it's free to get started with, and B, for your operations team and administrators, you'll be using a lot of the common vCenter and virtualization components that you're used to. So yes, you'll have the Horizon dashboard, but you'll also be doing these things in vCenter and vRealize. The great thing, besides it being free, is that you can eventually move it into production. However, when you do go to production there will be support costs, which are usually a good thing. You want support in production, remember that, guys. Of course, one thing that happens, depending on how new you are to OpenStack, is that you may see a lot of names up here for the different projects or programs, and there's a lot of confusion around what they are. So as we walk through this, some of you may be ahead of it and some of you may be just getting rolling, but we wanna do a quick walkthrough of the different projects within OpenStack. And again, the word "project" is challenging, because they were called projects and then they were renamed to programs, because tenants within an OpenStack cloud are called projects. So they decided, okay, good, we'll call them programs.
And then, all of a sudden, about six months ago they started calling them projects again. So you'll see those terms interchanged when we talk about the different things inside OpenStack. The core of what OpenStack is is this neat little diagram right here. You've probably seen it in most of the presentations you've been in. Unfortunately, it is a bit of an eye chart. It's best when printed on a large poster; in fact, it looks really great. I wish I had this TV in my living room. But it's a good depiction of the way the projects interact and what each of them is, as well as the different subcomponents. We aren't gonna start here, because it's kind of a gnarly way to get rolling, so we're gonna quickly walk through what each of them is. Yeah, so let's go through the projects step by step and talk a little bit about them. First, Keystone, the identity service, and I love how some of the names have great little puns in them. Basically, Keystone boils down to authentication and authorization: who are you, and what can you do inside of OpenStack? Now, Keystone is really important to deploy in a highly available manner, because if you don't have Keystone, you're not doing anything else in OpenStack. One of the great features in the new Kilo release is Keystone-to-Keystone federation, which simply means that I have different OpenStack clouds and now they can actually talk to each other. It's an important initiative, especially as folks get started and want to get into OpenStack but have maybe already engaged with another cloud platform or already have another OpenStack cloud; that's gonna be an important step. Then we have the Glance service. Glance is our image service. Not images as in pictures, but images as in templates for virtual machines and virtual instances. This is another place where the nomenclature can be an adventure sometimes.
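To make "who are you and what can you do" a little more concrete, here's a minimal sketch of the JSON body a client builds for Keystone's v3 token endpoint (`POST /v3/auth/tokens`). The user, password, project, and domain names here are made-up placeholders for illustration; nothing is sent over the wire.

```python
# Sketch of a Keystone v3 password-authentication request body.
# All credential values below are illustrative placeholders.

def build_token_request(username, password, project, domain="Default"):
    """Build the JSON body a client POSTs to /v3/auth/tokens."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scope the token to a project (what OpenStack also calls a tenant),
            # which is the "what can you do" half of Keystone's job.
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"name": domain},
                }
            },
        }
    }

body = build_token_request("demo", "secret", "demo-project")
```

Keystone answers with a token that every other service (Glance, Nova, Neutron, and so on) validates before doing anything on your behalf, which is why an unavailable Keystone stops the whole cloud.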
Now, you've probably got a few different operating systems that you run on your own platforms. You've got Ubuntu, maybe some Windows boxes, maybe CentOS, maybe some other operating system of choice. The important thing about Glance is that there are a lot of cool ways you can deal with that. Glance itself is how we store and manage our images; it's the actual registry of all those images. An image can be shared globally, so you upload it once and share it among all your tenants, or, if you only have one tenant, that's nice and easy. The good thing is you can also do per-tenant images. Let's say you have a development group and you want them to have all sorts of fancy images over there. You give them fancy images, and you don't share them with the marketing team or human resources; maybe they don't need that. The other good thing, when you've got development teams, is that you want to enable them. That's the whole goal of OpenStack, in a big way: to empower your consumers to be able to do more with it. So they can actually upload their own custom images, which is a handy thing. You store these in different projects, and you can actually store them on different types of storage. We're gonna talk about Swift and Cinder in a moment, and of course you can also store them on the native file system right on your Linux host itself. And if you're really cool and fancy and you wanna keep them elsewhere, just in case your local instances are not necessarily reliable, or you wanna share them out from different areas, you can also store them in AWS S3 or any other object storage that has an API you can interact with. Next we have the Horizon dashboard, which is basically your GUI for OpenStack. It's where your users are gonna log in if they wanna deploy kind of a self-service application.
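The global-versus-per-tenant image sharing just described can be sketched as a toy registry. This is not the Glance API, just a little model of the visibility rule: public images are visible to every project, private images only to their owner. The image and project names are invented for the example.

```python
# Toy model of Glance-style image visibility: public images are shared
# with every project (tenant); private images only with their owner.
class ImageRegistry:
    def __init__(self):
        self._images = []

    def add(self, name, owner, public=False):
        self._images.append({"name": name, "owner": owner, "public": public})

    def visible_to(self, project):
        """Names of the images a given project can boot from."""
        return [i["name"] for i in self._images
                if i["public"] or i["owner"] == project]

reg = ImageRegistry()
reg.add("ubuntu-14.04", owner="admin", public=True)   # shared globally
reg.add("dev-golden-image", owner="development")      # per-tenant only
names = reg.visible_to("marketing")                   # sees only the public one
```

So marketing boots from the shared Ubuntu image, while the development project sees both the shared image and its own fancy golden image.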
You can perform common administrative tasks in it, but if you're not a GUI person, that's okay too, because you can do everything through the command line. Not all components are integrated into Horizon, but the things you'll need to get started with are. And as Horizon matures, it's beginning to add multi-language support. Now, the Swift system is an interesting one because it kinda stands alone by itself. It literally is its own project, and it can be treated as a product. Object storage, the way we treat it, is the same as objects and CDNs out on the web. The traditional method is to have an internet-facing service with its own public-facing switch, as an example. It doesn't necessarily have to be that way; this could be in your own on-premises deployment as well. So Swift will actually authenticate you, see who you are, what objects you've stored or want to store, and decide what buckets you have access to. Then it sends you via a proxy, which brings you into the backend where we actually store all those nifty little objects. You can choose to span it out however large or small you want, at a minimum of three nodes. You can actually do two nodes with multiple storage buckets within them. Using a ring topology, it spreads that information around and stores those objects elsewhere in small chunks. All of that chunk management and replication is handled by Swift itself. It's actually a really cool project, and you'll see there's a booth from SwiftStack, and that's actually what they do. They effectively take fairly vanilla Swift, package it nicely, give it a good management front end, and help you deploy it in your own platform or out in the cloud. The way you use this is not the way you use a normal file server, where you can modify files in place. It's more that you take a bunch of objects and store them in there as readable objects.
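The ring idea can be sketched in a few lines. This is a deliberately simplified stand-in for Swift's real ring (which works with partitions, zones, and weights, not a bare node list): hash the object's name to a starting point on the ring, then take the next few distinct nodes as replica locations. The node names are placeholders.

```python
import hashlib

def ring_placement(obj_name, nodes, replicas=3):
    """Deterministically pick `replicas` distinct nodes for an object:
    hash the name onto the ring, then walk forward for the replicas.
    A toy version of Swift's ring; real Swift uses partitions and zones."""
    start = int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

nodes = ["storage1", "storage2", "storage3", "storage4", "storage5"]
placement = ring_placement("photos/cat.jpg", nodes)
```

Because the placement is a pure function of the object name and the ring, any proxy node can compute where an object lives without asking a central database, which is what lets Swift scale out.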
You access them through HTTP or HTTPS. So when you've got your handy-dandy user, say Jens, who wants to go in and access his documents, he would do so over HTTP or HTTPS. If he wants to upload a document, he pushes it through HTTP or HTTPS into the backend store. Same thing with a read operation: it's all done over HTTP or HTTPS. The other thing you'll notice is that when we do a read, it doesn't actually remove the object. It's the same idea as general HTTP: you don't move it, you read it, and then, if you wanted to, you could delete that movie in the middle there, because we don't want it anymore and it's taking up unnecessary space. And then we have the other type of storage in OpenStack: Cinder, or block storage, like a cinder block. I always find that funny. It's very similar to AWS's Elastic Block Storage. In Cinder, you create a volume, which you attach to an instance. An instance is just a virtual machine, or a guest, or whatever you're used to calling it in another world. The cool thing is that Cinder volumes survive the termination of an instance. If you're coming from a virtualization background, you're really used to taking care of those virtual machines. Those virtual machines are super important; you have to plan for upgrading them, patching them, taking care of them. The cool thing with OpenStack and Cinder is that we don't care about those anymore. Our instances are just disposable. If I have one version of an OS and I need to upgrade it, I can just blow away my instance, deploy a new one, and connect my Cinder volumes back to it. It's a very different way of thinking for virtualization administrators, but it's another layer of abstraction. Now, one thing we'll get into when we talk about networking: there are two types of networking within OpenStack itself. There's the traditional, legacy one, which still continues to get development, and that's Nova Network.
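The "instances are disposable, volumes are not" pattern can be shown with a tiny model. This is not the Cinder API, just a sketch of the lifecycle rule: terminating an instance detaches its volumes but never deletes them, so you can boot a replacement and reattach. All names are invented.

```python
# Toy model of the Cinder pattern: volumes outlive the instances
# they're attached to, so you can rebuild an instance and reattach.
class Cloud:
    def __init__(self):
        self.instances = {}
        self.volumes = {}          # volume name -> attached instance (or None)

    def boot(self, instance):
        self.instances[instance] = True

    def create_volume(self, name):
        self.volumes[name] = None

    def attach(self, volume, instance):
        self.volumes[volume] = instance

    def terminate(self, instance):
        del self.instances[instance]
        for vol, attached in self.volumes.items():
            if attached == instance:
                self.volumes[vol] = None   # detached, but NOT deleted

cloud = Cloud()
cloud.boot("web01")
cloud.create_volume("data-vol")
cloud.attach("data-vol", "web01")
cloud.terminate("web01")           # the instance is disposable...
cloud.boot("web02")                # ...deploy the new OS version...
cloud.attach("data-vol", "web02")  # ...and reattach; the data is still there
```

That upgrade-by-replacement flow at the bottom is exactly the "blow away the instance, connect the volumes back" workflow from the talk.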
So, Nova Network is part of the Nova project itself, the compute platform, which we'll talk about in a minute. Nova Network has its own basic capabilities, which are actually fairly versatile depending on what type of cloud you wanna deploy. But for extended networking features, as well as being able to do overlay networks and such, you're gonna wanna run Neutron. Now, this one is a whole yak-shaving exercise in itself to get rolling, and that's why, as part of our Couch to OpenStack type of build, we're gonna show you what the lab looks like. It's a good way to test the waters on Neutron in a small environment, and then you can see how it works best for your implementation. Now, who here is, say, a virtualization admin in their current role right now? Okay, who's a network admin in their current role? Oh, we got a few more. Okay, well, we've got a lot of non-hands, so we're gonna assume there are a lot of development folks? Oh, okay, more. Ooh, I should have added a DevOps slide just to make everybody happy. So you may not think about networking as part of your day-to-day today, but as an OpenStack admin it transcends where we came from in regular virtualization. We've now got the ability to be tightly engaged with our network platforms at the physical layer or the logical layer, but it's done in an interesting way. There's what's called an ML2, or modular layer 2, plugin, and that allows network technologies to be flexible in how they interact with OpenStack itself. Neutron supports what traditional Nova networking supports, which is local and flat networks. You can also use VLANs to give some L2 boundaries in your OpenStack cloud, and then you can also use overlay networks, which include GRE and VXLAN. The good thing about this is that you can actually take your existing physical topology and extend it into your Neutron platform and all across your OpenStack cloud.
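The network types just listed differ mainly in whether they carry a segmentation ID and what range it can take. Here's a small sketch of that rule; the ranges come from the underlying protocols (the 802.1Q VLAN tag space and VXLAN's 24-bit VNI), and the function itself is just an illustration, not Neutron code.

```python
# Sketch of the network types Neutron's ML2 plugin can manage and the
# segmentation-ID ranges that go with them (local/flat need no segment).
SEGMENT_RANGES = {
    "local": None,
    "flat": None,
    "vlan": (1, 4094),           # 802.1Q VLAN tag space
    "gre": (1, 2 ** 32 - 1),     # GRE key is a 32-bit field
    "vxlan": (1, 2 ** 24 - 1),   # VXLAN VNI is a 24-bit field
}

def valid_segment(net_type, seg_id=None):
    """Check whether a segmentation ID makes sense for a network type."""
    rng = SEGMENT_RANGES[net_type]
    if rng is None:
        return seg_id is None    # flat/local networks carry no segment ID
    low, high = rng
    return seg_id is not None and low <= seg_id <= high

ok = valid_segment("vlan", 100)
```

One practical consequence of those ranges: a VLAN-based cloud tops out around 4,094 segments, while VXLAN's 24-bit space gives you about 16 million, which is a big part of why overlays matter at scale.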
It's an important piece as we think about how we use OpenStack versus how we did traditional virtualization or standalone servers. You're not necessarily gonna have the old-school "I know there are four servers in this rack, so there are four network addresses and it's all in one VLAN." It's much more flexible, and Neutron gives you that flexibility. So next we have our compute layer, or Nova. This is the platform that our instances, our virtual machines or guests, run on. We boot these from Glance images, and the cool thing is we can pretty much support any hypervisor you want. So if you're running VMware today, that's great, you can support it there, but if you wanna look at KVM, you can do that too, and they can all live in harmony in the same OpenStack cloud. The only thing is you do need a different Nova controller for each type of hypervisor, and this is what creates and deploys your virtual machines for you. There are other hypervisors too, but we kinda always pick on the top four. So, who right now is running KVM as their hypervisor of choice? All right, I'll send a note to the Red Hat folks; they'll be happy with that. How about Xen? All right. Anyone using AWS? All right, you've got Xen, you just don't realize it. vSphere? We saw a lot of VMware hands, all right, very cool. Hyper-V? Yeah, there we go, sorry, I forgot about that one. I don't think one hand went up... oh wait, one, okay, great, yay. It's not that bad, you can do it. Actually, there's a lot of support there. Microsoft has been very good about adding extended support for OpenStack because they, like most other vendors, VMware included, saw that it's important to be a part of this ecosystem; otherwise that line of business and their customers are gonna move away from them, just because they want the flexibility that's offered by OpenStack. So now we go back and look at the eye chart again.
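"Boot an instance from a Glance image" boils down to one API call. Here's a minimal sketch of the JSON body a client POSTs to Nova's `/servers` endpoint; in a real cloud the image, flavor, and network values are UUIDs you look up first, and the ones below are placeholders for illustration.

```python
# Sketch of the request body a client POSTs to Nova's /servers endpoint
# to boot an instance. The ID values here are made-up placeholders.

def build_boot_request(name, image_id, flavor_id, network_id):
    return {
        "server": {
            "name": name,
            "imageRef": image_id,     # which Glance image to boot from
            "flavorRef": flavor_id,   # instance size: vCPU / RAM / disk
            "networks": [{"uuid": network_id}],  # which Neutron network to join
        }
    }

req = build_boot_request("web01", "img-1234", "flavor-5678", "net-9abc")
```

Notice how the one request ties the projects together: the image comes from Glance, the network from Neutron, and Nova's scheduler picks a hypervisor to run it on.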
We've got a simple concept of what each of these programs is, and then we look at the interaction. Within each project, the good thing is that you can see the different components that are available. These are the individual projects, and then among all of this there's one you'll hear about as well, called Oslo, which is kind of the core set of features required to support your overall OpenStack cloud. This includes your database environment. You'll notice there's a little database bucket inside each of these; that's where it stores registry information and program information. And all of the interaction between every one of these projects is done by that handy-dandy red dotted line, and that's where we talk about APIs. APIs are very important. Now, who uses only APIs to communicate with their hypervisor today? Exactly. That's the beauty part. APIs are important to OpenStack, but they're not necessarily important to my desktop and the way I interact with it. Under the covers, of course, you're using APIs to communicate, whether it's over HTTP through Horizon or directly through the command line; ultimately that does use the API to interact. This is a loosely coupled environment, and that's an important thing, because in OpenStack, as we upgrade and move features around inside projects, they're all consistently available via the API. So you know not only the API's availability, but also its version. As additional features come up, they'll add v2 of the API, and they'll keep v1 for a while, so you get continued flexibility. It's very gentle deprecation, which is a nice piece. They've actually added a dotted release of an API with Nova; it's at 2.1 now. So you always know by the API, by the URI, which one you're addressing, and that ensures you can upgrade different programs without affecting the other ones in the environment.
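Because the version lives right in the URI, a client can tell at a glance which API it's talking to and pin itself to the one it supports. Here's a small illustrative parser for that convention; the endpoint URLs are examples of the typical shape, not real addresses.

```python
import re

# The API version is carried in the endpoint URI itself, so a client can
# pin to the version it supports. This parses it out of typical endpoints.
def api_version(endpoint):
    """Return the version string from a /vN or /vN.M path segment, or None."""
    match = re.search(r"/v(\d+(?:\.\d+)?)(/|$)", endpoint)
    return match.group(1) if match else None

version = api_version("http://controller:8774/v2.1/servers")
```

That's the gentle-deprecation mechanism in miniature: a service can serve `/v1/` and `/v2/` side by side, and each client keeps using the path it was built against until it's ready to move.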
It gives you that flexibility, versus if it was all done through straight code and treated effectively as one big bucket of code; then you've got that really tightly coupled environment, which has high risks. We've gone through this if you're a VMware admin: as new versions come out, you say, great, I can upgrade my hypervisor. Oh, not if you're running this other product that doesn't necessarily work with it. And then we get to the lab itself. Now, because we've only got 40 minutes, we can't go through all the exciting steps to spin it up. I'm gonna do the Martha Stewart pre-baked oven and show you how it works, and we'll actually do that easily in a couple of slides. If you wanna run a good cookbook lab, which is what we're gonna use, you just need a couple of free, simple tools. Who's using Vagrant today? All right, that's what I like to see; Mitchell Hashimoto will be very happy to see that. And VirtualBox: anyone using that as a local hypervisor? Excellent, all right, everybody in the row. Who's got a GitHub account? Oh wow, that's right, a lot of developers, this is cool. Now, you don't have to be a GitHub user, but you do pull the code off of GitHub; it's freely available for the build of the lab itself. And as you know, we've got flexibility because we can run this on Mac, Windows, or Linux as a nested lab. Now, about the way we deployed the lab: you can use DevStack, and DevStack is very cool. I like it, I've used it, but I hit the wall at a very early point, where it was either an all-in-one node or a multi-node lab that doesn't always build so well, and there are some challenges around different feature sets within it. The good thing about what we've done with the cookbook lab is that we see all the different projects that are available to us. And again, it's a multi-node lab, so we can see true node-to-node interaction.
It has a controller, which gives you the API services. It has your Glance, your Keystone, and your Horizon baked in there. It has your database; we use MariaDB to make sure it's scalable. And we use RabbitMQ as the queuing service. Qpid is another one you'll often see in different builds, but for the most part RabbitMQ is the common one. There's a Cinder storage node, and that's where we keep our volumes; you can actually add volumes as needed. And there are two hypervisors, each of them running KVM. Those are both handled by the single controller; because we're using one hypervisor type, we only need one controller. Then we have the option to have Swift nodes, if you wanted to get really fancy and run object storage. You can very easily spin up two Swift nodes, and it shows you how to implement that ring architecture. And then the important piece, as I said, is Neutron. Beyond Nova Network, we wanna get into the higher-end capabilities, and this gives you that flexibility. So we have a Neutron node running, and we use OVS and an OVS bridge to communicate between the hypervisors. The good thing is that this is all done for you. It's as simple as git clone, vagrant up, go to the web service in about 20 minutes, and life is good. It's a very simple way to do it, and we'll give you the URLs for the code and everything as part of this. And then again, as you think about how you define your networks in a lab environment, this is kind of typical. This is what your regular lab would look like on DevStack: a very simple shared network, one range of IP addresses, every tenant gets the same single endpoint, all connected to the outside world. And that's cool, it gets you what you need, but it's probably not representative of what you're gonna see in your production environment. So you can get into slightly more complex options.
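The "git clone, vagrant up" spin-up can be sketched as the command sequence you'd run. This snippet only builds the commands without executing anything; the repository URL is a placeholder, so grab the real one from openstackcookbook.com, and the node names are examples of a multi-node Vagrantfile layout rather than the lab's exact names.

```python
# Sketch of the lab spin-up: the whole flow is "git clone, vagrant up".
# The repo URL is a placeholder; get the real one from openstackcookbook.com.
def lab_commands(repo_url, nodes=("controller", "network", "compute")):
    cmds = [["git", "clone", repo_url, "openstack-lab"]]
    # `vagrant up` with no arguments brings up every node in the
    # Vagrantfile; naming nodes lets you bring the lab up one at a time.
    cmds += [["vagrant", "up", node] for node in nodes]
    return cmds

cmds = lab_commands("https://github.com/example/openstack-cookbook-lab.git")
```

From there it's roughly 20 minutes of provisioning before you can browse to the Horizon dashboard on the controller.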
You can use multiple flat networks. By multiple flat, I mean you have separate networks, so you've got those separate L3 boundaries, but at the same time you've got the ability to share between tenants. So, as you can see in the middle there, a tenant can have access to both networks. The more complex option, and what's actually in the cookbook lab itself, is a shared external network with nested internal networks that are available to the tenants. This is a more common distribution; you'll see it where you want east-west traffic between the different nodes. Again, as you're getting started, a lot of this is gonna be like, "I don't even know what this means," and it doesn't necessarily matter. But as you get going, the good thing is that you'll see it in action, and there's actually a very nice browser inside the network section that shows you what the logical topology looks like as you deploy your nodes. It'll show you your instance, where it's connected, and where it's routed. So let's talk a little bit about online resources and where you can get started. First, we have the OpenStack documentation, and it's gonna sound a little silly, but RTFM totally applies here. The documentation is updated nightly, just like the code, and it's a really good resource. If you look, there's a bunch of different guides they publish: the install guides, the operations guides, the high availability guide, security, architecture and design. And you can find all the OpenStack cookbook information at openstackcookbook.com. Now, our advice to anyone getting started would be to look for the guide that matches what you understand the most. I'm more of an architecture and infrastructure type of person, so I started with the architecture and design guide, and that was a good way for me to have everything make sense. It all related to things I already understood and did on a daily basis.
And the architecture guide is very good because it's written by folks in our community. In fact, some of them might even be here today; if not in this room, they're definitely out on the floor somewhere. The good thing is that all of this content is created, contributed, and maintained by all of us. You don't necessarily have to do it yourself; you can consume it as needed. The documentation is updated nightly, just like the regular OpenStack code is. So these documentation sets are all updated on the fly as people notice things, like, "hey, I noticed on page 325 there was a missing period." You can put in a request, a Gerrit request, to get that fixed. The beauty part is that that secures your ticket for the next OpenStack Summit. That's how cool it is: you can literally commit anything and get a $900 ticket. How awesome is that? I literally committed a capitalization error, and it shows up technically as a commit. It's interesting that you can do that, and it's a good way to contribute to the ecosystem if you choose to. Again, it's not necessary, but it's a nice way. Maybe you're not a coder. I code only because I have to, not really for the love of the code, but I do enjoy helping with documentation and training, so that's a good way for me to contribute. And then, of course, the docs aren't only available as HTML; you can also download them as PDFs, so you can just render one as a PDF. Each one is dated, so you know which day you got it, because someone will say, "your guide looks different than mine." Well, it's all right; they're date-stamped, so you get a sense of that, and you can actually go back into previous releases if you want. We had a great keynote where we talked about what's going on with Comcast and eBay and PayPal and all these big companies. PayPal is doing something neat, but they're doing it on Icehouse, and we're now on Kilo, which came out a couple of weeks ago.
Juno is in between. So you may run into different iterations, and you'll get different versions, and you can go back through those guides and get that information. Then you can wiki all the things. Every OpenStack program has its own wiki. We just went over some of the core projects today, but let's say you're really interested in OpenStack Ironic, which is the program that deploys instances onto bare metal instead of using a hypervisor. There's a whole wiki on Ironic. It'll tell you the history, how it works, what's changed in each release, and it's a really good way to start getting information on it. There are development wikis for all the things that are going on and constantly changing. One thing that's also happening this week is the Liberty Design Summit. Besides celebrating Kilo and getting up to speed with it, people are talking about what features should be included in the Liberty release. As that information becomes available and gets decided on, you can look at the Launchpad links and the Etherpads for all the notes from the design sessions. So, for the developers, and we've got a big development community in here: who's actually developing in order to contribute to OpenStack itself? All right. And who's a developer that wants to consume OpenStack and use it as their platform of choice? All right, a few more hands, of course. Again, this is the focus of the OpenStack Summit and the entire ecosystem. We wanna provide services to the community, to the consumers of the service. The reason it's growing like it is is because we've got this need: developers need flexibility, they need API-accessible information, they need fast spin-up and tear-down, whether it's via command lines or APIs or their own SDKs of choice.
So, like I said, we definitely recommend the OpenStack cookbook; that's the build we use. And if you follow what we do, our Twitter feeds are usually alight with other people sharing good information, and if you wanna have a cool, relaxed-looking cloud like that guy over there, that's a good place to go. Follow what we do and go to openstackcookbook.com. If you really wanna get fancy, you can go over to room 106, where Cody Bunch, who's one of the authors, is helping out with vBrownBag. We actually ran the Couch to OpenStack series with vBrownBag, which is an open, free training group that we work with; we just do online WebExes. That, again, is a great place to start. If you search for "couch to OpenStack," you're gonna find some of those videos. We're gonna try to rerun the series, because, while I feel bad saying it's training wheels for OpenStack, that's really what it is. It's a getting-started guide. We're not saying you can't figure it out on your own, but why should you have to? Why should you have to go diving through pages and pages of reading when we can walk you through it? And we've got a great community of folks that are happy to interact and answer questions for you. Again, we're available on Twitter at @vmiss33 and @discoposse. We'd love to help you through your journey in doing the best you can inside OpenStack. I know we've hopefully got some questions in here, because we wanted to find out where people are, what's important to you, and what you're learning with OpenStack. It's one of those midday sessions; everybody wants to get to lunch, I know, it's all good. So, who has actually thought about taking formal training through, say, something like Mirantis or Canonical or the other options? Excellent, okay. Now, the good thing as well, when we talk about the different options inside OpenStack, is that there's a training area.
If you just go to openstack.org/training, there are both free and commercial training opportunities there. If you want to contribute to training as well, if you find it really cool and interesting and want to give feedback, there are lots of meetups worldwide. Take a look at meetup.com and you'll find other groups of like-minded folks in your city, or at least not too far from where you are. Coming to the OpenStack Summit is one of the greatest things, because this is why we enjoy it: we get to meet our peers. You see the code that's being written, and you get to see the people who are up on stage telling you about it. So again, while you're here taking in your journey, you're probably overloaded. It's Thursday; everybody had a good night at the Cisco or HP parties the other night, and we're happy with that, but everybody's a little tired. So take this in, consume it. All of these videos are available online. Watch our Twitter feeds, like I said, and our blogs; we're going to share how we're going to help you build that Couch to OpenStack program, and as we get into the further work we're doing on that, you can watch and feel free to interact and email us as needed. So, any questions? Oh, all right, we have a question. Yeah, if you don't mind going to the microphone, that'll be super. Thank you. I have experience with front-end development engineering and I want to work on Horizon. I have little or no experience with OpenStack. What's the quickest way to start working on the user interface while worrying as little as possible about the OpenStack install? What's the quickest path to just getting it running?
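As background for the Horizon question: Horizon is a Django application whose navigation is assembled from pluggable dashboards and panels. The stdlib-only sketch below merely imitates that registration pattern so you can see the shape of a plugin without an OpenStack install; the real classes live in the `horizon` package (`horizon.Dashboard`, `horizon.Panel`), and all names here are illustrative, not Horizon's actual implementation:

```python
# A simplified, stdlib-only imitation of Horizon's dashboard/panel
# registration pattern. Real Horizon panels subclass horizon.Panel and
# are wired into Django apps; this only shows the overall shape.

class Dashboard:
    def __init__(self, name, slug):
        self.name, self.slug = name, slug
        self.panels = {}            # slug -> panel instance

    def register(self, panel_cls):
        """Class decorator that registers a panel with this dashboard."""
        panel = panel_cls()
        self.panels[panel.slug] = panel
        return panel_cls

class Panel:
    name = "Unnamed"
    slug = "unnamed"

project = Dashboard("Project", "project")

@project.register
class InstancesPanel(Panel):
    name = "Instances"   # what would appear in the navigation
    slug = "instances"

print(sorted(project.panels))  # -> ['instances']
```

The takeaway is that adding a panel is mostly declarative: you define a small class and register it, and the framework handles rendering it into the navigation, which is why front-end contributors can get productive in Horizon quickly once a working lab is available.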
The OpenStack cookbook lab that we use is probably the fastest path to get there, because not only does it give you the ability to muck about with the instances themselves, treating it as a lab, but you have access to a working code set right there. The good thing is that if something goes horribly awry, you can tear it down and rebuild it in about 20 minutes. So you've got full project access. Of course, if you go to the GitHub repository for Horizon itself, that's available there, but the cookbook is a good lab. Horizon is Python and Django, very common tools for modifying it. Follow the Horizon developer wiki, because that'll guide you through both the wiki and Launchpad, and you're going to find other folks who are contributing actively to the ecosystem. They're very, very happy to bring folks on board and help you do what you need to do to contribute back and make what you want out of the Horizon dashboard. Because not only are you going to get value in what your organization wants to do, but we'd love to see that code come back upstream. It's a fun feeling, and it's good for everybody when we get enhancements that way. Another good point about the OpenStack cookbook, besides the fact that you can just go download the code and get it running, is that the cookbook itself will tell you exactly what's in that code and exactly what you need to do if you want to open up the book and go step by step and do it yourself. Yeah, so if you want the full Martha Stewart pre-baked oven, you can do that in one instance, and if you want to spin up the lab, you can literally walk through it step by step. It's very, very cool. Any other questions? Oh, y'all are a quiet and hungry bunch. Okay, well, we're going to release you a couple of minutes early, because we definitely value everyone's time. We want to thank everyone for coming today. Yes, thank you very much for coming.
Feel free to tweet us if you have any questions or want any more information about anything we talked about today. Thank you.