Okay, and so it begins. Welcome everybody to the OpenStack Summit here in Austin. This is very, very cool. I'm very pleased that you've taken your time to be with me today, and hopefully we're gonna add some value. And like I said, if we don't have an opportunity to finish what we're gonna talk about today, the beauty part about this is it's very community oriented and it continues beyond. The conversation continues well outside of here. So it's called Couch to OpenStack. Who's heard of the Couch to 5K? This is the idea that you're running no OpenStack at all and you wanna get to the next point where you can actually kick the tires on it, maybe test it out, see if the knees work. You wanna do what you do when you're learning how to run. And there's no commitment to suddenly become a massive, resilient OpenStack operator. It's simply about giving you a starting point where you can begin your learning journey on OpenStack. And of course, oh yeah, who am I? My name is Eric Wright. I'm otherwise known as @DiscoPosse on Twitter. That's the easiest way to find me. And I'm a technology evangelist for a company called VMTurbo. Now, live interactive stuff here. We are going to go vote. And this is where we pray to the demo gods. So who here has heard of Vagrant? All right, we're in good company then. So I've got a little handy-dandy lab and I'm going to do what we love to do: I'm gonna type vagrant up. And like a magician, this is my Penn and Teller bit where you say, you can see there's nothing up my sleeve on either the left hand nor the right hand, but yet we are gonna build a live lab in the environment. Now the beauty part is, oh, you don't see the command line. So I can tell you it goes great, and then I'll just say we ran out of time and you won't have to listen to me, but it's running in the background. Oh no, slow internet. Oh no. Network failure. But don't worry, we don't need it to be able to go ahead.
I will be able to do some other stuff in absence of it. The whole idea of this is to show you that in the course of 40 minutes you have the ability, with zero experience, to launch an entirely functional multi-node OpenStack lab. So again, just to go over the usual thing. If you wanna get ahold of me, you can find me at a number of different places. I blog at discoposse.com, which has little to do with disco or posses. I'm the co-creator of something called the Virtual Design Master competition, which is really fun and weird. I contribute to the vBrownBag. Has anybody seen the vBrownBag? They've actually been running tech talks over here on the fourth floor, which is very cool. And of course I'm the technology evangelist for VMTurbo. I also run a podcast called GC On-Demand. You can go to gcondemand.io, and that's kind of high-level tech, maybe business-ish type of stuff. Very different topics that we cover. What we wanna cover today is the idea of the challenges of learning OpenStack. So who's running OpenStack in production today? All right, okay, we got a handful of people. Now I'm not gonna tell your boss that you're here and you don't know how to run and you need help. It's okay, there's nothing wrong with that. Believe me, I ran vSphere in production way before I should have. That's how it goes. We do it out of necessity, and then we go back after the fact and say, all right, it worked. Let's go find out why. We're gonna talk about OpenStack distributions, because that's half the battle: figuring out which one to go with. We're gonna talk about the projects. Apologies, I'm Canadian, so you have to pardon my strange Canadian accent when I talk about projects and processes, and I say "aboot" apparently. I don't know that I do, but that's what I'm supposed to say. We'll talk about the OpenStack Cookbook lab. That's the basis for how you're going to go through your learning journey. And then of course we're gonna cover a couple of the core projects.
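For context on what a vagrant up like that is doing under the hood, a multi-node OpenStack lab boils down to a Vagrantfile along these lines. This is a hedged sketch: the box name, node layout, IP addresses, and provisioning script paths are illustrative assumptions, not the exact cookbook lab.

```ruby
# Minimal multi-node lab sketch: one controller, one compute node,
# each on a private management network, provisioned by a shell script.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # assumed base box

  {"controller" => "172.16.0.200", "compute1" => "172.16.0.201"}.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip   # management network
      node.vm.provider "virtualbox" do |vb|
        vb.memory = (name == "controller" ? 4096 : 2048)
      end
      # Per-node install script (hypothetical path)
      node.vm.provision "shell", path: "scripts/#{name}.sh"
    end
  end
end
```

With a file like this in place, vagrant up builds every node, and vagrant destroy -f tears the whole lab back down, which is what makes it safe to experiment.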
One of them is Nova, and Neutron, and the reason why we spend a little focus there is because we wanna relate it to what you're doing today and give you a sense of the first sort of battles you're gonna fight when trying to learn how OpenStack is gonna apply to your environment. And then we're gonna point you to some online resources. So again, the goal here is that in a very short amount of time, if not here then very shortly after, we will get you from what we call zero to hero. Don't worry, you're not a zero now, and you actually won't be a hero, but at the very minimum it's about giving yourself an opportunity to test things out without having to shave yaks in order to do it. I run a course, or I've created a course, called Introduction to OpenStack on Pluralsight. Who's heard of Pluralsight? All right, who hasn't heard of Pluralsight? You all gotta leave. No, I'm just kidding. For everybody that has not heard of Pluralsight: first of all, I'm very pleased to have Mr. David Davis here, who works with Pluralsight, and he creates an incredible amount of content for Pluralsight. So if you aren't there to listen to me, then go to listen to him. We've got some really great courses. They're all video training online, so you can consume them at your leisure. You can even consume them at 1.5x speed, so you can listen to them just as fast as you want, and you can listen in the car, you can do whatever you want to. So anyways, you can go there and get a general view of the entirety of OpenStack over the course of about two and a half hours. And I like that it's a little older, because it's talking about a previous edition; however, most of it stands, in that things haven't changed that much. But I'm hoping to finally find time to commit and do another one, and I'm sure David's here to make sure that I don't leave the room without committing to do that as well. So the number one thing you have to think about when you're about to learn about OpenStack is: why OpenStack, right?
Why am I learning OpenStack? Yeah, I'm at the OpenStack Summit, my boss told me to come and learn about OpenStack and I don't know why. Everything that we do in technology starts with the why. Understand why OpenStack is important. It's an open source ecosystem, it's a private cloud. Why do I need a private cloud? Because I need to be able to create self-service, I need to run a multi-tenant environment. Okay, that chooses your track; that tells you how you're going to want to begin your learning journey. There are the basics, but if you're gonna be network oriented, you're gonna choose a different way to target your learning. But the first thing you have to ask yourself as a technologist is: why would I choose OpenStack? Answer that question for yourself, and in the industry we've tried to answer it for you. We talked about in the keynote today that every single one of the Fortune 100 companies is running OpenStack. Now some may poke at that and say maybe they're not all in production or whatever; the fact of the matter stands that 100% of the top companies are trying it or running it successfully. So wherever you're at today, if you're not running OpenStack, you are going to run into it in the course of your career. This is why it's important to get started. And then you say to yourself, but I'm not Walmart or eBay or a Fortune 100. But then you have to think to yourself that your bank today has more developers than the vendor that produces your hypervisor. So who's a VMware shop here? All right, who's running Citrix? It's always worth asking. KVM? So we've got some Red Hat folks. So we've got a good mix and match, I like this. The beauty part is OpenStack works for all of you. Now here's the challenge. When you wanna say, I'm gonna learn about OpenStack, it's very easy: I'm gonna go online. I wanted to learn about networking, so I went and I figured out how to learn networking.
I Googled it, and there are lots of neat courses to quickly and easily walk you through, step by step, how to do networking. This is how OpenStack learning goes. It's very simple, two steps, everybody will tell you. First you draw a couple of circles, then you finish drawing the owl. There's a lot of stuff that's missing in the middle, and it's a huge challenge. Me, as a user and a producer of content, I know I'm facing this battle all the time. Sometimes I get to, like, 201, 301 kind of content very early, and you find that you lose the folks that need a little bit of those interim steps. So it's important that you find the right resources to get you from the circles to maybe a bit more detailed circles, and add a few steps, because if you don't, what you're gonna find is that you will begin your OpenStack journey and you will be sobbing uncontrollably. That is precisely the expression I would use to describe my first run at installing OpenStack. So I actually went and I installed it, line by line, command by command, on Ubuntu using the OpenStack documentation about four years ago. I don't think I ever recovered from that. I have PTSD from doing it. It made me want to do it better, and it made the community want to do it better, and that's the beauty part. Because what happens is you get folks who will go out and give it a try, and they'll go through what I did, line by line, and they'll go through the configuration, and you're gonna say, oh my goodness, I made it. It feels like the finish line, but the truth of the matter is it's the start line, and you've gone through a lot of pain to get to the start line. If you get to the start of a race and your knees are already sore, do you think you're gonna finish the race? Of course not. So that's why we wanna make the journey a little bit nicer. Luckily we've got a lot of ways that we can do this more simply, because there are distributions out there.
There are more and more places you can go, and vendors who wanna make your life a little bit nicer and better, and as a result of that, you've got choice. And when we talk about lots of options, we're not kidding, there are sometimes too many. But luckily, because OpenStack's popularity has caught on with the VCs and with companies like Cisco and VMware and EMC, what's happened is a lot of different vendors have come up with distributions and iterations of OpenStack, and they've kind of been rolled in. We've got Cloudscaling, which I listed there, but Cloudscaling is actually part of EMC now. There were four or five others that were on there that don't even exist anymore. It's not because they went away, but because they've become part of commercially offered distributions, which is pretty awesome, because that means you can literally put an OVA in your environment, run a wizard installer, and you're gonna have OpenStack. So the good thing is it's not necessary to do the yak shaving of going line by line and making it work. But the big thing is starting with, I like to say, two places. So, anybody heard of Ubuntu? I'm presuming so; your lanyard is sponsored by Canonical, who create that fine product. And of course, CentOS is a Red Hat derivative, a Fedora-family project. They're very simple, freely available, small-footprint Linux distributions, and that gives you a chance to run. And then of course, each one has its own derivative of OpenStack. The reason why it's a little different is because of the packaging. The package managers are different. So you wanna pick the one that you're comfortable with, whichever makes more sense. If you've been using CentOS today... so who's a CentOS lover today? All right, Ubuntu? Oh yeah, I'm gonna make you split onto either side of the room. You can't all sit together like this and intermingle, it's gonna cause a problem.
So the beauty part is that no matter which your distribution of choice is on the Linux side, you've got an OpenStack derivative that's going to work. The OpenStack core is identical. Like I said, it's just the package management that changes. Okay, we asked about VMware; we've got a few VMware folks in the room. So VMware themselves also create their own distribution, called VMware Integrated OpenStack. And the beauty part about it is, if you're vSphere Enterprise Plus licensed already, then you've got it. It's actually part of your ELA. You have to pay maintenance for it after a while, but you don't have to pay an upfront cost to actually bring it in. And it doesn't take a sea of engineers to make it work. It's like a lot of the other packaged distributions. And then if you're, like I said, a heavily integrated VMware shop, they've got all the different projects that tie into the existing areas within your environment. So it makes it a little bit nicer to consume if you want to keep your vSphere. You know, VIO is like Obamacare for your vSphere environment: you can have your hypervisor and you can keep it too. So there's nothing wrong with that. So you've probably seen a lot of this. You hear the phrase "big tent"? Anyone heard of that one? It's an odd thing, how we talk about the OpenStack projects. In fact, if you've gone historically through OpenStack, we learned about OpenStack projects. And they said, no, no, no, don't call them projects, because we call the tenants inside them projects. Okay, good, we'll call them programs. Perfect. Okay, no, no, don't call them programs. It's very confusing because people are getting confused again. So let's go back to calling them projects. So we're back to projects again. Don't worry, next year I'll be giving a talk about OpenStack programs, I'm sure. Now, this eye chart has become something that I keep as my desktop background, as a gentle reminder of how complex it is, and it doesn't need to be.
But it's okay, because you don't have to do all this stuff. OpenStack does it for you. What's more important when you take a look at this, and it's challenging, it's confusing sometimes: you look at the different projects that are isolated here, and they talk about what the purpose of them is, and sometimes it'll have the code name. The important thing is that you see those neat little red lines, the red dotted lines that go in between them. That's what we call the API. That is the way that they communicate with each other. So all of the OpenStack projects, or programs, or whatever you wanna call them, communicate with each other via the API. It's a core requirement of delivering an OpenStack project that it has to communicate with other projects via the API. You don't have to sit there and curl -X POST. You don't have to consume it as an API yourself, but all of the products you use that use OpenStack will. So it's just important to understand why we do that. Because that means that every single OpenStack project very simply plugs into the other ones as a loosely coupled environment. It's important when we talk about being loosely coupled, because being loosely coupled ensures that there's no SDK you've latched onto. Even an SDK is a form of coupling, and SDKs change; they don't necessarily deprecate things cleanly. So you always have an API that lets the projects speak to each other. And this makes sure of forward compatibility, backwards compatibility; we're kind of future-proofing some of our stuff. Now the big thing is figuring out what exactly every part of OpenStack is. And I feel bad, it's hard to see from the back of the room. I should turn my laptop, maybe you can see it, or I could turn this TV around. Unfortunately, it's harder to see, so hopefully you can get the angle there. So this is the list of core projects. There are six core projects. And these are important because this is most likely the most common set of projects you're going to interact with.
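To make the "everything talks over the API" point concrete, here is a hedged sketch of what one of those REST calls looks like when assembled by hand. The endpoint URL, token, and image/flavor IDs are placeholder values, the function name is made up for illustration, and the request is built but never actually sent.

```python
import json

def build_boot_request(endpoint, token, name, image_ref, flavor_ref):
    """Assemble (but do not send) the pieces of a Nova 'boot an
    instance' HTTP call, to show the shape of OpenStack's REST APIs."""
    url = f"{endpoint}/servers"
    headers = {
        "X-Auth-Token": token,        # token issued by Keystone
        "Content-Type": "application/json",
    }
    body = json.dumps({"server": {
        "name": name,
        "imageRef": image_ref,        # a Glance image ID
        "flavorRef": flavor_ref,      # a Nova flavor ID
    }})
    return url, headers, body

url, headers, body = build_boot_request(
    "http://controller:8774/v2.1", "TOKEN", "demo-vm", "img-123", "flv-1")
print(url)
```

Horizon, the CLI clients, and every third-party product end up emitting requests of exactly this shape; the loose coupling comes from the fact that only this HTTP contract, not any shared code, joins the projects together.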
And in fact, you'll only interact with a subset of them, because for the most part, you need very simple things. You need to be able to compute, right? We talked about it today as being able to compute; to be able to speak to each other, that's networking; and you need to be able to maintain some data, storage. So we have our Nova project. Nova is the compute project, and that is how we're going to actually run compute resources and present resources. Nova itself is not a hypervisor. We'll talk about that in a couple of minutes. Neutron is the networking project. It's confusing, and this is why I talked about Nova and Neutron before, because there actually is networking built into Nova. You don't have to necessarily run Neutron. It has more advanced scenarios and capabilities that may be beyond what you need, and so a lot of folks have actually run Nova networking at its core, and that's all you need. We have two different storage environments. So who's got block storage running in their environment today? All right, most likely everybody. Any object storage? All right, a nice mix and match. So what's important about this is that you have both sides of that covered. Cinder, aptly named, is the block storage project. Cinder runs as an intermediary layer in order to communicate with your storage platform. So in fact, whatever you're running today, whether it's NetApp or EMC, or Ceph or Gluster, or just good old-fashioned Linux running raw block storage, you can actually just simply run it. And then Cinder becomes the communication layer and the API layer to communicate with it. Why it's important for you to learn about it is to figure out what the roles of Cinder are and what actions you take. And then, of course, on the object side, for folks that use object storage, it's exactly that: an intermediary layer to communicate with an object storage platform, which can be an existing environment.
There are ones that are popular, again, like Ceph, Gluster, and other ones that you can package and build. But we don't want to see it. As soon as you think about that, you're like, oh, I already don't know what half of this stuff is. I just barely learned what Cinder is, and now I've got to figure out what's underneath it. This is why it's challenging. And then, of course, Keystone. Whoever's seen the movie Kindergarten Cop will get this joke. You ever heard "Who is your daddy, and what does he do?" When Arnold asks that, that's what Keystone is. First of all, who are you? I am X3-228. And then, all right, X3-228, what are you allowed to do? So it is both authorization and authentication. And then Glance is the template, or image, environment. The reason why I say template or image: I'm supposed to say it in the reverse, but that's really not how people consume it. Most people are running VMware templates today. They're running Hyper-V templates today. You're already thinking in terms of templates. They're called images because of, of course, AMI, Amazon Machine Images. It's meant to be an image. And it literally is that, but we've just grown up calling them templates if you come out of a traditional virtualization environment. Now, this is the fun part. Inside the OpenStack project navigator (if you just go to openstack.org, there's a ton of great documentation there, and this is one of the things that's there), the very same thing that gave us those six blocks also goes through all the different ones over there. And I can't even... that's an eye chart. That's pretty badass. All right, but the number one thing you're going to figure out is Horizon. And that is the dashboard environment. Now, while it's not considered a core project, because it's not actually required to do anything, OpenStack was built to be consumed by other computers. That's really what it was meant for. Most of the services that are built like this are.
So Horizon is the human side, the dashboard, where you can log in and you can pull down some content and create environments. And that's what that is. So if you look through the project navigator, you can see what the code names are and what the actual service is. So Telemetry is about capturing metrics for utilization and performance. We have Orchestration to be able to create templates and runbooks in order to deploy environments and redeploy them repeatedly. All of this you can go through, and pick and choose which ones you want to dive into. And then underneath each one, if you actually go to the more details on any of them, it gives you a full set of the details. The reason why the details are important, sorry, I'm going to take you back: if you take a look in the middle, there's a maturity score there. Maturity tells you not how long it's been there but, rather, how many requirements it meets in order to be called mature. And there are actually eight different requirements; the age is beside it. Some of the projects are fairly new, so they've only been around a couple of years. So this is the maturity matrix that tells you, for each project: does it have its own guide? Does it have its own SDK? Does it have a well-documented API? Is it considered stable? These are the requirements in order to get that little checkbox and turn it green. I guess it doesn't get extra green, because some of them are six of eight but they're actually still green. So it's important, as you drill down through each of these projects, to take a little time and get a sense of what it is, because the beauty part is there's a rich amount of information and it's freely available. Free as in beer, as in kittens. Kittens aren't really free, they're only free when you get them; then you gotta feed them or they get really complainy.
And again, you can drill down even further into each project and see it has links to the documentation, links to the development wiki, links to issues that are registered, and it actually has the PTL, or Project Team Lead, and that tells you who's actually in charge of shepherding all of the content and the updates that go into the program. So again, who is your daddy and what does he do? Authentication: who are you? Now, you do this today; we take it for granted because most environments come with it baked in. They have an SSO product, or they have just a general database that keeps track of user information. So this is what Keystone is. It uses role-based access control to deliver rights within the environment. So with Keystone, some people can have storage access, some people don't need to, some are only allowed to run instances. It's very granular how you can actually deliver this. However, most people don't really get too deep into it, but at the very least: which project are you allowed to have access to? Are you administrative in that? And then you can kind of sub it down below that. It's also a service catalog. Sorry, I'll just take one second. A service catalog, not in the sense of self-service, but literally a catalog of the services that are available in your environment. So this is the registration for services. Say you have 12 Nova servers and you have three Cinder servers; each of those endpoints is gonna be registered with the service catalog. So you can query the service catalog and ask, where are my Nova endpoints? And Keystone is the one that keeps track of that information. Yes, sir. Good question. And since I didn't make you get up and walk to the microphone, I'll quickly repeat it: especially in a service provider environment, or any existing environment, there are existing role-based access control, or authentication and authorization, services there. How does this interact?
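The two jobs Keystone is doing here can be pictured with a toy sketch: a role check ("who are you, what may you do") and a catalog lookup ("where are my Nova endpoints?"). All the names, roles, and endpoint URLs below are illustrative placeholders, not real deployment data, and this is nothing like Keystone's actual implementation.

```python
# Toy stand-ins for Keystone's role assignments and service catalog.
CATALOG = {
    "compute": ["http://nova1:8774/v2.1", "http://nova2:8774/v2.1"],
    "volume":  ["http://cinder1:8776/v2"],
}
ROLES = {"alice": {"admin"}, "bob": {"member"}}

def endpoints(service_type):
    """Answer 'where are my Nova endpoints?' from the catalog."""
    return CATALOG.get(service_type, [])

def allowed(user, required_role):
    """Crude role-based access check: does the user hold the role?"""
    return required_role in ROLES.get(user, set())

print(endpoints("compute"))     # every registered compute endpoint
print(allowed("bob", "admin"))  # False: bob only holds 'member'
```

The point of the sketch is that clients never hard-code where Nova or Cinder lives; they authenticate once, then ask the catalog, which is why you can add a thirteenth Nova server without reconfiguring anything that consumes it.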
It has the opportunity to interact via LDAP. If you're in an Active Directory environment, it can actually speak Kerberos and talk to it. So it actually uses time-based ticketing, just like Kerberos tickets, which is cool. And there are other third-party integrations to be able to tie into the environment you've got. The Active Directory support was kind of shaky, in fact, but the folks at GoDaddy did a lot of work to solve it for their own environment, and that's the beauty of open source. They had a problem that you were going to have, and you didn't even know you had it yet, and they fixed it and they pushed that back into the core. So everybody now gets to enjoy the riches of their pain, and it's nice to be able to do that. So there are plug-ins to be able to integrate into the environment that you're running today, which is very, very cool. And of course, Keystone to Keystone. What does that mean? You can take one Keystone environment and have it mixed with another Keystone environment. It sounds like that's rudimentary, right? But in fact, two full releases ago that wasn't even really available. So if you had an OpenStack environment and then you acquired a company that had an OpenStack environment, you now had two OpenStack environments. It's like the old joke: I had a problem, and I used regex to solve it, and now I have two problems. This is what Keystone was. There was no way to integrate them together. So now you can actually merge them. You can use a singular database that's common amongst them. It's very, very cool how you can aggregate some of the content. So, your Glance environment. When I talked about images, and templates as we typically know them (you know, I've got an Ubuntu machine or a CentOS machine), Glance just stores all that information. It stores and manages guest images.
What's cool about that is that you can actually manage them globally, as an administrator of the environment, or if you have tenants who wanna create their own, they can create them and share them amongst each other. And it gives you an opportunity to save on work, because you may have a development team that builds their own images, and they can actually share them out amongst other development teams, which is pretty cool. And the beauty part is they can upload their own custom images. You don't necessarily have to do all of the work for them if they wanna do something kind of like we do with containers, right? Where I wanna build some stuff, and then I wanna put some stuff on top of it, and then I wanna load that back up and have that be my new baseline. Same thing here. You can have a pre-packaged image that has a bunch of things on it. You make it generic so you can launch it easily, and that's possible within there as well. And it stores them in multiple environments: just a raw file system; you can store them via Cinder on your block environment; you can store them in an object environment. And in fact, you can actually store them on S3. So you can use OpenStack to control and manage content on Amazon S3, which is pretty cool. If you choose to store them out on S3, it pulls down the image every time. So there we go. That's what that is. I was like, why isn't it in the bullet? So again, Horizon is your self-service web portal. This is how most people are going to consume it. So who here uses an API to talk to their environment today? Oh, there's a couple folks. All right, good on you. But for the rest of us, we're gonna wanna use a web view of it. And what it gives you is a very simple dashboard that actually lets you take a look at how many compute environments there are, how many are utilized, and everything is per tenant, which is cool.
So you could have five different tenant environments. You can do it by department or however you want. And they're gonna be able to see their view, and then you see a global view of the entire thing. So it allows you to do the traditional, common administrative tasks. You don't need it. OpenStack runs fine without it if you consume it via the APIs or via other third-party services. And what's cool is that you actually see a lot more stuff happening in multiple languages. We, as North Americans, and I'm gonna lean on us for this one, traditionally think that everyone speaks English with perfect diction and that we only write English. The only time I wanna make fun of someone's language is when you say "on-premise." That's the only thing I'll ever make fun of in someone's language. There's no such thing as on-premise; it's on-premises. Other than that, I wanna respect everyone's language and their ability to localize to their environment. So you actually can, through the dashboard, see a multitude of different languages available today. And that's all community-driven. Remember, there's no single company behind it, so that's pretty cool. So, object storage environments. For those that don't normally consume stuff by object, just imagine a very simple example. We have an internet-facing environment which has a public switch. We hide some content behind it, but you have to have authorization nodes and proxy nodes out in front. This can be your own environment as well. And then behind the scenes, you've got a bunch of storage servers, and it's a distributed file system that's spread out throughout. And the beauty part is that, for the objects, you don't know where they are. It uses a really cool ring topology to actually distribute the file system throughout all of the nodes. And then when you go to request it, the proxy server knows where to find it and where to pull it from, and that mapping is what's stored.
So it's eventual consistency: it writes it once, then it writes it to a bunch of other places, depending on how you set the resiliency for your ring environment. But ultimately, what you have behind it is a bunch of different objects. Typically, we would see this for things like web objects, graphics and movies and such. And then when you consume something... you've got Jens over here; he says, I wanna take a look at something, or I wanna publish something. I'm gonna publish a Photoshop document, so I'll push it up into the object environment. When you read something, you do the same thing. You read it; it stays in the original place. These are HTTP CRUD operations, so they're using the CRUD verbs. And then of course, for deletion, same thing. When you wanna delete, you run a remote delete, and just like any other good environment, it deletes it and there's garbage collection. So if you accidentally delete something, you can often go back and retrieve it. Block storage environment: you've got this today. You know what's cool about this? If you are a shop where maybe you run a lot of databases and web environments, where you test things out and you wanna try an upgrade, here's an idea: you can have a front-end server which has a database environment, and then a bunch of file systems underneath it with objects for web actions. And then you can take that front-end environment and wipe it out and test it out. I gotta switch from CentOS to Ubuntu? I'll just simply attach the storage back and try it out to see how it works. And that's available to do within Cinder. Now the fun part. So who's a networking person here today? Oh boy. All right, I should've spent more time on this. I apologize. So Neutron runs using a modular layer 2, or ML2, plugin environment. What's cool about that is that it used to be baked into the code. So if somebody came along, if a network vendor wanted to write some code for Neutron, they would write it into the Neutron core. That's a bad idea, right?
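The ring idea behind that "you don't know where the objects are" behavior can be sketched in a few lines. Real Swift maps partitions to weighted devices across zones and regions, so treat this purely as an illustration of deterministic placement: the node names are made up, and the algorithm is a toy.

```python
import hashlib

# Illustrative storage nodes; real Swift rings work with partitions,
# zones, and device weights rather than whole objects and hosts.
NODES = ["storage1", "storage2", "storage3", "storage4"]

def place(obj_name, replicas=3):
    """Hash the object name to a starting node, then walk the ring to
    pick distinct replica nodes. Any proxy can recompute this, so no
    central lookup table is needed to find an object."""
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    start = h % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(replicas)]

print(place("photos/cat.jpg"))
```

Because every proxy node computes the same answer from the same hash, reads and writes can land anywhere in the front tier and still reach the same three replicas, which is what makes the eventual-consistency story workable.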
Because you wanna create this separation. We want that same loose coupling capability that we had with the rest of it. So they've done this: there's a modular layer 2 environment, and there's a spec by which you write those drivers. Your network provider today, most likely for every single person in this room, has a modular layer 2 driver for Neutron. That's cool because that means you can support whatever topology you want that your environment can support. Most commonly, when you're kicking up, the first environment is gonna be a local environment. Maybe you have a flat topology. If you're running some kind of an overlay, you can go a little bit more advanced. We support VLANs, and I'm assuming that most folks are running VLANs. It's hard to believe that not too many years ago, nobody had VLANs. They just had one big VLAN and everybody was on it. Now we've learned to create VLAN isolation, originally just for network chattiness, and it's kind of been wrapped into a security idea as well. And in the overlays, on the Microsoft side we have the GRE spec, as well as VXLAN if you're on a VMware platform. And I failed, I should have put in a third: Cisco contributes theirs as well, oh goodness sakes. They actually do VXLAN as well as their own, and the beauty part is, right out of the gate, they just plug in the ML2 driver. So again, not just at the software tier but at the hardware tier, you're now able to plug into your network environment, and you don't have to necessarily configure it. It's just fully available to you. That's the beauty part. As an OpenStack administrator, we've done all the work for you in the community to make sure that you can just communicate with it, and you can request networks, request IP addresses. It's all encapsulated within the OpenStack environment, but advanced features can go into your existing topology. The compute environment: again, there is no OpenStack without compute.
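That pluggability shows up directly in Neutron's ML2 configuration file, where type drivers (flat, VLAN, GRE, VXLAN) and mechanism drivers (Open vSwitch, vendor gear) are just list entries. This is a hedged sketch of an ml2_conf.ini; exact driver names, sections, and ranges vary by vendor and OpenStack release.

```ini
[ml2]
# Which segmentation technologies this deployment can hand out.
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
# A vendor's ML2 mechanism driver would be appended to this list.
mechanism_drivers = openvswitch

[ml2_type_vlan]
# Physical network name and the VLAN ID range it may allocate from
# (illustrative values).
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
vni_ranges = 1:1000
```

The practical upshot is the one from the talk: swapping or adding a network backend means editing a driver list, not rewriting Neutron.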
So typically we're gonna see a lot of folks that are running KVM, we're gonna see Xen environments, we've got vSphere, we've got Hyper-V, we've got a variety of different options, and we've even got bare metal. You can actually run it on top of bare metal, which is kind of cool. Now, all the compute platform does is give you a way to run your guest instances, and it boots the instances based on a current block image or an object image. And again, Nova is not a hypervisor itself, so it requires a hypervisor. When we talk about vanilla OpenStack, and you're gonna hear the phrase vanilla OpenStack quite often, it just means that it's usually Canonical Ubuntu with the straight OpenStack main trunk code running on top of it. That's why they call it vanilla, because that's typically where people start. So Nova is the management tier, and that gives you the ability to interact with the hypervisor and do operations: launch an instance, quiesce an instance, destroy an instance. Argh, kill, destroy. Those are the fun verbs, and we want to break them out that way. So when you look at that environment now, all of a sudden you're like, ooh, okay, now I can start to see what the different project topologies are and where they make sense. Oh yeah, question from the audience: do the last two on that list currently require a separate Nova? Right, so if you run a vSphere environment and you've got a bunch of vSphere servers, you can actually use one Nova instance to control multiple vSphere servers. However, if you run vSphere and Xen, or Xen and KVM, you need a Nova instance per hypervisor, because the instruction set is different per hypervisor, similar to AMD versus Intel as a CPU. So the Nova driver is configured for that single Nova instance, which means each instance speaks one hypervisor's language for its pocket of hosts underneath it.
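That "one driver per Nova instance" idea shows up as a single setting in each compute node's `nova.conf`. A sketch, with values that are illustrative rather than copied from any particular deployment:

```ini
# nova.conf on a KVM compute node
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm

# A node fronting vSphere would instead carry something like:
#   [DEFAULT]
#   compute_driver = vmwareapi.VMwareVCDriver
# plus a [vmware] section pointing at vCenter. You can't mix both drivers
# in one Nova instance, which is why mixed-hypervisor clouds run one Nova
# per hypervisor type.
```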
So there's OpenStack itself, and the rest of it is able to cope: you simply have one Nova instance per hypervisor you've got, and it all shows up as one single OpenStack cloud. Now, your image templates and such obviously are gonna be specific to each hypervisor, and those are actually set by specs within Nova. So you couldn't take a Hyper-V image and deploy it onto a vSphere environment. It uses the actual specs, what are called extra specs, to identify these tags and attributes, and that tells it what can run where, so workloads get distributed based on the actual base hypervisor. Metadata? Yeah, there's a Nova metadata service; there's metadata beyond metadata. We thought only the NSA loved metadata; we love metadata. So again, when you take a look at the eye chart, it's gonna make a little more sense. Hopefully you'll start to see what those are, you'll understand the interactions, and you'll see AMQP: there's a queuing environment. Everything is based on that. If it's API driven, it has to be able to use a queue environment, because otherwise how do you actually pass a number of instructions and wait for them to complete? Now, even beyond using it as a basic cloud, you have the marketplace and the app catalog. This came out in the most recent release with Liberty, and now with Mitaka it's even more cool, because what you have is one single web page where you can go to view all these different appliances, different images, different distributions, all in one spot. So this marketplace is a community contributed environment where you can go through and take a look, and the entries are based on three different types of templates: a Murano package, a Heat template, or Glance images.
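Of those three, a Heat template is the easiest to eyeball. Here's a minimal sketch of one; the parameter name and flavor are illustrative placeholders, not from any catalog entry:

```yaml
heat_template_version: 2015-10-15
description: Minimal single-server stack, about the smallest useful template

parameters:
  image:
    type: string
    description: Name or ID of a Glance image to boot from

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
```

A Murano package wraps richer application lifecycle logic around templates like this, and a Glance entry is just the bootable image itself.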
So if you wanna go and find, say, a Glance image that's running a LAMP stack on Ubuntu, you can go through and actually take a look in the environment. You can filter by which iteration of OpenStack, which edition you're running, and it'll tell you what format it's in. And as you go through this, other people have already lived the pain you're about to live, and they've already done a lot of the early steps to help you along that road. So this is all fully and freely available online, and in fact, if you wanna contribute one, same thing: you just fire it upstream, it's all done through GitHub. So, the cookbook lab. This is the important part. So just remember, everybody, all say it together. Okay, we're gonna say OpenStackCookbook.com. Everyone together now: OpenStackCookbook.com. It's just that easy. Now, I didn't do all of the work. There are a lot of great folks from Rackspace who actually helped to contribute to this: Cody Bunch, and Kevin Jackson, whose birthday is today. So if you wanna say happy birthday, @ITArchitectKev, it's his birthday today and we're celebrating it here. And we built out this environment, so it's a multi-node environment running with Vagrant to deploy on top of VirtualBox. So if you go to OpenStackCookbook.com, it'll show you where the GitHub repo is in order to get that information. You don't have to have a GitHub account, you can pull it down without one. If you wanna contribute upstream, of course, you need a GitHub account. We encourage people to use it, find out: does it work, does it not work? No problem, we can help you with those things. So it gives you a multi-node environment with two Nova instances, each running KVM. They're both running on top of Canonical Ubuntu. It has a block storage environment. It has a Neutron networking environment, so it does have the advanced capability to do multi-tenant and nested private networks.
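For a feel of how a lab like that is wired up, here's a trimmed Vagrantfile sketch. The node names, IPs, box, and script paths are illustrative, not copied from the actual cookbook repo:

```ruby
# Illustrative multi-node lab layout: controller, two computes, network node.
nodes = {
  "controller" => "192.168.100.200",
  "compute-01" => "192.168.100.201",
  "compute-02" => "192.168.100.202",
  "network"    => "192.168.100.203",
}

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # illustrative base box

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      # Host-only management network; a real lab also wires a data network.
      node.vm.network "private_network", ip: ip
      # Each node runs its own install script, so the build happens live
      # every time you `vagrant up`.
      node.vm.provision "shell", path: "#{name}.sh"
    end
  end
end
```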
So you can actually run that, and I'll show you in a second why that's important. And all you do is spin it up: you pull down the code, you install VirtualBox and Vagrant, and you type vagrant up, and if you've got internet access (like I failed to have here), it will actually launch the environment, set up the private network and the management network, and run the environment for you. It takes about half an hour to download and deploy. The reason is that it's pulling down the core VirtualBox image and then all of the code, because it literally builds the environment live every time you run it. So if you've got a limited network package with your cable provider, don't do it over your cell phone when you're tethered, because boy, you're gonna get an AT&T bill that'll make you cry. But the topology of it is this environment here. So again, you have a controller, you have multiple compute environments (compute one and compute two), a networking agent, and a Swift environment. Now, you can choose to run Swift; you don't have to. So if you wanna dabble with object storage, you can do it: simply uncomment two sections in the code, and then you can launch a Swift environment and learn how to actually use object storage. It's very, very cool, because believe me, getting through that first step on your own is not friendly. If you wanna do networking, the beauty part is you've got the ability, like I said, to do multiple kinds of networking. Traditional flat networks are how most people run them: one flat network, everything runs underneath it, and it just distributes DHCP back to every single instance. It runs on one network. This is not how you wanna run in production, believe me, but if you're running a test and you just wanna kick the tires, it's a good way to do it. You can also get what we call multiple flat networks, so each network is shared between tenants, and each one is flat.
You can go even more complicated, where you can have nested private networks. You can actually get overlapping namespaces and IP spaces on this one, so that's cool: everybody underneath can have 10.0.0.0/24 and you're safe from that. So of course we can't cover everything here, which is unfortunate, but as I said, there's great stuff online to make it easier, and I encourage you to reach out to me again. I'm @DiscoPosse on Twitter. You can email me: eric@discoposse.com. And absolutely, if you wanna find out more on how to do this, go to these resources. There are install guides for every environment that you've got out there. There are install guides for CentOS, for Ubuntu and for Red Hat Enterprise Linux. I say every environment; those are the three most common environments. Fully documented, continuously updated and fixed. So if someone finds a problem, like, eh, this didn't work on page 38? No problem. Report it, we'll fix it. As a community, we all fix it together, which is great. There's a high availability guide. You've gotta understand where resiliency needs to live inside your OpenStack environment. It is freely available; it's like a 250-page book. You can download it as a PDF and view it on your e-reader, which is cool. There's a security guide. Security first, right? Start there. But most importantly, there's an architecture and design guide, and it gives five different use cases on how to deploy. If you're a storage-heavy environment, if you're a multi-tenant cloud provider, if you're a simple environment with a couple of hypervisors: each one tells you how you wanna configure your environment and where to start looking. All of it's freely available. And again, there's an OpenStack wiki which has lots of different content that tells you about the projects themselves. If you wanna dive a little bit more into what makes it tick, how the development lifecycle works, it's very interesting.
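Back on the nested-network point from a moment ago, the overlapping 10.0.0.0/24 idea is worth a tiny illustration. This toy Python sketch (invented for illustration, not Neutron code) shows why identical tenant subnets don't collide when each lives in its own namespace:

```python
import ipaddress

# Two tenants, same CIDR. In Neutron each tenant network gets its own Linux
# network namespace, so the identical address spaces never see each other.
tenants = {
    "tenant-a": ipaddress.ip_network("10.0.0.0/24"),
    "tenant-b": ipaddress.ip_network("10.0.0.0/24"),  # same range, no conflict
}

# An address is only meaningful as a (tenant, ip) pair, mirroring namespaces.
a = ("tenant-a", ipaddress.ip_address("10.0.0.5"))
b = ("tenant-b", ipaddress.ip_address("10.0.0.5"))

print(tenants["tenant-a"] == tenants["tenant-b"])  # True: identical CIDRs
print(a == b)  # False: same IP, different namespace
```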
It's fully open, because we wanna have an open kimono and have everybody be able to see how your cloud is built for you. We're all in this journey together and we all wanna succeed together. So by doing open source goodness, it reaps benefits for everybody that's here and throughout the rest of the world. Think about the number of contributors, the number of consumers of OpenStack. It's very, very cool to imagine that we all have a chance to succeed in a technically free environment. I don't necessarily encourage you to run it free in production; you're probably gonna have a partner, you're gonna have a vendor. If you go to whatever your service provider or vendor is, whoever you've got today for hardware, software, you name it, your networking provider: they all work with OpenStack today. And if they don't, you probably need to move away from them, right? There are companies that take a real forward view because they wanna succeed with you, and that's why they're getting behind this ecosystem. You know, we kinda laughed: three years ago, there were a lot of hardware vendors that wouldn't be downstairs, because they were like, ah, the science project. You're running a science project. Everybody called it a science project. You know what else is a science project? SpaceX. NASA. Some pretty successful science projects. So I'm pretty proud to be a part of this one. So again, for anybody who wants to contribute and talk more and continue the conversation, I'm gonna stand outside for a few minutes and hopefully we'll catch up if you wanna talk a little bit more. Anybody who wants to communicate online, like I said, reach out to me on Twitter, or eric@discoposse.com. And thank you to everybody for coming out today.