I think you're on. Are we on? Good. Hey, good morning, everyone. Thanks for coming after the party. You guys are hearty souls. I like that. All right. We're gonna start with a little intro of who we are, and then Dan's gonna give some instructions, in case you hadn't seen them, on how to get a laptop prepped with a Vagrant image so you can follow along during the workshop portion. So the way this is gonna work: I'm gonna give a quick overview, kind of a quick snapshot of what OpenStack is and where it stands today, and then Dan will take you through the internals and how everything works. Before I get started, though, let me ask: how many of you are new to OpenStack? Okay, good. Then this is the right workshop for you. Great. Let's get started. So let me introduce myself. My name is Ken Hui. I'm currently at Rackspace as the OpenStack evangelist, and I've been involved with OpenStack for about three years. And I'm Dan Radez. I'm on the OpenStack team at Red Hat, been at Red Hat for about eight years, and working on OpenStack for three and a half now, I think. I count seven summits that I've been able to attend and be a part of the community. So Ken was talking about the Vagrantfile. If you didn't see it, in the abstract of this session there's a link out to a quick web page we put together that references a Vagrantfile. So while he's giving a bit of the introduction to OpenStack, how it came to be, and some of its history, pull down that Vagrantfile and see if you can get the virtual machine running, because the Vagrantfile that's out there is the exact same one that I'm going to present off of, so you can follow along with it. It takes a little while to finish doing the installation.
So if you don't quite get it done here, we'll also have a link to the slides at the end of the session, so you can take the Vagrantfile and the slides later and work through it again. Everything we do now as we're working through it, you can take with you and give another try on your own time as well. Okay, great. So Dan just gave you permission not to listen to me while you're busy getting your Vagrant box up. That's okay. So yeah, go to the schedule, go to our abstract, and somewhere at the bottom I think there's a link to the Vagrant box that Dan put together. Okay, while you're doing that, I'm going to give a very quick history of where OpenStack came from. This email you're looking at is the actual email that was sent in 2010 by an executive at Rackspace to the CTO of NASA at the time, essentially inviting them to come alongside Rackspace to create a new open source cloud platform. The history there is that Rackspace, up to 2010, had been running their own public cloud, but it was a proprietary piece of code and it had scalability challenges. When they got to the point where they knew they couldn't scale anymore, they decided to rewrite the entire thing, and at that point they made the decision to use Python as the underlying programming language. At roughly the same time, NASA had an initiative to create a private cloud, and they wanted to pick something open source if possible. They evaluated some of the platforms out there at the time, like, I think, CloudStack, Eucalyptus, and others, and decided those didn't quite fit their needs. So they decided they would build their own private cloud as well, and independently of Rackspace they also chose Python as their programming language.
So the executive who sent the email you saw in the previous slide read about it, contacted Chris Kemp, who was the CTO of NASA at the time, and basically said, hey, do you want to work together? That's where OpenStack came from. And one of the decisions they made was, even though they had these two teams working together, they were still somewhat limited in resources, right? NASA is not a software company per se, and while Rackspace had a lot of software developers, it was by nature, at that time, a managed hosting company. So they decided the best way to grow OpenStack quickly was to open source it and give it to a community of people who could write code and build up the project. This slide is probably a little outdated, but it shows you how quickly, in six years' time, the open source project has grown from really just those two entities to the point where you have hundreds of companies spanning multiple geographies. You can also see that OpenStack is now growing in its adoption. This is a list of companies that may have adopted OpenStack earlier on, but these are the ones that have come out in the last year and publicly said, hey, we are in fact using OpenStack. And you can see it's across multiple industries: it's not just science research, it's not just public cloud companies, it's now true enterprise adoption. Since you guys are new, I'm going to give a very quick definition, or overview, of what OpenStack is. This picture you may have seen on the OpenStack Foundation website. Basically, if you think about what data center infrastructure was like before cloud computing, it was a lot of silos of physical hardware: physical servers, networks, and storage.
And then somewhere along the way we figured out how to virtualize much of that and started being able to create pools of resources. The problem, though, was that each of those silos had to be managed and provisioned separately, and it required someone with a lot of knowledge about how to set up a server, a piece of networking gear, or storage. So the idea behind cloud computing, which is where OpenStack fits in, is: what if we could automate all of that management and provisioning of those virtual resources, and make it really easy for an end user who doesn't necessarily know much about storage or networking to provision their own resources without having to go through an IT person? The idea is to be able to do things very quickly. If you look here, these are basically the southbound APIs that talk to the infrastructure to manage it, and that's what any virtualization platform should be able to do. What makes OpenStack a true cloud computing platform is actually the northbound APIs, right? The fact that, again, a developer, through APIs or through a web portal, could actually say, hey, I want ten machines networked together in this way with this amount of storage, and never have to talk to an operator, never have to talk to a storage admin and say, hey, I need, you know, two terabytes of storage tomorrow. Also, if you look at this picture, one of the differences between OpenStack and some other platforms is that it's a very loosely coupled architecture. In other words, all the components within OpenStack, from the user experience and the dashboard to the storage, the networking, the servers, it isn't one monolithic system making system calls internally, right? Everything is done through APIs. So every component is really its own project, kind of its own program, and we're just relying on these open REST APIs to talk to each other.
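To make "everything is done through REST APIs" concrete: even logging in is just an HTTP call. A hedged sketch of a Keystone v2.0 token request, the era's identity API; the endpoint address and the credentials are invented placeholders:

```shell
# Ask Keystone for a token; the JSON response also carries the service
# catalog that clients use to find every other component's endpoint.
# The IP, username, and password below are placeholders for illustration.
curl -s -X POST http://192.168.37.2:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin",
                "passwordCredentials": {"username": "admin",
                                        "password": "secretpass"}}}'
```

Every other northbound call works the same way: an HTTP request to a service endpoint, carrying the token from this response.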
And the idea is that this way you can change out different parts, you can scale out different components, without necessarily negatively impacting other components. All the components on this slide are what we're going to go through in the hands-on part. We're going to identify all the different pieces and how they interact with one another. This is just an overview of how the different components tie together; Dan will walk through what each of these components does within an OpenStack cloud platform. At the end of the day, the goal of OpenStack is to allow developers to create applications on a platform where they can self-service, create, and then rapidly scale. That is fundamentally what OpenStack is designed to do: let developers move really fast and grow applications to scale. This is an example of a reference architecture of what a typical OpenStack implementation might look like. This one is actually based on Red Hat OpenStack Platform's reference architecture. You see here, again, OpenStack spread out across many components, or many servers, that deliver this cloud computing platform. We'll talk quickly about some consumption models, different ways you can actually consume or use OpenStack today. There are three primary ways: public cloud, private cloud, and then what we call the middle ground, which is private cloud as a service. Public cloud is easy to understand: it's what Rackspace has, it's what DreamHost has, it's what Amazon has, even though they're not running OpenStack. That's shared infrastructure that anyone can get access to. A private cloud distro, again, the idea is to have a private cloud platform infrastructure that no one else has access to besides your company, and there are a number of players that do that. And then there's private cloud as a service.
It's kind of a middle ground where you make some trade-offs: it is single tenant, but you don't manage it. Because one of the value props of a public cloud is that you don't have to run the cloud platform as an operator; you just consume it as a developer, and someone else handles it. The downside is that it's out of your control: someone else runs it for you, and you're sharing it with other people. With a private cloud, you have exclusive access and you get to control it; the problem is you have to operate it on a day-to-day basis. And then private cloud as a service is that middle ground that says: consume it like it's a public cloud, but you're handing off management to someone else, even though it may be running inside your own data center. So here are a couple of the big players. This is just a sample; if you look on the openstack.org marketplace web page, there are many, many more vendors involved. I'm just laying out some of the big key ones I know of. You'll see on that website that some vendors actually play across two or three of these spaces. This is the slightly marketing piece; I won't make it very long. As I mentioned, I'm from Rackspace, and Rackspace has a couple of different approaches to offering OpenStack. I talked about the public cloud, because that's basically what our public cloud is built on today: it's built on OpenStack. From a private cloud perspective, we actually don't have a distribution per se; we offer private cloud as a service. Everything we do is an as-a-service offering that we manage, whether it's our public cloud or a single-tenant private cloud for a customer. We offer that in two flavors right now: one is a private cloud using the upstream code running on Ubuntu, or we can also do private cloud as a service using Red Hat OpenStack. So customers have either option.
Then Red Hat, up to a few months ago, really had the one option of: here's a distribution you can download, Red Hat can help you set up and deploy, but primarily you run it yourself. What's new now is that, because of what we're doing around wrapping managed services around Red Hat OpenStack, Red Hat essentially has both the distribution offering and also this private-cloud-as-a-service offering. That was the marketing piece. Let me talk a little bit about some ways you can learn OpenStack. Obviously, being in this workshop is a great way to get started, but there are other things you can do once you get back home to continue learning OpenStack. A couple of things. One, there's a bunch of resources you can look at; the OpenStack Foundation site is probably the best place to start. It has documentation, it has videos, so you can learn more about what OpenStack is and how to use it. There are a few books out. There's the OpenStack Cloud Computing Cookbook, which was written by a couple of Rackspace engineers. It's a great one to use; I think they also use Vagrant. It will do what we're doing here: spin up a one-node or multi-node environment that you can play with. OpenStack Essentials is the book that Dan wrote that does a similar thing. So if you like the workshop today, go buy his book; that's my suggestion. Ken and I have done this workshop, what, four or five times? This is our fifth time together, so we've kind of become partners in crime. This OpenStack Essentials book is essentially a print form of the hands-on part of the presentation that I'll do: a lot of the same exact stuff that I'll do right here today, in a print form that will walk you through and give you more in-depth information about the different pieces. The first draft of the second edition was actually just completed last week.
So hopefully in the next week or two we'll have the second edition published. You're welcome to get the first edition; it has lots of great information. There are a few extra updates and additional pieces of information that have been added, so you may want to wait a couple weeks for the second edition. Okay, that's good to know. There are other books that have been out for a long time. Early on, there were no resources for learning OpenStack other than the website; it was a lot of googling around and finding things that didn't actually work because they were outdated. Since OpenStack has gone more mainstream, many more real books have been written. I forgot to put this in the deck, but at Rackspace we're fortunate: because of our involvement with OpenStack since the very beginning, we've had a lot of experience and a lot of people who've worked on it, and a lot of them have gone on to write books. Right now there are three or four books, including the one on the screen, authored by Rackspace engineers, and we're actually giving them all away. How many of you have seen the Rackspace Cantina? It's kind of the restaurant. Okay. So this afternoon, starting, I want to say, at three, we're going to be giving away three of the books that have been written by Rackspace engineers. You just have to go to the Cantina and get in line, and they'll hand out the books. If you want, they can sign them, and you can actually talk to the authors and ask them questions. Several of those people are current Rackspace engineers. One of them, for example, wrote probably the best OpenStack networking book that's out there today; he's going to be signing the book and giving copies away at the Cantina later this afternoon. All right. Last thing; I think it's the last slide for me.
There's likely a user group where you live, or very close; hopefully there is. These groups tend to meet every month or every other month, and they go over technologies involved with OpenStack, or sometimes they do hackathons or workshops. So I encourage you to go to the openstack.org community website, find a user group near you, and get involved with that community on a regular basis. The other thing is, the Foundation started doing something called OpenStack Days, which is basically, think of it as a mini summit that's done regionally. Historically that's been done internationally. The last couple of years they did what they call OpenStack Silicon Valley, and later this year we'll have the first OpenStack Day East, which will be in New York City. Again, these tend to be one- or two-day events where you have a keynote and breakouts just like you would at a full-blown OpenStack summit. Okay, so with that, let's get going. I'm going to let Dan take over from here. Yeah, let me borrow that to start out with. And I'll stay up here; if you guys have questions along the way, between Dan and me we should be able to answer them. Okay. So the way that Red Hat operates as a company is that every product we have also has an associated community project. Everything we have is open source. Everything we write goes first into an upstream if possible, and if it doesn't go directly into the upstream, we'll carry the patch internally until we can get it into the upstream. So as Ken mentioned before, there's Red Hat OpenStack Platform, which is our supported enterprise product, but on the community side we have RDO. RDO is our community-supported distribution of OpenStack, and that's what we're going to use here today.
If you've been able to get your virtual machine up and running with Vagrant, it has gone out to RDO and installed RDO into that virtual machine, and that's what we're going to walk through. With RDO, we take the upstream source, package it into RPMs, and give it to you. So what you get in RDO is directly what comes from the upstream. If you're installing Red Hat Enterprise, I'm sorry, Red Hat, they just changed the name, so I'm trying to learn how to say it correctly, Red Hat OpenStack Platform: if you're using that, it is supported by the company, and so we oftentimes carry patches for customers. So there's a small delta between what's in the community distribution, RDO, and what's in the enterprise distribution, OSP, OpenStack Platform. But if you look at the two, really the only big-picture difference between them is the branding: when you install OSP it's Red Hat branded, and when you install RDO there's no branding; it's just vanilla OpenStack. So that gives you a little idea of what we're installing and using today. rdoproject.org, down there at the bottom, is where you'll find all kinds of documentation, and where you'd go to get started if you wanted to do more than the Vagrantfile I've given you: a larger installation, or learning beyond a little sample installation. Lots of documentation and materials there. So this is a picture of the components that we're going to go through. I'm going to quickly touch on each one of them, and then we'll get started actually looking at each of them. Keystone is identity. This is where you're going to do authentication, and all of the services are registered there. Everyone has an identity and needs to be authenticated with one another, not just the end users, but all of these components as well. After we do identity, we'll look at Glance. Glance is image management.
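Before poking at any of these services from the command line, you need credentials. On the demo VM, Packstack drops a keystonerc_admin file into root's home directory; sourcing it exports the variables the OpenStack clients read. A minimal local sketch with invented values (the real file is generated at install time with a random password):

```shell
# Fabricate a keystonerc-style file with made-up values for illustration;
# Packstack generates the real keystonerc_admin at install time.
cat > /tmp/keystonerc_demo <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=1a2b3c4d5e6f
export OS_AUTH_URL=http://192.168.37.2:5000/v2.0/
export OS_TENANT_NAME=admin
EOF

# Sourcing the file exports the credentials into the current shell,
# which is what sourcing keystonerc_admin does on the demo VM.
. /tmp/keystonerc_demo
echo "$OS_USERNAME"   # prints: admin
```

With these variables exported, CLI clients pick up the credentials automatically instead of you typing them on every command.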
So when a VM starts, it has to have a disk behind it, so what we do is pre-build the images and load them into Glance. When the VM starts, it goes out to Glance and pulls a copy of the image that's been pre-built, so instead of you having to go through the whole installation process, it's pre-installed and ready to go. Nova boots it up, customizes the networking and identity inside of it, and then you're ready to run. So it's a way to quickly get those VMs up and running. Before you can get that instance actually running, you need the disk image from Glance, and you also need a network to attach it to. So next we'll go to Neutron, which is OpenStack networking, and we'll look at creating a virtual network to attach the instance to. Then, once you have the image available, the network available, and the identity to get you into OpenStack, Nova, our compute component, takes all those pieces, puts them together, talks to the hypervisor, and actually launches that VM using the image and the network that have been provided to it. Once the instance is up and running: in cloud computing, or in OpenStack, the paradigm is elastic computing, and the idea is that these VMs are intended to be kind of disposable. If you have multiple VMs working together to run an application and one goes down, instead of working really hard to bring that one back up and figure out what went wrong with it while end users are waiting for that capacity to come back, you just slice it off and spin up a new one, because it's so quick and inexpensive to do that. In doing that, the disk underneath it is ephemeral: it doesn't last, it gets thrown away with the instance if the instance gets axed.
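The Glance, then Neutron, then Nova flow just described maps onto the per-project command-line clients of that era roughly like this. All names, the image file, and the UUID placeholder are illustrative, and an authenticated environment (the sourced keystonerc file) is assumed:

```shell
# Register a pre-built disk image with Glance (file name is illustrative):
glance image-create --name cirros --disk-format qcow2 \
  --container-format bare --file cirros-0.3.4-x86_64-disk.img

# Create a virtual network and subnet with Neutron for the instance:
neutron net-create private
neutron subnet-create --name private-subnet private 10.0.0.0/24

# Nova pulls the pieces together, talks to the hypervisor, and boots:
nova boot --image cirros --flavor m1.tiny \
  --nic net-id=<uuid-from-net-create> myinstance

nova list   # watch the instance work toward ACTIVE
```

The dashboard does exactly these API calls for you; the CLI just makes the per-component division of labor easier to see.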
So Cinder is our volume service: we can attach a persistent block device to those VMs and write information to it, so that if the instance gets axed, you're able to reattach that block volume to another instance and continue using the information you've been working with. This isn't shared storage; it's block storage, so it's a one-to-one relationship between the volume and the instance. There is a shared storage service called Manila, which we won't get into, so if you need shared storage, you can do that as well. Object storage is very simple storage: simple content objects. Instead of working at the block level and presenting a volume as a disk to the VM, you use an API and pass content with kind of a name/value mentality: this is the name of the object I want to store and this is the content that goes in it, or this is the name of the object I want to pull, and it gives you the content. So it's very basic file storage, but it can be very powerful. There are websites that use object storage to run their entire system because of how flexible and how simple it is, and because it's software-defined behind the scenes, you can use commodity servers to do replication and mirroring. There's a lot of power behind object storage; it's not just simple file transfer back and forth. Yeah, and you've probably, if you've been at this, how many people here, is this their first summit? Wow, okay, a lot of you. All right.
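The two storage models can be sketched with the era's clients; all names and the volume-UUID placeholder are illustrative, and an authenticated environment is assumed:

```shell
# Cinder: create a 1 GB persistent volume and attach it to an instance.
cinder create --display-name data1 1
nova volume-attach myinstance <volume-uuid> auto

# If the instance is thrown away, the volume survives; detach and
# re-attach it to a replacement instance to keep the data:
nova volume-detach myinstance <volume-uuid>
nova volume-attach replacement <volume-uuid> auto

# Swift: object storage is name/content pairs over an API, not a disk.
swift upload mycontainer notes.txt     # store a local file as an object
swift list mycontainer                 # list objects in the container
swift download mycontainer notes.txt   # pull the content back down
```

Note the asymmetry: Cinder volumes look like disks to exactly one instance at a time, while Swift objects are reachable from anywhere that can speak the API.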
So you might have heard a lot of other project names that we haven't mentioned, like Magnum and projects like that. OpenStack is actually made up of, I think, currently over 50 projects, which can be dizzying. Obviously we're not showing all 50 projects; these are what I would consider the more core projects. In other words, what are the minimum things I need to spin up an OpenStack cloud? So what Dan and I are going through is the core stuff, and then all those other forty-some-odd projects are useful things that you layer on top of your base OpenStack cloud. We just don't have time to touch all of those in the hour and a half we've been given this morning, and they don't all fit on one slide either; you'd probably also be bored to tears by the end of it. Okay, and so the last one at the top here is the dashboard. It's based on a project called Horizon, which is more of a framework. So dashboard and Horizon are somewhat analogous as the component you're working with; the technical difference is that Horizon is the framework underneath that the dashboard, the web interface, is built on top of. So that's the first thing we're going to do here: connect to the dashboard, the web interface, and that's where we're going to be today, working through these concepts so that you can learn to understand how they work together. Through this graph, we're going to go in and create each of these virtual resources and attach them together to get a virtual machine up and running in OpenStack. Everything in OpenStack is built modular, and the dashboard is no different. As Ken mentioned, there are almost 50 projects at this point, and as each of those projects comes through, the dashboard is committed to working hard to get each of them web support, dashboard support. So it has to be built modularly, so that when a new project
comes online, or a project that previously hasn't had a web interface comes to the dashboard and says, we're ready to have our web interface here, or we've done this work for it, they create another module that can be dropped in and integrated. That makes it quick for more projects to be added to the dashboard. So in our abstract there was a web page with a link out to the Vagrantfile; if you were able to get that up and running and installed, this is where we start getting into it. vagrant up does the installation; if you haven't done it yet, it's going to take a little while for OpenStack to actually be installed. If you've already done it and then did a vagrant halt, like I put in the instructions, then you should be able to vagrant up again and it will come right back to where it was when you shut it down previously. The next thing we need to do is connect to the dashboard and try to log in. vagrant ssh will log you into the command line of the VM you've brought up, and sudo -i will change you to the root user. We're going to talk a little more about installation methods at the end; the installation method we're using is called Packstack, and it's good for one-off, simple, demo-like environments, which is why it's being used here. When it installs, the keystonerc_admin file is dropped into the root user's home directory, so if you cat that file and list out its contents, the administrator username and password that were generated for you are in there. And finally, this is the web URL that Vagrant has helped us present: from your web browser, once OpenStack has finished installing, you should be able to connect to 192.168.37.2, and the dashboard path will get appended; if you just connect to the IP address, it will redirect to the dashboard. Let's do a quick check: how many of you have gotten to the point
where you've been able to get it up? Okay. How many are still working on getting the Vagrant box going? Okay, great. And as I mentioned before, if you're having trouble getting it to work, or it's still running, catch up as you can, but I'll also have the slides for you later, and you can take the file with you, so you should be able to use this after the presentation as well. And we'll give you Ken's email so you can send all the questions to him if it doesn't continue to work. Yep, it's in the abstract for the session; there should be a link in there. Like the schedule, the summit schedule, yeah: go find the session description in the schedule, and the link is down there at the bottom. Okay, so, connecting to the dashboard. If you connect to that URL I put up there, you're going to see a screen that looks like this. And then if you SSH in, let me get my Vagrant running again. I'm sorry, let me make the font bigger. I'm not doing anything important right now; I'm just restarting Vagrant. Here we go. Can you see that? Is that good? Okay. So there you go: you see my Vagrant was just suspended, so it was still running, but in a suspended state, so I resumed it. Then vagrant ssh, so now I'm on the VM that Vagrant created for us, and then sudo -i and cat keystonerc_admin. You'll see here that there's OS_USERNAME admin, so that's the generic admin username that was generated for us, and then OS_PASSWORD right underneath it is a randomly generated password. That's what we need to get into the dashboard, so I'm going to copy that password and log in as admin. Now I'm logged into the dashboard as the administrator user. So let's keep going from there. Keystone, identity management, is next. The idea here is that in the install we've done, we have a centralized identity service and a centralized catalog of services. What this means is that all the users and all of the components within OpenStack can
call into Keystone and ask how to connect to the different components. In particular, when you use the command line or the web interface, every time you make a call to create a virtual resource, you have to connect to one of the components we're looking at today, and when you connect to that component, you have to be authenticated. Then, if that component needs to talk to another component to create virtual resources or associate them in some way, those components have to authenticate to each other. So there are tokens and usernames and passwords being passed around between the users and the services. And the identity service, the users, and the catalog of services work like this: you can ask Keystone, how do I connect to the service Nova, or to the service Glance, and it will respond with a connection URL so that you can make your connection to that service. Yeah, one illustration that may help: think of all the components you need to spin up resources, spinning up a server, spinning up some networking, spinning up some storage. Think of each of those as having workers that do that work for you. When the server worker needs storage, it needs to essentially present a badge that says, hey, this is who I am and I'm authorized to ask you to provide me some storage, right? That's essentially what Keystone is: a way for these workers to present an authorized badge to each other, to say, I need a resource from you so that I can pull it all together and spin up a resource. Yeah, definitely. And then for your authentication options, these are just a couple, but Keystone can also have other identity management or authentication systems plugged into it. Users can be connected to LDAP or AD, or username/password, tokens, OAuth. And if you're familiar with Apache's remote user mechanism, Apache supports a bunch of other authentication schemes you
can plug into Apache and rely on that remote-user login methodology, and Keystone will also recognize that. So you're not tied to username/password in the way we've done it in this demonstration. Now let's create a user. I'm logged in as the administrator user, so I can manage users; I'm going to click on the Users link on the side here. Can you guys see that? A little bigger? And then, well, now my button's been pushed off, but up at the top right over here there's a Create User button. You click that, and it gives you a big dialog where you can fill in all the information. I'm going to put my name in here, and I could describe myself, I guess, in the description. So what's happening here is that Dan's logged in as admin; he's basically the global superuser for the OpenStack cloud. Dan will talk about it later, but there's a concept in the cloud where you can have multiple tenants, you can assign specific resources to a tenant, and then within each tenant you can create users that only have access to that tenant. So you don't have to give everyone using your cloud superuser rights. And just to make things super confusing, OpenStack has mixed the words tenant and project. So if you hear tenant or you hear project, they're the same thing. It's just that on the command line it started with tenant, then in the dashboard they started using project, and now they're starting to switch to project on the command line as well. So you can see here we're at Primary Project. The idea in Keystone is that you have a triangle of things that are important: you have a username, you have a project that the user lives in, and then you have a role that associates the user with the project. All of your virtual resources that get created have to be created inside one of these projects, and a user is
no different a user gets assigned to a project in general if there's a group of people that are all together then you will name the project something relative to the users but if it's a project specifically for that user the standard is kind of to just create the project name with the same name as the user so I'm going to come in here and do create a project and if I had more members to add to it I could do that on this tab but I'm just going to create project so I created project with my name that matches my username and then what the dashboard does for you is brings you back to the same screen that you're at so all the information I'd already filled out is there and now my primary project is already filled in there when I click this plus button here and went to that other dialogue I switched out of the create user and I actually created a new project object and then after that project was created I came back into the create user and so now it's associating it and then at the bottom the role is member here and member is a generic non-administrative role for you to be in your tenant yeah so uh is it in a production environment just you would likely set up um you know let's say you let's say developers are going to be using this um you may create a tenant for every for every software project and then a user could be a member of one or more of those projects software development projects and in in each project he can have in one place he could have an admin role and in another project that same user could have just a member role um so depends so there's a lot of flexibility in what you can do and then later we're going to look at using Swift and Swift has a special role that it needs to be connected to so I'm going to come in here you can see the project that I created is down at the bottom uh that matches my username and then I'm going to go to manage members and you'll see that my name is in here and I'm a member role but I can also add myself as a Swift operator 
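As an aside, the same user, project, and role triangle can be set up from the command line with the unified `openstack` client. This is only a rough sketch, not the exact commands from the session: the name `dradez` is just a stand-in, and the role names (`_member_`, `SwiftOperator`) vary between releases and deployments:

```shell
# Create a project named after the user (the convention described above);
# "dradez" is a hypothetical example name.
openstack project create dradez

# Create the user with that project as their primary project
openstack user create --project dradez --password s3cret dradez

# Tie the triangle together: associate the user with the project via roles.
# Role names here (_member_, SwiftOperator) depend on your deployment.
openstack role add --project dradez --user dradez _member_
openstack role add --project dradez --user dradez SwiftOperator
```

The role assignment is the piece that completes the triangle; a user with no role in a project can't create resources there.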
And so, for us to be able to do Swift later, we have to add this role to my user within the project. There is some configuration that can automatically make all users in a project Swift operators; it's not configured by default in Packstack, which is what we're using. So I'll save that, and now I'm a Member and a Swift operator, so we can use Swift later. So at this point I have a user, I have a project, and I have a role in my project, so I can log out as the administrative user and then log in as myself, my non-privileged user.

The first thing to notice here as a non-privileged user is that there are no administrative panels. As you're interacting with this, log back in as the administrator, log back in as yourself, and note the difference: here I've got Project, with Compute, Network, and Object Store, the basic virtual resources that I'm managing, while as the administrator there's also user management and an Admin panel that lets you globally manage all of the resources within this cluster.

So now that we have a user, let's start working towards getting a virtual machine up and running. The first thing we talked about was image management: Glance houses these pre-built images, the idea again being that we don't want to sit and wait for the installation to run for each of these virtual machines when we launch them. The installation is very boilerplate, so if we do it ahead of time and put a generic image into Glance, then when we launch that image, all we have to do is a couple of little tweaks to make it unique in its networking information or its identity as a machine, and it's very quick to pull that image, boot it, and start running from it. So Glance is image management; it's a registry for these disk images. We import the disk images into Glance, and then you're able to recall them and share them across your cloud. There are lots of these images pre-built for you on the internet, so if there's a particular distro or flavor of OS you want to use, go search the web for a cloud image or OpenStack image for that distro; most distros now have one of these images pre-built for you to download and put in.

So, looking at adding an image: instead of trying to distribute this image to everybody so you could download it, I went ahead and had it imported for you. If you go to Images on this screen, you can see there's a CirrOS image. CirrOS is an operating system built for development and testing purposes, so it's very insecure as an operating system, and it's not recommended at all for anything but basic testing and demoing. The reason it's great for demoing and testing is that it's only 12 megs; if you see the size over there on the right, it's a teeny tiny little image, because there's not much in it. But if you wanted to use, say, Fedora or CentOS, we could search "cloud fedora" and the download comes right up somewhere. So if I downloaded this Fedora image here, that's an image we could pull directly into OpenStack; I might have one already downloaded. So in my OpenStack here we say Create Image, name it fedora. You can actually give it a URL: in this Image Location field here, if I had just put the direct URL to that image, it would let me pass that in, and it would download it and import it. I'm going to see if I have a file waiting for me... yeah, there you go, I've got a Fedora cloud image here, so I've selected that, and for the format, it's seen that I've got a qcow2 format; you want that format to match the image that came down. You can mark it public or not: this Public flag here says, can everybody in the OpenStack cloud use it, or can only I use it in my project? So that's public or private based on the tenant or the project that
it's being imported into. And then Protected is a flag that says the image can't be deleted unless that Protected flag is taken off; the user that imported it can take the flag off, or an administrator can take it off, but no one can delete it until that flag is unchecked. I'm going to do public, just for fun: Create Image. Now, Create Image is not actually creating the image, the file itself; all it's doing is creating a record of the file that I previously downloaded. So you download the file from the internet, and then Create Image imports it into the registry. I don't know why it didn't import...

Okay, quick check: where are you guys at? Has anyone got a dashboard up and running? Okay, good, a few of you. How many of you are still trying to get the vagrant box up and running? Okay, one of the challenges of hotel wi-fi. So for whatever reason it doesn't want to import Fedora here, and there are error messages we could go look at, but we don't have time. We've got CirrOS in there, so hopefully we'll be able to move forward and launch the CirrOS image.

Okay, so that was the process to import an image. If you had a different image you wanted to import, you could do that, and you can have multiple images in there. You can even go to the extent of rolling your own image. Say you wanted to build a CentOS image that had your application pre-built into it, so that when you launch instances, you can launch, over and over, the application inside that image and cluster them together; or maybe all your developers need a base image to do their development on. You can custom-roll these images and add them, and then your developers could come in and say, launch me another development environment, launch me a different development environment. So managing these images can become very powerful for productivity, in how you roll them and how you manage them. Again, there's not time to be able to do that here, but there's lots of information online about how to create them, so if you Google "create OpenStack cloud image" or something like that, there's more information there.

[Helping an attendee with the Vagrant box] Yeah, there's a line you can add... did you get it? It didn't work in the back? Okay, do you know what your solution was? ... So it's in here; I was in the CentOS vagrant box... you should have gone with Ubuntu, then. Just kidding. Mine's libvirt, so it's probably in this Vagrantfile here... VirtualBox, okay. So when Vagrant pulls down that CentOS image, there's the Vagrantfile that I've written, which you're using to do this, but there's also a Vagrantfile that describes the image, the box that's been downloaded. So if you find where that box file is being stored on your machine, there's a Vagrantfile next to it, which is what I'm showing here, and you just need to change this "rsync" here to "virtualbox". Is that right? Probably: I'm using a libvirt provider, but for a VirtualBox provider you probably need to change that to virtualbox. You're going to need to find where your box file is. So, see my box.img file? That's the CentOS box that was downloaded and that we've launched off of. This home directory that I'm in may be different on a Windows machine, but you need to find where that box image file is, and where the Vagrantfile that came with it is, and change rsync to virtualbox. Sorry I don't have better instructions on this.

Okay, so let me keep moving. Let's see: we've created a user, we've got an image imported into OpenStack that we can launch off of; next we're going to create a network. So once we have an image and a network, then we can actually launch a VM. Neutron is the network management service, and it creates virtual networks. We think of a network as a switch with a bunch of wires plugged into it, and that creates the network. You can do that virtually on servers, and
even have them span servers, and this is what Neutron does: it uses something called Open vSwitch on the system, ties all the Open vSwitch services on the different machines together, and then creates virtual networks. So the same way that we think about a bunch of wires plugged into a switch physically, Open vSwitch can do that virtually with virtual machines, and it can even segment them so that they're separate from one another. The idea here is that we'll create a network and it goes into the project, which means your project will have its own network that all your VMs can be attached to; no other project can attach to that network, only your VMs can, unless you do extra configuration.

Quick question: how many of you guys here use VMware? Okay, so the concept should be easy. Open vSwitch is basically an open source version, a similar version, of the vSphere Distributed Switch in a VMware environment. Your virtual machines have virtual NICs, and for the most part they typically can't plug directly into a physical switch, so you need some kind of virtual switch, which is really like a bridge, that you can connect those virtual NICs to. So that's all we're really doing.

So, creating the network: hop back in here, select your Network tab, and select the Networks link. Up in the top corner over here there's a Create Network button. I'm going to call this my private network, and then you need to give it a subnet. This is kind of your private networking subnet range; I'm just going to do 10.10.10.0/24. If you've used 192.168 or 172.16, those are good ranges to use; we could just as easily put 192.168.1.0/24 in there. Let's see... I put that in Subnet Name, and it should go in Network Address. You don't have to put a subnet name; if you want to, you could name it "private subnet". There's a network, and there's a subnet that goes with it, so those are two different objects. Usually what I do is name my network "private" and then not name my subnet, just give it the address that I'm using. The gateway IP will be assigned automatically in a private network like this. And then it's important to have DHCP enabled, which is the default on your network. The reason is that when the VM comes up, the first thing it needs to do is get an IP address, and Neutron will statically assign an IP address to it, but the instance will get it over DHCP.

Okay, just to make sure everything's clear: every project, slash, tenant... I'll probably use the word tenant just because it makes more logical sense to me, old school, but I really mean project. So each project or tenant has its own network. Again, if you're from the VMware world, it's kind of like how they used to do vCloud Director networking: every tenant has its own kind of private network, and the VMs inside those networks can only talk to each other. To be able to talk to the outside world, there's got to be a provider network that's tied into an actual gateway to actually talk to the outside world. So what we'll eventually do is basically connect a tenant network to one of those provider networks, and that will allow the VMs to actually talk outside their own little world. And a subnet is just a range of IP addresses within a private network that a virtual machine can have. Does that make sense?

Are you awake? Do you need some jumping jacks? I need a slide that says "let's do jumping jacks". If you guys have questions, because there's a lot of information we're throwing at you, just raise your hand, so we can make sure we're all on
the same page. Or just buy my book later. Yeah, or you could do that.

[Answering a question] No, it goes into your project. So when I logged in as my user, I logged into my project, so all the virtual resources that I create... what the dashboard kind of hides from you is that when you log in, you don't just log in as your user, you log in as your user to a specific project. So I may be a user that's in multiple projects, but when I'm authenticating, I'm authenticating to a specific project, and because I'm authenticating to that specific project, all the virtual resources that I create automatically go into that project.

So there's some division of responsibility. As a cloud operator, you're going to set up the underlying infrastructure, because all of this has got to run on physical stuff; it may be a virtual network with virtual switches, but it's got to actually talk to real physical networking. So you set that up, and you set up the provider network that allows all the tenants to actually talk to the outside world. But each tenant has the ability to create his own private network, configure it, and then basically say, "Hey, I just want to tie into this external network so I can talk outside."

So let's create an instance and put it on this private network, and then we'll do the provider network side to show the external access. Jumping away from Neutron for a minute to Nova: this is instance management, basically the hypervisor manager. It's going to manage these virtual machines on demand across the hypervisors. And OpenStack in general is intended to be built on standard hardware and designed to scale horizontally; I've put that here, that it's designed to scale horizontally and designed for standard hardware, but that's OpenStack across the board, not just Nova. The intent is for you to go take a bank of commodity servers, stick OpenStack on it, and really the only prerequisite for these servers is that you have virtualization capabilities, which pretty much all machines have now, and then enough resources, RAM and CPUs, to be able to divvy up into virtual machines.

So let's take that network and the image that we imported and create an instance. I'm going to go to Compute, then Instances, and Launch Instance, and we'll call it "first instance", because that's really creative. The Launch Instance dialog has all these tabs down the side; we need the ones with the blue stars, the things it has requirements for. So for our Source, we have to go in and select the CirrOS image and say this is the image we want to boot off of. Your flavor is a definition of how many resources are allocated to your virtual machine: you see the preloaded ones, there's tiny and small and medium and large, and they have a certain number of vCPUs and RAM and disk that get allotted to them. For this demo environment, we're basically doing nested virtualization, because we're about to launch a VM inside of our VirtualBox VM, our Vagrant VM, so just do tiny; if you do anything bigger, it's not going to fit. I'm sorry, how many of you have used AWS? Okay, so this concept should be pretty familiar: we're basically doing something very similar to what you would do in an AWS environment. So then next is Networks: I'm going to select the private network that we created. And then there are a couple of other tabs that aren't required right off the bat; we'll jump back into a couple of them a little bit later. So now I'm going to hit Launch, and my first instance comes up. It's connecting to the hypervisor, which, since this is kind of an all-in-one, means it connects back to itself, and it launches, it builds, and it should come up Active. So now you see this Active status: that means the virtual machine has gotten the disk image, it's
created a port on the network, it's spawned the VM, it's come up, and OpenStack sees it as a happy virtual machine, ready for us to start interacting with. Right, and the key is, again, keep in mind that Dan's logged in as a regular user, presumably a consumer of the cloud; this is not something that requires an operator or an admin to be able to do. So again, the whole idea of OpenStack is giving end users the power to spin up their own resources, just the way you would be able to do on Amazon Web Services, but doing it not only in a public cloud but potentially in a private cloud context.

So now we get back to that provider network idea. This instance has come up, and it's on this private network, but on that little private network in your project, the only thing that instance could talk to, if we spun up another instance, would be that other instance; it would be literally a little switch with two computers connected to it, and they could talk to each other. There's no internet access, per se, that's been provided to them. So this provider network ends up being a catch-all shared network that all tenants can end up connecting to and interacting with. So we'll jump back into Neutron and look at creating a second network, and a router to go with it; the router connects your private network to your public network. Because the public network is available for all projects to use, we have to be an administrator to create an "external" network, and I have that in quotes because that's kind of the flag that OpenStack gives these provider networks: when someone talks about an external network, they're talking about one of these provider networks, and you'll see in a minute when we create it that there's a flag that says External, and that's kind of where the name came from. So I'm going to log out from my non-privileged user and log back in as the administrative user.
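The admin-side steps that follow (an external network of type VXLAN, then a subnet with DHCP disabled and an allocation pool) map roughly onto two CLI calls. A sketch only: the provider type and segment must match how your Neutron is configured, and the CIDR and pool range here are placeholders standing in for whatever your network administrator assigns, not the demo's exact values:

```shell
# As the admin user: create the shared "external" provider network.
# --provider-network-type / --provider-segment depend on your Neutron config.
openstack network create public \
    --project services \
    --external \
    --provider-network-type vxlan \
    --provider-segment 1

# Attach a subnet by hand (admins don't get the dashboard's automatic
# subnet creation): DHCP off, and only a slice of the CIDR handed out.
# 172.24.4.224/28 and the .227-.238 pool are placeholder values.
openstack subnet create public_subnet \
    --network public \
    --subnet-range 172.24.4.224/28 \
    --no-dhcp \
    --allocation-pool start=172.24.4.227,end=172.24.4.238
```

The `--no-dhcp` and `--allocation-pool` options are the CLI counterparts of the dashboard checkboxes discussed below: no DHCP because the provider network probably already has a DHCP service running, and a pool because you're usually only given a subset of the range.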
I'm going to get my password back out of my keystonerc file; you can change that password if you don't want to copy and paste it every time. So now, logging back in, we can tell we're the administrator again because we have this Admin panel here, and this is where we're going to be able to manage this provider network. Scroll down and find Networks, and you'll see the private network that I created, the one that says it's in my project, is already listed there. So now we're going to create a public network, and then we can create a router to attach the public and the private networks to one another.

So as the administrator, I'm going to Create Network and say "public". I like to put this in the services tenant, because you're not supposed to attach an instance directly to the public network; you need a router to go in between. The services tenant, or services project, is a generic project that all the components get added to. Like I said, everything in OpenStack has to be inside of a project, and the components are no different; the public network is no different. So this services project is kind of a catch-all, a "don't really use this, but it has to be in a project" sort of designation. And then down here at the bottom you'll see External Network: this is the designation that says this is a provider network, so it's important that we check that.

Now... that wasn't supposed to happen. Let's try... oh, change your provider type to VXLAN, and just put 1 as the segmentation ID, plus External Network. I skipped a step; read the manual. VXLAN: so by default, Neutron is configured to use VXLAN tunnels, so that if you had multiple nodes, it would connect those with these tunnels, and then all of your tenant traffic would go across these tunnels. We only have one node here, it's an all-in-one, so that's not terribly important, but because the configuration by default uses VXLAN, we have to specify that our public network is of type VXLAN.

So one of the powerful things about OpenStack is that there are so many options, so many ways: the fact that you can pick all these different network types, from these virtual network types to actually just regular straight-up VLANs and flat networks, is a very powerful thing. It's also one of the most painful things about OpenStack: there are, I mean, probably 300,000 different combinations of ways you can configure OpenStack, and about a couple hundred of them actually work in production. So that's one of the challenges, and one of the reasons why there are people offering OpenStack as a distribution or a service: they're basically giving their opinionated view of what the right configuration is that will actually work in a production environment. But for the purpose of playing with it, it's good to try as many of these different options as possible, to see what you can actually do and what actually works. And everybody starts in the same place: just because it feels like you're drinking from three fire hoses' worth of information when you first start trying to configure an OpenStack cloud, know that everybody started there, I started there. Everybody has to go through kind of the tough process of learning enough configuration options to get far enough ahead to be able to bring something up and use it, and so hopefully this Vagrantfile can help you at least get started with that and be able to interact with it, so that you have a baseline to move from.

Now notice, as the administrator, that I created the public network in the services tenant, but there are no subnets associated with it. The automatic subnet creation is a non-privileged-user feature within the dashboard; as an administrator, you have to create them separately from one another. So I'm going to select this public network, and it gives me the option to go in and create a subnet, so you see
over on the right side. Now I'm going to click this Create Subnet button, and here again, you can name the subnet if you want; I generally just name the networks and not the subnets, but you're welcome to do either. The network address that you want to use is this one; let me get it typed in, and then I'll switch back so that you can copy it off. The way that the Vagrantfile is designed, you have to use this one specific to the Vagrantfile, because that's the way I configured it. But know that provider networks are something generally provided by your network administrator, so a lot of times this information would be given to you by a network administrator, and they would say, use this specific CIDR, because this is the block of IP addresses that I've given your OpenStack cloud to work with. Okay, let me finish this real quick, and then I'll go back and give you a sec to look at that slide.

On the subnet, we need to go into Subnet Details. I put it in the Name field again; maybe I should just start giving the subnets names so I put things in the right box. Okay, the important thing about creating an external network is that you should disable DHCP. By default, your internal private networks are going to have DHCP, so that when your instances come up, they get DHCP off of the network that you've created. But because a provider network is one where your network administrator has given you this information, you're using a network that's been provided to you, so you don't want to put DHCP on that network: there's probably a DHCP service already running, and you don't want those to conflict. And the IPs that you've been given by your network administrator will be assigned statically, so be sure to disable DHCP on it. And then, oftentimes, there's a subset of these IPs that need to be used, and that's called an allocation pool. So here at the bottom I also have an allocation pool, and all it says is that the IPs this provider network will allow you to use are in the range from .227 to .238. So it's saying, even though you have a /28, only use a certain subset of the IP addresses. I've got to type now.

Okay, does anyone still need this up for a few minutes? Everybody good for now? Great. I have a link at the end, and actually, the link that goes to the Vagrantfile: if you just take "Vagrantfile" off the back of it, that's my fedorapeople drive, and you can look for the Austin PDF; there's a PDF out there that has all the slides in it, right in the same place as the Vagrantfile. Yeah, the same exact slides that are here are in that PDF. Was that your question? That information? Yes, that slide is in there. Everything that's going up here is in that PDF, so that slide will be in there with that information. It's hard to see with the light.

Okay, so now I've created the provider network; now we want to attach the private network that our instance is attached to out to that provider network, so that the instance has external connectivity. So I'm going to log back out from the administrator account and back into my non-privileged account. And there's kind of a neat thing here: this Network Topology link under Network brings up kind of a visual representation of what the network looks like. So this guy right here is the provider network that we just created, and this little cloud here is the public... the private network, did I say that right? The provider network is the little globe, and the private network is the little cloud, and then the instance is the little Mac. It looks like a Mac, doesn't it? In your non-privileged user account there's a Network tab, so drop down Network and there's Network Topology. What we need to connect this provider network to the private network is a router. So if we go down here to Routers: if I only have one router, I generally will name it the same thing as my project, and then
it gives you an External Network option here. So I'm going to go ahead and attach the public network to this router, and then you'll see, once it's created, over on the right side it says Clear Gateway. So another name for this provider network being attached to your router is a gateway: if I were to hit Clear Gateway, it would detach those two, and then it would say Set Gateway and I would be able to select that public network again. So now, if we look at the network topology again, we see that the globe, the provider network, is connected to a router, these arrows, but we still don't have a connection back into the private network to create a route all the way out. So go back to your router and select it; the connection from the private network into the router is called an interface. So if I select the Interfaces tab on that router, say Add Interface, select my private network from the list, and submit it, now I have a connection from the private network to the router and from the public network to the router, and we should be able to see that in our visualization here: you can see there's a link from the instance, to the private network, to the router, to the public network.

And this public network is kind of in quotes, "public": if you get actual public IPs on this public network, then there's actual public access, like real-world internet access. But "public" is a little misrepresented there, in that if you have a corporate network, you still need this public network, with IPs on your corporate network, for you to get into your internal network. So internal means that it's isolated from everything, and public, or the provider network, the external network, is the external connection outside of the OpenStack cloud to whatever network it's connected to. So this could be, like I said, a corporate network that you're on; it could be your home router, you know, if you're doing this at your house and you need to get in through your 192.168 home router addresses; or it could be actual public IPs, legit IPs that you can get to from anywhere in the world.

So at this point we've configured that, and you should be able to SSH into that instance... I'm sorry, there's a step before that: an IP. Yeah, we need to assign a floating IP. A floating IP is an IP from that provider network that's assigned to the instance so that you can then contact it. So right now my CirrOS instance has a 10.10.10.3 address, and if I come over here and say Associate Floating IP, then I can add one to that instance. Now, a floating IP is a resource just like everything else, so I can't just say, "give my instance a floating IP"; I have to first allocate a floating IP into my project, and then take the floating IP that's been allocated to the project and associate it with the instance. So here it says Manage Floating IP Associations with this instance, but it says no floating IP addresses are allocated. The dashboard is great about this: it gives you a little plus button that says let's allocate a floating IP, it'll come from the public network, Associate IP. Now there's an IP that is listed in my IP address list there, and I can say, actually associate that with "first instance", the name of the instance that we've launched. Over on the right side of the screen there's a button next to Create Snapshot, and this Disassociate Floating IP was Associate Floating IP; each of your instances that comes up will have a dropdown with options for its different resources. So these are the IP addresses: remember back when you were creating the provider network, you put in the CIDR and there was an IP address range? All we're doing here is saying, "Hey, I know you have this range of IP addresses that was assigned; give me one of those so that I can give it to one of my instances." So then your instance typically will have two virtual NICs: one NIC will have that private network IP address, so it can
only talk internally, and then the other NIC, you're going to assign this floating IP that connects to the external or provider network; that one gets the floating IP address, and it's what should allow you to talk outside your tenant, either to other tenants or even to the external world.

So, someone had a question: the link that you got the Vagrantfile from, the PDF is in that same directory. If you go to radez.fedorapeople.org and just get the directory listing of that, there's a PDF in that directory, right next to the Vagrantfile. Cool.

Okay, so now that we have that floating IP associated with the instance: I'm on my laptop, and Vagrant has set up the networking for us to get from my laptop into these virtual machines. So you can see at the top the 4.228 address that was associated with the instance; I'm able to ping it, and then at the bottom here I was able to SSH into my CirrOS image. By default, CirrOS wants you to log in as user "cirros", one of the security things they tried to get right. But then you'll notice down at the bottom that when I log in, it's asking for a password. In general, when you download a VM or a cloud image from the internet, they're not going to give you a username and password to get into it, so you have to set up a key pair, an SSH key pair. So in the Compute tab, under Access & Security, you can manage your SSH key pairs. You can create a key pair, similar to the way you do on AWS, where it'll create one and force you to use a private key that you download; OpenStack also allows you to import a key pair, so I'm actually just going to import my public key that's on this laptop.

[Answering a question] It's a virtual router, which is handled by network namespaces at the Linux level. So there's a DHCP network namespace and a router, the qrouter namespace, and if you search for those, if you want to know more about them, that's some of the
Okay, so I've imported this key pair. Now, I'm not sure I can run two instances in this demo environment at once, so I'm going to terminate my first one: I'll go into that same dropdown and confirm "Delete Instance." When that first one booted, the key pair wasn't associated; that was one of the options in Launch Instance. So I'm going to go back into Launch Instance and name this one "second-instance," because I'm really creative, and apparently not very funny either.

A couple of things while he's doing this. How many of you feel like this is overly complicated, that there's a lot to set up? No one? Good. A few things to keep in mind. First, Dan is doing a lot of the prep work to get things up and running; in a steady-state operational environment you shouldn't have to keep doing this over and over. Second, in my experience, if your users are developers in particular, almost none of them will ever use this dashboard. We're using it because it's an easy way to visualize things in a workshop, but in reality your developers will most likely be using the command line, and if you're doing it right, they'll be using the API and other tools to provision these resources. They're not going to be clicking through a bunch of tabs and wizards; everything should be laid out in some kind of manifest or config file, or in the way they code their applications. And the third thing, if you haven't picked this up already, especially since a lot of you folks seem to have a VMware background, which I also have: you need to learn Linux. There's no way around it.
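As one illustration of that CLI workflow, the key pair import and instance launch we just clicked through look roughly like this from the command line. The names here (demo_key, mykey, m1.tiny, cirros, private, second-instance) are assumptions matching this demo environment, and the openstack commands require sourced credentials and a running cloud:

```shell
# Generate a local key pair if you don't already have one
ssh-keygen -t rsa -N '' -f demo_key -q

# Import the public half into OpenStack under the name "mykey"
openstack keypair create --public-key demo_key.pub mykey

# Launch an instance with that key injected via the metadata service
openstack server create --flavor m1.tiny --image cirros \
    --network private --key-name mykey second-instance
```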
Especially on the networking piece: if you don't understand things like namespaces and iptables, you're going to struggle. That's the only way I can put it. I'm not saying you have to go to Linux school, but you have to know enough about Linux concepts, particularly in the networking space, for all this other stuff to make sense as you operate it.

Okay, so launching this instance, I selected all the same source, flavor, and networks; the difference is that I went to the Key Pair tab, selected the key pair I just imported, and then launched the instance. All that does is: when the instance launches, it connects to a metadata service, pulls that key pair out, and drops it in for the default user. So once this guy comes up, I'd be able to SSH directly into it without putting in a password. We're running out of time here, so I'm going to move on and not actually demonstrate that.

We've got two things left to talk about real quick. First, Cinder block storage; I talked about it earlier. If your instance gets axed for some reason, you want a place where you can save off data, so that you can reattach it to another VM and access that data. Cinder does this for us. There's a Volumes link that I just clicked. Again, we can create a volume and give it a name. The source and type aren't necessary; they're extra parameters. The size is what you want to pay attention to. I'm just going to create a one-gigabyte volume, but if you had a larger backing store you could do 10 or 20 or 100 gigs, whatever you're able to provide. Once the volume is created, you simply attach it: "Manage Attachments" over on the right gives you a list of your instances, so I'll select my running instance and say "Attach Volume."
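The same create-and-attach flow from the CLI would look something like this; the volume and instance names are assumptions for this demo, and the commands need a running cloud:

```shell
# Create a 1 GiB Cinder volume
openstack volume create --size 1 first-volume

# Attach it to a running instance; inside the guest it shows up
# as the next free virtio disk, typically /dev/vdb
openstack server add volume second-instance first-volume
```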
The initial drive in the VM was /dev/sda, or /dev/vda, I think, for "virtual disk a." When this volume gets attached (and it says it's attached now), another device will pop up inside the guest as /dev/vdb. You now have a second block device on which you can create a partition table and a file system and mount it into the Linux file system tree, just as you would if you had plugged a physical drive into a desktop or a server.

Because of the time, we'll move quickly here; you can ask questions afterward. But keep in mind: by default, when you create a VM, the root disk is an ephemeral disk, which means that when you delete the instance, all the data gets purged. You can imagine that in some cases that's not what you want, and that's where the Cinder block storage project comes in. It's a way for you to take a volume and attach it, as Dan said, almost like a USB drive, so that when you terminate the instance, the volume persists and still has all the data on it, and you can then attach it to something else.

One thing I need to caution you about: although a lot of customers use a SAN, a traditional EMC or NetApp array, as the backend for Cinder volumes, Cinder is not a shared storage technology. This is not like VMware, where you can take a volume from EMC, have two VMs in a cluster both connect to it, and if one hypervisor fails, restart on the other. That's not the way it works. If the hypervisor you attached the Cinder volume to dies, you have to detach that volume, manually or with a script, and reattach it to something else. That's really important, because people get confused; in their heads they think it's a VMware-type thing, but there are no clustered volumes in Cinder.
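Inside the guest, putting that new block device to use follows the usual Linux steps. A sketch, assuming the volume really did appear as /dev/vdb:

```shell
# Confirm the new block device is visible
lsblk

# Create a file system on it (this destroys any existing data on vdb)
sudo mkfs.ext4 /dev/vdb

# Mount it into the file system tree
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data
```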
The project Manila does shared file systems: Cinder is like a SAN, and Manila is shared file systems, like NFS. (I just labeled myself as a Linux guy there. NFS, yes.)

Okay, so the final thing. For time's sake we're going to skip Swift; you can read about object storage online. Using it in the dashboard is pretty simple, just like everything else: there's "Create Container" and "Add Object to Container." Those are the two concepts: a container that you add objects to, and objects, which are just simple files. There's no metadata about these files beyond a name and the content that goes with them.

I wanted to touch real quick on the Red Hat perspective. I mentioned earlier that we have a separation between the community side and the supported product. Within the RDO community we have two installation methodologies. One is called Packstack, which is what we used today; it's generally intended for demonstrations, proofs of concept, and small deployments where you're playing with and learning OpenStack. The other is called TripleO, and there's a quickstart out there that you can read through and work with if you'd like to get involved or try it. TripleO stands for "OpenStack on OpenStack," and the idea is that it uses OpenStack to deploy OpenStack. What it actually does is stand up an all-in-one OpenStack, just like the one we used today, but it adds bare-metal support, which is the project Ironic. That bare-metal support can then go out and provision a larger cluster of machines. TripleO is where you get the bigger capabilities, like high availability or provisioning larger software-defined storage clusters. And then, from the enterprise side, the
supported side: OSP Director is our supported product. The thing to note here is that TripleO and OSP Director are one-to-one with each other. TripleO is our community-supported installer, and OSP Director is the productized, supported offering. So if you pick up TripleO and like what's in it, or need help with it in some aspect, OSP Director is what we sell as a product. It's important to know that Packstack is kind of just for play, and TripleO is intended for larger, longer-term, supported installations.

This is just a review of the first slide, with all the different components we've worked through. And here's a resources page. Visit Rackspace: Rackspace and Red Hat are partnering now to provide cloud solutions. The RDO Project is where you can get the RDO bits and community support for what we worked on here today. And OpenStack itself, of course. TryStack is a free platform where you can go and spawn instances as a non-privileged user, interacting with the networking, images, block storage, and object storage, if you don't want to install OpenStack yourself but want a demonstration-type environment. TryStack is a set of servers funded by the Foundation and managed by Red Hat, so exactly what we just used in this demonstration is what's running on a bank of servers at trystack.org. And then my fedorapeople link at the bottom is where the Vagrant file and the PDF are, so you're welcome to pull those down, and email me if you're having trouble with them.

I think that's it for us. If you have questions, you're welcome to stick around and ask them. Do we have time for a couple? Not really, right. If you have questions, come on up and see us; we'll stick around for a little bit. Thanks for coming, and thanks for hanging with us for a nice long session.