All right, good morning everybody. We're sorry about that, classic. Okay, reboot. All right, so good morning everybody, this is the OpenStack Tour de Force tutorial. This is a tutorial that is built for one hour, which is extremely short for the content that we're about to present; however, as we're going into the lunch break afterward, I'll be happy to stick around for questions later, just in case we do happen to run out of time and cannot cover all of the questions that you may have. If you want to follow along the tutorial on your own VirtualBox images, you are welcome to do so; however, it is not required. This tutorial is being recorded, it's going to be available for your perusal afterward, and of course the virtual machines themselves are also going to be available on Ubuntu One or Dropbox for you to download. And by the way, all of the slides here are free for you to use for any purpose you wish, under the CC-BY-SA license, and I'm going to make my slides available as a GitHub link at the end of the talk. So for those of you who do want to follow along, this is how you do it: you grab the files, you start VirtualBox, you create one instance of the puppet.ova virtual appliance, and you create three instances of the openstack.ova virtual appliance. You name them alice, bob and charlie, and make sure you reinitialize their MAC addresses when you create them. You can log in to all of these boxes as root with the password "openstack", and for the OpenStack nodes you have to run a little script called fixup-host, which is stored in the /root directory. If you run that as root (fixup-host alice, fixup-host bob, fixup-host charlie), it will turn that one OpenStack virtual appliance into three different nodes with three different network configurations, named alice, bob and charlie, and you can then use them for this tutorial. Once you reboot, you're ready to go. There's nothing that you need to do on the
puppet node itself; it will just come up preconfigured as a Puppet master. Okay, so much for that. We're going to start with the OpenStack Tour de Force, subtitled "cloud from scratch in no time". This is the third time that we're presenting this tutorial: the first time was at OSCON in Portland this year, then my colleague Adolfo Brandes did a version of this talk at CloudOpen North America in New Orleans, and now we're here in Scotland and we're doing it again, but this time we are doing it for the OpenStack Havana release, which just dropped last week. So you may be wondering just who the heck I am. My name is Florian; there are a few links on this slide. The top link here is my official corporate page and bio: I am one of the co-founders and an instructor and principal consultant at hastexo. The second one, the short link, is my Google Plus. You can get in touch with me by email at florian@hastexo.com. I'm one of those strange holdouts who don't maintain a personal Twitter account, but we do have a company Twitter presence, that's @hastexo, and if you want to know more about the training work that we do around OpenStack, I encourage you to visit academy.hastexo.com, which has all of the information about our training services. Now, can I have a show of hands please: who in here is familiar with OpenStack, in the sense that they have looked at OpenStack or deployed an OpenStack instance in production or in testing up to this point? Quite a few, okay, great. The rest are complete OpenStack novices? Can I just have a nod of your head? Okay. All right, so a quick, very brief overview of the OpenStack architecture. OpenStack is comprised of a set of services, all of them under the Apache License 2.0, all written in Python, and all working together using RESTful JSON APIs. At the core of everything we have an identity service called Keystone, which provides authentication, authorization and access control, and also maintains the concept of tenants, which
means we can logically subdivide a physical OpenStack cloud into multiple logical tenants. Every object that we create within that cloud then belongs to a certain tenant, and all our access control patterns and so on are set up on a per-tenant basis. We then have an image store, codenamed Glance in OpenStack. The image store basically maintains our gold images, golden master images, for our virtual workloads, our virtual guests. Those run in OpenStack Compute, also known as Nova, and OpenStack Compute can support a number of back-end hypervisors. The canonical way of doing it might arguably be libvirt with KVM, which is what we're doing here in this tutorial today, but it equally supports the Xen hypervisor, VMware in two different flavors actually (one through vCenter and one interacting with ESXi directly), it supports Hyper-V, and it has recently gained much improved support for container-based virtualization, rather than hardware-emulation-based virtualization, as well. We have a network service, which in the Grizzly release still used to be called Quantum; it has since been renamed and is now called Neutron. Neutron, the OpenStack networking service, provides network connectivity within our virtual tenants, between virtual machines, and also to the outside world. We have a block storage service codenamed Cinder, which provides persistent block storage to guests, and we also have an object storage service codenamed Swift, which provides RESTful object storage within an OpenStack cloud. Then we have a unified OpenStack dashboard codenamed Horizon, which acts as a unified UI across all of these services; it is web-based, built on Django and WSGI, normally runs in Apache, and provides a unified front end for all of this. And there are two new facilities in OpenStack Havana, released last week and developed over the past six months: a facility for metering and alerting services, a project codenamed Ceilometer, and an orchestration engine that
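As a quick aside on those RESTful JSON APIs: authenticating against Keystone is just an HTTP POST with a JSON body. The sketch below shows roughly the shape of a Keystone v2.0 token request; the credentials and tenant name are placeholders of my own, not values from this demo.

```python
import json

# Sketch: the JSON body a client POSTs to Keystone's v2.0 tokens
# endpoint (e.g. http://<auth-node>:5000/v2.0/tokens) to authenticate.
# Username, password and tenant here are illustrative placeholders.
def token_request(username, password, tenant):
    return {
        "auth": {
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
            "tenantName": tenant,
        }
    }

body = json.dumps(token_request("admin", "secret", "openstack"))
print(body)
```

Keystone answers with a token plus a service catalog, and every other service (Nova, Neutron, Cinder and so on) is driven with the same pattern of JSON bodies over HTTP.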
provides both its own native API and an AWS CloudFormation compatible API, codenamed Heat. So those are two more OpenStack sub-projects that just had their first official OpenStack release last week: OpenStack Ceilometer and OpenStack Heat. From this logical overview of the OpenStack architecture follows the concept of node roles. Node roles are sort of logical, atomic and composable classes of nodes in an OpenStack cloud. They're atomic because they're usually not broken down further; they're composable because it's very common for a single node to have multiple of these roles, and depending on your scale-out requirements you may then scale them out across multiple physical boxes. And those node roles are: there's an infrastructure node, which runs a database and a message queuing server. The relational database is normally MySQL, but Postgres is also supported, and the message queuing server is an AMQP server; most people will be using RabbitMQ, but Apache Qpid is also supported, and there is support for ZeroMQ as well, although that support has some known limitations. Then we have the authentication node, which runs the OpenStack identity service, codenamed Keystone, which provides authentication and a service catalog. We have an API node, which provides RESTful API endpoints to OpenStack services; a controller node, which provides scheduling and registration services that are internal to OpenStack, so, for example, the controller node would decide things like which specific compute node a given guest is scheduled to run on, among other things. Then we have the network node, which provides network connectivity within the cloud and also to public, external networks, and we have of course the compute node (usually we have several of these, and it's not unusual to have an OpenStack cloud with hundreds or thousands of compute nodes), which hosts and runs virtual machines, that is to say Nova guests. Then we have storage nodes, one or
several, which provide persistent block storage to guests. There is a very basic implementation of Cinder, the OpenStack block storage service, which uses LVM and iSCSI, and that is what we're going to be using here in this tutorial for demonstration purposes. However, there is a truckload of external drivers supported by Cinder, with which we can interact with existing, either open source or commercially licensed, storage systems: for example Ceph, which is highly popular as a scale-out service for software-defined storage; GlusterFS is also supported for that same purpose; but also things like 3PAR, LeftHand and various other SAN vendors. Then we have a dashboard node, which provides a unified user interface to our cloud admins, obviously. Then we have a metering node; this is new in Havana, a node type that collects metering data from a unified event stream, which provides a set of counters and gauges related to metering of our OpenStack cloud. And finally we have an orchestration node, which runs an orchestration engine for complex guest workloads; this would be the node that runs the Heat engine in OpenStack in our setup. As for the tutorial architecture, we have a total of four different nodes. Alice is the one that holds the majority of our control-related services: alice is our infrastructure node, which runs our database and our RPC server, our authentication node, our API node, controller node, storage node, dashboard node, and also the metering and orchestration node, although I don't know whether the time that we have in this tutorial is going to permit us to actually get to the metering and orchestration stuff. But if you're taking the virtual machines with you, it will be easy for you to reproduce that on your boxes and then have Ceilometer and Heat at your disposal as well. Then we have bob, that's our compute node; it will be running the virtual workloads that we're going to create at the end of the
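The role assignment just described, most roles stacked on alice, compute on bob, networking on charlie, can be summarized in a small sketch. The role names below are my own descriptive labels, not the actual kickstack class names.

```python
# Sketch: which composable node roles land on which tutorial machine.
# Role names are descriptive labels for illustration only.
ROLES = {
    "alice": ["infrastructure", "auth", "api", "controller",
              "storage", "dashboard", "metering", "orchestration"],
    "bob": ["compute"],
    "charlie": ["network"],
}

def nodes_with_role(role):
    """Return every node that carries a given role."""
    return sorted(n for n, roles in ROLES.items() if role in roles)

print(nodes_with_role("compute"))
```

In a production deployment you would spread the same roles across many more boxes; the mapping stays the same shape, only the values grow.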
tutorial. We have charlie, that's our network node, the node that provides access to the outside world and runs services that ensure that the virtual machines can talk to each other within the cloud as well. And of course we have our Puppet master node, because we're going to be deploying this whole OpenStack cloud with Puppet. If we were not automating this, this would be an endeavor that would take us about two days, and we want to do it in one hour, so obviously we are going to be automating that with a system automation framework; in this case we chose Puppet. You can also deploy OpenStack with Chef, you can deploy OpenStack with a variety of other tools, and you can do crazy things like deploy OpenStack within OpenStack using OpenStack tools, which does tie a knot in your cerebral cortex. Okay, we are going to be using a collection of Puppet modules that are available for configuring OpenStack services; those are hosted on StackForge. StackForge is a community project that develops third-party additions to OpenStack, third-party in the sense that what is being developed on StackForge is not, or not yet, part of OpenStack proper. But the nice thing is that it uses the entire continuous integration, continuous deployment and testing infrastructure that the rest of OpenStack also uses, which is great, and means that most of the stuff that is on StackForge actually has much the same quality level as OpenStack itself; at least in terms of revision control, peer review and so on, that's pretty stellar. We are going to use a simple set of wrappers on top of these Puppet modules from StackForge; we call this kickstack, because it's something you can use to kickstart your OpenStack. It's really a very, very thin wrapper around what is upstream in StackForge as part of the Puppet OpenStack modules. Okay, we are going to use the Puppet Dashboard. Let's see, where's my Puppet Dashboard, there we go: that is a pristine Puppet Dashboard. It runs
on the node named puppet. You can access it, depending on how you've set up your virtual networks, either through its actual IP address, which is 192.168.122.100, or you can just connect to localhost port 3000, because it should do a port forwarding there for you. There is no need to use the Puppet Dashboard specifically; you can use kickstack with any Puppet ENC, an external node classifier. For those of you who are familiar with Puppet, that's a facility that Puppet can use to externally classify nodes, put them into specific classes, apply parameters and variables to them, and so on, and the Puppet Dashboard is but one implementation of such an ENC. There are others, such as The Foreman, which is very popular with many Puppet heads, including the Puppet heads at CERN, who run the largest OpenStack cloud in Europe at this time. And you could actually be writing your own ENC, because all that Puppet uses is a bit of YAML that the ENC outputs. So that is our Puppet Dashboard here, and what we also have, or are going to have, is the OpenStack dashboard; if I click on this now, obviously it's not here yet, because that is something that we are going to deploy as part of this setup. There we go, and we have our four different physical nodes. That's our node named puppet; I actually don't need to do anything here right now, so I'm going to move straight to our node named alice, and the first thing that we're going to do here is have it check in with Puppet, which means we're running a quick puppet agent --test, and we are now going to have that check in with Puppet for the first time. Yes, you can crucify me now: for purposes of this tutorial alone, I have actually set up Puppet to auto-sign these SSL certificates. Do not ever do this in production. And by the way, do not deploy these virtual machines in production either, because they have my SSH public key in /root/.ssh/authorized_keys, so if you actually deploy this in production and
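Since all Puppet expects from an ENC is a bit of YAML on standard output, a minimal hand-rolled ENC can be sketched in a few lines. The class names below are made-up placeholders for illustration, not the real kickstack class names.

```python
#!/usr/bin/env python
# Sketch of a trivial Puppet ENC: given a node name on the command
# line, print a YAML classification. Puppet runs this script and
# reads the YAML from stdout. Class names are placeholders.
import sys

CLASSIFICATION = {
    "alice.example.com": ["role::infrastructure", "role::auth", "role::api"],
    "bob.example.com": ["role::compute"],
    "charlie.example.com": ["role::network"],
}

def classify(node):
    classes = CLASSIFICATION.get(node, [])
    lines = ["---", "classes:"]
    lines += ["  %s:" % c for c in classes]  # hash form: class with no params
    return "\n".join(lines)

if __name__ == "__main__":
    node = sys.argv[1] if len(sys.argv) > 1 else "alice.example.com"
    print(classify(node))
```

The Puppet Dashboard and The Foreman are doing essentially this, just with a database and a web UI behind it.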
you make it accessible on the internet, prepare for a visit. Okay, we're going to do that on the other nodes as well, so we have them all check into Puppet for the first time. There we go, and while that is chugging along we can go back to our node named alice, and indeed we can go back to our Puppet Dashboard. So there are two nodes that have checked in, lovely, and then we're going to have a third one shortly. There we go: alice, bob, charlie. Very creatively, I used the domain name example.com, and this is what we are going to be using for the rest of this tutorial. So the first thing that we're going to do is just add these three nodes to the kickstack group, such that we can apply the parameters that we set in the Puppet Dashboard globally for all of these nodes. What I'm doing is selecting the group named kickstack, going to edit, and down here we can now go ahead and add these nodes: alice, bob, charlie, and there we go. Okay, so we've got all of these nodes here in the same group. The first thing that we're going to do is work with our node named alice, and alice we are going to make the infrastructure node, the authentication node and the API node: API, infrastructure and auth (whoops, and what is this guy doing here), and then we're just going to kick off our next Puppet run. Again, for purposes of demonstration I obviously don't have the Puppet agent running as a background service; I'm just invoking it manually. If you were using this in production, then obviously you would be running it as a background service with a refresh interval. So while that thing is chugging along, let me tell you what an OpenStack API node actually does. In OpenStack, all of our services run on, or expose, endpoints that are RESTful. We can interact with them with a standard HTTP or HTTPS client, and what we're doing is pretty much bouncing around JSON objects. That is true for all of the OpenStack services: that is true for Cinder as
much as it is for Nova, as much as it is for Neutron, Ceilometer and all the others. And what these services do is use two means of communication with other services. If there is anything that actually needs to be persistent, so if at any time a service needs to tell another service "okay, here's a piece of data that you are going to use henceforth and forevermore", then all of that goes into a relational database. As I said earlier, typically we use MySQL here; Postgres is also supported, SQLite as well, but obviously SQLite you wouldn't use in a multi-node environment. The other means of communication that these nodes use is AMQP, and AMQP is the common message bus that we're using for communications between our nodes. Now, the next thing I'm going to say you should take with a pinch of salt, it's not completely set in stone, but the rough rule is: if there's anything that needs to be known to other nodes for more than 30 seconds, it typically goes into the relational database, and if it's shorter-lived than that, it's information that just lives on the message bus. And those are the two things that we are creating here in this step: what this thing is doing for us is installing MySQL, and installing RabbitMQ as well, so there is then a RabbitMQ service, and subsequently it's also going to create the databases, which is what we see here. There we go, so we've got all the databases that we need, and I just want to move over this quickly here, with this puppet agent --test again, so we get the next run with the actual API services. I want to complete this rather quickly so I can actually show you how we can interact with the OpenStack cloud. It is important to understand, and we're going to see this in a moment, that as soon as we have our OpenStack API services, or API endpoints, available, we can actually interact with our OpenStack cloud, even though for some of these services, for some of these APIs, we actually don't have
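That rule of thumb, durable facts go to SQL, transient chatter rides the message bus, can be illustrated with a toy sketch. Here sqlite3 stands in for MySQL and a plain in-process queue stands in for RabbitMQ; this is a conceptual analogy, not actual OpenStack code.

```python
import sqlite3
import queue

# Durable state: something other services must know "forevermore",
# e.g. an instance record, goes into the relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (id TEXT, host TEXT)")
db.execute("INSERT INTO instances VALUES (?, ?)", ("vm-1", "bob"))

# Transient traffic: short-lived RPC between services rides the
# message bus; a queue.Queue stands in for an AMQP broker here.
bus = queue.Queue()
bus.put({"method": "spawn_instance", "args": {"id": "vm-1"}})

msg = bus.get()  # a compute service would consume this exactly once
row = db.execute("SELECT host FROM instances WHERE id='vm-1'").fetchone()
print(msg["method"], row[0])
```

Once the message is consumed it is gone, while the database row stays; that is the essential difference between the two channels.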
implementing services as yet. Now that may sound strange, but it's actually really sound and reasonable design: you want the API services to be decoupled from the actual implementing, controlling services, so that at any time you happen to lose connection with a back-end service, or the back-end service actually moves from one node to another, the system should just find out about that essentially by itself, using the information that the system itself provides. Yes, questions are always good; it's okay, stay seated. What you do is put your node named alice into three kickstack classes: the kickstack auth node, API node and infrastructure node classes. Those are the three classes, and the nodes also need to be part of the kickstack group. What is that? "openstack", all lowercase. So while we're waiting for this to chug along: this is also configuring Keystone for us. As I said, Keystone is the OpenStack identity service; that's the service that is responsible for providing OpenStack authentication, authorization and access control, and Keystone also manages the OpenStack endpoints. What this job here does for us is create this little OpenStack RC file, which we can now source, and that provides our OpenStack credentials for us. If we do that, we can actually go ahead and do a keystone endpoint-list, and as of right now we only have one endpoint in there. And this was already interaction with an OpenStack cloud using OpenStack tools: I used keystone endpoint-list to basically ask OpenStack, "okay, where are my API endpoints?", and right now I only have one, which is Keystone itself, arguably because that's the only service that I have actually configured so far. With the next run we are going to configure the API services themselves. Okay, and while this is running, let me go back to my overview; here we go, there's that. So what we're currently doing is, we are configuring, or we have already
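To make that endpoint decoupling concrete: a client never hardcodes where Nova or Cinder lives, it asks Keystone's catalog. The sketch below picks an endpoint out of a Keystone-style service catalog; the catalog fragment is hand-written in roughly the shape Keystone returns (the IP matches the tutorial's alice node, the ports are the conventional Keystone and Nova ports), not actual output from this demo.

```python
# Sketch: locating a service endpoint in a Keystone-style catalog.
# The structure below is a simplified, hand-written example.
CATALOG = [
    {"type": "identity", "name": "keystone",
     "endpoints": [{"publicURL": "http://192.168.122.111:5000/v2.0"}]},
    {"type": "compute", "name": "nova",
     "endpoints": [{"publicURL": "http://192.168.122.111:8774/v2"}]},
]

def public_url(catalog, service_type):
    """Return the first public endpoint for a service type."""
    for svc in catalog:
        if svc["type"] == service_type:
            return svc["endpoints"][0]["publicURL"]
    raise LookupError("no endpoint for %s" % service_type)

print(public_url(CATALOG, "compute"))
```

If a back-end service moves to another node, only the catalog entry changes; clients keep asking the same question and get the new answer.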
configured, the identity service, and the next thing that we're doing is configuring API endpoints for all of the other services, actually more than are shown on the slide here: we're not doing it just for network, block storage, compute and image, but also for metering and orchestration, so for Ceilometer and for Heat. However, we are leaving out object storage, simply because it's not really useful to also add Swift object storage onto these three nodes that we have; those are basically just the limitations of the setting here for this tutorial. By the way, Swift itself is not the only service that provides OpenStack object storage; there are others as well. Most notably, Ceph, through the RADOS Gateway, has a means of interacting with a Ceph store as if it were a Swift store, and likewise GlusterFS has what it calls, rather endearingly, UFO, unified file and object storage, which allows me to interact with a GlusterFS file system as if it were a set of Swift containers. For those of you who are familiar with Amazon S3, that will sound relatively familiar: a RESTful object store not unlike Amazon S3. What S3 is to AWS, Swift is, more or less, to OpenStack. Okay, let's see how alice is doing. Okay, so that is a handful of services there already; we should have that done in a moment. As you can see, we are doing Cinder, the OpenStack block storage service, at this time, and then also Heat, and we're going to give this a few more moments and then we should be done with the API installation. Of course, you know, this is somewhat slower than it would be on an actual production box on real hardware, because what I'm doing here is running three virtual machines on one laptop drive, which is inevitably going to be I/O-bound, and that is something that you're probably going to be seeing on your boxes as well, specifically if you're using spinners. So if your laptop does not come with an SSD, then that may be really,
really slow for you, so it may be a good idea to do this on real hardware instead. Okay, so was that it? Nope. In the meantime we can move along to our next few steps. Right now this node named alice is just our API node, our infrastructure node and our authentication node, and now we're going to add a little more to that: we're going to make it the controller node, storage node and dashboard node, and we're going to leave out metering and orchestration for right now, because that would just take up too much time. Controller, we already have that; dashboard, okay, that part is fine. What is that? Yes, for right now there are six classes in all: we already had API, infrastructure and auth, and we just added controller, storage and dashboard. Then we can also go ahead and make bob our compute node (we're going to run Puppet on that later), and we're going to make charlie our network node here. Okay, while this is running: of those in here who have deployed OpenStack in testing or production in some shape or form, for whom of you was that the OpenStack Folsom release? For whom was it Grizzly? Anyone actually using something earlier than that? Has anyone looked at Havana yet? No? Come on. Just out of curiosity, is anyone still downloading the VM images from my box here? I hope not, right? And there's our Nova DB sync; that should be the last bit. Those of you who are following along: has anyone hit any snags up to this point? Alex? No? Yes, exactly. Did it actually do something for you, or did it just... okay, not necessarily. A second, there we go, about time. Okay, and now I'm going to do something here: we're going to scp over our (whoops) OpenStack RC file to bob, so we can actually work on bob here. We're doing the next puppet agent run, so we can install our controller services here, and in the interim we can actually start interacting with our OpenStack. We're going to do this, and we should be able to do a nova list at this time. There we go; it doesn't actually do anything at this point, well, it does do
something, which is that it actually talks to the OpenStack API services, but it returns an empty list here. The same thing is true for neutron net-list, there we go, and a few others as well. So here, that is the dashboard, and I hope that's the last step that's going to take quite as long, because once that is done, we actually have an OpenStack dashboard to work with, and we should be able to actually start working with our OpenStack cloud, about 45 minutes into the tutorial. That is a reasonably complex Apache configuration, which you really shouldn't need to worry about; again, the Puppet modules basically do all this work for you. There we go. So if you want to set up your nodes like I have here: I have the node named alice, which is our infrastructure node, API node and authentication node, and also our storage, controller and dashboard node; our node named bob is a compute node; and finally charlie is our network node. Sorry about that, that is a problem in my configuration, but not in yours; you are not going to see this issue, because I actually don't have an sdb device here that I can use for my Cinder volumes. Let's do that again, because the only thing that's going to do is just create the volume group in the right place, and that's that. There's our Cinder, there we go, okay. All right, and now we finally have an OpenStack cloud that we can actually interact with. We've got all of these services here, and most importantly we also have... that's my OpenStack RC file; I'm just going to take this password here that has been generated for me, and now let's see if we have an OpenStack dashboard. There we go, there is an OpenStack dashboard, installed from scratch. You're going to have your own password that was generated for you, and you can log in here with a username of admin and that password. There we go. Whoops, what the heck, my web browser doesn't support cookies? There we go, okay. This is what the OpenStack dashboard looks like. By this time we actually already have a fully functional
OpenStack cloud, except that it doesn't have any compute nodes yet, and except that it doesn't have outside network connectivity, but we're going to fix that in a moment. What you can see here is that all of the services that we have defined are already available, and they're running; there they are, those are all the API services, and we can interact with all of those, and we can interact with our specific tenant as well. So one thing that I'm able to do at this point, for example, is upload an image. There we go. The image that we're going to be using is a distribution called CirrOS; it's a super tiny micro distribution, but it runs really well in cloud workloads. So what I can do here is create this image, and I am going to upload it from an image file, come on, there we go, and where is my CirrOS, there is my CirrOS, there we go. That is going to be in the qcow2 format, and I'm just going to name it cirros. As you can see, OpenStack supports a number of image types, such as AMIs as well, or you can upload an ISO that you can then boot from, and it supports any disk format that qcow supports, such as the qcow2 format, or just a raw disk, VDI which comes out of VirtualBox, VHD which comes out of Hyper-V, and VMDK which comes out of VMware. The dashboard runs on alice, that's the dashboard node, and you can connect to it on that specific node, that's 192.168.122.111, just on port 80. If you are currently unable to connect to that, that may be due to your VirtualBox network configuration, and I can show you afterward how you can fix that. Likewise, what you can also do, if you're familiar with VirtualBox, is on your node named alice create a port forwarding for port 80, forward that to any local port on your machine, and you should be able to access the dashboard that way. Okay, so alice didn't do anything here, though bob is currently busy installing, or becoming, a compute node; that's fine. So we have uploaded an image. Another thing that I typically
want to do with my OpenStack tenant is ensure that I can log into the machines that I create, preferably using SSH. So what I can do here is just use my SSH key (ssh-add -L, so that's my key, the public key part here), and I want to upload that under Access &amp; Security. That is how I store my new key pair here: under the Key Pairs tab I can import an existing key pair like this, and I'm going to name it florian. What was that? Thank you, good point. There's my key pair. Another thing that I also do whenever I do this demo setup is create a flavor. Flavors are, as you can see here, the various virtual machine types that we can create: tiny, small, medium, large and xlarge are what we can typically create here. When I'm demoing this on my laptop, I usually like to add another flavor, which I call m1.nano, which is one CPU and just 256 megs of RAM, and all of that (the disk) is going to be zero, and then I can create that, and that's more than sufficient for the CirrOS image that I'm going to deploy here. Yes: flavors come from Nova, because you can manipulate them with nova flavor-list and so on. You have to set flavors globally for the entire cloud, and then you can make them available per tenant; it is not something you can create on a per-tenant level, so you also need admin-scope privileges, meaning you're able to manipulate more than one tenant. Okay. That is still Nova. If I'm here: even though we don't have a network node at this point, we can already interact with the networking service, and there is more than one way to do that, as with anything in OpenStack. So, for example, we could now go ahead and create networks here, or create routers here, etc., but for the time being we don't have anything: no networks, no routers, no connected instances to display. Now, anything that you can manipulate in OpenStack, you can manipulate either through the OpenStack web GUI, the OpenStack dashboard Horizon, which is what we're doing here, or you can manipulate it through the
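The m1.nano flavor just created through the dashboard is, underneath, nothing more than a small JSON document handed to Nova. This sketch shows roughly what that flavor definition boils down to; the field names follow the Nova flavors API, the values are the ones used in the demo.

```python
# Sketch: the properties behind creating an "m1.nano" flavor.
# A flavor is just a named bundle of resource limits.
def make_flavor(name, vcpus, ram_mb, disk_gb):
    return {"flavor": {
        "name": name,
        "vcpus": vcpus,
        "ram": ram_mb,    # megabytes
        "disk": disk_gb,  # gigabytes; 0 means "match the image size"
    }}

nano = make_flavor("m1.nano", vcpus=1, ram_mb=256, disk_gb=0)
print(nano["flavor"]["name"])
```

Whether you click through Horizon, run the nova CLI, or POST this body yourself, the same definition ends up in the Nova database.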
JSON APIs directly, or you can be using one of the OpenStack CLIs, which are perfectly scriptable. That is something that we're going to be doing here. So here on alice, I need to source my RC file, obviously, and there's a little script here that's called create-neutron-networks. If we run that, then even though we currently don't have any network nodes implementing these networks, that doesn't matter; we can still create them, because we're only interacting with the Neutron API, and that in turn is talking to a relational database. So it's now creating a port, a network and a gateway for that network, and lo and behold, there is our network topology, which has just changed. What we have created is one virtual network, the one that you see in orange here, which we call admin-net; that's the one that is only going to be implemented within GRE tunnels, actually, in the OpenStack network. Then we have a virtual router, which we call provider-router, and that then connects to an external network, which is actually a physical network. And that is, there we go, just about to run, and that's going to be our final Puppet run for this tutorial; once that is done, we will be able to actually instantiate a guest. Okay, so that's Nova, and that's our compute node, and we're almost done. The only thing that we're still waiting for is for our network node to not only create the network services, but to also launch an L3 routing agent, a DHCP agent, and also a metadata proxy agent, so we can make sure that our boxes can, in fact, access the Nova metadata API service. And this is going to be the machine that we are going to create and connect to in a moment, and I'm going to prepare everything here for the actual launch of this instance. We're going to create a virtual machine in our availability zone that we call nova, we're going to name it cirros, we are going to boot it from an image, namely the image that we previously
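A script like create-neutron-networks boils down to a handful of API calls. The sketch below shows roughly what the request bodies look like; the names mirror the demo (admin-net, provider-router), and the 10.5.5.0/24 tenant subnet matches the address range the instance receives later, but the exact bodies are my reconstruction, not the script's contents.

```python
# Sketch: the JSON bodies behind "neutron net-create" and friends.
# Each of these would be POSTed to the Neutron API in turn.
admin_net = {"network": {"name": "admin-net", "admin_state_up": True}}

admin_subnet = {"subnet": {
    "name": "admin-subnet",
    "ip_version": 4,
    "cidr": "10.5.5.0/24",
    "enable_dhcp": True,
}}

router = {"router": {"name": "provider-router"}}

# "router:external" marks the network as reachable from outside,
# i.e. the physical network the floating IP pool lives on.
external_net = {"network": {"name": "ext-net",
                            "router:external": True}}

for body in (admin_net, admin_subnet, router, external_net):
    print(sorted(body))
```

Because these calls only touch the Neutron API and its database, they succeed even before any network node exists to implement them, which is exactly what the demo just showed.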
uploaded, I subsequently want to be able to connect to this machine using my SSH key pair, and I'm going to attach it to the admin network, and then we should essentially be good to go. There are our Neutron agents, there's all that. By the way, in the process, things like service passwords etc. are all generated for you, just in case you're wondering what those OVS entries were. What OpenStack Neutron does for you here is build Open vSwitch GRE tunnels, and in this case you can already see the scaffolding on the compute node, which is that we have a tunnel bridge and we have an integration bridge, and they are patched into one another here. But what you're not seeing as yet is the tunnels themselves, the GRE tunnels, which are going to be brought up by the Neutron services as soon as they start. There's the Open vSwitch package, there's the other side of the integration bridge, here's the tunnel bridge, and there we go, let's see. Well, as if by magic, we now have a configured GRE tunnel. So these boxes can now provide tenant networks; they're being placed into GRE tunnels and are then relayed over to the network node. We shall see the other side of that here. So all of that is taken care of for you by Neutron; by the way, there's no Puppet magic here, all of that is done by the API service itself. And what you can also see is a few IP network namespaces, which Neutron uses heavily for routing and DHCP. So now, moment of truth: let's launch this virtual machine. That instance is currently spawning, so what happened in the background here is that the Nova API talked to the Nova scheduler; the Nova scheduler service is responsible for selecting a compute node that is suitable for launching this virtual machine, based on a variety of parameters that we can configure. In this case it's easy, because we have only one compute
node that is capable of hosting this virtual machine. What also happens is that the respective image, namely the CirrOS image that we have here, is transmitted over to the compute node, and IP addresses are allocated from Neutron for this new instance. It has received the IP address 10.5.5.3, and if we take a look here, we should be able to see a startup log from this virtual machine. What it has done is receive IP addresses from the Neutron DHCP service; that is what you can see down here: it obtains a lease for 10.5.5.3, it adds a DNS server as required, and then it connects to a magic URL, which is the Nova metadata API service, where it fetches its own metadata, such as, for example, the SSH key that we have created for it. And that is a complete boot of CirrOS; we should also see a full log here if we selected that. So now let's take a look: here's Charlie, and here is our CirrOS node. Let's see... ah, of course, yes: for this instance I also want to allocate a floating IP address. If you're familiar with AWS, floating IPs in OpenStack are what elastic IPs are in AWS. In the external network we have a floating IP address pool from which we can now allocate an IP address and associate it with that machine. That has now been associated; in a few seconds, there we go, there's our connection to that box, and here is our CirrOS machine.

So in 63 minutes from zero, that is to say from completely bare Ubuntu 12.04 with no trace of OpenStack on it, with a bit of Puppet, even on a slow box, a really slow box, at least I/O-wise, we're getting to a completely working OpenStack cloud that can actually fire up a virtual machine. And not only can it fire up a virtual machine; just for completeness' sake, we can also create a volume here. So let's take a look: here's our /proc/partitions, which of course only has a vda device, so it's a
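The floating IP steps just shown can be sketched like this with the Havana-era nova CLI. The pool name matches the external network created earlier; the address shown is hypothetical, since the pool hands out whatever is free.

```shell
# Allocate a floating IP from the external network's pool
nova floating-ip-create ext-net

# Associate it with the instance (the address below is hypothetical;
# use whatever floating-ip-create actually returned)
nova add-floating-ip cirros 192.168.122.201

# SSH in from outside, using the key pair injected via the metadata service
ssh -i mykey.pem cirros@192.168.122.201
```

CirrOS images ship with a default user named cirros, which is why we log in under that name.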
single device that we booted from. I want to create a volume that I name test, I want to give it one gig in size, and I want to make it empty, with no source; I could also create a volume from an existing image. Here is the volume that I have created. Just in case you're curious, the way this is implemented in this case is something we can see on our node named Alice: if we take a look at tgtadm in target mode with the show operation, that is the volume that has just been created for us and exported as an iSCSI target, completely automagically. We should also in fact see that here; oh, we don't have an lsscsi here, so there is nothing to see as yet. And if we now go ahead and attach, we want to attach that to the cirros instance as /dev/vdb. So that has been attached; let's take a look. We can see on our node named Bob that it has just attached in the background over iSCSI, and the machine itself doesn't even care whether it's iSCSI or something completely different, because as far as it can see, it's just another virtual device that has been added to it.

We're out of time, so unfortunately I don't have the opportunity to go into metering and orchestration. However, if you would like to reproduce this tutorial, you may do so at any time; the only thing that you need to do is add the metering and orchestration node classes, presumably to your node named Alice, and then you will be able to interact with a Heat stack or Ceilometer statistics as well. I apologize for going slightly over time, and I apologize for the I/O issues here; I hope this was interesting for you nonetheless. What this tutorial was meant to show you is that in a very short amount of time, and trust me, if you're doing this on real hardware you can do it in 20 minutes or less, you can build an OpenStack Havana cloud, a fully fledged and fully working OpenStack cloud that you can then use to deploy your virtual instances. So within 60 minutes on this machine, or 20 minutes on an
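The volume demonstration described above can be reproduced along these lines with the Havana-era clients; volume and instance names follow the demo, and the awk lookup is just one way to grab the volume ID.

```shell
# Create an empty 1 GiB volume named "test" (no source image)
cinder create --display-name test 1

# Grab its ID once it reaches status "available"
VOL_ID=$(cinder list | awk '/test/ {print $2}')

# Attach it to the instance; inside the guest it shows up as /dev/vdb
nova volume-attach cirros $VOL_ID /dev/vdb

# On the storage node (Alice), the volume is exported as an iSCSI target:
tgtadm --lld iscsi --mode target --op show

# Inside the guest, the new disk appears alongside vda:
cat /proc/partitions
```

The guest neither knows nor cares that the backing transport is iSCSI; it simply sees another virtio block device.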
actual piece or set of hardware, you can go from building the cloud to actually using the cloud, and that's ultimately what you want to do; you don't want to be held up with building your cloud for too long.

A few final words: if you liked this talk, it would be great if you could let us know at our @hastexo Twitter handle. If you want to refer back to the slides or use them for a presentation of your own, you're certainly welcome to do so; the sources are in, I'm sorry, there's actually an error there, it is lceu2013 in my personal GitHub repo, so that's github.com/fghaas/lceu2013. And by the way, if you're interested in my slides from LinuxCon Europe last year, just substitute 2013 with 2012. If you want to learn more about OpenStack, then I also encourage you to check out our schedule, sorry, the UK schedule, which is on our website; we have classes coming up in Munich later this year, and also in Bangalore, and more in São Paulo in January, so if you're interested in that, please do take a look. Thank you very much for your kind attention. If you have any further questions, I'll be happy to stick around; if not, please enjoy your lunch and your afternoon, and have a great rest of the conference. Thank you.