Okay, so this presentation is an Introduction to OpenStack Architecture. It was born out of some of the work I've done over the last two years and some blog posts I've written. It really came about because I came into the community from an architecture background; I used to be a dreaded enterprise architect. When I got here, I noticed everyone wrote code, and general questions like "how would I deploy this?" were usually met with "go read the code." If you come from an enterprise architecture background, you know we do pictures. We love pictures. Pictures are a great thing, because it doesn't matter whether you understand the language, spoken or code: show someone a picture and, if they've used a computer before, they immediately see how things work. So what I'm going to do today is really just show you pictures and describe them.

Before everybody starts copying: all of these materials will be on the web within an hour, so don't feel like you need to copy down the pictures or take photos. There's an accompanying blog post, this is being videotaped, and there will be multiple channels to get this. All of the source documents will be available on the website too. So don't feel like you need to take pictures and try to transcribe this.

A quick "about me": my name is Ken Pepple. I'm CTO of Solinea, which is an OpenStack consulting company. Before this I ran development at Internap Network Services; I'd say we were the first to actually run Nova in production as a public cloud, and I see a few of the people who worked with me there. I'm also the author of the O'Reilly book "Deploying OpenStack", and I've been contributing code since the Bexar release. You can see my Twitter handle there, and, as most people know, I am the llama.

So let's talk conceptually about what OpenStack is and how things relate to it. One of the things that really trips people up as they're getting started is that there are just a lot of pieces, and there's no overview of how those pieces fit together. Which pieces do I need? Which pieces do I not need? So it helps to understand, conceptually, how these things fit together. This picture hopefully shows a little of what the major pieces are (and when I say pieces, I mostly mean projects), a bit about how they relate, and how they meet each other.

At the very top we have the dashboard: a web GUI that fronts all of the other services out there. Below it are the services you'd actually use, each virtualizing some kind of infrastructure resource for you. Most people tend to be most familiar with compute, but there's also an image store, an object store, and newer services for networking and block storage, all of them usually virtualizing some hardware component to provide that service to you. Beneath those you have the identity service, which underpins everything so you can have a consistent identity across all of the services. I'll go into more detail later, because it's actually more than just identity: it's authentication and authorization. At a high level, though, you've got somewhere around seven pieces here. One thing to remember is that OpenStack is a collection of services; it is not one big monolithic piece.
Just as these are all broken into different services with different code bases, you don't need to deploy all of them. In fact, if you're starting out, I'd recommend not deploying all of them at once; start with a few pieces and then add to them. The other thing to note is that, because they're all separate services, they don't all have to be OpenStack services. I'll talk about this later, but they all communicate through APIs, and as long as you have a service that speaks the same API, you can swap non-OpenStack services into your overall cloud. You most notably see this in the object store and block storage areas, where projects like Ceph and a few others actually implement those APIs and can be used as part of your cloud even though they aren't OpenStack services per se.

A few basics before we get into the details and start talking about the more logical architectures. The first thing to know: yes, everything is written in Python. Don't fear if you come from the Java community; it's not a big leap. I'm an ex-Sun person, I was at Sun for 15 years or so. You will pick up Python very, very quickly, and do not be afraid to go in and dig into the code. The code is very legible and very well organized. If you've done any object-oriented programming (and I'll show you some code later), you'll be able to pick it up immediately.

In general, end users interact with either the common web interface, which is the dashboard I talked about earlier, or the APIs. Every service has an API. There are no private calls; everything is done through REST APIs. There are some privileged APIs, but they are all documented and they are all REST APIs. All services authenticate through a common source, which is the identity service I talked about earlier. All services also try to interact with each other through APIs. This isn't always true, because some services need intimate details of others, especially when you talk about compute, but in general services interact with other services the same way an end user would. That's really helpful when you start debugging, because if you have a problem between two services, you can just go in and replicate the calls yourself.

You'll also hear that most of these daemons (and when I say daemons, I mean most of the services) are almost always written as what we call Paste daemons. Paste is a Python library that lets you create RESTful daemons. It has WSGI (pronounced "whiskey") middleware, which is used very extensively. That middleware lets you do pipelining; you may have seen the same idea as OSGi in the Java world or Rack in the Ruby world. It's configured through your .ini files. It also lets you intercept and redirect requests before they actually reach a service, which is usually how authentication is done: if an unauthenticated request comes in, it gets redirected to the identity server for authentication or authorization before it ever reaches the service it was addressed to.
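To make the pipeline idea concrete, here's a hedged sketch of what a Paste pipeline in a service's api-paste.ini can look like. It's loosely modeled on a Grizzly-era Glance file and trimmed down; the filter names and factory paths vary by service and release, so treat it as illustrative rather than canonical.

```ini
# Requests flow left to right through the filters before reaching the app.
[pipeline:glance-api-keystone]
pipeline = versionnegotiation authtoken context rootapp

# The Keystone middleware: unauthenticated requests are intercepted here
# and validated against the identity service before reaching glance-api.
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

# (The full file also defines the remaining filters and the rootapp.)
```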
So, knowing those few things, let's launch in and actually talk about the logical architecture. A lot of people look at this picture and say it's really, really complicated. A few things: first, it used to be much more complicated. This is the new Grizzly picture. If you go back to my website, you can see I've actually taken away 15 different lines, and we've probably dropped 10 or 11 boxes just within the last release or two. Part of this is the maturation of OpenStack overall, but a lot of it is because we've been able to break services out into finer and finer grained services.

We'll go through each of these in detail, but note that the dotted lines are communications between services using the REST APIs. Solid lines tend to be connections coming in from an end user or some other external service. And then internally there are calls within the services. We'll go through each one of these in a second. In general, though, it is not as complicated as it looks. Everything within a box is deployed as one set of code, so you really only end up deploying between five and seven services. There are actually very few connections between services, other than identity, which obviously everything talks to, and the dashboard, which talks to everything. So it's not nearly as complicated as people seem to think, especially when you start decomposing it into the individual services. The nice thing is that each individual service is pretty much standalone, so you can isolate and debug each one in isolation.

At the top, you'll see end users. End users interact with the services just like any library or any other tool would. There are command-line interfaces for every one of these APIs if you're a command-line kind of person, you can build the APIs into any tool you'd like, and there are a number of third-party and commercial tools out there which already write to these APIs, enStratius and a few others. So there's a variety of ways end users can consume these.

So let's start talking about individual pieces. The first one I tend to talk about is identity, because you need identity for everything else. This is Keystone. You'll notice it gives you a service API but also an administrative API, and you'll find that most OpenStack services follow this model: an external or user API plus a privileged API called the admin API. Things that use the service API are requests like "can you authenticate me? I'm ken, this is my password." On the admin side, though, you do things like creating users. As I said before, we always communicate through the APIs; even admin-type responsibilities and actions tend to be done through the API, usually through this admin API.

So it's a single service with basically four backends. An API request comes in asking for the catalog, policy, tokens, or identity, and there's a backend that actually fulfills that service for you. The most usual one tends to be the identity backend: I come in and say, my name's Ken, here's my password, would you authenticate me? That can be backed by a number of different implementations. This is a common theme within OpenStack, and even more so a common source of confusion: there tend to be many ways to implement each backend. For example, the authentication or identity backend could be a MySQL database.
It could be LDAP. It could be Active Directory. It could be a number of different things; it's a pluggable, configurable set of choices. One of the great things about OpenStack, but possibly also one of the confusing things, is that there are a lot of options. As I go through this presentation, I'm giving you Ken's version of OpenStack architecture. It is not the only way to do it; you could build a radically different architecture than what I'm showing you here just through the different choices you make. So, for example, identity has a backend, which is often MySQL, but you could use a different one.

There's a catalog backend, which implements a service catalog for you. When you ask the catalog what services are here in this cloud, it comes back with a list of endpoints, so you can discover what services are in your cloud. You don't have to know up front that you have block storage, or Quantum as a network service; you can just query the catalog, and it will tell you not only what services you have but how to reach them.

There's also a token backend. The OpenStack APIs tend to be based on token-based authentication. For example, I come in and say, my name's Ken, here's my password. Keystone checks the identity backend, confirms that's Ken and that's his password, and issues me a token: a fairly large string of characters. I use that token from then on to talk to all of the other services, and they authenticate the token, not my username and password. The token is valid for a configurable amount of time; I think the default tends to be 24 hours. That token is, of course, stored in the token backend.

There's also a policy backend, which gives you finer-grained control than simple authentication. Simple authentication says: this is Ken, because he gave me this password. The policy backend says: Ken is an administrator, or Ken is a user, and users can do these kinds of things. For example, I may be allowed to start up VMs but not allowed to create new users. The policy backend lets me express those kinds of rules.

So that is Keystone, the backbone for all the services and something all the services depend on. It's relatively simple when you deploy it just as is: a MySQL database on the backend that stores tokens there. It can also get very complicated: you can swap out the identity backend for your corporate LDAP or some other authentication source, and you can swap out the token database for some kind of key-value store, perhaps an in-memory database. But as shipped, it's fairly simple. Run it on a server or two, depending on your availability needs.

Similarly, the dashboard, codenamed Horizon, is your web GUI. Since we write everything in Python, it's a Django app; Django, if you're more familiar with Ruby, is like Ruby on Rails. This Django app gives you a GUI front end, but it's making the exact same calls you could make from your command line. For example, when you say "please show me all of my running instances or VMs," it makes the exact same call to Nova that a command-line tool would make to ask which VMs are running for me. All it's doing is taking HTTP input, making REST calls to the backend services, and displaying the results in a very pretty interface.
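To make that concrete, here's a hedged Python sketch of the same two calls Horizon (or a command-line tool) makes under the hood, against the Keystone v2.0 API of this era. The endpoint, tenant, and credentials are made-up placeholders.

```python
import json
import requests

KEYSTONE = "http://192.168.1.10:5000/v2.0"  # placeholder endpoint

# 1. Trade a username/password for a token (Keystone v2.0 API).
body = {"auth": {"tenantName": "demo",
                 "passwordCredentials": {"username": "ken",
                                         "password": "secret"}}}
resp = requests.post(KEYSTONE + "/tokens", data=json.dumps(body),
                     headers={"Content-Type": "application/json"})
access = resp.json()["access"]
token = access["token"]["id"]

# 2. Look up Nova in the service catalog that came back with the token.
nova_url = next(svc["endpoints"][0]["publicURL"]
                for svc in access["serviceCatalog"]
                if svc["type"] == "compute")

# 3. Ask Nova which VMs are running, presenting the token, not a password.
servers = requests.get(nova_url + "/servers",
                       headers={"X-Auth-Token": token})
for s in servers.json()["servers"]:
    print("%s  %s" % (s["id"], s["name"]))
```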
And because we have to have an obligatory picture: this is actually a screenshot of the older dashboard. I don't have the newer one running yet, and it's gotten much nicer than this, but this is the default that's out there. It's implemented as a two-part code base: one part is purely the OpenStack logic, the other part is the look and feel and the application level of it. That split lets you skin it any way you'd like. I'm using, I think, the default OpenStack theme; if you use Ubuntu, you'll get an Ubuntu look and feel. Most people change it to what they want. Across the left-hand side you can see the variety of things I can do: look at my instances, my volumes, my containers, all of these different things.

So let's move on to some of the real services that people actually use. The very first OpenStack service, and especially the first one in production, was what we call the object store. You'll hear most people call it Swift. It stores and serves objects, i.e. files. It is not, though, a file server. It is not something you'll be able to mount on your desktop as a file share; it's not something like an NFS mount. You have APIs you talk to, and you can upload things, pull them down, delete them, or change metadata on them, but you have to use the OpenStack API to interact with it.

As you can see, it's a fairly simple architecture that is really built to scale. If you think about the CAP theorem (consistency, availability, partition tolerance), this is built around being able to scale and being highly available without losing data; it is not necessarily built around performance. It has a multi-tier architecture, and you'll see this pattern within the other services as well: some kind of API service or daemon that runs, and a number of worker daemons behind it that actually do things. In this example, we have the Swift proxy. The Swift proxy handles the API calls you make: when you upload or download something, you talk to the Swift proxy, and the proxy talks to other things to make that happen for you.

Within Swift, there are three major daemons that store things for you. Two of them are fairly simple. One is the account database: who actually has an account? Can I map a user to an account here on Swift? There's a container database, which maps accounts to folders, or what we call containers. And then there's the object database, but the object database is actually a store. While the account and container databases are implemented as SQLite databases, the object store is implemented on disk: you have files that are put onto disk, and a mapping to put them there. Any metadata you'd like to attach to those objects is written into the file system as extended attributes. This limits the file systems you can use to ones that support extended attributes; the most common ones tend to be XFS or ext4, and that's where the metadata for those objects actually lives. So you can see it's a fairly simple architecture from a logical point of view.
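As a hedged sketch of what "using the API" means in practice, here's how you might create a container and upload an object straight through the Swift proxy with plain HTTP. The storage URL and token are placeholders you'd normally get back from Keystone, as in the earlier example.

```python
import requests

storage_url = "http://192.168.1.20:8080/v1/AUTH_demo"  # placeholder
token = "TOKEN_FROM_KEYSTONE"                          # placeholder

# Create a container, then PUT an object into it.
requests.put(storage_url + "/backups",
             headers={"X-Auth-Token": token})
requests.put(storage_url + "/backups/notes.txt",
             headers={"X-Auth-Token": token,
                      "X-Object-Meta-Owner": "ken"},  # custom metadata
             data=open("notes.txt", "rb"))

# Fetch it back; the custom metadata returns as response headers, having
# been stored as extended attributes on the object server's file system.
resp = requests.get(storage_url + "/backups/notes.txt",
                    headers={"X-Auth-Token": token})
print(resp.headers.get("X-Object-Meta-Owner"))
```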
From a physical point of view, though, this one tends to be much more complicated when you actually implement it, because obviously you want a very large store. This can be 50, 200, or 500 machines, with just a few of them running the Swift proxy and most of the others running the account, container, and object databases.

The next service we'll look at tends to be one of the smaller ones that people overlook: the image service, or Glance. Glance takes images (and when I say images, I mean VM images you want to launch) and stores them for you. As I said before, it has a similar architecture. There's an API daemon, somewhat like the Swift proxy daemon I talked about earlier, called glance-api. It lets you upload or download images, and query or assign metadata on them. There's a glance-registry, which takes care of storing and querying metadata about images in the database. It only stores the metadata, the information about the images. The actual image storage has a pluggable backend, and you'll hear this again and again from me. This is also where you see some cooperation between services: most usually, you will store your Glance images in Swift. You don't have to, though; you can store them on a file system or in other places. I think there are four different choices you can make there. Basically, the only things it does are store an image and retrieve an image for you. It does some other fancy things around caching and prefetching images, but the general architecture is: I store metadata, I store files. Fairly simple.
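Here's a hedged sketch of querying glance-api for image metadata over its v1 REST API; the endpoint and token are placeholders.

```python
import requests

glance_url = "http://192.168.1.30:9292/v1"          # placeholder endpoint
headers = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}   # placeholder

# glance-api serves this call; behind the scenes, glance-registry is
# what actually holds and queries the metadata in the database.
images = requests.get(glance_url + "/images", headers=headers).json()
for img in images["images"]:
    print("%s  %s  %s" % (img["id"], img["name"], img["disk_format"]))
```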
The one that everyone actually looks at and always wants to talk about tends to be one of the more complex ones: Compute, codenamed Nova. It is probably the most complex of all the OpenStack services, mainly because there are so many moving parts and so many configuration options. At one point, one of the Cloudscaling guys added up the number of configuration options, and there were somewhere above 400 you could set. Having said that, while there are a lot of configuration options and a lot of moving pieces, if you follow the simplest path, it's actually quite simple and fairly easy to get going.

Looking at this, it follows the same kind of architecture you've seen in the last two services we talked about; there's just a lot more going on here because there's a lot more functionality. Everything starts up at the top with nova-api. Nova-api accepts all of your API calls and then usually does some kind of mediation on them to start some kind of orchestration for you. Some of these are fairly minor: for example, if I query for my running VMs, it simply looks in the database, where we're storing most of the state, grabs which ones are mine, and sends that data back to me. Some things are much more complicated: for example, starting a new VM, or assigning a floating (public) IP to one. Those use some of the other pieces we're talking about here, which I'll get to in a second.

Nova-api also does a few other things. It enforces policy, and policy here is slightly different from the policy I talked about in Keystone. Whereas Keystone is binary (can I do this action or not?), this is a quota-based system. There are somewhere around ten quota items you can set for any particular user or tenant (a tenant tends to be a company, and you can have users underneath it): the total gigabytes of RAM you're using across all your VMs, how many VMs you're running, how many IP addresses you're using, things like that. Nova-api does those checks too. Authentication, though, is handled in middleware before requests ever get here. Just like in all the other services, the WSGI middleware traps any unauthenticated request and sends it to Keystone for authentication first, so by the time a request reaches Nova, Nova knows you're an authenticated user.

So let's break down a few pieces within Nova and talk a little about what they do. The most interesting one is nova-compute; you can see it up in the left corner. Nova-compute sits on a host and orchestrates your hypervisor. This is something a lot of people don't understand: OpenStack is not a hypervisor, it's not virtualization itself. It's a framework that lets you control the virtualization you have out there, and one of the things OpenStack is very good at is supporting most of the different hypervisor and virtualization technologies available. Whether it's VMware, Hyper-V, KVM, Xen, or any of their derivatives, OpenStack probably supports it in some way, shape, or form. It does this through a set of libraries; the major ones I have up there are XenAPI (the XAPI commands, if you're in the Xen world), libvirt for KVM, the VMware API, and other libraries that do the orchestration for you.

The process by which it does this is somewhat complex, but it's easy to step through if you know a few simple pieces. For example, say we're going to start up a new instance. A command comes into nova-api: Ken would like to start an instance, a large one, let's say. Nova-api makes sure I'm authenticated and that I'm under quota. Once it's determined that, it sends a message into the message queue. The message queue is how all of the daemons within Nova interact, by sending messages back and forth to each other. The message it sends out is "Ken wants a large instance," and the first recipient tends to be nova-scheduler. If you come from the HPC world, a scheduler determines where you should run a job or what kind of resources you should use. So the first question is: nova-scheduler, where should Ken run this large instance? There are a variety of ways it can answer that, and I'll show you a little about that on the next slide, but nova-scheduler usually comes back with a simple answer: host C, or host 12. That is where you should go ahead and run it.
Once that's decided, host 12, which is running nova-compute and is one of my compute hosts, gets a message saying: start up this instance. This is actually a place where OpenStack Grizzly has changed a lot from previous versions. Before, as things changed (for example, as a queue message came through), everything would be updated in the database, and the message itself carried only the action, none of the associated metadata. With Grizzly, they've implemented something called no-db-compute, which isolates nova-compute from the database. Nova-compute tends to be the most security-vulnerable piece of our code, because it runs on the hypervisor where end users actually run their stuff, so it's the easiest to exploit. It no longer has credentials for the database and no longer talks to the database, in case it does get exploited. So nova-compute receives a message that carries the metadata: the size and the image I want, all the metadata around that, plus my identity and credentials. It then talks through whatever library it has to make that happen on the hypervisor. For libvirt, it goes through libvirt, doing the same kinds of things you would do from the command line on a KVM machine.

Now, the interesting thing here is that the compute host obviously doesn't have the image yet. Part of what got sent to it was: Ken wants to start a large instance, and by the way, it's an Ubuntu 12.04 instance. Once it gets that, nova-compute talks to Glance, over the same API that you would use, and pulls down that image for me. Nova-compute says: Glance, can I get the 12.04 image Ken wants? Glance streams it down. Nova-compute now has the 12.04 image, does all the magic that KVM or Xen or anything else does to create an instance from an image, and up it comes.

Once it's done, it sends messages back to a special daemon that's new in Grizzly called nova-conductor. Nova-conductor's sole purpose today is to mediate database access for nova-compute. Instead of nova-compute holding database credentials and talking to the database directly, it talks to nova-conductor, and nova-conductor performs the database updates and handles all those accesses. The database I've been talking about stores all of the state of your currently running cloud. While it doesn't necessarily hold user credentials and such, it does know that Ken is running three different instances: one on host 5, one on host 12, one on host 17. It carries the current state of your cloud. The database can be implemented in several ways: most people use MySQL, though you can use SQLite, which isn't a good choice, or Postgres if you'd like. Likewise, the queue can be implemented with different technologies: most people use RabbitMQ, Red Hat uses Qpid, and Cloudscaling uses a ZeroMQ driver they've written. So you can use different queuing technologies there too.
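From the user's side, that entire flow (API, queue, scheduler, compute, conductor) is triggered by one call. Here's a hedged sketch using the era's python-novaclient; the names, sizes, endpoint, and credentials are placeholders, and it assumes an "ubuntu-12.04" image has already been loaded into Glance.

```python
from novaclient.v1_1 import client

# Same credentials you'd hand to Keystone; all values are placeholders.
nova = client.Client("ken", "secret", "demo",
                     "http://192.168.1.10:5000/v2.0",
                     service_type="compute")

image = nova.images.find(name="ubuntu-12.04")   # assumes such an image
flavor = nova.flavors.find(name="m1.large")

# One POST to nova-api; everything after that happens over the queue.
server = nova.servers.create("ken-vm", image, flavor)
print(server.status)   # BUILD at first, ACTIVE once nova-compute is done
```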
There are a variety of other daemons that run here, providing a few other services. The most notable ones are the console services: most people want to be able to get into the console of their Windows VM or their Linux VM, and those daemons let you do that.

I talked earlier about nova-scheduler. The scheduler, if you come from the HPC world, is one of those classic computer science problems: how can you most efficiently pack VMs of different sizes onto compute resources? Conceptually, it's probably the simplest piece you have: it gets a request saying "I need to schedule this size of VM onto a host; please tell me which host." In reality, it's probably the most complex. But it can be something very simple. For those of you who don't read or write Python, most of you could probably still look at this slide and find the key part: filter the hosts down to the candidates (the request spec is the size I'm looking for; those are the hosts I've got), raise an error if no host qualifies, and otherwise, in this particular one, which is a random scheduler, just pick one of the hosts at random.

Like other things in OpenStack, the scheduler is configurable, and there are many different schedulers out there. This one used to be called the chance scheduler; it just picks a host at random for you. Obviously there are more sophisticated ones, and as clouds and implementations have gotten larger, a lot more effort has gone into this area. I think today OpenStack ships with five or six different schedulers you can choose from, and you can always write your own. They range from the chance scheduler, which is somewhere around 20 lines of Python (of which you're seeing probably 10), up to several pages. It's as complicated as you need it to be. It really doesn't matter much if you have a small cloud, but as your cloud grows, and especially as you want to drive efficiency, the scheduler becomes a huge piece of what you're trying to optimize. If you've ever been in the HPC world, you know this is all about optimizing jobs; the difference here is that we don't have jobs, we have VMs. You'll also see schedulers in other services. In this particular service we're usually asking the scheduler where to put a VM, but elsewhere it might be where to get a block storage volume from, or where to get a network, or something like that.
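Here's a hedged, standalone reconstruction of the chance scheduler idea described above. It's a sketch of the logic, not the actual Nova source; the host dictionaries and field names are invented for illustration.

```python
import random

class NoValidHost(Exception):
    """Raised when no host can satisfy the request."""

def schedule(request_spec, hosts):
    """Pick a host for the requested instance completely at random."""
    candidates = [h for h in hosts if h.get("service_up")]
    if not candidates:
        raise NoValidHost("no hosts available for %s" % request_spec)
    return random.choice(candidates)["name"]

# Example: three compute hosts, one of them down.
print(schedule({"instance_type": "m1.large"},
               [{"name": "host12", "service_up": True},
                {"name": "host17", "service_up": True},
                {"name": "host05", "service_up": False}]))
```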
Block storage, or what we call Cinder, is one of the two newest services we have. It debuted in the last release, Folsom, and it basically took over part of what we deprecated out of Nova. Nova used to be much larger, with many more arrows and boxes, as I said before, because in addition to doing compute it was also doing all the networking and all of the block storage. Over the last two releases we've started to move that code out, and while you can still use the old code as part of Nova, you now also have entirely standalone services. The nice thing is that this has brought a lot of the third-party ecosystem in, letting people like NetApp and other hardware vendors provide their specialized hardware to OpenStack.

So Cinder, the block storage service (and Quantum, the networking service, which I'll talk about in a second) follow a somewhat different model from the other services you've seen here. Block storage, unlike object storage, doesn't let you manipulate objects or files through the API; it orchestrates volumes for you. Key point: it is not file shares either. It's a volume, so you'll have to go ahead and create some kind of file system on top of it yourself. It's conceptually similar to EBS, if you're familiar with Amazon Web Services.

It has a fairly simple architecture, but notice that the architecture changes dramatically depending on which pluggable modules you use. Where before I talked about choosing different hypervisors, here you tend to choose different hardware. Whether you're using IBM or SolidFire or NetApp, or just iSCSI out of Linux, the architecture will change a bit, because you usually have some kind of hardware appliance in there and probably some specialized way to talk to it. But the basic architecture is very similar. There's cinder-api, which accepts your API commands and routes them: for example, "can I have a 10 GB block storage volume?" There's cinder-volume, which does most of the work here: it talks to the database and orchestrates what we call the volume provider. The volume provider is your actual block storage; whether it's SolidFire or NetApp or whatever is out there, cinder-volume talks directly to it to carve out that 10 GB volume for you and make it available. You can see there are different drivers, and there are a lot more than the ones I've listed; we get more on each release, and I think we're up to 11 now on Cinder. Because of that, the architecture does change a little depending on whose gear you're deploying. The final piece is cinder-scheduler. Just like the compute scheduler, something has to pick where that 10 GB volume comes from, and the scheduler chooses for you. For example, if you have five or six different NetApp filers, it will pick which NetApp box to carve those 10 gigabytes from.
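And here's a hedged sketch of asking Cinder for that 10 GB volume with the era's python-cinderclient; the credentials, endpoint, and volume name are placeholders.

```python
from cinderclient.v1 import client

# Same Keystone credentials as before; all values are placeholders.
cinder = client.Client("ken", "secret", "demo",
                       "http://192.168.1.10:5000/v2.0",
                       service_type="volume")

# cinder-api accepts the request, cinder-scheduler picks a backend, and
# cinder-volume carves the volume out of the chosen volume provider.
vol = cinder.volumes.create(10, display_name="ken-data")
print("%s  %s" % (vol.id, vol.status))   # "creating", then "available"
```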
The last piece we'll talk about today is networking, called Quantum. Quantum takes the place of what we used to call nova-network, which was built into Nova before. It got moved out, and it's the place where you see the most complexity today, as well as probably the most velocity and code movement. Quantum lets you create both Layer 2 and Layer 3 networks, depending on which pluggable backend you've chosen, and assign public and private IPs to your instances. It has a fairly simple architecture, and I say that a little bit with air quotes, because this architecture changes radically depending on which provider you've chosen. I've taken the picture that shows the Linux networking version; if you look at the Cisco one, for example, there are probably six more boxes, and if you look at Nicira's NVP, there are a few more boxes plus some external machines. But at a high level, you have the Quantum server handling your API requests, along with some orchestration pieces.

Below that, depending on which provider you've chosen (and I don't have all the providers up there, because I believe Brocade is in now), you'll usually have a number of agents as well as some plugins. The plugins provide the logic, as well as the registration in the database of the Layer 2 or Layer 3 networks you're creating. The agents tend to run out on your particular network appliances, or control those appliances, to actually provide that networking for you. There are some common agents; again, this depends on the plugin or vendor you're using, but they tend to include a Layer 3 agent and a DHCP agent, plus specific plugins for specific network items. New in the Grizzly release, you also get load balancing and a few other new network services as part of this. Again, this changes dramatically depending on whom you've chosen, but that's the basic architecture you see there.

We're getting toward the end here, so I'll mention two future projects that have been accepted, not into Grizzly but into Havana. As we go to Hong Kong in October, two new projects will debut. One is called Ceilometer. Ceilometer is a metering database: it gathers metering statistics about things running on your cloud. Metering is not the same as billing; it provides the raw data so that you could bill. This gives you the metering data you would then send up into an Aria or a Zuora or the BillingStack project, which you could then rate and actually send bills from, if you're a service provider. Ceilometer has been around for six months or so, but it's come very far very fast, and some people are actually using it in production now. The other is Heat. Heat provides a REST API for orchestration, kind of like AWS CloudFormation: it gives you the ability to launch multiple instances at a time and orchestrate them. That will be coming in the next release too.

In addition, there are many other projects that aren't considered core projects, things like Moniker, which is DNS as a service, and a few others out there. Although they're not core, they obviously work with OpenStack, and hopefully someday they'll graduate into this picture as well.

With that, I think I'm out of time. I will be posting all of this in a blog post at solinea.com as soon as I can get some bandwidth. All of these pictures are in OmniGraffle, but I'll try to put them into Visio too if you need them; all of this is in PowerPoint, and there are some blog posts that go with it. Otherwise, thank you for your time, and I'm happy everyone was able to sit for the entire time. After this, Diane and I will be introducing Anne Gentle, who will be coming up here to tell us wonderful things about documentation and joining that community, so please come back in a few minutes.