And we're not really famous for being punctual at OpenStack summits, but I figured we might just break that trend this year and say let's get started. If you can find a seat in the back, that would work out well. Most speakers, I think, get an introduction, but I'm the chair of the track, and so technically I should introduce myself, which is a little bit confusing. How many of you don't know who I am? There's a lot of new people. Shut up. My name is Joshua McKenty. I was the architect and team lead at NASA that built the Nova volume, Nova network and Nova compute components prior to the launch of OpenStack. The first release of OpenStack source code was on my blog in May of 2010. I now run a company called Piston Cloud Computing, and we commercialize the OpenStack software for private cloud deployments only, and specifically for folks that look a lot like NASA: banks, governments, life sciences, where security and reliability are important. I'm not going to talk about that today. I figured we would start with story time. How many of you are vaguely familiar with the history of OpenStack? Okay, sort of. I will give a very vague story then. NASA and Rackspace joined forces in the spring of 2010 because we were both coincidentally doing roughly the same thing. We were building compute and networking components and they were building storage components. And when we got together for the first time, it was the weirdest experience ever. It's like discovering a long-lost twin in software. We had chosen the same languages. We'd chosen the same frameworks. We had the same design architecture. We were even using the same networking hardware at the time. We'd never met before. And so OpenStack was born three weeks later, the fastest gestation period in the history of babies. I want to mostly debunk things that people believe about OpenStack that are incorrect, so that folks getting started with OpenStack start with a level playing field.
And starting with the philosophy. OpenStack does have one. We didn't espouse one to begin with. We just discovered in that first conversation with Rackspace that we all believed pretty much the same things about software development. Which is that standards for standards' sake are not what we're interested in doing. I actually should back up for a moment. I usually start every talk I give by apologizing for the fact that I am going to offend people, and that I will make my best efforts to offend everyone equally. That means I might offend people a little more than I would if I just spoke naturally. I have a lot of strong opinions about software. I have some very strongly held opinions about OpenStack. I have been involved on a 24/7 basis for years of my life. And this in particular, this philosophy: OpenStack is not a standards body. If you want to argue about standards in the absence of working code, there are many other places you can go and do that. What you will discover at places like the design summit, and in the process of using OpenStack or filing bugs or submitting patches, is that if you propose something without working code, no one will hear your voice. It's like a tree falling in the forest when no one is around. This has made it harder than it should be to engage directly with the user community, because they tend to have great ideas and not have working code. And so events like this: one of the questions people ask is, why do you have one summit that's a mix of developers and users and customers and business people? Well, the reason is so that we have those conversations happening directly. Because it's really impossible in the OpenStack community to get anything done if you don't have at least a written blueprint. At least an idea about how this code could be made to work, or better yet, maybe a patch. With that as a preface: OpenStack has always been about working code. Just to emphasize this, we never wrote a spec.
When we launched OpenStack, we had been working on Nova for less than two months. It was just working code. Okay, OpenStack has always been about two kinds of cloud. There are big explosive public clouds and in-your-kitchen private clouds. And that challenge is part of what defines OpenStack, and it's part of what makes it amazing. The best analog to that is the internet itself, which was a technology built to build both private and public networks. We ended up using the same technologies, Ethernet, TCP/IP and BGP routing, to address both private and public networks. And that made things possible that we couldn't have imagined. That's really the goal behind OpenStack as well: to have the same technologies work for public and private cloud environments. Because we don't even know why that's gonna be great. We can just tell that it is. Trust me, it's gonna be great. The last of the sort of high-level philosophical pieces is the interfaces. Every piece of OpenStack has three different kinds of interfaces. So you've got a GUI, that's a dashboard, that's the Horizon component. Everything inside OpenStack can be exposed through that GUI. Then you've got a command line tool, a CLI. And every piece of OpenStack today has its own command line tool. There's also a project now underway to unify those into a single command line tool. And then you've got the actual HTTP REST API. And again, every component in OpenStack has all three of those interfaces. So when you start talking to developers, it's really easy to get lost. You're like, wow, there seem to be 18 to 20 different projects. I don't understand what people are talking about. Where is this in the code base? Where are the docs? Strip it down. There's really just three kinds of resources. There's really only three or four core projects. What makes it seem complicated is there's three different interfaces for each one of those projects.
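To make that concrete: a click in the GUI and a CLI command both bottom out in the same REST call. Here's a minimal Python sketch of what that HTTP layer looks like. The endpoint URL, tenant ID and token are made-up placeholders (real values come from your cloud's service catalog), and nothing is actually sent over the network.

```python
import urllib.request

# Hypothetical endpoint and token -- real values come from your cloud's
# Keystone service catalog. Nothing here goes over the network; we only
# build the request to show its shape.
endpoint = "http://cloud.example.com:8774/v2/TENANT_ID"
token = "hypothetical-auth-token"

# The same "list servers" action the GUI and CLI perform is, underneath,
# just an authenticated HTTP GET against the compute REST API.
req = urllib.request.Request(
    endpoint + "/servers",
    headers={"X-Auth-Token": token, "Accept": "application/json"},
)

print(req.get_method(), req.full_url)
```

The GUI and the CLI are just friendlier wrappers around requests like this one.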
So if you think about this as a matrix of the ways you use the systems against the resources that the system provides, that gives you this basis for understanding. This is the diagram everybody's probably seen. And I'm going to start with this, because when we get into the project names, we will get very confused. So let's start with the concepts. The resources we talk about in IT are compute. Notice we say the word compute. We don't say virtual servers. We don't say virtual machines. And the reason is because OpenStack doesn't care if there's a hypervisor or not. The idea of provisioning compute resources can be applied to bare metal. It can be applied to virtual machines. It can be applied to Linux containers, LXC, OpenVZ, et cetera. It can be applied to supercomputers and ARM processors and GPUs. So a compute resource is really an abstraction of something that runs a process. Storage, same thing. We have two different major core projects in OpenStack, Cinder and Swift, that provide different kinds of storage. But when you're first easing into OpenStack, just think about storage as a kind of resource. Networking has gotten a lot of attention in the last couple of releases of OpenStack. And at a very high level, you've got data. You keep it somewhere. You do some processing on it. And your network gets the data to the processing and back to storage again. You can do amazing things with each of these components. OpenStack does not have to be deployed as one solution. You can deploy just compute. You can deploy just storage. You can deploy just networking. You can deploy a mix and match. It works best if you deploy all of it. We have tried to keep these projects very separate, to run them separately. They have separate technical teams. They have separate tracks in the summit. But they do have a lot of intimate relationships. And this makes sense when you think about what people are trying to do with cloud. You're always processing some data.
You do need some storage for your compute environment to be useful. And if you don't have a networking connection at all, you can't get in and out of the rest of your environment. So you still end up needing these pieces. But you don't have to use OpenStack for all of it. All right. We talk about these different sets of resource pools. Let's talk in specifics about each of these core components of OpenStack, starting with compute. I want to put faces to the projects, because a 101 class is never enough. And the reason to be at a summit and be here in person, for those of you who are watching the recorded video, I'm very sorry, you should have been here, is that you can track these people down. So this is Vishvananda Ishaya. He is the project technical lead for Nova. How many of you are familiar with Conway's Law? Conway's Law essentially says that when you build a piece of software inside an organization, the structure of the software will match the structure of the organization that wrote it. When we launched OpenStack, there were two projects. There was Nova and there was Swift. And that was the biggest mistake we ever made. Because people then believed that OpenStack had two capabilities: virtual machines and object storage. When we launched it, Nova also included all of the networking code. And it also included all of our block storage; Swift is the object storage. We named it that way because inside NASA, Nova was the name for our whole thing. We didn't have names for these sub-projects. And we had two companies and we launched two projects, and that seemed to make sense. But it led to a huge amount of confusion. So since then, we've taken all of the networking out of Nova and we've made that Quantum. We've taken all of the block storage out of Nova and we've made that Cinder. So today, Nova is just about compute. Those of you who are developers in the room will know I'm lying a tiny bit. There is still block and networking code in Nova.
Pretend it's not there. So this is Vish. Vish is the elected project technical lead, although nobody has run against him for, I think, ever. And the most important part of Nova is scheduling. I want a compute resource, and I have a bunch of these resource pools: where is my VM going to run? There are a number of schedulers. There are a few that are part of OpenStack core. There are many that have been written by the community. So when you hear people talking about scheduling, that's what they mean. A lot of other communities would call this placement, or they would call it allocation, or they would call it reservation. We call it scheduling. What it means is: I need to get a VM running on some piece of hardware. How's that piece of hardware gonna get selected? And again, this is not about virtual machines. This can be bare metal. It can be containers. It can be ARM processors. It can be GPU environments. There are a ton of drivers available for Nova that make it run on all of these different things. But at a very high level: schedule and launch compute resources. And you can track Vish down. He's not wearing that hat today, but he is out in the halls. You will find him. You can ask him questions. He will probably tell you to go talk to one of the lieutenants, depending on what part of Nova you're interested in. Okay, object store. This is arguably the most mature piece of OpenStack, in the sense that it was the first one that was running at massive scale in production. It was the Rackspace cloud storage code base. The project technical lead is John Dickinson. He is also here. You can track him down. I know he's in the room. And how many of you are familiar with object stores as an idea? Some of you. The idea of an object store is that it's like a file system, or like any other kind of storage, except that you've given up on the parts that make it hard. So it's not POSIX. You can't map a file into memory.
You don't have the same security controls around who owns it, what group they're a member of, exactly. But the trade-off is you can make an object store theoretically infinitely large. Dozens of petabytes, hundreds of petabytes. And it's incredibly durable. Every bit that's written into that object store is saved at least three times: three different servers, three different hard drives. There's a process running continuously that's looking for errors at the bit level on any one of those hard drives. And if they're detected, it automatically makes a fresh copy from one of the other two copies onto another piece of hardware. So the data is incredibly durable. And it's very highly available, because every component of Swift can run on many different machines. You can lose whole racks of gear and have zero downtime in a Swift cluster. The last thing is that the architecture of Swift from the bottom up is designed for massive concurrency, which means you can have tens of thousands, hundreds of thousands of people or individual processes accessing Swift at the same time. Which really, for a service provider, is your number one bottleneck. It's great that you have hundreds of petabytes of storage, but if you can only have a few hundred people accessing it at once, it's a very expensive bottleneck. So that's Swift. And again, track down John if you have questions. Each of these projects has what we would call a bleeding edge, or a leading edge, where work is being done. Swift, not that much. Of all of the projects, it is basically what it is today. People are very interested in maybe locality awareness, perhaps some geographic performance improvements when you're running multiple data centers around the world and you wanna replicate between them automatically, but by and large, Swift does what it was built to do. Nova on the scheduling side is also fairly mature at this point. All right, block storage: Cinder. Young; it just landed in core in Folsom.
And again, it was originally called Nova Volumes. It was part of Nova. Think of it as an external hard drive, but without the USB cable. It can be disconnected and connected to any compute resource you have, in theory anywhere in your cloud. Internally, it uses iSCSI. There are folks involved in making sure that it supports every possible export target you would want: AoE, NFS, Fibre Channel, et cetera. It is POSIX. You can use it exactly like a hard drive. When you connect a Cinder block store to a Nova virtual machine, it looks like an attached hard drive. That's how it shows up. And it has the performance characteristics of hard drives, not object storage. So it's much lower latency. Use it for the data store for your database. In most OpenStack deployments, however, it doesn't magically keep three or four copies. It doesn't necessarily fail over when the machine that that hard drive was hosted on is turned off. Some commercial implementations do that, but it's not part of the Cinder spec. And again, the driver community for Cinder is maturing rapidly, but it's fairly new. So you'll see most of the large storage vendors, a lot of the new storage vendors: Inktank, SolidFire. I'm gonna forget people, MP Store, Cast Store. If I forget you, it's not on purpose. This is... John Griffith, thank you. The project technical lead for Cinder; he works at SolidFire. You should track him down. He's also here; I don't think he's in the room. Okay, networking. There has been probably more buzz around the Quantum project in OpenStack than anything else. And personally, I believe that's because OpenStack is the thing that makes software-defined networking make sense. We're the use case for SDN. So every SDN vendor in the market has piled into OpenStack. This has been where they could demonstrate their stuff. They can use OpenStack to show how cool SDN is. So the launch of Quantum was three or four releases ago.
I might get the dates mixed up; I think it was in Diablo. And it started with Cisco, Nicira, Midokura, Arista, Big Switch, and the rest of the OpenStack community in a room for a full day, thinking out what an API for software-defined networking would look like in OpenStack. We had working code; we had Nova Network. And Nicira had Open vSwitch, and Midokura had the early versions of MidoNet. And we really brought that community together and said, okay, take your vendor hats off. How does this need to work? And what we ended up with was Quantum. There are a number of OpenStack deployments that don't use Quantum. Most of them still use Nova Network. That's really been because of when they were deployed and when Quantum was well integrated. And there are a lot of folks who've used just Quantum and not the rest of OpenStack. So this is kind of that mix and match. That is Dan Wendlandt from Nicira. He's the project technical lead. He's also here; you should track him down if you wanna get involved in Quantum. The bleeding edge for Quantum is really moving farther up the stack. So we've got great layer two, virtual layer two, almost virtual layer one networking: ports and cables. And we have some decent layer three functionality around IP addressing. We don't have layer four or above. As I believe some of my colleagues mentioned this morning, there's no DNS in OpenStack. And we know it's missing. Quantum might be a good home for DNS. Load balancing as a service? Also not yet part of core. There are some ecosystem projects that support that. Let me pause there for just a second and talk about core. Because this again, this came up in the foundation board meeting yesterday, and we've done a poor job of explaining this. The goal in life as a project inside OpenStack is not necessarily to be in core. That's not everyone's goal in life. If you're a hockey player, you wanna be in the NHL. But if you're a piece of software, you don't necessarily wanna be in core.
And frankly, we've rejected more things from core than we have accepted. The only things that belong in OpenStack core are the things where clearly there is only one way that this should be done, and OpenStack needs to own and prescribe that one way of doing it. There are a lot of projects that will live in the ecosystem forever, in the satellite community, and that's where they belong. And that doesn't mean they won't be well supported by distributions. It doesn't mean they won't be commercially supported in products. It doesn't mean the integration won't be mature and maintained, and it doesn't mean they won't have an active role in OpenStack governance. What it really boils down to is that the trademark policy for the word OpenStack requires a product that's called OpenStack to include everything in core. And the more things we have in core, the less appropriate it is for the general case. We go back to these public and private clouds. We don't want things in core that are only useful for private clouds or only useful for public clouds. So you're gonna see a lot of debate this week around Ceilometer, around the metering solution that's been proposed: should Ceilometer be part of core? Probably not. It's not really what private clouds need. They need chargeback. They need cost visibility. They don't need billing. These are sort of divergent solutions. Maybe there's a central part of Ceilometer that belongs in core, but the rest of it doesn't. We've had this debate around orchestration as well. And OpenStack has always had a philosophy, and I think we'll continue to have a philosophy, that says OpenStack goes up to the infrastructure-as-a-service API layer and no farther. So above that you will see RightScale, Scalr, enStratus, DynamicOps, ServiceMesh, Puppet and Chef, and a number of other solutions, including Juju, for managing the guests. And OpenStack does not go up into that layer.
We have never taken one of those projects and said we'll make that part of OpenStack core, because this whole tools ecosystem is our set of partners. They are the ones who are bringing OpenStack into every possible solution. And if OpenStack becomes competitive with that tier, OpenStack has no reason to exist. Beyond those major resources, the storage, the networking and the compute, there are a number of shared services that these other projects rely on, and these three in particular are in core. There's always a debate: should we take some of them out? Should they be merged? Glance is the image registry. It's a slightly nuanced idea, but essentially, if it's launchable, it lives in Glance. Disk images, snapshots, et cetera. The storage is actually not part of Glance. Most OpenStack deployments use Swift to store the disk images. Glance manages that registration layer. What format is it in? What hypervisors can it be launched with, et cetera? Keystone is middleware. AuthN and AuthZ: authentication and authorization. We would love for it to be a more complete identity management solution. It probably never will be. And this comes back to private clouds, public clouds, science clouds, hybrid clouds. They need different things from their authentication solution. So what Keystone really does is provide an abstraction and an API that has a number of drivers, and those drivers can connect to anything. So there are presentations this week showing Keystone connected to SAML. There are PAM modules. There are SQLite databases. Most service providers have done their own Keystone plugins for their internal identity systems. Most private clouds we've been involved in use Active Directory. Finally, there's the cleverly named Common, OpenStack Common, and it has a bunch of other stuff: logging, configuration, file management, the nuts-and-bolts libraries of the rest of these services.
The goal with Common is less about what a product or a user can get out of it than about how we can make the rest of OpenStack easier to work on. This is don't-repeat-yourself in real life. The project technical leads for these three projects: I am not going to get into gratuitous detail, you can track them down. I should know this: Mark McLoughlin, Joe Heck, and, somebody help me out, the Glance PTL? I know this guy; I knew this was gonna happen. There's always one name you just can't get, huh? Thank you, bcwaldon. That's because I never call him Brian. Brian Waldon, bcwaldon on IRC. All right, let's talk briefly about governance. If you're involved in OpenStack, or you want to get involved in OpenStack, having a very quick idea of how governance works will be helpful. There are three governance bodies in OpenStack. The foundation itself is a nonprofit and is governed most directly by the board of directors of the foundation. It has 24 seats. Ridiculous. Eight of those are directly appointed by Platinum members. Eight of them are elected by the Gold members out of the pool of Gold member delegates. It's somewhat confusing, but there are currently, as of last night, I believe, 12 Gold members. There will never be more than 24. And there will only ever be eight seats for Gold members. So they have an opportunity to run, but not necessarily a seat on the board. The last eight are individual members. They are elected out of the community of the currently 6,000 individual members of OpenStack. There are some interesting rules. I'm not gonna belabor them, but, for instance, there can only be two affiliated persons on the board, so Platinum members can have one individual member from the same company, but not more than that. So the elections get very complicated, and there's a large amount of debate about whether or not we should change the election process, and whether it's really proportional representation, and everything else. The bottom line is it's working pretty well.
And if you think about the board's responsibility as being the marketing of OpenStack and the protection and preservation of the brand: the members put in the vast majority of the dollars that go into the foundation, and those dollars are primarily spent for those purposes. And that's basically what the board is for. The technical committee has directly replaced what used to be called the Project Policy Board. You don't really need to worry about that. It is the PTLs of the core projects plus another eight, I believe... no, sorry, five generally elected active technical contributors. So anyone can be a member of the foundation. It costs nothing; we may at some point in the future ask you to write up 90 characters' worth of why you would like to be a member, just so people have some sense of who you are. But in order to be an active technical contributor, you have to be an active technical contributor. That doesn't mean code. Anne Gentle is hiding in the back of the room behind the camera. She leads the documentation team. She is absolutely the shining paragon of an active technical contributor. Right, docs, yes. Docs count. Bug reports count. Community evangelism counts. We have a number of folks on our team at Piston who host meetups and go give talks about OpenStack in other communities, the Python Software Foundation and others. That's active technical contribution. It doesn't have to be patches. In any case, there are a much smaller number of ATCs than there are foundation members. I think ATCs number just shy of a thousand. That community elects its own general members to sit on the technical committee, and they use Condorcet voting. So it's, again, nuanced and complicated. The TC makes all of the technical decisions about OpenStack. Each project makes as many of its own decisions about that project as is appropriate. The technical committee tries to make sure that the projects don't go off in random directions.
So the creation of OpenStack Common came out of the TC saying, guys, we're doing configuration management four different ways; can we clean that up? They also vote, along with the board, on whether new projects should become part of core or not. Really, helping to define what OpenStack is happens between those two groups. The last governance body is the user committee, and it's very new. So far it's one person. His name is Tim Bell. Literally, the way we wrote the bylaws for the foundation is: the board has to appoint one member, the technical committee has to appoint one member, and those two members then get to make up the process for how to fill out the rest of the user committee. In fact, they get to decide how big it should be, how long folks should sit on it, everything else. So that is happening right now. Tim Bell is here. He is from CERN. He is also on the foundation board, although that's not a requirement. And he's been very active in helping folks understand what you can do with OpenStack, and bringing back to the community feedback on what they would like it to do that it doesn't do today. That's really the goal of the UC. If you're involved in OpenStack and you're not going to write code, and you don't particularly care about politics and arguing about the budget of how much money gets spent on lawyers and which legal firm we should use to protect the trademark, you should try and get involved as a user. Just to put a fine point on it: marketing and membership; meritocracy and active technical contributors; and the users of clouds, including the tools builders and the operators of public and private clouds. Now, if you decide to get involved as a developer, and I'm not advocating that in particular, in fact, we have a lot of developers.
I think we could use more folks on the user committee, and more folks bringing back the feedback from the people running it, saying, hey, you know, we tried deploying it this way and it turns out that combination of Hyper-V and Swift doesn't work. But if you do, there is a review process, and it goes like this. You see something wrong in the code, or you see a place where you can make a contribution, and you write a patch. You don't ask for permission. If you wanna feel like you got permission, come to the summit, lead a session or a lightning talk and say, hey, I've got this idea. I think if we used this one kind of driver, we could get OpenStack to control those carpet robots, vacuum cleaner robots, what are those? Roombas. I'd like to have a Roomba cloud. I'm gonna write the Roomba cloud driver. Okay, submit a patch. No one is gonna say no to you writing code. They will say no to you committing the code. And so what happens is that code goes into a review process, and two members of the core team for that project need to sign off on it. It's automated. Generally, it shows up in a web form. You go to IRC, you say, hey, I've submitted this patch for Roombas. I'm pretty sure that Vish should look at this, because technically it's a compute driver. Vish will look very briefly at your commit message and say, no, I think Brian Waldon should review this; I have no time for robots. And there's some folks like this who will log in and look at your code. Hopefully they will provide constructive feedback. "Why are you trying to power a robot?" is not constructive feedback, but "perhaps this doesn't follow PEP 8 standards" or "you've reimplemented the other general-purpose robot driver that we had" is. And then they'll say, okay, it's done. It gets merged automatically. It goes through unit tests. Congratulations, you are now an ATC. It's pretty easy.
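As a toy illustration of that gate, here's a sketch of the rule that a patch needs two core-reviewer sign-offs plus passing tests before it merges. This is my own simplification for illustration, not the review system's actual data model, and the patch contents are invented.

```python
# Toy version of the review gate described above: a patch merges only once
# two core reviewers have approved it and the unit tests pass. The real
# review tooling is much richer than this sketch.
def can_merge(patch):
    approvals = [r for r in patch["reviews"] if r["core"] and r["approved"]]
    return len(approvals) >= 2 and patch["tests_pass"]

patch = {
    "subject": "Add hypothetical Roomba compute driver",
    "reviews": [{"reviewer": "vish", "core": True, "approved": True}],
    "tests_pass": True,
}
print(can_merge(patch))   # → False: only one core sign-off so far

patch["reviews"].append({"reviewer": "bcwaldon", "core": True, "approved": True})
print(can_merge(patch))   # → True: two core sign-offs and green tests
```

Either missing approval or a failing test suite keeps the patch out, which is exactly why getting reviewers' attention on IRC matters.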
If you want to get involved in the technical community at large, or in governance, specifically in the technical committee or in fact on the board, do this first. Do something; do some contribution first. If you are going to get involved as a user, awesome. Go talk to Tim Bell. In fact, talk to any board member. Say, hey, we're using OpenStack. We'd love to provide some feedback on what it does, and maybe you can tell the developer community to fix these things. Great. But if you want to get involved in the governance, you should be a contributor first. And we've had this confusion with a number of folks who felt like they needed to come and become a sponsor or a member before they were gonna be allowed to contribute code. This is open source. It happens exactly the other way around. Patches first. Then come and talk to us about logo swaps. And it's not hard. Most reviews are done in less than a day. If it drags on for weeks, it gets harder and harder to merge. So get on IRC. All right, last point. I've sort of emphasized the developer workflow because I figured there was some clarity that could be provided, but really, folks at this conference fall into three buckets. Either you want to use OpenStack: you want to learn how the APIs work, you want to learn how the GUI works, maybe you want to take some training. You can start with TryStack. It's free. You've got all the APIs; go play around. Learn the APIs. Read the docs. Get good with OpenStack. That's a great place to start if you want to be a user. There is training available. Rackspace offers training. We offer training at Piston. Mirantis offers a ton of boot camp programs. There are others in other countries. eNovance offers training in French. There are folks in Australia at Aptira who have offered training courses. Wherever you are in the world, there is someone offering training. If you want to start as a user, try TryStack or get some training.
If you're going to run it, please start with a product. Please start with a distro. I'm gonna come back to this in a minute. If you're going to be a developer, what you want is DevStack. It is a shell script that turns your laptop into an OpenStack cloud. It's crazy awesome, but it's nested hypervisors. It's turtles all the way down. So don't use it to run OpenStack. Please, that's not what it's for. Don't use it to try OpenStack, either, because you're gonna get frustrated. It takes, like, whatever, ten minutes to launch a VM. It's on your laptop. Laptops are not designed to be clouds. But if you're a developer, great. It's all the code, and it's built for developers, and it's easy to restart processes after every patch, and you can run the whole test suite in there. DevStack is awesome. If you're going to work with DevStack, you should track down Jesse Andrews or Dean Troyer. Submit patches so it can be run on more platforms. Okay, I said this before. I wanna come back to this. We are very proud that people have downloaded source code from OpenStack.org 300,000 times. Please stop doing that. It's like kernel.org. If you go to kernel.org and you download the raw source code for the Linux kernel, you're probably going to be unhappy. You're going to have to write your own makefiles. It's not designed to be an operating system. It is the Linux kernel, right? OpenStack.org, same thing. It's not everything you need. There are, what, 560 configuration options in Nova alone. There are 19 different files full of configuration options. We support every hypervisor known to man. We support LXC containers, OpenVZ, and bare metal provisioning, and you have a less than 10% chance of taking the raw source code from OpenStack.org and getting it up and running, even if you're following the docs. I mean, there are like six free distros. Grab any one. We've got one, Rackspace has one, StackOps has one.
Start with that, or even start with DevStack. Please don't start with OpenStack.org, because what happens is you're sad, and then you go on the forums, and you're sad, and you say, I tried OpenStack, I couldn't get it working. Okay, well, which line of which config file did you have misconfigured? The reason there are products and installers is the same reason there are Linux distros, right? The kernel is not a product; it's not designed to be run on its own. Distros are a great way to start. You will have a chance to try two or three, and you will gradually realize that, for the kind of cloud you want to run, there is one product that is perfectly suited. If you're a service provider, you probably do want to really, really understand OpenStack, all the way down to the source code. You want your own developers monkeying around. You want to write your own Keystone plugins. If you're running a typical private cloud, probably not. You don't necessarily want your own dev team contributing to OpenStack. There are products available for that. That's my PSA: grabbing source code from OpenStack.org is not really the best way to start. Now, I have gone over by one minute, but we started three minutes late, so I think we have a couple of minutes for questions. There is a mic; nobody ever uses it. If you stick your hand up, I will repeat your question so the folks following along from home can hear. Yeah. So the question is, can you mix and match physical infrastructure and virtual infrastructure? Let me make that more general: can you mix and match different Nova drivers, so different hypervisors plus bare metal? The answer is yes, but you should do it in separate cells. Nova has a concept of a cell, which is a collection of resources in a single pool. You can run a scheduler across multiple cells, but all of the resources inside a single cell really ought to be of the same type. And really, where the issues come in is the integration with Glance.
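The cell idea, and the image-to-pool matching it implies on the Glance side, can be sketched in configuration. This is an illustration only: cells support and every option name below vary by release, so treat each name as an assumption to verify against your deployment.

```ini
# Hypothetical nova.conf fragment for one cell of like resources.
# Section and option names differ across releases -- verify before use.
[cells]
enable = True
name = kvm-cell          # one pool per resource type: kvm-cell, xen-cell, ...
cell_type = compute

[DEFAULT]
# Let the scheduler match images to compatible hosts; images would be
# tagged in Glance, e.g. with a hypervisor_type=kvm property, and
# ImagePropertiesFilter keeps a KVM image off a Xen or bare-metal pool.
scheduler_default_filters = AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter
```

The design point is the same either way: keep each pool homogeneous, and make the image metadata carry enough information for the scheduler to route around the differences.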
When you have stored a disk image, mapping that to which resources can launch it is not as automated as we would like. There are blueprints for Glance to be able to do transforms on the fly, to take a disk image built for KVM and turn it into something that'll run on Xen or Hyper-V or vSphere and just move it over there and launch it. They're not done yet. So yes, if you keep the resource pools separate, and yes, if you're willing to make sure you've got the images managed correctly in Glance. On bare metal provisioning, by the way, there are two different projects: there's one at Dreamhost and there's one from the UCSB folks, Nimbus Systems, Ryan Stevens. I'm not sure if he's here. If you want to track me down afterwards, I'll make intros. So the question is, what's the best way for an enterprise to get started with an OpenStack cloud deployment? There are two routes. There might be more than two, but there are two routes that I see today. We do this for companies at Piston Cloud, and usually what we do is we sell them our Enterprise OpenStack product. Sometimes they start with our free Airframe product and try it out themselves, and then they say, okay, we get why we want to upgrade. The other route is the number of professional services firms that will come in and do a custom-built solution. The risk with that, and I think Shuttleworth highlighted this pretty well this morning, is that you want a cloud that can be upgraded. You want your cloud built in such a way that you can move from Essex to Folsom, or from Folsom to Grizzly. That's something the products have spent a lot of time working on. I know we spend a ton of time making sure upgrades work. I know Canonical spent a ton of time making sure that upgrade demo would work. Most of the professional-services-based deployments I saw in the first two years of OpenStack weren't thinking ahead.
So there were a lot of Diablo-based clouds that had a month of downtime to get to Essex. So my bias would be on the product side, but certainly both paths are reasonable, and folks have been successful both ways. Any other questions? Was that at all helpful? All right, thank you very much.