All right, well, looks like we should get started soon. I appreciate you all coming by this late in the day. My name is Dan Wendlandt. I work at VMware. This talk is entitled Avoiding the Perpetual Proof of Concept, which I think was controversial enough that it might get some people to come in. And in smaller font, Beating Enterprise OpenStack Adoption with VMware. The reality is most of the time I'll actually spend on that main point. There are going to be a couple other sessions here at the summit that will talk more about VMware's offerings in depth. I'll cover them a bit at the end, and if we have time, give a couple quick demos. But the main thing I want to talk about today is to try to answer the main question of: OK, between a POC and being successful with a real production, at-scale, multi-tenant OpenStack cloud, what are the key stumbling blocks and things you have to knock out of the way for you to be successful on that route? So a bit about me. This is, I think, my 10th OpenStack Summit, if I'm counting correctly. All of them since Austin. I was involved in OpenStack as part of a startup called Nicira. And as part of that, we started the OpenStack networking project, which at that point was Quantum. It's now been renamed Neutron due to a certain company threatening to sue us. And so I was the project team leader, PTL, for OpenStack networking for two years. And now I have a role at VMware, which is kind of defining our overall OpenStack product strategy. That's across networking, which was my original role after VMware acquired Nicira, but now storage, compute, and management as well. So with that, I think I'll get started. So what do I mean by this concept of a perpetual proof of concept? How many of you have used DevStack before? Or any other simple tool to stand up OpenStack? It's really easy. You download an Ubuntu VM or whatever, copy and paste a couple commands.
And next thing you know, you've got Horizon there. And you can boot a VM. And you've got OpenStack. And you can use the CLI and the APIs if you want. And it's pretty cool. But I don't know how many of you are fans of internet memes, but this is a popular one going around about OpenStack recently: one does not simply deploy OpenStack. It's very complicated. And I don't know if any of you were around earlier when they were handing out these t-shirts. This is a personal favorite of mine, a common refrain from developers: well, it worked in DevStack. I didn't test it in anything else, but it worked in DevStack, so I bet it will probably work in production. So these are all just simple ways of saying: we all know there's a big gap between what you would do with DevStack or some other simple deployment tool and a full production deployment. They're great tools if you understand what they're for. But you have to appreciate the jump between the two. So sometimes, I think, it can kind of seem like this: with DevStack, you can see Horizon. You can boot VMs. You can create users. It feels like you're almost there, like the mouse that almost has the cheese. But you have to keep turning and turning and turning to really do everything needed to get to a production-grade OpenStack deployment. So how about some data? Maybe a little proof I'm not just making all this up. How many of you check out the OpenStack user survey? Have you seen this data? It's super useful. I recommend that, A, you participate in it if you're an OpenStack user, or at least check out the data. They typically have a session on it every summit and publish the data. It's basically OpenStack users reporting information about their deployments. What you have here is data over the past three summits where they've done the surveys. You can see the red line is the total number of OpenStack deployments reported.
A great line, up and to the right, shows a lot of people interested, a lot of excitement about OpenStack. The green line is production deployments. So pretty far down, but still not terribly bad. The most interesting bit in that data set is that they started calling out numbers for production deployments over 500 VMs. Which, to be honest with you, 500 VMs is not that big. A rack or two, maybe. That number is almost flat. And the reality is that getting a small, basic OpenStack environment up is pretty easy. But think about the kind of problems you start running into when you go beyond 500 VMs, to 1,000 VMs, 2,000 VMs, 3,000 VMs. This is why we're all doing OpenStack. We're not doing OpenStack to do 20 VMs or 100 VMs. The type of problems you run into there are very non-trivial problems, and typically require a very sophisticated development team to be successful with. And so really what I'm trying to do is help explain all the steps needed to be in that purple bar, rather than just in that red bar. And I think this is concrete evidence for the fairly intuitive observation many of us have had that it's actually a pretty hard jump to make. I want to be clear, I'm by far not the first person to make this observation. In fact, probably the most useful thing you can do is check out a blog by the chief architect at eBay. Link's down here. It was entitled An OpenStack Installation Does Not Make a Cloud. And the blog went on to call out a whole set of things that you have to do beyond just taking the OpenStack code in order to successfully deliver an OpenStack cloud. We're not deploying OpenStack for the fun of it, or at least most of us aren't; we're deploying OpenStack to enable a set of developers to deploy workloads and get real business jobs done.
And so the point was: what are all the things you have to do between deploying an initial OpenStack setup and having a production-grade infrastructure for your business to consume and get real value from? And the logical conclusion was, yeah, there's actually a lot of significant time and energy you're going to have to pour into this. Now, eBay is what I would call a do-it-yourself customer, so they built up their own internal team and built these tools over time. And that's certainly an option with OpenStack. Another option that we'll talk about later, hint, hint, from a product company, is that you can get a product that potentially fills a lot of these gaps as well. So even though I like the wheel diagram, I actually think this is more accurate. You're the mouse, and you want to get to your cheese. And there's actually a whole set of stumbling blocks along the way. You can think, oh, I'm almost in production, I'm gonna get that cheese. Snap! So what I want to do is help you get to production OpenStack without any broken limbs. Think of it that way. And look, that's the path. So you might want to take a picture of this slide. Could be important if you plan on doing production-grade OpenStack anytime soon. So where am I getting the data for this presentation? As I mentioned, I've been involved in OpenStack since it was publicly announced. So I've done deployments on Diablo. I've done deployments on Essex. And especially in the early days of OpenStack, that's how you felt after deploying OpenStack. You weren't sure who won. Maybe you got something up and running, but you were all bruised and beaten too. I think the world's a little better now. But as I was putting together my notes for this, I started listing out all the different customer deployments I've been involved in. And I basically stopped counting at 50. I've been involved in a lot of different deployments.
They've ranged from large service providers to tiny private clouds. And I'm by no means telling you that all 50 of those were successful. Sometimes people were successful; they got promoted, or they moved on to another job, got a great title at some other company because of the success they had with OpenStack. And some of them, to be honest with you, weren't. So what I want to do is help you understand how you can be one of those more successful people deploying OpenStack. And I want to be clear, too: even though I'm from VMware, the actual data for this comes from deployments that are pure open source, deployments that are a mix of open source technologies like KVM with technologies like the Nicira technology, which is now called VMware NSX, mixed hypervisor deployments, and vSphere deployments. So generally speaking, a lot of these points are generic across all of those. One data point where I've gotten a really in-depth view is that VMware actually continues to run an internal OpenStack cloud. It's now, I think, over 250. 250 is a safe number. It's a multi-hypervisor deployment. It's a mix of ESX and KVM with NSX for networking. On any given day, it has somewhere around 5,000 VMs and a couple thousand logical networks, depending on what we're doing on it. It's used for test and dev. It's used for continuous integration. It's used for training our sales folks, training our customers, giving partners lab environments, all kinds of stuff. That's what OpenStack should be able to do. It should be a very multi-purpose tool. And in fact, today, I don't know if any of you went to the Neutron hands-on lab, which was, I think, just an hour or two ago. We actually had 200 different, unique, fully isolated lab environments that everyone in that hands-on lab was using. They're all hosted in this OpenStack cloud at VMware.
Moving on. Just before we dive into the specifics, I wanna give you a little framing for what OpenStack is and how we think about it. As I mentioned earlier, the whole point of OpenStack is to give a team who's actually solving a business problem the infrastructure to solve it. So up at the top, you'll see what we call the application DevOps team, right? They're either writing scripts or using pre-built tools that use OpenStack APIs to programmatically talk to the infrastructure. The infrastructure here is represented in red. This is what OpenStack provides you: a set of tools in terms of SDKs, CLIs, a web portal, all on top of these standard, vendor-neutral OpenStack APIs. And then it's up to the cloud infrastructure team to decide what infrastructure lies under those standard OpenStack APIs. They'll have to pick a set of virtualization technologies. They'll have to pick a set of hardware technologies. Then they'll have to pick a set of infrastructure operations and management tools that actually help them keep the lights on and do all kinds of troubleshooting, capacity planning, security and compliance, et cetera, to run a production-grade cloud. Make sense? All right. So this is the format I'll be using to go through the items I want to talk about. I'll highlight a key stumbling block. And then over on the left, what you'll see is the set of layers in the stack that that stumbling block typically applies to. One of the reasons I did this is that people often hear a stumbling block like scale and think only about OpenStack scale. They don't necessarily think, oh, I have to think about the scale of the compute infrastructure, right? If OpenStack scales infinitely but my compute infrastructure doesn't, your overall solution still doesn't scale. Another good one: I have to think about the scale of my infrastructure operations and management tools, right?
If I build an operations management tool that does something and it works at 10 VMs, there's no guarantee it's not gonna blow up once I get to 1,000 VMs or 2,000 VMs. So let's start with the first one, which is something you should be very familiar with if you've ever installed OpenStack: just the basic need to have OpenStack packaged up, installed, and deployed in a highly available way. High availability for the OpenStack control plane would basically mean something like: I have multiple Nova API servers so that if one fails, the other one can take over, with load balancing across them, et cetera. Now, the reason I have this picture is that you should really think of this as the foundation of any OpenStack solution. It's entirely necessary, but the thing to be careful about is that it's completely not sufficient. And a lot of the discussion around OpenStack, in my opinion in an unhealthy way, is focused on just this. Hey, you can use this tool, you can install it, and it deploys OpenStack in a highly available way. Ta-da, production-grade OpenStack. You really need to think about all aspects of high availability, not just for OpenStack, but for your underlying infrastructure. And there's a whole host of additional things we'll talk about that in my mind are just as important for a production-grade OpenStack cloud. So this is the foundation of a production-grade OpenStack cloud, but there's a whole lot more that you need to think about. Next one is integration testing and support. Now, there are many different models of how you can get this done. But what I mean by integration testing is: who makes sure that when you pick an OpenStack cloud that has this type of compute, this type of storage, and this management tool, that it all works together? That it all works together when you first deploy it, and that it all still works together when you apply this bug fix over here or that bug fix over there?
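A quick aside on the HA point a moment ago. Here's a toy, client-side sketch of what "multiple Nova API servers, one takes over if the other fails" means in behavior. In practice a load balancer like HAProxy plus a virtual IP does this for you; the endpoint names and health-check function below are made up purely for illustration.

```python
def first_healthy(endpoints, is_healthy):
    """Return the first API endpoint that passes a health check.

    A real deployment puts HAProxy (or similar) in front of the
    redundant nova-api servers; this only illustrates the behavior.
    """
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    raise RuntimeError("no healthy API endpoint: control plane is down")

# Hypothetical redundant endpoints; if the first fails its health
# check, traffic falls through to the second.
apis = ["http://nova-api-1:8774", "http://nova-api-2:8774"]
```

With one server down, requests still land somewhere, which is the whole point of control-plane HA.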
It also has to still work together when you decide to upgrade the hypervisor to a new version because you heard the performance is better, and when you upgrade to the next release of OpenStack. And from a support perspective, what do you do when you have a tenant call in and say, I'm getting poor networking performance, what's going on? Or, hey, the Nova server is returning a 500? These issues could happen anywhere in the stack. Could be in the OpenStack control plane; could be compute, network, storage. And really, there are two basic models here. You can build that capability in-house. That's what I would call the DIY model. I've seen people succeed with that. Or you can go to a vendor who provides OpenStack integration testing and support on your behalf. I would say this is probably the area where I've seen the most drastic misjudgments from people, in terms of minimizing the anticipated complexity of what's going on here. They tend to think, oh, someone probably tested this upstream, so it'll probably work in my environment. And you saw that slide with the bruises. This is where a lot of those bruises came from. Well, it turns out the person developing upstream thought they had this version of NSX, whereas the customer was deploying that version. This is actually complicated stuff, and it's non-trivial. So you either really need to decide to build up this capability yourself, which usually means running CI, continuous integration infrastructure, in your team. We know people who have done that successfully. That typically means building up a team of developers and DevOps folks that are very OpenStack-, Python-, and Linux-savvy. And often that takes a lot longer than people expect. Everyone's like, oh, I'm gonna go hire five OpenStack people. Well, you know what the problem is: everyone else is also thinking, I'm just gonna go hire five OpenStack people.
And so if you go the DIY route and you misjudge this, it can definitely blow out your timelines in a very serious way. The other thing to call out, too, when you're thinking about support, is the interaction between support organizations if you have a multi-vendor solution. Again, people tend to think, okay, well, I'll get OpenStack from this person, I'll get my hypervisor from this person, I'll get my networking from this person. Without necessarily thinking about, okay, does the person I'm getting OpenStack from know the underlying hypervisor platform well enough that if there's a bug in that driver, which they probably didn't write, they can fix and patch that bug? These are all very mundane and kind of boring issues, but the reality is a lot of the damage in terms of what people perceive as OpenStack being unreliable comes down to this core issue: whether you're properly covered from an integration testing and support perspective. Like I said, you can do that via multiple paths. You just need to make sure you're covered, whichever of those two paths you go down. Next one is rolling upgrade. To be honest with you, I find it kind of shocking that, and I'm not gonna name names, that's not my goal here, there are major OpenStack distro vendors out there today who actually tell you that you cannot upgrade their OpenStack, right? OpenStack's four years old, right? Like maybe when it was one year old, two years old, I'd get that, but it's over four years old. And they say no, just deploy another one next to it, right? And start putting workloads onto that.
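To make the rolling-upgrade idea slightly more concrete, it generally relies on an N / N-1 compatibility rule: during the upgrade, every service must be on the target release or at most one release behind it. The sketch below is my toy model, not any real OpenStack tooling, and the exact compatibility rules vary per project.

```python
# Release names and ordering are real; the strict N / N-1 rule
# here is a simplification for illustration.
RELEASES = ["havana", "icehouse", "juno"]

def rolling_upgrade_ok(service_versions, target):
    """True if every service is on `target` or one release behind it."""
    t = RELEASES.index(target)
    return all(t - 1 <= RELEASES.index(v) <= t
               for v in service_versions.values())
```

So upgrading the control plane to Juno while computes are still on Icehouse can be fine, but a service stranded two releases back means you've lost the rolling path, which is exactly the "just deploy another one next to it" trap.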
So whatever your approach to OpenStack is, always ask about rolling upgrade. A lot of people jump into production saying, hey, I can go into production with this version, without asking what's gonna happen when that version changes. And asking, for example, how their vendor's code relates to the upstream code, and how soon after a given upstream release they'll ship an actual product around it. This is more data from that OpenStack user survey. This was from the last summit, which was the most recent data available when I put this together. So Icehouse had just been released at this time. And you can tell about as many people were on versions older than two releases as were on the most recent two releases. So again, pretty strong data, right? People see upgrade, rolling upgrade, the ability to move to the next version of OpenStack, as a pretty key stumbling block. Performance validation. So in a POC, what do you do to test? Well, thinking back to when I did it, I probably used the Horizon GUI to boot up a couple of VMs, went into those VMs, did a couple of pings, made sure everything worked, and said, okay, that's pretty cool, right? Well, that's very different from what you need for a production-grade OpenStack cloud. You need to understand the core OpenStack control plane performance. What happens if I boot 200 VMs? Is the OpenStack control plane gonna keel over, or is it gonna be able to handle the load? I can tell you, it certainly depends on how you deploy and how you spread out those services. If you just jam them all into one node, there's gonna be CPU contention, and they're gonna fall over. Similarly for your core compute, network, and storage infrastructure. I can tell you a story going back to at least a couple OpenStack releases. I bet it's still true.
I just don't run the networking project anymore, so I haven't tested it. But subtle differences in how you configure, for example, OpenStack networking, whether you're using tunneling, whether you're using security groups, how they combine, can easily result in a four or five X difference in performance. So you really need to understand the trade-offs between different configurations and make sure you're validating the performance of your desired configuration, that it meets your needs. Obviously, you can validate performance in any one of these areas. Storage performance: is the performance good enough for running a database server? Is that something my application requires? Et cetera. But definitely being able to do a good burn-in, understanding the characteristics and what type of performance guarantees you are or aren't able to provide, is a very key thing to do between POC and production. This is another interesting one. There are a lot of options in terms of OpenStack, right? Everyone knows OpenStack's all about choices. You can choose what back-ends you want to use, and you can even choose what OpenStack services you want to use. So the first question is just: what OpenStack services do I think I need? So, just to take a survey, let's do a level set. How many people have deployed OpenStack with Nova? Oh, more than that, no, really? Oh, come on. Okay, then there might not be much data, because I'm gonna ask even more obscure things. So, how many of those have deployed with Swift? How many had a need for an object store? So, a lot less, right? Now, whether you need that or not depends on the set of applications you're running. Basically, it forces you to think about who's gonna be on my cloud and what type of services they need. It's not a statement about whether Swift works or not.
It's a statement about whether that service and that capability are something your cloud tenants need. How about Heat? Okay, that's reasonable. That's about what I would expect. We're starting to see people moving into the phase where Heat's at least a core part of what they're planning on doing in the future, even if they're not deploying it now. What about Trove, database as a service, anyone? Okay, that about maps to what I would have guessed. We're starting to see some people dabbling with Trove and asking about it, but most people are really just trying to get the core stuff working and solid at this point. So that's just the first basic question: which OpenStack services do I even need? Your POC probably spins them all up, right, because it doesn't have to worry about running them in a production-grade way or tuning them or anything. The next thing is to ask: if I think about the flavors and the needs of the specific applications, what are they? A good example of this: how many of you heard that Amazon had to reboot 10 to 20% of their VMs? Did you hear about that in the news at all? Supposedly it was some security issue in the Xen hypervisor that forced the reboot. No, I don't think they officially said that, but anyway, the point is Amazon basically has an SLA to their customers that says, we can reboot your VM whenever we want. We'll try to give you some notification, but you know what, the server may go down, in which case you get no notification, right? That's an SLA between the cloud and its users. You have to decide if that type of SLA is acceptable to your users, or if you wanna build a cloud platform that can guarantee that a VM, for example, is preserved. Does it have shared storage, so that if a hypervisor loses a disk, that VM's not just gonna go down?
Does it have something like maintenance mode to live migrate the VM off, right? These are things you have to think about. You have to ask: what are the types of applications going on this cloud? Because that dictates a lot about how you need to build the underlying infrastructure. Another example is just standard performance tiering. You can have, for example, Cinder volumes, and you can have different Cinder volume types. So if I'm a database, I probably really care about the IOPS I'm gonna get off a volume. I wanna be hitting fast storage, likely either all-flash storage or storage that's accelerated by a good amount of flash. If I'm just using a volume as a backup mechanism, I probably don't care, right? What type of tiering do you need for your applications? And do you have an underlying infrastructure that can support that type of tiering? I could probably talk all day about this, but I'll move on. User and quota management. This is one that's almost always overlooked in small private clouds. Obviously, if I'm a public cloud, this is a core part of what I do. I have a sales pipeline. I move that customer into the sales pipeline. I probably give them a free trial. I get their credit card info. So public clouds are all about user and quota management. But often private clouds are just like, hey, I'm just trying to get this stuff working, I'm trying to find some people who wanna throw some stuff on here. But before they know it, if your cloud's successful, pretty soon you have a problem on your hands. You've got a bunch of people who wanna get access to this project and that project, and how do they do that? What's your mechanism for requesting access to a project, or asking for a quota bump, and deciding whether to grant it?
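One minimal way to formalize a quota request is a record with an approver and an expiry, so grants can be reclaimed rather than lingering forever. Everything below is illustrative only; none of these names or fields correspond to a real Keystone or Nova API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class QuotaGrant:
    """Illustrative record of an approved, time-limited quota bump."""
    project: str
    resource: str      # e.g. "instances" or "volumes_gb"
    amount: int
    approved_by: str
    expires: date

    def is_expired(self, today):
        # Expired grants get reclaimed instead of lingering forever.
        return today >= self.expires

# Hypothetical grant: 50 extra instances for one month.
grant = QuotaGrant("analytics", "instances", 50, "ops-team", date(2015, 1, 1))
```

The point isn't the code; it's that an approval, an owner, and an expiry date exist somewhere queryable instead of buried in an email thread.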
Quota management is actually something where people typically use really bad, manual channels, like email, to sort out. But they aren't able to do it in a formal way that has approvals, that has visibility, that checks back and says, hey, you said you only needed this quota for a month, I'm taking it back from you. So thinking about how that whole workflow and approval process around your users, their projects, and their quotas works is something you'll wanna do before your cloud grows too big, if you're truly running a multi-tenant cloud. Application life cycle management, or platform as a service. This is about whatever you do on top of OpenStack. Remember, at the end of the day, your business problem isn't solved until you've solved all of the layers in this stack. So what is your strategy for your business case? Are you just gonna let those developers do whatever they want? Are you gonna make an opinionated choice, put some application life cycle management tool, Scalr or whatever you want, up on top? Am I actually gonna expose something that's a little more restrictive than infrastructure as a service, like a PaaS, for example Cloud Foundry? Or are you just gonna let people get raw access to the OpenStack APIs and let them do whatever they want? Those are all reasonable things to do. You have to ask yourself what model you should support for your business to be successful. This next one's a huge one, and actually something that people tend to overlook all the time. And again, you can tell almost all the boxes are lit up, right? Think about infrastructure monitoring, troubleshooting, log analysis, alerts, remediation. At the end of the day, if this is production infrastructure, someone, well, at least they would have had a pager. Maybe they just get their alerts on cell phones now. I don't know.
I thankfully don't have to carry either; someone else does that for our cloud. But at the end of the day, someone's responsible for keeping the lights on, ideally with 24-7 consistency. So what are your tools? How do you have visibility into the OpenStack control plane to understand its health? How would I know if my Nova API servers are getting swamped, or my end users are consistently getting 500 errors? I want to know about that proactively, right? Not get a bunch of random emails from people saying, hey, your cloud's been down for 24 hours, and have to say, wait, I should have known that up front. And that's just an example for the OpenStack control plane. Do you have a team who knows how to troubleshoot the compute infrastructure, the network infrastructure, the storage infrastructure? Do they have the existing tools to monitor that? Are there processes around how that works, who they alert, who they go to for help? If you're starting with a greenfield cloud, you actually have to build all of this from the ground up. So this is something definitely, definitely not to overlook. And log analysis: anyone who's worked with OpenStack in production knows it spews out a lot of logs, right? So you need a good tool to not just centrally manage those logs, but search them, have triggers off them, have alerts off them, et cetera. This is very critical to being able to run a good OpenStack cloud in production. Scale, right? Like I said, we're not interested in clouds that handle a couple VMs. Those are things you can do manually. Ultimately, the reason most of us are building OpenStack clouds is we imagine this large scale-out pool of capacity that our business users can just draw from as they need, right? We can throw additional chunks of hardware into it as needed, and it scales out.
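Before moving on from the monitoring point, the "alert me proactively on 500s" idea reduces to something as small as scanning a window of API log lines and tripping a threshold. The log format below is a made-up stand-in; real nova-api logs differ in detail, and a real tool would do this with proper log aggregation and alerting.

```python
import re

# Matches a standalone 500 status code in our made-up log format.
STATUS_500 = re.compile(r"\s500\s")

def too_many_500s(lines, threshold=5):
    """True when 500 responses in this log window hit the threshold,
    so you page someone instead of hearing about it from users."""
    return sum(1 for ln in lines if STATUS_500.search(ln)) >= threshold
```

The trigger-and-threshold shape is the same whether it lives in a five-line script or a full log-analysis product; the point is that it exists at all.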
But of course, in a POC, you're probably not scaling the OpenStack control plane. You're probably not scaling the compute layer. You're probably not scaling the network. You're not testing any of that, right? And there's an incredible number of issues you can run into here. Just mind-boggling, really. And I actually believe one of the reasons so many people are under that 500 VM limit is that that's an area where you can still get away with simple flat networking or simple VLAN networking. You can probably get away with maybe a single network node, active-passive. You don't have to figure out how to scale out your networking nodes. There's all kinds of scale, at the OpenStack control plane, at the networking layer, at the storage layer, where you have to understand what you're getting into and to what degree those underlying platforms have been validated to operate at scale in the configuration you're using them in. And of course, the sister to scale is performance at scale. There's a great story here. How many of you have ever looked at Load Balancer as a Service in OpenStack? Anyone yet? Okay, so the default implementation of this, I'm not kidding, is basically a Python process that, every time you create a load balancer, creates a Linux namespace and starts HAProxy in it. So if you create one load balancer and try to pump some packets through it, you can probably do pretty well. But imagine a service where people are spinning up load balancers: five load balancers, ten load balancers, right? There's no way to guarantee a load balancer any sort of capacity. There's no way to make it scale beyond a particular host. These are all things you never find out in a POC. You actually have to go do the validation at scale if you're in the DIY model, right? Or work with someone who has.
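To put the scale-out point in code terms: even a trivial placement policy that spreads new balancers across hosts beats piling every HAProxy process onto a single network node. This is a toy of my own, not how the reference LBaaS driver worked; the problem being described is precisely that it did no such spreading.

```python
def place_lb(lb_count_by_host):
    """Return the host currently running the fewest load balancers.

    Toy least-loaded placement. Host names are hypothetical; a real
    scheduler would also weigh CPU, bandwidth, and failure domains.
    """
    return min(lb_count_by_host, key=lb_count_by_host.get)
```

Capacity guarantees need more than this, but without even a spreading policy, every tenant's load balancer contends for one host's CPU and NIC, which is what you'd discover only under load at scale.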
So you can understand that, well, if I pick this driver, it can scale in this way. Same thing with the OpenStack control plane. All kinds of POCs just jam everything into one VM because it's super simple. You can download a VM, or you can run a single DevStack command in one VM, right? But if you're ever pounding on that control plane in any non-trivial way, like your developers will when it's in production, the behavior of the queues between services means that model will never work. You need to farm the services out onto different hosts, or use some other mechanism that guarantees them capacity, and that lets you scale out and give sufficient CPU, memory, and guaranteed resources to each of those components. This is an interesting one. It's another thing that you don't typically think about during a POC. But you need to think: okay, maybe I'm just starting out with some non-important workloads here, but the second I prove that this cloud is successful, are people gonna wanna put more and more stuff on it? And maybe suddenly those new workloads start having personal data in them, right? Or credit card numbers. Or anything else where I actually have to care about security and compliance. So this really comes down to security and compliance at the OpenStack control plane layer. You need to make sure, whether through the distro that you use or, if you're doing it yourself, on your own, that all inter-service communication uses certificates over SSL. You need to make sure that all the passwords are set in a secure way and can be updated. You need to make sure that you have a process in place to react to a security advisory for OpenStack itself. These are all things that, again, if this is really gonna be a production cloud, you need to care about.
And then obviously, if anyone's ever had the joy of dealing with security and compliance, that goes all the way down your stack: change management, making sure you have the proper controls in place so that if someone ever modifies the compute infrastructure, you can prove you've limited access to the underlying infrastructure to the appropriate set of people. All of that. I hate security and compliance, so you can tell I'm moving past that one quickly. Next is capacity monitoring and capacity addition. The cloud you start with is hopefully not the cloud you end with. So how do you monitor and find out that you're running out of compute capacity, or running out of capacity for your volumes, or running out of capacity for your high-IOPS volumes while you still have plenty for your cheap storage volumes? These are things you don't want to find out when someone says, "Hey, I failed to provision the workload I need done for my job tomorrow." You need to be proactively monitoring and understanding this capacity, and you need an easy way to add capacity when it's needed. The second-to-last one is cost modeling and visibility. It's about having visibility into the cloud: how much does it cost? Whether you're doing chargeback or just need to make a business case, this is very critical. And finally, there's a whole bunch of miscellaneous day-two operations you need to think through. I was just talking to someone today who said, "I have to think about the case where I upgrade the BIOS on my hypervisor, and what impact that has on my workloads."
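A minimal sketch of the proactive check being described, assuming you can pull per-tier usage numbers out of your monitoring system. The tier names, numbers, and the 80% threshold are all invented for illustration:

```python
# Hypothetical capacity check: flag any resource pool above a utilization
# threshold before users start failing to provision. The data source is
# assumed to be your monitoring system; the figures below are invented.

def tiers_needing_capacity(usage: dict[str, tuple[float, float]],
                           threshold: float = 0.8) -> list[str]:
    """usage maps tier name -> (used, total); returns tiers at or over
    the utilization threshold, in insertion order."""
    return [tier for tier, (used, total) in usage.items()
            if used / total >= threshold]

usage = {
    "compute-vcpus":     (1800, 2000),   # 90% -- needs attention
    "high-iops-volumes": (45, 50),       # 90% -- needs attention
    "cheap-storage":     (200, 1000),    # 20% -- fine
}
print(tiers_needing_capacity(usage))  # ['compute-vcpus', 'high-iops-volumes']
```

The check is trivial; the hard operational work is collecting accurate per-tier usage in the first place, and having a procurement path so "add capacity" doesn't take a quarter.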
All right, it's probably not something you would have thought about during your POC, but it's definitely something you need to think about before you go into production. So, going back, like I said, I'm not going to spend a whole lot of time on the VMware side; I'll end with a slide that shows lots of different VMware talks you can go to to learn more. But this is the basics of how VMware technology fits into OpenStack. From a compute perspective, you can use vSphere. From a networking perspective, you can use NSX. For storage, we plug into the vSphere storage APIs and use whatever storage works with your vSphere environment. We have lots of infrastructure and operations management tools: vCenter Operations Manager, Log Insight, and IT Business Management for the cost visibility. On another slide I'll explain how these map to the problems we just talked about. There's vCloud Automation Center, and things like Pivotal Cloud Foundry, depending on the model you want to provide your application development teams. And then you can get OpenStack from anywhere: from open source, from a third-party vendor, and recently VMware announced its own distro of OpenStack for people who are really looking for a more tightly integrated solution. We call that VMware Integrated OpenStack. So, just one more slide on what VMware Integrated OpenStack is. It lets you start with your existing vSphere environment. You add VMware Integrated OpenStack, which is just a hardened version of the upstream code running with the VMware drivers configured, plus a really simple install and upgrade process. That makes it very easy for someone with an existing VMware skill set to leverage and deploy OpenStack. And the key thing to realize here is that you're deploying OpenStack on top of your existing vSphere environment, where you've already figured out many of those things I talked about earlier.
You already know how the system performs. You've already created different tiers of storage. You already have security and compliance figured out there, and you've got a bunch of tools for troubleshooting it. So this really lets you vault past a lot of the hurdles you hit in OpenStack and deliver OpenStack to your developers quickly. And then finally, we provide a lot of OpenStack-aware cloud management: things like vCenter Operations Manager and Log Insight, several of the other tools I talked about earlier. Again, these fill key gaps that aren't necessarily in OpenStack itself; they're things that go around OpenStack but help you run a successful cloud for your company. And then we put it all in a scale-out, HA, validated reference architecture, and we provide a single point of contact for support. Because the reality is, when you have a problem, you're not going to immediately know whether it's a problem in OpenStack, or whether OpenStack is giving you an error because of a problem in the underlying infrastructure. So it's very important that you can go to one place and not get ping-ponged between different vendors. Again, there are many different ways to consume OpenStack with VMware; this is an example of a very tightly integrated one. We have customers who take the open source code, customers who use third-party distros, and now, more recently, customers who follow this model as well. So, going back to the things we talked about: I already mentioned some of these in passing, but these are the different components of the VMware solution and which of those earlier problems each one touches on. And again, this isn't by any means a statement that this is the only route to get there.
I wouldn't be so bold. But it is saying that you do need to think about these problems, and one way to solve them is by working with a vendor, and VMware's got a lot of good technology in this space. So I'm actually going to quickly pull up a short demo of VMware Integrated OpenStack so you can all get a sense of what it looks like, and show some of the tools I talked about. There's no sound, so I'll just talk you quickly through what you're seeing. The simple install is that you basically just download a virtual appliance, and you can see up there in the upper right that VMware Integrated OpenStack is now a set of tools inside your standard VMware vCenter. You can click on that and choose to deploy OpenStack. That takes you through a basic wizard where you give it a set of IP addresses for the OpenStack services and choose the clusters you're going to use for OpenStack capacity, and it automatically boots up, you can see the VMs over there, a fully HA, fully production-grade, scalable OpenStack control plane. Your end users can log into Horizon, or use the APIs or the CLI, just like anything else they do. Now, the important thing to notice is that there's nothing VMware-specific about what you're seeing here. Your end users get standard OpenStack APIs on top of really great production-grade infrastructure. But you as an operator have a whole set of VMware tools at your disposal for things like troubleshooting and cost visibility, all those things we talked about. This is vCenter Operations Manager; it monitors the OpenStack control plane as well as individual VMs. These are tools inside NSX that make network troubleshooting simpler. This is Log Insight, that centralized log analysis tool, with dashboards that are OpenStack-specific.
This is a traceback I used to troubleshoot an OpenStack problem a little while ago, a DHCP error or something, I think. Anyway, we have a full dedicated 40-minute session tomorrow where you can see a deep-dive demo of that; I just wanted to give you a little teaser trailer. With that, I'll show you the other sessions. On Thursday we'll be talking a bit about some early reference customers. These are people we worked with early on VMware Integrated OpenStack to get to production-grade deployments with more of a high-touch model. We're now in beta for our software aimed more at the low-touch model, where a VMware administrator should be able to be successful with OpenStack without necessarily having a professional services engagement. But I'll talk more about these in our session on Thursday morning, the session in the upper right. Monday's sessions have obviously passed, but everything's online; OpenStack is great about sharing those videos, so you can watch them on YouTube later. There's a bunch of sessions tomorrow afternoon; the deep-dive technical demo is the second session I mentioned there, and then a couple of other talks. Here are some great resources. Hands-on Labs: VMware has this thing where you can click a couple of buttons in your browser and we spin up a VDI session to a full OpenStack and VMware environment. It's not the production-grade environment we talked about earlier, it's more that demo/POC mode, but it's a good place to start if you're looking to learn more about OpenStack and VMware. This is the link if you're looking for the product page or to sign up for the beta. And we also have a general OpenStack community page where you can ask questions about VMware and OpenStack. So with that, I want to say thanks and open up for any questions. I think we're over, but I'm not sure if we're the last session or not.
So please let me know if we have time for questions. This is my Twitter handle and my email if you have questions. And if you saw this and said, "Hey, you totally missed this other stumbling block," let me know, I'd love to add it to the deck for future use. So thanks. Any questions? All right, I'll play one more video in the background; you can stay or file out as you wish. It's a video showing how you can take VIO, which again is designed for use on VMware infrastructure, and actually extend it to work with KVM deployments as well. The key takeaway here is that with VIO we support all the standard OpenStack APIs, so if you have something else that also supports the standard OpenStack APIs, you can present what looks like one cloud to your users. So this is, let's see, oh, it's way at the end now. So let me, oh, I've ruined the surprise. Hopefully nobody noticed. All right. Okay, so this is just an example of a user logging into the VIO portal, which is standard Horizon; we don't modify or tweak any of this. The value to our customers is that their developers get standard OpenStack APIs. You can see I've got one hypervisor, which means I've got one vCenter cluster with 32 vCPUs, a couple of them in use, and a bunch of RAM. I can go over and show you a set of instances that are spun up. This is all standard VMware Integrated OpenStack, exactly what you'd expect out of the box. The interesting thing happens next. What we're going to show now is the ability to go up to the top and select another region. And this region doesn't have to be VMware infrastructure at all; in fact, this is a region that includes KVM infrastructure. But I can still be logged in as the same user, using the same portal, the same tools, the same APIs. And here you can see I've got an instance spun up.
It's a KVM instance, and it's spun up on an Ubuntu hypervisor. So again, this addresses the point some people raise: "VIO seems like the right way to start, but what happens if I want to use KVM down the road?" That's why we wanted to show this example. If your solution is vigilant about adhering to standard OpenStack APIs, then interoperability like this is trivial. And that's fundamentally the commitment VMware makes to its customers: we're going to expose OpenStack on best-of-breed VMware infrastructure, and we're going to expose those standard OpenStack APIs. All right, again, thanks for your time. If anyone has any questions afterward, feel free to come up.