So, quick show of hands before we get started. Who here is already running OpenStack in production? A few hands. How many people are still kind of learning about OpenStack? Okay, a few more. All right. Well, I'm going to start off then by thinking back in time a little bit. I got started in OpenStack in 2011, so I've been doing this for a while now. 2011 was a pretty exciting time to be working in cloud computing. There was a whole lot going on, and a bunch of new stacks coming out. I had just been to my first design summit out in California, and there was this brand new shiny OpenStack release called Cactus. Audio-video technology obviously still has a few problems since then. Turns out getting Cactus running in my lab when I went home from that design summit was actually a whole lot of work. A few months later, another release came out, and that was even more work. And it wasn't actually possible to move things in between the two.

And keep in mind, this is well before a lot of the projects that we know and love today, and that you're probably running in production today for those of you that are running production clouds, even existed. Cinder didn't exist back then; volumes were still part of Nova. Neutron didn't exist back then; we were just starting to design Quantum in those days. Ceilometer was a twinkle in somebody's eye, and Heat didn't exist either. So a lot of those projects weren't even there. As we've seen OpenStack grow over time, there was a real concern for a while about how much complexity that was going to add to the picture. And managing OpenStack in those early days was kind of awkward too. A lot of the error messages were not very good. The behaviors were inconsistent. There were a lot of knobs you could turn without really knowing what they did. The documentation wasn't that great. And then when it came to a lot of the day-two operations, that was really tough. If I wanted to patch my cloud, if I wanted to install a security fix, if I needed to change some of the underpinning libraries (maybe I wanted a new version of RabbitMQ, or I needed to upgrade my database), there weren't very good reference architectures in those days for how to deal with high availability so that I didn't have a single point of failure in my cloud. And then there was still this whole problem of how do I take all the workloads that I had before and move them onto this new environment, so I don't have to stand up two things all at once. So, kind of clunky in those early days, right? And that could sometimes be both painful and costly. It was painful for me as somebody who was operating a cloud in my lab, and it was costly in that it was taking a lot of my time and attention away from the other things that I needed to be doing. So not an ideal situation.

Fortunately, this is not 2011. This is 2016, almost 2017 now. And we're no longer dealing with Cactus; that is a long-retired release in the OpenStack world. We've come a very long way, and I think what you're going to find is that the tooling we have now for OpenStack is significantly better than it was four or five years ago. So by way of introduction, my name is Mark Voelker. I'm the OpenStack architect at VMware. As I've said, I've been doing OpenStack since about 2011. I co-chair the DefCore Committee in the OpenStack community. And I was one of those guys up on stage yesterday doing the Interop Challenge as well.
So interoperability has been one of the things that I've been focused on lately. I also run the Triangle OpenStack Meetup back home in North Carolina in the United States. And if anybody wants to bribe me afterwards, buy me a box of donuts and I'll tell you whatever you want to know. So let's talk about what a modern OpenStack cloud should look like and how a modern OpenStack cloud should behave, as an operator and a little bit as an end user as well. And since this is a VMware sponsored talk, and since they pay my bills, we're going to talk about our specific product, which is VMware Integrated OpenStack.

Basically, VMware Integrated OpenStack, for those that aren't familiar, is an OpenStack distribution that you deploy as a vApp in your vCenter. So all that existing virtualization technology that you have in your data center, you can actually pick up and use with OpenStack. All the major drivers for the VMware underpinnings are upstream code. If you look back over the past couple of years, we've been a top-ten contributor to pretty much every release. If you look at the TC-approved release projects in OpenStack, I think we're number nine in this last cycle. And VIO is an OpenStack Powered product. For those of you that aren't familiar with the Powered program: in order to get that little OpenStack Powered mark in the upper corner there, you have to pass a set of interoperability tests and make sure that you're using designated sections of upstream code. So this is real OpenStack. This is as real as it gets when we talk about, from the Foundation's perspective, what actually is OpenStack, right? And it is considered an interoperable product. The 2016.08 guidelines are the most recent guidelines that the Foundation has issued on interoperability, and we were one of the first products to comply with those.

So the name is VMware Integrated OpenStack, and we should talk a little bit about what the "Integrated" in there means and why we named it that. Integrated in our case means it uses a very well-tested, cohesive stack underneath. And that's really easy for us, because it turns out we own a lot of the stuff that goes underneath this thing. We use vSphere for the compute. We use NSX or the vSphere Distributed Switch (VDS) for the networking. There's the VMDK driver that we use for all the storage things, both for Cinder and for Glance, for example, into which you can plug any vSphere-friendly storage that you've probably been using for years, whether it's an EMC array or maybe a new SolidFire storage platform or vSAN. And it can be managed with vRealize tools, and we'll talk a little bit more about that in a bit. Interesting for a lot of folks in the room that are Enterprise ELA holders: if you have an Enterprise ELA with VMware, this is actually free for you. You can go get the bits absolutely for free, and if you want to call in and have support for it, there's a per-socket charge. The current shipping release is VIO 3.0, our third major release, which is built on the Mitaka release of OpenStack. It features a highly available control plane and also a compact version: if you want to start small and pop something up in the lab, we can basically put the control plane on a single VM for you, and then you can expand that over time to the full HA architecture. It also features the ability to import workloads, which we'll talk about a little bit later on.
So this slide lays out our definition of what a good, solid underpinning for a cloud looks like, and how our parts play into the bigger picture. As you can see, there are the common OpenStack services: Nova, Cinder, Glance, Neutron, all the familiar suspects for most folks. We package that up for you and make it easy to deploy and install, which we'll talk about a little more in a few minutes. And then you can use all the standard tooling for OpenStack to interact with this cloud. You can use Heat templates if that's your thing. Maybe you're a Terraform user. Maybe you want to use Ansible on top of the Shade libraries, like we did on stage at the Interop Challenge demo the other day. All those things pretty much just work, because this is a real OpenStack. And then our drivers, obviously, talk to the products underneath, and we have the management side as well.

So obviously, one of the first challenges with OpenStack is getting the thing stood up. OpenStack is a whole set of services, so there are a lot of moving parts, a lot of things to install. And even past the core services, there are things like the RabbitMQs and the databases you've got to worry about, and so forth. In the early days, back in 2011, that was well-nigh impossible, and involved days of effort just to stand up a real cloud in the lab. These days, it's much easier. There are a lot of good installers out there nowadays; you can look around and see some of the ones that have been developed in the community, and a lot of other products piggyback off those. In our case, what it looks like is basically this: you go into vCenter and you upload an OVA, just like you would for any other vApp. If any of you have installed NSX, for example, it's actually quite similar. You upload your OVA, it creates a wizard in your vCenter, you enter IP addresses, passwords, and a few other things like that, click a button, and off it goes. We'll go ahead and install this for you with a predefined architecture that gives you a pretty solid cloud. So one of the advantages here, for those of you that already have a lot of vSphere in your data centers, is that this is going to feel pretty familiar to your vCenter admins. There's not a whole lot of new stuff to learn just to get started and have something to start working with.

When that happens, under the hood we'll be deploying all the OpenStack components. We'll also deploy a load balancer pair with keepalived, which is the entry point to the cloud. We load balance both the external APIs coming in from your end users and the internal crosstalk between components. In other words, when Nova wants to talk to Neutron to plumb a VIF into a network, that goes through the load balancer pair as well, so that we can send those requests to whichever node is alive and well. There's a control node pair, and there are RabbitMQ nodes. Those actually don't go through the load balancer; failover there is client-based, so the individual services are all configured to know about all the RabbitMQ nodes, and they talk to those queues individually. If one goes down, they fail over. We use MariaDB with Galera Cluster. For those who aren't familiar, that's a synchronous multi-master replication technology. We actually use it as an active-passive pair in our case, so that all our writes are directed to a single node.
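To give you a feel for what that HA plumbing looks like from an individual service's point of view, here's a minimal sketch in the style of a Mitaka-era nova.conf fragment. The hostnames and credentials are made up, and VIO generates all of this configuration for you, so treat it purely as illustration of the pattern:

```ini
# Illustrative only: a service pointed at all RabbitMQ nodes (client-side
# failover) while its database traffic goes through a single VIP.
[oslo_messaging_rabbit]
rabbit_hosts = rabbit1.example.local:5672,rabbit2.example.local:5672
rabbit_ha_queues = true

[database]
# The VIP fronts the Galera pair; the load balancer directs writes to one
# active node to avoid multi-master write conflicts.
connection = mysql+pymysql://nova:secret@db-vip.example.local:3306/nova
```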
The reason for that active-passive arrangement is largely that in the OpenStack code base, there are still a few instances of database access patterns that will cause deadlocks if you actually do multi-master writes. So this is something that we've been under the covers with for a while, and we've figured out what works pretty well. We deploy one Nova Compute node per vCenter cluster, which is a little different from what you see in some of the other hypervisor offerings out there. If you look at, say, a KVM installation, you're going to have basically one nova-compute service per hypervisor, which means you have a whole lot of nova-compute processes to manage. In our case, since we're talking to vCenter as the endpoint, we actually have nova-compute talk to a cluster in the VC. There are a couple of advantages to that, which we'll talk about in a couple of minutes, but it's a slightly different architecture for those who aren't aware. And then the other bit that we deploy is what we call the OpenStack Management Server, or the OMS. That's basically a single VM that's in charge of running the Ansible code that we use to do the deployment, and then also doing a lot of the day-to-day operations: patching, upgrades, starting and stopping services on demand, those kinds of things. If you want to see a video of the install process, I won't go through that whole video here, but there's a link in the slides where you can go check it out on YouTube. It's pretty quick; I think the video is about 20 minutes or so for a full install, and we'll make these slides available afterwards, obviously.

So at the end of the day, what this does is put together all the power that VMware has developed over years and years of working in the virtualization field with the power of OpenStack, which we think is a pretty good combination. And if you're an ops person, or you have VC admins in your data center who've been working with vCenter and vSphere for a while, this is going to feel very familiar to them. In fact, it's not just going to feel familiar; in many cases, they're actually still going to be able to do things the same way they've been doing them before, if that's what they want to do. At the same time, you also get the APIs that your users are asking for, and the self-service they're asking for as well. So again, the benefits of traditional vSphere environments plus the benefits of OpenStack, all in one place.

Let's talk a little bit about some of the things your VC admins can do. For example, I mentioned earlier that we deploy one nova-compute per VC cluster. What that means is that the nova-compute process essentially sees 14 or 15 or 16 servers as one computer. That's kind of an interesting thing, because it means that within that cluster, there are a lot of things you can do that OpenStack neither knows nor cares about. If I want to vMotion a VM as an admin, or I want to evacuate one of those hosts, I can just go into vCenter and do that the way I've always done it. And as a cloud admin, that makes my life much easier. My users never know the difference, and OpenStack doesn't even have to care. That also allows us to take advantage of some of the features that have been built into vSphere for a while, like DRS and SPBM. So if you want to add DRS rules or use storage policies to manage storage, those are all things you can do as well. And obviously HA too.
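To make the per-cluster compute model concrete, here's roughly what the relevant Nova configuration looks like with the Mitaka-era vmwareapi driver. Again, the names and addresses are placeholders, and VIO writes this configuration for you; this is a sketch of the upstream driver options, not VIO's actual files:

```ini
# One nova-compute service manages an entire vSphere cluster via vCenter,
# instead of one nova-compute per hypervisor as with KVM.
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.local           # the vCenter endpoint, not an ESXi host
host_username = administrator@vsphere.local
host_password = secret
cluster_name = Cluster-01                 # the VC cluster this service fronts
datastore_regex = prod-ds.*               # optional: restrict usable datastores
```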
The other thing that's kind of interesting about that architecture is the decoupling of the control plane and the data plane. This is actually really important when it comes to things like patching, upgrades, and maintenance in the cloud. If an ESXi host goes down, VMs can be rescheduled to another host in the cluster. So for those of you that have more pet-like workloads, where you don't want those things going down because they're not stateless applications, this might be a good match for you. Even if something like the vCenter goes down or the NSX Manager goes down, the workloads aren't impacted, because what had a problem is the control plane, not the data plane. All those packets keep flowing, all those VMs keep running, all the storage stays there. That's something that's a little unique to this architecture. When you're upgrading and patching services, again, we could basically take the whole control plane offline if we wanted to, and all those workloads would keep functioning. So for those of you with a lot of end users who have apps that don't want to be impacted by your maintenance schedules, this is an advantage for you. And what that really gives us is a transition path, because there are a lot of apps running in traditional virtualization environments today that are more pet-like, and then there are a lot of these newer apps that are more cattle-like: they have a little more statelessness, the individual VMs aren't as important, and maybe there's redundancy built into the application, like web servers spun up behind a load balancer or something like that. So this makes an environment where those things can coexist.

And obviously the other interesting thing there is that we've reduced the number of nova-compute processes quite substantially, which means there's less work for you as an admin to trace down problems. If you've been to any of the operators' mid-cycle meetups over the past couple of years, it seems like one of the things that inevitably comes up is, "man, I've got so much stuff going on in the RabbitMQs and they're hard to keep alive," and so on and so forth. With what we have here, instead of 16 nova-compute processes, we've got one for that entire cluster. So there's a lot less chatter going on there. That means that when there is a problem, it's easier to track down what's going on, and there's a lot less load on that control plane as well.

All right, so at this point we've seen some of the advantages of the basic architecture. We know the things we can do with it, and we've got an easy way to bring it up and install it. So far so good. The next problem is: okay, we've got this cloud built, and there are no workloads on it. This is what we call the empty cloud problem. This is often the result of the IT shop in an organization having built the infrastructure for people to run stuff on, while people already have their stuff running somewhere else. So what's their incentive to move, right? Or in other cases, maybe they wanted the cloud and you were a little bit late, and they've already started running stuff on a public cloud. So there's this issue of: if I build it, will they come? We've seen this a lot when we go into enterprises. One of the ways we can solve that is that, as of VIO 3.0, we can actually import existing workloads.
So basically, if you've already got a bunch of stuff running in your traditional vCenter virtualization environment, we now have the ability to bring those workloads under the control of OpenStack without disrupting them. That allows us to do things like take your existing vSphere templates and import them as Glance images, or take your existing running VMs and put them under the control of Nova. Once that's done, you can take those same VMs and work with them through all the traditional Nova APIs. There are a few little details that are not quite there; I think we can't resize the VMs. But we can start them, we can stop them, we can get lists of them through the Nova APIs, all the other traditional kinds of stuff (there's a quick sketch of that at the end of this section). So what this is, is a way to bridge that gap again. You've got a traditional virtualization environment, you've got people who want cloud, and this is a way to bring those together and get a faster return, and to get that critical mass in your cloud to make the project succeed earlier. The last thing any cloud admin wants is for their cloud not to be used. So this is one way we can get that cloud full of running workloads faster.

Okay, so now we've got a cloud. It's relatively easy to stand up, we know we've got a decent, robust architecture under the hood, and now we've been able to get some workloads on it. The next step is that now I've got to worry about all the day-two stuff, because now I've actually got real users on the thing. So it's kind of important that this thing stays alive, and that I can manage it, I can monitor it, and I know what's going on in my infrastructure. There are operations, there's patching, there are upgrades over time that we're going to have to worry about, and just being able to keep tabs day to day: who's using what in this cloud, where things are running, maybe I need to troubleshoot some problems, all those sorts of things. These are all things that we baked into the design of VIO as well. Operability has been a longstanding, important factor to a lot of folks in the traditional virtualization world, and especially on the networking side as well. When we look at things like NSX, being able to know what's going on in your networks and being able to secure them from an operator perspective has been a really important thing for a long time. So we didn't think we could build an OpenStack cloud unless we took a lot of those things into account.

So let's start with patching. Security vulnerabilities happen, not just in OpenStack itself, but also in a lot of the underpinning components. I'm sure everybody remembers Heartbleed and some of the OpenSSL issues that have come up over the past few years. Databases need to be upgraded; there are bugs in things like RabbitMQ that need to get fixed over time. Even the vSphere infrastructure itself, right? There are maintenance releases for vSphere and for NSX and for all those components. So for the OpenStack control plane itself, as well as the databases and the MQs and those kinds of things, patching is pretty simple with us. Again, we've got a distributed system where the control plane is made of many VMs, and we may need to apply patches to all of them. Maybe it's a configuration change, or maybe it's a code change. The way we do that in VIO is through the OpenStack Management Server that I mentioned we spun up earlier. That becomes the central point for administering a lot of this stuff.
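Before we get into patching, here's that promised sketch of working with imported workloads through the standard tooling. This is plain python-openstackclient; the server name is made up for illustration:

```bash
# Imported VMs behave like regular Nova instances for most operations.
openstack server list                    # imported VMs appear alongside the rest
openstack server show  legacy-app-01     # flavor, project, and addresses via Nova
openstack server stop  legacy-app-01     # lifecycle operations work as usual
openstack server start legacy-app-01
# Note: per the talk, resize is one of the few operations not supported
# for imported VMs.
```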
Back to patching: when we need to apply a patch, it's basically uploaded to the OMS and off we go. There's a knob we can turn there to deploy it out to all the infrastructure it needs to touch, and we can bundle patches together. Again, we've got a highly available architecture, so it's minimally disruptive or non-disruptive, and the workloads never go down in that case. Because again, if we wanted to, we could take the whole control plane offline and most of the workloads would just keep running; it wouldn't matter. Deployment and service restarts are all automated. We've taken care of that under the hood, and whether you're running one cluster or a dozen or a thousand doesn't really matter; we can push those patches out.

So let's talk about monitoring a little bit. Monitoring has been a hot topic in OpenStack as well, because what we have is a set of projects that are all running independently. Nova has a whole different set of log files than Cinder, which has a whole different set of log files than Horizon or any of the other components out there, right? And they behave a little differently as well. There are differences in the way they do things; they're pointing at different databases; they have different message queues and exchanges. So there's a lot going on there. Well, VMware has been in the management business for quite some time, and we have a whole business unit that's dedicated to the management suite, which is actually the people that pay my bills. So I should probably not go through this presentation without mentioning a few of the things they do. vRealize Operations has management packs for OpenStack. If you're familiar with vRealize Operations, it's something that's been around for managing and monitoring traditional vSphere infrastructure for quite some time. We've added a management pack that provides specific information for OpenStack as well. So now we can not only manage the underlying vSphere infrastructure, but we can also get information about OpenStack itself. Log Insight is one of the most popular tools our customers use right now. For those familiar with the ELK stack, it's a similar concept: basically it correlates a lot of logs and lets you put together a lot of searches and correlations; a pretty powerful product. There's vRealize Business for costing. There are customers out there who want to do showback or chargeback; they want to know who's using what. Hey, maybe the accounting department is really using a ton of resources this time around, and maybe the marketing folks haven't been using so much, so we should shift the costs around. vRealize Business can help with that.

And it's important to note that even if none of those are what you want to use, maybe you've already got a really awesome Nagios server that you just really love. This is traditional OpenStack, right? All the OpenStack services are just like any other OpenStack's, and in that regard, you can use whatever tools you want. In many cases for customers, that's whatever tools they're already using in their data center, be it Nagios, Zabbix, or whatever else (there's an illustrative example below). And there are some other things we can do through the OMS as well: day-two operations that are not uncommon. Gosh, this cloud that I imported all these workloads into has now become so popular that I've run out of storage. I need to add more storage to my cloud, so I want to import some new datastores. We can do that through the OMS.
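As an example of bringing your own monitoring: because the API endpoints are stock OpenStack, a plain HTTP check is often all you need. Here's a hypothetical Nagios definition probing the public Keystone endpoint; the host name and port are assumptions about your deployment, not anything VIO ships:

```
# Hypothetical Nagios objects: alert if the Keystone API stops answering.
define command {
    command_name    check_keystone_api
    command_line    $USER1$/check_http -H $HOSTADDRESS$ -p 5000 -u /v3 -e '200,300'
}

define service {
    use                     generic-service
    host_name               vio-api-vip
    service_description     Keystone API
    check_command           check_keystone_api
}
```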
Through the OMS we can also add new hosts and retire old hosts. Hardware failures happen, hardware refreshes happen, so those are things we can help manage as well. And then starting, stopping, and restarting services, and knowing what's going on in those clouds.

There's actually another kind of interesting thing, especially for the VC admins in the room: you can find out a lot of information about what's going on in your cloud just by pulling up your vSphere client. If you're familiar with the VC web interface that's been around forever, it's the kind of thing your VC admins log into every day and use all the time, right? So wouldn't it be great to get some OpenStack information in there? What we've done in VIO is just that. We have a plugin for the vCenter web client that pulls a lot of information from the OMS and from OpenStack, so you actually get a lot of information about your cloud right there in your vCenter. If you look down the side there, you can see folders of infrastructure, and each of those correlates to a project in my OpenStack cloud; you can actually see the UUIDs of the OpenStack projects there. Within those, you'll find folders for images, instances, and volumes. And if you pull up one of those VMs like I have here (what we're looking at, by the way, is actually a screenshot from the vCenter that was running the Interop Challenge on stage the other day), you can see a whole bunch of notes that give information from the OpenStack side of the house. You can see what flavor the instance was spun up with. You can see its UUID, its project ID, its project name, sizing information, all that kind of stuff that we pull from Nova, right there in the VC. And also some security group info. So again, we've taken a lot of the information you can get from OpenStack and made it a little more approachable by putting it in a place where your infrastructure administrators can find it, use it, and know how to deal with it.

So now let's talk about upgrades. Upgrades are kind of a fun topic in the OpenStack community. In 2011, when I started with Cactus, it was basically, literally impossible to upgrade OpenStack. Database schemas changed, and there were no migrations between releases. It was kind of a mess. It was really: stand up a whole second cloud, run new applications there, and then figure out at the application layer how to sync data between the two. It was pretty terrible. And OpenStack realized pretty quickly that that was going to be a problem, and made great strides in making things more upgradeable. Still, though, we do have a pile of services out there. So when we look at doing upgrades, we've got to worry about not just "hey, if I upgrade Nova, is that going to work?", but will Nova now work with Neutron, and will that work with Cinder? And will my Heat templates still work when we're done with all that? So there are, again, a lot of moving parts. And again, it's not just OpenStack, but all the underpinning stuff for OpenStack as well. You've got to think about the databases. You've got to think about the RabbitMQs. You've got to think about the base operating system those things are running on, right? Because whether you're running on Ubuntu or Red Hat or whatever, those operating systems need patches and upgrades over time as well. Even the protocol layer.
For those of you that are running PCI data centers, you may have noticed in the past year or so that the PCI DSS has now disallowed the use of TLS 1.0, which is technology that's been around forever. That's something we've seen customers start to tell us about: hey, we really need to make sure this is disabled all across the infrastructure and that we're only using TLS 1.1 or 1.2. So even the protocols you use may have a finite lifetime and need to be upgraded over time.

So I want to talk a little bit about how we do upgrades. And just FYI, there's another session just down the hall immediately after this one; you can just sort of follow me, because I'm going there after this. We're going to talk a little bit more about upgrades there, so if you want more information, come talk with us then. The way we've chosen to do upgrades is with a blue-green pattern. This is something that's been around in the distributed systems world for a while, though it seems to have escaped a lot of folks at the app level. So we'll talk a little bit about what that entails. The general idea is that we have an existing control plane, which we call the blue control plane, that's already there and running day to day. Now, I mentioned earlier that we run a load balancer pair in front of all this, both for the internal and the external API access. That load balancer is going to do us a big favor by allowing us to stand up a whole second control plane. So rather than take the existing stuff, upgrade it in place, and hope that it all happens with the right timing and that all those pieces work with each other if they come up at different times, what we actually do is just stand up a second control plane. Not a whole other cloud, just the control plane piece, and in our case, that's a handful of VMs. If you're running a cloud of any size, chances are pretty good that you've got the capacity for another six or seven VMs, so that's not a big deal for almost anyone we've talked to.

Once we've got that second control plane stood up, one of the advantages is that it's completely functional. We can plumb it into our load balancer but still keep sending all our requests to the old control plane, and now we can actually test the new one. We can go test and make sure that everything works right, and we can even use that as a chance for end users to do some testing. It may turn out that OpenStack has changed an API or introduced a new API, and they want to make sure their application workloads still work with the new thing, right? So this is a chance to actually go in and do some testing before you flip the switch and put it in production. So we can do our testing, and we can decide right then and there that we don't want to go ahead, and throw it away. Or, once we've vetted it and decided this is what we want to do, we can start cutting traffic over from the old to the new. In order to do that, we need to bring the data along. One of the things that happens in OpenStack between releases is that the database schema sometimes changes. Tables get added, columns get added or dropped; things happen at the database layer. So there's a period where we need to do some database synchronization, and this is the only point at which the control plane has any downtime.
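For reference, the schema transformations in that synchronization step are the standard per-project migration tools that ship with OpenStack. VIO drives this for you as part of the blue-green data sync, but the Mitaka-era equivalents look roughly like this:

```bash
# Illustrative Mitaka-era schema migration entry points, one per service.
# VIO runs the equivalents automatically; order and wrapping may differ.
keystone-manage db_sync               # Keystone
glance-manage db_sync                 # Glance
nova-manage db sync                   # Nova
neutron-db-manage upgrade heads       # Neutron (Alembic migrations)
cinder-manage db sync                 # Cinder
```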
And again, the workloads have no downtime, right? The workloads keep running; all the existing stuff in the cloud keeps going. What we do during this period, though, is lock out the API services, so that nothing changes in the cloud while we're syncing that data over and running those migrations, so that everything is fully consistent for the new control plane. We start out with an empty database on the green side, bring in all the data, run the transformations so that all the schemas line up, and then we're good to go. At that point, once the synchronization is done, we can cut off access to the old blue control plane and start sending everything to the new one. So at that point, our new control plane is in production, and it's good to go, right?

And this is the point at which we've already synced all the data over, but the old control plane is still there. So if there's something we forgot, if something terrible has happened, all the data and all the services are still there. At worst, if we want to cut back over, all we have to do is reestablish that link on the load balancer: cut off the green control plane and reestablish the blue one. If there's data that changed in the intermediate time, we'll lose it, but that's a pretty small price to pay if something major has gone wrong in your cloud. And keep in mind also that if something failed before that point, before we did the data synchronization, we've had no downtime at all for anybody, control plane or data plane. And that green control plane is still there for us to do some forensic analysis on and figure out what went wrong. Maybe something happened in the deployment, maybe storage went bad, who knows? We've actually had one customer, I think, who literally had a storage array go south while they were doing an upgrade, and this kind of saved their butts, right? Once everything's happy, though, we can reclaim the old control plane's resources, and we're right back to where we started: we have a control plane, it's behind a load balancer, and everybody's happy.

So again, this is a neat thing, and it allows us to do things like upgrade hardware. I've talked to a lot of customers who want to do hardware refreshes at the same time they're doing the software. So when we build this new control plane, we're not going to build it on that old storage array; we're going to put it on this new all-flash array. We're going to put it on our shiny new servers or our brand new networking gear or whatever it is, right? We can test it before it goes live. We can roll it back really fast. And, importantly, it allows us to do that forensic analysis on either side of the coin. If something goes wrong when we're deploying that green control plane, we can do the forensic analysis there without impacting the existing control plane that's already running. And if something fails on the other side, we can do the same thing with the old control plane as well. The other neat thing about this is that it reduces complexity on the code side as well. As an end user, you don't necessarily care about that, but for somebody who's actually producing this software, it makes it much easier for us to test and be confident that we're putting the right thing out on the market, because we're not actually doing a special case when we're doing an upgrade. When we build that green control plane, it's just a deployment.
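To picture the cutover mechanics, here's a minimal HAProxy-style sketch of flipping traffic between the two control planes. VIO manages its own load balancer pair, so this illustrates the blue-green pattern rather than the actual shipped configuration; all addresses are invented:

```
# Illustrative blue-green cutover: repoint the frontend's default backend.
frontend openstack_api
    bind 10.0.0.10:443
    default_backend blue_control_plane    # flip to green_control_plane at cutover

backend blue_control_plane
    server blue-api1  10.0.1.11:443 check
    server blue-api2  10.0.1.12:443 check

backend green_control_plane
    server green-api1 10.0.2.11:443 check
    server green-api2 10.0.2.12:443 check
```

Rolling back is the same edit in reverse, which is why the old control plane stays around until you're satisfied.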
That green control plane is built by exactly the same code that we ran to do the initial install, right? So there's no code path difference between the two until we actually start flipping those bits and moving the control planes over. That makes it much easier for us to be confident that we've got a very simple, well-defined process, and we can really focus on getting that right. It also doesn't depend on N-minus-one or N-minus-two backward compatibility. One of the things we've heard from operators over the years is: it turns out maybe I'm not going to upgrade my OpenStack every six months, because my procurement process or my vetting process within the organization says that's a little too often. Maybe I'm going to do it once a year, maybe even less often than that. So I might wind up skipping releases, right? And what I don't want to do is an in-place upgrade where I now have a Mitaka version of Nova trying to talk to an Icehouse version of nova-compute. Bad things happen there, right? So in this case, we're actually doing it holistically: that green control plane is a complete, all-one-version control plane, and all we have to do is sync the data between the two.

This also gives us an easy way to deal with the addition of components. Maybe we add things to our cloud over time. Maybe we add a new service; we want to add Ceilometer to our cloud where we didn't have it before. Or maybe Ceilometer changes its architecture. Ceilometer used to be a little more monolithic than it is now; it's since been decomposed into the Aodh project, the Gnocchi project, and the Panko project. So when OpenStack makes major changes like that, we can actually deal with that really well here, because again, it's just a new deployment that we can functionally test before we start moving real work to it. So upgrades are testable, they're transactional, and they're much less awkward than trying to upgrade one little piece at a time. And we think that makes a lot of sense in an OpenStack world.

So we've actually vetted this in the field a couple of times. I don't know if Nathan's here; he once told me he actually did an upgrade while he was sitting on the couch watching the big football game, drinking a beer. In 2011, that would have been unheard of. So OpenStack has really come a long way in this regard, and this is the architectural pattern that lets us do that. We've also had a couple of times when our support folks give us a call and say, hey, you know that customer? They actually did this upgrade over the weekend and didn't even tell us. So, just FYI, they're now running a new version of the cloud. And again, that was something that used to take a lot of hand-holding and a lot of consulting; no longer the case.

So finally, now that we've seen what it looks like from a day-two operations perspective, we've seen upgrades, we've seen how to get it installed, and we've gotten familiar with the architecture, I want to talk a little bit about what's going on in the real world with this. There's always been this folklore in OpenStack; back in 2011, the big question was, when is it going to be ready? When's it going to be ready? Well, I can tell you today that we have quite a wide variety of industry verticals running production workloads on top of VIO. So we're pretty happy with the reception it's gotten in the field, as far as definitely being ready for production.
And again, we're running this all on top of virtualization infrastructure that's been around for ages, so it's very well accepted in data centers. We have folks running e-commerce platforms on top of this, including, yes, Black Friday and back-to-school sales and all those big times of year for e-commerce. We have telecoms. We have transaction processing, so real, actual dollars changing hands (or euros, since we're in Europe). Automotive companies, NFV workloads, live demos on the keynote stage at the OpenStack Summit ran on top of VIO, and a whole lot more. So we're pleased to say that this is something that has actually been battle-tested in the real world.

A few places to learn more, and then I'll leave a little bit of time for questions. We do have a booth, and there are some links in here so that when we publish the slides, you can go check us out online. We have a YouTube channel, we have blogs, and there's a free webinar coming up. Marcos, are you here? There he is. That's the man. So if you're interested in a webinar about VIO, go talk to him afterwards and he'll be happy to clue you in, or you can click on the link in the slides. So: VMware Integrated OpenStack, a robust, powerful cloud that anyone can love, we hope. We hope you'll give it a try. And with that, I will take questions. There's a mic over here, so if you have questions, come on over to the mic. I think we're recording here, so they'll want to get those captured on the audio.

A question about, you mentioned how if an operator goes in and starts moving things around using vMotion, you don't have to worry about that. What about other changes an operator might make, if they're adding networks and things like that? Do you pick that up? Will you see that in VIO automatically, or do you need to do something to see it in your GUI?

So the general guidance is that if you can do it through OpenStack, you should do it through OpenStack, because that's the safest way to do it. Now, within a cluster, within the things that OpenStack doesn't have visibility into, there's a range of things that you can do in very traditional ways, like vMotioning something within a cluster or evacuating a host. If you do have operators that are building things outside of OpenStack, there isn't an automatic two-way sync in a lot of cases. We do have that import workflow, so it is possible for us to go find stuff and bring it into the cloud. But the general guidance is: if you can do it through OpenStack, you probably should.

Okay, thank you.

Other questions? The one over there. Go for it.

Okay, with the new version of VIO, you have this new compact mode. Can you talk a little bit about how, I think you said at the beginning, you can start with that and grow it? How does that work, at a really high level, if you want to just get started and go from there?

So the background on this is that when we started out with VIO 1.0, we had a fairly large control plane; I think it was 14 VMs to do it all. We had three database nodes, a couple of RabbitMQ nodes, and so forth. What we found is that, given the underpinning architecture we have, that was bigger than most people needed, so we could save a little bit of resources by trimming it down a bit. What we also found was that there were a lot of people who wanted to start with a PoC that took almost nothing, right?
Something very fast to stand up, something I could demo to my CIO and maybe throw some workloads on, just to see how things go. And there were even use cases where people didn't want an HA control plane for a production deployment, which sounds kind of odd, but these were low-value workloads, or maybe stuff that's hanging out in a branch office somewhere, right? Where you just don't want the big, heavy footprint for the amount of stuff you're going to throw at it. So that's where we introduced the compact architecture, which basically puts a lot of stuff on one VM, and then we also have a VM to run the per-cluster nova-compute. So we can deploy that. It's a very fast way to get started, a very fast way to experiment, or even to run very small workloads where you don't need HA or where HA is built into the app layer. And then we can actually expand that. So if you are starting with a PoC and you start out small, there's a knob to turn to expand that out to the full HA architecture. And again, that goes through the same sort of blue-green process.

Other questions?

Hi, the last time we considered using VIO, the customer had the specific use case of also wanting to deploy a Swift object store, and back then the documentation said "not supported." Has that changed over time, or do I still have to miss out on some OpenStack features?

So that's kind of an interesting case. Demand for that is really kind of spotty, which is kind of strange. It's also one of those things where vCenter doesn't natively have the concept of object stores; it basically has the concept of block storage. So we do include an optional Swift that you can run. It's not really something that we have a great vested interest in, so what we've done is partner with organizations that do. Generally, the one we work with most is SwiftStack, and also some of the EMC folks for object storage. So yes, you can get Swift with VIO, and we'll actually make sure that it's partnered up with somebody who really knows it well and treats it as a first-class citizen.

Thank you.

Other questions? Well, thank you all for coming. Like I say, if you want to find out more about upgrades, there's another talk that I'll be walking down the hall to in a couple of minutes. Swing by the booth and see us, and thanks for coming.