Sorry about the delay, everyone. There seem to be some operational issues, and it's ironic that we have an operational issue when we want to talk about operational stability of OpenStack. Cool. So we're going to switch the session around a little bit. I'm Santosh, a product manager for OpenStack at VMware. We have one of our customers, HedgeServ, joining us on stage to share their experiences using VIO, VMware Integrated OpenStack, to run their OpenStack deployment. I'll let them introduce themselves and go over why they chose OpenStack, what the business drivers were, how they evaluated the different deployment models, what made them choose VIO as their distribution of choice, and how it's been helping them march toward their business goals. Once they're done, I'll go through some demos of VIO to show how we have simplified some of the operational tasks around maintaining and running OpenStack in a production deployment. Thank you.

Hi, everyone. My name is Issa Barisha. I'm the global director of the platform engineering team at HedgeServ. TJ McTeer is one of my senior engineers. So, OpenStack, a fun topic. OK, just a quick snapshot of who we are. HedgeServ is in the fintech market, and when you think about maturity and change, this market has been notorious for being slow to adopt technology and platforms. Size-wise, we're about 1,000 employees today. How we get paid is something called assets under administration, and we're over $300 billion right now. We have been the number one overall provider in the hedge fund administration industry for over three years in a row, as a five- or six-year-old startup. A little bit about my team and the size of the platform we manage today: we hope to be completely on OpenStack by the end of the year. That's a big one for us. The interesting part about our industry is that we deal with a lot of pets, and a lot of our pets are VMs that average around 90 GB of memory and 8 to 12 CPUs. So we're a little abnormal in that respect.

So why OpenStack? It was simple: the old "I need more, I need it faster" request came down the line from our executive team. The good thing about this initiative last year is that we got executive sponsorship. That's a huge key piece of taking on a new platform, taking on uncharted territory. So how did we go about it? Early on we weighed some options. We could run OpenStack ourselves. We could go out and get a private cloud. We could get a managed service. A lot of options, so we had to go through a lot of evaluations fairly quickly, as we had a tight deadline. Decision points: a small team of engineers. We had exactly two people focused on this; TJ and I were predominantly focused on how to get this online as soon as possible. The company was hiring a lot of developers. Agile was a big piece, but really continuous integration, being able to knock down and turn up 1,000 VMs, changed the complete way we did dev cycles and QA cycles in the organization. So culturally, we were changing at the same time, and that required every piece of our infrastructure to change.

Donna, from the keynote earlier today, I liked the terms she used. Obviously, as you can tell, this presentation was put together early this afternoon, right after. But I liked the way she said "systems of record."
Systems of record are the apps we sometimes call pets, sometimes call legacy apps. The reality is these are the apps that run the business today; these are the apps that make the money today. And systems of innovation, as I view it (these are kind of my own terms), are web-scale apps. They're cloud-friendly, and they're what's going to help us make money tomorrow. They're going to help us keep bringing in that income.

So why VIO? Third time's the charm. Someone once said you have to fail a few times before you succeed, and we did very well at that. On our first attempt, we basically said, OK, we've got to do something with a small team, and we haven't had the opportunity to ramp up our skill set, so let's go the professional services route. Let's get SMEs to come in and help us implement this. What we got was a lack of adequate support, implementation practices that were poor, and a lack of security focus. As you can imagine, with $300 billion of other companies' and investors' money, there's a lot of sensitive data, and the way a vendor integrates with a client, making that possible, is a bit of a challenge. It also showed the immaturity of a lot of OpenStack implementations out there and their methods for making this possible. And obviously, it took three or four times longer than what was sold to us.

Attempt number two: managed service options. You know many of these. In 2015 the market was just saturated with OpenStack providers, some that could do managed services on their platform and their hardware, some that could manage your hardware internally. There were so many options, and among the vendors we were looking at, every other month one of them was either going out of business or getting acquired. So longevity and stability concerns became a huge key part of the decision.

During those considerations, we tried attempt number three. We call these dark projects. I think all of you in this room probably do dark projects: basically, an initiative your boss doesn't know about, something you try while no one's looking to see what its viability is. At the time, after a little bit of research, we came across VIO, which had just come out of beta and gone GA. Quite frankly: what does VMware have in this space? They're pushing vSphere, their Cloud Suite. Santosh is here, but this is what we were thinking: how devoted are they really? Should we even bother considering them? When you look at OpenStack, they're nowhere in the top 10. And crazy enough, we said, let's just try it. We didn't bother talking to our VMware reps or anything of that sort. We downloaded it, deployed it, and something crazy happened: it just worked. It actually just worked.

So, who here has tried to do their own OpenStack deployment? If you've tried it, you understand how that feels. After you get a console up, it looks great; once you actually try to do something, that's a whole different story. And for us, we were just taken aback by this. Again, as the director of the platform engineering team, I had to provide a platform with limited resources. How do I not lose my job, and how do I actually get the platform out to the end users? So we tried this, things worked, and we took it to the next level.
So I want to bring up TJ to talk a little more specifically about our OpenStack deployment.

We're using OpenStack a little differently than most of the people you see out there today. Our typical VM, on average, runs about 6.5 CPUs with 64 gigs of RAM. Our largest instances run over 110 gigs and 24 cores. Our smallest instances are 16 gigs of RAM and four CPUs; we call those the little guys. When we were designing the stack, we needed to meet a requirement of running 2,000 instances concurrently. We expect those instances to be recycled daily. And we need to house both pets and cattle in it, because OpenStack is our determined platform of the future.

When you talk about running pets in OpenStack, you have to be really concerned about stability and high availability, right? Because obviously, that's not its wheelhouse. So what we really liked about VIO is how it leverages vSphere. The ESXi implementation's reliance on clusters lets us make huge compute nodes: over 10 terabytes of RAM, hundreds of cores. Our storage architecture relies on flash arrays and VAAI integration. We got rid of a lot of those copy-on-write images and moved to full clones. It really helps with some of that pet management, because you don't have to be as worried about large numbers of instances disappearing.

We're a huge DevOps shop. VIO, if you don't know, is essentially an Ansible-based deployment of OpenStack. It allows for great configuration management: when you need to start changing your DHCP agent .ini because you're using Windows DNS and want to register your guests in it, it's as simple as updating a custom YAML and redeploying, which is really the way you should be doing it. We also upgraded it. We ran an upgrade of VIO from 1.0 to 2.0 without talking to Santosh. Later on, we opened a ticket, and they were surprised. Apparently that's a big deal in the OpenStack world, but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy. And if we talk about some of the other things it does: the packaging. If you're an ELA customer, you're likely going to have Log Insight as well. You get integrated syslogging, very manageable. I can't tell you how useful that has been in troubleshooting the problems that inevitably arise.

So how did we measure our success? It's fast. We can deploy about 100 terabytes' worth of data generation in under 30 minutes; the vast majority of that time in system turn-up is Windows Sysprep. The instances launch in minutes, and again, that's a lot of the integration between VMware and the all-flash array we chose. Without that, you're going to be locked into a lot of those copy-on-write images to get that speed. HA works without intervention. We've had host failures. We've done rolling upgrades of our VMware hosts from 6.0 to 6.0 Update 1 while running live production workloads, without any downtime to our guests. We've recovered from a critical failure where an error basically caused one of our three database nodes running OpenStack to be destroyed in vSphere. We went from a dangerous state to back up in under four hours, and all user VMs and instances remained running the entire time. VIO allowed us to do this essentially with the CLI they've implemented: basically, it just regenerated the MariaDB database on a new VM and spun it back up. Again, no real problems at all.
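The instance sizes TJ describes map naturally onto Nova flavors. As a rough, hedged sketch of that sizing (the flavor names, cloud name, and disk size are my own assumptions, using the openstacksdk library rather than anything HedgeServ necessarily ran):

```python
import openstack  # pip install openstacksdk

# Connect to the cloud; "vio" is a hypothetical clouds.yaml entry,
# and creating flavors requires admin credentials.
conn = openstack.connect(cloud="vio")

# Flavors matching the instance classes described in the talk:
# smallest 4 vCPU / 16 GB, typical ~8 vCPU / 64 GB, largest 24 vCPU / ~110 GB.
for name, vcpus, ram_mb in [
    ("little-guy", 4, 16 * 1024),
    ("standard-pet", 8, 64 * 1024),
    ("large-pet", 24, 112 * 1024),
]:
    conn.compute.create_flavor(name=name, vcpus=vcpus, ram=ram_mb, disk=80)
```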
And again, live upgrades like that are really something you don't see very much. Just a few final thoughts on how we're using OpenStack. We've had 100% uptime since we deployed it. We're running thousands of VMs on a 24-by-7 cycle with only two people on call. So if something breaks, everybody's going to call one of two people, and we don't get calls. Our users love it. So now we have some more information from Santosh; you'll have to bear with us while I change the slide over.

Thank you so much, Issa and TJ, for sharing your experiences. For me, the best part was when we found out that you'd upgraded OpenStack from Icehouse to Kilo without any professional services and without anyone from R&D getting involved. That was sort of the highlight of working on this product for me. And with that, since we're a little time-crunched, I'm going to jump into a few demo videos that show how VIO, VMware Integrated OpenStack, makes it really simple and easy to operate OpenStack in a production environment. Since TJ mentioned how easy it was for him to upgrade OpenStack, I'll go through the upgrade workflow and show how simple it is to upgrade OpenStack from one release to another using VIO.

Let me give you a little background on how we do upgrades. We follow a procedure called blue-green upgrades. Basically, when you want to upgrade from an older version of OpenStack to a newer one, we stand up a completely new OpenStack control plane based on the new release. For example, when we went from VIO 1.0 to 2.0, we jumped from an Icehouse-based release to a Kilo-based release. So when you upgrade from 1.0 to 2.0, we create a completely new control plane based on the Kilo code base. Once that control plane is up and running, we migrate all the database and configuration from your older, Icehouse-based cloud over to the newer, Kilo-based control plane we just created. Once that happens, we switch your public IP, the IP or domain name that your end users use to access the OpenStack services, over to the new control plane, and then we decommission the old control plane. This makes it really simple to upgrade from one release of OpenStack to another, and the added benefit is that it makes rollback even easier. If for some reason you figure out that the new control plane is not for you, that the new distribution is not as stable as you expected it to be, you can always revert, because the old control plane is just sitting there: you move your public IP address back to it and start using it again. So we chose the blue-green procedure because it made upgrades a lot more stable than in-place upgrades, and it gave us the added benefit of a rollback if something failed during the upgrade.
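That blue-green procedure is essentially a small state machine: stand up the green (new) control plane, freeze and migrate the data, verify through a temporary IP, then repoint the public endpoint, keeping the blue (old) plane intact for rollback. Here's a toy, self-contained sketch of that control flow; every function and value is a stand-in for what VIO's management server does internally, not a real VIO API:

```python
# Toy, self-contained model of the blue-green upgrade flow described
# above. Control planes are plain dicts; every step is a stand-in for
# what VIO's management server does internally, not a real API.

def deploy_control_plane(release: str, vip: str) -> dict:
    return {"release": release, "vip": vip, "db": None, "api_up": True}

def blue_green_upgrade(blue: dict, new_release: str,
                       temp_vip: str, public_vip: str) -> dict:
    green = deploy_control_plane(new_release, temp_vip)  # new plane on a temp IP
    blue["api_up"] = False           # freeze: no new API writes during migration
    green["db"] = blue["db"]         # migrate database and configuration
    if green["db"] is None:          # stand-in for verifying via the temp VIP
        blue["api_up"] = True        # rollback: the blue plane was never touched
        return blue
    green["vip"] = public_vip        # cutover: repoint the public endpoint
    return green                     # blue can now be decommissioned (or kept)

blue = deploy_control_plane("Icehouse 2014.1.4", "203.0.113.10")
blue["db"] = {"instances": 2000}     # pre-upgrade state to carry over
green = blue_green_upgrade(blue, "Kilo 2015.1.1", "203.0.113.99", "203.0.113.10")
assert green["release"].startswith("Kilo") and green["vip"] == "203.0.113.10"
```

The key property the sketch makes explicit is that the blue plane is never modified, which is exactly why rollback is just moving the VIP back.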
So here we have the standard Horizon interface, and this is an OpenStack deployment running on the older, Icehouse-based code base. Here we see a bunch of virtual machines that have been created; let me jump back a little bit. These virtual machines were created in the older, Icehouse version of OpenStack. So we log into the vSphere web client, which has a plugin for VMware Integrated OpenStack.

This plugin is basically the heart of the entire OpenStack control plane. All configuration changes you want to make to your control plane, and all maintenance operations, are done through workflows provided by this plugin in your vSphere web client. So we go to the web client and look at the version of OpenStack that's currently deployed and running in the data center. It's the Icehouse release of OpenStack; 2014.1.4 is the long version code for the Icehouse release. Next, we look at how we upgrade from the Icehouse release to a Kilo-based release.

The way we do upgrades is we provide a patch, a Debian patch that includes all the latest code for OpenStack. You download the Debian patch and copy it over to your management server. The management server is sort of like the configuration server for your OpenStack deployment: it maintains all the configuration and topology information for how you've deployed OpenStack on your hardware. So you download the Debian patch, which includes all the code for the new release of OpenStack, and you install it using a patch utility that we ship with the product. And by the way, for minor patches, typically small bug fixes or fixes that take care of security issues, we ship patches too, and this is the exact same procedure you would follow to patch your OpenStack release. Once you've patched your management server, you log out of vCenter and log back in. When you go to your OpenStack plugin, you notice that the management server has now been upgraded to the latest release of OpenStack: 2015.1.1, which is the Kilo release. What this says is that all the OpenStack packages on your management server have now been upgraded to the Kilo version.

Now we have to push these new packages onto a new control plane so that your OpenStack deployment gets upgraded to the latest release. You go to the management tab in the plugin, and there's an option to upgrade to a new release. You pick that option and you're presented with a simple wizard that accepts a few inputs and then goes forward and upgrades your OpenStack. You enter a deployment name for the new control plane that will be created alongside your existing, older control plane. Then you specify a temporary public IP for testing and verifying that the new control plane works fine. What happens is that the IP you specify here is assigned to the new control plane we stand up. So once the new control plane is up alongside your older one, you can spend some time testing that everything is fine with it: look at all the data you've created and make sure it has all been migrated over to the new control plane, all through a temporary IP, while the real OpenStack endpoint your developers were using is still associated with the older control plane. Once you make up your mind and decide, OK, I want to switch completely to the new control plane, at that point we throw away the temporary IP and associate your existing OpenStack endpoint IP address with the new deployment. So you specify a temporary public virtual IP and a private virtual IP, and that's about it. And you say: upgrade.
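The version strings shown in the plugin, 2014.1.4 before the patch and 2015.1.1 after, follow OpenStack's old integrated-release numbering, which is easy to decode. A tiny helper of my own (not part of VIO) that maps those strings to release names:

```python
# The plugin shows OpenStack's old integrated-release version strings.
# A small helper (mine, not part of VIO) mapping them to release names.
RELEASES = {"2014.1": "Icehouse", "2014.2": "Juno", "2015.1": "Kilo"}

def release_name(version: str) -> str:
    major_minor = ".".join(version.split(".")[:2])
    return RELEASES.get(major_minor, "unknown")

assert release_name("2014.1.4") == "Icehouse"  # before the patch
assert release_name("2015.1.1") == "Kilo"      # after the patch
```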
So now what's happening is that the management server is deploying a completely new control plane based on the new, Kilo release of OpenStack. Once that's done, the new control plane goes into a "prepared" state, which means all the services have been deployed and now you need to migrate the data. At this point, you go to your old control plane and say: I want to migrate my database and my configuration over to the new control plane. While the migration is happening, the management server turns off the old control plane, stopping all the services so that we don't accept new API calls and create new objects in OpenStack while data is being migrated. We want to freeze the state of the data before we migrate it, so we pause the old control plane.

Once all the data is migrated, we can look at the temporary public IP that's assigned to the new control plane; in this example, it's an address on our 10.x network. If we point our browsers at that IP address, we're basically looking at our existing OpenStack deployment, now available on the new OpenStack release. Here we can go ahead and make sure that all the instances we created before the upgrade are still there, that our volumes are still there, and that all the network topology has been properly migrated over to the new control plane. Once you've verified everything is good and you want to keep the new control plane, you go back to your OpenStack plugin in vCenter and say: switch over to the new control plane, all is fine, I want to start consuming it. At this point, the IP address that was associated with your old control plane, the one your developers were using to access OpenStack, gets associated with the new, Kilo-based control plane. Once this is done, your developers can keep accessing the same OpenStack API endpoint or Horizon endpoint at the same IP address, and they get all the functionality of the new release, because the deployment has been upgraded.

So we log back in through the dashboard, and we see that all the instances we created before the upgrade are still there, available on the same public IP I was using before the upgrade as a developer. And just to make sure everything works, let's associate a floating IP address with an instance that was created before the upgrade. Once that's done, we try to log into that virtual machine and make sure we're still able to access it and everything is running fine. There you go. So we've taken the huge complexity out of upgrading and made it really, really simple to move OpenStack from one major release to another.

This is just one example of how we've simplified operations around OpenStack and made it easy to maintain in a production deployment. There are a lot of other workflows we've added to make OpenStack really easy to operate. You can do things like backup and recovery, where you back up your OpenStack control plane, and at some point, like TJ mentioned, if one of your hosts goes down and takes your entire management cluster with it, you can recover from your backups really easily with a few CLI commands. That's pretty much all I have, and I think we're almost at the end of the session.
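The verification pass in that demo, with instances, volumes, and networks all present on the temporary IP and a floating IP still attachable, is easy to script. A hedged sketch using the openstacksdk cloud layer; the clouds.yaml entry name and the instance name are assumptions:

```python
import openstack  # pip install openstacksdk

# Point the connection at the *temporary* public VIP of the new control
# plane; "vio-green" is a hypothetical clouds.yaml entry for it.
conn = openstack.connect(cloud="vio-green")

# Everything created before the upgrade should have been migrated over.
servers = list(conn.list_servers())
volumes = list(conn.list_volumes())
networks = list(conn.list_networks())
print(f"{len(servers)} servers, {len(volumes)} volumes, {len(networks)} networks")

# The final check from the demo: attach a floating IP to a pre-upgrade
# instance and confirm it is still reachable. The name is an assumption.
server = conn.get_server("pre-upgrade-vm")
conn.add_auto_ip(server, wait=True)  # allocates and associates a floating IP
```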
I'll open it up for a few questions if you have any.

So when you're in this transition state and the old version is stopped, does it mean that you just cannot make any changes to OpenStack, while the services running on the VMs are still functioning? Is that correct? Or is there downtime for...

When you're migrating the data from your older control plane to the new control plane, there is downtime in the sense that you won't be able to create any new workloads in OpenStack, but all your existing workloads will still be running. Existing workloads that are already deployed keep running; they won't be disrupted at all. It's just that for the duration of the migration, the developers won't be able to create new workloads.
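That split, control plane briefly unavailable while the data plane keeps serving, is easy to observe for yourself during a test upgrade. A minimal sketch that polls both a control-plane endpoint and an application running on a tenant VM; both URLs are placeholders of mine, not anything from the session:

```python
import time
import requests

API = "https://openstack.example.com:5000/v3"  # control-plane endpoint (Keystone)
APP = "http://203.0.113.50/"                   # an app on a tenant VM (data plane)

# During the migration window we expect the API to stop answering while
# the workload itself keeps serving. Both URLs above are placeholders.
for _ in range(30):
    for label, url in (("control plane", API), ("workload", APP)):
        try:
            resp = requests.get(url, timeout=3)
            print(label, "up:", resp.status_code)
        except requests.RequestException:
            print(label, "DOWN")
    time.sleep(10)
```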
Okay, thank you for the talk. I work with a lot of VMware customers, and I have a lot of questions, but I'll just ask two of them. When we talk about VMware workloads, I believe we have two options: one is the one you just explained, with OpenStack, and the other option is full-stack VMware, meaning using vCloud Director, NSX, vSAN. I want to ask why OpenStack was better than the full-stack VMware cloud environment. That's the first question. The second question is, when I talk about VIO, a lot of customers are bothered by the fact that it has a very large hardware requirement; I believe it was 60 cores and 200 gigabytes of RAM. Did that bother your environment?

Okay, so I guess we can go down the list. There were a couple of decisions. One, we were looking for OpenStack, right? So when you talk about vSAN versus what we did: we actually moved to SolidFire as the storage backend. One of the reasons is we hadn't jumped on the hyper-converged bandwagon yet, and at that point we didn't feel compelled to add a lot of those pieces. Now, I want to be fair, because this is a VMware presentation, but let's just talk about the bits and pieces. OpenStack actually enables you further: once you get OpenStack online and running, you can adopt vSAN later, you can adopt NSX later, and transition into them, because the drivers for all those pieces are supported. So the real question is: what goes first? For us, OpenStack was the most difficult thing in the market in early 2015. Again, we tried a bunch of implementations and they failed. Storage, on the other hand, was stable; for most of us, storage is stable and you have options. When it came to OpenStack, there's OpenStack and really just the question of who you're going to partner with. We were not large enough to say we were going to go straight from source and build it ourselves. So that's one piece.

Number two is just the ecosystem. Again, talking about vCloud Director and everything: we're a multi-cloud platform. We do certain things in AWS and certain things in our private cloud. The biggest driver was: how do I continue to move this company forward? We had 90% of our workloads being the stuff that existed over the last two or three years. How do I get to one platform for everything? Because when you go out and build OpenStack, the first question a lot of people ask is: what are you going to use this for? Originally I didn't understand why a lot of these vendors were asking me that; later on I understood there was a lot of instability, a lot of immaturity, in OpenStack and some of its pieces, you know, Cinder.

And even once OpenStack matured, the quality of the drivers varied wildly. So that bit us. When we went to VIO, at the time everyone was viewing OpenStack as a competitor to VMware and vSphere, but then we realized it actually made perfect sense. For example, we wanted to create a layer where we didn't necessarily care about the proprietary-ness underneath. And we loved vSphere; everyone grew up saying, hey, this is a stable, mature platform. We still have to keep running these systems that are big and not scale-friendly or cloud-friendly, but we also wanted to prepare for the future. So how do we not have silos of platforms? When everything just worked, we were so taken aback, and we kind of progressively made that move.

And number two, I don't know if I had this point on my slide, but it's all about value, right? One of the presentations today said it best: open source is free as long as your time is worthless. Kind of. You're going to pay at some point, one way or another. So when we looked at the cost and value proposition: if you're already invested, and we had a platform on vSphere, the additional cost, well, one, it's free if you want to go it on your own, and we did. We went almost a year on our own before we said to VMware, all right, we're fine paying for support. And the support cost was incremental to nothing; we almost couldn't believe it. And to be quite honest, I have a managing director who said openly, I want to move to open source and I don't want to invest more in VMware. But we developed a story he almost couldn't say no to. He really wanted to say no, and he said this made absolute sense: we can get everything we want and continue to focus on the more important things, which, quite frankly, is everything above the platform. You need the platform to keep running, which is a big piece; you want it to stay online, or else you're going to be spending a lot of weekends and weeknights very unhappy. Thank you. No problem. So, did that answer all your questions?

Am I allowed to ask another question? I think so; I don't know that there's anyone else in line. Go ahead. Sorry, did you want to add anything, Santosh? Well, maybe I can talk to you later. Go ahead.

Did you have any problems with the Cinder VMDK driver? Because I had a customer with a lot of problems. I use an IBM proprietary driver and have a lot of problems with it. Did you have any problems with Cinder?

The short answer to your question is no. The longer answer is that we had a lot of problems with Cinder drivers, especially those made by third-party companies other than VMware. But beyond that, it depends on what you're doing. Cinder, specifically in the ESX implementation, implements Cinder volumes as VMs, which show up running in your stack. You can run into issues with Storage DRS and locking if you're doing some funky things there. I don't know specifically what issue you encountered, but if you want to take it offline, I'm sure we could give you some insight into how to solve it.
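From the API side, this is ordinary Cinder usage; what's being described is that with the VMDK driver the volume ends up backed by a shadow VM visible in vCenter. A hedged sketch of the client-side operations (the cloud, server, and volume names are assumptions, using openstacksdk):

```python
import openstack  # pip install openstacksdk

conn = openstack.connect(cloud="vio")  # hypothetical clouds.yaml entry

# Ordinary Cinder operations from the client's point of view. With the
# VMDK driver, the created volume is backed by a shadow VM in vCenter,
# which is the behavior discussed above. Names and sizes are assumptions.
volume = conn.create_volume(size=100, name="pet-data")   # 100 GB volume
server = conn.get_server("standard-pet-01")              # hypothetical instance
conn.attach_volume(server, volume, wait=True)            # attach to the instance
```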
And back over to your question about uptime and the upgrade: VMware separates the data plane and the control plane completely. You could wipe out your entire controller, and all of your instances would stay running, up, and 100% available. That's really the driving factor there.

You had also asked about the total consumption of the management piece, the controllers, to get VIO online. It actually really isn't much. They give you the extreme, because the prepackaged implementation of VIO 1.0 was sized for up to 2,500 VMs. So we knew automatically, let's call it day 90, because by the time we got the hardware and said this is a green light, we had changed everything: the platform, even vSphere (we were on five, going to six), storage vendors, communications. We moved away from Fibre Channel to 10-gigabit iSCSI storage platforms and leveraged all those APIs. So much change happened that when you look at the management piece, if you're building any sizable cloud, it really is negligible. Because at the end of the day, you can squeeze those VMs. What they're doing is giving you a pre-canned package for up to 2,500 VMs; if you don't think you'll ever get to that, you can power those VMs on and squeeze them. We may not want to say that.

Yeah, when we created the product, we sized it for a really large deployment. We have a customer running about 6,000 to 7,000 VMs on a single deployment of VIO on a single vCenter; we sized it for that kind of deployment. But in practice, what we found is that not everybody wants that kind of scale. So in our upcoming releases, we're bringing that size down to pretty much half of the current production size. We're also looking at putting everything in a single VM and making that production-ready. In our labs and tests, we found that all services running on a single VM is good enough for a scale of about 2,000 to 3,000 VMs and a concurrency of about 20 to 25 VM creations at a time, and there are a lot of use cases for which that's good enough. So we're exploring really shrinking the control plane down to a single VM and adding some vSphere-based HA and some backup routines behind it, so it's production-ready and it makes it really simple to deploy your OpenStack. If it's in a single VM, you can just power it up and everything comes up.

Yeah, we have a lot of customers saying they want to do proofs of concept with VIO, but since there's a large hardware requirement, it's really hard to do that. Absolutely; we've gotten that request from a lot of other customers too. So that's great news, yeah. Yeah, absolutely: in one of the upcoming releases, fairly soon, you'll see a much smaller-sized VIO.

So one of the other tricks you can do is deploy virtualized ESX servers with fake sizes and deploy on top of that, because VIO requires a minimum of three ESX servers for the management cluster, just for resiliency, obviously: MariaDB, three instances and so forth, for a couple of reasons. I've actually done it on top of virtualized ESX. He's got to close his ears, but...
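For a sense of what that management footprint means per host, here's a back-of-the-envelope check using the figures quoted in this Q&A (60 cores and 200 GB for the full-size control plane, a minimum of three management hosts); this is my arithmetic, not an official sizing guide:

```python
# Back-of-the-envelope check, using the figures quoted in this Q&A:
# the full HA control plane wants roughly 60 cores and 200 GB of RAM,
# spread over a minimum of three management hosts.
VIO_CORES, VIO_RAM_GB = 60, 200
HOSTS = 3  # minimum management cluster size, per the discussion above

print(f"~{VIO_CORES / HOSTS:.0f} cores and ~{VIO_RAM_GB / HOSTS:.0f} GB RAM per host")
# For a POC you can overcommit ("squeeze") those control-plane VMs, or
# run them on nested ESXi hosts with faked capacity, as noted above.
```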
That's it, aside from test stuff. The one thing I can tell you that blew me away as well, and again, we had to reach out to VMware and say, guys, are you actually supporting this product? It seems to work, right? Because they weren't heavily pushing it that early on in 2015, so we had to actually reach out to them. But the one thing you can feel comfortable about, and you can imagine the pressure we felt at the time, with so much uncertainty and kind of running out of time, is that we couldn't believe it just worked. So at least you'll know you'll go from zero to a functioning POC exponentially fast, because you don't have to worry about 90% of the stuff underneath OpenStack if you're on the vSphere platform.

No, VIO is a full distribution of OpenStack. It comes with all the core OpenStack services. It's a full distribution of OpenStack optimized to run on the vSphere hypervisor and VMware products. Yes, and the worker node just has the hypervisor, ESXi.

So when you do the upgrades, what is happening? Is ESXi impacted when you do a VIO upgrade? No, right?

Right, VMware separates the data plane and the control plane completely, so your running workloads on ESX are not impacted. There's no Nova agent running on your compute nodes. There's an actual VM that gets created to represent a cluster, and that cluster is your compute node; that VM runs the Nova agent. It goes down during the upgrade, but just like VMware, all of your VMs stay up. When you cut over to the other database, your compute node is there and you're basically live.

And is it only the support cost, or how does that work? For VIO, how do the license and support actually work?

So VIO, the product itself, is free to download and use, and there is a support cost associated with it. That's the only cost. And the question is: is VIO only for customers who already have a deployment of vSphere, or can customers who want to get started from scratch use it too? It can be used by customers starting from scratch too; they deploy vSphere and ESX from scratch and then deploy VIO on top of it. Yes, this needs vCenter, ESX, the whole vSphere stack.

Yeah, it doesn't matter what OpenStack distro you go with: you need nodes, you need storage, and you need to determine what your storage is going to be, right? The drivers. Because what VIO is, pretty much, is Ubuntu and Ansible, managed by VMware. So it's very vanilla OpenStack, and that was also a concern we had about OpenStack. It's pretty vanilla, but just like a lot of the distributors, they only support certain vendor-specific drivers. So for Nova and compute, it's vSphere; for networking, it's the DV switches; and for Cinder, it's the VMDK driver. Every vendor has the same kind of setup. If you're already on vSphere, you don't have to build a brand-new vSphere platform greenfield; we actually built it right on top of what we already had. There's a little caveat on the management cluster, just to make sure there's enough capacity there to deploy it, but aside from that, it's straightforward. And everything Santosh was showing is kind of like Fuel, a similar thing: the management console to do all the management integration, upgrades, and operational tasks. That's what you use, but you use it very rarely, like when you want to add compute and so forth. The best thing about it, I can say, is that it's been set-it-and-forget-it for about five months now. We haven't touched OpenStack since we confirmed everything was functioning.
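That "a cluster is the compute node" model shows up directly in Nova's VMware driver configuration. Here's a hedged sketch of the relevant Kilo-era nova.conf options, generated with Python's configparser; the values are placeholders, and VIO renders this configuration for you rather than you writing it by hand:

```python
from configparser import ConfigParser

# Sketch of the Kilo-era nova.conf options behind the "a cluster is the
# compute node" model: Nova talks to vCenter, not to individual ESXi
# hosts. Values are placeholders; VIO generates this configuration.
cfg = ConfigParser()
cfg["DEFAULT"] = {"compute_driver": "vmwareapi.VMwareVCDriver"}
cfg["vmware"] = {
    "host_ip": "vcenter.example.com",        # vCenter endpoint, not an ESXi host
    "host_username": "administrator@vsphere.local",
    "host_password": "secret",
    "cluster_name": "compute-cluster-1",     # the cluster acting as one Nova node
}
with open("nova.conf.sample", "w") as out:
    cfg.write(out)
```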
So it's only the ESXi support cost, then the vCenter software plus support cost, and then the VIO support cost, right? Is it packaged, like per 25 or 100 VMs? Is that how it works? Maybe you can answer that. Can you repeat that? The license for VIO, is it also per 25 or 100 VMs, like how SRM is licensed?

There is no licensing like that. It's simply, version one, and version two I think is up to 5,000? Up to 5,000. One of the biggest problems with OpenStack is configuring it and making it scalable, so we actually enjoyed VMware's approach: as the gentleman was saying about the size of the management nodes and the compute clusters, one of the nice things is that they pre-canned it, and when you deploy it, it can comfortably support up to 5,000. So you don't pay per instance.

You don't pay per instance. You simply pay for every ESX server, and if you already have ESX, then if you want support, it's a very small fractional tick up. It's about 600 bucks, or what is it? 200 bucks. 200 bucks per socket. Per CPU. Per socket. Yeah. So there's no licensing cost; you can use VIO to run as many VMs as you want, and if you want support, it's on a per-physical-CPU, per-socket basis.

All right, I think that's about all the questions we have. Thank you so much, folks, for making it to the session. Hope you have a good rest of the Summit in Austin.