Okay, can everybody hear me? Cool. So if you were at the introductory session a little earlier this afternoon, Dan Wendland mentioned some lightning talks that our distribution vendors would do. This is that session. I'm Dan Floria, part of the OpenStack team at VMware. So let's get started.

Before we go into the presentations from all the vendors up here, I wanted to give another plug for the Hands-On Lab we have for OpenStack running on vSphere. This was mentioned earlier, and I think it may be showing up in your schedule as also happening in this room. The Hands-On Lab is something you can do yourself: you log in at the URL up there, and if you need help there are people in the back of the room, or maybe in the outside room as well, and there's also the IRC channel. It's probably the easiest way for you to try out OpenStack with the vSphere platform; actually, it's probably the easiest way to try out OpenStack, period. You just go to that tiny URL and it sets up an environment for you, ready to go, with vSphere underneath. It's very easy to get started.

So this session, as I mentioned, is for our distribution partners to highlight the great work they've done to support the VMware technologies. We're really excited about this. We want to provide customers with a choice of how they deploy OpenStack on top of the vSphere platform, and that includes a choice of which distribution to use. It's a lot of work to support an additional platform: there's a lot of work that goes into modifying the deployment tools, building the engineering expertise, and building a support organization for a new platform. So they've done a lot of great work, and now they have the challenge of presenting it all in about five minutes each. These are lightning talks; they're going to be quick, and hopefully I can get the slides to work. So without further ado, I want to introduce Dave Russell from Canonical, who's going to come up and talk about Canonical's work.

Thanks a lot, Dan. Everybody hear me? How about now? That sounds a bit better. Hear me at the back? All right, good stuff. So this is all about Canonical and the work we've been doing with VMware on OpenStack. Thank you. Hopefully a lot of you know who Canonical is, but just in case: Canonical is the company behind Ubuntu. We're also known as the great big orange stand with the cool Orange Boxes on it; if you haven't seen them yet, go down and take a look, a ten-node cluster in a box, very cool. We've been around for a little while: since 2004, over 600 people, over 30 countries, a very widely distributed organization. We've got people all over the globe, with major offices in the locations above. We're also very much about the Ubuntu platform. We see it as a platform for innovation: a lot of the cool new technologies that are coming up, a lot of big data, a lot of NoSQL, a lot of great stuff like, oh, I don't know, OpenStack, shows up on Ubuntu first.
Nine out of ten OpenStack production clouds are running on Ubuntu. We've been supporting customers in production with OpenStack, including very demanding financial services, telcos and other folks, for in excess of two years now. So we've been supporting OpenStack, and supporting customers using OpenStack, for quite some time. We're also a fairly significant part of the public cloud guest story as well.

This is really about our partnership with VMware and the work we've done together on OpenStack. We were the first company to announce our relationship with VMware, a year ago now at the Portland design summit in fact, during the keynote, where we said we would be working jointly on VMware and OpenStack engagements. We were the first company to actually do an engagement jointly with VMware, and we'll have a little snippet on that a bit later. I'd like to share with you a couple of high-level architectures that we've arrived at. We've engaged with certain customers, and these are the things we found really work for us.

The first one is basically everything virtualized on VMware vSphere. I'm sure VMware have been telling you all day about the great reasons why you'd want to do this, but we've found it pretty effective. For organizations that are already incredibly familiar with VMware but want to start to get a taste of that OpenStack goodness, this is definitely the recommended way to go about it. You've got a VMware vSphere management cluster, installations of the Ubuntu Server OS on top of that, and on top of that all of the OpenStack services, plus the Ubuntu management and orchestration services. Then on your right-hand side you've got a separate vSphere cluster, or even several vSphere clusters, that are talked to through the OpenStack Nova compute driver and driven by the OpenStack environment. And of course you can run anything you like on top of that: Ubuntu Server guests, other enterprise Linux guests, or even Windows Server guests. So that's option A.

Option B is to run your OpenStack services on physical servers, so that's Ubuntu plus all the standard OpenStack services on bare metal, and have all of that talking to a VMware vSphere environment. The only difference between A and B is pretty much OpenStack services on physical versus virtualized in vSphere. Different organizations have different levels of comfort with introducing something new into their environment, so this suits some of them better; some have different ideas about how they want to scale out their environment, and again, this suits some of them better.

And then finally there's what I call the rainbows-and-unicorns option. This is what a lot of people I find really want to drive towards: the core OpenStack services probably on physical bare metal; a VMware vSphere environment for the key things they want to run there, which gets them HA where that's important to them; and then they either want to dip their toes into KVM or they already have some existing KVM. In that case you've got complete parity of platforms: a single OpenStack environment that can talk to all of these different pieces.
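For reference, here is a minimal sketch of what "driven through the OpenStack Nova compute driver" looks like on the nova-compute side in the Havana/Icehouse timeframe; the addresses and names are placeholder values, and exact option names can vary slightly by release.

    [DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    host_ip = 192.0.2.10                  # vCenter server address (example value)
    host_username = openstack@vsphere.local
    host_password = secret
    cluster_name = OpenStackCluster       # the vSphere cluster Nova schedules into

One nova-compute service configured this way proxies an entire vSphere cluster, which is why the diagrams show a single driver talking to one or several clusters.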
So I want to very quickly whisk you through a single customer deployment story that we did jointly with our friends at VMware. This is a customer who had an existing infrastructure-as-a-service platform. They'd prototyped it within their organization, they could see there was immense demand for it, but they really needed something more robust, and OpenStack was the obvious answer. They chose option A, everything virtualized on VMware, including the core OpenStack services.

The implementation and the results: they did a really excellent job with this. We provided consultancy and services, VMware provided expertise on their side, and a couple of things they did really well. They had really good internal stakeholders; they got everybody together on their side, on our side and on the VMware side, and together we made the project a success, which of course it was.

We learned a couple of interesting lessons. This was a large financial services organization in the U.S., and initially when we deployed OpenStack we did not have SSL encryption from start to finish all the way through the OpenStack environment. That was something that was important to them. Luckily, because of the way we deploy OpenStack, whether it's on bare metal, virtualized guests or indeed on VMware, we use our charms and Juju, we just needed to alter the charms a little bit. A week later we rolled out upgraded versions of the charms and redeployed, so great lessons. And for the future outlook: the project's expanding, the customer's expanding, and it's all ongoing. That's it. Thanks very much.
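For readers unfamiliar with the tooling behind that redeploy, a rough sketch of the Juju workflow Dave describes, assuming the standard OpenStack charms of that era; the SSL option name shown is illustrative rather than exact, since it differs between charms and charm revisions.

    # deploy charms and relate the services
    juju deploy keystone
    juju deploy nova-cloud-controller
    juju add-relation keystone nova-cloud-controller

    # later: flip a charm config option (illustrative name) and roll it out
    juju set keystone use-https=true
    juju upgrade-charm keystone

The point of the story is exactly this: the SSL requirement was handled by adjusting charm configuration and redeploying, not by re-architecting the environment.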
Thanks, Dave. So next we have Andy from Red Hat, who's going to be talking about their efforts. Can we plug you in? Okay, good afternoon. I'm Andy Cathrow, I look after the virtualization product management team at Red Hat. You all know who Red Hat are and what we do, so I want to quickly start with how we do it, because I think it's important. There are four steps that we take to go from an upstream project to a downstream enterprise product. There we go.

The first, and in my mind the most important, is participation. We believe you have to participate and engage in all aspects of the community to be able to support your customers. It's not as simple as compiling and shipping code. There's a bar we set at Red Hat before we can ship a product, and it involves engineers and QA being engaged in the project. OpenStack is more than just the Python services doing the orchestration: it's running on top of Linux, and there's a messaging layer, a database layer, user-space libraries. Each one of those has to be integrated, tested and supported. If you get a bug and it's not an OpenStack bug but something in Qpid or in RabbitMQ or in MariaDB or Galera, you can't say "upstream issue"; you have to own the issue from soup to nuts. So we believe you have to have broad and deep participation.

Integration is taking all those upstream components, from Linux, from OpenStack and from other projects, putting them together and making sure they work, but also filling the gaps. There are many gaps that aren't filled by upstream OpenStack: installers, high availability, monitoring and reporting. So it's filling those gaps to deliver a complete solution. Then stabilize: that's testing, certification, bug fixing and backporting. And finally delivery, which means not just giving you the product but supporting you: giving you the patches and the bug fixes when you need them, not on the upstream schedule, plus the services and the training.

So there are two distributions from Red Hat. RDO is our community distribution. It's published on the upstream schedule, with a six-month cadence and a six-month life cycle. Anyone can download it and install it on Fedora, CentOS, RHEL or any RHEL-derivative distribution. It follows the upstream schedule and life cycle; there's no commercial support, but there's a vibrant community around it. RHEL OSP is our commercial distribution: enterprise hardened, long life cycle, with a certification and support ecosystem.

When we talk about life cycle, why is it important? Upstream has a six-month life cycle. We know the cadence; we're here celebrating Icehouse this week, and roughly six months from now there will be no more upstream patches for Icehouse. If you have a bug, well, you should go to Juno. That's not good enough for enterprise deployments; you need a longer life cycle. You'll see here a two-month gap between upstream in April and downstream in June, and those two months are used for testing, bug fixing, backporting and certification. Any bugs we find we fix in trunk and then backport to our stable branch, and we support that for three years.

Quickly, around deployment, there are three projects to mention. The first is Packstack, a very simple tool for POC deployments: you run a command-line tool, answer some questions, and it deploys a single node, great for POCs. For production deployments we have the Red Hat OpenStack Installer, based on Foreman: boot a USB key or CD-ROM, go through a wizard, see the nodes it has discovered, compute nodes and service nodes, configure them, check a box for HA, and you're fully deployed. And finally TripleO, the upstream deployment and management project. It's still work in progress, and I hope it's going to be tech preview, but we're heavily invested upstream in TripleO.

So, releases. RHEL OSP 4 is our Havana release, released back in December; I'll talk in a minute about our A3 update, which added support for vCenter. The release we're concentrating on now is RHEL OSP 5, our Icehouse release, which is in beta right now and will go GA in June. It runs on RHEL 6 and RHEL 7. A couple of notable features: we added support for RabbitMQ in addition to Qpid, so you can pick a messaging platform now, and we added support for MariaDB instead of MySQL, with Galera for active-active support. Let me skip this for a second, I'm going to run out of time.

So VMware has been working with Red Hat for many, many years now. I think it was the first hypervisor we supported, before Xen, before KVM, before Amazon. There's a long-time engineering relationship between Red Hat and VMware, and that means if you get a RHEL guest running on top of VMware and there's a bug, we have the engineers to triage it and work together upstream to fix it. It's a similar model to what we're doing with OpenStack support. We're not a compile-and-ship company: before we can add support for any platform, such as the vCenter driver, we have to have engineers working on the codebase. So go back a few months to when we started down this path with VMware: we looked at the upstream backlog and we coordinated with VMware engineering to make sure the code reviews were being done. We're participating in those, participating in bug fixes.
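Going back to Packstack for a moment, since it goes by quickly: a minimal sketch of the proof-of-concept flow Andy describes. The commands are the standard ones; the answer-file name is just an example.

    # all-in-one proof of concept on a single node
    packstack --allinone

    # or generate an answer file, edit the options you care about, then apply it
    packstack --gen-answer-file=openstack-answers.txt
    packstack --answer-file=openstack-answers.txt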
So what we're delivering now we can support for three years, and we know it's a more polished product than if we'd compiled and shipped at the start of Havana. What we support today, in RHEL OSP 4 with the A3 async update, is deployment with the vCenter driver. Note this is not the ESX direct driver; we're only supporting the vCenter driver, the one that has the upstream support and backing of VMware, along with NSX. We do have Nova Network support, but that's considered more for POC deployments with the vCenter driver. We've got maybe a minute and a half left for questions; any questions I can answer? One quick thing: I mentioned Packstack, and it's now been updated, so you can quickly deploy a Packstack-based deployment with vCenter. Our TripleO, excuse me, our Foreman-based installer is still work in progress; we expect that as our first async update for RHEL OSP 5 to add vCenter support. Thank you.

Thank you very much. So next we have Pete from SUSE. Can you find it?

Good afternoon everybody, I'm Pete Chadwick from SUSE. We are also a long-time Linux distribution; I hope everyone has at least heard of the green chameleon before. What I want to start off with is that VMware and SUSE have worked together for a very long time, at least ten-plus years. Similar to Red Hat, we've supported SUSE Linux Enterprise Server running on VMware really from the beginning. SUSE Linux Enterprise Server is fully supported to run in a vSphere environment, and we integrate all the tools you need to make optimized use of that vSphere environment. One of the things we've done, working very closely with VMware, is that all of VMware's virtual appliances actually run on SUSE Linux Enterprise Server. So if you run the vCenter virtual appliance, you're actually running SUSE Linux Enterprise in your environment. We have a number of extensions to SUSE Linux Enterprise Server, including one for SAP, which supports running SAP virtualized on top of VMware. We also have a high availability extension to SUSE Linux Enterprise, which complements the capabilities you have in VMware by providing application-level availability for mission-critical applications.

Lastly, since what we're here to talk about today is SUSE Cloud, our OpenStack distribution: this is a high-level view of the VMware support within OpenStack, and you can see the pieces that we support. All the stuff in light green is essentially basic OpenStack, and the things highlighted in yellow are the drivers you can take advantage of to access your vSphere environment. We support all of those, and you can easily deploy SUSE Cloud, whose current release is SUSE Cloud 3, based on Havana, and take advantage of an existing vSphere environment through the vCenter drivers. In terms of capabilities over and above what you get with the vCenter integration, we also ship high availability for the control plane, which we think really complements the environment. I would say about 80% of our customers actually run SUSE Linux Enterprise Server on VMware; it's clearly the most prevalent hypervisor our customers run, and they're running mission-critical applications in that environment. When they start looking at how to move to OpenStack, the first thing they told us was: I need a highly available control plane.
That's one of the things we have focused on, and we've also simplified the deployment, not only of that but also of the vCenter integration in your environment. This is a screenshot from the SUSE Cloud administration server, which is our installation framework, our deployment tool built on Crowbar. The folks in the back of the room probably can't see it, but if you come by the booth we can give you a demo and you can see it a little more closely. When you have a node available, so you can see the first node down here, a physical server you're ready to deploy a compute node on, you have a number of options for what kind of compute node that device should be. It can be a Hyper-V node, a KVM node, a QEMU node, Xen, or VMware. In this case I've dragged compute1 into VMware and said I want to deploy the vCenter proxy onto that node. The next screen then asks: what's the IP address of vCenter, what's your username, what's your password, and which clusters are you going to pick up from vCenter to pull into your OpenStack environment. It's pretty straightforward, and once you've done that it's all available.

We've also got a plug-in for Neutron. I didn't show the pull-down, but when you say you want to configure networking you get a number of different selections, one of which is VMware, which is the NSX plugin. Once you pick the NSX plugin it's the same kind of idea: what's your username, what's your password, which controllers are you using, what's the transport zone, what are the gateways. So it really leads you through the whole process to quickly stand up an OpenStack environment and integrate it with your existing VMware infrastructure.

This one is another eye chart, but once you get all that set up, when you go into vCenter you can now see the network that's attached to OpenStack, and when you go into the cluster view you can see that the cluster you've assigned to OpenStack now shows up in vCenter as well. So you can still take advantage of all the vCenter management capabilities even within your OpenStack environment. That's the quick overview; for questions you can obviously stop by our booth or catch us after the session. Thank you very much.
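For a sense of what that NSX wizard is collecting under the hood, here is an approximate sketch of the Icehouse-era NSX plugin settings for Neutron; the option names shifted between releases (the plugin was previously called NVP), so treat these as indicative rather than exact.

    # /etc/neutron/plugins/vmware/nsx.ini (approximate)
    [DEFAULT]
    nsx_controllers = 192.0.2.21:443,192.0.2.22:443   # example controller addresses
    nsx_user = admin
    nsx_password = secret
    default_tz_uuid = <transport-zone-uuid>
    default_l3_gw_service_uuid = <l3-gateway-service-uuid>

The deployment tools shown in these talks are essentially filling in this kind of file for you.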
Next we have Nick from Mirantis, who's going to, I believe, run through a demo, and we need to do a quick switch of laptops here.

Okay, can everybody hear me? All right, marvelous. I've got six minutes, so I'm going to try to make this short and sweet. My name is Nick Chase, I'm from Mirantis. We're the number one pure-play OpenStack company on the market, which basically means all we do is OpenStack: we don't sell hardware, we don't sell operating systems, OpenStack is all we care about. Today I'm going to show you a recorded demo of VMware and OpenStack, so a little bit of relief from the PowerPoint for the most part. Please take for granted the fact that we have an excellent services and support organization; I only have six minutes. If we look at the general roadmap you can see here, for each of the major OpenStack projects there's a sort of corresponding VMware product. The idea is that you can use OpenStack, but if you're a VMware shop you can continue to use the VMware tools you're familiar with to manage those resources.

For example, the most obvious is that you can create OpenStack VMs with Nova and then manage them with the vCenter tools, or you can use NSX as the basis for your Neutron deployment. But as you can see here, you can also use vCenter datastores as the back end for Cinder and Glance, and you can think about integrating Keystone with VMware single sign-on through open source drivers, that sort of thing. Okay, so how does it work? Speaking for a moment about compute and storage: the OpenStack API uses the vSphere driver to connect to vCenter, and I'm going to pause this for a minute here. From there it's basically just a normal vSphere deployment. Let me go back just slightly. No, I won't. It's the same thing for the NSX deployment, where basically you have the NSX drivers that connect to the NSX controller, and then you work from VMware.

So let's take a look at a demo of how this actually works. What we're going to do is create a data center. Again, this is time compressed: what I've done is record the demo and cut out the boring parts where you wait for stuff. So we're going to create a data center, create a cluster in the data center, and I don't know what keeps banging, but I apologize for it. And then within there we're going to add the host, which would be your normal ESX host. At this point this is all just normal VMware stuff, so all the VMware people are probably going, why are you even showing me this? I'm doing it for a couple of reasons. One is that I want to show you that this really is just plain old VMware; we're not doing anything special at this point. But also, there are a couple of steps that are necessary for the integration. Specifically, in this demo we're going to be using Nova Network, so we need to make sure we have the switch set up with br100, and you can see there the VLAN ID of 103; we're going to use that in a minute.

So this is Fuel. I'm going to stop this for a second; boy, I don't know about you guys, but I'm getting tired trying to keep up with this thing. Fuel is the open source deployment tool that comes along with Mirantis OpenStack, which is the Mirantis distribution of OpenStack. Basically what you do here is specify what you want. Let me just go here for a minute, this is crazy. This is 4.1, so you can see where you get it: you can choose Havana, but 5.0, which will be out very shortly, lets you choose Icehouse as well. You can choose whether you want HA or not, but the important thing here is that you can choose vCenter as your hypervisor. Once you do that, you can go on and choose your other options. We're going to choose Nova Network for now; future versions will let you also deploy with NSX, but that's coming later this year. You can include Ceph, you can include other products. Obviously I recorded this before Savanna changed its name to Sahara, but we'll keep that simple for now.

So we go in and we create, and now we need to add our nodes to the cluster. We're going to say we need to add a node, and what we need is a controller, because everything else is handled by VMware. So I say I want a controller, and I look at the nodes I have available; these are auto-detected by Fuel, so I don't need to specify what each one is or anything like that.
Fuel will also let me see what the specifications of the hosts are, so I can make sure they're going to be appropriate, and I can configure things. Now, if you remember, we set the VLAN ID to 103, and here you see it on the fixed network; that's why we had to make sure we knew what it was. That's part of the way we're going to communicate between OpenStack and vCenter.

So going forward, what is that noise? Does somebody else have their mic on? All right, going forward, if we go to the settings you can see, as before, we specify that we want to use vCenter as the hypervisor, this is the IP for the vCenter server, we include the admin username and password so that we can talk to it, and, now it slows down, we also specify the cluster name that we added when we created it in vCenter. That's how we tie those two environments together. I wanted to show you that because it's great to see at a high level that, yes, these two products talk to each other, but okay, how exactly? That's why we're doing this. As you can see here, later you'll also be able to specify your NSX information, and other parameters you might want to set, so you don't need to edit configuration files and so on.

So we save those settings and go ahead and deploy the cluster, any second now. There we go. We deploy the cluster, and it tells us, oh, you need a compute node, but we don't need a compute node because VMware is going to handle that for us. It will go ahead and do the installation, and at this point I'm going to compress our time compression even further by flipping over to an already completed cluster. Oh no, not so fast. If we go over to Horizon, we can see that the vCenter server shows up as the available hypervisor, so any VMs that we create, that's where they're going to go. If we head back to vCenter and look at our data center, and look at the cluster that's associated with that OpenStack cluster, we can see we don't have any virtual machines yet: virtual machines, zero. So if we go back over to Horizon and we launch a VM, it doesn't matter what's on it, we're just going to launch a plain old empty VM for the moment, we can see that as soon as it comes up it appears over on the VMware side, in vCenter, so that we can manage it from vCenter. We can start it, do whatever we want to do from vCenter.
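For reference, the command-line equivalent of what the Horizon portion of the demo shows; the image and flavor names here are placeholders.

    # the vCenter cluster shows up as a single (large) hypervisor
    nova hypervisor-list

    # boot an instance; it will appear as a VM in the mapped vSphere cluster
    nova boot --image cirros --flavor m1.small demo-vm
    nova list

From vCenter's point of view the instance is just another VM in the cluster, so all the usual vCenter operations apply to it.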
And that brings us to six minutes. Any quick questions? Okay, great, that's my time. Thank you very much, everybody.

Well, thank you very much, Nick, for that super-compressed demo. It's pretty impressive that you can get it all done in eight minutes or less. Six minutes. So that kind of wraps it up, but I just want to say once again that as VMware our goal is to provide customers with a choice of which distribution they want to use for OpenStack on top of vSphere. I'm really excited to have all of these partner vendors working with us, and thank you very much for these super-speedy presentations; we appreciate all the great work you've done. And just one other thing: if anybody's interested in Canonical and VMware, we have people with collateral at the back, white papers on our deployments, the architectures and all the cool stuff I outlined, so enjoy yourselves.

So I think we have a little bit of a break, and then following up on this there's a talk on Congress, which is a new policy project that VMware is part of in OpenStack, and then after that there's a vSAN talk. So thanks. Thank you. Thank you. Thank you.

You're lucky I didn't start using Hyper-V terms at you; I spent 12 years at Microsoft, so it's very hard to unlearn the Microsoftisms. So today we're really going to be focusing on vSAN, which is this little yellow box here, but I just wanted to make sure you understood this broader context. This is something that we're doing as an industry and as a company: the storage industry is moving this way, VMware is moving this way, our competitors are moving this way. It just so happens that the OpenStack community, we feel, can take huge advantage of it, because the operational model and the workloads associated with OpenStack are pretty well suited to this type of storage and compute environment. So hopefully that makes sense.

So let's talk about the basics. At a very basic level, what are we talking about? The interesting thing in the vSphere context is that we've actually had storage abstraction in the product for a long time: we call it a datastore. If you're not a vSphere person, don't worry about it; a datastore is just what we use to abstract, traditionally, LUNs or disks or collections of disks. We've always had this abstraction, it's been around for a long, long time. It was convenient for us to think about random blobs of storage as these things called datastores, and we also use that exact same mechanism to abstract away implementation details, like this one's sitting on Fibre Channel and this one's sitting on NFS, and that's perfectly okay, it's all a datastore. This is not a new thing in the vSphere world.

Then we also have this notion of a VMDK. Now you might think, well, a VMDK, a virtual disk object, that's not very revolutionary, Alex. Well, it's not today, but when it was originally invented it was a pretty cool thing: the guest thinks it has a disk, a block object, but what it actually has is a file. And actually, if you dig inside ESXi, the thing we roughly refer to as a .vmdk file, because that's the original implementation, is actually not that anymore: it's a virtual disk construct that could be stored on an object store or a file system or a block device, completely abstracted away from the guest. The guest has no idea we're doing this; a disk is a disk is a disk, it just works. So this abstraction is not new, it's been around for a long, long time, but it's important to realize there's this history of abstracting away implementation detail. What we're really doing is just taking the next step, continuing to abstract away detail as we have been doing for some time.

So within vSphere there's another construct that we call SPBM, storage policy based management. This is not data plane abstraction like datastores and VMDKs; this is control plane abstraction. What we're saying is, when you ask for storage inside vSphere, tell me what class of storage you would like. Some people refer to this as t-shirt sizing, or gold, silver, bronze. What we're saying is, tell me the kind of thing you want: I want a high-performance disk that I'm going to use for OLTP transactions; I want an encrypted disk that's going to contain credit card data; I live in Japan and this VM may not leave Japan. Whatever the class of thing you want, that's what I care about.
So within vSphere, not that the implementation detail necessarily matters to an OpenStack consumer, but between us friends we'll talk about it: the way we do that is through storage policy, SPBM. This is not a new feature, it came out in vSphere 5, but what's nice is that that abstraction mates up very cleanly with things like Cinder and Nova, because Cinder and Nova don't want to know what a datastore is, they don't know what a LUN is, and they don't really want to care about the difference between a FAST-enabled Fibre Channel LUN on a VMAX and a really, really slow ZFS-based NAS that I built myself out of component parts and is lucky if it can do 10 IOPS an hour. Those things shouldn't matter to OpenStack, and the way we make them not matter in our implementation is this thing called SPBM. And I love SPBM, because I'm the PM for SPBM. Anyway, everybody has a mommy and a daddy.

The other interesting thing going on inside vSphere and VMware is that we're moving away from LUNs. One of the big trends you're seeing inside our product line is that we're attempting to move towards VM-granular management of all things, and again, this might seem like a trivial change, but if you get into the guts of the way the thing works it's a pretty big deal. Traditionally, if you look at most enterprise customers today who are deploying vSphere, what they do is take LUNs, usually large ones, two terabytes or so or larger, and preallocate into a cluster a group of LUNs, a group of datastores, and then consume against those LUNs until the LUNs are full, and then start over again. That's a pretty normal implementation model for a vSphere customer, which is cool if you only want to do one thing. But what happens if I have some VMs that need encryption and some VMs that need replication, and some need high performance and some don't, some of them are expensive and some not? See where I'm going here? Being able to carve up those LUNs into multiple classes of service, and to provide additional data services like replication and backup, becomes very complicated. Now you've got these really big buckets that you're trying to carve up into little teeny boxes, and that's actually pretty hard: try to carve up a bucket of water, you can't do it.

So instead, we're moving away from that model, towards a VM-granular management model. In a vSAN or a VVol use case, and vSAN and VVol are both relatively new features, vSAN shipped this year and VVol is going to ship next year, when you ask for storage from us you don't get a LUN; what you get is a virtual disk object. And it's actually just that: an object-based storage system. Both VVol and vSAN are object based. So you say, okay, I want this virtual disk, and here are the properties I want it to have. This is starting to sound familiar, I hope, because that's exactly the way Cinder works. So now our plumbing looks a lot more like the cloud operating model that people like OpenStack are asking for. This is not unique to OpenStack, by the way; this is exactly what people like Cloud View want, and it's what our product called vCAC, vCloud Automation Center, wants. So from a plumbing perspective, as the hypervisor we have to serve multiple masters, but for the context of this room we're talking about things like Nova and Cinder requesting virtual disks.
So when we wrote a Cinder driver last year, we made sure that that Cinder driver was based on these virtual disk objects, these VMDKs. When you get an object from Cinder using our driver, you don't get what we refer to as an RDM, a raw device mapping; you actually get a virtual disk. The reason we do that is that it future-proofs you against technologies like vSAN and VVol, which don't support raw disks.

So what's the workflow, what does it look like? Hopefully this is pretty simple and obvious to you, but I'll cover it real quick. The first thing you need to do is set up your capacity pool. In the Havana release that meant you had to make the datastores available; in the Icehouse release it means you use SPBM to discover your storage tiers. Then your cloud admin, your OpenStack admin, creates their Cinder volume types. The reason we do this is that it's actually the volume type that allows us to inject metadata into the request, through the extra specs mechanism; I have a little demo of this later so I can show you how it works. Then, when the consumer creates a volume, they select the right volume type, because that's tied to the metadata injection in the extra spec. We see the request coming down saying, okay, I want an object of this class; we use the storage policy based management infrastructure to select a container to put it in; we can set properties against it if we have to; we provision the object and then present it to the VM.

The only kind of weird thing about the implementation, and Dan mentioned this earlier in his presentation, but that was like two and a half hours ago so you may not remember, is that we actually lazy-create the virtual disk. We do not create it when you create the Cinder object, and we do that for a couple of reasons. One is that you could provision a thousand Cinder volumes and never use them, so why should I hold space on my back end that you don't need? The other reason is that once we know where you're going to put the Cinder volume, we know which datastores the VM can see. So why create it on datastore X and then immediately Storage vMotion it to datastore Y? That doesn't make any sense. Instead we see, oh, I'm going to attach it to this VM, and this VM can see these ten datastores, so I should create it on one of those ten datastores instead of making it over here and moving it. So that's the other reason we lazy-create: performance is better, and it helps us decide where to put it.

After it's created, if I detach the volume and then present it to another VM that's running on another cluster that can't see the original storage, we silently move it to a datastore that the VM can see. The vSphere feature we're using is called Storage vMotion; it doesn't really matter what we call the feature, we just silently move it in the background. It looks like you just detach it and reattach it, but actually what happens is we detach, move, and then reattach, and it all happens in the background. The question is, is that only relevant for vSAN? No, that's for any class of datastore: NFS, Fibre Channel, iSCSI, doesn't matter. Not all datastores are visible to all clusters, so there may be a case where I need to do a Storage vMotion because VM1 is on a different cluster than VM2. There are lots of reasons why I might have to do that, so the code just does that generically in the background.
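A minimal sketch of the cinder.conf settings behind the VMDK driver being described, as of the Havana/Icehouse releases; the credentials are placeholders, and exact option names may vary slightly by release.

    [DEFAULT]
    volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
    vmware_host_ip = 192.0.2.10          # vCenter server (example value)
    vmware_host_username = openstack@vsphere.local
    vmware_host_password = secret

Because the driver talks to vCenter rather than to one specific datastore, it is free to do the lazy creation and background Storage vMotion placement described above.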
So the question is, I thought vSAN was going to make it available to everyone? The answer is that vSAN is available to all members of a single cluster. If you're within a cluster, you're good; if you're moving across clusters, then you'll still have a Storage vMotion.

Okay, the other weird thing about the implementation on Cinder, just to give you the nitty gritty: because of the way vSphere works, we don't actually manage disks the way Cinder does. Cinder knows what a disk is, because that's all it does: it assigns a disk a UUID, it detaches a disk, and then some time later it comes back and says, you remember that disk I made two years ago? I want it back now. vSphere doesn't work that way. vSphere manages VMs; disks are children of VMs. So when you detach a disk from a VM, we can kind of forget about it: it may still be there, but we don't really know why it's there. So what we do is cheat, and I'll fully admit that this is a hack: we create a fake VM, a metadata-only object, and we make the Cinder volume a child of that shadow VM. The only reason we do that is so we never lose track of the disk. If you detach the disk and then come back a year from now and ask for it back, we can find it, because the name of that fake VM is the UUID of the Cinder object. It's a hack to get around the way vSphere works; this will be fixed in a future version of vSphere, but today we have to work around it. It turns out that making a VM is a relatively cheap operation, so it's not a huge deal, and we hide them in a special folder so they're not cluttering up your main view. I just want to let you know that if you see weird things in your vSphere UI, that's what they are and that's why they're there. If you delete that VM by hand, we lose our minds, so please don't do that.

So how does vSAN fit into all this, Alex? Well, I'm glad you asked. It turns out that vSAN, because it is inherently local storage, has a couple of interesting properties in the OpenStack world. One is that it's directly connected to the hypervisor, so when you scale the hypervisor, you scale the storage. One of the things about cloud, as we all know, is that cloud is all about the perception of infiniteness: in a cloud world we think the world is infinite, we pretend it's infinite. It's not, but we pretend it is, and the way we achieve the appearance of infinity is that we're able to scale very quickly and be very flexible. Well, what's one thing we know for certain about traditional SAN architectures? They don't magically appear: somebody has to install them, somebody has to set them up, and in most corporate environments that's two separate teams, so you have to plan ahead. Usually what people do is buy SAN capacity in advance, and that can get a little expensive. In this case, by bringing the storage into the cluster, every time you add a node to a cluster, or every time you add a cluster, you're automatically adding storage capacity, because compute and storage are now one thing. That, to some extent, solves the scaling and planning problem: you're adding storage in much smaller increments. Most storage arrays, and I'm talking about traditional storage arrays, not some of the new players doing these scale-out scenarios, would traditionally have a head unit, or probably a pair of head units, and then you'd scale out with shelves. If you think about it, every time you bring a new head unit online that's a pretty significant scale factor, because
you've just brought a lot of IOPS capacity online, and then you start consuming against it. More modern storage architectures don't work that way: they operate on a peer model and they scale out linearly. vSAN is like that. vSAN adds capacity with every single member added to the cluster; it doesn't have this big scale factor, you don't add 100,000 IOPS in one chunk, you add them in much smaller chunks. We are supporting this today in Cinder, as of Icehouse, and we're adding support in Nova and in Glance; actually the code is already there, we've already published it to the community, and we're just working with the reviewers to get it upstreamed.

The interesting thing about vSAN is that it was designed as a hybrid storage system from the get-go, and, for the non-storage people out there, hybrid is storage-speak for both flash and rotating media. It's kind of like the Blues Brothers joke: what kind of music do you have here? We have both kinds, country and western. So what kind of disks does vSAN support? Both kinds: flash and rotating. A vSAN node always has both a flash disk and rotating media, and in fact the minimum configuration for vSAN is three physical hosts, and each of those hosts must have two spindles: one flash, one rotating. Once I get to the architecture slide you'll understand why that's the case. So the absolute minimum number of disks you can use to build your own personal system is six: two each, in three hosts. The reason we need three hosts is that we have to have a witness. We scale up to 32 nodes, and we scale down only to three; that's the minimum.

We don't use traditional RAID; we use an array of nodes. So when we do failover and we make availability decisions, we always do it based on complete node failure. We're not striping in the RAID 5 or RAID 6 sense; we take the object and we replicate it n times, depending on the settings of the object. The interesting thing here is that the replication setting is a property of the virtual disk, not of the entire datastore. That's the other interesting thing compared to traditional storage arrays: if I wanted a high-availability LUN, I'd probably have to set that availability down at the RAID group or shelf level, and then start putting things in there because it happens to have that RAID level. In vSAN, that's not the way it works: every time I provision an object I make that decision, n plus one, n plus two, n plus three. So you could have two VMs, one hugely important and one completely unimportant, sitting on exactly the same datastore at the same time, running at completely different service levels. vSAN doesn't care; that's just built into the way vSAN works. And how do I get those different levels of service? Through storage policy, as I already said: you set the policy, apply the policy to the object, and that's how we decide whether to replicate this thing and how many stripes to make.

For those of you that are familiar with VMware terminology: we have these things called VSAs, virtual storage appliances. Very important to note, vSAN is not a VSA. vSAN is in the kernel; this is an ESXi feature, a kernel-level storage feature, extremely high-performance, high-scale, enterprise-grade storage. So don't be confused about that; for those of you that are more VMware vSphere knowledgeable, we want to make sure we're really clear about that. So there were three seemingly conflicting goals: we wanted to make something that
was hugely simple, something that was very high performance, and something that had very low TCO. What's interesting is that if you look out in the marketplace right now, it's kind of a pick-two scenario: you can have any two of these. We wanted all three at once, and to do that we had to invent a completely new way of doing storage; that's why the architecture is so different.

I mentioned this before, so I'll go quickly through this slide, but what we're saying is that the VMs themselves have individual storage policies, and those policies control the way vSAN works. Those policies can cover things like availability, striping, performance, IOPS, flash; all of those things are controlled through policy. The policy is assigned to the object when the object is created, that information is handed to vSAN, and vSAN takes the appropriate action. Note there are no LUNs here, no LUNs at all. vSAN is an object store, an extremely specialized object store that really only stores two things: VM metadata and virtual disks. That's it. Now, in theory we could have implemented a generic object store, but instead we chose to implement a very focused one, and the reason we did that is performance. We optimized heavily for a small number of extremely large objects, because we wanted to make sure we had enterprise-grade performance, and we were pretty successful; the scale limits of vSAN are quite high. You can have 32 hosts in a single vSAN cluster. Why 32, Alex? Because that's the limit for ESX; we scale to ESX's limits, that's the point, it's an ESX feature, it's not a separate thing. 3,200 VMs in one cluster, 2 million IOPS, 4.4 petabytes. That high number is not really crazy amazing until you consider that we're just running in the hypervisor: there's no storage system involved, it's just hypervisors running on local disks, and these are just regular old disks, by the way.
So I was not part of the team that built this thing, but I have to say I'm very impressed with their work. There are two ways to build these things out. Some customers come to us and say, this thing is simple, I just want a SKU, a part number I can order on the internet. Fine, no problem: that's called vSAN Ready. It's a preconfigured node with everything in it; buy it from your favorite vendor, plug it into the rack, turn it on, wire it up, and you're good to go. Some people say, no, no, no, I want that disk, I want that controller, I want that motherboard; then it's vSAN supported components, full stop. The only component of this system that's vSAN specific is the storage controller itself, and the reason for that is we need to be able to see the disks. If you have a storage controller that's doing caching, or that's abstracting disks into LUNs and things like that, vSAN is not going to work with it; you want direct access to the disks. So there is a list of storage controllers that we support with vSAN, and beyond that vSAN is just standard old ESXi.

The way you fine-tune this thing is by changing the number of SSDs in a unit, by changing their capacity, and by changing the ratio of SSD to rotating media, so you can have an extraordinarily fine-tuned experience even within a single node. I can go with two SSDs per node, or I can go with slightly larger SSDs. By default we recommend about a 10% ratio: if I have a terabyte of rotating media, then that's 100 GB of SSD. But that's just a guideline; it depends on your actual workload, and it's going to vary. Yes, sir. The question is, I thought you could only put one SSD in a host? That's not actually correct: it's one SSD per disk group, and you can have as many disk groups in a host as you'd like, and more disk groups means more throughput. By definition, a disk group is an SSD with its backing rotating media. If you just leave us in completely automatic mode, which is the default, we'll take every SSD that you have, make a new disk group for each one, and then keep adding rotating media until we run out. The question is, if I don't have any SSDs, then what? The answer is that vSAN requires SSD: you must have at least one SSD in every participating member of the cluster. Notice I said participating member of the cluster; not all members of the cluster must participate, that's not required in the vSAN infrastructure. And it's a minimum of three physical hosts, maximum of 32.

Okay, so really we're just talking about an ESX feature, and this is a screenshot of the production product. You can see it's right there along with all the other features: DRS, sure; HA, sure; vSAN, yes. Notice that down here it's grayed out, but the default is automatic mode. If you leave it in automatic mode we will self-select the disks and do everything for you; you can turn that off and manually configure it if you want to, but by default you're done, one checkbox, you're done. There is one extra little step that I didn't mention, which is that the hosts must be able to see each other over an IP network, and we recommend that to be a gigabit network. But assuming you have a fully connected cluster with high-speed interconnects, it'll just work.
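Besides the one checkbox in the vSphere client, you can sanity-check a host's vSAN state from the ESXi shell; here are a couple of commands from the 5.5-era esxcli vsan namespace, shown as an illustration rather than a complete reference.

    esxcli vsan cluster get      # is this host in a vSAN cluster, and what is its role?
    esxcli vsan storage list     # which local SSDs and magnetic disks are claimed, and into which disk groups?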
So, we talked about disk groups. A disk group is by definition an SSD plus its associated rotating media, and the reason we organize it that way is because of the way vSAN works. When you write a block, we write it to SSD, always, exclusively; we never, ever write directly to rotating media. Some time later, asynchronously, we destage that write from SSD to rotating media, and, this is the fun part, we do it based on policy designators. Some virtual disks may never get destaged, and that's perfectly fine; some disks may be destaged right away. If I read a block and it hasn't been destaged, I read it straight from flash, because it's already in flash. If it has been destaged, then I have to go hit the rotating media, and when it comes back it's cached up on the SSD tier again, so if I hit it again I'm back in cache. So we're inherently using the flash as a read-write cache all the time; the way we use it just varies depending on the class of the object.

We can take big objects, like VMDK virtual disks, and split them into component pieces, which we call stripes, and then spread those stripes among the cluster. Why do we do that? For availability and performance. When you set a rule saying this virtual disk is n plus one, what that means is that the data must be written to at least two physical nodes before the write is acknowledged to the guest. We write in parallel to two physical nodes, and when those writes commit, then and only then does the guest receive the write acknowledgment. When you read, it will try to read from the local node first; if you're striped it will grab the local stripe, but if the data isn't local it will go across the network, grab the stripe remotely, and go forward. So the guest perceives one common storage pool across the entire cluster; what's actually happening is we're taking the object and striping it across the cluster based on the rule set.

What's interesting about this is that we can scale up within a single node, or we can scale out by adding additional nodes. As we build up, we can just keep adding hard drives and virtual disks and continue to scale up, or we can scale out by adding additional nodes on demand. Not all nodes need to be the same size: you're going to get the most consistent performance if all nodes are similar, but there is no requirement that they be the same. You could have 10 terabytes on node 1 and 1 terabyte on node 2, perfectly fine; you could have three SSDs in node 1 and one SSD in node 2, that's fine. Operationally you probably want them to be similar, because that way all the VMs receive similar performance as they get moved around the cluster, but it's not a requirement. And they don't need to be from the same manufacturer: you can have a mix of HP and Dell, or racks and blades, it doesn't matter.

What that gives us is a very linear scalability factor. We scale linearly with the number of nodes: whatever one node's performance is, you take that times the number of nodes you have. So if you have eight nodes and you add another eight, you're basically doubling your performance; it's a very linear curve as the cluster size increases. From a storage perspective, that's exactly what you want. It turns out the dirty secret of storage is that if you have twice as much gear, you don't always get twice as much performance; in our case you do, because of the way we're architected.

That's a lot of stuff. Any questions? How about if we take a look at it actually working, how about that? Nobody wants to see it working? Well, I am not as brave as Dan, so I brought a recording.
So what's going to happen is, let's say we have a vSAN cluster. What you actually see is a datastore: when the cluster is enabled, you just see it as one of the many datastores that are attached. Normally when you set this up you'll build out your physical cluster, you'll add your nodes, and then you'll go in and create storage policies. Storage policies are whatever classes of storage you want to support internally; for a lot of my customers there's only one class of storage, gold, but you may have a situation where some of your VMs are more equal than others, and you may want to promise them a higher level of IOPS, or you may want more redundancy. The way you do that is through storage policies; they can be whatever you want, and they're configured by the administrator. This is just to show you what we've got here: a very simple vSAN implementation with three physical hosts.

The next thing is to create our Cinder volume types, and because we're real hairy developer types we're going to use the command line instead of the wimpy UI way, though obviously this works either way. Actually, you can tell this was done by my engineer, because it's all command line, all the time. So what we're going to do is create a gold volume type, and then the next step is to add the extra specs that allow us to connect it to the SPBM policy we saw on the previous screen. Remember, extra specs are just a delivery vehicle: you can see that the VMware extra spec is called storage profile, and it passes on the string "gold profile". If you recall from the previous screen, the policy was called gold profile, so that's what connects the two. It's a very simple mechanism, just a literal string that we're passing; as long as those two match, everything's golden.

Now we've gone forward in the video a little and created a couple of different classes. Once that's set up, though, you're really probably only going to do it once. The actual consumer experience is: you go to the web UI or you go to the command line, you request a storage object, and you just say what kind you want, and we give it to you. Again, the implementation detail underneath is completely hidden from the user. I'm not going to go all the way through this, because I'm assuming you all know how Cinder works; from this point forward we're basically talking about normal, regular Cinderisms. It appears as a volume type, you consume the volume type, nothing really amazing or special. On the back end what happens is we translate that Cinder request into a storage policy based management request, pass it down to vSAN, and create the object. So I'm going to pause here. This video is up on YouTube, so you can take a look at it, and it's also in the lab: if you want to go out and build a vSAN lab, you can do that, it's pretty straightforward.
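The command-line steps in that part of the demo boil down to something like the following; "gold" and "gold profile" are just the example names used here, and vmware:storage_profile is the extra spec key the VMDK driver reads.

    # admin, once: create the type and tie it to the SPBM policy by name
    cinder type-create gold
    cinder type-key gold set vmware:storage_profile="gold profile"

    # consumer, every time: just ask for the class of storage you want
    cinder create --volume-type gold --display-name oltp-disk 20

As long as the string in the extra spec matches the storage policy name defined in vCenter, everything's golden.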
it's a vSphere feature it's not a separate thing so from our perspective this makes a lot of sense we have a huge commitment to Opistat within VMware we have this storage product that seems to fit these use cases Opistat customers tell us this makes a lot of sense so we're seeing a lot of people take this up does this mean that we expect all of our Opistat customers to go directly to vSAN probably not the vast majority of vSphere customers today are running on sands and most of them are really happy with those sands that's great we love sands, sands are fantastic for what they do so if you are implementing Opistat and you want to carve off a piece of your existing sands and put that on Opistat it will work just great everything that I just talked about will work perfectly well against the sands infrastructure Fiber channel NFS it will still work this is just another option to look at okay so with that I think I am right up against my time and I thank you all very much for your attention and I'm happy to take questions thank you so the question is what happens if I'm running a VM on a cluster that's not a vSAN cluster can I consume vSAN storage? if I'm on the same network if I'm sitting, if I really have a great personality and I'm sitting really close no so vSAN is only managing storage within a single ESX cluster exclusively despite the name we're not actually a sands we don't support NFS we don't support external sands protocols right so if you want a centralized storage entity serving multiple clusters there's some really great products out there to do that that's not what vSAN does question, what if it's a host in that cluster if you're hosting a cluster then you can consume that storage whether you have local storage or not it's a cluster level asset that can be accessed evenly by all the members of the cluster but only the members of the cluster not across clusters but we can have a non-uniform cluster configuration that works fine now there are performance implications to non-uniform clusters so take that with a grain of salt will it work? can you consume the storage of a foreign machine? 
absolutely right question is that means I don't have to have SSDs in every single host correct keeping in mind you could have performance implications by having non-uniform access right so we can have many experience higher performance than others the other thing is if you have members of the clusters who are not participating it will limit the total number of VMs that you can support on a single cluster and the reason is because we distribute the metadata across all members that are vSAN enabled and the metadata limit is a per ESX limit so we can support 4,000 objects per ESX server but that 4,000 objects is only distributed to participating members so if you have a 16 node cluster with 8 vSAN nodes you're going to get half the scalability as a 16 node cluster that are all vSAN nodes in terms of just number of objects that we can support so there's some subtlety there if you read the vSAN deployment guide we strongly suggest that all members of the clusters participate because it's more predictable that way and it's the safest option even if it's only just 2 disks in the host so you may have a case where you have 16 members of a cluster 8 of which have 2 disks 8 of which have 10 disks that is totally fine so the classic thing is I have blades and I have rack mounts and I want the blades to participate and the answer we would say is that's fine but you probably want to go ahead and take the 2 spindles that are available in the blade have them participate even though there's relatively small amount of storage and the reason is because that way in the process they can be a witness they can store metadata they can form quorums so the design assumption is but the reason why it has to work when it's not the case is what happens if a 1 SSD and a host in the SSD fails you don't want to have it just fall down and die at that point so we have to support this mode where not all members are participating so since that already has to work you can do it by design as long as you are willing to accept the performance window that you're limiting there yeah, yes sir yeah, so you can do it the question is who's doing the scheduling and the answer is you can do either you can specify a data store and then basically Cinder is doing the scheduling but we would prefer that you just tell us what kind of object you want and let us do it because we know much more about what's going on in the data stores than Cinder does we want to have more control so we have to allow both ways yeah there's a lot of this is what we refer to as a three beers conversation about who should be doing scheduling it's more of a philosophical debate mechanically we have some advantages because we're closer to the disks there's also policy handoff when we talk to the arrays we give them policy hints which Cinder can't do so if you're not using our policy infrastructure you don't receive the advantage of the policy hints so your performance will drop so that's the other reason to use our policy infrastructure other questions yes sir no it's a persistence tier the question is what goes into that tier so it would be more accurate and if you're a storage guy think of it as dynamic auto-tearing at a block granular level or sorry that was inaccurate at a stripe granular level right are you a storage guy oh okay in the storage world those things mean things so you say auto-tearing to a storage guy as you guys lined up oh you're doing auto-tearing so what happens is that if a stripe lands on an SSD we consider that to be a right commit if it was 
only a cache layer that's not technically a commit that's a dirty buffer for us that's a commit so we consider that to be a valid commit and we report that to the guest later we may move it right so the the big S storage world like the pointy-haired storage guys that's not caching to them that's auto-tearing I think to a normal human that's the same thing but we have to use our words carefully because in the storage world that means something there's actually two factors one is how often you're assessing it but the other one is the policy that you've set for the object so some objects may have higher priority than others causing them to be we call it the elevator mechanism so you take the elevator down so you may or destaged so you may get destaged so let's say you have VM1 and VM2 VM1 is set to 100% flash VM2 is set to 0% flash they both commit a right at exactly the same time both of those rights commit to SSD the guest receives exactly the same acknowledgement at exactly the same time one millisecond later VM2's VM2's right gets destaged VM1's right is not destaged then they read the same block he gets a really fast access he gets a slow one yeah so is that caching or is that auto-tearing fine close enough so I think what I'm saying for all practical purposes the distinction between those things is not that big mechanically what's happening is different but the experience of the user, the experience of the VM is identical we're using flash for IOPS we're using rotating media for capacity right SPBM they're talking about my baby here man storage policy based management yep it's actually the policy is not how long it's percentage of object size guaranteed so it's a reservation guarantee but mechanically it's basically the same thing it's the main longer so it's expressed as percentage of object size one SSD and one rotating medium at a minimum yes yes well you can lie to us and tell us that the SSD is rotating media we wouldn't know but yes we require you will not enable a disk group unless you have at least one of each it won't work so the question is why that crazy requirement Alex this doesn't make any sense to me the reason why is because architecturally we wanted to make sure that we had a uniform D stage layer which gives you a more even performance experience the problem is if you have SSD without rotating media you have no D stage right so now architecturally we can't assume that you can take the D stage down to the rotating so architecturally we're assuming that we have two classes of disks, fast disks and slow disks right if you take the slow disk away right now we're just an all flash array those things already exist it's called pure or violin so we're just not in that business if you want the world's fastest source with ultra low latency and a million IOPS buy a violin they're really good at that yeah I don't think I said that it happens that vSAN is very well attuned to open stack workloads vSAN is not an open stack only product it's a generic storage it's a generic storage product and the reason why we use both SSD and HDD is because in our research what we found out is that the cost the cost of ownership the cost per IOPS on SSD is very low but the cost per gigabyte is extremely high HDD the opposite the cost per IOPS is high the cost per capacity is low so by combining the two you get low cost per IOPS low cost per gigabyte on the same platform so it's an architectural decision we've made you guys will tell us whether it's right or wrong because if it's wrong you won't buy 
Actually, I didn't say I wasn't going to start using Hyper-V terms at you; I spent 12 years at Microsoft, so it's very hard to unlearn the Microsoftisms. So today we're really going to be focusing on vSAN, which is this little yellow box here, but I just wanted to make sure you understood this kind of broader context. This is something that we're doing as an industry, as a company: the storage industry is moving this way, VMware is moving this way, our competitors are moving this way, and it just so happens that the OpenStack community is moving this way, because the operational model and the workloads associated with OpenStack are pretty well suited to this type of storage and compute environment. So hopefully that makes sense. So let's talk about the basics. At a very basic level, what are we talking about? Well, the interesting thing in the vSphere context is we've actually had storage abstraction for a long time in the product; we call it a data store. Traditionally LUNs or disks, we've always had this abstraction, it's been around for a long, long time. It was convenient for us to think about random blobs of storage as these things called data stores. We also use that exact same mechanism to abstract away implementation details, like this one's sitting on Fibre Channel and this one's sitting on NFS, and that's perfectly okay, it's all a data store, and a data store is a data store. We also have this notion of a VMDK. Now you might think, well, a VMDK, a virtual disk object, that's not very revolutionary, Alex. Well, it's not today, but when it was originally invented that was a pretty cool thing: the guest thinks he has a disk, a block object, but what he actually has is a file. And actually, if you dig inside of ESXi, the way it actually works is that thing that we
kind of roughly refer to as a .vmdk file, because that's the original implementation, is actually a virtual disk construct that could be stored on an object store or a file system or a block device, completely abstracted away from the guest. The guest has no idea that we're doing this; a disk is a disk is a disk, it just works. So this abstraction is not new, it's been around for a long, long time, but it's important to realize that there's this history of abstracting away implementation detail. So what we're really doing is just taking the next step and continuing to abstract away detail as we have been doing for some time. Within vSphere there's another construct that we call SPBM, storage policy based management. This is not data plane abstraction like data stores and .vmdks, this is control plane abstraction. What we're saying is, when you ask for storage inside vSphere, tell me what class of storage you would like. Some people refer to this as t-shirt sizing, or gold, silver, bronze. What we're saying is, tell me the kind of thing you want: I want a high performance disk that I'm going to use for OLTP transactions, I want an encrypted disk that's going to contain credit card data, you know, I live in Japan and this VM may not leave Japan. Whatever the class of thing you want, that's what I care about. So within vSphere, not that the implementation detail necessarily matters to an OpenStack consumer, but between us friends we'll talk about the implementation detail, the way we do that is through storage policy, SPBM. And this is not a new feature, it came out in vSphere 5, but what's nice is that that abstraction mates up very cleanly with things like Cinder and Nova, because Cinder and Nova don't want to know what a data store is, they don't know what a LUN is, they don't really want to care about the difference between a high performance FAST-enabled Fibre Channel LUN on a VMAX and a really, really slow ZFS-based NAS that I built myself out of component parts and is lucky if it can do 10 IOPS an hour. Those things shouldn't matter to OpenStack, and the way we make it not matter in our implementation is this thing called SPBM, and I love SPBM because I'm the PM for SPBM. Anyway, everybody has a mommy and a daddy. So the other interesting thing going on inside vSphere and VMware is that we're moving away from LUNs. One of the big trends you're seeing inside of our product line is that we're attempting to move towards VM granular management of all things, and again, this might seem like a trivial change, but actually if you get into the guts of the way the thing works it's a pretty big deal. Traditionally, if you look at most enterprise customers today who are deploying vSphere, what they do is they take LUNs, usually large ones, you know, 2 terabytes or so or larger, and then they preallocate into the cluster a group of LUNs, a group of data stores, and then they consume against those LUNs until the LUNs are full, and then they just start over again. That's a pretty normal implementation model in a vSphere customer, which is cool if you only want to do one thing, but what happens if I have some VMs that need encryption and some VMs that need replication and some need high performance and some don't? See where I'm going here? Being able to carve up those LUNs into multiple classes of service and to provide additional data services like replication and backup becomes very complicated.
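As a rough illustration of the request-by-class idea being contrasted with pre-carved LUNs, here is a minimal Python sketch. Everything in it (the `DATASTORES` dictionary, the `POLICIES` names, the `pick_datastore` helper) is invented for this example and is not the SPBM or Cinder data model; it only shows the shape of the idea: the consumer names a class of storage, and placement is worked out against advertised capabilities.

```python
# Purely illustrative sketch of policy-based placement; these structures are
# invented for the example and are not the actual SPBM or Cinder data model.

# Each datastore advertises capabilities instead of being hand-picked as a LUN.
DATASTORES = {
    "ds-vsan-01": {"encryption": False, "replicated": True,  "tier": "ssd-cached"},
    "ds-nfs-01":  {"encryption": False, "replicated": False, "tier": "capacity"},
    "ds-fc-01":   {"encryption": True,  "replicated": True,  "tier": "performance"},
}

# Storage classes ("t-shirt sizes") describe what the consumer wants.
POLICIES = {
    "gold":   {"replicated": True, "tier": "performance"},
    "silver": {"replicated": True},
    "bronze": {},
}

def pick_datastore(policy_name):
    """Return the first datastore whose capabilities satisfy the policy."""
    wanted = POLICIES[policy_name]
    for name, caps in DATASTORES.items():
        if all(caps.get(key) == value for key, value in wanted.items()):
            return name
    raise LookupError(f"no datastore satisfies policy {policy_name!r}")

if __name__ == "__main__":
    print(pick_datastore("gold"))    # ds-fc-01
    print(pick_datastore("silver"))  # ds-vsan-01
```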
So now you have these really big buckets that you're trying to carve up into little teeny boxes, and that's actually pretty hard; try to carve up a bucket of water, you can't do it. So instead what we're doing is we're moving away from that model, we're moving towards a VM granular management model. In a vSAN or a vVol use case, and vSAN and vVols are both features that are relatively new, vSAN shipped this year and vVols is going to ship next year, what you do is, when you ask for storage from us you don't get a LUN, what you get is a virtual disk object, and it's actually just that, it's an object based file system, sorry, object based storage system; both vVols and vSAN are object based. So you say, okay, I want this virtual disk and here are the properties I want it to have. This is starting to sound familiar, I hope, because that's exactly the way Cinder works. So now what's happening is that our plumbing looks a lot more like the cloud operating model that people like OpenStack are asking for. Now this is not unique to OpenStack, by the way; this is exactly what people like Cloudvio want, and this is what our product called vCAC, vCloud Automation Center, wants too. So, you know, from a plumbing perspective, as the hypervisor we have to serve multiple masters, but for the context of this room we're talking about things like Nova and Cinder requesting virtual disks. So when we wrote a Cinder driver last year, we made sure that that Cinder driver was based on these virtual disk objects, these VMDKs. So when you get an object from Cinder using our driver, you don't get what we refer to as an RDM, a raw device map, you actually get a virtual disk, and the reason why we do that is because that future-proofs you against technologies like vSAN and vVols, which don't support raw disks. So that's the reason why we did that. So what's the workflow, what does it look like? Hopefully this is pretty simple and obvious to you guys, but I'll just cover it real quick. The first thing you need to do is you need to set up your capacity pool; in the Havana release that meant you had to make the data stores available, in the Icehouse release what that means is that you're going to use SPBM to discover your storage tiers, basically. Then your cloud admin, your OpenStack admin, creates their Cinder volumes, the volume types, excuse me. The reason why we do this is because it's actually the volume type that allows us to inject metadata into the ask, into the request, through the extra specs mechanism; I have a little demo of this later so I can show you how this works. And then when the consumer creates a volume, they select the Cinder volume type, because that's tied to the metadata injection in the extra spec, we see the request coming down saying I want an object of this class, we use the storage policy based management infrastructure to select a container to put it in, we can set properties against it if we have to, at the same time we provision the object and then present it to the VM. The only kind of weird thing about the implementation, and Dan mentioned this earlier in his presentation but that was like two and a half hours ago so you may not remember, is that we actually lazy create the virtual disk, we do not create it when you create the Cinder object, and we do that for a couple of reasons. One is because you could provision a thousand Cinder volumes and never use them, so why should I have space on my back end that you don't need. The other reason is, when we know where to put the Cinder volume, then we know what data stores the VM can see, so why create it on Datastore X and then
immediately Storage vMotion it to Datastore Y, that doesn't make any sense. So we know, oh, I'm going to attach it to this VM and this VM can see these 10 data stores, maybe I should make it on one of those 10 data stores instead of making it over here and moving it. So that's one of the other reasons why we lazy create: performance is better, and it helps us decide where to put it after it's created. If I detach the volume and then present it to another VM that's running on another cluster that can't see the local storage, then we silently move it to a data store that the VM can see, and the vSphere feature we're using is called Storage vMotion. It doesn't really matter what we call the feature, we just silently move it in the background, so it looks like you just detach it and reattach it, but actually what happens is we detach and move and then reattach, and that now happens in the background. The question is, is that only relevant for vSAN? No, that's for any data store, any class of data store, NFS, Fibre Channel, iSCSI, it doesn't matter; not all data stores are visible to all clusters, so there may be a case where I need to do a Storage vMotion because VM1 is on a different cluster than VM2, lots of reasons why I might have to do that, so the code just does that generically in the background. The question is, I thought vSAN was going to make it available to all? The answer is that vSAN is available to all members of a single cluster, so if you're within a cluster you're good, if you're moving cross clusters then we still have to do a Storage vMotion. Okay, the other weird thing about the implementation on Cinder, just to give you kind of the nitty gritty, is that because of the way vSphere works we don't actually manage disks like Cinder does. Cinder knows what a disk is because that's all it does: it creates it, and then it detaches the disk, and then sometime later it comes back and says, remember that disk I made like two years ago, yeah, I want it back now. vSphere doesn't work that way; vSphere manages VMs, disks are children of VMs, so when you detach a disk from a VM we can kind of forget about it, it may still be there but we don't really know why it's there. So what we do is we cheat, and I'll fully admit that this is a hack, but we make it work: we create a fake VM and we make the Cinder volume a child of that shadow VM, and the only reason why we do that is so that we don't lose track of the disk, ever. So if you detach the disk and then come back a year from now and ask for it back, we can find it, and the reason is because the name of that fake VM is the GUID of the Cinder object, so we can always find it. So just a little bit of a hack to get around the way vSphere works; this will be fixed in a future version of vSphere, but today we have to hack around it. It turns out that making a VM is a relatively cheap operation, so it's not a huge deal, and we hide them in a special folder so they're not cluttering up your main stuff, but I just want to let you know, if you see weird things in your vSphere UI, that's what it is and that's why it's there. If you delete that VM by hand we lose our minds, so please don't do that.
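Here is a rough Python sketch of the lazy-create and move-on-attach behavior just described. All of the names (`Volume`, `VM`, `attach`, `detach`) are invented for illustration; this is not the actual Cinder VMDK driver code, just the shape of the logic under the stated assumptions that the backing disk is created on first attach and relocated when the target VM cannot see its current datastore.

```python
# Rough sketch of the "lazy create" behavior described above. All names here
# are invented for illustration; this is not the actual Cinder VMDK driver code.
import uuid

class Volume:
    def __init__(self, size_gb):
        self.id = str(uuid.uuid4())   # the Cinder volume's ID
        self.size_gb = size_gb
        self.backing = None           # no virtual disk exists yet
        self.datastore = None

class VM:
    def __init__(self, name, visible_datastores):
        self.name = name
        self.visible_datastores = set(visible_datastores)

def attach(volume, vm):
    """Create the backing disk only on first attach, on a datastore the VM can see."""
    if volume.backing is None or volume.datastore not in vm.visible_datastores:
        # If a backing already exists elsewhere, the real feature used to move
        # it silently in the background is Storage vMotion.
        volume.datastore = sorted(vm.visible_datastores)[0]  # placeholder placement
        volume.backing = f"[{volume.datastore}] {volume.id}.vmdk"
    return volume.backing

def detach(volume):
    """On detach, park the disk under a shadow VM named after the volume ID
    so it can always be found again later."""
    return f"shadow-vm-{volume.id}"

if __name__ == "__main__":
    vol = Volume(size_gb=10)              # 'cinder create' returns immediately...
    vm1 = VM("vm1", ["ds-a", "ds-b"])
    print(attach(vol, vm1))               # ...the VMDK only exists after the attach
    print(detach(vol))
    vm2 = VM("vm2", ["ds-c"])             # different cluster, different datastores
    print(attach(vol, vm2))               # disk is moved to a datastore vm2 can see
```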
So how does vSAN fit into all this, Alex? Well, I'm glad you asked me that. It turns out that vSAN, because it is inherently local storage, has a couple of interesting properties in the OpenStack world. One is it's directly connected to the hypervisor, so when you scale the hypervisor you scale the storage, and one of the things about cloud, as we all know, is that cloud is all about the perception of infiniteness. In a cloud world we think the world is infinite, we pretend like it's infinite, it's not, but we pretend like it is, and the way we achieve the appearance of infinity is we simply are able to scale very quickly and be very flexible. Well, what's one thing we know for certain about traditional SAN architectures? They don't magically appear; somebody has to install them, somebody has to set them up, usually in most corporate environments that's two separate teams, so you have to plan ahead, buy SAN capacity in advance, pre-provision it, and that can get a little expensive. In this case, by bringing the storage into the cluster, what's happening is every time you add a node to a cluster, or every time you add a cluster, you're automatically adding storage capacity, because compute and storage are now one thing, so that to some extent solves that scaling and planning problem. I'm also adding storage in much smaller increments. Most storage arrays, and now I'm talking about traditional storage arrays, not some of the new guys that are doing these scale out scenarios, but traditionally storage would have a head unit, or probably a pair of head units, and then you'd scale out with shelves. If you think about it, every time you bring a new head unit on, that's a pretty significant scale factor, because you just brought in a lot of IOPS capacity and then you start consuming against that as you add the shelves. More modern storage architectures don't work that way, they scale out linearly, and vSAN is like that: vSAN adds capacity with every single member of the cluster added, it doesn't have this big scale factor, you don't add 100,000 IOPS in one chunk, you're adding them in much smaller chunks. So we are supporting this today in Cinder as of Icehouse, and we're adding support in Nova and in Glance; actually the code is already there, we've already published it to the community and we're just working with the reviewers to get it upstreamed. The interesting thing about vSAN is that vSAN was designed as a hybrid storage system from the get go, and again, for the non-storage people out there, hybrid is kind of storage speak for both flash and rotating media. It's kind of like the Blues Brothers joke: what kind of music do you have here? We have both kinds, country and western. So the question is, what kind of disks does vSAN support? Both kinds, flash and rotating. The vSAN node always has both a flash disk and rotating media, and in fact the minimum configuration for vSAN is three physical hosts, and each one of those hosts must have two spindles, one flash, one rotating, and once I get into the architectural slide you'll understand why that's the case. So the absolute minimum number of disks that you can use to build your own personal system is six, right, two each and three hosts. The reason why we need three hosts is because we have to have a witness. We scale up to 32 nodes, but we scale down only to three, that's the minimum. We don't use traditional RAID, we use an array of nodes, so when we do failover and we do availability metrics, we always do it based on complete node failure.
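To make the configuration constraints just quoted easy to check, here is a small sanity-check helper. It is invented for illustration (not a VMware tool) and only encodes the numbers stated in the talk: 3 to 32 hosts, and at least one SSD and one rotating disk in every participating host.

```python
# Sanity-check helper based on the constraints stated in the talk
# (3 to 32 hosts, each participating host needs at least one SSD and one
# rotating disk). Invented for illustration; not a VMware tool.

def validate_vsan_cluster(hosts):
    """hosts: list of dicts like {"name": ..., "ssds": int, "hdds": int}."""
    errors = []
    if not 3 <= len(hosts) <= 32:
        errors.append(f"cluster has {len(hosts)} hosts, needs between 3 and 32")
    for host in hosts:
        if host["ssds"] < 1 or host["hdds"] < 1:
            errors.append(f"{host['name']}: every participating host needs >=1 SSD and >=1 HDD")
    return errors

if __name__ == "__main__":
    cluster = [
        {"name": "esx-01", "ssds": 1, "hdds": 1},
        {"name": "esx-02", "ssds": 1, "hdds": 1},
        {"name": "esx-03", "ssds": 1, "hdds": 1},   # minimum: 3 hosts, 6 disks total
    ]
    print(validate_vsan_cluster(cluster) or "minimum supported configuration")
```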
So we're not striping with parity, we're not using RAID 5, we're not using RAID 6; we take the object and we replicate the object n times, depending on the settings of the object. The interesting thing here is that that replication setting, that availability setting, is actually a property of the virtual disk, not the entire data store. That's the other interesting thing about traditional storage arrays: if I wanted to have a high availability LUN, I'd probably have to set that availability down at the RAID group or shelf level, and then I start putting things in there because it happens to have that RAID level. In vSAN that's not the way it works; every time I provision an object I make that decision, do I want n availability, n plus one, n plus two, n plus three. So you could have two VMs, one mission critical and one completely unimportant, sitting on exactly the same data store at the same time, running at completely different service levels; vSAN doesn't care, that's just built into the way vSAN works. And how do I get that different level of execution? Through storage policy, as I already said: you set the policy, apply the policy to the object, and that's how we decide, do I replicate this thing, how many stripes do I make. So for those of you that are familiar with the VMware nomenclature, we have these things called VSAs, virtual storage appliances; very important to note, vSAN is not a VSA, vSAN is in the kernel, this is an ESXi feature, this is a kernel level storage feature, extremely high performance, high scale, enterprise grade storage, so don't be confused about that. If you don't know what a VSA is, don't worry about it; if you're knowledgeable, we want to make sure we're really clear about that. So there were three seemingly conflicting goals: we wanted to make something that was hugely simple, we wanted to make something that was very high performance, and we wanted to make something that had very low TCO. What's interesting is, if you look out in the marketplace right now, it's kind of a pick-two scenario, you can have any two of those; we wanted to have all three at once, and to do that we had to invent a completely new way of doing storage, so that's why the architecture is so different. I mentioned this before, so I'll go real quick through this slide, but what we're saying is that the VMs themselves have individual storage policy, and those policies control the way vSAN works; those policies can concern things like availability, striping, performance, use of flash, all those things are controlled through policy. So when the object is created, and when I say object I mean in this case a virtual disk, that information is handed to vSAN and then vSAN takes appropriate action. Note there's no LUNs here, no LUNs at all; vSAN is an object store, an extremely specialized object store that really only stores two things, it stores VM metadata and virtual disks, that's it. We could have implemented a generic object store, but instead we chose to implement a very, very, very focused object store, and the reason why we did this is for performance: we're highly optimized to a small number of extremely large objects, because we wanted to make sure that we had enterprise grade performance, and we were pretty successful; the scale limits of vSAN are quite high.
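Going back to the per-virtual-disk policies described a moment ago, here is a minimal sketch of what "two VMs on the same datastore at different service levels" looks like in code. The field names (`failures_to_tolerate`, `stripe_width`, `flash_reservation_pct`) approximate the knobs mentioned in the talk, availability, striping and use of flash; they are not the exact SPBM capability identifiers.

```python
# Illustration of per-object (per virtual disk) policies on the same datastore.
# Field names approximate the knobs mentioned in the talk; they are not the
# exact SPBM capability identifiers.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    failures_to_tolerate: int   # "n plus one" means tolerate 1 node failure
    stripe_width: int           # how many stripes to spread across the cluster
    flash_reservation_pct: int  # how much of the object to keep on SSD

@dataclass
class VirtualDisk:
    name: str
    policy: StoragePolicy

    def replica_count(self):
        # Data must land on (failures_to_tolerate + 1) nodes before a write
        # is acknowledged to the guest.
        return self.policy.failures_to_tolerate + 1

# Two VMs on the same vSAN datastore, running at different service levels.
critical = VirtualDisk("vm-critical.vmdk", StoragePolicy(2, 4, 100))
scratch  = VirtualDisk("vm-scratch.vmdk",  StoragePolicy(0, 1, 0))

for disk in (critical, scratch):
    print(disk.name, "-> copies on", disk.replica_count(), "node(s)")
```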
So you can have 32 hosts in a single vSAN cluster. Why 32, Alex? Because that's the limit for ESX; we scale to ESX's limits, that's the point, it's an ESX feature, it's not a separate thing. 3,200 VMs in one cluster, 2 million IOPS, 4.4 petabytes. Now that petabyte number is not really crazy amazing until you consider that we're just running in the hypervisor; there's no storage system involved, this is just hypervisors running on local disks, and these are just regular old disks, by the way. I was not part of the team that built this thing, but I have to say I'm very impressed with their work. There are two ways to build these things out. Some customers come to us and they say, look, Alex, we really want something simple, I just want a SKU; fine, no problem, it's called vSAN Ready, so you go in, it's a pre-configured node, it's got everything in it, buy it from your favorite vendor, plug it into the rack, turn it on, wire it up, you're good to go. Some people are like, no, no, no, I want that disk, I want that controller, I want that motherboard; fine, no problem, as long as it's on the vSphere compatibility list, then it is vSAN supported. The other component of this system that's vSAN specific is the storage controller itself, and the reason for that is we need to be able to see the disks, so if you have a storage controller that's doing caching, or it's abstracting disks into LUNs and things like that, vSAN is not going to work with that, you want to have direct access to the disks. So there is a list of storage controllers that we support in vSAN, but every other component of the system is just standard old ESXi. The way you fine-tune this thing is by changing the number of SSDs in a unit, by changing their capacity, by changing the ratio of SSD to rotating media, and so you can have an extraordinarily fine-tuned experience even within a single head unit, so I can go with two SSDs per head unit or I can go with slightly larger SSDs per head unit. By default we recommend about a 10% ratio, so if I have a terabyte of rotating media then it's 100GB of SSD, but that's just a guideline, it depends on your actual workload. The question is, I thought you could only put one SSD in a host? That's not actually correct, it's one SSD per disk group, and you can have as many disk groups in a host as you'd like, and more disk groups means more throughput. By definition, a disk group is an SSD; basically what we mean by a disk group is an SSD with its backing rotating media, so if you just leave us in completely automatic mode, which is the default, we'll take every SSD that you have, make a new disk group for each one, and then keep adding rotating media until we run out. The question is, what if I don't have any SSDs, then what? The answer is vSAN requires SSD; you must have at least one SSD in every participating member of the cluster, and notice I said participating member of the cluster, not all members of the cluster must participate, that's not required in the vSAN infrastructure, and a minimum of three physical hosts, minimum of three, maximum 32. Okay, so really we're just talking about an ESX feature, and this is a screenshot of the production product; you can see that, just along with all the other features, DRS, sure, HA, sure, vSAN, yes. Now notice that down here it's grayed out, but the default is automatic mode; if you leave it in automatic mode we will self-select the disks and we'll do everything for you, you can turn that off, you can manually configure it if you want to, but by default you're done, one checkbox, you're done.
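To put that sizing guideline and the automatic disk-group behavior in concrete terms, here is a back-of-the-envelope helper. It is invented for illustration, not a VMware sizing tool; the only facts it encodes are the roughly 10% flash-to-rotating guideline and the "one SSD per disk group, rotating disks claimed until they run out" behavior described above.

```python
# Back-of-the-envelope helper for the ~10% flash guideline and the automatic
# disk-group behavior described above. Invented for illustration.

def recommended_flash_gb(rotating_gb, ratio=0.10):
    """Default guideline from the talk: flash capacity ~= 10% of rotating capacity."""
    return rotating_gb * ratio

def auto_disk_groups(ssds, hdds):
    """One disk group per SSD; spread the rotating disks across the groups."""
    groups = [{"ssd": ssd, "hdds": []} for ssd in ssds]
    for index, hdd in enumerate(hdds):
        groups[index % len(groups)]["hdds"].append(hdd)
    return groups

if __name__ == "__main__":
    print(recommended_flash_gb(1000))           # 1 TB rotating -> ~100 GB of SSD
    print(auto_disk_groups(["ssd0", "ssd1"],
                           ["hdd0", "hdd1", "hdd2", "hdd3"]))
```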
There is one extra little step that I didn't mention, and that is that the hosts must be able to see each other over an IP network, and we recommend that to be a gigabit network; a fully connected cluster that has high speed interconnects, and it'll just work. So we talked about disk groups. Disk groups are, by definition, an SSD and its associated rotating media, and the reason why we do this is because of the way vSAN works: when you write a block, what we actually do is we write it to SSD, always, exclusively, we never ever ever ever write to rotating media, we always write to SSD. Sometime later, asynchronously, we will destage that write from SSD to rotating media, and this is the fun part, based on policy designators, so some virtual disks may never get destaged, that's perfectly fine, some disks may be destaged right away. So when I read a block, if I haven't been destaged, I go right from flash, because I'm already in flash; if I have been destaged, then I have to go hit the rotating media, then when it comes back it's cached up on the SSD tier again, and if I hit it again I'm back in cache. So we're inherently using the flash as a read-write cache all the time; the way we use it, though, varies depending on the class of the object we're talking about. We can take big objects like VMDK virtual disks, split them into component pieces, we call those stripes, and then we can spread those stripes amongst the cluster, and why do we do that? We do that for availability and performance. When you set a rule, you say this virtual disk is n plus one, what that means is that data must be written to at least two physical nodes before the write is committed to the guest, so we will write it in parallel to two physical nodes, and when those writes commit, then and only then, the guest receives a write commit. When you read, it'll try to read from the local node first; if you're striped it'll grab the local stripe, but if it's not there, it'll go across the network, grab the stripe remotely, and then go forward. So the guest perceives this common storage pool across the entire cluster; what's actually happening, though, is we're taking the object, we're striping it up, and we're pushing it down across the cluster based on the rule set. What's interesting about this is that we can scale up in a single node, or we can scale out by adding additional nodes, so as we build up we can just keep adding hard drives or keep adding virtual disks and continue to scale up, or we can just scale out by adding additional nodes on demand. Not all nodes need to be the same size; you're going to get the most consistent performance if your nodes are similar, but there is no requirement that they're the same, so you could have 10 terabytes on node 1 and 1 terabyte on node 2, perfectly fine, you could have 3 SSDs in node 1 and 1 SSD in node 2, that's fine. Operationally you probably want them to be similar, because that way all the VMs will receive similar performance as they get moved around the cluster, but that's not a requirement, and they don't need to be from the same manufacturer, you can have a mix of HP and Dell, or you can have racks and blades, it doesn't matter. And what that gives us is a very linear scalability factor: we are scaling linearly based on the number of nodes, so whatever that node's performance is, you take that times the number of nodes you have, so if you have 8 nodes and then you add an additional 8 nodes, you're basically doubling your performance; it's a very linear curve as the cluster size increases, and from a storage perspective that's exactly what you want. It turns out the dirty secret of storage is, if you have twice as much gear you don't always get twice as much performance, but in our case we do, because of the way we're architected.
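Here is a toy model of the read/write path just described: writes always land on SSD and are acknowledged there, destaging to rotating media happens later and only if policy says so, and reads are served from flash when the block is there and re-cached when it is not. This is purely illustrative and is not the actual vSAN data path.

```python
# Toy model of the read/write path described above. Purely illustrative;
# not the actual vSAN data path.

class HybridTier:
    def __init__(self):
        self.ssd = {}   # block -> data (write buffer + read cache)
        self.hdd = {}   # block -> data (capacity tier)

    def write(self, block, data):
        self.ssd[block] = data          # commit happens here, never on the HDD
        return "ack"                    # the guest gets its write acknowledgement now

    def destage(self, block):
        # Later, asynchronously, and only if policy says so for this object.
        if block in self.ssd:
            self.hdd[block] = self.ssd.pop(block)

    def read(self, block):
        if block in self.ssd:           # still (or again) in flash: fast path
            return self.ssd[block], "from ssd"
        data = self.hdd[block]          # slow path: hit rotating media...
        self.ssd[block] = data          # ...then cache it back up on the SSD tier
        return data, "from hdd, now cached"

tier = HybridTier()
tier.write("blk-1", b"hello")
print(tier.read("blk-1"))   # served from flash
tier.destage("blk-1")
print(tier.read("blk-1"))   # first read after destage hits rotating media
print(tier.read("blk-1"))   # back in cache again
```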
That's a lot of stuff. Any questions? How about if we take a look at it actually working, how about that? Nobody wants to see it working? So, I am not as brave as Dan, so I brought a recording. What's going to happen is, let's say we have a vSAN cluster; what you actually see is a data store, and when the cluster is enabled you just see it as one of the many data stores that are attached. Normally when you set this up, you'll build out your physical cluster, you'll add your nodes, and then you'll go in and create storage policies, and storage policies are going to be whatever classes of storage you internally want to support. For a lot of my customers there's only one class of storage, gold, but you may have a situation where some of your VMs are more equal than others, and you may want to promise them a higher level of IOPS, or you may want to have more redundancy, and the way you do that is through storage policies; storage policies can be whatever you want, and they're configured by the administrator. This is just showing you what we've got here: we've got the very simple vSAN implementation, it's got three physical hosts. The next thing is we need to create our Cinder volume types, and because we're real hairy developer types we're going to use the command line instead of the wimpy UI way, but obviously this works either way; you can tell that this was done by my engineer, because it's all command line, all the time. So what we're going to do is create a gold volume type, and then the next step is to add the extra specs that allow us to connect this to the SPBM policy that we saw on the previous screen. Remember what we said, extra specs is just a delivery vehicle, and you can see that the VMware extra spec is called storage profile and it passes on the string called gold profile; if you recall from the previous screen, remember it was called gold profile, so that's what connects it. It's a very simple mechanism, it's just a literal string that we're passing, and as long as those two match, everything's golden. So now we've gone forward in the video a little bit here and we've created a couple of different classes. Now that that's set up, though, you're really probably only going to do that once; the actual consumer experience is much simpler. The consumer experience is, you go to the website or you go to the command line, you request a storage object, and you just say what kind you want, and then we give it to you; again, the implementation detail underneath is completely hidden from the user. I'm not going to go all the way through this, because I'm assuming you guys all know how Cinder works, so from this point forward we're basically talking about normal, regular Cinderisms: it appears as a volume type, you consume the volume type, nothing really amazing or special. On the back end, we translate that Cinder request into a storage policy based management request, we pass that down to vSAN, and we create the object. So I'm going to just go ahead and pause here; this video is up on YouTube so you can take a look at it, and also it's in the lab, if you want to go out and build a vSAN lab you can do that, it's pretty straightforward.
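The recording walks through this with the cinder CLI; the sketch below shows roughly the same flow with python-cinderclient instead. The credentials, the auth URL, the volume size, and the profile name "Gold Profile" are placeholders, and the extra-spec key follows the "storage profile" naming described in the talk, so check the VMDK driver documentation for your release before relying on the exact key.

```python
# The same flow the recording walks through, sketched with python-cinderclient.
# Credentials, auth URL, and the "Gold Profile" string are placeholders.
from cinderclient import client

cinder = client.Client("1", "admin", "secret", "demo",
                       "http://keystone.example.com:5000/v2.0")

# 1. Admin: create a volume type and tie it to the SPBM policy via extra specs.
gold = cinder.volume_types.create("gold")
gold.set_keys({"vmware:storage_profile": "Gold Profile"})  # must match the SPBM policy name

# 2. Consumer: just ask for a volume of that class; the backend details stay hidden.
vol = cinder.volumes.create(10, volume_type="gold", display_name="demo-volume")
print(vol.id, vol.status)
```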
Okay, so in summary, what we've seen from OpenStack customers, what customers are telling us, is that we really need low cost, high performance storage here; we don't need high end replication solutions, we don't need synchronous replication, we don't need offline snapshots, we don't need all these fancy things, we need something that's performant, it's stable, it's low cost, and it runs on commodity hardware. Surprise, that's exactly what vSAN is, it's all of those things, it's very simple to deploy and operate, and from our perspective the best part is it's integrated with vSphere, it's a vSphere feature, it's not a separate thing. So from our perspective this makes a lot of sense: we have a huge commitment to OpenStack within VMware, we have this storage product that seems to fit these use cases, and when we talk to customers about this, what they tell us is, yeah, this makes a lot of sense. Does this mean that we expect all of our OpenStack customers to go directly to vSAN? Probably not. The vast majority of vSphere customers today are running on SANs, and most of them are really happy with those SANs; that's great, we love SANs, SANs are fantastic for what they do. So if you are implementing OpenStack in your production environment and you want to carve off a piece of your existing SAN and put that on OpenStack, it will work just great; everything that I just talked about will work perfectly well against the SAN infrastructure, Fibre Channel, iSCSI, NFS, it will still work, this is just another option to look at. So with that I think I am right up against my time, and I thank you all very much for your attention, and I am happy to take questions, thank you. So the question is, what happens if I'm running a VM on a cluster that's not a vSAN cluster, can I consume vSAN storage? No. What if I'm on the same network? What if I have a great personality and I'm sitting really close? No. vSAN is only managing storage within a single ESX cluster, only, exclusively; despite the name we are not actually a SAN, we don't support NFS, we don't support iSCSI, we don't support external SAN protocols, so if you want a centralized storage entity serving multiple clusters, there are some really great products out there to do that, that's not what vSAN does. Question: what if it's a host in that cluster? If you're a host in the cluster, then you can consume that storage, whether you have local storage or not; it's a cluster level asset that can be accessed evenly by all the members of the cluster, but only the members of the cluster, not across clusters. But we can have a non-uniform cluster, that works fine; now there are performance implications to non-uniform clusters, so take that with a grain of salt. Will it work, can you consume the storage of a foreign machine?
Absolutely. The question is, does that mean I don't have to have SSDs in every single host? Correct, keeping in mind you could have performance implications by having non-uniform access, so some VMs may experience higher performance than others. The other thing is, if you have members of the cluster who are not participating, it will limit the total number of VMs that you can support on a single cluster, and the reason is because we distribute the metadata across all members that are vSAN enabled, and the metadata limit is a per-ESX limit: we can support 4,000 objects per ESX server, but that 4,000 objects is only distributed to participating members. So if you have a 16 node cluster with 8 vSAN nodes, you're going to get half the scalability of a 16 node cluster that's all vSAN nodes, in terms of just the number of objects that we can support. So there's some subtlety there; if you read the vSAN deployment guide, we strongly suggest that all members of the cluster participate, because it's more predictable that way and it's the safest option, even if it's only just two disks in the host. So you may have a case where you have 16 members of a cluster, 8 of which have 2 disks and 8 of which have 10 disks; that is totally fine. The classic thing is, I have blades and I have rack mounts and I want the blades to participate, and the answer we would say is, that's fine, but you probably want to go ahead and take the two spindles that are available in the blade and have them participate, even though it's a relatively small amount of storage, and the reason is because that way they can participate in the process: they can be a witness, they can store metadata, they can form quorums. So the design assumption is that most of them participate, but the reason why it has to work when that's not the case is, what happens if I have one SSD in a host and the SSD fails? You don't want to have it just fall down and die at that point, so we have to support this mode where not all members are participating, and since that already has to work, you can do it by design, as long as you are willing to accept the performance limitation that you're accepting there. Yes, sir. So the question is, who is doing the scheduling, and the answer is, you can do either: you can specify a data store and then basically Cinder is doing the scheduling, but we would prefer that you just tell us what kind of object you want and let us do it, because we know much more about what's going on in the data stores than Cinder does, but some people want to have more control, so we have to allow both ways. This is what we refer to as a three-beers conversation, about who should be doing scheduling; it's more of a philosophical debate. Mechanically we have some advantages because we are closer to the disks; there is also policy handoff, when we talk to the arrays we give them policy hints, which Cinder can't do, so if you are not using our policy infrastructure you don't receive the advantage of the policy hints, so your performance will drop, and that's the other reason to use our policy infrastructure. Other questions?
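To put rough numbers on the per-host object limit discussed a couple of questions back, here is a tiny helper. It is invented for illustration; the only figure it uses is the 4,000 objects per participating ESX host quoted in the talk, and the point it demonstrates is that the limit scales with participating hosts, not with cluster size.

```python
# Rough arithmetic for the per-host object limit quoted above.
OBJECTS_PER_PARTICIPATING_HOST = 4000

def max_objects(total_hosts, participating_hosts):
    assert participating_hosts <= total_hosts
    return participating_hosts * OBJECTS_PER_PARTICIPATING_HOST

print(max_objects(16, 16))  # 64000 objects: all 16 hosts contribute metadata capacity
print(max_objects(16, 8))   # 32000 objects: half the scalability, as described above
```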
No, it's a persistence tier. The question is what goes into that tier. It would be more accurate, if you're a storage guy, to think of it as dynamic auto-tiering at a block granular level, sorry, that was inaccurate, at a stripe granular level. Are you a storage guy? Okay, in the storage world those words mean things, so you say auto-tiering to a storage guy and he says, oh, you're doing auto-tiering. So what happens is, if a stripe lands on an SSD, we consider that to be a write commit; if it were only a cache layer, that's not technically a commit, that's a dirty buffer, but for us that's a commit, so we consider that to be a valid commit and we report that to the guest, and later we may move it. So in the big-S Storage world, the pointy-haired storage guys, that's not caching, to them that's auto-tiering; I think to a normal human that's the same thing, but we have to use our words carefully, because in the storage world that means something. There's actually two factors: one is how often you're accessing it, but the other one is the policy that you've set for the object, so some objects may have higher priority than others, causing them to be destaged later, we call it the elevator mechanism, so you take the elevator down, or you may get destaged right away. So let's say you have VM1 and VM2; VM1 is set to 100% flash, VM2 is set to 0% flash, they both commit a write at exactly the same time, both of those writes commit to SSD, and both guests receive exactly the same acknowledgement at exactly the same time. One millisecond later, VM2's write gets destaged, VM1's write is not destaged; then they read the same block, and one gets a really fast access and one gets a slow one. Yeah, so is that caching or is that auto-tiering? Fine, close enough. So I think what I'm saying is, for all practical purposes the distinction between those things is not that big; mechanically what's happening is different, but the experience of the user, the experience of the VM, is identical: we're using flash for IOPS, we're using rotating media for capacity. SPBM, they're talking about my baby here, man, storage policy based management, yep. It's actually, the policy is not how long, it's percentage of object size guaranteed, so it's a reservation guarantee, but mechanically it's basically the same thing, the bigger your guarantee the more likely your write will remain in flash longer; it's expressed as a percentage of object size. One SSD and one rotating medium at a minimum? Yes. Yes, well, you can lie to us and tell us that the SSD is rotating media, we wouldn't know, but yes, we require it; you will not enable a disk group unless you have at least one of each, it won't work. So the question is, why that crazy requirement, Alex, this doesn't make any sense to me. The reason why is because architecturally we wanted to make sure that we had a uniform destage layer, which gives you a more even performance experience; the problem is, if you have SSD without rotating media, you have no destage, so now architecturally we can't assume that you can take the destage down to the rotating media. Architecturally we're assuming that we have two classes of disks, fast disks and slow disks; if you take the slow disk away, now we're just an all-flash array, and those things already exist, it's called Pure or Violin, so we're just not in that business. If you want the world's fastest storage with ultra low latency and a million IOPS, buy a Violin, they're really good at that. Yeah, I don't think I said that; it happens that vSAN is very well attuned to OpenStack workloads, but vSAN is not an OpenStack-only product, it's a generic storage product. And the reason why we use both SSD and HDD is because in our research what we found out is that, on cost of ownership, the cost per IOPS on SSD is very low but the cost per gigabyte is extremely high, and HDD is the opposite, the cost per IOPS is high but the cost per capacity is low, so by combining the two you get low cost per IOPS and low cost per gigabyte on the same platform. So it's an architectural decision we've made.
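As a rough illustration of that cost argument, here is a small sketch with made-up prices and device specs (the numbers are not real quotes, and adding the two devices' IOPS together is a simplification, since reads served from the HDD are slower); the point is only the shape of the result, that the blend sits near SSD on cost per IOPS and near HDD on cost per gigabyte.

```python
# Illustration of the hybrid cost argument with made-up prices and specs.
ssd = {"cost": 400.0, "gb": 200,  "iops": 40000}
hdd = {"cost": 100.0, "gb": 2000, "iops": 150}

def per_gb(dev):   return dev["cost"] / dev["gb"]
def per_iops(dev): return dev["cost"] / dev["iops"]

# Simplification: treat the pair as one device with summed capacity and IOPS.
blend = {"cost": ssd["cost"] + hdd["cost"],
         "gb":   ssd["gb"] + hdd["gb"],
         "iops": ssd["iops"] + hdd["iops"]}

for name, dev in [("ssd", ssd), ("hdd", hdd), ("hybrid", blend)]:
    print(f"{name}: ${per_gb(dev):.3f}/GB  ${per_iops(dev) * 1000:.2f} per 1000 IOPS")
```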
You guys will tell us whether it's right or wrong, because if it's wrong you won't buy it, but we're pretty confident in this design, and if you look at what's going on in general in the storage industry, a lot of people are moving to this hybrid SSD/HDD model. There are definitely use cases, like high frequency trading, NASDAQ, where you want the absolute minimum possible latency with millions of IOPS; we're not that, we're a general purpose storage system for 80% of your workloads. Those storage systems are designed for 5% of your workloads and they're really, really, really good at that, and we didn't think that we could be a better high performance, low latency array than Violin or Pure or the others; on the other hand, we thought that we could produce a system that had a much better ROI for 80% of your workloads, and that's the system that we designed. You can definitely argue that we made a mistake, but that's the rationale. Yeah, I think we had a question back here; I think I might have to cut this off, they're going to kick us out. I love this conversation, by the way; the next step is you're going to have to buy me beers to continue answering questions, which is totally legal, bribing your presenter with beers is totally cool at an OpenStack Summit. One more question and I think they're going to kick us out of here, but I'm happy to continue the conversation. Yes, sir. The question is, does vSAN have distance replication? vSAN does not, but vSphere does, so vSphere has replication if you want to use it; vSAN does not have its own replication engine. Absolutely, using the vSphere replication service, keeping in mind that the vSphere replication service has a minimum RPO of 15 minutes, so if that's what you're looking for, then that would be an appropriate way to do it. Okay, I'm going to have to stop the questions here; I love the questions, happy to talk to you outside, but they're going to kick us out of the room because it's after 6 o'clock. Thank you all very much, thank you.