All right, we'll get started. Thanks for coming. It's getting down to the end of a pretty fun week. I don't know about you guys, but I was a little hungover this morning. I'm good now, though. A little fluid, that's good.

So, my name is Alex Jauch. I'm a product manager at VMware, responsible for the Cinder product as part of the extended OpenStack team. Kartik is here with me as well; I'll introduce him in just a second. Kartik is one of the core developers. We also have Subbu here; Subbu is another developer on the team. So we actually have a pretty good representation from VMware this week.

I'll just go ahead and answer this question now, because the first thing everybody asks me is, why are you guys here? So I'll just start with that. VMware is very passionate about making sure that our customers are supported and are able to do what they want to do, right? Our customers are telling us that they want to run OpenStack on top of vSphere and ESX, and we're like, okay, that's great, we'll go make that happen. So there's a pretty big commitment at VMware to supporting OpenStack, and we're starting to upstream. Obviously, the Nicira folks that we acquired last year are very plugged into this community. The people on my team, on the core vSphere team, we're a little newer, so be gentle, we're just learning. Hopefully what you'll see today is that we've made some decent progress, but we definitely have a lot of stuff that we want to do to improve.

That being said, what is going on with OpenStack at VMware? Basically, we're focusing right now on Neutron, Nova, and Cinder. Those are the three projects that are the most impactful, given what our customers are telling us they want to do. If you don't know about the work that's being done with NSX in Neutron, I would definitely suggest you check it out. It's very, very cool stuff on the software-defined networking side, and luckily the guys that came over from that team have been very helpful in teaching the rest of us how to be OpenStackers. Is that a verb? No? Stackers? That's a... sorry, Mike and my guys in the front row are trying to make me laugh.

Okay, so then we have the Nova support, the core compute support. And it's kind of interesting, because I've seen some people say, why don't we just have Nova go talk to ESX directly? That's probably not what you want to do, and it's probably not what we're gonna be doing going forward. Really, Nova and Cinder are gonna be talking to vCenter itself. The reason you want to do that is because it lights up all the core features that make the product work. If you go talk to an enterprise-class customer that's running VMware today, they're all running vSphere, and vSphere is really the interface that they're using. That's probably the reason they're running VMware in the first place: the features that are inside of vSphere.

So, just a show of hands: how many of you are really familiar with VMware and vSphere and all this stuff? Okay, how many of you are not super familiar with it? And the rest of you are pretty hungover, okay. That was about half the hands up, for the people listening to the recording. So I won't get too far into the details about vSphere.

We are working on things around Glance and Swift.
I'm not gonna pre-announce anything, but we definitely want to make sure that we operate well for customers that are running Glance and Swift implementations. We have really good partners, especially in the Swift space around object storage. Object storage is not something we do ourselves, but we want to enable it. So if that's something you're interested in, definitely come by the booth or talk to us directly. I'm not gonna get into it in this session.

What we are gonna talk about is storage, right? This is Cinder, so this is all about storage. Hopefully you guys are all really major storage geeks, I'm thinking. Yeah? Absolutely. Okay, you can quote T10 in your sleep. Anybody know what T10 is? Okay, crickets. All right, so we're gonna be talking about T10 today.

From a VMware perspective, we really believe in this notion of software-defined storage, and we think that the abstraction of the control plane away from the actual implementation detail is hugely important to this next generation of infrastructure. And if you look at what's going on inside the OpenStack community, that's pretty much what's going on here as well. So this is something we believe in, this is something we want to support, and we're gonna do it in a bunch of different ways.

At VMware, we have a lot of implementations that already exist. There's a lot of infrastructure running. We can't just introduce an entirely new concept and leave everybody else behind; that's not gonna work, because we have millions of customers out there who would be left behind. So we're not gonna do that. What we're gonna do is make incremental changes. We're gonna do some brand new technologies that are pretty different and frankly could be considered disruptive, and at the same time we're gonna support people that are already in production and try to bring them forward.

So whenever we have this conversation about VMware and storage, you're always gonna see us kind of have mixed messages, right? We're gonna say, hey, if you want to start completely from scratch, you could do this. And at the same time we're gonna say, oh, but by the way, if you really love your SAN and you want to keep the SAN you have, great, you can do that. You're always gonna see us saying both things, and that's very intentional, because some customers really want to do a greenfield implementation of OpenStack and start computing from scratch, which is great. But there are a lot of customers that just can't do that. They've got millions or even billions of dollars invested in hardware and infrastructure and data centers that are already running, and you really can't just rip those out.

So when we talk about the abstraction layer, the policy layer, what we really mean is that there is going to be a universal policy abstraction that's true across all the different ways you can interact with storage on a vSphere platform. So there's traditional storage on a SAN or a NAS device, or there's direct-attached storage; all those things are always going to run in the exact same policy framework, right? And so it will appear as if the storage is all exactly the same. It won't be, but it'll appear that way to the end customer, which is the whole point. And hopefully this sounds familiar, because this is exactly what Cinder is trying to do as well, right?
So we're very much aligned in that vision. What's new and what's different for us is this notion of putting storage directly into the hypervisor, which we refer to as vSAN. When you hear us talk about vSAN, and we kind of get all excited about vSAN because we do that, we rock the vSAN t-shirts, it's because you get native storage capabilities right in the hypervisor. Basically, what we're talking about is literally a server, a normal vSphere server, an ESX server, sorry, to be more clear, where you put in just JBOD, regular old disks, a mix of SSDs and rotating media, and then the hypervisor itself manages that storage and shares it amongst the members of the cluster.

So it's not really a SAN in the traditional definition, right? It's really a software storage abstraction layer that's extremely performant because it's local to the machine. It has integrated SSD and rotating media, so you get that kind of hybrid strategy where you get high IOPS on the SSD and high capacity on the rotating media. It's very stable and reliable because it's replicated amongst the members of the cluster, right? You can lose a node, or a couple of nodes, and you don't lose anything. It doesn't look like a SAN, it doesn't work like a SAN. There are no knobs to turn, there's no configuration, there's no Fibre Channel. It's just plain old Ethernet, and it's completely automatic. So when I say it's a little disruptive, that's what I mean. It's not super comfortable for your traditional storage guy, right? If you have somebody that's been installing EMC arrays for the last 10 years and he's really comfortable with that, he may not be really excited about vSAN, and that's fine. But if you have somebody starting from scratch and saying, look, I want to build out a cloud with basically one very flat infrastructure, where I want compute and network and everything in one plane, I don't want complexity, I just want it to work, for them vSAN might work out, and that's the reason we created it.

Remember I said I'm gonna talk out of both sides of my mouth? So here it goes. At the same time, we have lots of customers who say, look, I've already got a SAN. I already paid for it, right? It was really expensive. I want to keep using it, it does all these things that I want it to do, and I don't want to stop using it. Okay, great. So we created something we call virtual volumes, or vVols. vVols are the ability to abstract VMDKs, right, virtual disks, directly into a storage array, a SAN or a NAS. What happens is that we abstract these objects directly into the SAN, and now the SAN becomes aware of these virtual disk abstractions, these virtual volumes, vVols. And so now we can do exactly the same kind of granular, policy-based management for virtual volumes, just like we do with the VMDKs that are sitting local to the hypervisor.

See what's happening here? We have a SAN- or NAS-based technology using the same policy framework, the same VMDK-granular management, the same tools, the same Cinder driver. Everything's the same, but the back end in one case is, say, a VMAX, and the back end on the other side is just some random disks that you crammed into your hypervisor. The consumption model is identical. It's no different.
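To make that concrete, here's a purely illustrative sketch with the Python Cinder client (the credentials and endpoint are placeholders): the tenant's request looks identical no matter what sits behind the datastore.

```python
# Purely illustrative: the caller's request is the same whether the
# backing datastore is vSAN, a vVol-capable array, or a traditional LUN.
from cinderclient.v1 import client

cinder = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')

vol_a = cinder.volumes.create(10, display_name='data-1')
vol_b = cinder.volumes.create(10, display_name='data-2')
# Nothing above names an array, a LUN, or a datastore -- placement is the
# platform's problem, which is the whole point of the abstraction.
```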
It's the same Cinder driver. The policy model? Identical, exactly the same. And that's the point, right? Is virtual volumes better than vSAN? No. That's not the point. It's different. Customers need that type of choice on the back end. There are reasons why people buy SANs, and there are reasons why people buy direct-attached storage. We're not going to try to tell people one's right or one's wrong. We need to support all those infrastructures, and we need to make the consumption model identical. That's the reason we're here. Does that make sense? Yeah.

[Audience question about VMFS.] So, long-term, VMFS pretty much goes away. The reason VMFS exists is to allow us to share a single LUN amongst multiple hosts. In a vSAN world, you actually do have something called VMFS-L, which is the non-clustered version of VMFS, where each local host, each disk really, has its own little mini VMFS-L partition. But the traditional clustered VMFS file system doesn't really exist in a vSAN world, and in a vVol world you don't have LUNs at all, so again, there's no VMFS. What we've done is abstract that implementation detail pretty low in the stack, so in the consumption model you're unaware of it. In fact, even if you're an API user, which is pretty low-level, you're still unaware of it.

The question is, is there a migration path? No, but you don't need one, right? You just bring on the new datastore, do a Storage vMotion, and you're good. It sounds like you really know something about vSphere, so if you want to have a detailed conversation, I'm happy to have it with you, but I only have 40 minutes, so I'm gonna keep going. Sorry. Okay.

And then on top of that you apply what we call SPBM, storage policy-based management. The SPBM layer is the policy abstraction that applies to these underlying implementations. So you'll be able to say something like, I want a certain level of RTO, or I want a certain backup schedule, right? And then we will make placement decisions against the back end based on the request you've made. You don't say, I want a Fibre Channel LUN, or, I want an NFS share. You say, I want replication, or I don't. I want low latency, or I don't really care about latency. You say what you want, what you're trying to achieve, not necessarily how to get it done, and then you let the platform worry about how to make that happen.

This abstraction basically makes your infrastructure less brittle, right? In a traditional environment, you would write a script, in PowerShell or something, that said, create a VMDK on this datastore in this cluster, right? And then your storage administrator comes along and re-architects the storage, and now your script breaks. That doesn't scale to thousands of objects, or millions of objects. Using storage policy-based management, on the other hand, you would say, I want an object of this class. That script works all the time, will always work, right? You never have to change the script, because you're always asking for the same thing.

Well, you apply the exact same thing to Cinder. Cinder comes and asks us today for something relatively simple, like, I want a thin disk, or an eager-zeroed disk, or something like that. In the future, it'll come and say, I want a disk with these properties, and that Cinder volume type will just always work, regardless of the underlying implementation changes. With me so far? Okay.
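Here's that contrast sketched in Python. To be clear about what's real and what isn't: `vmware:vmdk_type` is the extra spec today's driver understands, while the `policy:*` keys are hypothetical, SPBM-flavored illustrations of where this is headed, and the credentials are placeholders.

```python
from cinderclient.v1 import client

cinder = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')

# Brittle, imperative style: the request hard-codes a placement decision,
# so it breaks the day the storage team re-architects the back end.
brittle_request = {'size_gb': 50, 'datastore': 'EMC-LUN-42', 'cluster': 'prod-1'}

# Policy style: say *what* you want and let the platform decide placement.
gold = cinder.volume_types.create('gold')
gold.set_keys({
    'vmware:vmdk_type': 'thin',    # real extra spec in the Havana driver
    'policy:replication': 'true',  # hypothetical SPBM-style key
    'policy:latency': 'low',       # hypothetical SPBM-style key
})
vol = cinder.volumes.create(50, display_name='db-disk', volume_type='gold')
# Asking for 'gold' keeps working across storage re-architectures; the
# hard-coded request above doesn't.
```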
And I apologize, I'm going super, super fast, because we only have 40 minutes.

I think this is obvious to most of the people in the room, but just to be certain: we have had a storage abstraction layer for some time in vSphere, right? Some people in the industry are talking about storage virtualization like it's a new thing. Apologies to those of you in this room who have blogged that, but I would argue it's actually not that new. If you think about what a LUN is, a LUN is fundamentally a storage abstraction. You virtualize some physical media into a logical object, which you think is a disk, and it has sectors and all these fun things, but it's not actually a disk, right? It's just a logical construct. So isn't that virtualization? It sort of is. And within vSphere we have this notion of a datastore, which is an abstraction on top of that LUN. So above that abstraction, we apply another abstraction, and we've had that for some time. This notion of a highly abstracted storage universe is not new to us. Today, if you're consuming, say, NFS or iSCSI or Fibre Channel, the vSphere administrator is not really aware of that. So we're just extending that same metaphor, and it's really the VMDK that gives us that portability.

Now, I know some people in the community have been working with RDMs, and there's actually some really good work going on around RDMs. The only thing I would ask is that you think carefully about deploying RDMs in your environments or your customers' environments. The reason is that RDMs were designed to be kind of the storage of last resort, right? You really should be using RDMs only when the workload you're running absolutely, positively requires an RDM and cannot run on a VMDK. For example, Windows guest clustering quorum disks, because they use SCSI reservations, you can't run those on a VMDK. Everything else should be running on VMDKs, and the reason is that everything I'm gonna talk about in the future assumes you're running on a VMDK and not on an RDM. So you're really limiting yourself. Now, there are some things that are broken in VMDKs that cause you to want an RDM. When I say RDM, by the way, I mean a raw device mapping, right, a pass-through disk, sorry, for those of you who are not VMware people. We're fixing that, is the answer. We're gonna make VMDKs generic so they can support all kinds of workloads, and basically what that means is that, hopefully, you won't need RDMs anymore. I'm not announcing that we're deprecating RDM, but what I'm asking is that you please try to stay away from them, and don't encourage your customers to use them, because we don't want them to be pissed off at us next year at this time when we say, hey, by the way, we're all done with that. So our Cinder driver is focused on VMDKs, and that's the reason for it.

Any questions about that? Any passionate defenders of RDM in this room? One over here, okay. You want to ask a question or make a statement? Right, right. So the feedback we got is that the healthcare guys use RDMs for what sounds like a compliance check, HIPAA kinds of things, and in the end for snapshotting. The answer would be that vVol was created to address that concern. In a vVol world, you will have that path that goes all the way down and all the way back again. But that's why I'm saying I can't tell you today,
hey, RDM is dead. I would love to tell you that, but I can't. What we're saying is that we're making progress over time. So yeah, in a use case like yours, if you need them, you need them. But we're trying to get rid of them, and we're basically taking each one of those use cases away, one at a time. Hopefully, over time, we'll get to the point where we don't need them anymore.

All right, so how does this work? What's the workflow? As with Nova, we set up a capacity pool that's then consumed by the driver. What happens today, in the Havana release, is that we're actually just selecting from the available datastores on the cluster. Going forward, we're going to add some selection criteria, and SPBM, and other things, but today we just pick amongst the available datastores. Then the cloud administrator creates the Cinder volume types, and then you consume against that capacity, right? When you create a Cinder volume, we actually go into the back end and create the metadata for it; we don't actually create the VMDK. Then, only at attach time, do we actually create the VMDK. I'm going through this really quickly because Kartik is going to show you a demo in just a second, but I wanted to give you an overview and have it written down so you can see what the steps are.

The important thing out of all this, the only thing really to remember, is that it's a lazy create. We create the VMDK at attach time, and the reason we do that is because that way we're guaranteed to create it on a datastore that the VM can actually see. Right? If you have a datastore attached to cluster one, and your VM is on cluster two, the first thing you'd have to do is a Storage vMotion, right? That doesn't make any sense, so we wait until we know where our attach point is, and then we create it.

We are using the extra-specs mechanism, and we are passing metadata down into the driver. Again, Kartik is going to show you this in a second. Right now we're doing things like thin, thick, or eager-zeroed thick, and we're also controlling what kind of clone you can do. In the future we're gonna have a lot more richness; in fact, for the Icehouse release we're talking about a lot of enhancements to the type of metadata you can pass down into the driver. Okay. I think I should... I think I'm just gonna... okay, one second on shadow VMs.

So, the other interesting thing that's happening, I didn't mention this yet, is that we need a mechanism here. Remember I said that we create metadata before we actually create the VMDK? When I said create metadata, what I really meant was we create a fake VM. Inside of vSphere, there's no notion of a disk as a parentless object. Disks are children of VMs. So if you want to create a VMDK, you have to have a VM to attach it to. But Cinder doesn't work that way; in Cinder, a disk is a disk. So what we do is create a fake VM, we call it a shadow VM, we attach the VMDK to that, and it becomes the disk's parent and follows it around. Again, we're gonna show you this in a second. It is a little bit of a kludge, and I apologize, but from a platform perspective today, and this is something we're working on fixing, you can't actually create a disk unless it's part of a VM parent object. Thus this fake VM that you'll see. If you run Nova or something like that in your lab and you open up the UI, you'll see these fake VMs, right? We call them shadow VMs.
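To make the ordering concrete, here's a simplified, purely illustrative Python sketch of that flow. None of these names come from the real driver; it's just the bookkeeping described above: create records metadata only, and the backing VMDK, plus its shadow-VM parent, appears on first attach, on a datastore the consuming host can see.

```python
# Illustrative sketch only -- this is not the actual VMDK driver code,
# just the shape of the lazy-create and shadow-VM flow described above.

class LazyVmdkFlow(object):
    """The VMDK backing is created at attach time, not at create time."""

    def __init__(self):
        self.backings = {}  # volume id -> (datastore, shadow VM name)

    def create_volume(self, volume_id, size_gb, vmdk_type='thin'):
        # Nothing is created on the platform here; Cinder records the
        # volume's metadata and the driver returns immediately.
        return {'id': volume_id, 'size': size_gb, 'vmdk_type': vmdk_type}

    def attach(self, volume, esx_host):
        # Attach time is when we finally know which host (and therefore
        # which datastores) the consuming VM can actually see.
        vol_id = volume['id']
        if vol_id not in self.backings:
            # First use: pick a visible datastore and create the backing.
            # The platform has no parentless disks, so the new VMDK hangs
            # off a placeholder "shadow VM" that acts as its parent.
            datastore = self.pick_datastore_visible_to(esx_host)
            self.backings[vol_id] = (datastore, 'shadow-vm-%s' % vol_id)
        datastore, shadow_vm = self.backings[vol_id]
        if not self.is_visible(datastore, esx_host):
            # Re-attach on a different host: relocate the backing files
            # somewhere the target host can see, then reconfigure the VM.
            datastore = self.pick_datastore_visible_to(esx_host)
            self.backings[vol_id] = (datastore, shadow_vm)
        return self.backings[vol_id]

    def pick_datastore_visible_to(self, esx_host):
        # Stand-in for the real placement logic (and, later, for SPBM).
        return 'datastore-for-%s' % esx_host

    def is_visible(self, datastore, esx_host):
        # Stand-in for a real visibility check against vCenter.
        return datastore == 'datastore-for-%s' % esx_host
```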
Okay, good. Kartik, you're on, baby. Cinder Man is on.

Hello. Okay, so let's just start with the demo. First we'll walk through the initial setup we have on the system. We have Nova, which uses the VC driver and talks to the vCenter Server. Two Debian virtual machines have been created, and if you go to the VC UI, you can see there is a single cluster with two ESX machines, and a visible datastore for each of the ESX machines; these are not shared across the ESX machines. These are the two virtual machines that have been created by Nova. If you look at the hardware section, you can see each of them has a single ephemeral disk, and they are present on either of the hosts.

We also have Cinder, configured to talk to this same vCenter Server; it uses the VC plug-in. If you look at the current list of volumes, there's nothing, so we'll go ahead and create some. As the first step, we create a volume type. This Cinder driver, the VMDK driver, allows the user to specify a VMDK type, and that can be done via the extra specs in the volume type. In this example, we create an extra spec to do thick provisioning, and that's the extra-spec entry we add to this 'thick' volume type. As Alex mentioned, we support three VMDK types today: thick, thin, and eager-zeroed thick. This step is not required; you can always choose not to specify an extra spec, in which case we create a thin-provisioned VMDK volume by default.

So first we create a volume with this thick type, of one GB in size. Now, one of the things that, again, Alex mentioned: once we do this creation, you will not see any backing for this volume in the VC inventory, because at this point the volume is not used at all. It's more of a stateless volume, a fresh volume, and it's actually created only when it's used for the first time.

Next, we go ahead and try to attach it to Debian VM one. This is when the driver figures out that, okay, it's being used for the first time, so let me go ahead and create a backing. It sees that the Debian VM is present on ESX host one, so it goes and creates it there; it picks a datastore and creates it. Now, if you watch, the first step it does is create a volume shadow VM; the 'cinder-volumes' folder is the VC folder it uses, where it aggregates all your volumes. As a second step, it attaches the volume's VMDK to the instance. If you go to the hardware section, you can see a second disk on this instance, and the third VM you see in your inventory is the shadow VM for the volume. If you go and look at its hardware details, the hard disk details, you can see the disk's provisioning type and the datastore path of the VMDK file.

So that's the VC side of things. For the OpenStack end user, who is logged into the guest OS directly, it's more that he should be discovering a newly attached device, checking whether it has a valid partition table and a file system, and then just going ahead, mounting it, and starting to use it.
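For reference, here's roughly what that setup looks like driven from the Python Cinder client. The `vmware:vmdk_type` extra-spec key and its three values are the driver knob described above; the credentials and endpoint are placeholders.

```python
from cinderclient.v1 import client

# Placeholder credentials/endpoint -- adjust for your environment.
cinder = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')

# A volume type whose extra spec asks the VMDK driver for thick
# provisioning (the other accepted values are thin and eagerZeroedThick;
# with no extra spec at all you get thin by default).
thick = cinder.volume_types.create('thick')
thick.set_keys({'vmware:vmdk_type': 'thick'})

# A 1 GB volume of that type. At this point no VMDK exists -- only
# metadata. The backing (and its shadow VM) appears in vCenter on the
# first attach.
vol = cinder.volumes.create(1, display_name='vol-1', volume_type='thick')
```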
So now, in the guest, he can do some sort of continuous log-file backup or database file backup from his production system. And since it's a persistent store, he should be able to detach it from the instance, attach it onto another instance, maybe one running some validation software, some troubleshooting or log-analyzer software, and the state that was written during the first attach should still be there at the later attach.

In this particular case we're showing a Debian system, and since we've attached a fresh disk, the device will not actually have a partition table or a file system. So here we do everything: we partition the device, we create a file system on it, we mount it, and we save some state. In this case we just add a text file to it. So yeah, here we just do the mount and save some state, and once I detach this and present it to another instance, I should be able to see the same old state there.

One other thing the OpenStack user need not worry about is the infrastructure's topology. As you see here, you can have any number of clusters, your Nova configured with any number of ESX machines or datastores, and the Cinder driver takes care of presenting the volume to the instance where it has to be attached. It does all sorts of movement here and there, and the end user really need not worry about any of it.

So here we see that we have written a text file. Now we just go back and detach this volume from Debian VM one, and what we should see is that the shadow VM is still left, because it's a persistent store, and the Debian VM should now just have a single hard disk, where earlier, when the volume was attached, it had two. So the detach process is going on; once it's done, let's go to the VC inventory and check. Yeah, the detach has been successful, and if you look at the hardware section of Debian VM one, you see that it just has a single hard disk. The volume has been detached from it, and you have the shadow VM left in the inventory.

Now, the shadow VM is present on ESX host one, which is where Debian VM one was being managed. Next, we try to attach the same volume to another instance, Debian VM two, which is present on the second ESX host, a different ESX host. As I mentioned earlier, the Cinder driver takes care of doing all the jugglery. In this case it figures out that the datastore where the shadow VM is present is not visible to that ESX host, so it will migrate, move these files over, to the other ESX host where the VM it should attach to is present. That step is happening here. The second step it does is to reconfigure the VM by adding the volume's VMDK. So if you look at the hardware section of that instance, you now see a second hard disk present, and this is that of the volume.

From our earlier attach, we had done all the steps of creating a partition table, formatting it, and writing a file to it. So the second user of this OS should just be able to discover that device and recover that state, so he can continue working on it from this point onwards. Here you can see that we have just scanned for a new device, we found a partitioned device, and we just mount it. Once we're done, we should have the text file present, the one we wrote earlier. Yeah, there you go.
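That detach-and-reattach cycle maps to a couple of client calls; here's a sketch with the Havana-era Nova client, using placeholder names, IDs, and credentials.

```python
from novaclient.v1_1 import client as nova_client

# Placeholder credentials/endpoint -- adjust for your environment.
nova = nova_client.Client('demo', 'secret', 'demo',
                          'http://keystone:5000/v2.0')

vm1 = nova.servers.find(name='debian-vm-1')
vm2 = nova.servers.find(name='debian-vm-2')
vol_id = 'VOLUME_UUID'  # the volume we partitioned and wrote to above

# Detach from the first instance; the shadow VM (and the data on its
# VMDK) stays behind in the inventory.
nova.volumes.delete_server_volume(vm1.id, vol_id)

# Re-attach to an instance on a different ESX host. If the backing
# datastore isn't visible there, the driver relocates the files first --
# the end user never sees any of that jugglery.
nova.volumes.create_server_volume(vm2.id, vol_id, '/dev/sdb')
```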
So yeah, that's a quick demo, where we showed just the attach and detach. Before I go, this is the comprehensive list of Cinder APIs that the driver supports as of today. You have create volume, with the three supported VMDK types, so you can create from scratch, you can create from a Glance image, or you can clone from an existing source volume. There are two types of clone supported, a full clone or a linked clone, and you can also do a full or linked clone from a snapshot point. Then you have attach and detach to an instance, snapshot, and delete volume (there's a rough sketch of these calls below).

So, Kartik, what about the optional APIs, the ones we don't support today? What's our plan there? Yeah, so the plan is that we'll be getting at least the important ones in. Currently we are talking to customers and picking the APIs that they want, and hopefully we'll get them in by the next release. Yeah, we've had some very good conversations this week with a bunch of customers. So if there are things in the Cinder spec that you're very passionate about, or you have specific use cases where we should emphasize one thing or another, this is definitely a good chance to let us know. Our intent, though, just to be clear, is to support the entire Cinder spec, right, all the optional components as well; this is just where we're at right now.

Okay, any questions for Kartik, remembering that he's the technical smart guy and I'm the PM guy? So keep your questions appropriately aligned, right? Any questions for Kartik while we have him? Awesome. Cool. Thank you, sir.

Okay, so, moving on: that's what we have now, and it's upstreamed into Havana, so go ahead and grab it. There are a couple of patches that have been made; there were a couple of bugs that we found. So please do check it out, and if you find bugs, report them in the community. There's a dedicated OpenStack community within the VMware communities where you can talk to us directly, but obviously we participate in the OpenStack community too, and if you file a bug with OpenStack, we'll see it. So it just depends on where you want to go; if you're a VMware customer, just go ahead and use the OpenStack community on VMware.com.
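Before the roadmap, here's roughly how the supported operations in that list look from the Python Cinder client. IDs and credentials are placeholders, and, as described above, whether a clone comes out full or linked is controlled through the volume type's extra specs rather than by the call itself.

```python
from cinderclient.v1 import client

cinder = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')

# Create from scratch (type optional; defaults to a thin VMDK).
v1 = cinder.volumes.create(1, display_name='from-scratch')

# Create from a Glance image.
v2 = cinder.volumes.create(2, display_name='from-image',
                           imageRef='GLANCE_IMAGE_UUID')

# Clone from an existing source volume. Full versus linked clone is
# driven by the volume type's extra specs, not by this call.
v3 = cinder.volumes.create(1, display_name='cloned', source_volid=v1.id)

# Snapshot an (unattached) volume, then create a volume from the snapshot.
snap = cinder.volume_snapshots.create(v1.id, display_name='snap-1')
v4 = cinder.volumes.create(1, display_name='from-snap', snapshot_id=snap.id)

# And delete.
cinder.volumes.delete(v3.id)
```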
So right now we have a committed roadmap to support storage policy-based management, which is the slide I talked about earlier, right? This notion of attaching policy to a disk is committed for Icehouse.

We also have some issues around snapshotting and cloning of attached volumes. Today, when we take a snapshot, it's always an application-consistent snapshot of the VM, right? Keeping in mind that, for us, a disk is a child object of a VM, we can't actually snapshot a disk; we always snapshot the VM. But Cinder has this notion of a disk snapshot. So what happens today is, if you take a disk snapshot of an unattached Cinder volume, everything's fine, right? We snapshot the shadow VM and everything's fine. But if you try to snapshot a mounted volume, we'll fail the operation, which is actually okay, because that's what Cinder expects us to do. But there's a lovely force flag, right, which is supposed to override that behavior. We'll fail that too. Sorry about that. We're working on an architecture to fix it, but as of today, you basically cannot do a Cinder snapshot of a mounted volume. You just can't; we'll fail it even if you pass the force option.
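Spelled out, the expected behavior looks like this, sketched against the v1 client with placeholder IDs and credentials:

```python
from cinderclient.v1 import client

cinder = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')

# Snapshotting a detached volume is fine: the driver snapshots the
# shadow VM that owns the VMDK.
cinder.volume_snapshots.create('DETACHED_VOLUME_UUID',
                               display_name='ok-snap')

# Snapshotting an in-use volume is refused by this driver -- and, as
# described above, force=True does not override that today.
try:
    cinder.volume_snapshots.create('ATTACHED_VOLUME_UUID', force=True,
                                   display_name='will-fail')
except Exception as exc:
    print('snapshot of attached volume refused: %s' % exc)
```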
We're also working with the Oslo guys. There's a lot of common code between the vSphere driver for Cinder and the vSphere driver for Nova, which doesn't really make any sense. So we're working on a project right now to move that into Oslo, and actually Subbu over here is working on it; if you have thoughts about that, he's your gentleman. We have some API work to do, and obviously anything new that comes along in Icehouse, we're gonna do. And then finally, this is not really a Cinder thing, but we're also gonna add SPBM support to Nova, right? If you think about a Nova boot volume, there's no reason why Nova can't request services from the underlying storage stack just like Cinder can. So we're gonna make the Cinder and Nova implementations basically identical, so all the goodness you get in Cinder, you're also gonna get in Nova. It'll be the same.

So now we get into the speculative part of the conversation, and we have a really extensive five minutes to have it, so hopefully we'll get into a lot of detail. No, we won't, actually. But just to get you thinking, and maybe we'll talk outside about this, here's what we're thinking of working on beyond the committed release.

One thing we're interested in is a common metadata model between the two projects. For us, the backing storage for Cinder and Nova is usually the same in a vSphere environment, right? We don't really have bootable datastores versus data datastores; we don't have that notion. So there really is no reason why the metadata models are different in our world, and it would be helpful if they had a common model.

We're also thinking about application-consistent snapshots. For us, we always take an application-consistent snapshot, all the time. There really isn't a notion of that in Cinder, right? So we may try to introduce that notion into Cinder. Obviously this is not a driver thing; Cinder itself would have to change a little bit to allow it. And really what we mean is that if you take a snapshot of a mounted volume, what we'd probably prefer is to hand that snapshot request over to Nova, right, and tell Nova to snapshot the entire app, not just the data volume. Just an idea.

Right now there are no DR or HA considerations, well, none real to speak of, in Cinder. For us, in the underlying platform, there are a lot of ways we can replicate or mirror or back up volumes to foreign locations. There is a notion of an availability zone in Cinder, but honestly, it's kind of broken, right? It's not really fully implemented. So one option would be to fully implement the notion of availability zones in Cinder; then from our driver we'd hook that up to our platform, and obviously we'd expect the other guys to hook it up in their drivers as well.

There's no notion of storage QoS right now in Cinder. It would be interesting if Cinder could ask for a specific service level. Our platform, and others, can provide differentiated service levels; this is not an unusual thing in the storage business, but Cinder doesn't really know how to ask for it.

Today there is a migrate command in Cinder, right? Is that what they call it? Yeah, there's a migrate API in Cinder. We don't implement it in our current product, but we're thinking about implementing it. There are some data mobility services in the platform that we could hook up to that migrate command, so we're working on that.

Right now, we don't really do any alerting. So if you ask for a certain SLA for an object and that SLA is violated, there's really no way to tell Cinder that it happened, right? Maybe that's a Ceilometer thing, I don't know, but we would like to hook it up so that, basically, when your SLA is violated, you get some sort of notification in Horizon, right? A little red icon, at least, that says, hey, by the way, that disk you asked me for? It's actually compromised. This is kind of a problem, because today there's really no mechanism to do that. And we talked about availability zones. So that's kind of what our thoughts are.

Any feedback from the room based on what you've seen so far? Any feedback about things you really think we should be working on? Any suggestions, any areas we haven't talked about? Definitely open to your input. This is a community thing, right? Yeah, I'm pretty sure it's community. Everybody's sleepy; there's one guy in the back who's fallen asleep. Okay. So if you're too shy to bring it up in a big room, I understand. We'd be happy to take these offline, and, you know, we're on the IRC channels now and things like that. Both Kartik and Subbu are attending the developer sessions, which they had to step out of to come here, so thanks, guys, for that.

Okay, there are a bunch of other VMware-led sessions this week. Am I gonna read every one of them now? No, I'm not. Most of these have already happened, so my apologies for that, but obviously they've all been recorded, so I definitely encourage you to take a look and check them out. Now, the really interesting thing about this particular slide is that I've had people tell me this week that VMware is not committed to OpenStack, which I found a very interesting comment, being an OpenStack guy at VMware. You know, I don't think that's necessarily true. We're pretty committed, right? There are a lot of sessions we're delivering. There's a big team inside of VMware that doesn't do anything but OpenStack stuff, right? So to me, we're committed. We're voting with our time and our effort. We're here.
There are like 35 of us here. That's probably not the biggest contingent of any company, but we're pretty committed, and we're doing a lot. The storage team at VMware, we're pretty new to this, so you probably haven't seen us active in the community, because we're so new. But our intent is definitely to be active, to make contributions, to support others, and to do code reviews and all those good things you expect from a proper community member. So what I would say is that our intent is to be much more visible. For the last six months or so we've been very low-key, because we didn't want to just come stomping in with our big old boots and say, hey, we're VMware, you know, get over it. That doesn't really work. So we've intentionally been listening a lot more than talking, right? We wanted to make sure that we understood what people were saying, what was going on, what the issues were, and things like that. Now we feel like we've kind of got an idea, and we've got some code in the Havana release, which is a big, nice milestone for us. So you're gonna see us becoming much more active, especially in the Icehouse release and going forward.

We're really excited to be a member of the community. We're excited to be here. This has been a really good week for us; we've gotten a lot of great input from the community. So with that, that's the conclusion of the talk. The demo that we just did is up on YouTube; you can go to the community site, the VMware OpenStack community site, and get links to it. And you can follow me on Twitter, I'm @ajauch, and I'm always happy to answer your questions. So thank you all very, very much. Hopefully this was informative. Thank you.