It's 1:50, so let's get started. Thanks. We appreciate everyone who shows up on the last afternoon of the conference right after lunch. It's the worst time slot to have. All right, so welcome to our talk about OpenStack Cinder. Let me go ahead and make some introductions, since there are three of us here on stage. So I'll start. My name is Kenneth Hui. I am a director of technical marketing at an OpenStack startup called Platform9. Prior to that, I was at EMC working on their OpenStack strategy. And just before that, I was at Rackspace as the OpenStack evangelist. John, you want to go? My name is John Griffith. I'm a software engineer at SolidFire. I've been there for about four years. We are a storage appliance company. We make flash storage, kind of built for OpenStack, in our opinion. And mostly what I actually do is work on OpenStack, as opposed to anything else. I am Arun. I work at Platform9 as a software engineer, on multiple components from Cinder to Nova to Neutron. Previously, I was with Cisco working on OpenStack. Thanks, guys. All right, so we're going to tackle a few topics today. One is I'm going to talk a little about what OpenStack is. But one of the things I actually want to do first, with the folks in the audience: how many of you are new to OpenStack? And by that, I mean how many of you have been working on OpenStack, or learning about OpenStack, for less than one year? OK, so how about more than one year but less than two? So most of you actually know OpenStack. OK, so I'm not going to go into great detail, and that's fine. That means there are fewer details I have to cover, although I will talk a little bit about it for the benefit of folks who are new. How many of you have deployed OpenStack Cinder with your OpenStack deployments? OK, a lot fewer people.
One thing we'll do here is talk a bit about what Cinder is and what the good use cases are that we, as two different companies working in some way with OpenStack, find for Cinder. And then we're going to do a demo that shows you how OpenStack and OpenStack Cinder work, using Platform9's implementation of OpenStack, along with SolidFire as an example of a good Cinder back end. OK, so let's start with what OpenStack is. Again, I'm not going to go into a lot of detail, but the thing that I want to focus on here is to make sure it's very clear what OpenStack isn't. OpenStack itself isn't a hypervisor. It is not a networking technology, and it's not a storage technology. It is, in fact, an orchestration system that manages multiple resources, including storage, and gives end users the ability to provision their own resources on demand. So if you've been in IT for a while, you might remember a time when, if you needed some compute resources along with some storage, you had to file a ticket or send an email to an operator, who then had to take anywhere from maybe a week to a few months, depending on what he had in inventory, to present those resources to you. With OpenStack, by plugging in these various storage and networking resources, you as an end user, a developer, can actually provision your own resources, and do it in a matter of minutes instead of waiting weeks or months. Later I'm going to talk a bit about how Cinder fits into this picture of the orchestration platform. Then, see the spaghetti here? The way OpenStack is architected is that all the services are essentially their own projects, tied together through APIs. And there are a number of reasons for doing that, including the fact that we think it will scale better.
And Cinder happens to be one of those distributed projects that are part of this architecture. The goal of OpenStack at the end of the day is actually quite simple in many ways. It's to give users and businesses the ability to deliver self-service IT, which we talked about earlier, and do it rapidly and at large scale. So the idea here is, in the old days, when you had to request resources and wait weeks or months to get them, let's say you could do 10 projects a year for some amount of money. And you hope three of them succeed and actually make you some money, right? What if, because you can now provision resources yourself, you could do 30 projects a year, at the same cost that it used to take you to do 10? Now, even with a lower success rate, because you can do many more projects at the same cost, you basically become more valuable to your company. So that's a simple example of why it's important to have a cloud platform that lets you do self-service at rapid scale. And there are different models for consuming OpenStack today. Rackspace, which is one of the companies that started the open source project, was interested in OpenStack as a way to create a public cloud that could compete directly with Amazon Web Services. NASA, which is the other US organization that was involved in creating OpenStack, was interested in it to create a private cloud running in their own data center. So historically those have been the two primary ways for OpenStack to be consumed: either as a public cloud, or as a distribution, software that you install and operate yourself. A third model that's emerging, which Platform9 specializes in, is to operate OpenStack as a private cloud as a service, which means the customer's resources stay on site, but you as a customer don't actually manage OpenStack. That's outsourced to someone else, like Platform9. And this is from the OpenStack Marketplace webpage.
This is an example of different companies. Some of them offer more than one way of consuming OpenStack, but broadly speaking, these are the three ways of consuming OpenStack and the vendors that specialize in those ways of consumption. Well, except for Helion. Yes, that's right. Well, they're still around. So this slide is good until January 2016. Red Hat is spelled wrong. You're right, I don't know how that happened. I'm going to blame spell check. So that's the broad scope of what OpenStack is, what it's been used for, and who's providing OpenStack. Now I want to zero in on Cinder and block storage in particular. And John's going to help me here, because he actually helped start the Cinder project. Do you want to talk about Cinder and what it is? Sure. So for those that don't know, just a quick background: it used to be that everything was basically in one project. Everything was under Nova, including block storage. As things grew and started to scale, one of the things that we looked at a number of years ago was, hey, block storage is kind of important to a lot of people, especially if they want to run databases and things like that. We should probably take it out of Nova and give it some focus of its own. So that's how we created Cinder. We started that effort about three and a half years ago. It's been an official project, blessed and everything, for two, two and a half years. And it actually made a significant impact on OpenStack and the growth of OpenStack, because prior to that, storage was just kind of an afterthought. It was a secondary thing. So that's been a really big deal and a really big push to get things moving forward. At the time, we had three vendors that had plugins or drivers for OpenStack. As of today, we have over 80. So it's definitely grown. So it's good and bad. Mostly bad, but.
So a couple of things on this slide are worth highlighting. One is that in that second bullet point, there's a key word: persistent. And that's what makes Cinder different from the storage that was typically used when OpenStack was first created, right? In those days, when you created a VM, you used something called ephemeral storage, which could be an NFS mount, but typically was storage that was inside of the hypervisor compute node. It was called ephemeral because the idea was that if the VM got deleted, the data was also deleted. Which for cloud native use cases is actually okay, and may even be preferable, right? But what if you were running a database and you needed the data not to go away when the VM got deleted? That's where Cinder came in. That's one point. The second point is that a lot of you may come from a background where you're dealing with storage area networks, kind of enterprise storage arrays. And when I talk to people from that background, they mistakenly tend to think of Cinder as being like a shared datastore that they use, let's say, with VMware, right? Where you can share that storage among different servers and resources. So it's very important to understand that Cinder is not shared storage. The best way to think of a Cinder volume, actually, is that it's a USB drive. It could be a really large USB drive, but it is essentially a USB drive that you can plug into a VM. But you can't plug that same USB drive into a second VM at the same time. You can delete the VM and the data stays, but then you have to detach the drive and reattach it to something else. Yeah. Yeah, let's just say local. It's a raw block device. Yeah, right. So that point I made is important, because one of the reasons shared storage is very valuable in some use cases is that it lets you do things like live migration of VMs without having to move the data around, right?
Or do what they call HA, where when a compute node fails, you can restart everything on another node. And since all the data is shared and seen by all the servers, all the VMs can see everything too. That's not what Cinder is. Cinder will not let you do that. And I'm bringing that up because, again, enterprises make the mistake of thinking Cinder gives them shared storage. So, anyways. You want to talk about this one here? Sure. Yeah, so the key, to build on this, is that with Cinder you now have basically raw disk devices that you can dynamically create and plug in and unplug from your VMs. Ken uses the analogy of the thumb drive. That thumb drive basically looks like just a raw block device, just a raw disk. You can create it at whatever size you want, you can specify characteristics by using volume types, things like that. And then you can attach it, mount it, format it, put your data on it. And that data could be databases, or it could be the actual boot image for the instance itself, so you can boot off of it. One of the things that people get a little hung up on in OpenStack is the concept of ephemeral instances. Some people love the fact that the instance is ephemeral and after you shut it down, everything is gone and you start over. Some people hate that. So what you can do is use a Cinder volume and boot off of that. Now you have a persistent instance. That's a pretty powerful thing for a lot of people. One of the things that you have to keep in mind is that we're not an object store, and as Ken said, we're not a shared file system. We're block storage. So what does that mean? It means things that have a high change rate, things that have I/O demands, things like databases and boot partitions, are the sort of thing that you want to run on a Cinder volume. Things like storing images or music or photos, you know, whatever it might be?
That's the perfect thing to use something like Swift or an object store for. So those are some of the big differences. The other thing is, you know, a lot of people have a background with AWS these days, for obvious reasons. One of the easiest ways to figure out Cinder is to look at AWS and what EBS is, because most things in OpenStack are actually modeled off of AWS, and Cinder is no exception to that. The difference with OpenStack, versus just throwing in more disks, is that everything's automated and it scales. In theory, you have kind of an infinite pool of resources, right? So as you start to consume resources and you're getting close to running out, you can bring in more storage, more back ends, plug them in, and continue to grow and keep expanding, and have the appearance of an infinite pool of resources. And that's the whole idea of OpenStack. You continue to scale horizontally. As you reach capacity, you can always continue to scale out and keep going. So there have been a number of talks about this this week. If you went to the keynotes on the first day, the COO from Bitnami gave some great examples and talked about things like why cloud matters, why OpenStack deployments for private cloud matter and why they're important. The basic thing really is, if you do software development and you have development happening in-house, things like OpenStack and private cloud are going to be a huge asset to you, because they give everybody the capability to move faster, right? One of the things that software developers have always traditionally had problems with is getting resources to do their job, and those resources could be servers, storage, networking, right?
So you have all these great ideas, all this code you want to write, all these different things that are going on, but you need resources to do that. And back to the point made earlier: in a traditional environment, if you're using just bare metal and stuff like that, you may have to file a request, submit tickets, and it takes days, weeks, months, whatever. It takes an extended period of time to get that, and you're also limited in what you can get anyway. So if we go to the next slide. There's a development process and it goes something like this, right? You're creating an app, and you're going to use a Mongo database in it, and you're going to run this on CentOS, and you're going to do all these cool things and it's going to be awesome, but you need to prove out some ideas. You have some concepts you need to figure out, to see if some of your code is going to work, and in order to do that, you're going to need a Linux box for a day or so that you can put MongoDB on, and you're going to need, say, four network cards on it, and some other things. But you're not exactly sure. I don't know exactly how much memory I'm going to need on that box. I don't know exactly how much storage or what kind of performance it's going to need or anything else. I may actually not want to do this on CentOS. I might want to do it on something else, or I might want to do it on both. I should probably compare those things and benchmark them. So the first thing you do is you try and guess, right? You try and guess what you're going to need, because you're going to have to fill out a request form. So you're always going to guess on the higher side and overestimate. And then you're going to submit that request to IT, and then you're going to wait. And that waiting could be, again, days, weeks, whatever it might be in your organization, depending on what the resources are.
And the best part, and I've had this happen to me: I'll submit a request and say, hey, I need a Linux box with this storage, blah, blah, and they come back after a week and say, yeah, we didn't have a Linux server, but here's a Windows box. It's the same thing, right? Those of you that do any development know that's very far from the truth. So anyway, that doesn't work. That's the bottom line. And that's how you feel. You're not very happy. So now you get something like an OpenStack cloud, with something like Platform9 and SolidFire, and you have that in-house. It's significantly different, right? Because now what happens is developers have quota. IT, or whoever, can set things up and say, hey, you're allowed this many resources, this is your pool, do whatever you want. It's up to you to manage that, figure it out, and set your priorities. So now what I can do is go ahead and spin up an instance on demand, which takes a few seconds, load my software, hack at some code, and work on it for a bit. And then I might sit there and say, hey, you know what? The storage that I have attached to this is okay, but I wonder what would happen if I had a faster disk. So you can do things like retype that volume that you already have: retype it to a higher IOPS level, increase the performance on it, and run another set of tests, right? So you do that, you look at it, and hey, that's even better. So you keep going through these things, and then you think, well, this little design that I have with Mongo and all these things that I'm doing, I think I could actually do this with MySQL, just go back to a relational database, and this will all still work. So you can go ahead and just blow it all away and start over, and you can do all this on the fly, dynamically, on your own. No requests, no waiting, nothing.
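The retype step described above, moving a volume to a higher IOPS level without recreating it, can be sketched as a toy model. This is illustrative Python, not actual Cinder code; the type names and IOPS numbers are made up:

```python
# Toy model of Cinder volume types with QoS specs. On a QoS-capable
# backend, retyping changes the performance policy in place, without
# copying the data to a new volume.
VOLUME_TYPES = {
    "bronze": {"min_iops": 100, "max_iops": 500},
    "gold": {"min_iops": 1000, "max_iops": 5000},
}

class Volume:
    def __init__(self, name, vol_type):
        self.name = name
        self.vol_type = vol_type

    @property
    def qos(self):
        return VOLUME_TYPES[self.vol_type]

    def retype(self, new_type):
        if new_type not in VOLUME_TYPES:
            raise ValueError("unknown volume type: " + new_type)
        self.vol_type = new_type  # data stays put; only the policy changes

vol = Volume("mongo-data", "bronze")
print(vol.qos["max_iops"])  # 500: run the first benchmark
vol.retype("gold")          # "what if I had a faster disk?"
print(vol.qos["max_iops"])  # 5000: rerun the same tests
```

In a real deployment the equivalent operation is a volume retype through the Cinder API; whether it happens instantly or requires a migration depends on the backend.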
You do that, hack at it some more, and then you think, hey, this is better, let me tweak it, mess with it, now let me spin up a couple more and test it on some other platforms, and so on and so forth. So you're doing all of these things in the same amount of time that you would probably have waited to get that one server from your initial IT request in the past, right? And as people have pointed out before, one of the things that's showing enterprises that there's a real demand for private cloud is the fact that their developers are doing shadow cloud anyway. They're going to Amazon, they're going to Rackspace, they're going to Google, they're going to all these places, because they don't want to wait, they don't want to deal with this pain, they just want to do their job and have fun, right? Most of us that write software think it's fun. So they just want to do that and get it done. So they use their own credit card and they go out and do it in the public cloud anyway. So as I said, in the time that you would have waited for the resources to come from IT, we're able to test the initial design, test it in multiple configurations, hack on it, try a new design in parallel. We finished the application, we tested it on all these platforms, we probably have a continuous integration system set up that's running in a cloud, right? We have all these things, and you release an app and make billions of dollars, which we all want to do. That's how you feel. This one's much better when it's animated, but it's not. So we're going to talk a little bit about Platform9's software, in the context of it being kind of an example implementation of using OpenStack with a Cinder back end. Before I do that, though, I wanted to see: does anyone have any questions? Because we blasted through a bunch of material. Just want to see if there's a question out there. Go ahead. It's nice to be able to get VMs quickly, and Cinder volumes and stuff, so we can get stuff done.
Developers tend to forget about these, though, so they pile up and sit idle. Is there a way to automatically purge disk images or VMs after a given time? So there are a couple of different thoughts on that. One thing right now is that's what quotas are for. You're given a quota, and as an engineer and a developer, it's up to you to manage what you do with it. If you want to just sit on it and camp on it, that's your business, right? There are things being proposed, and some people are starting to implement them, where they do things like auto-delete. You can do that externally anyway. You can set up tools and scripts and monitors that will do that. Or there is some talk and thought about actually putting those sorts of things inside of OpenStack proper. So yes. Thanks. Anyone else? Questions? All right. If not, like I said, we're going to talk quickly about what Platform9 and SolidFire are, to set some context, kind of set the table for the demo that we're going to do to show Cinder and OpenStack. So I will... So Platform9 does what we call managed OpenStack. The way to think about it is actually quite simple: instead of having our customers deploy and operate OpenStack controllers, we do that for them, not in their data center, but in our cloud. We then connect to the customer's data center and manage their hypervisors and their storage and networking. So it's basically a split model: as operators, customers are only concerned about their own infrastructure, Platform9 focuses on running OpenStack, and developers get self-service. It helps get OpenStack running a lot faster, because a lot of times customers don't have expertise in running OpenStack. Can I ask you a real quick question? Does that mean the control plane is off-premises? Yes. Okay. So, because of that, take a typical customer.
We'll usually have OpenStack up and running for the developers in about 15 minutes. There are some things that are latency sensitive, though, like streaming Glance images, so those pieces we house in the customer's environment. Those are the key things. We can also discover existing environments. So if a customer has 100 hypervisors and 10,000 VMs and they want to make it a self-service cloud, we can actually discover all the VMs and all the resources that are running and import them into OpenStack. That's how you can go from zero to OpenStack in about 15 minutes. The key benefit, again, is that customers don't worry about OpenStack; we do. They can continue using their tools, continue the same processes they followed to manage their infrastructure. What I like to say is that Platform9 just does the job. Oddly enough, customers almost don't even know they have OpenStack running. They just know they have an infrastructure and that developers get self-service. And John, you want to talk a bit about SolidFire? Yep. So from the SolidFire side, we actually started a number of years ago. Our founder was working on OpenStack, and one of the things he was trying to do was find a storage solution to use in OpenStack that would work well. At the time, there really wasn't anything that worked well. So he had this brilliant idea to go off and build something, and that's what we came up with with SolidFire. The differentiator for us is that we're focused on automation. Everything on our cluster is intended to be fully automated. We're also focused on dividing the resources on the cluster into two pools: a pool of capacity and a pool of performance. The idea is that you can select and provision from those pools completely independently. So the beauty of it is you can plug that into something like Cinder and into OpenStack.
And you now have the ability to do all of those things the way OpenStack intended, if that makes sense. We've had full integration with OpenStack for a number of years. We offer things like the ability to set and maintain guaranteed quality of service levels, using minimum and maximum IOPS settings. We have built-in deduplication and compression. And of course, we have all the regular table stakes: snapshots and clones, things like that. A web-based UI if you want to use it, though honestly, if you're integrating with OpenStack you won't need it. We have a JSON-RPC API that lets you do everything on the cluster that you would ever be able to do. So I'm actually going to use this as a diagram, an example of how Cinder gets deployed. Again, obviously we have a slightly different model because we host the control services off-site. But whether it's off-site or on-site, the implementation model itself, the components that make up Cinder, is the same, right? There is essentially going to be some set of API services and scheduling services that run on the OpenStack controllers. And then there is what we call a volume service. That is what actually presents a volume to your compute nodes. That volume service sits in your environment somewhere and talks to whatever the back-end storage is. Back-end storage could be just a server with a bunch of disks on it, or it could be an enterprise storage array, like a SolidFire array or others. And basically, as an end user, what you're doing is talking to the API service, which runs on the controllers, which for Platform9 happen to be off-site. But all the work of creating volumes and getting them attached is done locally in the customer data center, through the volume service.
Which in this case is housed in this thing we call the volume node. The volume node is basically a VM that runs the Cinder volume service. All right, so Arun's going to come up and demo how you would use OpenStack to connect to SolidFire as a Cinder back end. Yep, thanks Ken. So let's get this up. If you guys have played with OpenStack, you'll see that this is not the same dashboard as Horizon. The interesting thing is, although it's starting to change, when OpenStack first came out and you looked at the documentation, it basically said Horizon isn't designed for use in production. It was designed to be a reference implementation, with the hope that people would design their own web interfaces that would be more production quality. I think that's starting to change now, because so many people decided they were going to use Horizon in production that there's some pressure now to actually make it better. And I think they have. In our case, because of some of the things we wanted to do, we created our own dashboard. But the key thing here is, underneath the covers, we are talking to the OpenStack APIs in the exact same way that a Horizon dashboard talks to the OpenStack APIs. Yep, so what you can see here is, like Ken said, the Platform9 controller. And what I have open here is the infrastructure page. Basically, whatever servers or hosts you have get listed here. And what we do to enable Cinder on the back end is something we call authorization, but all you need to know is that this particular hypervisor gets a Cinder back end. What that means in OpenStack terms is that the Cinder volume service is going to be running there. And to do that, all you have to do is enable it here and select various back ends. One of them is SolidFire.
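Under the hood, enabling a backend like this corresponds to Cinder driver configuration on the host running the volume service. A minimal sketch of what that section of `cinder.conf` might look like for SolidFire follows; the IP address and credentials are placeholders, and you should check the SolidFire driver documentation for the exact option names in your release:

```ini
[DEFAULT]
enabled_backends = solidfire

[solidfire]
volume_backend_name = solidfire
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
# Placeholders: point these at your cluster's management VIP
san_ip = 10.0.0.10
san_login = admin
san_password = secret
```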
All you need to do is provide credentials and connection parameters, which go into a config file on your host, and which you need in order to use SolidFire. I will say this piece where we're configuring, authorizing the SolidFire array, is one of the reasons we ended up using our own dashboard instead of Horizon, because you wouldn't be able to do this in the Horizon dashboard. You'd have to do it through the command line. So we've enabled you to do this in our kind-of Horizon replacement. And what you see here is the volumes page. It's similar to Horizon, as you can see. So I'll go ahead and create a volume. You have a couple of options for creating a volume. One is from a snapshot that you've already taken. Another is that you can create a volume from another volume, another Cinder volume, I mean. Or you can create a volume from an image. As an OpenStack developer, I always like to use a CirrOS image; it comes by default. And what you can see here is that the CirrOS image is a bootable image. So I'm just creating a volume from an image. What that means is that internally, Cinder is going to call the Glance API to fetch the image. Once it's downloaded, it's going to create a volume out of it. And this particular volume, as you can see, says downloading, which means it's downloading the image from Glance. Okay, it's done. So what this means is Cinder has created a volume for you with the CirrOS image as its source. And you can see that it's marked bootable here, because the image that you used is bootable. One other thing that you can see is the UUID that gets assigned here, and John can probably elaborate on that as well. So if you go to the SolidFire UI, it's just refreshing, and if I search for it... It should be there on the last page. Yeah. So one of the things that we do on the SolidFire cluster: SolidFire has multi-tenancy capabilities built into it.
So what we do is, for each tenant you have in your OpenStack cloud, we dynamically, through the driver, create an equivalent tenant using the same UUID on the SolidFire cluster. And then when you create volumes, Cinder assigns a UUID to each volume, and we use that same UUID to name the volume on the SolidFire cluster. So now you have a one-to-one mapping for both tenants and volume IDs. In large clouds, that actually turns out to be a pretty significant thing. For example, if you have a customer that goes away and doesn't pay, it gives you a really easy way to go through your back-end storage, comb through it, and figure out what resources belong where. So yeah. Just to recap, because that might have gone by a little fast: there are a couple of different types of Cinder, slash USB drive, use cases, right? One, it could just be an essentially empty volume. You might have a VM running somewhere and you just need some extra disk that you want to store some data or a database on. You can create an empty volume, quote unquote, plug that into the VM, and start writing data to that disk. That's a typical way of doing things. The other way, which Arun was demonstrating, is that instead of creating an empty volume, you create a volume that has a disk image on it, and then you can actually spin up a new VM using that disk image. In that case, that Cinder volume is now acting as the boot disk instead of just a second volume that you attach. So what I did just now was create an instance out of the volume that I had created, the bootable volume. If you saw what I was doing while Ken was talking: I just created a VM and pointed at the volume to be used as the base disk.
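The one-to-one UUID mapping John described a moment ago can be sketched like this. It's illustrative Python; the dictionaries are stand-ins for what the driver actually creates on the cluster:

```python
# The driver creates a SolidFire account per OpenStack tenant using the
# tenant's UUID, and names each SolidFire volume after the Cinder
# volume's UUID, so backend resources map one-to-one to OpenStack ones.
import uuid

tenant_id = str(uuid.uuid4())
volume_ids = [str(uuid.uuid4()) for _ in range(3)]

# Cluster-side view: volume name -> owning account name
solidfire_volumes = {vol_id: tenant_id for vol_id in volume_ids}

def volumes_for_tenant(cluster_volumes, tenant):
    """Comb the backend for everything owned by one tenant, e.g. to
    reclaim storage when a customer goes away without paying."""
    return [v for v, account in cluster_volumes.items() if account == tenant]

print(len(volumes_for_tenant(solidfire_volumes, tenant_id)))  # -> 3
```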
So basically, instead of using the ephemeral disk for the VM now, we are going to be using the volume itself. And yeah, I'm waiting for it to come up. What I wanted to show you is that if I write something to this volume, then detach it from this VM and attach it to another VM, you still see the data in there. Yeah, so do you guys want to talk about why you would want to use Cinder as a bootable volume, as opposed to continuing to use ephemeral storage, and what the advantages are? So there are a number of schools of thought, right? I call it baked versus fried. Baked is the situation where you take a volume, like a bootable volume, and you put some preloaded image with a bunch of data on it, right? As opposed to the other method, which is fried, which means: hey, I'm going to spin up this ephemeral instance and I'm going to load everything onto it that I want, dynamically, when I need it. Well, what gets interesting is when you start doing things like CI, continuous integration, or automated testing. We have quite a few customers that use a pattern where they spin up thousands of VMs, thousands of instances, with significantly sized images on them, like 500, 600 gigs worth of data, right? You don't want to populate that every single time you want to run your tests. So what they do is keep one copy of it, a golden image or master image, and clone that thousands of times. What you can end up with is a test harness of thousands of machines, all booted up, spun up, ready to go and interacting with each other, in a matter of minutes. In some cases, when they used to do it the old way, it would literally take them 72 hours to spin up this harness. They can now spin up that exact same harness in a matter of 12 to 15 minutes.
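The golden-image pattern John describes can be sketched as toy Python. The names are made up, and on a real backend each clone is a copy-on-write operation, so creating one is metadata work rather than a 600 GB copy:

```python
# One master ("golden") volume is cloned N times to build a test
# harness; nothing except the master ever has to be populated.
def clone_volume(master, count):
    # Each clone initially shares the master's blocks (copy-on-write),
    # so this loop stands in for N cheap metadata operations.
    return [
        {"id": "%s-clone-%d" % (master["id"], i), "size_gb": master["size_gb"]}
        for i in range(count)
    ]

golden = {"id": "golden-master", "size_gb": 600}
harness = clone_volume(golden, 1000)
print(len(harness), harness[0]["size_gb"])  # -> 1000 600
```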
So I will take the other side on this a little bit, in that there are some people who would say that's not the approach to take, because if I do that, then I'm using this shared storage system, and the more shared storage I use, the less I can scale. What they would say is, if you're running cloud native applications, you're a fool if you have 600 gig of anything, right? They would say that to make cloud native work, what you want is small instances with very small disks, but hundreds and thousands of them instead of tens and hundreds. Does that make sense? So my point is not that one use case is more valid than the other. My point is that there are different use cases, and it's very important, particularly if you're going to be the one architecting your OpenStack cloud, that you choose the right architecture and make the right design decisions for the workload you're trying to support. If your workload is going to be a lot of small instances, then you may not need to boot off Cinder, but if you're gonna boot a lot of 600 gig volumes, then you probably need to have Cinder somewhere in the back end. So again, think carefully. We're vendors up here, but I would tell you, don't just listen to your vendors. Unless it's me. Right, if you're an architect, you have a responsibility to understand what your use case is and what the right solution for it is. Yeah, so what I just did here is delete the VM that I booted up through the volume, and now I can use that volume on a different VM. So I attach it to another VM, and you can see that the file I had written there is still present. I can do some more fun stuff, like snapshotting a VM. An interesting thing is, if you snapshot a VM with a volume attached to it, you're gonna get a snapshot of the volume as well. And you can create a new volume from that snapshot.
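The snapshot-to-volume chain Arun demos can be sketched as a small point-in-time model: a snapshot freezes the volume's contents, a new volume can be created from it, and that volume can be snapshotted in turn. This is a toy model of the concept, not the Cinder API.

```python
# Toy point-in-time model: volume -> snapshot -> new volume -> snapshot -> ...
import copy


class Volume:
    def __init__(self, data=None):
        self.data = list(data or [])

    def snapshot(self):
        # A snapshot captures the contents as of this moment; later writes
        # to the volume do not change it.
        return copy.deepcopy(self.data)


def volume_from_snapshot(snap):
    return Volume(snap)


v1 = Volume(["base-image"])
s1 = v1.snapshot()
v2 = volume_from_snapshot(s1)   # new volume from the snapshot
v2.data.append("changes-on-v2")
v3 = volume_from_snapshot(v2.snapshot())  # and again, "inception"-style
print(v3.data)                  # carries everything written up the chain
```

Each new volume is fully usable and independent, which is what makes the chain safe to extend indefinitely.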
It's like inception: volumes to volumes to volumes. You can do that. And just so you guys know, I'm gonna show you DevStack with the Horizon UI. It's just a different UI than the one we're using in Platform 9, but everything else is the same; you can do the same things with those volumes. Yeah. So, by the way, what I just demonstrated is actually one use case. Another use case for Cinder is, you saw that because it was using Cinder, if the compute node that housed that VM died and you had an ephemeral disk, basically you'd lose everything and have to rebuild everything, right? In his case, everything was sitting on a Cinder volume, so even if the compute node went away, you could basically just spin up new VMs, reattach those volumes, and get things up and running very quickly. So it can be useful as a way to do very rapid recovery in the event of some kind of failure. One example of that, to touch on and go back to the other thing we talked about: I have a number of things I do that use a test database, right? I have a database populated with all kinds of random test data. I will test that in multiple configurations in parallel, and the way I do that is exactly like Ken is saying: I boot up an instance, attach that volume, and run my automation on it, and it does that testing for me. I always come back to the cloning thing. I use that a lot. Some people use it, some people don't. But for me, it works well because I can run all sorts of tests in parallel now; I don't have to do them sequentially. So this is the Horizon dashboard, and I can go ahead and create a volume here, and you can see it on the Platform 9 side as well. It's interchangeable, since both use the same database and back end, so I can create a new volume from Horizon. This is using SolidFire, right? This is using SolidFire. So what I can do is create a volume, but I created a volume type, which John can probably talk about.
So it's got some QoS settings for SolidFire-specific stuff, and I say create volume. There you go, you have a new volume, and I think the Platform 9 dashboard should also see that. All right, so this is the discovery stuff. Again, the key is that the two different dashboards work pretty much the same, the main difference being that the Platform 9 dashboard can also configure the storage array, which isn't currently available in the Horizon dashboard. Till we push it into Cinder. Unless we, until we give this away, if people want it. And we do. Anything else? Okay. Does anybody have any questions? Yeah, so you got questions? Could you come on up, or find the guy running around with the mic? So are you using metadata with Cinder anywhere there? We use metadata in lots of places; it depends on where you mean exactly. So when you were creating that volume, that volume type, and you said there were certain QoS... That is metadata. Characteristics. So, the way volume types work: an administrator can create a volume type, and he or she can assign what we call extra specs to that volume type. Those extra specs are just key-value pairs. They're just metadata. So when you were going through your own interface, were you actually passing the metadata directly there instead of through a volume type? No, it goes through the interface. Or it goes through the volume type. What happens is the OpenStack driver actually interrogates the information as it comes in and says, hey, do I have a volume type associated with this? If yes, does it have some specific QoS settings that I need to use? And if yes, then it sets those for you. I see. So it's all automated. For the snapshots, do you have any issue with getting transactionally consistent snapshots if you're using a database? So that's definitely something that you have to consider.
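The volume-type flow John walks through can be sketched as follows: an administrator attaches extra specs (plain key-value metadata) to a volume type, and the back-end driver interrogates each create request, finds the type's QoS settings, and applies them automatically. The function names and spec keys like "qos:minIOPS" below are illustrative assumptions, not the real Cinder or SolidFire driver API.

```python
# Sketch of volume types with extra specs, and the driver-side interrogation.

volume_types = {}  # type name -> extra specs dict (just key-value metadata)


def type_create(name, extra_specs):
    """Admin-side: define a volume type with its extra specs."""
    volume_types[name] = dict(extra_specs)


def driver_create_volume(name, volume_type=None):
    """Driver-side: honor any QoS extra specs on the requested type."""
    volume = {"name": name, "qos": {}}
    specs = volume_types.get(volume_type, {})
    # "Do I have a volume type associated with this? Does it have QoS
    # settings I need to use? If yes, set those for you."
    for key, value in specs.items():
        if key.startswith("qos:"):
            volume["qos"][key.split(":", 1)[1]] = value
    return volume


# The admin defines the type once; users just pick it when creating volumes.
type_create("solidfire-gold", {"qos:minIOPS": 1000, "qos:maxIOPS": 5000})
vol = driver_create_volume("db-volume", volume_type="solidfire-gold")
print(vol["qos"])  # {'minIOPS': 1000, 'maxIOPS': 5000}
```

This is the "all automated" part of the answer: the user never passes QoS metadata directly; picking the type is enough.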
And that's why you start having to consider using things like some of the Oracle tools or the MySQL tools that go along with that. Oh, and if you're doing cloud native, the whole idea of consistency, you don't need it, at least not that level of consistency, right? So again, it depends on the use case. Honestly, the direction we've tried to push people for years is: stop doing it that way, quit worrying about consistency, do cloud native. We're kind of losing that battle lately, unfortunately. But it's actually a much better way to go for you and everybody else involved. I see. So is there a way that I can get something like a thin disk here, or do I care? Would you prefer that I create lots of small disks rather than a large disk and have it thin provisioned? So depending on the back end you use, that's really a don't-care. In the case of SolidFire, and actually even in the case of LVM now as the default, most back ends are doing thin provisioning for you. So it's a don't-care. Yeah, and that's actually one larger point we're bringing up: one of the things Cinder has tried to do is, if you guys are using different vendors' arrays, not just SolidFire but EMC, Pure, NetApp, whatever, all those arrays obviously have specific capabilities, and many of those capabilities can be exposed through Cinder. So depending on which array you have, you can get different capabilities for that Cinder volume. That's what I call the magic of volume types, because you can put anything you want in there. Right, so. All right, what time is it? We are past time. Okay, so thank you. If you have questions, you can always come up; we're happy to talk to you in the back or the front here. But everyone else, thank you for your time. Thank you.