Hey, my name is John Griffith, and this is Matt Molesky. We're from SolidFire, and we're going to show you a couple of new features in Cinder, do a little demo on how to use them, and then show how to use them with a SolidFire cluster. First, I'm going to let Matt tell you a little more about SolidFire and the SolidFire cluster, and then I'll come back and we'll go through some Cinder stuff from there.

So what we have here: SolidFire is a scale-out, all-flash, iSCSI storage solution. It looks more or less like the picture you saw in the previous slide, but we can go anywhere from the five-node cluster pictured there up to 100 nodes. So anywhere from around 60 terabytes of capacity up to two petabytes and 5 million IOPS; it scales quite a ways up. Each of the units is just a 1U appliance. There's really no top-of-rack controller node or anything like that. Every node in the system contributes to the storage capacity and performance, as well as the management functionality. So you eliminate all the single points of failure and really end up with a solution that matches the cloud paradigm in general, which is to scale out, not scale up. Usable dollar-per-gig is going to be pretty similar to performance spinning disk: after you factor in inline deduplication, compression, and thin provisioning, you're going to end up around $4 or $5 per gig. So it's very competitive there, despite the additional performance that you're going to get from a flash solution.

And past that, it's all about quality of service. Whereas on a traditional SAN, you create a 10-gigabyte volume and your performance ends up being a function of everybody else who happens to be on that system, with SolidFire you create a volume and you specify a specific amount of performance that you want for that volume. So you create a 10-gigabyte volume and say you need it to do 3,000 IOPS, or 5,000 IOPS, or however much.
It's really the whole idea of being able to provision performance and capacity independent of each other. You don't have to worry about drive groups or RAID levels or LUNs or anything like that. And finally, it's got full, complete management automation. We've got a REST-based API, and every call is accessible to the customer. There really are no functions on the system that aren't automatable. And that's how we plug in here: we've got the Cinder block storage service, which uses a volume driver to integrate with our REST API, so that when you type cinder create, or use Horizon to create a volume, or what have you, Cinder will reach out to our storage solution and create a volume with really no administration overhead. You don't have to go back to the SAN and do any LUN administration or anything like that. So that's just a quick overview. I'm going to hand it back to John for a few minutes, and he's going to show you some of the Cinder functionality that we can use with the SolidFire system.

OK, so first off, you're going to notice root@grizzly and root@folsom in the prompts. Everything on here is actually running Grizzly, but one of our nodes happens to be named Folsom, so don't be confused. Anyway, in Cinder over the past six months, we've done a number of really cool things. We've fixed up boot from volume quite a bit and made it generic across all of the iSCSI devices. We've also added multi-back-end support, and we've improved some of the things that you can do with volume types and extra specs. So what I'm going to do today is run through real quick and show you how to create volume types and how to create extra specs for those volume types, in particular related to SolidFire, showing you how to use those extra specs for QoS settings. And then comes my other favorite feature, boot from volume.
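The volume-type and extra-spec steps demoed below look roughly like this (a sketch, not a verbatim capture of the demo; the type name "fast" comes from later in the talk, and the qos:minIOPS/maxIOPS/burstIOPS key names are the ones the SolidFire driver looked for in the Grizzly timeframe, so check your driver's documentation):

```
# Create a new volume type.
cinder type-create fast

# Attach QoS extra specs to the type; the SolidFire driver reads
# minimum, maximum, and burst IOPS from the type's extra specs.
cinder type-key fast set qos:minIOPS=100 qos:maxIOPS=8000 qos:burstIOPS=10000

# Verify the type and its extra specs.
cinder extra-specs-list
```

Any volume created with --volume-type fast then gets those QoS settings applied on the SolidFire cluster automatically.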
So we're going to show you how to boot up some volumes to run a performance tool we call VDBench and do some I/O testing against the SolidFire cluster. Most of you are really familiar with Cinder, so basically, we're going to use an admin account, and here we have that. So let's go ahead and just create a demo type. And I can't type. And then we want to assign it some extra spec information, so we take the UUID of that type we just created and do a set. Do you want me to set a min? 100? Do 100. And maxIOPS equals... keep the max under 10K somewhere. Let's do 8,000. So you can see those are the different QoS settings that we can use on the SolidFire cluster; it will actually look for a minimum, a maximum, and a burst. So we'll go ahead and set those. And now we should be able to do an extra-specs list and make sure, and sure enough, we've got our volume type with our QoS information.

So the next thing I want to do is go ahead and create a bootable volume that will live on the SolidFire cluster. Which one of those types would you like me to use? Don't care? OK, so let's go with fast. Everybody want fast? Fast it is. The first thing I want to do is get my image ID, so we'll use this Ubuntu 12.04 image with VDBench. So I do a cinder create... flavor one. You don't need a flavor on that. What's that? You don't need a flavor on that. Oh, hey, thanks. So I can get rid of that. And I can fix this. Stupid Mac. OK, look better? Anyone? Anyone? Looks good. How big do you want? Let's go 20 gig. Ready? All right. Oh, man, you're killing me. You did that on purpose; you're sitting back there laughing. It goes by name, not by ID. Oh, that's right. Yeah, yeah, yeah. I'll stick the image ID in there. You had the image ID in there twice, it looks like. Either way, it takes image name, or ID, rather. No? This is going well. Excellent. Took me a while, but I got there.
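The command being fumbled here amounts to something like the following (a sketch; the image UUID placeholder and the display name are ours, and --image-id was the Grizzly-era option for creating a bootable volume from a Glance image):

```
# Create a 20 GB bootable volume from a Glance image, using the
# "fast" volume type so its QoS extra specs are applied.
cinder create --image-id <glance-image-uuid> \
              --volume-type fast \
              --display-name vdbench-base 20
```

While the volume sits in the "downloading" state, Cinder is copying the image out of Glance onto the new volume.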
So if you have a Linux box with a good keyboard, it's a lot easier and works a lot better. I've been sabotaged by Apple. It's on your list. I should have given it a name, but that's OK. So we're downloading. This is actually going out to Glance, pulling the image out of the Glance repository, and putting it on this volume on the SolidFire cluster. One of the other things that we introduced that's really cool, and that Matt will show you, is the ability to actually clone volumes. So actually, I'll show you, or try to, based on how the last demo went; it might not go so well. Rather than go through this process again of pulling the image down, loading it, putting it on the volume, and everything else, what we can do now is just go ahead and clone this volume multiple times on the SolidFire cluster. It'll do everything on the back end for us, so it'll be a lot faster and a lot more efficient, and we just use this as our base image, and it has everything on it. This image, by the way, is a special image that has all of our VDBench stuff: everything that we need to run our tests, all the configuration files, the cloud-init stuff, everything else. So it's all ready to go.

So let's do a cinder create, and we'll give it that source volume ID. You did paste? What's that? Oh, you got it. And we'll do another 20 gig. The other thing that I should point out: notice we did the volume type; we could also actually specify another volume type. So if we had another volume type with different QoS settings that we wanted to use for this particular image or this instance, we could go ahead and just pass the volume-type option again and give it the other one, and it would use that volume type and those QoS settings instead of the ones from the volume we just cloned. As it stands, if you don't provide that, it just pulls the settings from the one you're cloning. So those are ready to go. Awesome.
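The clone step described above is just another cinder create, pointed at an existing volume instead of a Glance image (a sketch; "slower" is a hypothetical second volume type, not one from the demo):

```
# Clone the base volume; the copy happens entirely on the back end,
# with no second trip to Glance.
cinder create --source-volid <base-volume-uuid> 20

# Optionally override the QoS by naming a different volume type;
# otherwise the clone inherits the source volume's type and settings.
cinder create --source-volid <base-volume-uuid> --volume-type slower 20
```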
I'm going to let Matt go ahead and show you booting them up in Nova and running the VDBench test. So we'll see if we can actually show some I/O running on these volumes, if I can remember how to do this. Yep. Don't forget your cloud-init user data. Oh, right. So we need to make sure we specify the user data that we want to pass into it, as we're doing now. I'm actually going to do this with a secondary volume; it's a little bit tricky to run I/O tests against the root disk. As I hop over to Horizon here, it's a little bit easier to look at the status of my instances. We can also see that the QoS settings over in the SolidFire UI match the volume type that we created earlier. I should see I/O running here momentarily. We've got just enough battery left: 22 seconds. No pressure. Bear with me here. So for those of you that have tried typing in front of a group of people, you know it's not easy. It sucks. Uh-oh. So that's pretty much the way demos always go, right? Yeah, you'll have to forgive me here; I think these guys are going to kick me off the stage.

But I mean, that's really the net effect of it. It's the whole idea of provisioning performance and capacity independent of each other. So you can have a 10-gig, 100,000-IOPS volume right next to a 1-terabyte, 100-IOPS volume for your images, and that's really no different to the SolidFire system. You don't really have to worry about how many spindles you're using or anything like that. You have some aggregate amount of capacity, you have some aggregate amount of performance, and you can carve them up really any way you want. So I hope that gives you a good idea of what we're all about. If you want, please stop over at our booth; we're kind of over in the back corner over there. And other than that, we're glad to answer any other questions.
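For reference, the boot-from-volume step being run here looks roughly like this in Grizzly-era Nova syntax (a sketch; the flavor, user-data file name, and instance name are placeholders, and the vda=<id>:::0 mapping means "attach this volume as the root disk, don't delete it on terminate"):

```
# Boot an instance whose root disk is the cloned Cinder volume,
# passing cloud-init user data that kicks off the VDBench run.
nova boot --flavor m1.small \
          --block-device-mapping vda=<cloned-volume-uuid>:::0 \
          --user-data vdbench-userdata.txt \
          vdbench-test-1
```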