All right. I hope everybody enjoyed their lunch. I had the chicken and fish. It was very delicious, but a little hard to keep down right before a presentation, so we'll see how this goes. I'm pretty sure I'll be able to hold on to it.

My presentation is on reconciling intent-defined storage with Cinder's volume-centric model. Now, that's a bit of a mouthful, but I promise we'll get into it and explain the various pieces that go into a title this complex.

About me: I'm an employee of Datera, a startup in Silicon Valley. I've been in the industry for about six years. I previously worked with automation frameworks at Riverbed, helping design and develop their SteelScript automation framework, which was used for automating their various products, including the SteelHead. Currently I'm the maintainer of the Datera Elastic Data Fabric Cinder driver, as well as the Docker driver and the Mesosphere integration. Because it's a startup, I have about a half dozen other jobs that I won't even go into; that's just typical startup life. I'm pretty terrible at presentations, so I'm just happy to be here. We'll see if we can make it through.

So, Datera itself. We're a startup in Silicon Valley, founded in 2013 with the intention of making a universal multi-cloud data fabric. One of our founding members, Nick Bellinger, is the author and maintainer of the LIO stack. We launched our product and came out of stealth mode in April 2016. We have quite a bit of VC funding, and we're currently getting the product out to customers. We shipped about two petabytes of raw capacity in our first revenue quarter. We have about 60 employees, from a variety of very well-known companies, so we're getting to the larger side of startups.

So Datera has a product: the Elastic Data Fabric. It's an iSCSI block, hybrid and all-flash (AFA), scale-out storage solution. Again, a lot of buzzwords, but essentially it's just iSCSI.
You have flash and high-capacity spinning disks, or you can go full flash with the product. It's also application-driven, intent-based cloud data infrastructure, and at that phrase I'd expect everyone to go, "What? What is that?" Succinctly, it means we don't provision volumes individually much. With legacy storage, if you want storage, you say, "Hey, I want this volume." We don't really do that anymore. Instead, you say, "I want this application's worth of volumes, with this specific structure and these volume relationships," and we provision it all at once. We use a construct called a template to do that.

So, on to templates. What is a template, at least in terms of volume provisioning? For us, it's a top-level object. It makes provisioning a complex set of storage much easier, because you describe it all beforehand, specifically for your particular application, and then you instantiate the template and get everything you previously described. For example, our templates cover snapshot policies, authentication, IP pools, volume replicas, size, your typical QoS values, storage placement (because we're a hybrid solution, you can place on large-capacity flash or on hybrid), and also the more complex storage and target relationships. All of those can go inside the template. And as the product matures (we are a startup, and the product is very new at this point), we'll be adding more and more features that can be described via this templating structure.

Now, in OpenStack we already have the concept of Heat templates, so why do you need another templating system? Well, if you don't know what Heat is, it's the orchestrator for OpenStack.
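To make that list concrete, here is a rough sketch, in Python, of the kind of information a template gathers in one place. Every field name here is illustrative only; the actual Datera template schema is not shown in this talk.

```python
# Illustrative sketch of an application template: all field names are
# hypothetical, not the real Datera schema.
app_template = {
    "name": "my_app_template",
    "snapshot_policy": {"interval_minutes": 15, "retention": 48},
    "auth": {"type": "chap"},
    "ip_pool": "default",
    "replicas": 3,
    "size_gb": 100,
    "qos": {"bandwidth_max_mbps": 500, "iops_max": 10000},
    "placement": "hybrid",  # e.g. "hybrid" or "all_flash"
    # target/volume layout for the whole application
    "storage_instances": [{"target": "tgt-0", "volumes": ["data", "log"]}],
}

def instantiate(template, instance_name):
    """Instantiating a template yields everything described above at once,
    rather than one create-volume call per volume."""
    return {**template, "name": instance_name, "bound_to": template["name"]}
```

The point of the top-level object is that the whole application's storage shape lives in one description, so instantiation is a single operation.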
It allows you to specify templates, in a way, for the various OpenStack services, so you can provision the entire lifecycle of a product in your OpenStack cloud from beginning to end. But its one main limitation, at least in comparison to our product, is that the templating relies on the Cinder API, or the Cinder and Nova APIs, and essentially every API the services provide. It can't really do anything outside of that. It has to go through those APIs, which is really what OpenStack is about.

If you're familiar with Cinder, it has APIs for creating a volume, creating an export, attaching, all these different things. Now, Datera's back end doesn't really have a concept of just "create volume" or "create export". These aren't things that happen by themselves; a lot of other stuff comes with them, because we're dealing with the concept of an application, not the single volume that a service like Cinder is used to. Cinder provisions by volume. It does have some additional things, like volume groups, but those are ultimately "create volume" calls underneath. It's still dealing with each of those volumes individually, which is something we as a company are trying to move beyond. So Cinder just has no way to represent these back-end relationships without doing some real complex hackery on the front end, which is essentially what Heat is.

So how do we get this to work with Cinder? Currently, all but the most complex target and volume relationships are available for you to use in OpenStack with our current Cinder driver, which I developed. If you go and get the latest version of the driver right now, you can provision an entire application's worth of targets and volumes. It just requires essentially pretending on the back end that it is multiple applications.
We treat each volume as if it were its own application, at least in the current version of the Cinder driver. There are plans to rectify this and make it so that a single application represents all of the volumes you have in Cinder for a particular application.

Now, there are two ways we can go, and I'm going to walk through how I do this in Cinder. We can heavily leverage the concept of volume types in Cinder, which is your ability to specify a series of key-value pairs to associate with any volume created with that type. Or we can leverage it lightly, and instead let the Datera back end take care of a lot of that for us.

So why would we choose one over the other? With the heavy volume-type approach, we have, well, not hundreds, but dozens of keys that we have to set on every volume type, for every application instance we want to associate with that type. For instance, if we want to set the IP pools and the bandwidth max, write max, total max, IOPS max, all of those different keys, we have to set each of them individually on the volume type. Then, when we instantiate that volume type, the back end represents it as a standalone application instance with a single target and a single volume, and all of these settings are applied to that application instance, that volume, individually. It has no relationship with any other volumes that are instantiated.

And this is all well and good. Up until now, our driver has pretty much worked that way, and it's perfectly serviceable. If you want volumes that have no relationship with each other, that's perfectly possible. But there's a better way to use our product, to be honest, and that's the next option: the light volume-type approach.
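As a sketch of the heavy approach, here is what that per-volume translation might look like. The extra-spec key names are hypothetical stand-ins, not the driver's real keys; only the "os-" name prefix comes from the talk.

```python
# "Heavy" use of a Cinder volume type: every setting is its own extra spec.
# Key names below are illustrative only, not the driver's actual keys.
heavy_type_specs = {
    "ip_pool": "default",
    "bandwidth_max": "500",
    "write_bandwidth_max": "400",
    "total_iops_max": "10000",
    "write_iops_max": "8000",
    "replica_count": "3",
    "placement_mode": "hybrid",
    # ...dozens more keys in practice
}

def build_app_instance(volume_id, extra_specs):
    """Translate one Cinder volume plus its type's extra specs into a
    standalone application instance: one target, one volume, and no
    relationship to any other instance."""
    return {
        "name": f"os-{volume_id}",  # back-end name prefixed per ecosystem
        "standalone": True,
        "settings": dict(extra_specs),  # applied to this instance only
    }
```

Every volume created this way carries its own private copy of the settings, which is exactly why changing them later means touching each instance one by one.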
So in this case, instead of specifying dozens of keys with all their values, we just specify a template that we've created on the back end. Here we're using the key "DF template". I've created a template, my_app_template, that already contains all of the information we previously had to spell out, so we don't need to specify it in the volume type. The driver understands that if a template key is specified in the volume type, every application instance it creates will no longer be standalone; it will instead be instantiated from that template. It will be, in other words, bound to the template. So if you instantiate 50 volumes, 50 application instances, all of the same volume type, they will all be bound to that parent template.

The main advantage you get from a relationship like this is that all the child instances, all these child volumes in Cinder, are now bound to that template, and any change to the template propagates to every single volume bound to it (not to everything on the Datera cluster, just to the instances bound to this template). They're treated almost like a whole application. Say you had a Hadoop cluster, and you had a template for it. You spin up 50 volumes for that Hadoop cluster, and then you realize, "Hey, I set the QoS too low on this template, so every volume that gets provisioned has its bandwidth capped too low. I want to increase it." You could go and change that on each individual instance. Or, if they're bound to a template, you change it on the template and it changes all of them at once. There's just a single place you have to change.
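The driver-side decision can be sketched like this. This is a simplified model, not the actual driver code, and the exact spelling of the template key is an assumption:

```python
TEMPLATE_KEY = "DF template"  # illustrative spelling of the extra-spec key

def create_app_instance(volume_id, extra_specs, templates):
    """Simplified model of the driver's create path: with a template key,
    the instance is instantiated from and bound to the back-end template;
    without one, it is a standalone instance."""
    tmpl_name = extra_specs.get(TEMPLATE_KEY)
    if tmpl_name is None:
        # Heavy path: standalone, settings spelled out per volume.
        return {"name": f"os-{volume_id}", "template": None,
                "settings": dict(extra_specs)}
    # Light path: the instance keeps a reference to the parent template,
    # so template changes are visible to every bound instance.
    return {"name": f"os-{volume_id}", "template": tmpl_name,
            "settings": templates[tmpl_name]}

templates = {"my_app_template": {"iops_max": 10000}}
bound = create_app_instance("abcd", {"DF template": "my_app_template"}, templates)
```

The design choice worth noting is that the bound instance holds a reference to the template's settings rather than a copy; that shared reference is what makes the binding meaningful.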
And because of that relationship, the change propagates, and it happens almost instantaneously. In a traditional model, you would have to make a REST call, a POST, to each of these different volumes to change them. That's ultimately much slower than a single call to the template, which modifies it and propagates the change to the rest.

So that brings us to the demo. Now, in my previous presentation, in Austin, I did this live. I'm not doing it live anymore, especially not from Barcelona. It may be a little difficult to see, and I wish I had put annotations on it, but I'll walk you through it.

Up here in this top box, we have a watch script that's watching "nova list", "cinder list", and "cinder snapshot-list". Down here, I'm going to run the script, but first I'll show you the Datera volume type I've created. We've set the key, DF template, to the Barcelona demo template, and that corresponds up here to this Barcelona demo template, which has already been pre-created. It doesn't currently have a whole lot set on it, but at least the template exists.

Down here is the Datera front end; this is what you'd look at to interact with the Datera box. (Let me pause this, because it's going a little too fast.) We also have a very well-defined REST interface, but a pretty GUI as well. It shows some performance metrics, the number of volumes you've provisioned, the total capacity of the cluster, and various other things. At this point, I've spun up about 10 VMs. You can see up there that it actually created the volumes for those VMs first. Those are created in Cinder, and they're of the Datera volume type, which points at this template. Every five seconds, that display up there refreshes. Now the VMs have been created.
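The payoff is in the number of management calls. Here's a toy comparison, using a counter in place of a real REST client, of per-volume updates versus a single template update:

```python
calls = {"rest": 0}

def rest_update(obj, key, value):
    """Stand-in for one REST POST to the storage back end."""
    calls["rest"] += 1
    obj[key] = value

# 50 standalone volumes: one REST call per volume to change the QoS cap.
standalone = [{"iops_max": 100} for _ in range(50)]
for vol in standalone:
    rest_update(vol, "iops_max", 200)
per_volume_calls = calls["rest"]  # one call per volume

# 50 template-bound volumes: each references the shared template, so a
# single REST call to the template reaches every bound instance.
calls["rest"] = 0
template = {"iops_max": 100}
bound = [{"template": template} for _ in range(50)]
rest_update(template, "iops_max", 200)
template_calls = calls["rest"]  # one call total
```

This is only a model, but it captures the asymmetry the talk describes: N calls collapse to one, and the change is visible everywhere as soon as the template is updated.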
Now, after creating the VMs, I attach these volumes to them as scratch volumes, and then I use FIO, which is a storage load generator, to write data to those scratch volumes. Once the load starts, which takes just a few moments, you'll see some performance metrics pop up here on the front end.

An interesting thing: if you look up there at the Barcelona demo template, right here, you can see all of the application instances that are bound to it. Everything listed under it is bound to that template, and changes to the template will propagate to those instances. You can actually correlate those application instances to the Cinder volumes over here by UUID. On our back end, we prefix the name with the ecosystem that happens to be doing the provisioning, in this case OpenStack, so it's "os-" followed by the UUID of the Cinder volume.

Oh, good. We're getting some data going already. At this point, we have data flowing, and I want to restrict it. Every single application instance bound to the template will pick up the change I make to the template. I reduced the limit, I believe, down to 10 megabytes per second, and that's a per-volume, per-application-instance value. So we were doing 580 megabytes per second, and now we're doing about 50, since I have 10 volumes.

Now, that's obviously a very drastic case, and it causes the latency to spike. So we'll change this instead to an IOPS-based QoS policy: I set the bandwidth limit in the QoS policy to 0, and change the max write IOPS to 100. In a moment, you'll see that change propagate and the performance shoot up again, because it's now restricted only by IOPS and no longer by bandwidth. Latency will drop a little. We're still throttling, so latency stays fairly high, because those VMs are writing literally as fast as they can. Yeah, so we saw the performance increase a little.
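That correlation can be sketched as stripping the ecosystem prefix (here "os-", as described above) off the back-end instance name; the helper name itself is just for illustration:

```python
def cinder_uuid_for(app_instance_name, prefix="os-"):
    """Map a back-end application-instance name to the Cinder volume UUID
    it was provisioned for. Returns None for instances that were not
    provisioned through this ecosystem (i.e., lack the prefix)."""
    if app_instance_name.startswith(prefix):
        return app_instance_name[len(prefix):]
    return None
```

So an instance named "os-3fa85f64-..." on the Datera side corresponds directly to the Cinder volume whose UUID is "3fa85f64-...".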
Now I'm just going to remove the QoS policy entirely and instead set up a snapshot policy. The snapshot policy was set to take a snapshot every 15 minutes, so we should have at least one snapshot by now, but I'm also going to issue a snapshot creation through Cinder itself. So right here, we issue that snapshot creation, and we can see what kind of performance penalty we get for snapshotting every single volume bound to this application template. There's a little dip, it comes right back up, and all 10 snapshots show up in Cinder. Then we can navigate to one of the application instances, and you can see it has two snapshots: one taken by the snapshot policy, and one that we requested directly through Cinder. That's represented in both Cinder and the back end. And I believe that's the end of the demo.

So, anyhow, this is how I've been going about reconciling the difference between Cinder's volume-centric model and our application-centric model. Even with our application-centric model, we can still emulate Cinder's volume-centric one and get the job done. But if we leverage templates, at least the way we've designed them, we get a lot more functionality than you can get with just native Cinder and individual standalone application instances. Those relationships can turn out to be very powerful.

Now, if you want to learn more about this sort of thing, please drop by our booth. I believe we're located right over here, just to the side of the VMware booth. Come ask me any questions you'd like; I'll be here pretty much the rest of the week. So if there are any questions, I can take them now or I can take them there. All right, I think we're good. Thank you.