All right, thanks. So this is my first presentation on a big stage, so we'll see how it goes; it may be a little rough. My name's Matt Smith, and I'm a software engineer at Datera, a startup based in Silicon Valley. We actually launched our product just about three weeks ago, so we're pretty new. A little bit about me: I used to work at Riverbed, on the SteelScript framework there. I'm a big fan of cycling, cats, and board games, especially Eldritch Horror. If you want to find me on IRC, my handle's up there; I typically lurk in the Cinder channel. So, a little about our company. We have an elastic data fabric. It's essentially an iSCSI-based drop-in replacement for a Ceph solution: more performant and way easier to configure. I'm sure most of us have seen the pain points that come with setting up Ceph, and we try to eliminate a lot of those. We've gotten funding from some pretty big names, like Pradeep, and overall it's going pretty well. In terms of the architecture of the product, it's essentially an intent-based system, and specifically application-intent-based. Instead of being a volume-centric system, which is what we're most familiar with when dealing with elastic block storage devices, we're application-based. Our top-level container is something called an app instance. It's not on this slide, but that app instance contains storage instances and volumes. The main idea is that our product picks where the storage goes: you describe the needs of your application, its intent, and as you can see right here, the product distributes the storage across either the high-performance flash arrays in the product or the slower but high-capacity spinning disks.
The product itself is extremely scriptable; it's pretty much meant to be used by DevOps. This is an example of the API for our product. It has a nice interface: essentially just JSON-based REST queries. You can create and do everything you need to with the product through a REST query, and it's meant to be very easy to use and very easy to understand. Most of the time, I end up interacting with the product through requests, the Python library. This is just an example of the steps you would take to log in to the product and then configure your first application instance. In this case, you can think of an application instance as an entire application's worth of storage. You're not provisioning just one volume; we do have the option of provisioning a single volume, but it's always within the context of an application instance. And since we're talking about instances, we have to talk about templates. Templates are really what the bulk of this talk is about; they're the new thing we bring to the table, because anyone can change the model of the way you provision volumes. With templates, we make it extremely easy to describe what you want for an application once, spawn as many copies of it as you want afterwards, and even change them after they've been spawned: the change propagates to everything that inherits from that template. So these templates are the main focus of being application-centric. Templates give us the power to describe an application. They give us control of quality of service, snapshot policies, ACLs, IP pools, volume sizes and configs, maybe the weather. They give us control of a ton of things about a particular application.
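The "log in, then create an app instance from a template" flow I just described can be sketched with requests. This is a hedged illustration only: the base URL, port, paths, header name, and payload fields below are placeholder assumptions, not Datera's documented API schema.

```python
# A rough sketch of the "log in, then create an app instance" workflow.
# NOTE: the base URL, paths, header, and field names are hypothetical
# placeholders -- consult the product's API docs for the real schema.
BASE = "http://datera-mgmt.example.com:7717/v2"  # hypothetical endpoint

def login_payload(user, password):
    """JSON body for the login request."""
    return {"name": user, "password": password}

def app_instance_payload(name, template):
    """JSON body that instantiates an app instance from a named template."""
    return {"name": name,
            "app_template": {"path": f"/app_templates/{template}"}}

def create_app_instance(token, name, template):
    """Send the creation request (illustrative; requires network access)."""
    import requests  # third-party; imported lazily so the sketch loads anywhere
    return requests.post(f"{BASE}/app_instances",
                         json=app_instance_payload(name, template),
                         headers={"Auth-Token": token})

if __name__ == "__main__":
    body = app_instance_payload("db-tier", "small_app")
    print(body["app_template"]["path"])
```

The point is simply that one POST names a template, and the back end expands that into a whole application's worth of storage.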
Normally you would have to describe all of those settings for individual volumes; here, you describe them once for the application, and they can still be set per volume within that application, so you get all of that for free every time you instantiate it. We can have many application instances; these are just templates that have been instantiated, bound instances that relate back to their template. One template can have many instances, and you can later go back into that template. Say you decide to add some number of additional applications based on this template, and you realize you're over-provisioning the amount of storage you have, or whatever metric. You can change that metric inside the template, and it will be reflected in everything still bound to that template: every live application instance generated from the template, even ones currently being used by users, will be affected. This brings us to a bit of a conflict. It's not a huge conflict, more a difference in paradigms. Cinder is very much a volume-centric manager, whereas Datera is application-centric, as we've been discussing. With Cinder, a single volume is atomic: for the most part it has zero relationships with any other volumes. You can get some relationships with consistency groups, but at the volume level, a volume doesn't know anything about any other volume. Datera's application centricity breaks this: our volumes have relationships with each other. They share information, that information lives in the parent data structure, and the volumes inherit it.
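The inheritance relationship described above can be modeled with a toy data structure: instances hold a reference to their parent template and look settings up live, so editing the template is immediately visible to every bound instance. This is a conceptual sketch, not Datera's implementation.

```python
# Toy model of template -> app-instance inheritance (conceptual only).
# Instances read shared settings from their parent template at lookup time,
# so a template edit is instantly reflected in every instance still bound.

class AppTemplate:
    def __init__(self, **settings):
        self.settings = dict(settings)   # e.g. QoS caps, snapshot policy, ACLs

class AppInstance:
    def __init__(self, name, template):
        self.name = name
        self.template = template         # bound: no private copy of settings

    def setting(self, key):
        return self.template.settings.get(key)

tmpl = AppTemplate(total_bandwidth_max=None)     # no QoS cap yet
instances = [AppInstance(f"app-{i}", tmpl) for i in range(3)]

tmpl.settings["total_bandwidth_max"] = 4096      # throttle the template once...
# ...and every bound instance sees the new cap without being touched.
assert all(i.setting("total_bandwidth_max") == 4096 for i in instances)
```

Unbinding an instance would amount to giving it a private copy of the settings, which, as I'll mention later, is on our roadmap rather than in the product today.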
So we are essentially fitting a round peg into a square hole here, but it actually seems to work pretty well. What we do is leverage volume types. In Cinder, you have the concept of a volume type. Most folks use this for designating QoS levels for different volume types, like gold, silver, and bronze, where each level is a separate volume type carrying different information. That information is passed to the driver that provisions the volume, and you get that particular type of volume. In our case, we don't want the admin to have to put a thousand different keys into the extra specs for a volume type. That would be a huge hassle, because every time you create a new volume type, you'd have to set each of those keys. So instead, we let the admin set a single template key. This is what it looks like when you set that template in the extra specs; this is what the admin has control of. If you don't know much about extra specs: they let you set key-value pairs, and depending on the operator you use, there's a comparison between what the driver advertises and what the volume type requests. If the Cinder scheduler can match the request against what a driver advertises, it picks that driver for provisioning your volume. And if you've ever worked with this and typed anything wrong, you'll probably get no driver selected, your volume will never get created, and the associated error message is a little cryptic. These are just examples of different combinations of volume type settings you can use. And that brings us to the demo, which is going to be live.
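On the driver side, the idea reduces to pulling one key out of the volume type's extra specs. A minimal sketch of that lookup is below; the `DF:template` key name is illustrative, not necessarily the exact key our driver uses.

```python
# Sketch of how a driver might read a template name out of a volume type's
# extra specs. The "DF:template" key is an illustrative assumption; check the
# driver documentation for the real key name.

def template_from_extra_specs(extra_specs, key="DF:template"):
    """Return the admin-set template name, or None for a plain volume."""
    return extra_specs.get(key)

# An admin-defined volume type carrying a template key...
gold = {"volume_backend_name": "datera", "DF:template": "gold_app"}
# ...versus one that provisions an ordinary standalone volume.
plain = {"volume_backend_name": "datera"}

print(template_from_extra_specs(gold))   # -> gold_app
print(template_from_extra_specs(plain))  # -> None
```

One key replaces the thousand individual QoS/snapshot/ACL keys the admin would otherwise have to maintain per volume type.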
So yeah, we'll see how it goes, and whether it even shows up up here. First, we have our OpenStack instance; this is what I'll be running against. We have our Datera back end, and it's running. You can see essentially nothing is running against it, which is good, because they're actually using this box over at the booth and nobody's on it right now. And as a bonus, this is the Datera API browser. I probably won't have time to go into it, but it's there, and it's super handy. So we don't see anything currently running, and I'll go ahead and start this demo script. It takes just a moment to set up. What it's doing is taking a Trusty image and uploading it to about ten Datera volumes. In this case, they're all created with a particular template, which is specified, if we go back here while this is running, by this template key. This is how the admin picks the template; in this case, fio_os_perf. This is the template we're going to instantiate on the Datera box for each of these volumes. We do have the capability to support multiple templates, but for the sake of the demo, it'll just be a single template here. So each of these volumes is instantiated from the template, gets all of the inherited QoS settings, and has the Trusty image uploaded into it. Then we're going to boot from each of these volumes, so each volume will have an instance running, and we should be able to see that in the OpenStack GUI, Horizon. We're still booting the instances, but the volumes are all created; it's just a matter of booting them. We could do this in parallel, but for simplicity's sake it isn't, and it's still going. Once we have all of these up and running, we're going to start changing things about the template. All of these instances are going to get fio run against them.
It's going to start generating traffic, essentially saturating the current link between the initiator and the back end. Generally you don't want to have no QoS on something, but we're going to have no QoS initially, so it's going to behave badly. Almost there; it's setting up fio right now. If we want to see the current application instances on this box, the REST API makes that super easy. We just log in and do a GET request on the fio_os_perf application template. We submit that request, and right here we can see all the different app instances that have inherited from this template. All of these app instances are under the template's control, and if we modify anything about the template, all of them will inherit the change. Our workload should be started by now, and yes, it is: we're doing a little over a gigabyte per second to the back end, so we're saturating at least the current link. What we'll do is go to the templates right here, pick the one we care about, which is right there, and create a QoS policy for it. We're going to enable a performance policy and restrict it pretty severely: each of these instances will now be allowed a max bandwidth of only 4096 kilobytes per second. If we go back to the Datera dashboard, we should see it once it takes effect. Did I save it? I may not have saved it. Templates... OK, I did save it: 4096, set as max total bandwidth. It just took a moment to apply. So everything has been severely restricted, and we're now down to about 50 megabytes per second through that same link. If we go to the template, we can look at one of the application instances under it and see that it has inherited the max total bandwidth.
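The QoS change made in the demo boils down to one small request body pushed at the template. Here's a hedged sketch of building it: the field names and URL layout are assumptions for illustration, not the product's exact schema.

```python
# Sketch of the demo's QoS change: a performance policy capping total
# bandwidth at 4096 KB/s, applied to the template so every bound app
# instance inherits it. Field names and the URL layout are assumptions.

def performance_policy(total_bandwidth_max_kbps):
    """Build the JSON body for a bandwidth-capping performance policy."""
    return {"total_bandwidth_max": total_bandwidth_max_kbps}

def policy_url(base, template):
    # hypothetical path to a template's performance policy resource
    return f"{base}/app_templates/{template}/performance_policy"

policy = performance_policy(4096)
print(policy)                                      # {'total_bandwidth_max': 4096}
print(policy_url("http://mgmt:7717/v2", "fio_os_perf"))
```

One PUT of that body against the template, and every instance drops from line rate to the 4096 KB/s cap, which is exactly the ~1 GB/s to ~50 MB/s aggregate drop you saw on the dashboard.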
We can change anything about this: snapshot policies, ACLs, performance, any of these metrics, and everything instantiated from the template will inherit it. You can also clone templates and make small modifications, so if you have a basic starting point you can build on it easily. All right, back to where I was before. OK, so we've got the demo. As for future steps, and this is pretty important, there are some pretty severe restrictions right now on getting templates to work with Cinder, because Cinder is so volume-centric. A Datera application instance can have many different volumes and many exports under a single application, but at least right now we're treating the application almost like a volume, so there's currently a restriction of a single volume and a single export per application. That can be gotten around: there's some existing functionality for bringing volumes that already live on a back end under Cinder's management, and we'll be exploring that to get multiple volumes supported within an application while still being recognized and managed by Cinder. We'd also like to support customizing template parameters in Horizon, because at least right now, all template parameter modification has to be done through the Datera GUI. The only thing we can actually do through Cinder is instantiate the template. Cinder does know the template exists through the extra specs, and you can control that there, but modifying the template afterwards has to be done through the GUI. We also want to support unbinding volumes from the template: in the current 1.0 version of the product, we don't support instantiating an application instance from a template and then unbinding it from that template so it stands alone.
Then, if you modify the template afterwards, it wouldn't modify that unbound application instance. These are all on the roadmap, but they're not currently available. And then, if we have time, I'd like to end world hunger. Are there any questions? I think I have a minute and a half. Sorry, what was that? So, we're actually a combination right now. We do plan on supporting commodity hardware: within the next three to six months, we'll be running on a much broader spectrum of hardware. But at least right now, if you purchase our product, it's both a software and a hardware SKU, so it comes together. Commodity is in the future, though; it's coming. Anything else? All right, thank you all for coming.