Good afternoon, everyone. My name is Greg Lockmiller. I work for NetApp, in a cloud solutions group, and I've been with NetApp for about eight years in various roles. I wanted to share with you today a little bit about Cinder volume types and some of the ways you can use them to match your workloads with storage capacity and storage performance. How many people are familiar with Cinder volume types? Okay. We'll go through some slides about how we expose some of the technology within clustered Data ONTAP, and then we've got a brief demo we can walk through quickly, showing examples of how we expose the NetApp technology up through to Cinder volumes with our driver.

So let's talk a little bit about what that is: matching the workload with the storage. That's one of the needs I hear about from customers. Like most technical people, when I get a new laptop I want an SSD, and I want to put everything that needs fast access on the SSD; but maybe pictures and other things I'll put on USB spinning media. The concept here is being able to match your workload to your storage. Do you have workloads that require low latency and high performance, or, for example, workloads that are just there and simply need to be serviced? We'll give a little overview of Cinder, block storage, and volume types, how we implement them within the NetApp driver, and some things we can share about how you can use that to your advantage and how you differentiate, which goes back to the first bullet item, matching the workload with the storage. And we've got a brief demo; if we can get through it, that would be great, and if not, I understand. So we'll get into it.
So, matching the workload with the storage. Obviously, different workloads require different SLAs. I come from the database space, where everybody wants the fastest workload possible with the least amount of latency, but it doesn't have to be databases; it can be app servers, anything to do with DevOps, or maybe a CI environment too. It's the ability to differentiate, on the Cinder back end, when your tenants create Cinder volumes: what storage is available to them and what performance characteristics they can select, or what you provide as a cloud administrator. Typically, setting this up is a matter of your storage administrator and your cloud administrator working out what you want to make available to your tenants.

It also provides you greater efficiencies. Again, coming from the database space: maybe I'd have a database that was a terabyte in size, and I would always ask for five terabytes, because I never wanted to run out of space for my customers, the consumers of my database infrastructure. And of that terabyte-sized database, maybe only 500 GB was really active data, so I was always deliberately oversubscribing for my database workloads. When we talk about providing greater efficiencies, there's technology within clustered Data ONTAP, the storage OS, that we expose up through the Cinder driver, the NetApp driver, and that goes along with some of the other features we expose, which we're going to talk about. It allows your tenants, or your cloud administrator, to provide features that align with your architecture standards and your costs, as well as how you can charge back to your tenants.

So what are volume types? I won't spend too much time here; you already know they're criteria used to define a particular service and how it uses storage. For example, going back to the database example, I want high performance.
So maybe I want the Cinder back end to provision my block storage volumes on storage that is high performance: lots of memory, lots of spinning media, or SSDs, things like that. We do this by utilizing key-value pairs: the extra specs feature of Cinder, plus the new extra specs we expose through the NetApp Cinder driver. Again, a volume type is created by your cloud administrator and utilized by your tenants, your end users.

To go back to an example here: there's a technology we call a FlexVol. Just for the taxonomy, so you understand what FlexVol means: it's a storage object within Data ONTAP. So maybe you have a need for deduplication. Maybe you need to store lots of ISO images or OS images; those typically dedupe pretty well. So maybe you want to create a Cinder volume that surfaces up the NetApp dedupe technology and use that with Glance. Now you can store many different templates and libraries for Glance, and you get dedupe capability. It also works for data within applications. Maybe I have Cinder block storage for an application that has many PDFs. How many people go online and look at their cellular wireless bills, and they come up as PDFs? A lot of times those are stored in something like object stores, or in file systems spread across different geographical areas and replicated.

Another opportunity would be high performance: maybe I want to use flash storage or particular disk drives. I want to be able to surface up the use of SSDs or certain types of SAS drives. Maybe I want to put this particular back end, and this volume, on high-performance hardware. As with all storage vendors, we have different levels of hardware, right?
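To make the extra-spec mechanism concrete, here is a sketch of how a cloud administrator could create a type that targets deduplicated storage using the cinder CLI. The type name `glance-store` is invented for this example; `netapp_dedup` is the NetApp driver extra spec discussed in this talk, and the commands assume an already-deployed cloud with credentials sourced, so treat them as configuration fragments rather than something to run as-is:

```shell
# Create a volume type for image/template storage that should land
# on a back end whose FlexVols have deduplication enabled
cinder type-create glance-store
cinder type-key glance-store set netapp_dedup=true

# List the extra specs attached to the defined volume types
cinder extra-specs-list
```

A tenant then just picks the type at creation time; the admin's extra specs do the placement work behind the scenes.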
You've got everything from the top end down to entry level. So maybe you have a cloud environment with entry-level storage hardware that you want to use for DevOps. You can associate your total cost of ownership, and your return on investment in your hardware, with how people use it within your cloud.

And then finally, does anybody have concerns about quality of service, about being able to limit somebody's runaway application? We also surface up the NetApp quality-of-service feature from the storage, so you can use that within your Cinder driver. And then data protection: like any storage vendor, to be honest with you, we have a product for that, SnapMirror, so you can replicate your data across different geographical data centers, within the same data center, or even within the same cluster. So you have the ability to define not only your quality of service, but also to say: I want to create a Cinder block storage volume, and I want it to be part of an environment that's replicated with SnapMirror. We'll show you an example of that: we'll walk through how it's created and show you exactly how it gets placed onto a back end, an NFS back end that's SnapMirrored, and we'll show you on the storage that it's part of a SnapMirror relationship.

So volume types are arbitrary; there's no set rhyme or reason. They're whatever you want them to be. And what we do, and it's kind of hard to see, but this slide will be available post-summit, is give you the ability to use different NetApp extra specs that are specific to our driver. You can use raid type or disk type; I can define quality of service and say I want to use it; I can say I want this to be thin provisioned, compressed, or deduplicated. We surface up that technology through the driver via extra specs; that's how we bring it to bear, and the cloud administrator and the tenants can use it accordingly.

So again, maybe you want to define particular storage tiers. We just use gold, silver, and bronze here; it could be tier one, tier two, tier three, whatever naming convention fits your infrastructure and your ecosystem. In this example, for gold, we want to make sure it's on high-performance storage, with backups on an hourly basis; maybe that data is worth being backed up. Maybe that's something you can offer out, as a private cloud to your lines of business and your tenants, or even as a service provider using this storage on your back end. It can be replicated; I just spoke about SnapMirror and being able to replicate that data. And then obviously highly available, and what we mean by that is being able to take advantage of the clustering technology that is part of Data ONTAP and part of our environments.

So maybe for silver, I dedupe, but I still back it up and I still replicate it. Maybe I have different SLAs, so I can map those SLAs to the back-end storage and also surface up some of the NetApp technology. And maybe I thin provision: think about data that isn't permanent, that's there for a couple of days, maybe three days. You thin provision it, put it on SATA spinning media, for example, or some other low-cost media, and get a higher return on investment. Again, as a technical guy, I like the SSDs in my laptop; I've got SSDs in a device at home, and I put the critical stuff there for the speed I want, right? So maybe at a bronze or tier-three level, let's just put things on there.
That's the stuff I don't care about for high performance or low latency. It all depends on the SLA you're trying to provide to your customers.

Another example, like in gold: maybe you use different volume types for a database. Maybe you have a database you want to put in the cloud, or an enterprise CRM-type application, and you want some very high-end storage. Again, the names are arbitrary; the volume types can be whatever you need them to be, and that's the great flexibility of OpenStack and of being a cloud administrator offering these kinds of things to your customers.

Now, not all of this is NetApp-specific; some of this is available in the base Cinder driver today. So hopefully there's some information here you can take back and use even in an OpenStack deployment on non-NetApp storage. The NetApp-specific storage features are the ones that provide thin provisioning, dedupe, and compression. And then you can define different back ends; I can even point one back end at a high-end storage device and another at a low-end storage device.

So, a real quick example: how do volume types affect volume provisioning? Maybe in Cinder you have a couple of different back ends defined, and obviously the scheduler is going to look at the available capacity and the capabilities; that's the type of filtering the scheduler does.
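That filtering step can be sketched in a few lines of Python. This is not the actual Cinder filter-scheduler code, just a toy model of the idea: each back end reports capabilities, and a back end qualifies only if it satisfies every extra spec on the volume type. The back end names and capability values here are invented for the example:

```python
# Toy model of extra-spec filtering (not the real Cinder scheduler):
# a back end is a candidate only if every extra spec on the volume
# type matches a capability the driver reported for that back end.

def filter_backends(extra_specs, backends):
    """Return the names of back ends whose capabilities satisfy every extra spec."""
    matches = []
    for name, capabilities in backends.items():
        if all(capabilities.get(key) == value for key, value in extra_specs.items()):
            matches.append(name)
    return matches

# Back end A has dedupe off; back end B has FlexVols with dedupe on.
backends = {
    "backend_a": {"netapp_dedup": "false", "netapp_thin_provisioned": "true"},
    "backend_b": {"netapp_dedup": "true", "netapp_thin_provisioned": "true"},
}

# A type asking for dedupe can only be placed on back end B.
print(filter_backends({"netapp_dedup": "true"}, backends))  # ['backend_b']
```

The real scheduler also weighs free capacity and other filters, but this is the core of how an extra spec like dedupe steers a volume to the right back end.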
Our driver reports back some of the information that makes that decision possible for Cinder. Maybe we have an extra spec that says I need something to be placed on deduplicated storage. Our driver picks that up, identifies the appropriate back end and the storage entities associated with those back ends, and provides that capability. So it would provision it: you want dedupe, and it's going to provision it on back end B, because back end B has FlexVols and storage entities with deduplication turned on, while maybe Cinder back end A doesn't have deduplication turned on at all. So that's some of the NetApp driver feature set.

So here are some of the extra specs. This is an eye chart and I don't want to read through it, but just think about the extra-specs capability. I talked about raid type and disk type; maybe that's an interesting one, where you want your provisioning to be on SSDs, or SATA drives, or SAS drives. Thin provisioning: being able to limit the candidate volume list to only volumes that have thin provisioning turned on. And then dedupe, which we've talked about quite a bit. On the next screen, some of the other extra specs we have available: SnapMirror, compression, thick provisioning, QoS policy group. And that last one is pretty important, because you can define a back end with multiple policy groups, and with us that works at the file level. If you create a Cinder volume, it's a file on the storage, so you can define quality-of-service metrics on that individual Cinder block storage device.

So another use case: maybe gold is disk type SSD and thick provisioned; in other words, I want to make sure the space guarantee is complete.
I don't want to thin provision anything there. Silver is quality of service; we've got a typo on the slide, it should read QoS policy group, something that's defined on the storage back end. You associate that with your extra specs, and then the scheduler and the NetApp driver will say: okay, I need to put it there, because that back end has the policy group associated with it. And then finally bronze: maybe it's SATA drives on entry-level hardware, and I just define a particular back end.

Now, one more thing. We talk about different semantics here, but these are just different volume types: maybe temporary block storage, something that's not there very long. Think about DevOps, things in and out, done within a day. Thin provision it; don't take up any space. You can oversubscribe your storage infrastructure, be in a pretty safe place, and get more return on your investment. Again, these particular volume types are arbitrary, whatever you want them to be. And a couple of things in this regard: the netapp entries are extra specs that are part of our driver, but some of this is also just part of Cinder, right here: storage protocol and volume back end name. We support iSCSI and NFS for our back ends and for our driver.

So I want to go into a quick demo. Hopefully it won't glaze your eyes over; it's a recorded demo, and we'll spin through it quickly. I'll show you a little bit about this particular environment, go through some of the volume-type creation so you can see how it's done on the command line, provision a Cinder object, and then ask: did it really go where we wanted it to go? Are we just showing it off, or is it really in action?
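As context for the demo, here is roughly what a multi-back-end configuration looks like. This is an illustrative fragment, not the demo's actual file: the section names, backend names, and share paths are invented, credentials options are omitted, and only the option names (`enabled_backends`, `volume_backend_name`, `netapp_storage_family`, `netapp_storage_protocol`, `nfs_shares_config`) are the ones the NetApp driver uses:

```ini
# cinder.conf (illustrative fragment): two NetApp NFS back ends
# at different service levels
[DEFAULT]
enabled_backends = cdot-nfs-gold,cdot-nfs-bronze

[cdot-nfs-gold]
volume_backend_name = cdot-nfs-gold
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
nfs_shares_config = /etc/cinder/nfs_shares_gold

[cdot-nfs-bronze]
volume_backend_name = cdot-nfs-bronze
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
nfs_shares_config = /etc/cinder/nfs_shares_bronze
```

Each section becomes a candidate back end for the scheduler, and a volume type's `volume_backend_name` or NetApp extra specs determine which section a volume lands in.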
One of the things we'll show in particular is mirroring, and we'll show you where the mirroring goes. Bear with me while I get this started.

So, just to give you an example of what back ends I have defined: I've got NFS bronze, gold, silver, and iSCSI. The particular one we're working with a lot is called C-dot NFS 1, and these are the NFS-mounted file systems used by the back ends. I'll show you a little here about the file systems that are mounted. This is part of the Cinder driver, the cinder.conf definition where you link to them; these are the file systems.

This goes pretty quick, and I apologize, it's a lack of time, but here we're creating different volume types. In this example I just created gold, silver, and bronze. Now I'll associate back ends with them so you can see how that's done. All we're doing here is saying: okay, I've got types gold, silver, and bronze, and I've associated very specific pre-configured Cinder back ends with them, and I'll show you the extra specs in this particular example. So this is the Cinder command line; if I were a tenant using the GUI in Horizon, I would see these as available volume types in the drop-down. We'll move through the demo as quickly as we can.

Another thing to set here: I mentioned 7-Mode. We have two different types of storage operating system, 7-Mode and clustered Data ONTAP, C-dot, and we're setting 7-Mode here. Now I'm exposing some of the NetApp technology: I've created a type called comp, that's for compression, and now we set netapp_compression to true. So if I created a volume of type comp, it would look for a FlexVol, a storage back end, that had compression enabled. Another one here is for mirroring: I set mirror up.
I define it and give it an extra spec of netapp_mirrored equal to true, so now I have the ability to create a volume type of mirror. And then I do another one for thin: thin provisioning is what that's looking for, and I'm setting the NetApp value for thin provisioning to true. So I've created all of these different extra specs utilizing the NetApp features.

One thing about the NetApp quality-of-service feature: I can combine front-end Cinder QoS along with back-end NetApp QoS. In this particular example, there's a QoS policy group on the storage called openstack-dev, and I'm going to assign that to a particular back end.

Now we'll list all the extra specs, and with what little time we've got left, go to the mirror example; I'll fast-forward to it. I'm pretty sure we're coming up on it here. Again, another listing of the extra specs. I want to stop here real quick and give you an example: this is what it would look like if you were a cloud admin, or even a tenant, running that command, and it'll show you some of the extra specs over here. Again, volume back end name is just part of Cinder, and then netapp_dedup, for example, netapp_thin_provisioned, netapp_mirrored, netapp_compression; those are extra specs exposed by our driver.

So I'm just creating some quick volumes here: a volume type of gold, there's thin, and again one of type 7-Mode, because I wanted it to go to a particular type of storage controller and OS. Now the QoS one is being created, and all of these are created based on the volume types we just created. All of this could also be done from the GUI; I wanted to go through the command line because, again, being a low-level guy, I like the command line over a GUI.

So we're going to validate the QoS here, just to show you what happened. We go to the controller, where we've created a QoS-enabled Cinder object; I go into a particular mode within the storage OS, and I show you that the workload for that file is monitored by this QoS policy group. Let me stop this one real quick. Whoops, let me go back, sorry. Come on, stop. Oh well, I can't get it to sync up. What I was going to show you with the NetApp C-dot feature is that it shows that particular Cinder block storage volume being monitored by a QoS policy group.

So here we're creating a mirror volume, and we'll get to the back end in the next couple of steps. We're creating a Cinder volume, and we want it to be replicated; we want that particular Cinder block storage to be part of a replicated environment. We're going to find it; all of this is NFS, so we'll find it in the file system. We'll locate this particular object in the file system, and then we'll show you which NFS server is serving it up. And this is the last piece of it; we've just got about ten seconds. This is the NFS server I defined, and it's got mirroring in the name so we can find it. Finally, we confirm that that particular object is part of a SnapMirror relationship. All I did was get, effectively, a status of a SnapMirror-replicated environment, and we see that this particular file system and storage entity is part of a SnapMirror-replicated environment.
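The mirror portion of the demo boils down to a handful of commands. This is a sketch against an assumed environment: the type and volume names are invented, `netapp_mirrored` is the driver extra spec from this talk, and `snapmirror show` is the clustered Data ONTAP command for checking replication status, run on the storage cluster rather than the cloud:

```shell
# Define a "mirror" volume type that must land on a SnapMirror-
# protected back end, then create a 1 GB volume of that type
cinder type-create mirror
cinder type-key mirror set netapp_mirrored=true
cinder create --volume-type mirror --display-name mirrored-vol 1

# On the storage cluster CLI, confirm the FlexVol backing the NFS
# share is in a SnapMirror relationship
snapmirror show
```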
And so all I did was use the NetApp driver: I defined an extra spec to leverage the NetApp capabilities and surfaced that up, so now you can create Cinder block storage that is replicated to another site for DR. We'll have some blogs and other collateral around this exact use case, including how you can restore it, too. We have the ability to do what we call a single-file restore, so I can restore a particular object within that FlexVol, and we'll show you how to do that. So be on the lookout for some blogs and other ideas. Thanks for your time; my time's up, and I appreciate you spending it with me. Thank you.