Everybody, welcome. This is Cinder 102: Pouring a Solid Foundation for Block Storage Services. My name is Jeff Applewhite. I'm a technical marketing engineer with NetApp, and I'm so glad y'all could be here today. Contrary to what's on the board outside, this is not that other session. Good to see everybody.

I was astounded when I saw how many hands went up when they asked how many people are new at the summit; hundreds of hands went up near me. When I saw that, I thought it might be good to capture the new people who are here, so even though this is Cinder 102, I'm going to do a brief 101 recap just for those of you who might be completely new to Cinder. I'm not going to spend a ton of time on that, and I don't want to scare anybody away by covering 101, but after a brief overview we're going to move into the other things on the agenda, and we still have plenty of time for all of that.

So, here we go. Let me try that again. I'm getting a lag here; sorry, my slides are lagging and it's still not happening for me.
I could turn my laptop around, but I don't think anybody could see it. Can I get some technical help? That's a popular slide, but it's not the one I'm looking for. Let's see, is it the HDMI? That's worth trying. There we go, great. HDMI.

All right, so the agenda today: as I said, I'm going to do a brief Cinder orientation and talk about the basics of block storage, what it is, and also a little bit about what it isn't, because honestly, when I came into OpenStack it was a little confusing exactly what Cinder is, what's included in Cinder and what's not. So I'll talk a little about that, including ephemeral storage and other terms that you'll hear around Cinder. I'll talk about backup and restore of volumes, and some new functionality in that respect that's available now. I'll also cover consistency groups, replicated volumes, some new administrative operations that are available within Cinder, and some other new features as well. Finally, if you pay close attention to some of my examples, we'll reference back to them when I get to the troubleshooting part. There will be an error that we'll talk about later, although it's actually fixed in the early slides, because I figured I would just use whatever I hit while doing my demo to demonstrate how to troubleshoot Cinder. So keep an eye out for that.

So, basic Cinder 101: block storage basics. Cinder is the service within OpenStack that is the equivalent of Amazon's Elastic Block Store. Basically, it's the ability to create, extend, and delete block devices that get provided to virtual machines, or instances, in an OpenStack cloud. Very basic. It's appropriate for any scenario where you need block storage, whether it's for a database or as a root disk for a VM, and I'll talk a little bit about when you'd use Cinder for a root disk and when you'd use what's called ephemeral storage for that instead. Snapshot functionality is also provided through Cinder: a point-in-time capture of the state of the disk. Snapshots can be used to do restores as well as to create new volumes. And there's interaction between Cinder and other services, which to me is where things really get interesting. Creating a block device in and of itself is not so snazzy, but when you start interacting with Glance and Nova and all the other services, that's where you start to get the real value that Cinder enables.

So, just an overview. There are different processes that will be running in a Cinder deployment, and I'll get into a brief description of how the blocks of the configuration relate to a process, but basically, for every back end you configure, you'll have a cinder-volume process listening on the back side. At the front end you have users making RESTful API requests to the Cinder service. They're not actually generating REST through something like wget; they're either using the cinder command-line client, or going through Horizon, or some automated tool is accessing those APIs and making RESTful calls to the cinder-api service. The API service's job is basically to put those incoming requests onto the message bus, AMQP. There's interaction with SQL at certain points to determine what services are available, and different metadata gets stored in the SQL database, but in general the requests go onto the message bus. If it's a backup job, cinder-backup takes it off, but in most cases the cinder-scheduler is going to come by, pull those requests off, and the filter scheduler is going to decide where this block device needs to get created.

When we get to troubleshooting, it's good to keep in mind that even though your cinder-volume back ends are actually providing the block storage service, it's the scheduler where you're likely to see things go wrong, or see things happen that are unexpected; you might have expected one thing to happen, but something else happened. So the cinder-scheduler logs are typically where you're going to look, and we'll talk about that.

This is where I talk about what Cinder is not. Cinder is the block service, whereas Manila, which is another project in incubation with OpenStack, is the file service. Blocks are provisioned through a hypervisor up to the VM and provided as a block device; a block device just shows up in the virtual machine, maybe as /dev/vdb. I can format that device, put a file system on it, mount it up, provision services through it, snapshot it, do all those kinds of things. The point of Manila is to enable file sharing, whether it's NFS or CIFS or GlusterFS or GPFS, various file protocols, within an OpenStack tenant namespace or network namespace. Manila is emerging; it's not actually a formal project yet, but it's very close, and within the Liberty time frame it will be available as well. I just wanted to draw that distinction: whether you need block services or file services, there are two different ways you can go within OpenStack.

Another term you'll hear that is not really Cinder is "ephemeral disk." What does an ephemeral disk mean?
So if you think of a hypervisor, it's really just a bare-metal Linux machine running KVM and QEMU, or it could be ESX or other hypervisors on the market. Essentially, an ephemeral disk is a disk that, as the name would indicate, is going to be around for a while, but when that VM has performed its function and is shut down and terminated, the disk goes away. It may just be sitting on local storage on the server, or it could be mounted on shared storage in cases where you want to do live migration; typically with KVM, something like /var/lib/nova/instances will be mounted and shared, and then you can migrate a VM from one hypervisor to the other. In that case, even though it's technically ephemeral, it's a shared disk.

Where would you use it? For instance, on our OpenStack team at NetApp, we use ephemeral disks. We've had tens of thousands of ephemeral VMs come up and go down in our continuous integration environment, where we're testing our OpenStack code. A VM comes up, it runs a job, those jobs get logged and put into Logstash or something similar, and then that VM has performed its service and goes away. The disk is deleted; there's no need to keep it around. So that's where ephemeral storage comes into play.

I've got a basic demo here; this is still sort of the Cinder 101 part. I wanted to play this because it's the kind of thing I do any time we bring new people onto the team, and light bulbs just go off when you see something happening. You can see a volume name getting entered; if I had multiple types of storage on the back end, I could select a type, and then create a volume. Very simple stuff, but to me it's endlessly fascinating. Even the basic stuff, when you really break it down, is actually pretty complex.

This is a case where we're booting up a Nova instance. I select the flavor, which is basically the memory and the disk, I select an image, and I launch it. That's ephemeral: the instance is booting from a Glance image. It's coming up, memory is spawning, the IP address becomes available, and then, this video is rolling along pretty fast, what I'm doing now is attaching the DB volume that I created to the instance. That disk is now going to magically appear within the VM, probably as /dev/vdb or /dev/vdc or something of that sort. Then you can provision it, create a file system on it, and use it within your VM.

Another mode is that you can create a Cinder volume from a base image in Glance. Basically what happens is Cinder creates a blank volume, and the Glance image gets copied into the Cinder volume. You can even make that volume bootable, so you can boot from a Cinder volume that is not ephemeral; it persists on whatever your block storage is on the back end, and you can use it without fear that it will disappear.

Now, that was 101. In 102 we're starting to get into the command line, what I used to call the dark place, although this is a white terminal. You're actually doing things on the command line here, and whether you're doing cinder commands or operating through Horizon, you're creating those RESTful calls to the cinder-api service, which puts them in the queue; the scheduler takes them off and handles each request however is appropriate, whether it's a snapshot create or delete, et cetera.

In this case I'm just showing that you can extend a volume. For the from-image volume, I grew the size from 25 to 30; it shows that it's extending, and then it's available. So that block device grew from 25 to 30; you'd then obviously have to extend your file system, or do whatever operation you need, to utilize the new space. And that's about it.

So now that you've passed 101, your kung fu is strong. You're ready for the real world. All right, let's get into this a little more, some heavier stuff. That was a basic recap of what you can do with Cinder, and you can see that even the base functionality is very powerful if you think about OpenStack as a way to wrap an API around multiple hardware vendors. Whether it's NetApp or EMC or SolidFire or whatever you're running, even LVM iSCSI on the back end, there are lots of choices, and they all plug into this common API. You can write code, and that code will work anywhere, no matter what the back end is. That's a very powerful basic concept.

So, backup and restore. Basically, it's what you would think: a command-line interface to create backups of a volume. One thing that to me was not very intuitive at first is that backups in the default case go to Swift. They go to your object store: the block device gets copied, broken into chunks, and stuffed into Swift, and whatever the replication policy is in Swift takes care of moving those blocks around to create the data protection that you need. On the command-line client you would do a backup create and give it the volume ID, the GUID of the volume, or its name. The reverse process is a backup restore from that; it pulls from Swift and restores the Cinder volume to the state it was in prior to any data changes. One thing that's new just recently is an option to back up to NFS as well as Swift. You make just a two-line change to your cinder.conf to enable the NFS backup driver, and you can back up to an NFS device.
So that's a good option for people who might want to use NFS for replicating data around; there are a lot of different use cases for it, so Swift is not the only target.

Here I'm just going to illustrate a simple listing of a volume and creating a backup from it, if I could type, and then I wanted to show you how things actually appear once you have a live backup, down at the Swift level where things actually get stored. I do a swift list and see that there's a volume_backups container. Then I do a list on volume_backups and look: there are all the files that got broken up from that single one-gigabyte Cinder volume. If I then remove that backup, delete it, and do a list again, it's confirmed the backup is gone, and a swift list on volume_backups shows there are no objects there. So that's a basic illustration of how that works.

Within the Kilo time frame there's also support, from a blueprint that was partially implemented, for incremental backups. You can do a backup create with the incremental flag, give it the full backup container that already exists, and it will do an incremental backup from that point in time. There were some other advanced options in that blueprint that didn't make it in. And as I said, if you want to do NFS backups, you can set your backup driver to cinder.backup.drivers.nfs, set your backup share to any NFS share that you want to back your files up to, restart your Cinder processes, and that will be your target from that point on.

Okay, another valuable thing is support for encrypted volumes. That was a tricky thing: the Juno release brought the capability to encrypt volumes, but there was no way to back them up. Not very useful.
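To round out the backup section, the two-line cinder.conf change for the NFS backup target mentioned above looks roughly like this; the server address and export path are hypothetical:

```ini
# cinder.conf -- point the backup service at NFS instead of Swift (Kilo-era option names)
[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
backup_share = 192.168.10.20:/backup_share
```

Restart the cinder-backup service after making the change, as noted above.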
So now, in Kilo, there is support for backup and restore of encrypted volumes. If you know anything about crypto, you know you need a key to do encryption and decryption. Basically, the encryption key UUID gets copied through the key manager, which allows the source volume to be deleted while the encryption key UUID remains valid. So you could create the backup, which would be encrypted, delete your source volume, and the key that corresponds to the backup survives, so you can restore later if you need to. One important thing to keep in mind is that restores must be made to the same volume type as the source. If you have an old backup, you deleted the volume type, and you need to restore, you're out of luck; the backup restoration will fail. So that's something to keep in mind if you're an operator.

All right, consistency groups. This is a relatively new feature, and there's very sparse support for it; if you go look at the Cinder support matrix, there are not very many drivers that support consistency groups. Basically, it's a set of Cinder volumes that are grouped together so you can use them as a logical set, mainly for the purpose of creating snapshots. Where you have an application with dependencies, say a database with log volumes and various pieces spread around, you might want to create a consistency group for those volumes so you can treat them as a set for archival, restores, whatever. It can support more than one volume type: if part of your consistency group is on a flash-based volume type and part is on spinning media, you can still create a consistency group around that. You can see the reference here if you're interested in the feature.

Example usage: you do a consistency group create, and you give it the name of the consistency group and the volume type. You always have to specify the volume type. Then you create a volume and add it to the consistency group at creation time, so you have to have a consistency group ID to do that. There are some examples here; I'm not going to deep dive into them. As I said, it is kind of sparsely supported, but these things are moving; there are active discussions on a lot of this in the Cinder community, and if anybody here from the core team has comments or thoughts when we get to the Q&A session, feel free to chime in.

Replicated volumes. This is basically a key storage feature for high-availability and disaster-recovery scenarios for applications in OpenStack. There's an existing v1 implementation; it's sort of a first take, a "let's get something done and out there" attempt to make some progress. Basically, the driver establishes the replication relationship for you: it creates the primary-to-secondary relationship, and then it's able to promote the secondary in the case of a switchover, make it the primary, and then re-enable replication in the reverse direction. It's been a tricky process, though, because in Cinder there are a lot of different hardware vendors with a lot of different technologies and different ways to implement things on the back end. Finding a common approach that really makes everybody happy is a challenge.
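Circling back to the consistency-group example for a moment, the workflow can be sketched as follows. The group name, type name, and placeholder IDs are illustrative, and this is the Kilo-era client syntax:

```shell
# Create a consistency group tied to a volume type
cinder consisgroup-create --name dbgroup cdot

# Create a volume inside the group (the group ID comes from consisgroup-list)
cinder create --consisgroup-id <group-id> --name db-logs 10

# Snapshot all volumes in the group as a set
cinder cgsnapshot-create <group-id>
```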
People have done different things with this. At NetApp, we've done things with our storage service catalog, where replicated volumes pre-exist and you simply filter on them by having an extra spec that says "I want a mirrored volume." Others want to create the relationship anew, as something happening in real time. So there are different ways to solve it. Basically, if the volume type has the capability replication=true, Cinder can perform a variety of operations on those volumes: create, update, extend, delete, and, as I said, setting up the relationship, promoting the secondary, and falling back. The actions depend on the operation. Also, v2, the next version of this, is under active discussion now in the Cinder dev team, so if you have thoughts or needs around specific use cases, it would be good to bring them up now in the developer community.

I want to shift a little and talk about resource pool management and describe how that works. In the Juno release of OpenStack, Cinder introduced the concept of storage pools. Prior to that, every back end was basically one monolithic group of storage without any distinguishing characteristics within it. Now, with storage pools, you can have pools containing different volumes with different capabilities. This example is based on NetApp capabilities, but pools can be very vendor-specific: whatever the vendor can enable through those extra specs can be treated as different pools within the back end. Basically, the way it works is that the driver comes up and tells the scheduler: "Hey, I have three storage pools. They have capacities of x, y, and z, and these various attributes are specific to these pools." And the Cinder scheduler says, "Okay, I've got that; I'll make provisioning decisions based on that information." The driver informs the scheduler of what it's capable of, and the scheduler's job is to do the filtering and make decisions based on that.

If you think about it, a storage array, as I said, can be subdivided into lots of pools. In our context, we might have a volume with mirroring enabled, a volume with QoS enabled, a compressed or deduplicated volume, or a volume based on flash media or spinning disks, SAS, what have you. Those can be put in different pools within the back end, and we can do things with extra specs to allow the Cinder scheduler to make intelligent decisions about them. I'll get into some pretty cool capabilities that have just come out recently in that regard. In this example, we're representing different media on the back end of the storage device.

So, volume migration. When I say "administrative operation," I mean you obviously need to be an admin user. This is something available to the cloud operator; it would not be available to a standard tenant in a public or private cloud. You'd have to have special permissions to do this. Basically, it's a transparent move of data, as you would imagine, from the current back end to another target back end. I'm going to show you an example in a minute; it's actually a pool-to-pool migration, but you can also migrate between entire drivers. I could migrate from, say, an NFS store to an iSCSI store, or what have you; you can migrate between drivers as well as pools.

The only problem is there's one gotcha: the volume cannot have existing snapshots. You can do a workaround where you create a volume from a snapshot and then migrate that volume, but that's one of the limitations right now. Also, migration of volumes attached to Nova instances is only supported where the hypervisor is capable of live migration. There's a brief example of the command line to do it here. You're specifying basically the host name, and you would get this information from a cinder service-list command; it shows you all of your drivers. I'll get to questions in just a bit; sorry. I do an example in a bit.

So what I'm doing here is picking a Cinder volume; I show it and grep for just the host field. Basically, where does it reside? I'm trying to find out where this volume is, and I'm going to migrate it from its current location to a different location. In this case I've got two back-end locations, vol_cinder and vol_cinder2, and I'm telling the driver to migrate from vol_cinder to vol_cinder2. It does the block move on the back end transparently, and then I can see how it's progressing. Actually, I may have waited for it to complete here. Yes, you can see that instead of where it began, on vol_cinder, it's now sitting on vol_cinder2.
So it's a nice feature for the administrator if you need to do migrations due to load or things of that sort.

All right, moving on to cinder manage. I had a discussion with the guys on our team about this; to me, it's import/export, but "manage" also makes sense; it's kind of a semantics thing. It's a way to take pre-existing LUNs or files that you want to bring into Cinder, and bring them in through the manage command. It basically takes two options: you can do it either from the source name or from the source ID, and it's very vendor-specific. If you give it the source ID, it means a pretty generic thing, and it's up to your driver to figure it out. The source name in this case is obviously a file path, or a LUN path like /vol/vol1/lun1, and you give it the pool.

I'll also talk a little about multi-back-end, because in order to have these kinds of features you need what we call multi-back-end configured. In your Cinder configuration file you're going to have the enabled_backends parameter, and those entries refer to these config blocks here, where there's the standard LVM iSCSI driver, or in this case the cDOT block, which is really just a reference to that identifier there. Your drivers all have their own particular configuration parameters, depending on the vendor and what's required to make that particular driver work.

And here I'm becoming the admin user again, rather than doing --os-username on every command, which is tedious on the command line. What I'm doing is creating a Cinder volume type named cdot. I'm also going to create a key/value pair on the cdot type, setting volume_backend_name to cdotNFS, and then I'll show you that with the extra-specs list. What this shows is that the LVM type was pre-created, it already existed, and then I created a type of cdot with volume_backend_name set to cdotNFS. We'll use that in just a bit.

Another thing that's fairly new: there have been some optimizations of the scheduler. The scheduler now supports oversubscription for thin provisioning. If you think about it, in a thin-provisioning case the provisioned-capacity parameter might not actually mean a whole lot to you. So what you have now is a way to overprovision through the max_over_subscription_ratio. It defaults to 1, which is a safe setting, but if you want to bump that up and oversubscribe to 1.5 or 2, or whatever you decide as the cloud administrator, that option is now there.

Another very cool feature is that you can do pretty advanced filtering with filter functions now. I'll get into an example in a bit, when I show what the driver actually reports to the Cinder scheduler, but you can basically take the different parameters that are reported to the scheduler and do advanced math or regular expressions on them to say things like: if the back end's total volume count is less than 1000, and the volume size is less than five, then deploy it here. So you can get very fine-grained with the way you do that. There's a goodness function, and there's this filter function.
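A cinder.conf sketch of the filter and goodness functions just described; the back-end section name and the particular thresholds are only examples, not from the demo:

```ini
# cinder.conf -- per-back-end scheduler hints (driver filter/goodness feature)
[cdotNFS]
# Only consider this back end for small volumes while it isn't overloaded
filter_function = "volume.size < 5 and capabilities.total_volumes < 1000"
# Score back ends by percentage of free space (0-100)
goodness_function = "capabilities.free_capacity_gb / capabilities.total_capacity_gb * 100"
```

For these to take effect, the DriverFilter and GoodnessWeigher have to be enabled in the scheduler's filter and weigher lists.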
There's more here at the bottom if you want to dig into that feature, but this can be a very powerful way to do some intelligent provisioning within Cinder. The scheduler does a good job of figuring out basically which back end has the most space and putting the volume there, but in some cases that's not enough, so you've got a little more power through this new feature. And as I said, free capacity in a thin-provisioning scenario doesn't really mean a whole lot.

Okay, another big thing for the Cinder project this time around was the ability to do rolling upgrades. That's been an ongoing pain point, and not just in the Cinder project; it has been a painful part of running OpenStack in general. With this release, a lot of the projects are on board with rolling upgrades, basically masking the complexity of schema changes in databases and things of that sort; it's all handled much more dynamically. This particular change came from code in the Nova project that was modified to allow services to be independent of schema upgrades from this point forward. So if you're one of those operators, I'm sure you know about this, and you're very happy to have it in your tool set.

Okay, another new thing within Cinder is what's referred to as private volume types. There's basically an is_public flag; it's very similar to the Nova construct of is_public. You can set a type's is_public to false, so only the administrator can see it. If that's all you do, set it to false, then you as the administrator can see the type and nobody else can. That's good for testing scenarios where you want to validate that your config is good and that provisioning is working well. But if you add a tenant to that type, then only that tenant has access to it. You might use that where you have, say, a high-value analytics project and you need to grant a certain tenant or group of users access to that particular storage; you could do it through the private volume types construct.

Okay, so if your kung fu is weak, you need to do some troubleshooting. First thing: if you're running a vendor-supported distribution of OpenStack, most of them will disable the verbose and debug options in the logging. So the first step is to turn that stuff on. You need verbose, you need debug, because if there are trace errors or debug logs that you need access to, they will be completely hidden from you until you do this and restart your Cinder services. That's step one. You also need to understand which log is going to have your error. As I said, the scheduler is the one making the decisions; the scheduler is aware of all the different Cinder back ends, which might be distributed across multiple hosts or different arrays, and it knows where the capacity is and makes decisions based on that. You also need to make sure your services are up as a first step; that's obvious troubleshooting. And you need to figure out where your Cinder volume resides, which you can do with the admin show command.

Okay, let's say you have two volume types, one cdot, one LVM, and I created an extra spec of volume_backend_name=cdotNFS. I try to do a create, and it all looks good, right? But then you get the dreaded "No valid host." This is my personal pet peeve. It's a very common message that you see in the scheduler log; you'll get it in Nova, you'll get it in Cinder. It is exactly what it says, but unfortunately it doesn't tell you what the next step is, so you've got to dig a little further to figure out what "No valid host" means.

In this case, let me flip forward here. I'm looking at what my back-end driver is reporting: volume_backend_name is cmodeNFS. Does it match? No. The problem was that I had cdotNFS in the volume type I created earlier. If you have a mismatch between what the driver is reporting, which comes from your Cinder configuration block here, and your extra specs, things break. You can't just create extra specs and hope they're going to work; if they don't match what the driver is actually reporting, you'll have a mismatch, and things fall apart very quickly. The fix would be either to change your configuration so the volume_backend_name is cdotNFS, or to update your extra spec to volume_backend_name=cmodeNFS. The reason I wanted to bring this out is that it applies broadly; it's not just volume back-end names. As I said, if you have a mismatch between what the Cinder driver is reporting and what the scheduler knows about through extra specs, it's going to be a problem.

That's all I actually have. I think we have a little time for questions if anybody wants to dig in, and we have a lot of documentation here if you're interested. If anybody has questions, please come to the microphone.

[Audience] Can we go back to slide 19?

Yes, and this will be available online; they're recording this as well, so it'll be on YouTube. Okay, slide 19.

[Audience] We want to review the migrate example.

So the demo that I showed you, the migrate, was the correct syntax. I'm not sure what you're wanting to look at.

[Audience] I think in the very last line you have to add a pound sign plus the pool name, otherwise it won't work.

Oh, you're telling me to add the pool. Yeah, I pulled this from the docs online, and you're correct; the doc definitely needs to be updated, otherwise it will cause a lot of confusion. That was from the docs.
[Audience] That's just one point.

Yeah, I think my example had the pool, but I know; that's why you're pointing it out.

[Audience] And the other thing: I think this command only works for migrating within the same back end. If we want to migrate, for example, from LVM to NetApp, we have to use the retype command instead. So there's a bit of confusion caused by volume migration.

Hmm. Okay, thank you.

[Audience] Those extra specs that you set on a Cinder volume type: let's say you set something wrong, since it's all text, and you want to change it. Question number one, can you change it on the fly?

Yes. Let's see if I can get to that slide real quick. Basically, when I do the type-key set command, it's essentially an update. This command here, you can run it a hundred times with a hundred different values, and it will just update every time. So that parameter just gets updated.

[Audience] That makes sense. Do you have to restart Cinder services?

No, that's dynamic.

[Audience] Thank you. Okay, I have a question about the replicated volumes. Basically, I would like to understand more about how you would make it work with two different data centers. For example, in one zone you have one Cinder back end and in the other zone you have another Cinder back end, and then it happens automatically, or would you need one back end across the two data centers?

That gets to be very vendor-specific. NetApp has a way we've solved that, but for replicated volumes in general I would advise you to go look at the Cinder support matrix at the drivers that actually support replicated volumes. We can filter on volumes that are replicated, so you can set those relationships up and data automatically gets transferred to the remote volume; we can filter on that through an extra spec. There are different approaches to trying to enable this kind of functionality, so go look at the matrix for the Cinder drivers.

[Audience] And can Cinder do placement of volumes based on IO?

Based on IO? I believe that is one of the features that gets reported up through the driver to the scheduler. Is anybody from Cinder core here who knows for sure? Nobody from core here, I think, so we'd have to take that offline. All right, thanks everybody.

[Audience] Sorry, there's a question about the private volumes. Is it possible to assign a volume type to multiple projects, or is it a single-project assignment?

Yes, I believe you can assign multiple tenants to a private volume type. Thanks.