Hi, sorry. Wow, this is a good crowd; I'm glad to see this. This is the Cinder project update. Sorry for the late start: my laptop hung while I was trying to prepare during the keynote, but we're working now, so we'll move on.

Just for anybody who's new, how many people are familiar with Cinder? Hey everybody, it still works the same way: it's your traditional client with an API, scheduler, and volume services, so I'll skip over this and get to the good stuff.

Project background, for anybody who's not familiar with this already: it started back in the Folsom release when it was pulled out of Nova, and it had 158 contributors in Rocky, so we continue to be a good, healthy project. We had good representation at the PTG in Denver, as you can see from the picture here, the usual suspects. And in the user surveys a significant number of users continue to report that they're running Cinder, both in production and in test.

What we're going to do today is the usual update: I'll go through what we've added in Rocky, share some of the news there, then talk about what we're planning to do in Stein, and hopefully have a few minutes for questions at the end.

So, Rocky. We had some new drivers come in, as we always do. For those of you who've been with Cinder since the early days, there used to be a flood; now it's down to a more manageable trickle, which is a good thing. We had the NexentaEdge iSCSI driver, the Veritas Access iSCSI driver, and the Inspur InStorage Fibre Channel driver. And for those of you that are excited about NVMe,
there was a target added for that for LVM, so we're starting to see more NVMe support work its way in. Okay, come on... there we go.

We've been working through the process of making sure that third-party CI requirements are being met by our drivers, and ones that fail to report on at least half the patches over a three-month period are being deprecated. With that happening, we had to mark these four drivers unsupported. For those of you that don't have experience with this, it doesn't mean the driver is removed, but operators have to agree in the config file that they are aware they're running an unsupported driver. So just for awareness, these are some of the ones that have been removed or marked unsupported during Rocky.

Multi-attach, multi-attach, multi-attach. I talk about it every time I'm up here. I want to give an update with the usual reminder: we have multi-attach, but that doesn't mean you can use it on any backend or any file system. You can hurt yourself if you're not using a file system that supports being attached to multiple operating systems at once. With that said, it's there, and nine new drivers added support in Rocky; they're all in the release notes, so if you're using one of the backends that supports multi-attach, you can use it. We did have to disable it for the LVM driver if you're using LIO and iSCSI; that was a late discovery in the release, and hopefully we'll have a patch up soon to fix it.

Scheduler improvements: this was a cool feature we added in Rocky, the ability to control where creations, deletions, et cetera are scheduled, depending on the backend and the operation. So for instance, let's say I've got an old LVM storage backend that I want to replace with a shiny new storage backend, and I don't want my users to still create volumes on that old backend.
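To be concrete about the unsupported-driver opt-in I mentioned a minute ago: the acknowledgement goes in the backend section of cinder.conf. A rough sketch (the section name and driver here are just examples; check your own backend's settings):

```
[lvm-old]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Required acknowledgement once a driver has been marked unsupported;
# without this the backend will refuse to start.
enable_unsupported_driver = true
```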
I can say I don't want any volume creations to be scheduled on that particular backend, without disabling my users from doing snapshots or other things on it, and I can start slowly migrating data to the new storage box. These operations let administrators better control which functions can be scheduled to which backends. I thought that was a cool piece of new functionality.

We've had requests over the years for capacity-based QoS, and that went in in Rocky. For those of you who aren't familiar with what this means: you can specify so many IOPS per gigabyte, so if you create a two-gigabyte volume, you get two times that number of IOPS, and so on. It's more of a large-cloud feature, for when you want to control QoS on a per-volume basis in larger environments with a wide variety of volume sizes.

Replication support: that's another one we've been talking about for how many releases now? This is a good example of the phase Cinder is going through: we've gone from adding a whole bunch of big shiny new functions to really making sure the functionality we have works. We've had replication out there for a little while, and people said: great, I can set it up, but I've had a failure; how do I move to using the new storage device now that the failure has happened? You had to go hack the database. Well, that's a great user experience, isn't it?
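Going back to capacity-based QoS for a second, the arithmetic is simple. The real feature is configured through QoS specs attached to a volume type; this little helper is purely illustrative of the scaling model, not Cinder code:

```python
def scaled_iops(size_gb: int, iops_per_gb: int, min_iops: int = 0) -> int:
    """Toy model of capacity-based QoS: the IOPS limit scales with volume size.

    A floor (min_iops) keeps very small volumes from being starved.
    """
    return max(size_gb * iops_per_gb, min_iops)

# A 1 GB volume at 500 IOPS/GB gets 500 IOPS; a 2 GB volume gets 1000.
print(scaled_iops(1, 500))
print(scaled_iops(2, 500))
```

So instead of hand-tuning a limit per volume, you set the per-gigabyte rate once and every volume gets a limit proportional to its size.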
No. So now we've got a command through cinder-manage that lets you say: I've had a failure and this backend is now my primary; move over to that. Your API starts interacting with that backend instead, and when you recover the old one, you can switch back, all without hacking the database. Given that we say "please don't hack the database," this is a better way to do things.

Backup support improvements: everybody's been asking about backup lately. Please come to our session later today, I believe that's the user feedback session, because we need to talk about backup a little bit and better understand what's needed and what needs to be improved. But we're already working on improving it in Rocky. Now you can set an availability zone on a backup: say I'm backing my data up and I want it to stay in the same zone as my storage; you can specify that. We've also made it more efficient (thank you, Gorka), so that you can utilize multiple processes when you're doing a backup and get better speed out of however many CPUs are available on the system where the backup is happening. And we improved support for the Google auth library; I'm not sure many people are using Google for backup, but it seems worth mentioning.

Another cool availability-zone-based function, and it goes along with the backups appropriately, is being able to create volume types that say which zone you want your storage to be created in. I know I'm going to be running on computes in a zone, and I want my storage to go to the same zone: as of microversion 3.52 you can create volume types that go to that same zone.

Image signature verification: security is good.
We're adding support for that: the ability to check a signature in Glance when you create a volume from an image, and then adding a flag saying the signature was verified. So if you've got a volume created from an image that had a verified signature, it adds that tag, and you know you got the data you wanted onto your volume for that image.

Wow, we did a lot in Rocky, didn't we? Good job, guys. This next one was a piece of work that was in process for a long time. When you transferred a volume to another user, the snapshots didn't go with it. Then that user would say, I'm done with my volume, I want to delete it, and they couldn't, because they didn't have access to the snapshots, and they'd have to call up whoever sent them the volume and say: hey, can you get rid of these snapshots so I can delete my volume? So now, when you send the volume, we make you the owner of those snapshots as well, which greatly improves the user experience. And if for some reason you want to make life harder for the person you're sending the volume to, you can still say no snapshots; you'll keep them, they'll get the volume, and they'll call you and complain later. By default this is enabled.

And again, thank you, Gorka: we're working on active-active HA documentation so that people are aware of what's supported with Cinder in an active-active HA setup. We've got much better documentation out there as to what should work, what shouldn't, and what's recommended, and I'll actually be using this as the segue into our next topic.

But first, for your reference, here are the microversion changes that went in in Rocky. If you want to use those features, make sure you set that version when you use the command, or make sure your environment is set up to use the latest microversion.

Priorities for Stein. How am I doing on time?
I've got five minutes. Reminder: we talked about these at the Denver PTG. They may or may not happen, but this gives you an idea of what we're working on and what we hope to get done in the next release. For details, we track what we're actually getting done, with links to the reviews and that kind of stuff, in the Cinder spec review tracking etherpad.

So, the segue. What we've discovered from going through the documentation and setting up HA development is that the placement service has kind of already solved the issue of how to do this global locking, so that we can have multiple active volume instances. We're hoping during Stein (right, Gorka?) to get this implemented, so that we'll have better support for active-active volume processes, along with active-active API and scheduler instances, for a good HA environment.

A big goal for the community is adding upgrade checkers, and we're working on that: basically a command that you can run before you upgrade your system from one release to the next, and it says, hey, it looks like your system is ready to go, or you might want to look at this, or you've got to fix something before you move on. We will be implementing that during Stein.

Generic backup implementation: I'm hoping we'll get this in; we've been working on it for a little bit. Again, this goes along with the backup goals. What if you don't have a backup system like TSM or Google or whatever? Well, how about you use one of your storage backends and back up to that? That's the goal here. I think we're on track to get that done in Stein, I hope, if not in the Train release.

Driver capabilities reporting, maybe for Train: we've had challenges here. If you want to know what your driver is doing, it's not the easiest thing to figure out. So this is an area where we're trying to improve the user experience, so that people who want to go and see, I've got three backends, what are the capabilities of each of those backends?
We're hoping to make that more readily available and usable to the administrator.

This next one is up here so that you can hold me accountable, because I'm supposed to get it done and I need motivation; so now it's documented. We're going to try to move to StoryBoard. My goal in the process is to improve how we track bugs and specs and such, so that we can be held more accountable for the work we're doing. I've got to do it; now I've said it.

Ceph iSCSI support: this is another one I'm kind of holding my team accountable for. We've had requests from Ironic and some of our other stakeholders to add iSCSI support. I'll be honest, Lenovo is interested in it, so we're working on it, partnering with people at Red Hat to get it done. This could be a fun feature: being able to use RBD if you want, or to expose your volumes using iSCSI. It really helps support the idea of a standalone Cinder environment where you may or may not have RBD access to your volumes. This will also help enable boot-from-volume with Ceph on bare metal, so we're working with Julia to help make that happen in the Ironic environment.

Other improvements: this is just some of the other stuff we're looking at hopefully getting through. Re-initializing failed volumes. Making it so that if you don't have volume types, we don't leave you with a bunch of untyped volumes where it's not really clear what they are. Deferred deletion in RBD, the recycle bin: that's a good feature to add with Ceph, and it improves performance when you're doing deletes. We've also got people working on all the other things around multi-attach and multipath, improving shared targets, that kind of stuff, and adding information to the transfer records so that you can better track where a volume has been passed around.
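On the RBD deferred-deletion item: if that work lands the way it's being proposed, enabling it would presumably be a per-backend toggle in cinder.conf, something along these lines. The option names below are from the in-progress driver work and may change before release:

```
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Move deleted volumes to the RBD trash instead of deleting inline,
# so the delete call returns quickly; a periodic task purges later.
enable_deferred_deletion = true
deferred_deletion_delay = 0
deferred_deletion_purge_interval = 60
```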
There are some links here that will be available when the presentation gets posted, as a reference for what we're doing, and also the release notes; there's a lot more detail in there about what we've done in Rocky.

Thank you. And did I do it? Yes, with one minute to spare. Does anybody want to ask a question in one minute? I answered all your questions; I covered everything you wanted. Good. Well, thank you for coming to the project update, and if you need anything, find us in #openstack-cinder on IRC, or post something to the mailing list, and we'll be glad to help you. Otherwise, happy storage: go forth and save your data.