the project team lead for the Cinder project, so I wanted to go through and give a quick update on where things are at and what's going on.

First off, for those of you who aren't familiar and are wondering what Cinder is (there are still some people who aren't quite up to speed): at the last summit we made the decision to break Nova volumes out of the Nova project. There were a number of reasons we thought this was a good idea. One was to simplify the Nova code base and make its maintenance and supportability a little easier. One of the biggest reasons, in my mind, was also to generate more attention and interest in block storage inside of OpenStack. It's always been there, but it's always been in the background; in Nova, the key is virtualization, the hypervisors, and everything else. So the idea was to get more focus, get some dedicated resources, and put together a team that actually cared about and focused on block storage.

The approach we took was to extract the existing nova-volume code out of Nova and put it in its own project. The whole idea along the way was to have equivalency between nova-volume and Cinder: all the same functionality, all the same bugs, and all the same bug fixes, so everything remained constant between the two. We proposed the idea at the last summit and started immediately the next week, with the goal of staying in lockstep with the official OpenStack process. We followed the same release schedules, the same guidelines, all of the governance rules; we set up all the Launchpad accounts and everything else, with the intent that once we got to Folsom RC1 we would be able to decide whether Cinder was actually ready and whether it could be a core project, and so on. Obviously we achieved that goal. We didn't finish quite as quickly as we would have liked, but by RC1 we were well on our way and everything was completed. We would have liked to have already been at the point of adding new features, but we hadn't quite gotten there yet.

As far as new features we implemented, there are a ton of small things here and there, but some of the big ones people were looking for: we added the ability to create a volume from an image. Instead of the old manual process of creating a bootable volume, there's now a call that does it directly, a read of the Glance image onto a volume. In turn, there's also the ability to create an image from a volume, so we can read off the volume and put it back into Glance as well, which is a nice feature.

We've also started on some NFS support: there's an NFS layer that will take an NFS share and present it out as a block device. For those of you who don't know, Cinder today is a block device service; it is block storage, not the Swift object store or a file server or anything like that. NFS is something some people have been looking for and find really useful, and I think there's a lot more to come on how that will evolve and what kind of support might be added in the future, but right now there is a basic layer in there to allow some NFS support.

One of the other good things was that we implemented persistent iSCSI targets. The idea there is that if your volume node or your compute node reboots or resets, your volumes don't all go away, lose all their connections, and have to re-initialize everything.
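The create-volume-from-image and create-image-from-volume calls mentioned above both come down to a chunked copy between Glance and a block device. A minimal sketch of that idea; the function name and stream-like arguments are illustrative, not the actual Cinder or Glance APIs:

```python
def copy_stream(src, dst, chunk_size=4 * 1024 * 1024):
    """Copy bytes from one file-like object to another in fixed-size
    chunks, so a large image never has to fit in memory at once."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

# create volume from image: copy_stream(glance_image, volume_device)
# create image from volume: copy_stream(volume_device, glance_image)
```

In the real service the endpoints are the Glance image service and the volume's block device, with extra bookkeeping around status and checksums, but the data path is this simple copy in one direction or the other.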
So that was a really nice feature, really necessary, something that had been asked for for quite a while, and we finally got it in.

Along with that, of course, we had to create a new Cinder client. Those of you who are familiar know there's python-novaclient, the Glance client, the Keystone client, and so on, so of course we had to do a Cinder client. One thing that worked out really well in terms of compatibility is that the Nova client and the Cinder client are equivalent in their functionality, the calls you can make, and the operations you can do. The only difference is that, depending on which service you have configured, they point to one volume service or the other. For example, if you have Cinder configured as your volume service endpoint, you can still do `nova volume-create` or `nova volume-list` and those still work; they just go to the Cinder API instead of the Nova API. That was a win for a lot of people in terms of compatibility and getting used to things. We also kept all the EC2 stuff as it was: that's all still in Nova, it still behaves the same way, the mechanisms are all the same, and none of that was lost.

Along with that, there were a number of issues people had with the response data coming back from the APIs. For example, on a volume create, only half of the information that was actually available would get returned, and somehow nobody noticed that for a really long time. There were a few things like that: a number of API calls have been changed in terms of the data that's returned in the payload, and that's going to be really helpful for some people.

As far as things that went well, I think there were a lot. There was definitely a lot of really great participation, and you can see this is only a partial list.
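The switch between nova-volume and Cinder on the Nova side is just configuration. A hypothetical Folsom-era `nova.conf` fragment; the option name here is from memory, so verify it against the release documentation:

```ini
# Point Nova's volume API at Cinder instead of the built-in nova-volume
# service; with this set, `nova volume-create` and friends land on the
# Cinder API rather than the Nova one.
volume_api_class = nova.volume.cinder.API
```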
could have gone on and on, but the reality is there were people from all different companies and vendors, not only storage vendors but service providers and everyone else. One I forgot to put on there that was really critical too was Ceph and DreamHost. There was just a lot of really good turnout, a lot of really good participation. I would get a review from somebody I had never heard of or talked to before, and they wanted to get involved and were excited about the project, and it was really cool. Along with that, pretty much every vendor you see (Zadara, SolidFire, IBM, NetApp, HP) submitted updates to their drivers, made new drivers, and made significant improvements, so it was a really good experience all the way around for everybody.

As I said before, we did achieve that nova-volume equivalency, so right now everything that's in Cinder is in Nova and vice versa, which is a good thing. Nova-volume is deprecated and will go away after this release, so hopefully we won't be talking about nova-volume six months from now. And we did achieve core status, and so far everything's been going pretty well.

So this is the hard part. For those of you who went to the Nova sessions: we haven't had any of the Cinder sessions yet; they're all tomorrow. So as far as me getting up here and talking about what's next, it's somewhat speculative. These are the hot topics that have come up, that people want to talk about, that have been buzzing around for the past month, so that's why I put them up there.

QoS is a big one. The idea there is that a lot of back ends have the ability to control IOPS-based quality of service, and we'd like to go ahead and expose that out through
the API. There are discussions about how that should look, whether it should be something embedded for admins only versus exposed to users, and so on; those are the kinds of things that will be talked about there.

The other thing is that right now, volume status in Cinder is just a simple text string: it can be "available", "ready", "failed", "attaching", "detaching", and so on, but they're all just strings. We're going to convert that into an actual state machine. It'll be a lot more robust, it'll clean things up, and it'll give us a lot more flexibility and stability. Right now this is still a point of contention; we've cleaned it up and made it better, but there are still a lot of things that can go wrong when you're just using strings like that.

One of the other things that came up is API improvements. We're talking about a possible API version 2 with a lot of enhancements. If you stick around for the Lunr talk (I can't remember exactly when it's scheduled, but you'll want to learn about that), that's going to drive some features we'll want to pull into the API. At the same time, a lot of the back-end storage vendors provide functionality that there's currently no way to access, so we need to grow and improve the API. That's going to be a big focus for this release as well.

Speaking of back ends: right now, when you set up a volume node you only have the choice of one back-end storage device. That could be local LVM storage, an HP box, a SolidFire box, whatever, but that's the only choice you have; once it's set, it's set. One of the things we're definitely going to try to change this release is to add the ability to have multiple back ends.
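A hypothetical sketch of what multi-backend configuration could look like; since this feature is still being discussed, the option names, sections, and driver paths below are illustrative of the idea, not shipped configuration:

```ini
# Illustrative multi-backend layout: each section defines one back end,
# and a volume type could map to a volume_backend_name for placement.
enabled_backends = lvm-1, solidfire-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[solidfire-1]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
volume_backend_name = SolidFire
```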
That way, either the admin or the tenant can select which back end they want their storage to reside on. That'll be a nice feature; it's really handy for things like tiered pricing, if you want to do pricing based on performance or that sort of thing.

Along with that: currently we have this concept of volume types and extra specs. It's been hanging out there forever, but the problem is nobody has ever really known what it meant or what to use it for. One of the things we're working on is actually solidifying that definition, clearing it up, and putting together some use cases for it. That will help a lot; things like back-end types are where it will come in naturally. Related to that, the volume scheduler right now is just a simple scheduler; there's no intelligence in it, nothing really smart about it. There's already some work done that will start doing more complex filtering based on volume types and other criteria, and that's really going to help improve that functionality as well.

Another thing came up from the folks at HP. Currently, if you do a boot from volume and the image is deleted out of Glance, you no longer have any metadata or information about the image that now resides in that volume. This creates a lot of problems with billing, re-creation, and anything else they need to do there. So we're going to make some changes to do some kind of retention on that data, so we can keep track of it and they can still do their billing or re-create the image if they need to.

One of the other things that would be really cool, and that a lot of people have asked for, is the ability to actually do backups of your block storage onto an object store.
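A block-to-object backup like that essentially amounts to chunking the volume's bytes into separately stored objects, plus an index that a restore can replay. An illustrative sketch, assuming a stand-in `ObjectStore` class rather than the real Swift client API:

```python
import hashlib

class ObjectStore:
    """Stand-in for an object store container: name -> bytes."""
    def __init__(self):
        self.objects = {}

    def put(self, name, data):
        self.objects[name] = data

    def get(self, name):
        return self.objects[name]

def backup_volume(volume, store, prefix, chunk_size=1024 * 1024):
    """Split a volume stream into chunk objects; return an index of
    (object name, checksum) pairs that restore replays in order."""
    index = []
    seq = 0
    while True:
        chunk = volume.read(chunk_size)
        if not chunk:
            break
        name = f"{prefix}/{seq:05d}"
        store.put(name, chunk)
        index.append((name, hashlib.sha256(chunk).hexdigest()))
        seq += 1
    return index

def restore_volume(store, index, out):
    """Replay the index, verifying each chunk before writing it back."""
    for name, digest in index:
        chunk = store.get(name)
        assert hashlib.sha256(chunk).hexdigest() == digest
        out.write(chunk)
```

Chunking is also what makes it natural for a vendor back end to take over the copy itself, since each chunk upload is an independent operation it can optimize.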
Cinder to Swift, for example, makes a lot of sense: what you may have is higher-performing block storage that might be a little more expensive, and doing your backups over to object storage may be a little cheaper and not as performant. That's going to be a good win. Volume resizing is another one a lot of people have asked for, so that will definitely get implemented. And one of the other big things is that the folks from Citrix have contacted me on a number of occasions; they definitely want to focus on improving the support and functionality in OpenStack. There are a lot of new features they'd like to get implemented, so look for that to really continue to grow and be enhanced.

Oh wait, I missed one; hold on. I said what went well, but I skipped a slide: I was going to go over what didn't go so well, which I hate to do, but I should. The first thing is, well, I'm new, so I'm still getting used to this whole thing and learning how to be a PTL. That went okay, I think. What didn't go well was documentation, and a couple of people have voiced that to me already; I'm painfully aware. We did not do a very good job of documenting how to set things up, what the changes are, and so on. Unfortunately, I think we got into a mode of "well, it's nova-volume, so it's exactly the same; whatever you do in nova-volume, that's what you do in Cinder." That's not really an acceptable answer, so that was a shortcoming this release. It is definitely something that's going to improve, and it won't be a problem in the future.

The other thing that was kind of interesting was figuring out how the timing of releases actually works. There were a number of times where we'd get to an RC point and I'd be
thinking, okay, this week you hammer it really hard, and after that you're going to have a week of cleanup: doing some documentation, maintaining the bug list, things like that. It never worked that way. Every single time, as soon as you finished one it was on to the next deluge of things for the next one. It's a pretty fast pace, and until you go through it, it's really difficult to figure out exactly how to schedule and prioritize things. That's going to be an interesting thing going forward too. Those are the big things that, from my viewpoint, need to be fixed in the future.

The other thing is getting more consistency in terms of solid reviewers who are always there and available. Right now the system works really well; there's pretty good response time and turnaround, but there's only a small set of four or five people who are actually always there to do reviews, and I'd like to see that number grow a little if possible, just to take some of the pressure off and get more eyes on the code. So yeah, those are the big things. Now, does anybody have any questions?

[Audience question] We've talked to them, and they definitely have some things going on in their project and inside the hypervisor to start collecting that data. So there is work being done there, and there will be stats reported back on volumes, including things like megabytes read, IOPS, all kinds of things. There's a pretty good list there.

[Audience question] So again, I can only speculate about what we'll talk about in the session; I can tell you some of the things I've personally been thinking about over the past couple of days. My idea is that yes, you would actually be able to re-create the entire volume from that backup; otherwise it wouldn't be much of a backup, right? The other thing is, a number of
vendors have talked about actually implementing this type of functionality in their own devices. The base case would be going from LVM or something like that, but what would be really cool is when these vendors have that capability and we can just make an API call to their device that does it for us; they can optimize it and make it more efficient. Sorry, what? Yes, exactly: they actually implement the copy to object store themselves, and that's something a number of vendors have talked about doing. It's pretty commonly asked for and in high demand. Does that make sense?

[Audience question] It's really not that bad. The on-ramp to OpenStack is kind of tough for a lot of people to figure out, but the best way to go about it, depending on the complexity of the driver: you get your Launchpad account, you sign the contributor's agreement, and basically the first step is to submit a blueprint. In that blueprint you say: this is what I want to do, this is what the driver does, this is why it's good, and so on. Then you write the code, write some tests for it, and submit it to Gerrit. That's pretty much the quick, scaled-down rundown. The other thing is, always feel free, especially in Cinder; I'm on IRC almost all the time, my nickname is jgriffith, so grab me on there. I can help anybody who wants to get involved, wants to write a driver, anything like that; I can definitely help you come up to speed on how to do it and get it going. Really, anybody on IRC can help; there's actually a Cinder channel that a lot of folks monitor now, #cinder. That's a great place to start, and for anybody who wants to get involved in any way, whether it's a driver or contributing something else, that's a great way to start.
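For a sense of scale, the code half of a driver submission is mostly implementing a handful of entry points that the volume service calls. A hypothetical skeleton: the class shape loosely mirrors the driver interface, and the backend client and its methods are made up for illustration, not a real vendor library:

```python
class ExampleVendorDriver:
    """Hypothetical volume driver sketch: the volume service invokes
    these hooks; everything vendor-specific hides behind the client."""

    def __init__(self, backend_client):
        # backend_client is an assumed vendor API client, not a real API.
        self.client = backend_client

    def create_volume(self, volume):
        # Carve out a LUN of the requested size on the backend.
        return self.client.create_lun(volume["name"], volume["size"])

    def delete_volume(self, volume):
        self.client.delete_lun(volume["name"])

    def create_export(self, context, volume):
        # Hand back the iSCSI details the compute node needs to attach.
        return {"provider_location": self.client.iqn_for(volume["name"])}
```

The real interface has more hooks (snapshots, attach and detach bookkeeping, and so on), but this is the general shape a blueprint describes before the code goes up for review.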
[Audience question] Correct: as it stands today, the intent is for multiple back-end storage devices. For example, somebody may have a NetApp device, somebody may have a Ceph RADOS block device, somebody may have a SolidFire device; there's no reason we shouldn't be able to come up with a way for you to select any of those three depending on criteria you set as a service provider.

[Audience question] One of the things that's going to come up in the sessions tomorrow as well is Fibre Channel support; I've had a couple of folks ask about that. Right now we do not have any Fibre Channel support; it's pretty much iSCSI. Ceph has their own driver to do some special things, and then there's the local storage stuff, but it's mostly iSCSI-based right now.

[Audience question] So there are some different discussions going on around that. That particular point was not necessarily to that end; it makes sense that you may be able to extrapolate to that next level and actually get there, but there are some other conversations, which we just started having about fifteen minutes ago, about ways we may be able to do exactly that.

[Audience question] Yep, I want that to go away; I want that to completely go away, and hopefully at some point it will, though I don't know how that's going to shake out. This particular problem was specific to a service provider's setup: a client would do a boot from volume, and the provider has special metrics for billing based on whether it's an image or not, whether it's bootable or not. What they found was that when they deleted the image out of the Glance repository, they lost all the information for their billing and everything blew up. That's more what this is aligned to, but I think it's going to come into play for some of the other things that are definitely on the roadmap. There are a lot of things being talked about right now to improve how images are passed around, from Glance to Nova to Cinder to whoever, and I think it's going to be pretty cool once we get it shaken out. Okay, cool.

[Audience question] So right now it's kind of interesting: it can be used by the driver, yes, but I don't know of any drivers that are actually using it in that manner. The intent was for volume types to be something akin to flavors for instances, and extra specs to be extra data you could use for tweaking and tuning. Those have always been there, but they've never really been taken up and used; at least from my experience and knowledge, never leveraged all that much. But there's a lot of potential there; there are a lot of things you could do with that. The other thing is that they weren't exposed through the clients either, which was kind of interesting: they've always been there, but nobody could access them through the client unless they wrote their own. The other thing that was not exposed was metadata. A lot of people weren't aware that you can actually set metadata on your volumes when you create them, so that's something else that's kind of interesting. There are a number of things like that that have come up.

Anything else? This would be the shortest project status update ever. All right, cool. Thanks, everyone.