Hey there, my name is John Griffith. I'm the current project technical lead for the OpenStack Block Storage project, Cinder. Today I wanted to go through a quick update on where Cinder is at and how things are going.

First, a quick run-through in terms of numbers. During the Grizzly cycle we did 35 new blueprints, committed 151 bug fixes, and merged over 600 changes into the code base. We added over a dozen new backend drivers, which is pretty astonishing. We also had a number of new vendors come into the OpenStack project and get involved, including work from EMC and a company called Coraid, among others. So it's been really exciting; the project is really growing, and things are starting to branch out and expand. HP also showed up and started doing more work. One of the really cool things here is that these folks aren't just bringing in drivers and getting support for their products into Cinder; they're also branching out and helping with the core project itself, advancing it and making it better. That's pretty exciting.

I wanted to at least mention some of the new drivers and vendors. One of the exciting ones: the HP folks introduced support for the 3PAR array, including Fibre Channel support, which is a pretty big move. Coraid came along and gave us AoE (ATA over Ethernet) support, so we now have iSCSI, AoE, and Fibre Channel in Cinder. Huawei came in, along with Scality and Gluster; we have a dedicated GlusterFS driver now, modeled the same way we did NFS last cycle, as a block storage overlay on top of Gluster, which is kind of interesting.

We added thin provisioning to the base reference LVM driver, so there's now a thin provisioning option depending on what version of LVM you're running. That's pretty useful, and it brings some significant performance improvements. We also added mirroring support to the LVM driver, so now we have mirrored LVM volumes. XenAPI introduced an NFS option, for those of you using Xen. And EMC has come up with support for their arrays, with more to come; during Havana we should see more, and I think we'll see more companies coming in with drivers and support as well.

In terms of features in Cinder itself, this is the stuff I thought was pretty exciting. We came out with version 2 of our API. Those of you who've been around the OpenStack world for a while know that any time somebody talks about incrementing an API version, everybody cringes and gets ready for breakage. So we focused extremely hard on maintaining backward compatibility, not breaking anything, and making the transition go smoothly. So far the upgrade process has been pretty much: upgrade the code, kick things off, and run. It's worked really well, and we haven't come across any issues yet. Mike Perez from DreamHost worked really, really hard on that, and it's come together nicely. Now we're at a point where we can keep improving what the API does, like reporting errors and giving more information when something goes wrong.
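Just to illustrate, a minimal sketch of hitting the new v2 API with python-cinderclient might look like the following. The credentials and endpoint are placeholders, and it assumes your installed client accepts the '2' version argument; treat it as a sketch, not a recipe.

    from cinderclient import client

    # Placeholder credentials and Keystone endpoint; substitute your own.
    cinder = client.Client('2', 'admin', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    # Same calls as against the v1 API; backward compatibility was the
    # whole point of the v2 work.
    for vol in cinder.volumes.list():
        print(vol.id, vol.status)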
That's going to be a huge improvement over time.

I mentioned the Fibre Channel support; that was really cool. We had a consortium of folks from Brocade, HP, IBM, and EMC who got together, worked as a group, and came up with a solution to get basic Fibre Channel support in for storage. They've done a really good job, and this is just the first step: next come things like zone management and SAN management, and they're already starting to meet and work on that. So there are some pretty cool things happening there. AoE I already mentioned.

One of the other really big additions is that we finally have a real scheduler in Cinder. Those of you who are familiar know that up until this release all we had was the simple scheduler, which meant anything you scheduled just went to Cinder nodes in round-robin fashion. Now we have the ability to set custom filters: filter on volume types, filter on capacities, all sorts of things. There's a ton of flexibility there, including customization; you can write your own filters to determine where your volumes get placed.

Along with that, one of the things that drove the scheduler work was multi-backend support. If you're familiar with the Cinder architecture: when you're not using the base LVM driver but a backend from another vendor, the Cinder node acts more or less as a controller, a pass-through to the vendor's API. In the past, if you wanted multiple backends in your Cinder cluster, you had to spin up a new Cinder volume node dedicated to serving those requests. You also couldn't have differing types; and if you did, you had no way to control which one was used or where things went. With multi-backend support, a single Cinder volume service can be configured to talk to multiple backend devices, each backend can be assigned a specific volume type, and requests are directed based on that type (there's a sketch of what this looks like in cinder.conf below). It's a huge feature, and one of the things it's great for is scale-out: if you have an OpenStack Cinder setup and need more capacity, you can just keep adding storage and growing without reinstalling OpenStack or a Cinder node or anything else.

We also started working on LIO support, a new iSCSI target, if any of you are familiar with that. I think Havana will be kind of the incubation period for it. It's pretty cool: some good performance increases, new features, new functionality, and it's a hot item these days anyway. I think it's going to keep getting better and give us more flexibility, so that's something to keep an eye out for.

And then one of the other really cool things is that we added a new backup service to Cinder. You now have the ability to take your Cinder volumes and back them up to Swift; there's functionality in the API to configure everything and run a backup to Swift. In the future there's talk from some folks about doing tape as well, and some backup application vendors are even talking about putting support into Cinder to do that. So that would be pretty handy.
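Here's the multi-backend sketch mentioned above: a cinder.conf with two backends, written out with Python's configparser purely for illustration. The section names are made up, and while enabled_backends, volume_driver, and volume_backend_name are the options I believe drive this, check the documentation for your release.

    from configparser import ConfigParser

    conf = ConfigParser()
    # Two hypothetical backends served by one cinder-volume service.
    conf['DEFAULT'] = {'enabled_backends': 'lvm-1,lvm-2'}
    conf['lvm-1'] = {
        'volume_driver': 'cinder.volume.drivers.lvm.LVMISCSIDriver',
        'volume_backend_name': 'LVM_iSCSI_1',
    }
    conf['lvm-2'] = {
        'volume_driver': 'cinder.volume.drivers.lvm.LVMISCSIDriver',
        'volume_backend_name': 'LVM_iSCSI_2',
    }
    with open('cinder.conf.sketch', 'w') as f:
        conf.write(f)

    # A volume type whose extra spec matches a backend's
    # volume_backend_name is what steers requests through the new
    # filter scheduler, e.g. with python-cinderclient:
    #   vtype = cinder.volume_types.create('gold')
    #   vtype.set_keys({'volume_backend_name': 'LVM_iSCSI_1'})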
As far as what's next: this week, as you all know, we've been in the design sessions hammering out ideas and priorities. I already touched on the expanded Fibre Channel support; that's a big one. Then there are ACLs: we want to start doing access control lists. A big part of that is the ability to transfer ownership of volumes between tenants, or to have multiple tenants with access to a volume. That's going to solve a lot of use cases people have been asking about. I hear all kinds of crazy stories about how people work around this today, and some of it's scary, so it'll be nice to have this feature actually in the code and tested.

The next thing is volume migration. We want to introduce the capability to migrate a volume from one node to another, or to take your volume with you as part of a compute instance migration. That involves some tricky things around the iSCSI connection and keeping multiple connections open without losing data, so there will be some interesting challenges, but it's one of the things we've definitely highlighted for this cycle. That'll be good to see.

And shared storage libraries. Cinder was born out of nova-volume, and the block storage semantics live inside Cinder, but there are still block-storage-related things that need to be done in Nova compute and some other places. Right now the result is copy-and-pasted code in multiple places that all does the same thing, which of course is very inefficient: something gets fixed or adjusted in one place but not the other, they fall out of sync, and so on. So we've decided to take all the common pieces we can find and put them into an independent library that everybody can use (there's a toy sketch of the idea a little further down). That's going to help a lot. It may be its own library, it may go into Oslo; I'm not quite sure, we'll see how it shakes out. But it's going to be a big improvement, and it opens up a lot of flexibility and new options with some of the other projects, including bare metal and Glance.

So that's my quick five-minute run-through. I'd like to open it up if anybody has questions about Cinder. Oh, sure, go ahead. "So NetApp just introduced a shared file service, and I talked with them; they say it's part of Cinder. Do you know anything about it?" I know a lot about it. "Can you elaborate?" Sure. For a while now NetApp has been interested in having a shared NAS-type service inside of Cinder. There have been multiple debates about whether NAS services mesh well with block services, and so on. There was an attempt made at the end of the last release cycle that didn't mesh too well, so we made some changes and recommendations, and some additional things were tried, such as having an actual independent service inside of Cinder. That was submitted, but unfortunately not in time; it wasn't done, and it didn't yet have support for any generic base implementation. I think that's going to continue to be revisited. I know NetApp is talking about it to a lot of people; I think they've actually done some media announcements, so that's probably where you heard about it. Anyway, there are some things going on there, but I'm not sure exactly where it will land; I haven't talked with those folks much this week about their plans. I don't know whether it will end up part of Cinder or become its own project. If there's enough interest and enough people willing to invest in it, I think it would be great for it to be its own project, and I think it's something we need in OpenStack. So that would be great.
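Coming back to the shared storage library idea for a second, here's the toy sketch mentioned earlier: the kind of helper such a library might expose so that Nova compute and Cinder can import one canonical implementation instead of each carrying its own copy. Every name here is hypothetical; nothing like this is merged yet.

    def iscsi_login_command(target_iqn, portal):
        # One canonical place to build (and fix) the iSCSI login command,
        # instead of near-identical copies living in Nova and Cinder.
        return ['iscsiadm', '-m', 'node', '-T', target_iqn,
                '-p', portal, '--login']

    # Each project would then run this through its own execute() wrapper
    # rather than hand-rolling the command line in multiple places.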
Yeah, absolutely. So the way the backup service is architected, it's designed as an independent service inside of Cinder, and it follows the same model of being able to plug in drivers for the backup target. So somebody could implement a driver for a tape drive, whatever; there are options there. And of course block-to-block, disk-to-disk is always a possibility, things like that. Those things are in the works, and you should probably see them in the next release as well.

"So are the hooks exposed in cinder.conf?" Yeah, though first you'd have to actually write a driver and make it work, right? But then the selection comes from cinder.conf, combined with specifying the target in the client when you run the backup.

"For the backup service itself, do you need a backup application, or is it self-contained?" No, and that's the beauty of doing it with Swift, for example: if you have a Swift install, you can use everything that exists right now and go directly to Swift, so you would not need a separate backup application.

"And restore as well?" Yeah, absolutely; backup without restore is kind of useless. But you never know. So yeah, thanks for your time.
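To close the loop on that last exchange, here's a minimal sketch of kicking off a backup and restore from python-cinderclient. The backups and restores manager names match my recollection of the v1 client, and the credentials and volume ID are placeholders; verify the exact calls, and the cinder.conf backup options, against your release.

    from cinderclient import client

    # Placeholder credentials; the server side needs the backup service
    # configured in cinder.conf (pointing it at Swift) for this to work.
    cinder = client.Client('1', 'admin', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    # Back an existing volume up to Swift, then restore from that backup.
    backup = cinder.backups.create('my-volume-id')
    cinder.restores.restore(backup_id=backup.id)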