Cool. Okay. All right, everybody, thanks for coming to my Cinder and Docker talk here. Peanut butter and chocolate, as I call it. Two things that go great together. Oh, come on. My God, not another Docker talk at the OpenStack Summit. So I couldn't decide whether this was good clickbait, or whether it was just ironic, or whether it just didn't matter. Anything with Docker in the title these days seems to get a pretty good audience, as is evident here. But I'm actually going to focus mostly on talking about Cinder and the things that you can do with Cinder. So, for those of you who don't know, Cinder started quite a ways back. After the Folsom release, Vish and some of us got together and had this idea of breaking nova-volume out of Nova and creating a block storage service. Neutron had kind of set a precedent for doing that sort of thing; they had started that quite a while before us. We went ahead and started working on it, and we did it in one release cycle. So at the Grizzly release, we had Cinder volumes and we released that. It was really cool: three or four drivers in there, a handful of contributors that were there full time on a regular basis, and lots of folks that were just flying by to help out and teach me what I was doing, cuz I had no clue. As of today, though, we've gone from those three or four drivers to something like 80 different drivers that you can configure and use inside of Cinder. So it's taken off, it's grown immensely. It's kind of crazy, and we've got a ton of regular contributors. We've got folks whose full-time job is working on Cinder, writing storage code, and developing backend drivers. So it's been really cool to watch. It's been awesome, it's been really, really neat. The way I usually sum it up when I talk to people who don't know much about OpenStack or Cinder is: it's just like networking. 
If you're a storage vendor or a networking vendor, you've probably got a plug-in for Neutron or Cinder inside of OpenStack. It's just what you do. So, a little bit about Docker, and again, most of you probably already know this. Containers have actually been around a long while. But in the past year or so, there's been a pretty significant explosion in popularity, and they've become a really big thing. I remember a couple of years ago, I used to get a lot of inquiries from folks asking me about putting iSCSI support or storage support inside of containers. For a long time, that was kind of an issue for a lot of people. They didn't have any way to persist data. They didn't have a good way to persist data, I should say, inside of a container. You could certainly attach the local file system and things like that. But when you started to scale out into Kubernetes, Mesosphere, whatever, all these other things, there were a lot of limitations. And there was no real good set model or set API to do those sorts of things. So as of Docker 1.9, which came out I guess about nine months ago now, I think that's about right, they released an actual volume plug-in API. What that meant was that now there was a clean, defined API that you could use to do things like create, remove, attach, detach, everything else. So that was really cool, and that made a big difference. And all of a sudden, everybody was interested in persistent storage with containers. So for better or worse, whether you think that's a good idea or not, it became a thing, and it became a really big thing. It's still really early, but if you go out and you look at the activity and the talks and all the things that are going on with Docker and with persistent storage, it's kind of amazing how fast it's moving and how rapidly it's changing. 
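For reference, that volume plugin API is deliberately tiny: it's just JSON over HTTP, where the Docker daemon POSTs to endpoints like /VolumeDriver.Create and /VolumeDriver.Mount and checks an Err field in the reply. Here's a minimal, stdlib-only sketch of a Create endpoint; the endpoint path, content type, and Err convention come from the published plugin spec, while the in-memory map is obviously a stand-in for a real storage backend:

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// CreateRequest and CreateResponse mirror the JSON bodies the Docker
// daemon exchanges with a volume plugin on /VolumeDriver.Create.
type CreateRequest struct {
	Name string
	Opts map[string]string
}

type CreateResponse struct {
	Err string // empty string means success
}

var (
	mu      sync.Mutex
	volumes = map[string]map[string]string{} // stand-in for real driver state
)

// Create records the volume; a real plugin would call its storage
// backend (Cinder, in this talk) here instead.
func Create(req CreateRequest) CreateResponse {
	mu.Lock()
	defer mu.Unlock()
	volumes[req.Name] = req.Opts
	return CreateResponse{}
}

// handleCreate is the HTTP glue: decode the request, run the driver
// logic, and encode the reply with the plugin content type.
func handleCreate(w http.ResponseWriter, r *http.Request) {
	var req CreateRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		json.NewEncoder(w).Encode(CreateResponse{Err: err.Error()})
		return
	}
	w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
	json.NewEncoder(w).Encode(Create(req))
}
```

In a real plugin you'd register handleCreate (and its mount, unmount, and remove siblings) on an HTTP server listening on a Unix socket under /run/docker/plugins/, which is how the daemon discovers it.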
I always talk about how fast Cinder grew and how fast OpenStack grew, watching the change summit to summit. And it's kind of amazing to watch Docker, because it actually makes OpenStack look almost slow, which I never thought would be possible. But it's quarterly releases, and with each release there's usually some pretty significant things there. And then there's all kinds of actors entering that stage right now. So it's pretty phenomenal. The other thing that's interesting is there's a lot of volume plugins being developed. So again, it's kind of like the Cinder story, right? If you're a storage vendor, then you've got to do a Docker driver. That's kind of what I'm starting to see a lot of. As of 1.9, there were six published and talked-about volume plugins on the Docker website. As of right now, I think there's 18 or 20. So that's just one release to the next. That's the difference you've seen. The thing that's cool, well, I'll talk about some of the models and how it works here in a bit. But then there's also a lot of people that are creating some extra projects, some abstractions. And frankly, they want to be the Cinder of containers, or the Cinder for Docker, right? So you've got guys like the REX-Ray folks, the Flocker folks from ClusterHQ, Convoy from the Rancher folks. And I think there's a couple others out there as well that are kind of in the same vein. I think even NetApp has some things that they're looking at doing. So all neat stuff, lots of interest, lots of activity going on. And I think they're all really cool. I don't want anybody to go out and say that I said that they suck and don't use them or anything like that, because I'm certainly not saying that. I think they're all really cool. They each have their own very specific value proposition, in my opinion. 
One of the things that's really tough in a market like Docker right now is there are so many options and so many things going on, and things are changing so quickly, that it's really hard to know what the right thing to do is, right? How do you pick one of these? You gotta go with your gut and hope you make the right choice, in my opinion. Cuz right now there's not a ton of data or a ton of crystal ball reading that's gonna tell you exactly what the right answer is. So, those three choices that I just showed you. Something about them is they work remarkably like Cinder. They give you an abstraction layer that presents an interface, and then you can contribute and plug in drivers underneath for your specific storage backend. They'll then provide some level of a command line interface; they'll let you do some things on the command line, and it's pretty cool. Docker's actually really simple in how it works. These plugins just provide a socket listener, and the Docker daemon can talk to that socket. And that's it, it's really, really simple. What's kind of more interesting, when you start looking at all of these, though, is that all of them seem to start with this concept of, hey, let's put a Cinder driver in here, right? So this is always kind of interesting. Some of you may know me, some of you may not. There's a long history with me on abstraction layers. I think that the proper number of abstraction layers inside of a project is one. And I don't think that having multiple layers of abstraction is necessarily a good idea. I had this battle in Cinder for a really long time, years ago. And I think I was probably wrong, but that's okay, it's just my opinion. But the thing is, I look at all these people that are putting these Cinder layers underneath these other Docker projects and everything else, and my question is: why? 
Is there anything that says Cinder has to be an OpenStack-only thing? When we started Cinder way back in the beginning, one of the things that I had actually planned on doing within a year was to make Cinder a standalone service, similar to what we do with Swift. So here we are four years later and I still haven't done it, but that's okay. So what if I told you that instead of going through these layers to talk to your Cinder deployment through Docker, you could just use Cinder? There's a lot of really good reasons why this should be interesting, right? First of all, there's a ton of investment inside of Cinder already. It's a large community, it's fairly mature, there's a lot of testing. We do CI, we do all sorts of things. So it's really actually a pretty good model. It's also remarkably simple to actually break Cinder out into a standalone service. I've got a prototype of this running, and I've been playing around with some of the stuff with the Docker plugin that we're going to take a look at, and we'll have a chance to fail miserably at a live demo. But it's actually not that hard. The hardest part is figuring out how to monkey around with some of the return data, and that's it. I've actually been doing a standalone, Cinder-only type of deployment with Keystone and Rabbit for a couple of years now. I just have some hacked-up bash scripts that talk to the Cinder client and do some things for me, and it lets me do interesting performance testing. I like to test some of the different performance characteristics between what happens when I'm using libvirt and doing virtio versus another type of pass-through or a local disk or something like that, right? So that's kind of what I've used it for in the past. But it's not that hard, it's actually pretty easy to do. Basically, if you can figure out how to hack together open-iscsi and automate that a little bit, you're pretty much good. You're done, cuz we'll give you all the information you need. 
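To make "all the information you need" concrete: for iSCSI, Cinder's attach path hands back a target portal, an IQN, and a LUN, and automating open-iscsi is mostly just turning that into a couple of iscsiadm invocations. A rough sketch follows; the connection-info field names are simplified from what Cinder actually returns, and the /dev/disk/by-path naming is the usual Linux convention:

```go
package main

import "fmt"

// ConnectionInfo holds the fields Cinder hands back for an iSCSI
// attach (names simplified here for illustration).
type ConnectionInfo struct {
	TargetPortal string // e.g. "10.0.0.5:3260"
	TargetIQN    string // e.g. "iqn.2010-10.org.openstack:volume-..."
	TargetLun    int
}

// LoginCommands builds the open-iscsi CLI calls needed to discover
// and log in to the target; a driver would run these with os/exec.
func LoginCommands(c ConnectionInfo) [][]string {
	return [][]string{
		{"iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", c.TargetPortal},
		{"iscsiadm", "-m", "node", "-T", c.TargetIQN, "-p", c.TargetPortal, "--login"},
	}
}

// DevicePath is where the kernel typically surfaces the attached LUN,
// ready to be formatted, mounted, and passed through to the container.
func DevicePath(c ConnectionInfo) string {
	return fmt.Sprintf("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%d",
		c.TargetPortal, c.TargetIQN, c.TargetLun)
}
```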
So the beauty of that is we don't have to duplicate all this code, right? I mean, there's a ton of duplicate code out there right now. It's kind of crazy. Everybody's off doing their own thing. So the way I solve that is I go off and do my own thing and create some more, right? But in all seriousness, if you already have OpenStack deployed, why not leverage it? Why not give it a shot and see if this works for you? Now, if you're not an OpenStack user, there's probably still some value in this, but it may not be as compelling. And that's fine. All right, so what do you need? Well, basically a running Cinder deployment. Right now, currently, you can hack that together and make it work with just Cinder, Rabbit, MySQL, and Keystone. And it works pretty well. If you've already got OpenStack deployed, all you need is that auth endpoint; you're just going to point to it and use it. So that's a piece of cake, doesn't get any easier than that. In Newton, we are hopefully going to change some things inside of Cinder. I've got some ideas on maybe having a single package with some stripped-down things in it, so you don't have some of the auth problems and Keystone problems and things like that, so you can run it in a standalone mode really, really easily, right? It's not really hard right now, but we could make it easier. So we'll take a look at that. One thing I will caveat, and again, this will come as no surprise to a lot of you: I'm a very iSCSI-centric person when it comes to block storage in OpenStack and around the world. So I have not looked at or taken into consideration how to make this work for things like Fibre Channel and Ceph and some of the other protocols. I think it's fairly trivial, I don't think it's that difficult, and there are plenty of people who have a lot of expertise in those areas and could probably put this together pretty quickly. 
Basically, leverage the same sort of thing that Nova uses today. But those are not things that I'm looking at. I've just looked at iSCSI and kind of hacked some things together. So what I built is, I went ahead and wrote a native, if you want to call it native, Golang driver to talk to Cinder, basically. A while back, back in November, I think it was over Thanksgiving, I thought I'd hang out on a weekend when it was really, really cold, and I wrote a driver for SolidFire for Docker. And I was like, hey, this is actually really simple. So I said, you know, I'm gonna do that for Cinder. Well, it took me a while, but I got back around to it, and I wrote most of it last night. I had a little trouble sleeping, so I was like, well, I'll write this driver and I'll do a demo at the presentation tomorrow. So anyway, what I did is, well, that's not completely true, I did plan this out a little bit. Rackspace has a package out there called Gophercloud, which is a Golang SDK to talk to OpenStack. Golden, right? That was gonna be the hardest part. Unfortunately, I pulled that down and looked, and it's a little lacking in terms of what's there for block storage support. So like a good community member, I wrote some code, implemented the stuff that I needed, and submitted a pull request. And it's just been sitting there for a while now. So I'm not sure anybody actually monitors that repo or does anything with it or not. But we'll find out after this talk; I suspect I'll get a minus one or something. The other thing that's really cool is, back when I first started doing some of these things with Docker about a year ago, I was doing all this work of writing my own HTTP handler and my own REST interfaces and all these different things, and just craziness and everything else. A couple of months ago, I noticed that some folks actually merged a fully functional library that handles all of that for us. 
Score, that was the second hardest piece, right? So it's really, really, really simple. There's now a Docker volume helper, and there's a network helper too, I think they call it. Those save a tremendous amount of grunt work that you're not gonna have to do. So that's really cool, that made life really easy. And then, of course, I have a Cinder backend. So I can do things like QoS and volume types. I can go ahead and call Cinder and do snapshots, create from snapshot, clone, all that sort of thing. All of those features that are in Cinder, I just get for free. I don't have to rewrite them, I don't have to implement them in Golang. I don't really have to do much of anything except talk to those APIs and create an interface to make those calls. So it's all seeming really cool, right? I mean, really simple. So this is my next point. Once we open the box and start talking about this stuff, I always, always, always have to just beg people, and please, implore you. One of the things that I loved about Cinder at the start, and I still love Cinder, don't get me wrong. But one of the things that I absolutely loved about Cinder was the fact that it was simple, it was very basic. It was create, attach, detach, delete. That was basically it. Since then, things have changed significantly. We've got all kinds of things. We've got consistency groups. We've got replication. Yes, it's my fault, I wrote it, but either way. There's all kinds of things, and they're good things, and they need to be there. It's just that it was nice when things were simpler. Things ran faster, they were easier to debug, and they just worked, right? Now it's really, really complex, and that's just part of growing up. But what I would ask is that we try and keep Docker as simple as we can, because that is the main thing that makes Docker so valuable right now. We've got something that's unbelievably easy to use, simple to set up, simple to understand, and simple to run. 
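To give a sense of how small that surface really is, the volume-plugin helper basically asks a driver to implement a handful of calls along these lines. This is a paraphrase, not the library's literal interface (the exact method names and request/response types vary by version of the helper package), and the in-memory driver is a toy; a Cinder-backed one would create and delete volumes through the Cinder API and do the iSCSI attach work in Mount:

```go
package main

import (
	"errors"
	"path/filepath"
)

// Driver approximates the handful of calls a Docker volume plugin
// has to answer: the "everything you need, nothing you don't" set.
type Driver interface {
	Create(name string, opts map[string]string) error
	Remove(name string) error
	Mount(name string) (mountpoint string, err error)
	Unmount(name string) error
	Path(name string) (mountpoint string, err error)
}

// memDriver is a toy in-memory implementation used for illustration.
type memDriver struct {
	base    string // where volumes get mounted on the Docker host
	volumes map[string]bool
}

// compile-time check that memDriver satisfies Driver
var _ Driver = (*memDriver)(nil)

func newMemDriver(base string) *memDriver {
	return &memDriver{base: base, volumes: map[string]bool{}}
}

func (d *memDriver) Create(name string, opts map[string]string) error {
	d.volumes[name] = true // a real driver calls Cinder here
	return nil
}

func (d *memDriver) Remove(name string) error {
	delete(d.volumes, name)
	return nil
}

func (d *memDriver) Mount(name string) (string, error) {
	if !d.volumes[name] {
		return "", errors.New("no such volume: " + name)
	}
	// a real driver does the iSCSI login and filesystem mount here
	return filepath.Join(d.base, name), nil
}

func (d *memDriver) Unmount(name string) error { return nil }

func (d *memDriver) Path(name string) (string, error) { return d.Mount(name) }
```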
So my concern is, once everybody starts getting involved from the storage side, everybody's gonna want their custom features in there, and they're gonna start trying to push and do all that sort of thing. So I'm gonna get t-shirts made up, sort of like Save Ferris. I don't know if any of you saw Ferris Bueller. It'll be Save Docker, or Save Docker Volumes, or something like that. The other thing is, if anybody's open to it, I'm happy to go ahead and just start stripping stuff out of Cinder. But I don't think anybody else would approve the code, so. All right, I got one. Okay, what I always say is, Docker right now has basically everything you need and nothing you don't. You can create, remove, mount, unmount, get the path for where it's attached, get a volume, and do a list of volumes. And actually, list, I think, was just added. So we're slowly growing, right? Really, quite honestly, the way the model works and the way the plugins are, since the plugins that you do don't actually go into the Docker tree or anything like that, they're external modules that you load, I think you could probably get away with this model for quite a while. Because if you want to do fancier things and more custom things and stuff like that, you can do it down at your layer and provide an interface to do that, i.e., give them Cinder, right? And use the Cinder client or the Cinder APIs and do whatever you want. So, I'm up here talking about all this code and everything else. There's a link to the Gophercloud pull request that I have up, which I actually have an update for. I noticed this morning there's something I need to update and send up there. But again, nobody's looking at it. So this is a hint: anybody from Rackspace that works on Gophercloud, take a look, let me know. It may not be any good, maybe you don't want it, that's fine. But let me know, cuz if you don't, I'll move on and do something else. So the other thing is this Docker driver code for Cinder that I have. 
It's a POC, it's definitely a proof of concept. It's nothing fancy, it is down and dirty, and I mean dirty. I will get that up on my GitHub, and I have this, look over there. Cuz I didn't want you to notice there's no link there. There's a few things I need to fix in there, and then also the internet in the hotel went out last night, so I gave up. So let's look at some code and maybe even do a little bit of a live demo. I think my average right now for live demos at talks is somewhere around 75%. So I think today it's gonna take a hit, but we'll see. All right, so I should probably make this a little bigger. That's better. Okay, so first off, let's just take a look at the structure of how some of these things work, right? So if I go, and I'm still connected to the VPN, that's good, it makes me happy. All right, so here's basically, you guys can't see that. Who am I kidding? Is that better? All right, so if you look at this guy, basically this is the breadth of the code, this is basically all you need, right? So what I have is I've got this file called driver. Driver is really, really simple. All that's doing is giving an interface to these calls that I make, right? I'm using a config file to make some settings; I'll show you that in a second. I create a driver, populate all that stuff, create my Keystone endpoint. If you look here, this is kind of the gist of all of these calls, right? So on create, I come in, and the first thing I do is a get, and I make sure that the volume doesn't already exist in Cinder. If it does, I just return a handle to that volume, or the information for that volume. I go ahead and take the input options, like size, type, things like that, that you may have specified. And if it doesn't exist, obviously, I make the call out to Cinder and I create it, and that's pretty much it. If you look here, that is the Gophercloud code right there. That's the call into Gophercloud. 
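That create flow, get first, return the existing volume if it's there, otherwise create with the requested or default size, can be sketched like this. The Volume struct and the client interface here are illustrative stand-ins, not the actual Gophercloud types:

```go
package main

import "strconv"

// Volume is a minimal stand-in for the volume record the block
// storage API returns.
type Volume struct {
	ID   string
	Name string
	Size int // gigabytes, as Cinder counts them
}

// cinderClient abstracts the two calls the create path needs; a fake
// is used below, the real thing talks to the Cinder API.
type cinderClient interface {
	GetByName(name string) (*Volume, bool)
	Create(name string, sizeGB int) *Volume
}

// EnsureVolume implements the flow from the talk: look the volume up
// first and return it if it already exists, otherwise create it with
// the requested (or default) size from the options.
func EnsureVolume(c cinderClient, name string, opts map[string]string, defaultGB int) *Volume {
	if v, ok := c.GetByName(name); ok {
		return v
	}
	size := defaultGB
	if s, ok := opts["size"]; ok {
		if n, err := strconv.Atoi(s); err == nil {
			size = n
		}
	}
	return c.Create(name, size)
}

// fakeCinder is an in-memory stand-in for exercising the flow.
type fakeCinder struct{ vols map[string]*Volume }

func (f *fakeCinder) GetByName(name string) (*Volume, bool) {
	v, ok := f.vols[name]
	return v, ok
}

func (f *fakeCinder) Create(name string, sizeGB int) *Volume {
	v := &Volume{ID: name + "-id", Name: name, Size: sizeGB}
	f.vols[name] = v
	return v
}
```

Calling EnsureVolume twice with the same name is safe, which matters because Docker can ask a driver to create a name it has already seen.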
So it's really easy once you have that backbone set up and everything else; it makes it kind of a piece of cake. And I apologize I didn't use my development machine with fancy colors and everything to make this look better for you. So, what we do is I use Godep, and I'm kind of struggling. I've tried Glide, and now I'm on Godep, having mixed results with it in terms of trying to keep all my packages locked down and all my versioning. So far it's kind of working, but I'm still new at it. So what you'll do is, once this is up in my GitHub, or in NetApp's GitHub, or SolidFire's GitHub, or in Cinder's GitHub, who knows where this might be. All you'll have to do is a go get and the path to the repo, and it will install all of this for you and update those repos and set everything correctly. And that's basically it. After that, you're pretty much ready to use it. The one next step that you do have to do is create, so, this is the extent of persistence for this driver. The whole idea of all of these that I'm trying to do is no persistence anywhere, but there is an exception: you have to have something to tell it a configuration, right? So if you look here, and I think this has a pointer, yes. We get to put a default volume size, and this is in gigabytes. That's traditional Cinder fashion there. A mount point, so this just says: where do we want to mount volumes when we attach them? The thing to remember is it's kind of similar to how Nova works when you do an attachment. It doesn't actually attach it inside the container. It doesn't do an iSCSI connection inside of the container. It does the iSCSI connection on the node that's hosting the container, attaches it, mounts it, and then does the pass-through into the container. So that's what we're gonna do. So this mount point that we look at here, this /var/lib/cinder/mount, that's actually gonna be on the Docker node that's hosting the container. 
And then we give it an initiator interface. So that's just the iSCSI stuff here. Again, I'm iSCSI-centric. I made up this host UUID thing for now and just slammed it in there. Because the whole point of this, so, the things that I'm doing in my lab with some other drivers that I wrote are more things with Kubernetes and Mesos and things like that, and I need to have ways of tracking those nodes and knowing what they are. I've been trying to come up with a good way to use a node UUID and stuff like that to actually generate that and keep it that way. But for now, for this little hack in here, I just hard-coded it. So that's some of the things I need to fix before I push it up to GitHub. Then we have this endpoint, which should look familiar to anybody that does OpenStack. That is just our auth URL. So that's just telling the Docker driver where Keystone is and how to talk. That's it. And then of course username and password and tenant ID. So those last four things right there, those are just my OpenStack credentials. That's what's basically in my openrc.sh file. That's pretty much it. So after that, okay. After that, we can come in here and we can start my driver, right? Okay, so we're running, not doing a thing. That's cool. Let's come over here and let's do a cinder list, okay? So we go to Cinder and we do a cinder list. There's nothing there. So let's go to Docker and tell it we would like to create a volume. Okay, so just a basic volume, real simple, right? We say docker volume create, minus d to indicate what driver we want to use, if you have multiple drivers. And then, can you see that over there? Name foo. So I just give it a name. You can also give additional options: you can do a minus o and then give various arbitrary options, so you can do things like a different size. Right now I'm just gonna use the default. One of the things I've always hated about Cinder is you have to put in a size. 
Just give me the default if I don't give you the size. So you'll get a default of the one gig that we set in the config file. You can specify size, type, QoS specs, all those sorts of things. At some point, if this goes further and there's interest, I'll actually figure out how to do some things like create from another volume and stuff like that, if that's of interest. You can still do that outside using OpenStack Client or whatever. But we'll see where this goes. So we'll do that. If you come over here, you can look, I'm running this log here. You can see, basically, there's not a ton of info here, but basically I just created the volume and registered where that path will be. Docker's a little funny. When you create the volume, you wanna actually tell it and give it the information on where that path's gonna be and everything else. And then you're gonna return it again later, so it doesn't matter. But that's why that /var/lib/cinder/mount/foo, that's where that's coming from. And I think that's a bug, now that I think about it. So if you now do a cinder list. So you should load this in your production environment ASAP. Yeah, there's one tick. Yeah, dang it, let's see what I do here. That's so disappointing. It really is, it makes me wanna cry. What did I miss, hold on a second. Now, now, now. POC code, how can I be any more clear? You know, actually, I do this on purpose, because I didn't want it to work, because we'll have a better interaction with each other, right? So, okay, well, I've apparently foo-barred something. So I'll have to go take a look and see what I did. Sorry about that. Odd, very odd. That should have worked. Well, that's always an option. No, I am not. No, I did not. That's what I was just looking at. Yeah, good call though, because earlier this morning I did that and was like, well, sorry about that. That's okay, because we're running out of time anyway. 
And there's a couple of things I want to talk about, and then give people a chance to ask any questions or anything like that. So that was less than impressive, but really, I can do better, I promise. That's code written at four in the morning, so. Some of the things that I've been looking at with this: I'm a big fan of Golang. It's a lot of fun, and there's some reasons that it's beneficial to use and stuff like that. But one of the things I was talking about with some of the folks on the Cinder team is, if there's interest, it would probably be smart and kind of cool for something like this to actually live as a sub-project inside of Cinder, and get the whole community behind it. There would be a lot of advantages there; that way you could get people that want to do Fibre Channel and stuff like that to contribute. The trick with that is, as some of you may know, OpenStack is traditionally pretty Python-centric. So I don't know if using Golang is the right way. So I've got another little prototype that I started spinning up to do all of this in Python. It was a little more difficult for me, just because I had to rewrite the web interfaces and stuff like that. By no means rocket science or anything hard, it's just figuring it out. So that's an option, we can do that as well. The performance isn't quite as good; we should get some benchmarks. But I'm gonna raise it in some of the design sessions this week and see kind of what people think. So I think for Newton, I would really like to see something, possibly community driven, to kind of make this more publicly accessible. So, anyway, thanks a lot for your time. I am sorry about the demo, it's a little disappointing. I didn't even get to create a container and attach it. But if there's any questions, I think we've got a minute, but also feel free to come on up and grab me. So, thanks a lot.