Last session of the day, never good. Last session of the day on the last day of the week. Last day of the sessions. All right, we actually have more content on this one than we probably have time for, so I'm gonna get going here. I'm Ed Balduf with SolidFire slash NetApp. John Griffith, same place, SolidFire and NetApp. We're gonna go through some of these slides, then we're gonna skip a bunch and we're gonna try to do a live demo. So yes, hopefully the demo gods will be with us.

So let's get started. We're gonna talk about consuming Cinder from Docker. Yeah, that's a good title, although there's some "beyond Docker" in there. So the first thing that everybody asks is, hey, Statler, what's Cinder? Who cares, Waldorf? He's gonna talk about Docker, because every time you put Docker in the title of a presentation, you usually get more people. Hey, where are all the people? Hey, we're gonna talk about Docker. Come on in. Tell all your friends.

We're at an OpenStack conference, so this slide is a little less important here. We usually go through and say, what is Cinder? Most of you are probably familiar with that, since it is OpenStack. It's an abstraction layer for a pool of block storage devices. On the back end, we support many back ends. John keeps quoting that there are 80 of them supported in the upstream tree. I haven't counted, so I can't very well validate that; blame it on John. It could be 82, I could be wrong. So you can keep plugging in back ends and you can scale it out. The scheduler can figure out where to place things. It's kind of like having an infinite number of disks that you can hot plug and unplug from your instances or your containers, right?

So somewhere in here, Docker made the Docker volume API. You can do things like create, delete, attach, detach, and, who put snapshot in there? So these are the things that I say you need in a cloud. Right, okay, those are the things John says you need.
Those are the only things you need, but. Docker supports the first four and a list command. The snapshot stuff is not in the basic Docker API. We'll talk a little bit more about that in a bit, but we have some ways to make that happen. And then there's the stuff that a lot of people want, and again, we can make some of this happen. This is John being opinionated and blunt. These are other storage features that you may want: replication, consistency groups, backup, migration, so on and so forth. You don't really want those. Yeah, maybe you want them. Some people want them. And he made the presentation. But hey, I thought this would be a Docker talk, and, which character was that one again? Gonzo, maybe? I don't know.

Docker was the best ever geek bait, and they're flocking in. It's, you know, OpenStack and containers: that's one way of doing it. You can put containers on OpenStack. Container orchestration around OpenStack, from Magnum and Kubernetes. There's a lot of things. I'm actually gonna try to do a Mesos demo with Docker under the covers here at the end. And that's where we get the unicorns for everyone. It's also driven some interesting ideas and plans. Again, we see Kolla, we see Magnum. How do we use containers to deploy OpenStack? How do we have OpenStack consume containers? So on and so forth. But let's bypass a little of the hype and do some cool stuff.

And so, oh yes, we gotta build this out. So, new thing: the new thing used to be OpenStack, the new thing now is containers. There's, you know, the pets versus cattle conversations, and chickens. I have a presentation I do where I talk about, you know, we've got pets, which are the enterprise apps. We've got cattle, which are the apps in the cloud. And then we've got chickens, which are containers. What do you got against chickens, man? Chickens live less long than cattle do, right? Less long. Less long. All right, Donald.
So there's always something we need: better networking, better persistent storage, those kinds of things, a different development paradigm, you know, small ephemeral services. So just like here in OpenStack, containers need networking and storage. And in Docker 1.8, we finally got the storage provisioning bits, right? There's always been the ability to do storage in containers, but there wasn't automated provisioning, and that really didn't come along until 1.8. It's gotten a lot better in recent versions, so it continues to mature. There's a whole circus, you could use a lot of analogies there, I'm trying to think of one that's appropriate, but we'll just say there's a whole circus of vendors and other people trying to provide plugins to rapidly fill in what this API allows. There's a lot of confusion. There's a lot of stacking. We're gonna talk about what we think is a pretty straightforward one. If you've got OpenStack, taking advantage of Cinder to get your volume services is pretty straightforward. So we'll go through that.

Docker volume plugins: it's simple. The volume API, again, there's basically five commands. There's a couple more in the API, but it's basically create, delete, attach, detach, list, and that's all you care about. It does include provisioning. So when you say create, it will go off to the back end, in this case we're gonna talk about how that goes off to Cinder, and it will grab a piece of storage per your commands. It runs as a daemon. The daemon needs to run everywhere that you wanna consume services, so if you have Docker Engine on 300 hosts, you need to run the daemon on 300 hosts. Right now, it uses a simple Unix domain socket to talk to Docker, or rather, Docker talks to it that way. And it's JSON RPC; I already talked about that. It runs near the Docker Engine.
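To make that socket protocol concrete, here's a sketch of the exchange Docker has with a volume plugin. The socket name, volume name, and mountpoint path are assumptions for illustration; the endpoint names and JSON shapes are from the Docker volume plugin protocol:

```shell
# Docker volume plugins answer HTTP POSTs over a Unix domain socket
# that the plugin daemon creates under /run/docker/plugins/.
SOCK=/run/docker/plugins/cinder.sock   # hypothetical socket name

# Handshake: Docker asks what the plugin implements.
curl --unix-socket "$SOCK" -X POST http://localhost/Plugin.Activate
# responds with something like {"Implements": ["VolumeDriver"]}

# Create: this is the call where the plugin turns around and talks to Cinder.
curl --unix-socket "$SOCK" -X POST http://localhost/VolumeDriver.Create \
     -d '{"Name": "test-6789", "Opts": {"size": "20", "type": "gold"}}'

# Mount: the plugin attaches the volume to the host, formats it if
# needed, mounts it, and hands the mountpoint back to Docker.
curl --unix-socket "$SOCK" -X POST http://localhost/VolumeDriver.Mount \
     -d '{"Name": "test-6789"}'
# responds with something like {"Mountpoint": "/mnt/test-6789", "Err": ""}
```

Because the contract is just JSON over a socket, the plugin can be written in any language, which comes up again later in the Fuxi discussion.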
It works with Swarm, it works with Engine, it works with Compose, it works with other things that consume Docker. So like I said, I'm gonna try to do the Mesos demo and some Swarm demos at the end of this. So with that, I'm gonna turn it over to John here to talk for a while.

All right, thank you. So Ed has the unfortunate pleasure of having worked with me for a really long time, so I give him a really hard time on stage, and I'm gonna continue that tradition today. I'll pass it back. We do that a lot, so. Anyway, so all this stuff's going on and everything else. At the time when I started this, and some of you have seen this talk, there wasn't really a good solution, so I created one. I was bored sitting around one weekend and said, hey, I'm gonna write a plugin. So I wrote a plugin. It's written in Go, it's in Golang. Most of the stuff in Docker, Kubernetes, et cetera, is all Golang; I stuck with that mantra. And it is focused completely on Cinder, and that's it. I don't care about what's underneath. I don't have any vendor interests, anything like that; there's nothing associated with it. It is pure Cinder. It is open source. I gladly welcome contributors and feedback, and I was hoping for community support and feedback and things like that, but I'll talk about that in a little bit; I think some things are gonna change. I licensed this under the "I don't care what you do with it" license. It's actually a thing, look it up. And it's kind of interesting, because I've found a number of repos out on GitHub where people are doing whatever they want with it, and so that's kind of cool.

But, so this is the part where most of the OpenStack development community, especially the Cinder guys, get kind of freaked out, because I don't have these as part of any organization or grouping or anything like that. They are not a sanctioned OpenStack project, not even a big tent project. And they're not a Docker project.
So they're just in my own personal GitHub, publicly available, do what you want. Some people like that, some people don't. Those people aren't here, but there's one person I know that if he saw this, he would literally have a stroke. It would have been cool if he was here, because, well, yeah.

So how does this all work? The thing that's kind of cool, in all seriousness, the actual point of this exercise was to show that Cinder is actually something that's valuable outside of OpenStack, or outside of a Nova context, right? That was kind of the whole idea. That's what I was hoping to raise awareness about and get people thinking about. Because the reality is, any consumer of storage, any consumer of block storage out there, isn't really any different than Nova. We're all doing the same thing. We're creating, attaching, detaching, deleting. I mean, that's what we do with storage. So it doesn't really matter. So the whole concept and the whole idea is trying to drive the point that you can use Cinder for other things. And you should use Cinder for other things. You've already invested in OpenStack. You have a cloud, and you have all these back ends that are abstracted and everything else. There's no reason to go out and reinvent the wheel and write a new abstraction, whatever it might be. And I won't name some of the companies that are doing this. You might as well just use what's there. There's no point in writing another one. Create volume, attach volume, it's all the same stuff, right?

I always do this just because Docker gets a lot of hate, so I like to give them some love. I think they're doing a great job. I love what they've done in 1.12. It's made my life easier. Things are fantastic for me. Some things break once in a while, sure. But, you know, in OpenStack we probably really shouldn't cast stones about that, right? So, all right.
So I'm gonna go into some things here, and I'm gonna skip through most of this, because we thought, rather than look at screenshots and videos and stuff like that, Ed's gonna be brave and he's gonna do live demos. And I'm gonna heckle him and laugh at him when they fail. If it doesn't work, we have a video that you're gonna narrate, right? Yeah. And the good part is these screenshots are in the deck. So if you get the deck, which is already up on SlideShare, you can look at the screenshots instead of having to watch the video. Yeah, so a couple of things: the slides are on SlideShare, and I also have all the stuff up on a blog, and he has his blog, and we linked each other, so you can find all that stuff.

So what we're gonna do is we're gonna take our OpenStack cloud. All we have on there is compute, networking, and storage right now. And we're gonna mix in our chocolate, which is Docker. And then we're gonna throw in a little frosting, which is our Cinder Docker driver that we're talking about. And we're gonna make a nice tasty treat.

I'm gonna go ahead. Actually, you don't even need these, do you? I'm gonna let Ed take over and show you what I just skipped. We'll try this here. Screen mirroring. Uh-oh. We're all right. So I'm VPNed back into Denver. We'll do the Docker Swarm stuff first, right? Because I have both of these set up. We'll try to do that. And you can tell me what I'm forgetting here. So docker volume ls is the first thing we wanna look at. It'll give us some stuff. I have a test volume, one, two, three, four, five. So we've created that. We can actually go back and look at our OpenStack here. So one of the things you have to do, and it's a good practice, and this is gonna kick me out here, so I'm gonna walk back in: make different users for each of these. So I have a user for Swarm, I have a user for Mesos. Again, we've got a little latency here.
So we're gonna go log into Swarm here, and hopefully the demo gods will look upon us well here. We'll actually see this. Ethernet gods. So the other thing we can do while we're doing that: docker volume, if I can type, right, create. And then we give it the driver. So you can have multiple of these. So again, we talked about, there's a lot of, I don't know, you wanna annotate while I type? Yeah, sure.

Yeah, so okay, so what Ed's doing here is he's gonna go ahead and create. One of the options is a -d, so you specify what driver you want. If you leave that out, you just get the default local driver inside of Docker. You can have multiple drivers on a single machine. So you could have a NetApp, a SolidFire, a Ceph, a Gluster, whatever you want, and have all of them running simultaneously on the same nodes, and then just pick whichever one you want. Now, the beauty of Cinder is it does all that for you. So you just run the one and talk to all of them, right? So the next thing is the name. Obviously you get to provide a name if you want. And then the size, size in gigabytes, and then type. So we take all of the same parameters that you can pass into Cinder, and we go ahead and abstract those and pass them back up, so that you can actually get those from the Docker API as well, right? The only difference is you use the -o, the option method, instead of --volume-type.

So literally we got that one. We should be able to go in here in our OpenStack and see that we have our two volumes in here. So the second one I created was 20 gig, the first one was a gig; they're both there. You can see the different types. Anything else you wanna say about that? Perfect, all right. So then we would do something like docker run. Keep talking. So now we're gonna launch a container, and what we're gonna do is we're gonna say, hey, attach this volume to that container when it boots up, right, or when it launches. So it's pretty simple.
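What's being typed in the demo looks roughly like this (a sketch; the driver name `cinder` is an assumption, use whatever name your plugin daemon registers, and the volume name follows the demo):

```shell
# Create a Cinder-backed volume through the Docker volume API.
# -d selects the volume driver; -o passes Cinder-style options through.
docker volume create -d cinder --name test-6789 -o size=20 -o type=gold

# List the volumes Docker knows about (the Cinder driver answers this
# by listing the volumes in the tenant).
docker volume ls

# Launch a container with that volume attached: the left side of the
# colon is the Docker volume name, the right side is the mount point
# inside the container.
docker run -it -v test-6789:/data ubuntu /bin/bash
```

If the `-d` flag is left off entirely, Docker falls back to its built-in local driver and no Cinder call is made.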
This is one thing that works a little bit differently than what we typically do inside of OpenStack, because what happens here is, when we wanna mount that volume in Docker and in containers, we're dealing with file systems. We're not dealing with raw block devices. So what that means is, the first thing we're gonna do before we go into the container is we're going to partition that disk and put a file system on it. Right now, by default, I just use ext4 for everything. It's configurable; you can change that around and put something else on there if you want. So he's already done it. You can see the arguments: -v is the volume, and the first argument is the name of the volume from the Docker perspective, and the other side of the colon there is where you want to attach that volume inside of the container. And then of course he's using Ubuntu, and he's launching an interactive Bash shell. So there you can see there's our volume. Yay. 20 gig volume.

All right, so part of the demo gods shined upon us. Let's actually log back into this and look at what's on the base operating system just for a second here. There we go. Cut off at the bottom there, buddy. Yeah, I see. Oh, you can see it out there? You're good. So you wanna talk about that? What's that on the host? Oh yeah, yeah, yeah.

So, for those of you that are familiar with how OpenStack and Nova work, the way we do attachments to an instance is we actually go ahead and make a connection to the compute node, the compute host, and then we pass that in, we pass that through the virtualization layer. We do the exact same thing with Docker. So what we're gonna do here, in our case, is we're going to create an iSCSI connection.
We're gonna iSCSI attach it to the Docker host, put the file system on it like I talked about, mount it, and then pass it into the container as a file system. And I'm just kind of showing some of the other bits here that you may or may not wanna see, with the iSCSI connection and how it translates into the volume. So let's go back to this. We'll get out of there.

So the thing that's cool about this, and the thing to keep in mind, and the reason why, depending on who you talk to, the 12-factor app zealots and things like that and which side of the fence you're on, it doesn't really matter: the reality is, if you have cases where you wanna have persistent storage, and you wanna use it for something like a database, you wanna be able to have that transfer across a cluster, whether it's a Swarm cluster or a Mesos cluster or just your own bare metal cluster or whatever it might be. With things like iSCSI, that's incredibly easy, and that's why it's a good choice for OpenStack as well, right? All you have to do is disconnect and reconnect somewhere else, and you can just move that around. So with Swarm, for example, your container goes down, your node goes down, Swarm automatically detects that, and it respawns that same container somewhere else on another node, and it also takes the volume with it for you. So you don't even have to worry about that; it just does it all for you.

So I'm gonna have to go pull that up. So I wrote a test file here out to that volume, and I'm gonna go create a Swarm service here. It appears that it's not in my buffer, so we're gonna pull it out of my magical list of things here, just because we don't wanna see me type this and screw it all up. Crib notes, I need crib notes. So go ahead and keep talking. You see what I did there? I'm making this up as we go, so I'm kind of screwing it up. He's gonna screw with me, I'm messing with him, so.
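In rough shell terms, the attach path just described looks something like this. This is an illustrative sketch of the sequence, not the plugin's actual code (the plugin does this in Go); the iSCSI target, portal, device name, and mount path are all placeholders:

```shell
# 1. Attach: log in to the iSCSI target that Cinder's initialize_connection
#    handed back. (IQN and portal are placeholders.)
iscsiadm -m node -T iqn.2010-01.com.example:volume-test-6789 \
         -p 192.0.2.10:3260 --login

# 2. Inspect: does the newly attached device already carry a filesystem?
DEV=/dev/sdb   # placeholder; discovered from the iSCSI session in practice
if ! blkid "$DEV" >/dev/null 2>&1; then
    # 3. First use: format it (ext4 by default; configurable).
    mkfs.ext4 "$DEV"
fi

# 4. Mount it on the Docker host, then report the mountpoint back to
#    Docker, which bind-mounts it into the container.
mkdir -p /var/lib/cinder/mount/test-6789
mount "$DEV" /var/lib/cinder/mount/test-6789
```

Moving a volume to another node is the reverse: unmount, iSCSI logout, then the same login/mount sequence on the new host, which is why Swarm failover can carry the volume along.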
Yeah, one hour, dude? That's completely wrong. So what Ed's doing right now, for those of you that haven't used it, check it out. You can look at, like I said, the slides, or just go out and Google it. But the Docker 1.12 stuff with the built-in Swarm makes life really, really easy. So you can go ahead and combine that with Docker Machine and talk directly to your OpenStack cloud. So you can have Docker go out and create instances for you, install Docker, install your security keys, everything else, set everything up for you, and have a running Swarm cluster in a matter of minutes. I mean, the only thing that takes any time is how long it takes to install the software, and that's it. It's really, really awesome. So you do that, and then what you do is you create a service. The way Swarm works is you create a service that runs across a group of hosts inside of your Swarm cluster, right? So that's what Ed's working on right now.

Right, so I did that, I started it up. I actually put a web service in here, so it just runs a browser, an ls file listing here in the root, so we can go in and see now that this is attached to a different, oh, I connected it to my other volume. So we'll go do this again. So this is one thing, though, while you're here, for folks that try this. One thing that is kind of interesting, and we can fix this if people are interested: by default, we just do a raw mkfs and create the file system in there. If you are using this for things like SQL databases and stuff like that, you could run into a little problem, because we use the defaults and you get a lost+found directory, and if you don't go in and delete that, sometimes MySQL gets a little upset with you. MySQL gets upset; my MariaDB does not. So I'm gonna kill my service and restart this again, because I connected it to the volume that was already there. I'm gonna go back and connect it to the volume that I made that file in, so I gotta go back.
So one of the things you'll see, if you're watching this, I know it's getting cut off a little bit at the bottom here. Oh, you're good in the back? It's actually just on here. Is that that whole command went off and created yet another volume, 'cause I didn't give it the name. So now I'm gonna give it test-6789. So, I don't know, you wanna talk about this real quick? So things are a little bit different, right? It's a little bit different, but it's the same. So when you have a service and you're dealing with the service, it's a little bit more complex and there's a few more options, but you can kind of figure it out, right? So they changed the syntax, and now what you're doing is you're actually giving a mount command, and you're telling it specifically, hey, I'm mounting a Docker volume. That's really important. When I first started doing this, before the documentation came out and before 1.12 was official, it took me a while to figure out why things didn't work. And you can see you're giving it type volume, you're saying the source, destination, et cetera, et cetera. So all the same options that we had before with the native Docker volume API are still there, but unfortunately the syntax has changed, and you have to do it a different way. But that's life.

So one thing I'll point out here: the second time I did this, I gave it a volume that already existed, and we're gonna prove that it didn't actually go create that again. So even though I gave it all these commands for where the volume driver was and what size I wanted, it's still gonna be our 20 gig volume, because the name is the key here. test-6789 is a name that I already created, so it won't go create another one. That's just the way Docker does it. So when we jump over here to this guy and we refresh him, there's our test file. So that proves that it was there and it stayed in that volume. If you guys wanna see, we can go download it and open it, right? So it's still there. So that's pretty interesting.
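The Swarm-service flavor of the same attach looks roughly like this (a sketch of the Docker 1.12 `--mount` syntax being described; the service name, image, and port are assumptions):

```shell
# In a service, -v gives way to --mount: you must say type=volume
# explicitly and name the driver that should resolve the source.
docker service create --name web --replicas 1 \
  --mount type=volume,source=test-6789,destination=/usr/share/nginx/html,volume-driver=cinder \
  --publish 8080:80 \
  nginx

# Because "test-6789" already exists, the driver finds it in Cinder and
# reuses it; the name is the key, so no new volume is created.
```

Leaving out `source=` is what caused the stray extra volume in the demo: Docker asks the driver for a fresh anonymous volume with default options.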
The other thing, and I'll let you talk again: I'm gonna go over, so that it's running on D1. So I got three nodes: D1, D2, D3. Creative. If I docker ps here, I can find that container, and if I docker kill that container ID, now, this is not through the service, but I'm gonna kill it, and Docker Swarm will actually go and respawn it. So we're gonna watch that here.

So Ed mentioned how we don't actually recreate the volume and stuff like that, right? So what we do is, anytime that docker volume create call comes in, we go to Cinder, we do a Cinder list, and we catalog through it and try to find that volume. If it exists, we just use it. If it doesn't, we create it. And there you can see it's now on D2, so it moved. Swarm moved it over to D2. If you wanna see this, we can reproduce it. Yeah, we believe you.

But the thing that's cool, so sometimes people ask, what's the big deal about containers, right? I can do all these same things with OpenStack, and some people don't realize that you can do all of these same things with OpenStack. You can do these failover type things and stuff like that. But the difference is you can't do it in three seconds. That's the difference, and that's what's pretty cool. And you can keep playing this game. You can have a cluster of 10 nodes, and you just sit there and take them down, and watch that thing go round-robin all the way through. It takes about two seconds at most.

So now I'm gonna cut over to Mesos here, Marathon. Just kind of, if you haven't seen the GUI before, there you go. You can hit the Create an Application button, but I'm actually gonna do this through Postman and just submit it through the API. So I have a, I don't know, you wanna talk about this? No, this is all you. All right, so basically Marathon defines things with a JSON file here. So we've got this JSON file.
A couple of things to point out here is that you're telling it that it's a container. I don't know if you guys can see that. Yeah, I guess it's big up there. You're telling it it's a container, you're telling it it's a Docker container. So Mesos, or Marathon, is going to call into Docker through the Docker API, and then it's gonna use Docker's capability to go off to the Docker volume service and get these volumes. So that's the next couple lines here: we're passing these parameters where I say what the Docker volume driver is, and then I say I want a volume, and I'm gonna give it a name here called meetup-mysql-cinder, and then I'm gonna tell it where to mount it.

So what I'm actually building here is phpMyAdmin talking to a MySQL database, just to get us something that's a two-tier, two-piece app here. I actually have Mesos-DNS set up, so this will find itself no matter where it runs. It takes a second, but we simply submit this through Postman to the Marathon API. You'll see it comes back here and gives us an ID. From there, we can actually continue through the API. I like Postman because it's got the editor; I can just hit submit. But we'll duck over here to this, and you'll see that the meetup-example, where we started building this example, is created, and then it builds a tree of the different parts. So we've got the database and the phpMyAdmin. When we go in here, you'll see that's the service running as a container, and you saw the other one in there, which is the phpMyAdmin. So if we drill into these, you can get all the details. You can see down here, this is running on these different sets of hosts. This one's called container one, and there's the ID. If we killed it, it would start up somewhere else, but that's not what we wanna do here. We wanna actually go in and look at the phpMyAdmin. That's running on container two.
So these have to do service discovery and find each other, because Mesos will schedule them wherever, right? That's the interesting thing. So then we can actually fire this up, and if the demo gods are good, it comes up, and if I can remember what the password was that I put in here, we'll have to go look over here. We're passing all this stuff in as environment variables. Password was, like, password one. Yep. So we should be able to... It's much better than password two. Log in. Password manager. So sometimes, if I go through more, I mean, we've kind of chased the service around with Swarm; Mesos and Marathon will do the same thing. We can go kill them, but I'll create a database table in here. I mean, you can see that the basic database has been created and it's talking to it. And so you can use the Docker volume. It's gonna do its favorite thing here.

So for those that are wondering and may not know, a common question I get is, can I use this with Kubernetes? And the answer is absolutely not. As of right now, and probably for the foreseeable future, Kubernetes does not actually leverage the Docker volume API, so it doesn't have any interaction with that. Demo gods said we're done. Kick this out. So they use a completely different model. The good news is, as of recently, a couple of weeks ago, they have decided to have an official volume API that we can develop to, and we can create something to use that. So since Kubernetes is all the rage and everybody loves it, that'll be the next thing.

So along those topics, did you have anything else you wanted to... No, that's, I mean... The other thing I wanted to touch on: as far as this project goes, I started this quite a while back. It's been moderately interesting, and some people are interested in it and using it. Since then, there's been a group of folks in the Kuryr project inside of OpenStack that have launched a similar thing.
So I'm actually gonna meet with them tomorrow and look at maybe working with them, converting all of the stuff I did into Python, adding it to some of the stuff that they've already done, and seeing what we can do to actually make it an official big tent project. So that's kind of where things are going. Hopefully in the future, rather than j-griffith's GitHub, it will be on the OpenStack GitHub. That would kind of be the idea.

That's the name; that's not how it's pronounced. So I didn't say the name because of the way it's pronounced, but it's actually F-U-X-I. You guys can imagine how the pronunciation works. Maybe that is how it's pronounced, the way it's supposed to be, but... So yeah, there's Fuxi; we'll call it Fuxi. So Fuxi is out there. It's a little different approach. It's written in Python, so that's good; it can be in the OpenStack ecosystem without any challenges or anything like that. There's a little difference in performance and things like that that you're gonna get. And also, there's a number of things missing right now. But I was thinking I could probably pick that up and run with it and get something going. It would be interesting to try and get something with the Golang stuff, just because that's native to Docker, working inside of OpenStack. But the reality is, it's just a Unix domain socket, so Docker doesn't care, it doesn't matter. It can be Python, it can be Ruby, it can be whatever you want. So there's no reason to rock the boat or make life more difficult, because then you've gotta come up with an infrastructure and a CI and everything else, and it's a lot easier to just use the cookie cutters that we already have.

We've got a little time left, so one of the things that's interesting here is I'm going to go out to Cinder directly and create another volume. The nice part is we get all the Cinder features.
So snapshots, clones, those kinds of things come along with Cinder that aren't supported in Docker. So I just created another volume here, and when we go back to this guy, we'll see it shows up in there, another volume. We're just looking for all the volumes that are in that tenant in Cinder. And so now, if I wanted to do things like create a bootable volume, well, it wouldn't be bootable here, but we might have an image that's in Glance, and we lay that down so that when it gets mounted in the container, it's just there. And we can do that through Cinder commands, which gives us all the features and functions. And we can extend volumes, which Docker doesn't support; we can change types, which Docker hasn't built support for. So having Cinder on the back end really brings a lot of value to that stuff.

And that's a really good point. So one of the things that some people ask is why I did this, as opposed to using vendor X, Y, Z's plugin and talking directly to the back end. That's the main reason. All of the stuff that you get from Cinder, you now by default kind of have in Docker, which is a pretty big win. It's a significant consolidation of effort and a consolidation of code. The big thing is, the whole Linux mantra is basically do one thing, do it really well, and then build on top of it. So the premise here is let Cinder do the abstraction stuff, let it do it very well, and build one small layer on top of that, and so on and so forth. So that's kind of the idea. Right, so I mean, there's lots of bits where I can go and create a snapshot and then create a volume from that snapshot, which is basically creating a clone. So, I don't know, what else do you want me to show, John? Does anybody want to see anything? Any questions?

So the question was, what do the types mean, and what does gold mean? So inside of Cinder, we have a concept of volume types, and the nice thing about volume types is you can make them whatever you want.
So it's an arbitrary label for metadata. An administrator has the ability to go in and create a type, and they can call it Foo or Gold or whatever they want, and then they can assign extra specs, metadata key-value pairs, to that type. Now, the end user doesn't see that information. It could be something like "this customer's lame, charge them double," who knows, right? It could be whatever they want. But when that person says, hey, give me a type of gold, in this case what we did is we used gold to signify higher-performance storage. Yeah, and the way that works is the Cinder scheduler then looks at all of its back ends and tries to match the filters it has based on what you passed in in that type.

If you want to talk about it, I'm putting this up on the screen. So, I'm using ext4 right now, but you can configure that and change it. If you wanted to use XFS or ext3 or whatever, you could change that. So for the question on volume types, I just put it up on the screen here, so you saw a list of volume types. You'll notice on the list, it's just kind of scrolling off there, they're all on SolidFire right now, but that could be multiple back ends. So that could be an EMC, a Pure, LVM, Ceph. Well, we can't do Ceph, because we don't do iSCSI on Ceph. And then down here are the parameters we're passing for the different qualities of service. So again, these are all SolidFire; we're using SolidFire semantics here, but if you were passing these into a NetApp back end, they have a different semantic for what they do for QoS. And that's a good point. This particular thing that I have right here is iSCSI only. It doesn't do RBD, it doesn't do Fibre Channel and those. That is one of the other advantages of bringing it under the Kuryr umbrella: then we could use the internal things and have folks internally test it. So.
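On the Cinder side, that admin workflow and the extra features being demoed look something like this (illustrative: the type name follows the demo, the `qos:` keys are SolidFire-style examples and other back ends expect different extra-spec keys, and `<snap-uuid>` is a placeholder):

```shell
# Admin: create a "gold" type and hang QoS extra specs off it.
# End users see the type name, not the extra specs behind it.
cinder type-create gold
cinder type-key gold set qos:minIOPS=1000 qos:maxIOPS=5000

# The scheduler matches "-o type=gold" requests from the Docker plugin
# against back ends whose capabilities satisfy those specs.

# And because Cinder is underneath, features Docker's volume API lacks
# are still available out of band:
cinder snapshot-create --name test-snap test-6789        # snapshot it
cinder create --snapshot-id <snap-uuid> --name clone 20  # volume from snapshot
cinder extend test-6789 40                               # grow the volume
```

The snapshot-then-create pair is the "basically a clone" flow mentioned above; the extended or cloned volume shows up in `docker volume ls` on the next list, since the driver just catalogs the tenant's volumes.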
Small Q&A: what happens when you create two volumes with the same name in OpenStack and then try to... You've got big problems. Yeah, so that's bad news. Unfortunately, Cinder still does not enforce unique names. I always tell people, pick a control plane, one or the other, not both. You know, Ed was demoing how you can do that, but you really shouldn't. So what I do right now in the Docker code is, if I see more than one volume with the name, I error out and say, sorry, you've got two, I don't know what to do. Good question. Something else that I've looked at, and again, if this lives on and goes forward, what I'll do is I'll stash the UUID from Cinder inside of the object on the Docker side, and I'll use that as a source of truth. Most of that code's already there, so it's not hard. Yeah. Okay, yep.

So the question was, how does the volume attachment actually happen? And do you mean from the Docker host perspective or from the Docker container's perspective, or both? Okay, container. Well, okay, so it's still kind of the same thing. So let's say we have a volume that we've already created and used. Or no, actually, let's start from scratch, so we don't have a volume. We say docker run, volume driver cinder, volume empty, whatever. What's gonna happen is Docker's gonna make a call down to the Cinder driver, and it's gonna ask it, hey, do you have a volume named empty? If it does not, it will say, okay, create it. And so we'll go off to Cinder and we'll call Cinder create volume with all the options and stuff like that and create that volume. And then, a little bit later in the process, Docker's gonna come back to the Cinder driver again, and it's gonna say, hey, I want you to attach this volume. So inside of that driver, I'm gonna go find that volume again, I'm gonna grab it, I'm gonna do an iSCSI attach, and I'm gonna inspect it and see if it has a file system on it.
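That flow — look up by name, refuse ambiguous names, create on first use, then attach and probe for a filesystem — can be sketched as follows. This is a hypothetical sketch, not the plugin's real code: the filesystem probe is stubbed with a flag instead of a real blkid call, and the paths and helper names are made up.

```python
# Hypothetical sketch of the driver flow just described; not actual
# plugin source. `has_fs` stands in for probing the attached device.

def find_volume(name, volumes):
    """Return the volume matching `name`, or None; raise if ambiguous,
    since Cinder does not enforce unique names."""
    matches = [v for v in volumes if v["name"] == name]
    if len(matches) > 1:
        raise ValueError("more than one volume named %r; refusing to guess" % name)
    return matches[0] if matches else None

def ensure_volume(name, volumes, size_gb=1):
    """Docker asked for `name`: find it, or create it on first use."""
    vol = find_volume(name, volumes)
    if vol is None:
        vol = {"name": name, "size": size_gb, "has_fs": False}
        volumes.append(vol)
    return vol

def attach_steps(vol, device):
    """Steps run after the iSCSI attach: format only when the device
    has no filesystem yet, then mount and hand the path to Docker."""
    steps = []
    if not vol["has_fs"]:
        steps.append("mkfs.ext4 " + device)   # default fs is configurable
    steps.append("mount %s /var/lib/docker-volumes/%s" % (device, vol["name"]))
    return steps

vols = [{"name": "data", "size": 10, "has_fs": True}]
vol = ensure_volume("empty", vols)        # created on first use
print(attach_steps(vol, "/dev/sdb"))      # format, then mount
print(attach_steps(vols[0], "/dev/sdc"))  # already formatted: just mount
```

The mount path is what ultimately gets handed back to Docker, which then bind-mounts it into the container.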
If it has a file system on it, all I'm gonna do is mount it. That's it. If it does not have a file system on it yet, I'm gonna run a mkfs on it and format it. After that, I go ahead and I send that information, where I mounted it, back up to Docker. Docker now knows what to do with mount points, and it just passes it into the container. And that's it.

So the... Yeah, go ahead. Yeah, so Docker's kind of interesting. They really are pretty serious about this save-no-state thing, right? So the question was, what happens if somebody goes to Cinder directly and deletes the volume? If you go on the back end and you delete that volume in Cinder, it's gone, it's done. I mean, that's the end of that. Now, if you then say docker volume ls, you won't see that volume, unless Docker got in a weird state and cached the information somewhere, which it sometimes does. But yeah, typically what happens is all of a sudden Docker doesn't know about the volume anymore either. If it's actually actively in use and attached and you try to do that, Cinder will fail to do the delete. It will say, hey, you can't delete this, it's in use.

Good questions. I'm doing this all on the screen while you're watching, if you're... It will delete. It won't. It will not delete. Cinder won't let you delete a volume that's in an attached state.

Oh, so you wanna use the Docker volume plug-in for something else? I'm not, actually, I'm not. Okay, yeah, we should chat. I'd love to hear what you're doing. It'll be cool. I don't know, our time is probably up, I think, so four minutes usually. Any other questions, quick? We'll be up here for a bit afterwards.

Oh, one more? So yes, as a matter of fact, there has been for quite some time. So there's a SolidFire GitHub, and it has one. But one of the nice things about the acquisition and the merger of SolidFire and NetApp is NetApp actually has a really cool team of folks that are focused almost exclusively on Docker and containers.
They've created a project called DVP, Docker Volume Plugin, which is a NetApp Docker volume plug-in. They were able to just take the code that I wrote and put it into that NetApp DVP, and it has support for SolidFire, cDOT, FAS, all the NetApp portfolio. So you've got options, it's kinda cool. There's a lot of choices out there. That happens to be loaded on this machine too, so there's one from a NetApp box. I didn't give it a size or anything.

All right, last one, 'cause they're dimming the lights. Yeah, you need OpenStack. Well, actually, no, let me clarify. You don't need OpenStack, you need Cinder. And one of the things that people don't really understand is installing Cinder is actually pretty easy. It's all of the other stuff that comes with it that makes things a little bit more interesting. Cinder is actually really easy to install. All you need is the Cinder services, MySQL, and Rabbit, and you're done. But we're working on changing that. So I actually have that on this laptop here. I have a very nice setup on a single VM that has just those things in it that I can use to develop stuff on the plane, back and forth. So, a fully standalone setup to do this on a single VM on that laptop. I didn't demo off of that today, but we can.

All right, with that, I think we'll wrap up. And the next guy, you look like you're here to present. No? All right. All right, cool. Thanks, everyone.
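As an aside on that standalone-Cinder point above: a single-node setup really does come down to the Cinder services plus MySQL and RabbitMQ. A rough sketch of what the cinder.conf for such a box might look like, with placeholder hostnames and credentials, and the LVM driver chosen here only as an easy default backend:

```ini
[DEFAULT]
transport_url = rabbit://guest:guest@localhost:5672/
enabled_backends = lvm

[database]
connection = mysql+pymysql://cinder:secret@localhost/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
```

With that plus a populated database, cinder-api, cinder-scheduler, and cinder-volume can run without the rest of OpenStack.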