All right, we can get started. This is standalone Cinder. And we're going to talk a little bit about Cinder: what it is, how it works with OpenStack, and sort of the normal deployment. We will talk about and demonstrate Cinder alone, without OpenStack or any of the other services running. We'll make a case for Cinder with at least Keystone and Glance, we'll show you that as well, and hopefully we'll have some time for Q&A. The slides are already available at a Google short URL; if you'd like to find them, I'll show this again at the end. I'm Scott D'Angelo. I work for IBM, and I'm a Cinder core developer. You can get ahold of me through IRC or email. Thank you, Scott. I'm Ivan Kolodyazhny. I'm also a Cinder core developer, and I joined the Cinder team, I guess, two years ago. And I've worked with OpenStack since the Diablo release. Walter? Hello. I'm Walt Boring. Yes, that's my last name. And I joined the Cinder team back in the Grizzly time frame, and I'll be showing a demo at the end. Thanks. So what is Cinder? From our wiki: Cinder virtualizes the management of block storage devices and provides users with a self-service API to request and consume those resources without requiring any knowledge of where the storage is actually deployed or on what type of device. So Cinder basically abstracts away your storage and provides an API to get to it. So Cinder is a software-defined storage management system. This is kind of what Cinder looks like, similar to all, or at least many, OpenStack services. You connect to the Cinder API server via REST, either through the client or directly over HTTP. A message bus transmits messages from the API to the scheduler to determine where to deploy a volume, where to create it, depending on criteria you may have set up in your configuration. The message bus also passes information to the Cinder volume server, which does the work for Cinder. Volume servers have drivers that interface with the underlying storage.
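As a rough illustration of that scheduling step, here is a toy sketch of how a scheduler picks a backend by filtering and then weighing candidates. The backend names and capacities are invented, and the real Cinder scheduler uses pluggable filter and weigher classes rather than this simplified logic:

```python
# Toy model of a filter-and-weigh scheduler decision.
# Backend names and capacities are invented for illustration only.

def filter_backends(backends, request_gb):
    """Keep only backends with enough free space for the request."""
    return [b for b in backends if b["free_gb"] >= request_gb]

def pick_backend(backends, request_gb):
    """Weigh the surviving backends: here, most free space wins."""
    candidates = filter_backends(backends, request_gb)
    if not candidates:
        raise RuntimeError("No valid backend found")
    return max(candidates, key=lambda b: b["free_gb"])["name"]

backends = [
    {"name": "lvm-1", "free_gb": 40},
    {"name": "ceph-1", "free_gb": 900},
    {"name": "netapp-1", "free_gb": 200},
]
print(pick_backend(backends, 100))  # prints: ceph-1
```

In the real service, the same decision is driven by the filters and weighers you enable in your Cinder configuration.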
And so that's our architecture for Cinder itself. So what happens when you attach a volume? This is the normal use case, not Cinder standing alone, but the OpenStack deployment you may be familiar with. So you would actually interact with Nova. You'd go to the Compute API and make a request to attach a volume. Nova itself will use the Cinder client to connect to Cinder's API, through the messaging bus it'll go to the Cinder volume server, and then depending on your deployment, you'll use the proprietary backend or the individual backend driver to connect to the storage. This is where we get information on how to attach the volume. So iSCSI targets, Fibre Channel information; Ceph or NFS have their own connection information. This is all passed back up through Cinder, back through the API to Nova. And then Nova will use that information to directly connect the volume from the storage to the compute host. So Cinder's no longer a player in this; when Nova is interfacing with the storage, the data path, Cinder's out of the picture at this point. The tool that is used in Nova for this is os-brick, which is the underlying connectivity library. So the question really is, why not just use OpenStack for your Cinder connectivity? So one reason is this. If you want OpenStack, if you have need of all the services, you deploy them all, you're running a big cloud, that's all fine and good. But there is some complexity to OpenStack. This is an old picture, but I think it's still applicable. So what are the use cases for Cinder standalone? One is, as we just showed with the complexity of OpenStack, it gives you the software-defined storage management without all the overhead of deploying all those resources. So Cinder standalone will work with Docker containers. We believe that all of the supported Cinder drivers will work with Docker containers to attach a Cinder volume in a standalone way. Except LVM.
LVM is a little more complex because of the way it uses loopback devices, the way it uses udev. It's been done, but it's a little tricky. Most of the other drivers will work sort of out of the box with the tools we're going to show you today. Cinder will work on almost 100 storage backends. That's just the in-tree supported drivers. All the drivers are open source. They're in the tree. They run CI on every patch, so we know that they work. So when you get Cinder as your software-defined storage management, you get all these drivers. There's an approximate list of them. So if your driver's on here, or if you don't like your driver vendor and you want to kick them out and get a new one, you have options with Cinder. And most importantly, Cinder is open source. So lots of proprietary vendors will give you a software-defined storage solution. But it's not open source, and it's not under as active development. Cinder's very mature. Here are some Pike commit statistics: 369 commits, 140 bugs fixed, et cetera. A very active community. So unlike a proprietary system, if you were to use Cinder, you're going to get the vibrant community and the active development. So I think Ivan is going to talk now about Cinder as it's normally deployed with OpenStack. Thank you, Scott. So I believe many of us started with OpenStack using DevStack. It's the simplest way to deploy OpenStack on your workstation, virtual machine, et cetera. So I will show you a simple demo with DevStack. I've got an almost default configuration of DevStack. What's going on? Where's the video? It should be video. OK, maybe it's loading. So it should be a default DevStack, except the Cinder volume driver; I used the RBD driver for this demo. Oh, OK. So Cinder standalone started two years ago, when we were asked to make Cinder work with Ironic. That's why I will show you a demo of how to attach a volume to a Nova instance. And then I will create some file on this volume and read this volume from the DevStack host.
I will use the Cinder local-attach feature for it. It allows you to attach a volume on any host. And one requirement for this is the Cinder client and the Cinder brick extension, which is available on PyPI and in Debian packages. I believe on CentOS too; I did not test. We've got some issues with the internet connection. Must be the Wi-Fi. Yeah. That's the way it goes. Yeah. It looks like it's loaded enough of it. I don't know why it's not playing. It just didn't auto-start. Oh, I will skip. The demo is uploaded to YouTube, and all links will be available at the end of our presentation. You can download it. You can download the DevStack config file and use it in your own environment. So it's using Ceph, right? Yes, it is Ceph. And this is the standard DevStack deployment that most developers use when developing on any given project, right? So they stand up their DevStack, which includes almost all of the services. Yes. It includes Nova, Glance, Keystone, Neutron, Cinder API, Cinder scheduler, Cinder client. Yes. All right. So what box are you logging into here? You're doing an SSH. Oh, yes. It's a CirrOS default image. I attached a Cinder volume to this instance. And I'm using the default tools to create disk partitions and create a file on it. As a result of this demo, you can see that you are able to attach your volume to any host and read all the data. So one of the use cases is Ironic and containers, as Scott mentioned. And also, it could be used for troubleshooting issues: when something went wrong with your instance, you can attach the volume to a compute host, to some backup host, and do anything you need. All right. So you just wrote some data to a file there. So this is showing that we can detach a volume from one host and then attach it to another, I presume. Oh, the Cinder local-attach feature works only for admin users, because it uses an iSCSI connection, or an RBD connection in the case of the Ceph driver and Ceph backend.
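For a sense of what is being handed over in those attaches: an iSCSI attach returns a connection-information structure roughly like the sketch below. The field names follow the general shape Cinder's iSCSI drivers return and os-brick consumes, but every value here is invented for illustration:

```python
# Rough shape of the connection info Cinder hands back for an iSCSI
# volume. All values below are made up for illustration.
connection_info = {
    "driver_volume_type": "iscsi",
    "data": {
        "target_portal": "192.168.1.10:3260",
        "target_iqn": "iqn.2010-10.org.openstack:volume-4c2e4f1a",
        "target_lun": 1,
        "auth_method": "CHAP",
        "auth_username": "demo-user",    # invented credentials
        "auth_password": "demo-secret",
    },
}

# os-brick ultimately drives iscsiadm with this data; the login step
# corresponds roughly to the command string built here.
data = connection_info["data"]
login_cmd = "iscsiadm -m node -T {} -p {} --login".format(
    data["target_iqn"], data["target_portal"])
print(login_cmd)
```

Whether Nova or the brick client extension does the attach, it is this same structure that gets translated into the host-side connection.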
So what he's actually showing here is the Cinder client that has the Cinder client brick extension installed, so that way you can attach a volume to any bare metal host. So that's not actually inside a VM, right? That's, well, I guess that could be a VM, but it could also be bare metal. Yeah, sure. So the original impetus for all this was that, using Ironic for bare metal, we needed to have Nova spun up in order to attach a volume to a bare metal host, which seemed a bit ridiculous. So we, and by we, I mean Ivan, wrote the python-brick-cinderclient-ext extension that enabled Ironic to start to use Cinder without having Nova in the picture. But in fact, it basically allows anyone to use this tool to attach a Cinder volume. To be honest, Walter started with the brick implementation in the Cinder tree. And after we moved it out of Cinder into the os-brick library, we were able to implement these things. All right, so what are we seeing here, to go backwards? So let's talk about Cinder standalone. Okay. All right. It's almost the same, but as Scott mentioned, we need a lot of services to start Cinder with Nova, Keystone, all that stuff. It can be complicated, and not needed if you need only storage management. So how can you get it installed? Of course, you can use DevStack, but it's not production-ready; it's only for development and testing purposes. So you can use your operating system package manager, like apt install cinder or yum install cinder, but in this case, all configuration will be your responsibility. So you have to create the Cinder config and so on. That's why, as one of the methods, you can use OpenStack-Ansible. You can use Chef or Puppet, but all of them require Keystone. To be honest, OpenStack-Ansible requires Keystone too, but we're working on it, and I hope it will be merged as part of the Pike release, and we'll get standalone mode with OpenStack-Ansible out of the box. For now, I've got some dirty hacks in OpenStack-Ansible; you can clone my repo and play with it. Oh, the internet works.
So our demo is pretty much the same, but I use OpenStack-Ansible, and I use only Cinder. Okay, as per the requirements, I configured Cinder with noauth support. It's an old feature in the Cinder API, but the Cinder client will support noauth mode only as of the Pike release. Of course, you can use the new Cinder client with all the Cinder APIs; it will work too. That's why I installed the Cinder client from the master branch, because the noauth feature was merged only one or two months ago. And as you can see a bit later in the demo, I use only Cinder services: the Cinder API and the Cinder volume service. All of them work in containers except Cinder volume, because it's complicated to use LVM inside containers. I mean, to use LVM and provide volumes to consumers. All right, so it looks like you're talking to a Cinder instance somewhere, right? Yes, and as you can see, only the Cinder, RabbitMQ, and MySQL services are running. Nothing else, no Keystone, no Neutron, no Glance, no Nova. And you're still able to use Cinder, attach volumes, and use it like you use it with Nova. And those are all running in a container there, right? Yes, everything in containers except Cinder volume. And you could run Cinder volume in a container if you had a different backend than LVM storage, right? You can use Cinder volume in containers for any remote storage, like Ceph, NetApp, SolidFire, or Pure Storage. Pure. So are you using the brick extension here? So after the local attach is complete, you can see in that output there that it shows the actual raw udev device that shows up. And then after that, it looks like you're partitioning it and formatting it. I used the same virtual machine for attaching the volume, but you can run Cinder somewhere and attach volumes to your desktop, laptop, or even smartphone, I guess, with Android. Walt will show you something different. Yeah. An even more risky demo. Yes. The coolest demo. Well, we'll see. All right, so this is just the raw Cinder services, right?
Cinder scheduler, API, and Cinder volume. There's no Keystone auth here. There's no Glance involved. It's a standalone mode. Good question. This is noauth. I use noauth mode for Cinder here. It means no Keystone, no authorization, no authentication; it means all users are admins, and so on, and can see everything. But don't try this at home. If you don't wish to have authentication, it could be used in some trusted environments, like single-tenant clouds. Yes, you can implement a Cinder client plugin to use LDAP directly without Keystone or anything else. Yes, yeah. And of course you can use Keystone, and everything will be okay. Which we'll see in a minute. Okay, unfortunately, we've got several issues and limitations for standalone mode. Not all supported Cinder features work with noauth, because nested quotas require Keystone with the v3 API. We don't support all of the existing drivers. We support only the LVM and RBD backends. It's mostly because we don't have a lot of environments, a lot of storages, to test it. So if you are interested in extending this feature to use your storage with your protocol, maybe Fibre Channel, it's easy to extend, mostly adding a couple of lines to the code. We can discuss it after the talk if you are interested. We've got a PoC of NFS support, but it's too tricky, not secure, and has a lot of concerns. So I'm not sure it will be merged in Cinder. It works now only for Linux hosts. So you can run Cinder anywhere on Linux, but to attach a volume directly to a host, only Linux operating systems are supported. I guess it would not be very hard to add Windows support; if anyone is interested, welcome to contribute to it. Yes. And of course, I have to mention security issues, because without Keystone, we've got a lot of security issues. Even with Keystone, this standalone mode exposes to the user a lot of knowledge about the Cinder API itself, about the storage network, about the storage protocol and the storage itself.
That's not what Cinder is supposed to do, but it's how it's implemented; it's how iSCSI and other storage protocols work. So it's up to you, up to cloud operators, to configure all the security, or to configure, maybe, a dedicated network to the storage for this feature. So we provide this feature, but it's up to operators to configure it right. I hope we will have some security guidelines for it soon, so it will be easy to implement production environments. Okay, so for the part with Keystone, Walter will show you how to work with Keystone. Okay. Thank you. All right, thank you, Ivan. All right, so what we kind of showed here, what Ivan showed, is how to use Cinder just bare bones, absolutely bare bones. I actually installed Cinder myself from the Ubuntu packages, and you can actually get that up and running with no Keystone, no Glance. It does function, but you know that you're missing some key features that Ivan kind of talked about already, which is the ability to do authentication. That's a pretty big thing, right? And one of the other capabilities that's not there, if Glance is not there, is the ability to copy volumes to images and images to volumes. It's sort of a way to do backups. You do that a lot when you're doing a standard OpenStack deployment, when you want to duplicate volumes and create images that you wanna boot later. So there's a use case here for adding Keystone and Glance as part of a standalone Cinder deployment. Now Keystone gives you the authentication that you probably are going to need. And Keystone, as you know, has different authentication backends that you could use if you wanted it to play within an enterprise. So you could configure Keystone to use LDAP if you want, or OpenID; I think it also supports various other backends. So there's a lot of value in Keystone itself for a standalone Cinder deployment. And it takes quite a bit to stand it up manually, but there's a lot of value there.
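The noauth mode Ivan demonstrated boils down to two small pieces of configuration; here is a sketch of both sides. The endpoint address is hypothetical, and the option and variable names should be double-checked against your Cinder and cinderclient versions:

```shell
# Server side: a minimal cinder.conf fragment that turns off Keystone
# auth (the rest of cinder.conf is unchanged). Written to /tmp here
# purely for illustration.
cat > /tmp/cinder-noauth.conf <<'EOF'
[DEFAULT]
auth_strategy = noauth
EOF

# Client side: point python-cinderclient straight at the Cinder API.
# The address is made up, and these variable names are assumptions --
# verify them against your cinderclient version.
export OS_AUTH_TYPE=noauth
export OS_VOLUME_API_VERSION=3
export CINDER_ENDPOINT=http://192.168.1.22:8776/v3
export OS_PROJECT_ID=admin
export OS_USERNAME=admin
```

With something like this in place, plain `cinder list` and `cinder create` calls go straight to the API endpoint with no token involved, which is exactly why this setup belongs only on a trusted network.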
And Glance, of course, is the other part that I already talked about here, being able to do the image-to-volume and volume-to-image operations. And if you don't need that, you can actually disable it in the policy file, in the Cinder policy, when you deploy it. But I think it's a pretty valuable thing. All right, so I think my slide is next. So I brought some bare metal nodes with me here to do a demo, and I'm crossing my fingers that it's gonna work. So I try and do things a little bit differently in the community and try not to live up to my last name, which is Boring. So I wanted to do something that was really risky and kind of crazy and stupid, but kind of interesting and fun, and I learned a lot in the process. So what I set out to do was to use Docker containers on my bare metal and deploy the Cinder services, right? That doesn't sound like it's really that difficult to do, but I'm using a different kind of bare metal, and one of my nodes is a Raspberry Pi. So I'm using a Raspberry Pi Zero here as one of my clients. I have a server that's running, which is a Raspberry Pi 3 that's right here, and I'm trying to be careful not to unplug the power, because it would totally roast me right now. And I also have another Raspberry Pi 3 here. That's also another client that I'm gonna use to create volumes and attach volumes and so on and so forth. All right, so let's plug this guy in and get it booting, show that this is something kind of live. Okay, that's plugged in. My phone here will show me when it's online. So all right, so this is the network that I have. This is my cloud, right? Which is ridiculous. It's my cell phone acting as a Wi-Fi hotspot, so that way I can have connectivity between all of my Raspberry Pi machines here. So yes, I am going to do iSCSI over Wi-Fi. It's ridiculous. But hey, why not try that here at OpenStack at a live presentation? Hey, that sounds great. What a great idea. All right, all right, so. It means faster Wi-Fi.
Yes, we need faster Wi-Fi. Okay, so it looks like everything is up and running according to my phone. So let's go over here. So all right, we got that up. So 221 is my server that's right here. That's running all of the services. Let's make sure we can get on there. Come on, baby. All right, so what I wanted to do was use Docker to deploy a bunch of containers for MySQL, RabbitMQ, Keystone, Glance, and all three Cinder services. And as they mentioned before, I had a really tough time getting LVM working inside the container itself. I tried just as much as I could to get it to work, and I couldn't get it to work. There are a lot of issues with needing privileged access to the host system to get the loopback devices and udev to work correctly. But even when you do that, you start getting duplicate loopback devices, and the host system says, wait a minute, what's going on? And it starts really messing with you. There are probably smarter people that can help me figure that small piece out. But I decided, after I was running out of time, I was working on this a couple of days ago, getting that up. Yeah. So I got to the point where I just had to punt and try something different, right? So the cool thing is, though, is that I did learn that, in fact, Cinder and all of the Cinder services work great inside of containers, even on a Raspberry Pi in Docker. And it does work, and it's great. It's really, really cool. But unfortunately, I'm not going to show you that cool part, because I couldn't get LVM to work. But like they said, if I had actually lugged along an actual storage array that I could get my Raspberry Pi to talk to, then I could use that to serve up iSCSI volumes. But those are a little bit bigger than a Raspberry Pi. So I decided to use DevStack. It's an old standby of ours as developers. We use that on a daily basis, you know, for developing and deploying and stuff. So it's not as sexy as Docker containers, but trust me, it did almost work. All right, so, oops.
All right, so here we see, you know, your standard DevStack screen setup, as your stack is up and running. And you can see here clearly that there's just Keystone, Glance, and the Cinder services running there. And that's what's running on one of these boxes. And so let's go over to the other client, the other Raspberry Pi. And I'm pretty sure that one is 63. Yes, it is, okay. Get on that one. Hopefully it comes up; it's working its way in. There it comes, okay. So this is my other Raspberry Pi that's sitting up here. And I'm gonna log in as root, because in order to do an attachment using the Cinder brick client extension, you need root access to do the actual attachment piece here. So, okay, so I have a script that I created that helps me set up the virtual environment, as well as the environment variables that need to be set for the Cinder client. And so that's what I'm setting here. So let me make sure I get this right. 43.221, okay. All right, so now I should be able to do cinder list, and let's hope and pray that this actually works. It takes a little while sometimes, because this is a Raspberry Pi and it's over Wi-Fi. So it's not gonna be the fastest demo we've ever seen. There we go, okay. So we got a cinder list, all right. So let's create a volume here. So dash-dash, or single, let's try one. I probably got this wrong. Of course, I'm trying to create a, all right. I did it wrong. Display name. Display name, no? Okay, I thought that was deprecated. Yes, I thought that was deprecated. Okay, thank you. Name is optional, you know. Yes, it is optional, but it helps to make myself look silly doing a demo. Okay, so hey, there it is. It's working. Let's go over here, and you can see some API requests going on. Let's see if I can switch screen windows here without blowing things up, and evidently I can't. Maybe it's my fonts. Why can't, oh, maybe that's right, yeah, okay. Let me, let's get out of here. All right, let's start over slightly here. Awesome, yes.
If there's anything that can go wrong during a live demo, it will. Okay, so my SD card may be going south on me, which is what happens on these things. Okay, so let's see if I have any services over there running still. You know you wanna work. Hey, and it says it's available, even. How about that? All right, so let's go the extra mile here and try and attach the volume. And since we've seen the local-attach feature, it can use a different network connection. So you can configure which network adapter will be used as the storage network. Why can't I see? I wanna show the Cinder volume service, but I can't. Screen's giving me fits. Okay, hey, there it is. An iSCSI volume attached to a Raspberry Pi over Wi-Fi. That's ridiculous, I know, but this is why I do things. All right, so I don't know how much time we have left, but I could show a demo of it actually working on this little guy here in this other screen here, but I think we're getting, six, seven minutes. We have six, seven minutes. You wanna see me try it? I'll try it. You get to do one, I think. Okay, this is a single-core CPU. It only has 512 megs of RAM. It's running at one gigahertz, and it takes even longer to execute anything just because it's only a single core. We can come over here and, oh, there's the request, it looks like it's coming through. Hey, that worked, okay. I thought I could do this. That's not gonna work. I thought for sure that would work. I think name does work. Name works. I think it does. Display name is deprecated. It does work. It does work. Display name was deprecated a while back. It's actually name, so I just forgot the leading dash last time. Okay. All right, let's make sure it's available, then we'll do the attach, and then, I'll call it. The demo gods are kinda sorta with me here today. It's a miracle that I've gotten to this point. Okay, it's available. I'm root; let's do the local attach there. Let's not try and attach the one that's already attached to the other machine.
Do we have multi-attach working yet in Ocata? This would have been a great demo to show that. I actually thought about that. I'm like, well, if I'm gonna actually do that, I need to write some files to it. I'm like, well, okay, I can't use ext-anything, because that would immediately blow up. And I'm like, well, I need to install GPFS. And I'm like, oh God, do I wanna do GPFS on a Raspberry Pi? No. Okay, so I just thought I'd get to this point and see if I can get this working. And hey, there it is. It did in fact work and attached. And now we have an iSCSI volume attached to a Raspberry Pi Zero W over Wi-Fi on a cell phone, at a conference. All right, so there we go. We... Good job, Walt. We have a couple minutes for questions, if there are any questions. We're done. Oh, you want me to do a benchmark on it? Okay. I had a question. I think, Ivan, you might have made the comment: not all of the drivers support standalone Cinder? No, it's not dependent on the driver implementation. But the Cinder local-attach feature works only for the iSCSI and RBD protocols. It's only because we did not test it, and we do not want to enable something which won't work. So any one of the drivers that supports iSCSI attach to the Nova hosts today can support local attach over iSCSI? Almost all drivers work over iSCSI, but some just have Fibre Channel or... Yeah, sure. Or a file store or something, yes. Yes, all iSCSI drivers that work in Cinder as they do with Nova should work in this case too. Should. What are the underlying API calls? Like, I noticed, you know, you have to run it as sudo. It says the os-brick, I guess, plugin supports iSCSI. Does it have to use a specific iSCSI target implementation on the machine that you're locally attaching to? I don't even know if I'm using the right word, sir. It requires the open-iscsi package, and that's all. And it uses the standard Cinder APIs for attachment, like Nova does. Other questions?
So you can see here at the bottom, I'm just using hdparm to do a generic, really lame performance test, and you can see how incredibly speedy iSCSI to a Raspberry Pi Zero W is. And I tried running it on my Pi 3 and it's, well, it seems to be worse. But it could be for any number of reasons, so, anyway. But it'll work better on your own enterprise system. Yeah, so, you know, when you want to deploy your cloud on a Raspberry Pi cluster, then this is the way to do it. Another question? Basically, I just want to ask: since you get rid of most OpenStack components, I just wonder, what's really the usage here? I mean, what's the difference between this and just using the iscsiadm command to attach a LUN? You know, I mean, without those, maybe you don't even have QoS controls on all the drivers. So do you have it or not? And also, how do you play with live migration in this case, if you want to do this? So one thing that is not tested, to my knowledge, is some of the other features of Cinder, like the migration and backup, et cetera, et cetera. Some of those probably could be made to work pretty quickly, but they're at this time untested. But why use it? Was that your first question? You know, really, it would just be if you wanted to have some way to manage storage and you didn't want OpenStack. I mean, if you're here for OpenStack and you have an OpenStack cloud, you get all this. But if you need some way to manage storage, you want to be vendor-independent and open source, you want to have a way that provides an API to your users that's consistent, so that it can abstract away the storage. You can get all the features of Cinder without the OpenStack overhead of Nova and Neutron and all the other things you'd have to run. Yeah, you could have purchased a particular vendor's backend and used this to create your volumes, and then realize, hey, I want to switch over to another vendor, or to Ceph, and go open source.
Then you can use this to actually migrate your volumes, too. Or to OpenStack, too. Three different vendors, and Ceph; you could have a mixed deployment of storage and make this work. What I'm asking is, the other thing is that in OpenStack, for the same driver, you can configure different drivers for different tiers, for the QoS; but if you go standalone, do you still have this kind of feature? So actually, yeah. It should work. Most of the QoS, it depends on what kind of QoS you're setting up in your volume types, right? Because you can set up your volume types such that all the QoS settings are based upon the backend itself, and the backend does the QoS limiting and rating for you, and not Nova and libvirt, which is what I think you might be trying to distinguish there, if I'm hearing it correctly. Yeah. Yeah, so the Cinder backend would do that for you. Got the acknowledgement slide. Oh, got the URL for the slides, too. All right, so here are a couple of acknowledgments. So one of our other developers on the Cinder team, Gorka, he actually created an amazing blog post that talks in depth about how to do a deployment of Cinder standalone. And that's kind of what gave me the motivation to do a presentation here, to talk about it as well as do something crazy here. One comment about this blog post: it's absolutely great, but one minor comment, it was written before the Cinder client implemented the noauth feature. So for now, it works out of the box if you install the Cinder client from the GitHub repo. Yep, yep. All right. Thank you. Thank you. Thanks, guys.