All right, good afternoon. Thanks for being here. It's the last day, and it's nice to see people still rolling in. We're here this afternoon to talk about volume attachment failures in Cinder. We hope to give you some tools and tricks for debugging when you get into these situations, and to show you what information to collect if you can't figure it out yourself, so you can hand it off to us and we can help you from a community perspective.

So today, presenting, we have myself: I'm Jay Bryant, the cloud storage lead for the Lenovo Cloud Technology Center. To my right: I'm Walt Boring, I work for IBM, on the Cinder team. I should note that all three of us are cores on Cinder. And: I'm Scott D'Angelo, also with IBM. We're all available on IRC and Twitter, so feel free to reach out to us, especially on IRC. I try to keep a running bit of information about what's going on with Cinder and storage on my Twitter as well.

The plan for today is a brief overview of the architecture of Cinder and how it's organized. I see a few familiar faces and some new ones, so we'll try to time it accordingly. Then I'll go through a logging introduction, including some new features I want to share with you; hopefully that'll be helpful as we move forward and let you leverage the information we share later in the presentation. Scott will then run you through some good information from a cloud operator's perspective: things he used to have to check when he was having problems, with commands and examples. And then, diving a little deeper with Walt, we'll get into what os-brick is and how it fits in. This is a follow-on to a presentation we did in Paris on higher-level configuration debugging with Cinder; this one goes deeper into the new features that have come along with os-brick. I have a link at the end of the presentation back to that original session from Paris. If you like the information you see here, it's probably worth going back and looking at that one too, as there's additional information shared there. Then we'll look at some example attach and detach flows, so you'll have seen them before if you ever need to do this kind of debugging.

OK, so the Cinder overview. You've probably seen this slide a couple of times this week if you've been in our sessions, but Cinder is the block storage service for OpenStack, and it supports multiple backends, nearly 100. The number grows and shrinks depending on where we are in a release cycle and what's going on there. The design is the same as many of the other OpenStack services: a client with an API and a scheduler that talks to one or more volume services, which can run either on the control node or across dedicated storage volume nodes. Those services may talk to backends within the control node, or to remote NAS or other storage servers.

Here's our current list, as of a week or two ago, of drivers on the supported list for OpenStack. It took me a long time to build this up, so look at it lovingly. Many different vendors are represented here, from Dell to IBM to Huawei. The ones in bold are our reference architectures.
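To give you a feel for what one of these backends looks like from the configuration side, here is a minimal sketch of a cinder.conf stanza for the LVM reference driver we'll talk about next. The option names are the standard ones from this era's driver documentation, so treat it as illustrative rather than definitive:

```
# Hedged sketch of a minimal cinder.conf backend stanza for the
# stock LVM reference driver; adjust names/values for your deployment.
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm
iscsi_protocol = iscsi
# iscsi_ip_address = <IP of the interface the targets are exported on>
# (getting this wrong causes the attach-time failure shown later in the talk)
```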
So LVM is the default; it runs in the gate when new changes are pushed up, and it uses iSCSI as the attachment protocol. For shared file systems, the reference is NFS, so you can use that as your example to work from. And Ceph RBD is the clustered file system reference architecture that we work from when developing our drivers.

As far as details go, each of the different vendor backends is written to talk on the control plane to its storage and configure it so that it's ready to do an attachment. It sends commands via REST or SSH, depending on what the backend is, to prepare the system to export its disk or target. We support a number of different protocols. The ones you most often see are iSCSI, Fibre Channel, or RBD, but there are also other remote file systems, and some companies use their own special protocols; in that case they need to work with the os-brick library to get those protocols added into Cinder for support. The reason I share this is that how you actually debug an attachment issue depends on which protocol you're using, and Walt will explain some of the differences there as we go through it.

High-level overview: we'll go into this a little deeper later in the presentation, but just to give you an idea of what we're talking about as we go forward, we've got the Nova side and the Cinder side. When you attach a volume, work happens on both sides. When I talk about logging, this picture is important to remember. If the problem occurs while getting information about the attachment, it's going to be over on the right-hand side of the image, in the control plane, or maybe even in the storage backend; generally you'll find that information on the volume controller. If the problem happens after that information is sent back to Nova, you're going to be looking out on the compute node. And which path you go down differs depending on whether you have iSCSI or Fibre Channel, or whether you're doing something like Ceph with RBD, which we mention here because it seems like a lot of people use it. So keep this in mind as we go forward; Walt will talk about it in more detail later on.

So, logging. This is important to know, because that's where you start to find out where your problems may be. Where the Cinder log files live depends on whether you're in a development or a production environment. In production, you'll find your log files under /var/log/cinder, labeled by the component you're looking at: api, scheduler, or volume. There may also be a backup service log file there as well. When you're running DevStack, the logs by default go to /opt/stack/logs, named c- for Cinder, followed by the component: api, scheduler, or volume. However — and I realize now I need to update this slide — for those of you who just ran into the change where DevStack now uses systemd to start services, you'll have to use the journalctl command to pull the logs out of there. Thankfully, they've put decent documentation together for that, and the Upstream Institute training materials have pointers to it.
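On a systemd-based DevStack, pulling those logs looks roughly like this; DevStack's unit names follow the devstack@&lt;service&gt; pattern:

```
# Pull service logs out of the journal on a systemd-based DevStack:
sudo journalctl -u devstack@c-vol.service                  # cinder-volume
sudo journalctl -u devstack@c-api.service --since "1 hour ago"
sudo journalctl -u devstack@n-cpu.service -f               # follow nova-compute live
```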
Nova is the same kind of thing: when you're looking at attach/detach issues, you're going to be looking at the compute log. It'll be /var/log/nova in production, or /opt/stack/logs and n-cpu if you're using DevStack. And we'll talk a little more about how you decide which one of those to look at in just a minute.

So, where are my logs? The answer depends on at what point in the process you run into a problem. If it's a control action — a create, a delete, a create snapshot, that sort of thing — you're doing control actions against the storage, so it's going to come out of your volume driver service, which is probably on your control node, or wherever your cinder-volume process is running. Once Nova has asked Cinder for that information, it goes through a process on that side as well, so you would check the logs on the control node. Datapath problems are different. Say I can see on my backend that it set up the target, but the target isn't reaching my compute node: then the problem is probably somewhere in the datapath, and you need to look at the logs on the compute node, not the control node, because that's where the work is actually happening.

Log levels. I was trying to get logs for this presentation, I had to walk someone through this, and I thought: there's a slide, I'm going to add it. Debugging is going to be easier if you have debug and verbose both set to true, so that you get all the information out of the system that you can. That happens in either nova.conf or cinder.conf. Now remember, again, this goes back to the idea of the locality of where this is happening: for Cinder, you set it on the control node, or wherever your volume service is running; for Nova, it needs to be on the compute node where you're seeing the attachment problem, because that's where the logs are — they're not centralized, in default configurations anyway. After you add those lines in, you need to restart the services and then recreate your failure.
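Concretely, that's just a couple of lines in each file. Note that verbose was still honored at this point, though it was already on its way to being deprecated in favor of debug alone:

```
# In cinder.conf on the node running cinder-volume, and in nova.conf
# on the compute node showing the failure; restart the services afterwards.
[DEFAULT]
debug = True
verbose = True
```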
We're working on adding dynamic logging for Cinder. It's a work item for Pike; I don't know if it'll land in Pike, but hopefully at least in Queens. You'll be able to go to the CLI and say, I want to increase the volume service logging level, and it'll do it without you having to go kick the service and restart it.

I also wanted to quickly add that we're working on making logs more user friendly for you. Anybody excited about that? Oh, come on. Yay, I see a couple of people who look excited. I'm excited. With microversion 3.3, we added the ability to get asynchronous messages back. Say I'm trying to create a volume and it goes into error state: I don't necessarily have to go to the logs anymore. I can use this new messages API to list and show error messages that are being put in the database. You can list, show, or delete, so when you've seen a message and you're done with it, you can clean it up. We're also working on a spec for a process that will automatically cull messages once they've been out there for a period of time and don't need to be there anymore, so you're not filling up your database with log messages. It's new, and we're working on getting people to add messages; hopefully we have an intern who will do that this summer. But just to show you: the dreaded "my volume is in error state, I don't know why. What does it mean? I just hit create, and it should have happened."

Well, now you can do this. You specify the API version you want on the CLI and say message-list, and you get a list of the messages that have been reported, in a nicer format with more information. Then you can do a show on one, and it tells you when the error happened and what resource type it was, and gives you an ID with a somewhat friendlier message. So this is coming. Anybody? Are you excited now? Yeah, that's what I like. If you want to help put messages in, it will get better that much faster, so help us out. It would be nice.

All right. Now that we've gone through the housekeeping, we'll have Scott take over and talk about working in your environment. Who likes the new logo? Yeah, I do. I love it. And it's in the very subtle C of the tail. All right, take it away, Scott.

So, just some basics about debugging. Some are applicable to an attachment failure; some are useful in all contexts. Quickly, you can check whether things are running or not: Nova has a service list, and Cinder has a similar service list. That'll give you a sort of overall view of your system. I've often found that a service might be running but hung, so have a look at the logs; the API logs especially should constantly be spewing things, and if you look and nothing's happening, you might consider that as part of your problem.

You can find out a lot using the debug option on the clients. Both the Cinder and Nova clients have a --debug flag that will show you, step by step, what's happening with the REST API: going to Keystone for authentication, then the connection back to Cinder or Nova. You can often get useful information out of this. With --debug, the token is obscured, so you won't have a valid token there if you need to follow up on these commands with your own curl commands. But you can use openstack token issue, which gives you a valid token you can use. Export that token as an environment variable, and then you can just add $TOKEN to your curl commands. That way, you can replay any of the calls you would otherwise make for a volume attach or a volume create, and make sure that what's happening in the debug output is what you're expecting.

With nova volume-attach, you can get a little closer to your exact attachment failure. Once again, you'll get the exact REST command, and you can substitute in a token to make it valid. You can also get the IP address — you can see which IP address Nova is using to talk to Cinder, to this particular API server at port 8776. So now you can also check the connectivity between Nova and Cinder: ping it, telnet to that port, and make sure telnet connects; telnet will always print the same connected banner if it can reach that port. A lot of times you'll have a problem with the port being closed. A larger deployment will have network switches in the path, and you may have multiple availability zones. One common thing we found in our public cloud is that as people change the network topology, they forget to put ACLs on, or they put too many ACLs on and close down ports. So things can change pretty quickly such that volume attaches cease to work, and sometimes it's due to the underlying infrastructure. So have a look at that.
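Put together, that replay loop looks something like this. The endpoint, project ID, and addresses are placeholders you'd lift from your own --debug output:

```
# Get a valid token and reuse it in hand-rolled curl calls:
export TOKEN=$(openstack token issue -f value -c id)
curl -s -H "X-Auth-Token: $TOKEN" \
  http://CONTROLLER_IP:8776/v2/PROJECT_ID/volumes/detail | python -m json.tool

# Basic connectivity checks from the compute node to the Cinder API:
ping -c 3 CONTROLLER_IP
telnet CONTROLLER_IP 8776
```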
So the database can give you some information, if you have access to it and you have admin rights. You can see things in the database that you won't otherwise see through the other debug commands. In MySQL, show tables for both the cinder and nova databases will tell you what information exists in the database for each service. The block_device_mapping table in Nova is where all the information is kept for volumes that are attached, so looking at that table will give you information there. You can check for a certain volume ID: select the connection_info from block_device_mapping for a particular ID, and it will tell you everything Nova knows about this volume from a datapath standpoint.

Scott, what kind of situation would lead you to look at the database instead of, for instance, going down some other path like network connectivity?

Well, here's one example. If you look here, the connection information gives you all the iSCSI details — this is an iSCSI example. It gives you the initiator, the target, the CHAP username, and the CHAP password. Some of this information is in the logs, but the password is always obscured there. So one thing you can do from the database that you can't do from the logs is connect from the Nova machine to the backend storage array and log in. If that doesn't work, that's a good clue as to why your volume attach is failing. You can also make sure all your IPs are right: this is the IP of the iSCSI portal, so this is where it's actually connecting to your storage array. Most arrays will allow you to ping them, I think, so you can always try that; some will even allow telnet, though not as many. But you can check that your Nova compute host can connect to your backend storage array, and if there's a problem in that connectivity, that's probably the source of your volume attach issue. All of that information is in the database, in Nova's block_device_mapping table.

Here's another issue I've seen in databases. Cinder does lazy deletes: when you delete a volume, it just marks it as deleted, but it stays in the database. So if you have a very busy system with a lot of users over time, you will rack up a large number of entries in your volumes table; four million is not unusual. Once you get beyond that, you'll start to see your database slow down for access to any volume, and HTTP timeouts will start to occur — sometimes at three seconds, sometimes at five. So if you're starting to see timeouts in your logs, look at your volumes table in the database and see how many entries there are. These will never get cleaned up unless you do it manually. We have the cinder-manage command, db purge, and you give it a number of days; anything older than that gets purged, so db purge 21 will purge anything older than three weeks. You do have to do this manually. If you have a small cloud, it's not necessarily relevant, but on a busy or large cloud this will just keep growing forever, and as you start to see timeouts in Cinder's connectivity to the database, you want to look at this.
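As a sketch, assuming MySQL and the stock database and table names of this era, those lookups are along these lines:

```
# Everything Nova knows about an attached volume's datapath:
mysql -e "USE nova; SELECT connection_info FROM block_device_mapping \
          WHERE volume_id='<VOLUME_ID>' AND deleted=0\G"

# How bloated is the Cinder volumes table?
mysql -e "USE cinder; SELECT COUNT(*) FROM volumes;"

# Purge soft-deleted rows older than 21 days:
cinder-manage db purge 21
```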
You can also find information on the Nova side, on the compute host. The virsh commands are how you get information about the virtual machines running on the compute host. virsh list — this host only has one VM running — shows an instance name and a short numeric ID that just increments sequentially, and you can use that information. You can also use the information in /etc/libvirt/qemu: there's an XML file for each instance, and that XML file has all the information for a compute instance. When an instance isn't running, the XML file is authoritative; that's where information about your block devices is stored. So if you dump the XML, you can get information on your compute instance. If you grep for iscsi in the XML file, you can find things like the /dev/disk/by-path device, which encodes the portal IP; you can try to telnet to that. That's another way to get at how Nova is connecting to the backend storage array.

You can also look at the device-mapper device. Each device is mapped to a dev mapper file, like dm-2 in this case. One thing that can happen when you're trying to detach a volume: if that volume is in use by the underlying system, it won't allow you to detach it. So you can look under /sys/block, at the particular device — in this case dm-2 — and at its holders directory. If nothing returns, there are no references on your block device. If something returns, there is a reference on your block device, and that means you'll never be able to detach it without rebooting the compute instance. You can also run fuser on that device file and find out that, in this instance, the process using it is 13771. Then do a grep on that with ps, and you can find out who's using it. If it's targetd, that's what we'd expect — that's the iSCSI subsystem. If some other process is using it, you may be able to kill that process off. But in general, when you find something holding onto your volume so that you can't detach it, you're probably going to have to reboot the instance.
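Strung together, the checks from this section look roughly like this on the compute host; the instance name, dm-2, and PID 13771 are just the examples from the slides:

```
virsh list
virsh dumpxml instance-00000001 | grep -A2 -i iscsi   # dev/disk/by-path, portal IP

ls /sys/block/dm-2/holders/     # empty output means nothing holds the device
fuser /dev/dm-2                 # which PID has it open?
ps -ef | grep 13771             # targetd here would be expected (iSCSI subsystem)
```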
I think Walt's now going to talk about the underlying os-brick system and how it connects volumes to the compute host.

OK. So I'm going to show a little bit about os-brick, and also a little bit about the volume attachment flow process that we have in place. We diagrammed the actual workflow from Nova to brick to Cinder — I'll show that in a second — for volume attachments and detachments, to show how complicated the system is and why it's sometimes very difficult to figure out where problems actually are.

So what is os-brick? Brick is the library that the Cinder team created and actively maintains. It is used to connect and remove volumes from a host, and it's used by Cinder, Nova, and Glance, as well as by the Cinder client for use on bare metal. You can actually attach a Cinder volume on a bare-metal node if you install the Cinder client, the brick Cinder client extension, and os-brick. That extension currently supports iSCSI and NFS; we're working on adding support for the rest of the protocols that os-brick supports. Brick supports Windows as well as Linux, and a couple of architectures for Linux. As you can see on the slide, those are the connectors, the protocols, that brick supports. And depending on how it's being used — I'll show an example of this later — some of those protocols don't actually go through brick for Nova, but they do for Cinder, because Cinder has a backup service that does a volume attachment workflow as well. That's why, for example, RBD exists in brick.

All right, so let's talk about the attach and detach workflows. How do I look at my logs? Where are my logs for the different processes and sequences in the workflow itself? We'll go back to this picture here. As you can see, we have the Nova side and the Cinder side over here, and this chart shows a little bit about what the control flow and the data flow are for a volume. As you may or may not know, Cinder doesn't really have anything to do with the data plane of volumes. It really is just a control plane: it's the provisioning and attach/detach controller, if you will.

You can see there are basically three types shown here: iSCSI, Fibre Channel, and Ceph. The reason I call out Ceph is twofold. One, it seems to be the most popular storage backend for most deployments — the user surveys always show that. And two, it's a different path to getting your volume up inside your guest. For iSCSI and Fibre Channel, the flow uses brick to connect the volume, detect it, and pass it up into the libvirt volume driver in Nova, which then modifies the domain XML for the guest, and the volume shows up. For Ceph, it's quite a bit different: it doesn't use os-brick at all, because there's native support for librbd in QEMU. The libvirt volume driver knows that, so it basically gets librbd to do the volume attachment and hands that up into the guest itself.

OK. This next picture is probably impossible to read, and that's not the point of the slide; the point is to show the complexity of the actual attachment. There are two halves to it. The left-hand side is the volume attach workflow; the right-hand side is the detach. The column in blue is the Nova API or Nova function calls, the red column is what happens inside os-brick, and the green column is Cinder. You can see how complicated this setup is, and why it's sometimes very difficult to find out where your failures are actually happening and which log to look at, depending on where in the process it blows up. That's why, as Jay mentioned earlier, we really need to be aware of this workflow, and of which log file to tail, depending on the failure.

OK, so here is a successful volume attach. I did some screenshots here: you can see cinder list, nova list, and then calling nova volume-attach. Nova is the coordinator of the volume attachment process; it's kicked off in Nova. Nova calls the Cinder API to get the volume information, then calls brick to actually do the work, and brick responds with either success or failure — in this case, success: it discovered the volume and passed it up into the guest. One of the important things you'll see here is the tracing capability built into brick if you turn on debug logging. You'll see the — I don't know if I can point to it here. Where's my pointer? I don't see it; it's not working. Great. Oh well, it happens. You got it there, Scott? OK. If you look at the actual connect_volume call, you'll see the dictionary that gets passed into os-brick there, and that is what Scott showed earlier: the big giant JSON dictionary of the connection info. That information came from Cinder.
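For reference, the connection info for an iSCSI volume has roughly this shape. This is an illustrative mock-up: the field names match what an iSCSI driver's initialize_connection typically returns, but all the values are made up:

```
{
  "driver_volume_type": "iscsi",
  "data": {
    "target_iqn": "iqn.2010-10.org.openstack:volume-<VOLUME_ID>",
    "target_portal": "192.168.10.50:3260",
    "target_lun": 1,
    "auth_method": "CHAP",
    "auth_username": "example-user",
    "auth_password": "example-secret",
    "volume_id": "<VOLUME_ID>"
  }
}
```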
So when the attachment happens — when someone issues nova volume-attach — Nova calls Cinder to tell the backend to export the volume to a particular host. Brick is involved in that process as well, collecting the information on the initiator side: for an iSCSI attachment, it collects the initiator IQN and the host name, and that information is passed to the Cinder volume driver. The Cinder volume driver's initialize_connection goes to its backend and says, I need to export a particular volume, and collects the information from that backend, which we show here: the target IQN; the target portal, which is the IP address; the target LUN; and any CHAP credentials. That comes back from the Cinder volume driver and is then passed into brick.

Brick does a whole bunch of stuff between this first screenshot and the second one. It does a lot of iscsiadm calls: to discover the portal, to log into the portal, to discover the target on the other side, to rescan the SCSI bus if it's already connected to that particular portal, et cetera. There's a lot hidden back there. But what I wanted to show here is a success. You'll see at the bottom portion that connect_volume returns the information up to the libvirt volume driver in Nova, which then passes it back up, rebuilds the domain XML, and you'll see the volume in the guest. So that's success; that's the easy one, right?

So what happens on a failure? You're either tailing the Nova log or the cinder-volume process log. The cinder-volume process is what calls initialize_connection to export the volume from the backend, and that gets returned to Nova; Nova then calls brick, which I just explained. In this particular case, all the important failures happen on the Nova side. I hacked the LVM driver to artificially produce a failure here, based on the fact that you can configure Cinder incorrectly with an incorrect iSCSI IP address. (I just got a reminder — we've got a wiki call coming up. Yes. Thanks, Jay. You're welcome.)

So in cinder.conf, you specify the iSCSI IP address for a particular backend — in this case, the LVM driver — and that IP address is the IP of the iSCSI interface that the targets are going to be exported on. I hacked that, put in incorrect information, and you can see at the top portion here that brick is being called to do the connection and discovery of the volume. Then you'll start seeing a boatload of errors in the Nova log. The second screenshot I have here is the very first failure you'll see. But if you're tailing the log, what you'll actually see is a succession of three attempts to do the connection and discovery of the volume, and you'll get a mountain of failure output. What you really want to do is scroll up to the very first failure to discover what the real problem is, because we do a lot of retries, and we try to be smart and very verbose about outputting information that might help the operator discover what's going on. Unfortunately, this is one of those situations that's really just a cinder.conf configuration failure, and you won't know about it until you try to do an attachment, because the iSCSI IP address isn't even used at provisioning time.
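If you want to replay by hand roughly what brick is doing there, it's along these lines; the portal and IQN are placeholders you'd take from the traced connect_volume call:

```
# Can we even discover targets on the portal the driver advertised?
sudo iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Try logging into the specific target:
sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-<VOLUME_ID> \
     -p 192.168.10.50:3260 --login

# What sessions does this host currently have?
sudo iscsiadm -m session
```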
So you'll be able to create and delete volumes in Cinder and think everything's just fine, and then you go to attach your first volume and start seeing a lot of errors. Some of the other errors you'll see in the log for the same attachment failure look like this: we're trying to do a discovery against the iscsiadm session list and we don't have one, so it blows up because we can't talk to that particular target portal. And at the very bottom here, we actually show the stack trace, which you'll see at the very bottom end of the log file. It tries three or four times, depending on the configuration, to do the attachment, and you'll see a lot of this kind of stuff, but scrolling up to the top really helps you out. This message here, as you can see: could not log into iSCSI portal. Well, now you need to figure out which IP address, which iSCSI portal, it's trying to log into, and that's why it's important to go back and look at the call into brick itself, because it actually tells you the IP address, the CHAP credentials, and the port. Then you can either ping the IP address or issue an iscsiadm command against it yourself, and away you go on your debugging journey.

OK, so Fibre Channel is a little bit different. There are certain things that you need. (Are we running out of time? Yes. One question then. Go ahead.)

So if you're looking for the source of the error and you see that one, then that same stack trace will also be somewhere else, and somewhere else is the real place?

Somewhere else is the real place, yes. If you actually look at this stack trace, at the very bottom of it you'll see that it's coming from dist-packages, os_brick/initiator/connectors/iscsi, and that's where the real problem actually is — not all the RPC retrying stuff up above. Just to repeat for the recording: what was being noted is that a message coming from the oslo.messaging RPC server means it's not actually happening locally, but is coming back over RPC. OK, good. Thank you. Good, thank you, that was great. Yeah, we've got about six minutes left.

OK, so I will skip ahead to this slide. For Fibre Channel — the reason I bring Fibre Channel up is that I get these questions all the time. Whenever people have problems doing attachments with Fibre Channel, you've got to go back to basics. The first part is making sure you have your kernel drivers installed on the host; some people just forget that. They don't always come with your favorite distribution's standard default packages. On Canonical's Ubuntu, you don't get the Fibre Channel HBA drivers, so you have to install that package. You have to install sysfsutils and scsitools, which are used to discover the HBAs — whether they're online or not, and their worldwide names. And sg3-utils, which is used for sg_scan; we use that in brick to discover some of the block device information about Fibre Channel volumes.

All right, and that's pretty much it for today. We have a couple of Google links here that I created for this big ugly slide of the impossible-to-read workflows. We have that so you can download it, zoom in, and see the actual function calls within Nova, the Nova API, os-brick, and Cinder, and how that whole process works. And the links to os-brick.
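As a sketch, those back-to-basics checks look something like this on an Ubuntu compute host; the package names are the ones just mentioned, and the sysfs paths are the standard fc_host ones:

```
# Make sure the HBA tooling is installed (Ubuntu package names):
sudo apt-get install sysfsutils scsitools sg3-utils

# Are the HBAs visible and online, and what are their worldwide names?
ls /sys/class/fc_host/
cat /sys/class/fc_host/host*/port_state    # expect "Online"
cat /sys/class/fc_host/host*/port_name     # worldwide port names (WWPNs)
```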
And we have the original YouTube video of the session Jay and I did in Paris, a few years ago now, which covers some of the same material. Yeah, that one goes back and does a little more of the "I did something wrong setting up Cinder in my environment": how to track a problem from cinder.conf to my volume services and their startup, and those sorts of issues. So if you're looking for a more basic introduction, that's a good place to start, and it will give some context for what we've talked about here today.

So with that: questions? Please step up to the microphone if you have questions or comments. All right, do we have a taker? We have one, we have a taker.

I was just going to make another comment. From the trenches: I tend to get bug reports well after the fact, so I tend to be looking at logs that people have sent me rather than live systems. And something I find enormously useful is looking at a merged log. You're talking about all of the different services, both Cinder and Nova, and you're not entirely sure yet which one is the source of your problem, maybe because you don't understand the thing very well. If you can dump all of the logs into a single merged log, you can find your error and then grep based on request ID, which is an awesome thing. There's a tool under the OpenStack banner called os-log-merger. I don't know if anybody's familiar with it, but it's a tool for producing merged logs from multiple OpenStack services, and it's really cool for doing this sort of multi-service debugging.

Oh, cool. Thanks, thank you. That's good to know.
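The usual invocation is roughly like this — check the project's README for the exact syntax, and the request ID here is made up:

```
pip install os-log-merger

# Merge logs from multiple services; the optional :ALIAS suffix
# tags each file's lines in the merged output:
oslogmerger /var/log/cinder/volume.log:CVOL \
            /var/log/nova/nova-compute.log:NCPU > merged.log

# Then follow a single operation across services by its request ID:
grep req-d4f231a5 merged.log
```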
Any other questions or comments? Rich.

So I've been to about 40 sessions this week on containerization and so on, and I don't want to go through any more, but...

But you're a container guy.

Right. So when I stand up OpenStack right now, I'm running Cinder in a Docker container. Is there anything specific to running Cinder — are you seeing anything specific to debugging Cinder that's process related? For example, a Docker container doesn't really have its own storage, so everything's mapped out, and so on. Are you working with the container community at all?

Some of us in the Cinder community are. John Griffith is working on it; he's done a lot more with containers than I have. One of the problems I've seen with running Cinder in a container, for example, is that you can't run LVM as a backend, because it needs access to some of the hardware, the /dev devices and things like that. There are tweaks out there that people can use, but the best way to use Cinder in a container is to have an iSCSI backend, or something external to that container, that you export the volumes from into wherever you have Nova compute installed. The Kolla team has Docker container images for the API, scheduler, and volume services — I'm not sure about the backup service yet — and I know you can stand those up. That's what John was trying to show at the keynote on the second day: he had a Docker Compose setup showing all of those services running standalone, but he ran into one of those race conditions, just magically, during the demo. It just didn't work then, but it does, in fact, work. So that's a good example of something that might be a bit unexpected when running this stuff in containers. And I think you're going to see more and more of that, just looking at this summit.

In fact, one of the things I'm doing is moving away from using DevStack for development, because as someone who works on Cinder all the time for the most part, it's so much easier for me to stand up all the other services in containers and just leave them, and then bounce the Cinder volume container and work there, or even stand up multiples of those with different branches. So I think there's a lot of mindshare going that way, for a lot of reasons — not just for end-user deployments, but for development too. That's a good idea, though, for us to start looking at going forward: how do we work with Cinder differently in that environment, and what do people need to be aware of? So thanks, Rich.

Hey guys, that stuff you added with the messages in 3.3? Yeah. That's really cool, but can you share that maybe with Nova? Because it doesn't sound Cinder specific.

It's not Cinder specific. It is something that other projects should be looking at adopting. I don't know what their progress is, but it is available to the wider community, and I think it's something OpenStack needs to be looking at. It was driven by someone within Cinder who wanted it; we put it out there and got it going, and I would hope that at some point it is adopted and goes into Oslo.

I mean, it seems — because actually that "volume" tag at the beginning doesn't mean a Cinder volume, it means the volume service. So there's a compute service that should start getting messages like that, and what have you. So, object, image — but I don't know the current implementation status on those.

I think the hope with that is that those messages are going to become much more useful to end users. What we spent a lot of time talking about here is something for deployers and operators: how to debug their deployment. But eventually these kinds of messages might get filtered and shown in Horizon, something a user can see. Because right now, when you do a volume attachment through Horizon, it either just works or it doesn't, and the user doesn't really get much feedback. Well, it just failed — OK, now what? Then they have to send off a ticket to an operator, and the operator has to go through all of this process to find out why that one particular volume failed. I know, it's not easy. So that's one of the reasons we started going down this road too: to eventually provide some more useful information to end users. But that's also good feedback. I think we're going to try to get some resource in here soon to start embellishing that service and making more use of it. The plumbing is there, but we really need to instrument more of the code. Right. Yeah.

So thank you for that. Thank you. All right. All right. Thank you. Thanks, guys. Thank you so much for being here. Thanks for having me.