discussion with the Cinder developers. I'm Ed Balduf and I'm gonna moderate and ask these guys some questions to start, and then we'll take questions from the audience about halfway through. If you guys do have questions, queue up behind the mic over there — that would help the AV guys. So first off we're gonna go through and have everybody on the panel introduce themselves. I'd like all of you to tell us your name, what company you're with, what you do in Cinder — kind of what your specialty is, what your background is, where you reside, and any other quick facts — but make it fairly expeditious.

All right, so my name is John Griffith. I work at SolidFire. I've been working on Cinder since the start, so a little over three years; I was working on Nova volumes before that. I come from Boulder, Colorado.

I'm Sean McGinnis with Dell. I've been involved a little over a year now, around the Icehouse timeframe. We have our driver for Dell Storage, and I'm helping out wherever I can. And Minneapolis, Minnesota.

My name is Mike Perez. I work on block storage and block storage accessories. That was a "King of the Hill" reference — sorry, didn't do the accent. So anyways, I'm the project technical lead for the Kilo and Liberty releases, and I've been working on the project since 2010, before Cinder, back in the Nova volume days. I work at Datera and I'm based in LA.

Hello, my name is Walt Boring. I work for HP. I've been working on Cinder since about the Grizzly timeframe. I started writing drivers, as I think a lot of us have, and went on to work on volume attachments, Fibre Channel, the Brick library, and I work in Roseville.

Hi, I'm Jay Bryant. I work for IBM and I'm the Cinder subject matter expert for them, acting as liaison between our driver developers and the community. I've been involved with Cinder since Grizzly — I think it was early 2013. I've earned the name Captain Oslo, so I act as the Oslo liaison for Cinder as well, and I work out of Rochester, Minnesota.

Hi, I'm Duncan Thomas. I'm an Englishman currently residing in Tel Aviv. I work for HP on the public cloud and Helion product teams. I've been involved with Cinder since before it was Cinder, and my main interests are scalability, back compatibility, stability, that sort of thing.

Hi, I'm Xing Yang. I work for EMC. I have been working on Cinder since Grizzly. I started with a driver contribution and now I'm working on some core contributions. My work includes adding consistency group support in Cinder, over-subscription in thin provisioning, and incremental backups.

Hi, I'm Patrick East. I work for Pure Storage. So Kilo is my first full release cycle; I started a little bit at the end of Juno. I work primarily on our integrations — our driver, maintaining that, our test systems — and I'm starting now to get a little more involved in core Cinder development.

Thanks, everybody. So I'm going to throw out a few questions here. We'll kind of pass it around, and I'm going to call some names to start with these. So the first one is: in your opinion, what's the mission of Cinder? Just give me a paraphrase of that. Should it abstract everything? There's lots of room there. What about when backends want to expose specific things? What are your thoughts on those kinds of things that are specific to certain array vendors? So we'll start with Xing Yang. You've got a microphone. Go ahead.

Okay. So Cinder provides block storage as a service.
It allows various storage arrays to be plugged in and provides block storage in an OpenStack environment. So I think Cinder should abstract things as much as possible, but I can't say that it should abstract everything. If there is something that you think is, like, vendor specific, I think we have to look at what that is, right? What feature it is. If it's a well-known feature that has been used for a long time and is very important — has lots of value to Cinder — then I do think it should be abstracted by Cinder. Even though it may not be supported by every driver, it could be an advanced feature.

Okay. Mike? Yeah. So regardless of the location or whatever the physical devices are, I think what we're trying to provide is sort of this pool of storage that can be consumed in a multi-tenant environment. That's just in general how I feel about the project. I don't necessarily care if it's over a particular protocol like Fibre Channel or iSCSI or any of that stuff. Just as long as you can give me block storage in the end, it's cool. Other than that, in terms of exposing certain features from vendors, the important thing is that, while we want to keep a level playing field for people, certain features just aren't possible in our reference implementation, so we can't properly expose and test them. As we've seen in the past, that doesn't really work too well with certain features, like replication.

Thanks. Jay? You got a microphone. I got a microphone. So, you know, I want to expand a little bit on what Mike said. I think the goal of Cinder is to provide storage to people so they don't have to worry about where it came from. That's what people have been asking me more and more internally: okay, what about what's happening on the back end? It's like, well, if it's set up, go ask for storage — you should just get your storage and be able to use it, not have to worry about the details. So the more we can do to give our users that experience, the better, along with being able to quickly debug any issues that come up with that. Those are kind of the areas I focus on. Vendor-specific items: this is something we've been talking about a lot lately. I think it's important that we provide a guide to the required functionality for each release that we have — what is a driver required to implement? It's important that we keep that updated and make it an attainable goal for all backends. I think that's the first place in the discussion. We've got work to do going forward, figuring out how we take the more advanced features and make them work for everybody. I totally agree that, in the case of replication, our first time around we didn't quite hit the mark, and we've got more work to do there. But we're looking at how to fix that and moving forward actively in that area.

So, in terms of my viewpoint on that, I'm kind of the odd person out. My opinion is that vendor-unique features should not be exposed in the Cinder APIs. There are mechanisms inside of Cinder that will allow you to expose and utilize those, whether it's extra specs or something new that we could create, and stuff like that.
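As a rough sketch of the extra-specs mechanism mentioned just above: an operator can surface a backend capability through a volume type instead of a vendor-specific API call. The example below uses python-cinderclient; the credentials, the "gold" type name, and the "capabilities:qos" key are made-up placeholders, and exact client call signatures can vary between releases.

```python
# Rough sketch only: exposing a backend capability through a volume type's
# extra specs rather than a vendor-specific API. Credentials, the type name,
# and the "capabilities:qos" key are placeholders, not real values.
from cinderclient import client

cinder = client.Client('2', 'admin', 'secret', 'demo',
                       'http://keystone.example.com:5000/v2.0')

# Create a volume type and pin it to a named backend section from cinder.conf.
gold = cinder.volume_types.create('gold')
gold.set_keys({'volume_backend_name': 'backend-a',   # matches a cinder.conf backend
               'capabilities:qos': 'True'})          # hypothetical capability key

# Tenants just request the type; the scheduler matches backends on the specs.
vol = cinder.volumes.create(size=10, name='demo-vol', volume_type='gold')
```

The point is that the public API stays the same "give me a volume of this type" call; the vendor-specific knob lives in the type definition that the operator controls.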
But the whole idea — you know, if you saw the keynote today in particular, when you look at things like federated cloud — the whole idea is you want to make sure that no matter what cloud you have, you have the same capabilities, the same APIs; your applications will work no matter what OpenStack-powered cloud it is. If you start bending the rules and saying, oh, well, this vendor feature we're going to go ahead and let in and this one we're not, what's going to happen is you no longer have compatibility, and it breaks the whole model. Thanks, John.

All right, everybody pass the microphone to your left — my right, sorry. So the next question: Cinder now has 40-plus drivers, right? We've all seen this happen. Should they all be upstream? If not, where should they live? How do we prove testing? Today we've got all this continuous integration stuff that we've set up, and if we take them out of upstream, how do we prove that? And is that something that should be done by the Cinder community? We'll start again with everybody that's got a microphone, from Patrick.

Okay, so, yeah, I mean, there's a lot of drivers and there's more on the way. If you look at the Liberty L1 milestone, there's a whole bunch queued up — I mean, we have one queued up. And even just from the last release cycle with our driver, there's a lot of churn and little bug fixes here or there that we had to put in, and it takes core reviewer time for, you know, a log message in our driver that nobody else cares about, right? So in that sense, I think having the drivers out of tree — more kind of like the route Neutron's going down — makes a lot of sense. Testing, though, gets tricky, right? Being able to verify that they actually work — it's going to be a lot of work to get to where we can trust them. I mean, our CI systems right now are not super trustworthy. So I think we have to nail that first, before we start pulling drivers out and saying whether or not they're actually going to work for a customer.

So I think we are getting to the tipping point with drivers now. We've got so many that in the next six months to a year we're going to get to the point where the core team simply can't keep up with them. I think pulling them completely out of tree and saying vendors maintain their own drivers is a recipe for disaster. Our experience of vendors actually having working, tested drivers has shown they don't — they can't. If they're not pushed into it, they won't. So the whole CI thing has been about pushing vendors to actually invest in this stuff. Possibly creating a Cinder drivers tree with a different, larger set of reviewers, and letting driver maintainers review each other's stuff more than the core team, is a possibility that would certainly increase our scaling somewhat; whether it'll increase it far enough, I don't know. But relying on vendors to have out-of-tree drivers, I think, is a recipe for a complete breakdown in workload portability and reliability, which is the whole core mission of Cinder failing.

Yeah, so I — excuse me — I tend to agree with you, Duncan, that pulling all of the drivers out of Cinder itself is going to make it very, very difficult to maintain compatibility and make sure that Cinder is going to work over the course of time. We see this every single day doing our own CI. We see constant false failures, false positives. And we're fighting this problem of how we're testing our drivers, right?
Because right now we're testing several releases against every single patch that we do, with our client and our drivers themselves, because we get support calls from customers that are still running on Havana. And so we need to make sure that the stuff we're doing is still working in Havana and Icehouse and all of those releases. So I think the testing really needs to become much more solid in order for us to even think about pulling the drivers out, because that's the only way we're ever going to know if a specific vendor's driver is going to work in a release or not. That really needs to get solidified. And I don't think we have a solution just yet that we know works for reviewers as well as for deploying Cinder itself — if you need to deploy Cinder, which driver are you going to deploy for your array with that version of OpenStack? That is something we haven't solved yet either.

I think the idea of having maybe a separate repo for drivers is interesting, but I think it really does need to be in-tree. There's so much I can say for adding our driver recently — just getting the feedback from others out there that have done this really improved our driver more than we could have done on our own. So I see a lot of value in having it out there. But I agree the testing just needs to keep getting better, to the point where there is the expectation that if a driver is posting positive results, then when someone pulls down a distribution and deploys it, they can expect it to work and be of good quality.

We're multiplexing the mic here. So all right, pass the mic to your right, folks over there. So today Cinder has the concept of a reference implementation, it's based on LVM, and in general everything that is implemented in Cinder needs to be implemented in the reference implementation first. So let's talk about that a little bit. What do you guys think of that? Does it help development? Does it slow development of features? Should we continue to base it on LVM and the Linux tools, or should we do something else — other open source alternatives like, you know, Ceph, Sheepdog, those kinds of things? So we'll start at this end. Mike.

So yeah, we have the LVM reference implementation. I definitely feel like it has been a very positive thing in the Cinder project in terms of being able to contribute to an API that we stand by and that we have other drivers stand by. We sort of have a way to reference certain features and test them inside of the gate, which is kind of essential. Again, I'm not going to pick on replication — I don't mean to, sorry — but that's an example of something where it would have been really great to, one, have it in the gate, but two, have a way for driver maintainers to actually know exactly how they would implement it in their own driver. So that's my view on it. In terms of LVM in particular, it hasn't been a totally positive experience, in terms of different bugs that we hit in the gate, but I feel like we work around a lot of the problems that are inside of the project. I don't know what that noise is.

All right. So, yeah, I think it's essential that we have a reference implementation for people to work from and to demonstrate that the functionality is working in the gate.
We're reaching the point, though, where I don't think LVM is able to cover all the features we want to look at, and we need to start looking at adding Ceph or something like that that allows us to broaden the coverage we're getting. So whether it's two reference implementations or sharing that responsibility in some way — and I think we've kind of started looking in that direction — I think it will be better once we have more than one backend that we can use as an example implementation to work from.

Yeah, I think the reference implementation is very important, especially when you introduce a new feature and you want to provide an example of how to implement it, so that it's much easier for a driver to add that feature. As for LVM, I mean, we definitely can look at other alternatives. I'm not sure if Ceph is a good one to add, because it has a special protocol, so if you add it as a reference implementation, other drivers cannot just leverage it directly. But LVM supports iSCSI, and a lot of drivers support iSCSI, so I think that is a pretty good one to start with.

So, you know, the key to having a reference implementation, in my opinion, is that it is the reference, so I don't personally like the idea of multiple reference implementations, because I think it kind of defeats the purpose. You either are the reference implementation or you're not. That being said, I think the reference implementation is also vital — it's of key importance, it's something that you have to have. And unlike most people, I think LVM is a fantastic reference. The thing about LVM is it's flexible: we can do iSCSI, we can do Fibre Channel, we can do all sorts of things with LVM. The biggest problem that we have with the reference implementation right now is a lack of people willing to actually invest in it from the developer side. There are a few of us that spend a lot of time working on it and maintaining it, but it doesn't get anywhere near the attention it probably deserves, in my opinion.

I'm going to jump around on my list of questions, so if you guys memorized them, I'm going to screw you up here. So there's a list, yeah. Let's ask the question: if you're a Cinder user today, what are the developers' recommendations on how to get help implementing Cinder if you have any issues? If you guys can pass the mics to your left, we'll take the four of you. Start with Patrick, I guess.

Okay. I mean, basically IRC is the number one place. If you need to get hold of any of us, or have any questions related to Cinder, that's your best bet. The mailing list works, you know, for a different time zone or something, but yeah, I think for 99.9% of any questions I have, I go to IRC and I get help there.

Yeah, I think you'll hear the same answer over and over: IRC is the way forward. Just remember that there's a lot of variance in time zones and in how much attention people are paying, so don't join IRC, post the question, get no answer in five minutes, then quit and disappear — because then we can't answer it. Join IRC, post the question, and wait. Most people are logging the channel, so you might get an answer later, when people wake up and come online or stop talking about what they're doing. But IRC is the place to be.

Yeah, I have to agree.
We were just having a discussion about this with some of the developers that are working on Fibre Channel, and the answer is the same for developers as it is for end users, right: IRC is the active place, every single day, where all of us are hanging out, and we're more than willing to help with any problems that folks are having with Cinder, with development, and also with deployment as well. But the mailing list shouldn't be overlooked quite as much either, because a lot of us, like you guys said, have time zone issues, and we're not always available at three o'clock in the morning to answer questions. Some of us are, but most of us aren't.

And I have to agree. You know, there's Ask OpenStack, there's the mailing list — there are other resources out there — but probably the best place to get help is on IRC. And like Duncan said, don't ask a question, wait two minutes, and drop off; really stick around. You can learn a lot just by being in the IRC channel.

All right, multiplexing the mic again. So based on that, let's move on to: what does it take to be a successful contributor to Cinder? If somebody out here in the audience wanted to actually contribute something, what does it take? And pass the mics to your right.

So, that's a good question, and it comes up a lot. In my opinion, the biggest thing that makes you a successful contributor is that you're not just contributing a driver. That's a great starting point — it's an important starting point, and the great thing about Cinder is the fact that we do have so many people interested in contributing drivers. But if that's all you're contributing, then the model is broken, right? Because the whole idea is that that's supposed to be the on-ramp. It's supposed to be the gateway to get people in, get people involved, and help make the project overall better. And that's what we're looking for. So what makes a better contributor is the person that actually looks at the whole Cinder ecosystem. And then the other thing that's really important is actually understanding OpenStack. It's amazing how many people contribute code to projects in OpenStack and don't even know how to run OpenStack. They can't deploy it. They can't even deploy it with DevStack. And even if you help them deploy it with DevStack, they can't run it and utilize it. You have no business contributing the code at that point, in my opinion, if you don't know how to actually use what you're trying to build. So there are some things to keep in mind, I think.

You stole my answer on the deploying part. Yeah, we have — I don't want to say a lot of people, but there are times where people bring in drivers, and then it comes to actually, like, "oh, I need to set up a CI for this driver and I don't know how to actually deploy OpenStack." Kind of amazing. But I would say in general, if you're looking to get involved with anything in OpenStack — for myself, I just wanted to be part of the project. And for Cinder in general, it's something we've been focusing on, and I've also been mentioning in posts, that I want to make this project a very welcoming starting point for people who want to get involved. Maybe you don't do storage, but if you want to have a positive experience contributing code and feeling like you're part of a project, I absolutely welcome you to join us. We're really friendly people. Jay?
So, yeah, I get this question all the time. Kind of the example I give is, you know, two and a half years ago I didn't know anything about OpenStack, and I didn't know a lot about storage. So I'll count myself as a success story, with Mike and John here and the rest of the group who've brought me up and helped me learn about OpenStack and doing code reviews — and that's the way to get involved. Come sit in on the weekly meeting if you can, or look back at the logs so you know what's going on. Don't be afraid to open bugs so that we know what's going on, and if you can dig in and even try to provide a solution, that's awesome, but we know that's not always a possibility. In other words: code reviews. I did tons of code reviews just to get what experience I have thus far, and that really gets our attention — when you're helping out and giving your input. Then we can latch on to that and use it as an opportunity to continue to help you learn more about participating in Cinder.

So, Xing? So, I will just reiterate what others have already said. You've got to participate. Get on IRC, talk to people, participate in the weekly meetings, and help with the reviews. Also, respond to review comments in a timely fashion. All of that.

Okay, cool. Thanks. Let's see — anybody in the audience got any questions? If you do, raise your hand. Or — okay. All right, so I'll keep asking questions. We've got one over here. All right, perfect lead-in to what I was going to ask. So I'll repeat the question: he mentioned that we had brought up some migration and replication problems, so can you guys expand on that? And who wants to expand on that?

I should maybe start. Okay. So, I inherited some code that had already been written when I came up in the Cinder community, and we've needed to improve it. So we're working on that. One of the big problems with migration we're finding is just a misunderstanding of how it actually functions. You know, there's migration and there's retype. If you want to take your data and move it between basically two, you know, physical backends that are the same configuration, that's where you want to migrate your data. But if it's two different backends, it's going to fail, because it doesn't find a host of the same type — you need to actually retype your volume to get it to move. And so there have been a number of bugs out there now where we've commented and tried to explain that better. We need to get a documentation update out there to help explain it better. But I think a number of the problems around migration and retype are a misunderstanding of how to actually use them. So that would be my initial thought.

Just FYI, migration and replication are basically broken, with the exception of one driver. I mean, that's the reality. They don't work, and they don't work properly. I don't agree that it's a documentation issue at all. You can try it for yourself — you know, use LVM and see if you can get it to work — but there are problems with it. We are in the process, as Jay said, of fixing it. But right now, first of all, it's only implemented in a couple of the drivers, which creates a problem. Replication in particular is only implemented in two of the drivers, and even there it's unknown exactly how well it works. There are some problems with it. So, not to cut it short — who else wants to comment? Duncan, go ahead.
So, I think my take on replication is: replication was just something we attempted. It didn't work. I treat that as something different — it was an experiment. We're going to do it better, and I don't think we could have done better than trying it and actually finding out. We discussed it so many times, and the discussions were going around in circles, so in the end somebody had to step up to the plate and write it. And it kind of sucks that we're going to throw a lot of their work out the window, but on the other hand there are a lot of lessons learned, and we'll end up with a better product. As far as migration and retype are concerned, I think Cinder really needs multi-node testing — multi-node CI and Tempest scenarios, or whatever is coming after Tempest — to actually exercise this. Because the thing we find again and again and again in Cinder is that if it isn't tested, and tested all of the time, it doesn't work. So we need multi-node testing. We need lots of people doing multi-node testing. The fact that multi-node testing is hard, I think, tells you a lot about the state of OpenStack as a mature product. The fact that setting up a multi-node automated test is hard tells you there's something wrong with OpenStack — just like setting up CI was hard in Cinder, and that told you a little bit about the state of Cinder. I think we need to grow up and fix our basic low-level problems across all of OpenStack, not just Cinder, and get things like multi-node testing to the point where anybody can do it. It should be easy.

Thanks. Anybody else? Yeah, multi-node. So, yes. Any other questions from the audience? Yeah, we've got one over there. Let's take this one first.

Hi, my name is Satya Madhavan. I'm a colleague of Xing Yang at EMC. So, a couple of points with respect to local replication and remote replication. When we take local replication, there is functionality today to take local copies, snaps, or anything like that. Now, what happens if the source gets corrupted? Is there functionality being proposed to restore it back from the target — the target we generated — to the source? That's number one. Then I'll ask about remote replication later. Anybody want to take that? Duncan?

So right now, that's part of the problem. Right now, that's something that isn't really well thought out, and it's somewhat, like I said, basically broken. The idea is we have a number of things that we're trying to work on right now in terms of rewriting that, designing it, and figuring out how to achieve it going forward. I think what we'll probably do is hopefully, this time, try a more iterative approach on it and start with the failover piece — maybe do the failover to switch over to the secondary, have that be a separate admin call that has to actually initiate it, at least to start with, and then later down the road have it possibly be automated to sense the failure and pick it back up on its own. There are a number of options. The biggest challenge that we have with things like replication, however, is when you try to have a common API to serve every backend — EMC, NetApp, SolidFire, HP, IBM, whatever it might be — they all behave very differently, and even worse, they all tend to define replication differently. Replication means different things to them. So until we can all settle on what we want to define it as and how we want to solve it, we're kind of stuck at that point right now. Duncan, yeah.
So one of the things I think about replication in the cloud is: you've got to think cloud. There's an awful lot of people who are trying to build VMware — to turn OpenStack into VMware. It is different. So some of the replication story is people going, "oh, well, I want my volume to be back." And it's like, no — you get a volume back. Your volume ID doesn't matter. You get a reference to a chunk of data, and you can attach it to a VM. That means you have to change your mindset a bit, to think "this is a new volume ID containing my old data," that sort of thing. I think if we can get that message out there — that we're cloud, volume IDs are nothing special, volumes are nothing special, you shouldn't be thinking "I want to put the data back on my old volume," you should be thinking "I want to get a handle on my data, and it really doesn't matter how I get that handle" — then the looser we can keep the interface, the more different backend paradigms we can map on top of it.

Mike, Sean, you guys look like you might want to add something. Walt? Anybody? Okay. All right, I'm going to jump to the next question here from the audience.

Yeah, it's a different topic. What's the status of Cinder backup? Is it scalable? Is it stable? Can you back up between different backends? How is incremental working?

Who wants to take that one? Patrick? I can start and then you can. So I can start, because I just did some work on incremental backup. Incremental backup, actually using changed blocks, works for Swift and also for NFS. So it works. I know I've heard that there are some performance issues, and there is some work coming in Liberty that wants to address that. So I'll just leave that to you, yeah.

So, the performance of backup — we originally wrote it as a concept that could be improved, and we haven't got around to improving it. So there's a lot of CPU-bound stuff, there's a lot of parallelism that's not exploited, but I think I'm reasonably happy with the concepts. We've got incremental backup now, and the internals of that can change without the tenant having to care, which is an important bit. But we really need to put some focus on performance. And I — strike that, we — our team tried that and failed miserably. So volunteers welcome, particularly if you know a lot about parallelism and Python. Mike?

In particular, though, there is going to be an actual session for this release, as I understand it, specifically for scaling the backup service. Positive. Cool.

All right. I saw a couple of other hands over here. So I have a couple of questions. First one is around the API — the osapi_max_limit limitation in regards to snapshot management. Is there going to be any change to that? Because right now there's a limit of a thousand, so if you try to do a volume list, you only get a list of a thousand. That's the first question. Second question is about migration. You said there's only one driver that actually works — can you tell us which one it is?

All right. So, easy one first: which driver does migration? Who wants to answer that? So, the authors were from IBM, and so it works with the IBM driver. Okay. As far as the max on the volume list, we put pagination in there, so that should be fixed. That shouldn't be a problem anymore in Kilo, with the Kilo v2 API. So there's your answer — it's already fixed. Anybody else? Other questions? All right, I've got two more I'm going to take. I'm sorry, I can't see over there. Okay.
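A small sketch of the pagination just mentioned for the Kilo v2 API, for anyone hitting the thousand-item cap: the v2 volume listing accepts limit and marker parameters, so a client can walk the full list in pages rather than raising osapi_max_limit. The credentials and endpoint below are placeholders, and the exact python-cinderclient keyword names may differ slightly between releases.

```python
# Rough sketch: paging through all volumes with the v2 API's limit/marker
# parameters. Credentials and endpoint are placeholders, not real values.
from cinderclient import client

cinder = client.Client('2', 'admin', 'secret', 'demo',
                       'http://keystone.example.com:5000/v2.0')

marker = None
volumes = []
while True:
    page = cinder.volumes.list(limit=1000, marker=marker)
    if not page:
        break
    volumes.extend(page)
    marker = page[-1].id   # the next page starts after the last volume seen

print('total volumes:', len(volumes))
```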
I'm going to take these two and then that's it. I have a concern about the LUN count limit on most block storage arrays. In most cases it seems to be around 8,000, which on the face of it seems pretty good. But if you had 500 VMs with a couple of LUNs per VM and you wanted to do a week's worth of snapshots, you'd use it up immediately. So I can only assume that the array manufacturers are going to increase that limit. But my other alternative is maybe to just use LVM and attach much larger LUNs to the compute nodes. So what is your opinion on that?

So this is something that I get into a lot, and really what it comes down to is what you build your cloud on — picking an architecture and a storage device that is actually designed and built to operate in a cloud environment, right? If you're stuck with something that has that sort of limitation, you're going to have those limitations. You can come up with workarounds, like putting LVM on top of it and stuff like that, but then you're going to pay a price to do that as well, right? You're going to sacrifice performance, you're going to sacrifice some of your flexibility, and stuff like that. So the reality is, even though we have 40-some-plus drivers inside of Cinder right now to use as a backend, there aren't 40-plus backends that are actually good candidates for an OpenStack cloud, or at least for every OpenStack cloud, right? Now, the other thing is, every OpenStack cloud is unique and different. So you may be able to fit within it — you may not have a 7,000-LUN limitation issue, right? You may never get there. You may only use a couple hundred LUNs, a couple hundred volumes, whatever it might be. So that's a tough one, and it's really hard to figure out how you digest that. I hate to punt and say you need to look at what you're using for your backend, but honestly, I think that's kind of the answer. There are a lot of companies coming up now with storage products that are focused specifically on solving that sort of problem, right — working in a cloud architecture. Anybody else want to comment on that?

The comment was, he's tried a few — some are at 2,000 and some are at 8,000. I have to say, though, 8,000 does seem a little excessive. That seems like a pretty high LUN count. LUNs plus snapshots. Anybody else up on the panel?

So, coming from a company with an older storage solution: the cloud is exercising storage, I think, in ways that even five or ten years ago we hadn't seen coming, and so those of us that have the existing solutions are discovering these challenges, and we're passing that back to our hardware backend developers and working with them to look at how we're going to resolve them. So I hope we'll be catching up soon.

All right, I've got one more question from the audience that I told them I was going to get to. Thank you. In fact, I have two questions. First question is about Docker integration. I see the new feature list for Nova and Docker — Nova with the Docker driver does not support volume attach or detach. So personally, I think there's a technical reason that Docker cannot support volume attach, but I'm not sure whether there will be some integration work for the project. The second question is still about volume replication, sorry. As you know, I'm from IBM. We finished volume replication in the Juno release. But the new volume replication design will make some changes.
So that means we need to think about the new design change for volume replication in our release. I mean, what we did in the Juno release will maybe not work anymore. Is that right? Yeah.

So I'll take the first question about Docker, and I'll kind of repeat and summarize. The question was about Docker integration — there's nothing for Cinder in terms of an integration right now. The issue with Docker, basically, is that you can't do dynamic attaches of storage after a Docker instance has been spun up. I have some code in process that does some things inside of the Nova Docker module that will allow you to attach a Cinder volume to a Nova Docker instance when you boot it. There are some questions right now in the community about the future of the Nova Docker driver and the interactions with Magnum and things like that. But the Docker pieces, they are coming. I would expect to see something in Liberty — hopefully sooner in Liberty rather than later. But there will be an option to add Cinder storage volumes to Docker instances at that point.

All right, with that, we're about out of time. So I know we didn't get to the second half of your question — I apologize, but I'm getting the high sign here from the AV guys. These guys will be up here for a little bit afterwards. I think we've got lunch next, so there should be some time to hang out. So thank you all. Thank you.