All right, we're gonna... we'll give it a couple minutes as people still file in here, and then we'll get this going. I don't know about that, I'll just hold on to this. All right, we're at two minutes... thirty seconds... All right, good. That's the way to start.

Okay, welcome. I'm Ed Balduf with SolidFire. I'm going to be the moderator, so I don't get an opinion here. We'll start with a few stock questions that we've written up, or I've written up, and then if you guys have questions, they want us to use the mic in the aisle there, so just jump up and stand behind the mic and I'll call on somebody.

So let's start with the illustrious John Griffith, and then we'll work our way across. Everybody introduce yourself: what company you're with, where you reside, what your background is (are you an ops person, a programmer, systems, networking, that kind of thing), and how you got involved with Cinder. John?

All right. My name is John Griffith. I work at SolidFire. I'm from Boulder, Colorado. Well, kind of Boulder, outside of Boulder. I've been working on Cinder since the beginning, so about four years. A little over four years ago I started on OpenStack, and I started the Cinder project with a number of the folks that are on this stage.

Hi, my name is Zhiteng Huang. I'm from China, based in Shanghai. I work for eBay, in a DevOps role, so we run a private cloud.

Patrick East, I work with Pure Storage, from Bellevue, Washington. My background, I guess, is mostly embedded applications, but now I'm doing cloud applications.

I'm Sean McGinnis. I work for Dell, in Minnesota. I work in applications too, currently focusing on OpenStack Cinder, and I'm PTL for the Mitaka release.

Hi everyone. Sorry, I break mics and sound systems. I'm Mike Perez, former PTL for the Mitaka... sorry, don't quote me on that... for Kilo and Liberty.
I'm working on that. So yeah, I've been involved with OpenStack since 2010, with nova-volume and all that stuff. I had previously been working on storage drivers and things like that. Currently I work for the OpenStack Foundation, focusing on cross-project work. I like cats.

Hi, I'm Jay Bryant. I work with IBM, basically as the liaison between Cinder and the IBM driver developers. I'm based out of Rochester, Minnesota, and I've been working on Cinder for about three years now. How did I get into it? I got really lucky: my manager picked me to work on Cinder, and I've gotten to work with these fine people, so it's a good thing.

I'm Xing Yang. I'm a technologist from the office of the CTO at EMC. I'm also a core reviewer in Cinder and Manila. I live in Massachusetts, in the US. I wrote a Cinder driver for EMC storage and contributed it back in Grizzly; that is how I got started. Since then I've shifted my focus to core contributions. My work includes consistency group support, incremental backups, and non-disruptive backups.

Hello, my name is Walt Boring. I work for HP. I've been working on Cinder since the Grizzly timeframe. Along with Kurt Martin, we added Fibre Channel support to OpenStack way back when. I support the 3PAR drivers and the LeftHand drivers, and I work on os-brick.

All right, cool. I'm figuring out who I'm going to ask first. Who's the first victim here? And I think that's you, Patrick. So the first question is: what happened in Liberty? Specifically, what were you working on, and how did it turn out?

Sure. So I spent probably the first half of Liberty doing Pure Storage stuff. We added a Fibre Channel driver, things like that. A lot, probably 50% of my time throughout the whole release, went to working on CI system stuff, which is probably a common theme for any of the guys who have to maintain one of those. And also adding in support for image caching in Cinder. So we got that in for Liberty.

A new feature, caching. What does that feature mean, exactly?
Sure, yeah. So previously (or, well, if you don't turn it on; it's disabled by default) every time you want to create a volume from an image, we download it over HTTP from Glance, store it in a temporary file, and do a dd to copy the image data over to the volume. So if you want to spin up 10 or 20 VMs really quickly, backed by volumes, you do that 10 or 20 times.

As opposed to something smart. Not very quickly.

Yeah, exactly. It's very slow. Whereas pretty much every one of our storage solutions (you know, the people who pay us to work on OpenStack) supports doing really fast image clones, or some way, on your storage array at the back end, to clone volumes quickly. So now we'll download the image once, keep it in the cache, and on subsequent requests we'll just do the clone. After you've done it once, it's just Cinder database calls and whatever management API calls you have to do against the back end.

So this is not going to involve going through Glance, or the Glance back-end store or whatever. It always stays within your actual back-end storage. That is a smart copy-on-write, and it's a generic solution, which is why it's tremendous.

Walt, what did you wind up working on in Liberty? Give us a little summary.

Okay, so I focused primarily on finishing the os-brick project, which is something we were all kind of involved with for a long time. What was left in Liberty was to extract the libvirt volume drivers that were in Nova. A while back we created the brick code as part of Cinder, and in Kilo we extracted it from Cinder and created a separate project called os-brick, which is on GitHub and is owned by the Cinder project team. So what I focused on was extracting that same code that lived in Nova, and making Nova's libvirt volume code use os-brick.

Thanks, Walt. Xing?

So I worked on adding support for non-disruptive backup. Previously, when you do a backup, you have to detach the volume. So now you
can actually back up a volume while it's still attached. And I also added an API to support cloning a CG, which allows you to create a new consistency group from an existing one.

Cool. Jay, you got it.

I helped encourage John to work on replication v2. I helped keep that going, so for those of you that know the history there, that was a lot of fun. And then I've also worked with my team to finish cleaning up the Oslo libraries in Cinder, removing stuff that we didn't need anymore, and to work on improving the way that we're handling config generation in Cinder. So those were a couple of the major work items we hit during Liberty.

John, without giving away our session on Thursday, do you want to talk about replication for a second? What did you do in Liberty?

For Liberty I worked on a bunch of bugs (I actually spent a lot of time on that) and I worked on replication. So we did v2 of the replication feature in Cinder. I won't go into detail, but basically there was already a v1 replication feature in OpenStack; it was just a little difficult to implement. So we regrouped as a team, and I tried to lead an effort to help everybody get on the same page and agree on a core implementation that everybody could adopt. We focused on just doing the core piece of it for Liberty, and then in Mitaka almost everybody will have a driver.

So when you say core: there are no drivers out there that support replication today?

Right, and I was very unpopular, because my stance was that no driver should actually implement it yet, because it wasn't finished until two weeks before we were cutting the release candidate. In my opinion, releasing a brand new feature, especially a major one, two weeks before RC is stupid, so we shouldn't do it, because customers would be very unhappy.

Cool, anything else? All right, Mike, I'm going to let you summarize, since you were the PTL for Liberty. Any more?
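The image-cache flow Patrick described earlier, download once and then clone from the cached copy, can be sketched roughly as below. The class and method names here, including `fast_clone` and `write_image_data`, are invented for illustration; this is not Cinder's actual code.

```python
# Sketch of the image-volume cache idea: the first create-from-image
# downloads from Glance (slow path); later requests clone the cached
# volume using the back end's fast clone (smart copy-on-write).
# All names are illustrative, not real Cinder APIs.

class FakeBackend:
    """Stand-in for a storage back end with a fast array-side clone."""
    def __init__(self):
        self.downloads = 0
        self.clones = 0

    def write_image_data(self, volume, image_id):
        # Slow path: in real life, an HTTP download from Glance plus a dd copy.
        self.downloads += 1

    def fast_clone(self, src, dest):
        # Fast path: array-side clone of an existing volume.
        self.clones += 1


class ImageVolumeCache:
    def __init__(self, backend):
        self.backend = backend
        self._entries = {}  # image_id -> name of the cached volume

    def create_volume_from_image(self, image_id, volume):
        cached = self._entries.get(image_id)
        if cached is None:
            # First request for this image: download once, remember it.
            self.backend.write_image_data(volume, image_id)
            self._entries[image_id] = volume
            return "downloaded"
        # Subsequent requests: just a clone plus database bookkeeping.
        self.backend.fast_clone(src=cached, dest=volume)
        return "cloned"
```

Booting 20 volume-backed VMs from the same image then costs one download and 19 fast clones instead of 20 downloads.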
Yeah, all these people did all the work, and then I just looked pretty. But no, seriously, I guess my own work was mostly making sure that things work: bringing them into test environments, making sure that the documentation is there. And I'm trying to think what else was missed... oh yes, we had some things that didn't land, things I was actively testing but couldn't get merged in time.

You'll probably hear me preaching a lot, especially on Twitter, about rolling upgrades, and it's a boring thing to keep talking about, but the great thing is that this community is actually working on real solutions for it right now, not just within Cinder but within other projects. It's not myself that's working on it. I'm probably going to butcher the name, but Thang Pham (yes, thank you) has been doing all the work on making the services independent of the databases, as well as making the services independent of each other through RPC compatibility. All of these solutions are things that either eventually will be, or already have been, brought into Oslo, so that other projects can have a base foundation, because I do respect the fact that every single project is going to be unique.

Thank you. You did the v2 API stuff, right? That was kind of yours.

That was my thing, yes.

So don't sell yourself short.

So let's talk about Mitaka.

Well, my "It worked in DevStack" shirt, that is like my only good contribution to this community.

So we're going to talk about Mitaka a little bit now, and I'm going to turn it over to Sean, because I'll let you give a summary and call out whoever else is going to be guilty.

John's always guilty. Well, a lot of it is continuing the work that's been mentioned here. The replication piece, because it came in so close to the end: now I'd like to see some drivers. Some of the issues there... the spec is great; it's when you get into the implementation that you find out what you're
missing, what you didn't think of. Rolling upgrades: there's some work being done on actually being able to test that a little more thoroughly. That'll be great to have in the community, actually having some backing data that it will work for end users, and, like everything, I expect there's probably some work left there.

The third-party CI: we've come a long way on that, but there are still issues, I'd say. Everyone's done a great job implementing their third-party CI; now we need to start refining it and making sure that the results we get from it are consistent, to the point where, if someone submits a patch and there's a failure in someone's third-party results, it's not just ignored because "that one always fails, I'm not going to look at it." And I guess I'll open it up to the team here as well. We have a lot of different initiatives going on; anything we want to highlight?

One thing I forgot to mention that we worked on in Liberty is migration and retype. That's been a constant source of bugs and concern. We've been working on that as IBM contributions going into Mitaka: finishing that up, reporting back to users better on how migrations and retypes are progressing, and getting that code out there. So that's a continued focus.

Okay.

Yeah, I have something to add about what's been done for Liberty. It's a big step forward that we have support for generic migration, which means that for those storage backends that support iSCSI or Fibre Channel, as long as they have a connector in our os-brick library, they can do a migration from one back end to another.

Why exactly did we do the os-brick library? What benefit did that bring to other projects?

For example, before we had os-brick, we had code duplication in other projects like Nova. Nova actually had a copy of Cinder's code to do the volume attach and volume detach. When we wanted to fix a bug in the Nova copy, it took a very long time, much longer than what we can do within the Cinder
project. So now we've extracted that piece into a library, which means we have it all under one point of control. So that's a big win.

Yes. So, we're talking about things in Mitaka now, so I'm going to steal your thunder unless you speak up.

Well, I have thunder. So, Michał Dulko has been spearheading a lot of the HA involvement inside of Cinder itself: the idea of having multiple volume manager processes that can talk to the same back-end storage without clobbering each other. There are other things that help with that initiative, I understand. There's a big piece that was actually solved, or I won't say solved, but there was a really great consensus that came out of the summit today that I wanted to highlight: for the distributed lock manager, there was consensus that we will start having abstraction layers for the different lock managers, so that we can begin to make progress on this sort of initiative with volume managers. Which is a huge win, a huge HA win and all that stuff, in Cinder.

Anybody else want to comment on HA? I'll let that one lie.

A lot of people have been spending a lot of effort on it as well. Gorka has been doing a lot of work with that too, and Scott D'Angelo. The whole community is really working on this effort, and we hope to get something from it. I think that would be a big win for Cinder.

And I'm going to propose a completely different solution.

Just one other thing I want to highlight as well: Scott D'Angelo and others have been looking at microversions in the API. A lot of work has been done there in Nova, and I believe in Manila as well.

What do microversions get us?
Microversions allow us to add changes between API releases without having to come out with an entirely new API. We have an issue right now, something we were just discussing today: we have the v1 API, and there were some issues with that. Mike did a lot of work on implementing the v2 API, and we'd really like everyone to use that v2 API, but we have a lot of users out there on older releases still using v1, and for whatever reason they can't migrate off of it. So we'd like to get rid of that code, but we can't. That's one piece of it.

And then, moving forward, to be able to add new features we need some flexibility in what we expose through the API without having to reimplement things and cause breaking changes. So one of the things microversions give us is the ability to add things and remove things with a little less burden on long-term maintenance.

One thing to add to that: look at the way the API code used to be structured. If you look at v1 and you look at v2 and you do an ls on the directories, it's about 90% code duplication across the two. So it's kind of silly, the way we've done it in the past. The thing with microversions is that when you implement a change to a specific API call, you can just bump that particular call's version and still have the old call available as well, without having two branches or two copies of the same code.

For a lot of you, from a user's perspective, this may not seem like a big deal, but at the end of the day it's for the user, and that's what's driving this: making an improvement for users.

Yeah, it also saves us from needing to release a v3 just because we make a change to the API, which would mean a lot of work for clients, which is kind of a nightmare for operators too. So I think this is a good thing.

And a third copy of all of the code.

A third copy, yeah, exactly.

So, while you have the mic: anything about multi-attach? Is that going to hit in Mitaka?
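The microversion scheme described above, one endpoint where each call can evolve per version while the old behavior stays available, can be sketched like this. The version numbers and handlers are invented for illustration; they are not Cinder's real API.

```python
# Sketch of API microversion dispatch: each handler declares the version
# range it serves, and the version requested by the client picks one.
# Versions and handlers are illustrative only.

def parse(version):
    major, minor = version.split(".")
    return (int(major), int(minor))

class Router:
    def __init__(self):
        self._handlers = []  # (min_version, max_version, func)

    def register(self, min_v, max_v="9.99"):
        def decorator(func):
            self._handlers.append((parse(min_v), parse(max_v), func))
            return func
        return decorator

    def dispatch(self, requested, *args):
        want = parse(requested)
        for low, high, func in self._handlers:
            if low <= want <= high:
                return func(*args)
        raise ValueError("no handler for version " + requested)

router = Router()

@router.register("3.0", "3.7")
def show_volume_old(volume_id):
    return {"id": volume_id}                      # original response body

@router.register("3.8")
def show_volume_new(volume_id):
    return {"id": volume_id, "attachments": []}   # field added in "3.8"
```

A client asking for version 3.5 still gets the old body; one asking for 3.8 gets the new one, with no duplicated v1/v2 code trees.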
Awesome. So there was a mailing list post last week. Someone posted about trying to coordinate that effort on the Nova side.

He's the project coordinator, right?

No, it was someone else. It wasn't Mike, and I just wrote the guy today asking if he's here, because I'm very interested in that. Multi-attach: we implemented that in Cinder a couple of releases ago, and we've tried very hard to get the code in Nova to land, especially in Liberty, but the amount of work suffered quite a bit of scope creep, and it kind of blew up a little bit on us. So there's still an ongoing effort to get multi-attach implemented on the Nova side so we can complete the end-to-end capability, and we're still very interested in making that happen.

Cool, thanks for the comment. Attaching your volume to multiple VMs: that's if you want to do a clustered file system.

Yeah. I'll use this opportunity to point out that you do need a clustered file system for this. There's been some confusion on that. You cannot just attach the same volume to multiple hosts; ext4 will fall completely on its face if that's what you're putting on your volume.

Hand the microphone to Xing, please. Xing, you've been working on backup a lot. Is there anything for Mitaka that you're planning, or thinking about, or anything else you want to talk about in Mitaka?

Yeah, so I'm planning to add support for backing up a snapshot. Right now you can back up a volume, but you can't really back up a snapshot directly yet. This just gives the user another layer of data protection.

Thanks. Patrick?

So, a couple of smaller things, I guess, on my plate. One thing we tried to add in Liberty that didn't make it, which we're going to get in this time, is trim support. Those of us with flash arrays really want that, and users who pay for our arrays probably also want it. So that's work in progress; we're getting there.

Is that in Cinder? Or... is it in Cinder?
Yeah, so we landed the Cinder part in Liberty; the Nova side is on its way in for Mitaka. And the other thing: we have a session, sort of on top of microversions, about experimental APIs. Maybe... I don't want to say it's happening, but if you guys have interest in that, we're going to have a session on it, so you may want to be there. It would be nice for things like when we merge giant new features right before we release.

You might want to explain a little more about what that means.

Yeah, sure. So, sort of modeling after what Manila has done on top of their microversions: they allow for experimental APIs. Basically, these are things they've added and merged into the core service that you can start using if you really want to, but they come with a big warning label that says, hey, we reserve the right to change these; they're not totally finished; it's a work in progress. The alternative is, at times like with replication or other features like that, when it's too close to the end of the cycle and we don't have enough testing and we don't have enough drivers, we either merge it, and you have a feature that somebody could potentially try to use but that may not work, or we hold the feature for another six months and slow down testing again. So that's kind of the goal with those.

Jay, do you have anything to say about Mitaka?

I think I've covered my points of interest.

Thanks, man. And John: last, but not least.

You got it. Yeah, I've got a whole list of things...
The most interesting one for me is something I've wanted to do for a while: I want to decouple Cinder a little bit more from OpenStack. Make it more of a standalone service that's consumable for clouds other than OpenStack, for bare metal, whatever it might be. There's actually some work we finally got started in the Liberty release to do that for bare metal. I think we have a long way to go, and there's a lot of potential and a lot of cool things we can do, and we're going to talk about those things this week and hopefully come up with some good ideas. Hopefully it's finally going to happen.

And I'll just shamelessly plug the session we're having about replication: if you want to see how it's supposed to work, John and I are going to talk about that on Thursday. It's going to be awesome.

All right. I'd love the heckler to show up, and we'll put him to work. All right, so: you have not said much, so I'm going to throw you the hot potato. What do you think Cinder needs for Docker support? We'll pass the potato around when you're done.

So, there are several different solutions for integrating Docker with storage. Some say you should use a shared file system, like Manila or something. Some say you can work with Cinder, because you can attach the volume to something and then detach it when it's finished, something like that. Within eBay we're still evaluating solutions like that, so I'm open to any solution that works for us. It doesn't have to be specific.

You bring a unique perspective to the panel here, because you are an end user. That's why I'm asking you some of these questions.

Yes. So I think I'm a little different, because I used to work for Intel, and I'm still not from a storage vendor, and I mostly focus on Cinder and not the driver part. So I try to contribute that piece.

One thing I wanted to add: he actually wrote our entire scheduler code base, which was a pretty amazing feat, and everybody should give him a hand for it.

But to the Docker question, which I kind of left out: that's actually one of
the things about the standalone service work I was talking about; that's one of the big wins, actually. There are different ways. Everybody knows containers are the hot new thing, and there are different theories on whether containers live in OpenStack, whether Magnum manages them, whatever you might do. My opinion is you should be able to do whatever you want. One of those things: Docker has the ability to do volume plugins now. Having a Cinder plugin that talks to all the Cinder drivers would allow all of us to just continue on upstream and give a solution for that. That's one of the impetuses for that whole effort.

Over on this side: anybody want to comment on Docker?

That was a better answer than mine, so... cool. I don't know a lot about the details that are going to need to be worked out, but I think it gets to the discussion of what we have to do as a Cinder project to be relevant and continue to move forward. We need to be looking at these technologies, like Docker and what's coming down the road, and making sure that we're able to work with them so that we continue to grow. So that was a good answer, John.

All right, let's change to a different... oh, if you've got something, no, go ahead.

I just saw on the mailing list that the ops folks are trying to propose a tag called "containerization," just to indicate whether a project can be deployed in a container, in production. I don't know if that's going to be approved or not, but if it is, then it's a good tag for us to have.

Yeah, for sure, and that's kind of the other side of it. That's one of the things that's come up in HA solution ideas and things like that: actually containerizing the service. As opposed to building a bigger service, an HA service, you break the service down and let another tool, whether it be a Mesosphere or a Kubernetes or something like that, take care of the HA pieces and the scaling pieces, right? So, yeah.

All right, so we'll change directions here a
little bit and talk about CI systems. So, you guys wrote up some questions here. We've got a lot of third-party CIs. What's your opinion? Is everybody getting it right? Is it working? Because you're going to have to be the enforcer this time.

All right. Like I mentioned before, I think everyone's done a lot of good work getting their CI systems where they are today. There's still definite improvement that needs to happen, the main thing being making the results useful: being able to see a failure and not just blow it off, but actually look at the failure. You know, our third-party CI, just like a lot of others, still has random failures, which makes it a little less likely that someone will actually take a look when their patch is submitted and gets a failure result. There are a lot of moving pieces; I think there are just different areas where we need to keep refining how it works. And I think Ramy has done a lot of great work starting to standardize the CI solutions, so that there's at least consistency across everyone's CIs, and hopefully that leads toward bug fixes, improvements, and making everybody's CI work better.

So, one thing I want to add to that, just so people know and don't get the wrong idea: the failures that some of these systems are seeing, the intermittent failures, are not cases where the back-end device doesn't work and that's what's failing. They typically end up being things like the actual Jenkins deployment failing; it's something in the automation, or in the CI itself, that's the piece that's failing. If it were the driver or the back end that wasn't working, we would have yanked them and thrown them out already. So that's not the problem this time around.

I think the policy we had in the past worked, and we need to keep enforcing having third-party CI running. And if we can't get involvement from a vendor, to make sure that they're at least working on it, then we have to take their driver out of the code tree.

I think one of the
good things our team has been doing, and I look at Walt specifically, is that when code gets pushed up, if your CI system doesn't run against your own code, we're not going to give it a plus one or merge it at all. That's been painful for some people, like me, but it's given us the right tool to go back and say: look, this is serious, and we need to be doing this, running tests frequently so that we know things are working. For everybody else, it's something to watch for.

Any questions from the audience?

I apologize if this doesn't make sense in context, but hopefully you guys can add the context. My question is about the scheduler. There's been some ongoing work, so what's the current state of the Cinder scheduler? You were talking about multiple volume managers, and one of the jobs of the volume manager, I believe, used to be picking the host you're going to schedule the volume on. Has there been any work going on there, or what's the current state of the scheduler? How does that work, or is that even still a thing in Cinder?

So, I think it was in Kilo that we added support for pools, so the volume manager doesn't have to do any scheduling. Right now the volume driver has to report how many pools it has, and then the scheduler makes the decision. Each pool is treated like a whole storage backend; it's just like having multiple storage backends that a single driver can talk to. You can even create a volume type that maps specifically to one of the pools rather than the entire backend.

So our model is a little bit different at this point: all of our stuff goes through the scheduler before it gets to the manager. The manager right now is really just a shim of common code in front of the drivers. And, as he was saying, basically each pool looks like its own independent backend and is treated as if it were a backend by itself. The problems that they're trying to solve on the Nova side are a little bit different than what
we've got. They're also looking at doing things like, if I recall, more of a common scheduler, like an Oslo version of the scheduler code. But they have some different problems that they're trying to solve that we don't really have. We don't have the problem of scaling to a thousand backends; we don't do that sort of thing in Cinder. Most customers have one backend, or they might have three or four, but Nova is looking at "I'm going to have a thousand compute nodes that I have to schedule across." So that's a different problem.

Following on the aspect of scheduling on the storage pool as opposed to the host: for migration, then, would we be migrating between storage pools?

Yes. We support migrating from one storage backend to another, like SolidFire to EMC, so we definitely support migrating from one pool to the other. It's just that, if the storage backend is able to create a shortcut for you, like when the two pools are from the same backend, it might be able to give you a shortcut to accelerate the process.

Absolutely. We're supposed to go to the tent after this, and I can see the clock starting, so I might be eating into the beer hour, but I was going to throw out one more question around testing, if you guys don't mind. What do you think the current state of testing is? And I'll throw the hot potato: how are we going to test replication if the CI doesn't have replication? Walt, you're grinning. Xing, you're grinning. You guys can start, then the comments go down to that end.

I'm grinning just because it's a difficult problem. We're still working on actually implementing our replication code, and we're doing a lot of manual testing right now, but obviously we need to get to a point where we can add that to the CI system. That one is a very difficult problem, because we have multiple arrays that we support, and we need a lot of hardware to do the replication support; we support iSCSI and Fibre Channel for two different arrays, and that requires a lot of
hardware for us to do it. Yeah, and we're struggling with that right now, because we have a lot of different scenarios that we want to test, not just with replication: we want to test multi-attach, multipath, and we want to cover iSCSI and Fibre Channel. So our number of CI jobs is kind of blowing up a little bit. It's a good problem to have, but yeah.

Xing?

So I think, to test replication, we need to provide an API to do something like a test failover, because otherwise, if you test it by failing it over, we also want to fail back, and we need to have that support in the API as well.

I'll just go down the line, I guess. Kind of a question out to the rest of the core panel: is it something we need to look at, each of the different driver CI systems being in charge of setting up its own replication tests?

Yes, yes. But...

What's the but?

I would say that we've gotten ourselves into this mess where there are features that we allow vendors to have in their drivers that aren't in the reference implementation, which I think is a horrible thing we should never have done. We're going to write replication for the reference implementation, but we're there, we've done that; we broke our rules. So what that means now is exactly what you guys are saying: in my opinion, if you want to pick up a feature like replication, or consistency groups, or something like that, you should tack on automation and testing to your CI that exercises it. And there's a process to do that; it exists, so it's not hard to do. So that'll be a fun future discussion.

What do you think about test coverage?
Mike? Good today; needs more.

For what we have? Needs more, sure.

John? More. It definitely needs more, but, like the replication issue, being able to set up all the different scenarios is a challenge that we just need to keep working on.

So I wanted to go back to the other thing, exactly what John was saying, because part of the cross-project work that I'm doing is also looking at expanding third-party CI beyond Cinder, to Neutron for example. Neutron is actually, in my opinion, a more complicated problem than it is inside of Cinder, because with Cinder the most you can have in combinations is the zone manager and a Fibre Channel driver, and that's about it. With Neutron, though, there could be a variety of switches and routers and whatever else is involved; there's a variety of plugins involved. So that's exactly what we've been looking into: a way where, in those CIs, we can trigger particular tests for certain combinations of the different plugins or drivers involved in that test, if that makes sense.

All right, we're near the end of time, so we'll call it beer thirty. Any short last one? If not: going once, going twice. Thank you.

Thank you.
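As a closing illustration of the scheduler discussion above: a pool-aware driver reports per-pool stats, and the scheduler flattens each pool into an independent backend candidate. The field names loosely follow Cinder's capability reports, but the driver name and numbers here are invented.

```python
# Sketch of pool-aware scheduling: the driver reports stats per pool,
# and the scheduler treats each pool as its own backend, filtering by
# capacity and picking the pool with the most free space.

def report_capabilities():
    # Roughly what a pool-aware driver's stats report might resemble.
    return {
        "volume_backend_name": "example_backend",
        "pools": [
            {"pool_name": "pool_a", "free_capacity_gb": 500},
            {"pool_name": "pool_b", "free_capacity_gb": 50},
        ],
    }

def schedule(stats, size_gb):
    # Flatten pools into "backend#pool" candidates, filter out pools
    # that cannot fit the request, then weigh by free capacity.
    candidates = [
        (stats["volume_backend_name"] + "#" + pool["pool_name"],
         pool["free_capacity_gb"])
        for pool in stats["pools"]
        if pool["free_capacity_gb"] >= size_gb
    ]
    if not candidates:
        raise RuntimeError("no pool can fit a %d GB volume" % size_gb)
    return max(candidates, key=lambda c: c[1])[0]
```

A volume type pinned to a single pool would simply restrict the candidate list before the weighing step.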