All right, so let's kind of get started here. In this presentation we want to talk about the past, present, and future of Fibre Channel. At past summits we've had a lot of discussions and questions come up about Fibre Channel in OpenStack, and in particular Fibre Channel in the Cinder project. So we've got a number of people here today, and we're going to do things a little bit differently than a lot of presentations. The four of us here on stage are going to go through a few slides, talk about some of the efforts and what's going on, and share some thoughts. Then we have a whole panel of Cinder developers sitting out front, most of whom have worked or are working on Fibre Channel, and we want to open it up to a Q&A for folks in the audience. We'll all be available to answer questions; if you look here, we've got a pretty good list of folks.

Myself, John Griffith: I work for SolidFire, based out of Boulder, Colorado. I've been working on the Cinder project for almost four years now, since before it was Cinder, back when it was nova-volume. We've got Walt Boring from HP, Kurt Martin from HP, and we've got Xing from EMC, and then the other folks here are out front and they'll be ready to go.

On that note, for those of you who need it, a quick overview. OpenStack: there's the little diagram that everybody loves to show to tell you what it is; hopefully you already know it. A little bit about Cinder: Cinder is the OpenStack block storage service. It provides persistent block storage for instances to be used in an OpenStack cloud. It has a plug-in architecture, which is probably the most important point about Cinder and the most important thing to keep in mind. There are close to 40 available back-end drivers that you can use in Cinder as of the Kilo release, which is a lot.
It's a lot to choose from. The initial focus of volumes and block storage in OpenStack, historically, was always iSCSI and RBD. Up until a couple of releases ago, those were your only options, and that's pretty much all there was. This is not a talk about Cinder itself, though; this is a talk about Fibre Channel in Cinder. So let's talk about Fibre Channel.

Fibre Channel has been around for a long time; most of you probably already know this. For a while it was the de facto solution in IT data centers, for a number of reasons. Lots of vendors have a lot of great Fibre Channel devices, and a lot of customers have invested and spent a lot of money to have a Fibre Channel infrastructure. They already have that set up, and that's a really significant investment. So then you look at something like OpenStack, which is predominantly iSCSI, and that makes for some really hard decisions.

In terms of iSCSI: a number of people will tell you there are things wrong with iSCSI, and people will tell you there are things wrong with Fibre Channel. For the most part, the only thing wrong with iSCSI is a bad history. When iSCSI first came out, a lot of people were trying to do things like run iSCSI over one-gig networks and share it with their internet traffic and all their regular networking traffic. That didn't end well. These days, with ten-gig and dedicated networks, iSCSI is a really different story. It's extremely flexible, it's performant, it's very reliable, it gives you a lot of choices, and it makes things really easy. The reason it was the default in OpenStack in particular is that it's easy to plug in and plug out. The whole idea is that all you need is a network, and everybody has a network, so you can test anything and run anything. Fibre Channel, that's a little harder to do. But the whole point of OpenStack is to give you options and give you choices.
So over the years we've been doing a lot of work on Fibre Channel and adding Fibre Channel capabilities. The predominant reason for that is people who have legacy infrastructure and legacy equipment that is Fibre Channel. They've already made that investment, but they still want to adopt OpenStack and run private clouds, so they need the ability to do that, and that's how Fibre Channel support started. On that note...

All right, thanks John. So I'm going to talk a little bit about the past of the Fibre Channel implementation within OpenStack, where it's come from, and then I'll hand it off to Kurt.

Okay, so back at the San Diego conference, it was our first time working on OpenStack, and Kurt went to the conference and basically announced that we wanted to implement Fibre Channel within OpenStack. We had a bunch of people come up to us from different companies saying, hey, we're really interested in this as well. That created a working group that has been meeting about once a month ever since, to work on Fibre Channel, solve some of the problems, and discuss where we wanted to go in the future within OpenStack.

So we came back from that conference and asked, okay, how are we going to do this? What we needed to do for our company was to create a volume driver within Cinder that actually supported Fibre Channel, but we didn't have any Fibre Channel support within Nova at all. So we sat down and worked on it, and within a couple of hours we had our first attachment working, but we couldn't detach volumes yet. You kind of need to do that. Over the course of the Grizzly release we were able to get the Nova patch to land on the very last day, while I was in Disneyland with my kids; Kurt was babysitting the patch and helping to get it to land.

So where did we end up at the end of the Grizzly release? We were able to attach volumes and detach them, but you had to have your fabric pre-zoned, and that's not optimal, right?
It's completely impractical and not very cloudy to do that. So what we did is we went back to this working group that's meeting once a month, a lot of companies, and we co-designed what we now call the Fibre Channel Zone Manager. The zone manager's job is to do automated zoning for us, so that when you attach a volume, it automatically creates the zone in the Fibre Channel switches to create the fabric for you, so that the two endpoints can see each other.

This was added in the Icehouse release cycle, and a lot of companies were involved with it: Brocade, HP, EMC, IBM. It's a really good example of why I personally love working on this project, because we have a lot of different companies working together as a community to make this stuff work, and it benefits everyone. I'm really proud of that.

Okay, so what are the pieces of the zone manager? There are three main components: zoning, which is the adding and removing of zones, and then the lookup service. And we have a pluggable architecture in the zone manager itself that allows us to have vendor-specific drivers that know how to talk to the vendors' switches.

The lookup service's purpose is to help the volume drivers create an initiator-target map that records which two endpoints can actually talk to each other, basically the World Wide Names on either side. That way, when you do an attachment or the export on the target side, you know which ports to export your volume to, and then the host on the initiator side will see them.

The zoning is basically simple: add a zone, remove a zone. One of the pitfalls that some of the volume drivers fall into when we're doing reviews is that they pass in the initiator-target map every time you remove a volume. Well, you don't want to remove that zone if you still have volumes attached, right?
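To make the initiator-target map concrete, here is a minimal Python sketch of what a Fibre Channel volume driver hands back to Cinder on attach. The WWPN values and the helper function are invented for illustration; only the overall shape, a `fibre_channel` connection type whose data carries target WWNs and an `initiator_target_map`, follows the convention described above.

```python
def build_initiator_target_map(initiator_wwpns, target_wwpns):
    """Map every initiator WWPN on the host to the list of target ports
    it should be zoned with (here: all of them, a simple common policy)."""
    return {initiator: list(target_wwpns) for initiator in initiator_wwpns}


def initialize_connection(volume_lun, connector):
    """Sketch of the dict a Fibre Channel driver returns on attach.

    `connector['wwpns']` holds the host's initiator WWPNs; the target
    WWPNs below are made-up example values.
    """
    target_wwpns = ['500a098280feeba5', '500a098290feeba5']
    return {
        'driver_volume_type': 'fibre_channel',
        'data': {
            'target_discovered': True,
            'target_lun': volume_lun,
            'target_wwn': target_wwpns,
            # The zone manager reads this map to create zones on attach.
            'initiator_target_map': build_initiator_target_map(
                connector['wwpns'], target_wwpns),
        },
    }
```

On detach, a careful driver only returns an initiator-target map, and thereby triggers zone removal, when the last volume from that backend is being detached from the host; that is exactly the review pitfall just mentioned.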
So that's one of the things we look for, and Kurt will talk about that. The two vendors we have support for in the zone manager today, as listed on the slide here, are Brocade and Cisco, and we're actively developing the zone manager and adding new features.

This is a little bit of an overview of what the architecture looks like. It's very simple: we have the lookup service and the zone add/remove API, and the layer underneath that is where all the vendors plug into the architecture. So we have classes for each of the vendors, Brocade and Cisco, that know how to do the lookup service and conform to the APIs, just like we do for volume drivers. There's a given API that the volume manager talks to the volume drivers through; well, it's the same thing with the zone manager. You have to support the lookup service, you have to support adding and removing zones, and then it's the driver's job to talk to the actual switch. From there, we'll talk a little bit about the volume drivers and the support that all the vendors have within Cinder.

Thanks, Walt. Yeah, as Walt mentioned, when we first started this you had to basically pre-create your zones, which wasn't very usable in the real world, in the cloud. So in the early time frame, Grizzly, Havana, Icehouse, we had HP, IBM, and EMC submit drivers. But as you can see by the number of vendors, once we hit Juno and had that zone manager ironed out, there were more drivers, and we really picked up a lot of new drivers and new vendors in the Kilo time frame. Liberty, as we see it, is just getting going, and there are already three or four new vendors, and HP's putting another one up. More vendors. It's really starting to take off, and as you can see by the logos, there are a number of options out there for end users among the Fibre Channel drivers.

So what are the requirements for a Fibre Channel driver?
Of course it's got to meet the minimum requirements for any volume driver, but there are a few special things for Fibre Channel drivers. One of those is that you have to extend the Fibre Channel base class. And as Walt mentioned, for the Fibre Channel Zone Manager there are decorators you have to decorate initialize_connection with to take advantage of the zone manager. So there's a decorator, AddFCZone: if the zone manager drivers are configured in cinder.conf, it will go and auto-create the zones for you when you do an attach. Likewise for zone removal: on terminate_connection in the drivers you have RemoveFCZone, and as Walt mentioned, you have to be careful, because it's the driver's responsibility to figure out whether the last volume connected to that host is being removed before it actually removes the zone. You don't want to wipe out the other ones.

Also, starting in the Kilo release, all Cinder drivers are required to have continuous integration, third-party CI. Every patch set that's put up against Cinder comes back and runs a slew of tests on real hardware back at all the vendor sites, and the results get posted up.

With iSCSI you have your network and it's all just over Ethernet, but there are a few gotchas for Fibre Channel when you're running your CI environment within a VM in OpenStack: you need to get the HBA information passed through to the VM. There are a couple of different CI solutions people have, but PCI pass-through will solve that requirement and get the HBA's PCI information passed up to the VMs.

We have a number of people in the community who will help with CI. I'll point out Ramy Asselin; his IRC name is asselin, and he's always in the Cinder channel. He's available, and he has a CI solution for Fibre Channel that he's trying to turn into more of a push-button solution. And also Duncan Thomas.
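The decorator pattern just described can be sketched as follows. This is a simplified, self-contained imitation: in a real driver you would use the decorators shipped with Cinder's zone manager utilities (named AddFCZone/RemoveFCZone in the Kilo era; names have varied across releases), and `FakeZoneManager` here stands in for whatever zone-manager driver is configured.

```python
class FakeZoneManager:
    """Stand-in for the configured zone-manager driver (Brocade, Cisco, ...)."""
    def __init__(self):
        self.zones = {}

    def add_connection(self, initiator_target_map):
        for initiator, targets in initiator_target_map.items():
            self.zones.setdefault(initiator, set()).update(targets)

    def delete_connection(self, initiator_target_map):
        for initiator in initiator_target_map:
            self.zones.pop(initiator, None)


ZONE_MANAGER = FakeZoneManager()


def add_fc_zone(func):
    """Imitation of the AddFCZone decorator: after the driver builds its
    connection info, hand the initiator-target map to the zone manager."""
    def wrapper(*args, **kwargs):
        info = func(*args, **kwargs)
        zone_map = info.get('data', {}).get('initiator_target_map')
        if info.get('driver_volume_type') == 'fibre_channel' and zone_map:
            ZONE_MANAGER.add_connection(zone_map)
        return info
    return wrapper


def remove_fc_zone(func):
    """Imitation of RemoveFCZone: zones are torn down only when the driver
    returns a map, i.e. when the host's last volume is being detached."""
    def wrapper(*args, **kwargs):
        info = func(*args, **kwargs)
        zone_map = info.get('data', {}).get('initiator_target_map')
        if info.get('driver_volume_type') == 'fibre_channel' and zone_map:
            ZONE_MANAGER.delete_connection(zone_map)
        return info
    return wrapper


class SketchFCDriver:
    """Hypothetical driver showing where the decorators sit."""

    @add_fc_zone
    def initialize_connection(self, volume, connector):
        return {'driver_volume_type': 'fibre_channel',
                'data': {'target_lun': 1,
                         'initiator_target_map': {
                             w: ['500a098280feeba5']
                             for w in connector['wwpns']}}}

    @remove_fc_zone
    def terminate_connection(self, volume, connector, last_volume=True):
        # Return the map only if this was the host's last attached volume;
        # otherwise omit it so the shared zone is left in place.
        data = {}
        if last_volume:
            data['initiator_target_map'] = {
                w: ['500a098280feeba5'] for w in connector['wwpns']}
        return {'driver_volume_type': 'fibre_channel', 'data': data}
```

The `last_volume` flag is a stand-in for the bookkeeping a real driver does against its backend to decide whether any other volumes remain attached to the host.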
So if there are questions on getting CI running for Fibre Channel, please visit the OpenStack Cinder channel and ping one of those two guys. I'd like to pass it over to Xing now, who'll talk about some of the future work we're looking at for Fibre Channel.

So right now there are two Cinder specs being reviewed. One is friendly zone names: currently the zone names contain the WWPNs of the host and the target, so they're not friendly. The proposal is to add the host name and also the storage system name into the zone names to make them more readable. The second one is virtual fabric support in the Brocade zone manager; Cisco already has that support for VSANs, which are the equivalent. Those are being reviewed right now.

QoS support for zones is another thing that could be added in the future, so that every zone has a different QoS level specifying the priority of the traffic flow between the host and the target.

Another thing is NPIV support, in Liberty. Right now, if you partition the HBA into multiple virtual HBAs, you can't do a pass-through and present one of those to a guest; the only way to get it to work is to pass through the entire HBA, so it's just not efficient.

And another thing I want to mention here is that we're talking about moving the zone manager into a sub-project under Cinder, so that it can be released as a standalone library, just like os-brick, and leveraged by other projects in the future. So those are the future efforts.

Those are just some useful links: blueprints, actually old blueprints, that give you some background on the FC support and the zone manager. That's all we have, so if you guys come up, we can take some questions.

So real quick, show of hands: how many people here are...
OpenStack developers in the room, or vendors providing drivers? Okay. How many people are actually using OpenStack, deploying OpenStack, and considering deploying it with Fibre Channel? Okay, so a few.

So a lot of this talk is kind of a mixed talk, right? There was a lot of technical information and stuff like that. When we talk about the NPIV pass-through and things like that, that's really only applicable to people who are developing a driver and need to run CI, or people who are running TripleO models, OpenStack on OpenStack. Otherwise, it's important to note that the whole point in Cinder is that we want to keep the connection and usage model exactly the same whether it's iSCSI or Fibre Channel. So you as a user or a consumer of OpenStack don't actually need to know anything about NPIV or zone managers or anything like that. That's all supposed to be abstracted and handled for you; it's all automated. I just wanted to point that out, because I think it's important. If you're not familiar with it, you might be a little confused after seeing some of that. But on that note, are there any questions?

Yeah, thanks for pointing that out, John. So basically the only thing you really need to know or understand about Fibre Channel as a deployer is where your switches are. There are some cinder.conf entries that you have to put in to configure the different fabrics you want to support, which is basically a URI to the switch and then the authentication for it. After that, Cinder basically takes care of everything for you: when you do a volume attach, it automatically creates the zone for you and removes it appropriately, so the attaches just work.

And were there any questions out there? Go ahead.

I have two questions. First, what is the integration with BNA, Brocade Network Advisor, on the Brocade side?
That is, instead of going direct to a switch, as you mentioned? We have a rather large fabric, multiple fabrics.

So at the time we implemented it, the solution talks directly to the switch. But Angela, who stood up in the very back, is with Brocade; she can tell you more about the future plans they're working on for exactly that. The question was about OpenStack integration. (Can you use the mic? Thank you for clarifying the question.)

Yeah, so there are no plans right now to integrate OpenStack with BNA or point OpenStack at BNA. We contact the switch directly via SSH, and a future plan is HTTPS.

My understanding is BNA has REST APIs that can expose all the fabrics that are managed by it, is that right? Sorry, so you're interested in fabric management through OpenStack, OpenStack managing those fabrics through your central management tool? Okay, so that is a topic we're discussing for the future: OpenStack Fibre Channel network management.

The second question would be zoning and serialization for zone-set changes: does the FCZM manage concurrency for humans interacting with the fabric as well as automation interacting with the fabric?

So currently the zone manager doesn't know anything about humans talking to the switch. If a human goes in and removes a zone, the zone manager doesn't know that that's happened. In terms of parallelism, from Cinder's perspective there is a local file lock in the zone manager right now for when zones are added and removed, so there is a critical section there to prevent multiple threads from trying to remove the same zone. But it's something we should probably look at in the future, to ensure that when you go to remove a zone it's actually there. And there's some fault tolerance within the drivers themselves to manage that to a certain extent, to handle those error conditions.

To add to that, one thing to keep in mind that I try to tell a lot of people is: with newer things like Fibre Channel support, as you point out, there are definitely things you're going to do differently, and things you need that aren't available from the orchestration layer. But the rule of thumb, and good advice for most people, is: either have OpenStack manage things, or don't. When you start mixing those two together, you run into a lot of problems and get a lot of inconsistencies. Unfortunately, there are cases where, until things are ready in OpenStack for what you're trying to do, you might have to come up with a way to do it yourself. But it's just something to keep in mind.

Yeah, that's correct, John. That's pretty much the case with our volume drivers and the arrays. If you create a volume in Cinder and then go delete it on the actual array itself without going through Cinder, there are obviously going to be problems there, and Cinder itself can't know that someone has done that outside of it. The way we look at it is: as soon as you plug in that infrastructure component and have Cinder manage it, then it's really Cinder's purview to own it at that point.

Okay, any other questions? Yeah, go ahead. If you can use the mic, that'll be great, please.

Do you have a feeling for how prevalent the use of Fibre Channel with OpenStack currently is? And do you think it's going to expand, or become less and less as time goes on, considering there's a lot of work involved in trying to get all the zoning working with Cinder and so on? So that's really the question.
Have you got a feeling for how prevalent it is?

So I'll give my opinion, and then I'll let some other folks give theirs as well. You've probably noticed that the bulk of the discussion that's been here, and the things we've been talking about, is vendors: not users, not community. Personally, from my viewpoint, what I have seen in the user community is that a lot of people, given the choice between the two in a greenfield deployment, are definitely going the iSCSI route. iSCSI has come a long way; it's pretty good technology, it's pretty solid. That being said, traditional legacy shops that already have Fibre Channel, that want to use the same gear they already have or reuse that infrastructure, that's where the demand in the community for Fibre Channel is. That's where I'm seeing that demand right now.

Yeah, that's pretty much the way we view it as well. A lot of our customers for our array are primarily Fibre Channel based, so we wanted to make sure we enabled OpenStack for them for that particular reason. Now, in terms of the global community and what percentage of it is Fibre Channel, we really don't know, but the iSCSI-versus-Fibre-Channel discussion is no different outside of OpenStack than it is inside of OpenStack right now. So what we're trying to do is ensure that OpenStack provides as much support as we possibly can for the existing fabrics that are out there.
So we we don't want to exclude any anyone at this point So I just want to add that It's a similar to what you guys are saying that we are also having customers who who have this demand I mean we have both right their customers who want ice cousin of a customer who want FC I don't think it's going away because they have already made a lot of investment in FC So they definitely want to move that into open stack I was gonna support that I've actually suddenly seen a real growth in request for it as we have more of our existing users Moving to you know trying open stack and it's like what we want to use our existing fiber channel infrastructure. So, you know, I think Through sessions like this and some of the work we're doing There's a we're gonna continue to focus on improving it as we can Yeah, you have to remember the difference between Public, you know, obviously cost reasons it wouldn't make sense to do this in a huge public cloud But in a private cloud as everybody kind of alluded to you have you have your infrastructure there already and It's chances are it's being way it under underutilized now So they're trying out open stack and yeah Anybody else want to Mike Hello, I'm just thinking on how about Having both technologies together. 
I mean having several backends in Cinder, like iSCSI, RBD, and Fibre Channel together?

We absolutely do support that today, and we do it in-house in our testing and development. On the same blades, or servers if you will, we actually have iSCSI and Fibre Channel volume drivers instantiated and are testing them both in real time, just using different volume types or whatnot to exercise both at the same time.

So Cinder offers multi-backend; actually, it offers two options for multi-backend. You can either deploy the backends on the same cinder-volume node, in one volume service, or you can horizontally scale out, like everything else in OpenStack, and add more volume nodes. Yes, absolutely: if you want to incur that pain, you are welcome.

I just want to point out, too, that we support both, and it is two separate drivers. So it's not like you can have some kind of multipathing between the two, but you can definitely run them both concurrently, and that is supported.

Yeah, so if you have an existing FC deployment and you're thinking about switching to iSCSI, you can actually deploy it that way and have both at the same time as you're transitioning, even talking to the same array; a lot of the arrays support both at the same time. So it would work that way.

One point of clarification, though, to be careful about: there's the assumption that you have a single fabric behind it, that you're not mixing vendors, because that does not work. Correct: if you have some Brocade and some Cisco hardware, you need to keep them in single-fabric environments. Yeah, Brocade and Cisco don't play well together. Surprise. I have had people ask about that, though, so I thought this was a good opportunity to bring it up.

Yeah, absolutely, go ahead. Just a question about when you test these drivers for CI: what's the back-end array you are testing?
Hitachi, or...?

It's whoever the vendor is that's running that test. This is definitely a vendor-driven effort. HP, for example, has the 3PAR array, which has Fibre Channel; IBM has Storwize with Fibre Channel; EMC has Fibre Channel arrays; and those folks are actually testing their back-end devices on Fibre Channel. Going forward, what I would like to see is this: there is now the ability to use things like LIO to create Fibre Channel targets for generic LVM and disk-backed devices, so that we could have a more general, vendor-neutral test environment. I'd like to see something like that take off. Whether we will or not, I don't know; I don't know how valuable it is.

Is there currently any demo or any use case? For our environment, actually, we are using Hitachi, so I can see they have drivers for the USP and VSP and so on.

So it's up to the vendor; it's up to Hitachi, for example, to provide a Fibre Channel back-end driver for their device inside of Cinder, right? Okay, and that's your use case. Okay, very good.

You can actually go see the CI results: every vendor that has a CI driver now actually publishes results. Yeah, every patch that gets submitted to Cinder is now tested against all of the arrays that are supported within Cinder, and that includes all of the Fibre Channel drivers as well.

We have another question in the back. Go ahead, sure.

As somebody with a large investment in Fibre Channel currently, I'm interested in this panel's perspective on, I guess, the prevalence of Fibre Channel in the market. We see a lot of good still in Fibre Channel.
I know there are Brocade people in the room too who might want to comment, but I just wonder what your feeling is for the future of the technology, you know, with direct-connected SAS and everything, iSCSI kicking around the corner on 40 gig and whatnot. I just want a feel from the room, to see what you actually believe or perceive.

I don't think it's going away anytime soon. I think there's a lot of value there; our customers are primarily Fibre Channel based because of the performance of it. They're willing to invest that high cost to create the fabrics and run that separate kind of technology and use Fibre Channel. I can't speak for everyone, but I think it's going to stick around, and we're continuing to develop on it and add more features to it, as Xing talked about, in Cinder. So we're going to support it for a while.

I'm one of those people who always has the contradictory perspective on this. My viewpoint is that, honestly, one of the only reasons Fibre Channel is still really around and gets discussed and talked about is the investments that people have already made. I think that if you were to start again, like I said before, if you go greenfield and you put the two technologies up against each other, there is, in my opinion, almost no compelling reason anymore to choose Fibre Channel over iSCSI, especially with 40 gig coming out. The arguments about performance and reliability and so on have pretty much diminished over the past couple of years; iSCSI technology has come a really long way. And then, as you pointed out, people are even moving towards the direct-attach model, so SAN in general is not quite the king that it used to be.
It's kind of an interesting shift. So that's my perspective, and to be fair, I work for a company that does both Fibre Channel and iSCSI, so I'm not just saying that because iSCSI is our only option; we have both options. But personally, I think the future direction is definitely iSCSI.

Just to add a little to that: Fibre Channel has been on its way out for a long time now, and it's like tape: eventually, someday, it'll go away, but there's still a lot of it out there, and we do still see a lot of interest in it. I actually don't think it's going to go away.

Yeah, I've seen enough customer demand that I don't believe it's going to go away. I mean, definitely there's more and more demand for iSCSI, but I just think FC will always have a place. And you also see more and more vendors writing Fibre Channel drivers, so I think they're getting asked to do that by their customers; so, the pure numbers.

Yeah, that's the one big question that everybody has: security, right? You can find security questions and security issues no matter what, and the problem is, yes, you can inject the argument about packet sniffing and breaking into the network and those sorts of things. But the same holds true elsewhere; I get this all the time in OpenStack: hey, what do my guests have access to? What if somebody breaks out of the VM, or breaks out of the hypervisor, etc.? Those are questions and those are issues. Some environments, some shops, will probably never be able to use iSCSI because of the perceived insecurities, and I do say perceived. It just kind of depends; so that is one case where you're absolutely right, one case where I think it applies.

So, are you asking about QoS for volume types? There's no blueprint yet; it's just something someone mentioned could be added in the future.
Yes, Brocade has that support.

So one of the things we're actually thinking about adding is support for QoS in the volume types itself, adding that as a new capability for the zone manager. There's back-end QoS, which the arrays handle; there's front-end QoS, which is a volume type, where the hypervisors support it; and it makes sense for the switches to offer that middle QoS setting. So the structure's there; the switch companies are working on it, it's in the future, but offering QoS at that middle level has been talked about as well.

So, unfortunately, we're out of time, but we're all available if you guys want to grab us outside the door, or even up here, to answer any other questions. Thanks a lot for coming; appreciate it. Hopefully this was helpful.