Hello, everyone. We're going to get started here. My name is Mark Dietrich; I'm with Brocade, and this is Gary.

Hi, I'm Gary FunQuest, with Hewlett-Packard, and we're going to talk to you today about using Fibre Channel SAN infrastructure in OpenStack environments. Some of the things we're going to cover: why Fibre Channel support is important, the blueprint that we're working from, what's planned for the Icehouse release, some items about Fibre Channel zoning, and a proof of concept.

As far as why Fibre Channel support is important, that has a lot to do with the installed base of Fibre Channel, which is huge. Now, we're primarily talking about enterprise, but enterprise is a significant part of where OpenStack is going. And let's talk about performance, for example. Our friends at VMware, when they do benchmark testing for performance and throughput, they do it on Fibre Channel. And why do they do it on Fibre Channel? Because it's lightning fast. It's very fast, and it has the least amount of overhead and the least amount of latency compared to any other protocol. It's not that other protocols won't work; they will. But there is a place for Fibre Channel when you're looking for high performance. Fibre Channel also has a lot of resiliency; it's been around a long time, there's a lot available for Fibre Channel in terms of resiliency and availability, and it provides some good means for security as well.

As for why we want to do this project for Icehouse: currently there is a way to do volume attach and detach as of the Grizzly release, but it's plagued with a problem. Either you have to have open zoning, which is a non-starter, really, because if you can't isolate one tenant or one host from another's LUN, that's a problem; or you can pre-provision all the initiators into pre-made zones, which kind of defeats the whole purpose of having OpenStack to begin with, so that's a non-starter as well. So the whole goal here is to automate zone configuration, to automate the detection of SAN context, and to automate the application of zones into current zone sets without any manual intervention.

So maybe one thing to add there. This is good, what Mark brought out. The question of why Fibre Channel, who cares about Fibre Channel, is one a lot of people asked as we started bringing this into OpenStack, and it's a good question to address. What we look at is that we have so many customers, at HP and at Brocade as well, who are using Fibre Channel in their data centers. That's what they're familiar with; that's what they've built their infrastructure out with. They want to start deploying some cloud proofs of concept, or they want to start doing some projects and operating them as a private cloud in their data center, and they want to be able to hook into the existing assets they have, the resources around storage and such, and reuse those as they grow their private cloud in the environment. That was really the impetus behind bringing Fibre Channel in as a first-class supported interconnect between storage, the Cinder side, and compute, the Nova side: to be able to stitch together environments and support customers using the infrastructure they already have. And going forward they can still use iSCSI, they can use Fibre Channel, or FCoE if they want to go there; they now have a choice.
But they have a choice, and we want to make sure that the choice of Fibre Channel carries the advantages Fibre Channel inherently has as a technology, so that they can actually benefit from those in their environment. Absolutely.

So there were two blueprints. One is in the past: the implementation that was done in Grizzly, which didn't provide for any automation of zoning. The second one is the Fibre Channel SAN zone controller blueprint, and that one automates zoning, which is really where we need to be for this to work. In fact, I believe there are a number of other things past Icehouse that need to be done, but one thing at a time. That blueprint has been approved and the coding is in the works; actually, prototypes are already working, and it's just a matter of hardening the code.

So there's a volume manager, and then there's a Fibre Channel volume driver. A new Fibre Channel SAN lookup service was needed because with SANs there are always at least two fabrics. There could be multiple fabrics, but there's never just one. And if you can't tell which fabric the ports on your target are in relative to the VM, how are you going to do the zoning? So you need a mapping service, which is what the lookup service provides. Then there's a Fibre Channel zone manager, which manages the life cycle of a zone, and a Fibre Channel zone driver. That zone driver, in turn, talks to vendor sub-drivers (Brocade will have one, Cisco will have one) so that it can talk to their specific equipment. Do you want to add to that, Gary? Yeah, that's good.

So Nova makes a volume request to Cinder, and the volume manager gathers the initiator and target World Wide Name (WWN) information. The volume manager then calls the zone manager; zoning has to be enabled, of course. The Fibre Channel zone manager and the Fibre Channel zone driver provide the information needed to create the zone, so that you'll have either single-initiator/single-target or single-initiator/multiple-target zones, depending on how it's configured. And the Cinder volume manager completes it by telling Nova about the connection.

Here's another way to look at it. The first thing that happens is Nova contacts Cinder, and Cinder talks to the volume manager, which communicates with the Fibre Channel volume driver. The driver does a SAN lookup, that information comes back, it talks to the storage array, control comes back to the driver, and the volume manager communicates with the zone manager and then the zone driver out to the fabric, in this case a Brocade fabric. Then it ends up going back to where it began in order to complete the process.
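To make that flow concrete, here is a minimal, self-contained sketch in Python. Every class, method, and WWPN in it is an illustrative stand-in of our own, not the actual Cinder zone manager API, which may differ in detail.

```python
# Sketch of attach-time zoning: lookup service maps WWPNs to fabrics,
# zone manager builds single-initiator/single-target zones per fabric.

class FCSanLookupService:
    """Maps initiator/target WWPNs to the fabrics they are logged in to."""

    def __init__(self, fabric_name_servers):
        # fabric name -> set of WWPNs visible in that fabric's name server
        self.fabric_name_servers = fabric_name_servers

    def get_device_mapping(self, initiators, targets):
        mapping = {}
        for fabric, logged_in in self.fabric_name_servers.items():
            i = [w for w in initiators if w in logged_in]
            t = [w for w in targets if w in logged_in]
            if i and t:
                mapping[fabric] = {'initiators': i, 'targets': t}
        return mapping


class FCZoneManager:
    """Creates single-initiator, single-target zones on each fabric."""

    def __init__(self, lookup, fabric_driver):
        self.lookup = lookup
        self.driver = fabric_driver  # vendor-specific: Brocade, Cisco, ...

    def add_connection(self, initiator_target_map):
        # The volume driver hands us {initiator_wwpn: [target_wwpns]}.
        initiators = list(initiator_target_map)
        targets = sorted({t for ts in initiator_target_map.values()
                          for t in ts})
        # Ask the lookup service which fabric each pair shares, since a
        # SAN always has at least two fabrics.
        for fabric, devs in self.lookup.get_device_mapping(
                initiators, targets).items():
            zones = {}
            for i in devs['initiators']:
                for t in devs['targets']:
                    # One zone per initiator-target pair; deriving the
                    # name from the WWPNs keeps repeated attaches
                    # idempotent. Colons stripped for a valid zone name.
                    name = 'openstack_%s_%s' % (i.replace(':', ''),
                                                t.replace(':', ''))
                    zones[name] = [i, t]
            # The vendor driver merges these into the active zone set.
            self.driver.add_zones(fabric, zones)


class PrintingFabricDriver:
    """Stand-in vendor zone driver; just prints what it would zone."""

    def add_zones(self, fabric, zones):
        print(fabric, zones)


# One host HBA per fabric, one array port per fabric (an A/B SAN).
lookup = FCSanLookupService({
    'fabric_a': {'10:00:8c:7c:ff:52:3b:01', '20:24:00:02:ac:00:0a:50'},
    'fabric_b': {'10:00:8c:7c:ff:52:3b:02', '20:25:00:02:ac:00:0a:50'},
})
zone_manager = FCZoneManager(lookup, PrintingFabricDriver())
zone_manager.add_connection({
    '10:00:8c:7c:ff:52:3b:01': ['20:24:00:02:ac:00:0a:50'],
    '10:00:8c:7c:ff:52:3b:02': ['20:25:00:02:ac:00:0a:50'],
})
```

Deriving the zone name from the initiator and target WWPNs is one way to keep repeated attach requests idempotent; real drivers additionally have to merge new zones into the fabric's current active zone set rather than replacing it.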
So a little bit about zoning. There are two modes: fabric and none. Fabric means that you want the fabric to do the zoning. Now, an emerging technology in SANs is peer-based zoning, also called target-based zoning. The nice thing about target-based zoning is that as a storage admin you no longer have to do zoning at all. Every storage admin has to put a LUN mask on every LUN to define the initiator that will talk to it. So if you already have that information, why should you also have to do zoning? The array can just tell the fabric to create a zone based on the port the LUN mask was put on and the initiator's information. It completely removes an entire process of doing zoning separately from LUN masking. If you want to do that, you set the mode to none.

In other words, you don't want the fabric side to do the zoning, because what happens is that the LUN-masking information is sent to the volume driver, the volume driver communicates with the array, the LUN mask is put on, and then it's the responsibility of the array to update the zoning in the fabric. That's why there's a none mode. But you can also do traditional zoning, and I believe all the storage manufacturers, HP included, and Brocade and Cisco, are going to support peer-based or target-defined zoning as well as regular fabric zoning. As far as what you can do in an active zone set, you can add or update zones, remove them, or read and get zoning information.

Yes, I think this is a key part here: if the storage you're using does target-based zoning, as Mark says, then you don't need the explicit zoning manager to do fabric zoning for you. Go ahead and let the array do the fabric zoning, and then you don't need a zoning manager driver to do it yourself. If, however, you don't have that kind of storage, then you'll be able to use a vendor driver under the zoning manager to do direct fabric zoning for the volume attaches explicitly in the fabric. So where the zoning is orchestrated from is really up to you, the person putting the environment together: whether you want the storage array to do it or the zoning manager under Cinder to do it. Yeah, and it's important to realize that target-defined zoning is an emerging technology. It'll be in next-generation arrays and next-generation products from SAN companies like Brocade and Cisco.

The types of zoning supported as best practice are single-initiator/single-target and single-initiator/multiple-target, depending on what you prefer. I know some customers prefer one over the other. They result in a differing number of zones, and depending on your environment and how many zones you have, there can be limitations or reasons to want one over the other. You also need a way to perform zoning on multiple fabrics or SANs. Every SAN is going to have at least an A and a B fabric, but in these types of environments you can also have multiple SANs that are their own SAN islands, and you need a way to map ports to the correct fabrics; that's all been taken care of. So there's a fabric mode. When you put it in fabric mode (because, as I told you, there are two modes), the Fibre Channel type instantiates the appropriate vendor-specific Fibre Channel zone driver, and that delegates the zoning responsibility appropriately. The Brocade Fibre Channel zone driver supports Brocade products, of course, and there would be different drivers for other vendors.
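For reference, here is roughly what that mode selection and vendor driver wiring could look like in cinder.conf. The option and section names below follow the proposed zone manager patches; treat them as an illustration, since the final Icehouse names may differ.

```ini
[DEFAULT]
# fabric = have the zone manager zone the fabric on attach/detach
# none   = skip fabric zoning, e.g. when the array itself does
#          target-based (peer) zoning from its LUN-masking information
zoning_mode = fabric

[fc-zone-manager]
# Vendor-specific zone driver and fabric-aware lookup service
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
# Best-practice zone shape: one initiator per zone
zoning_policy = initiator-target
fc_fabric_names = fabric_a,fabric_b

# One section per fabric named in fc_fabric_names
[fabric_a]
fc_fabric_address = 10.0.0.10
fc_fabric_user = admin
fc_fabric_password = password
```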
Okay, so at HP we have been working on bringing Fibre Channel into the environment with Brocade, with IBM, EMC, a group of companies, and in doing so we've set up an environment where we're actually using this. So I want to walk through how this fits together and what the responsibilities of the different pieces and components are, just so you get an understanding of how things fit. I'm not sure how clearly it came out earlier, but the first Fibre Channel support came out in the Grizzly release. That's where we got the ability to support volume drivers in Cinder that can attach storage volumes to the Nova hosts via Fibre Channel.

That was what was brought into Grizzly, and what we came out with at that time was a reference driver, as well as a base class for other vendors to write additional drivers for their devices; we brought out an HP 3PAR driver as the reference driver in the environment. So that's this 3PAR driver here, talking to a 3PAR storage array. These are just some pictures of HP blade servers and dense server racks that can be the Nova compute side, with storage provisioned over in the storage system being attached over to the hosts. The Nova side then attaches those volumes up to the guests themselves in the regular fashion. This light-blue part is what was done and enabled in Grizzly, and it's been running now for a couple of releases.

There were some aspects of that which were still not on par with the iSCSI capabilities. In Havana, some of the interaction with Glance and such, some of those functions, were filled out, so the Fibre Channel capabilities really are now more on par with what you have available with iSCSI. So that was what was brought in over Grizzly and Havana.

The work on the zoning manager was being done during the Havana cycle; it didn't get in by the Havana freeze, so I think the current goal is to bring it in here in Icehouse, in the first sprint, and start landing it, so you should have it by the end of Icehouse. What that brings in is the Fibre Channel zoning manager that Mark just described, where we now have the ability for the fabric to be zoned as volumes are attached to the host. Previously, in Grizzly and Havana, we required that the SAN itself be in an open-zone state; there was no active zoning going on unless you happened to have one of the newer array types that does target-based zoning, and most people don't. So you were constrained to an open-zoned SAN. What we're bringing in here in Icehouse is the zoning manager capability, with Brocade really spearheading the development, working with the rest of the companies in the group that's involved, and writing a reference zoning driver that talks to the Brocade fabric and can orchestrate zones being set up, created, modified, and deleted as attachments are made across the fabric to the hosts.

In the environment that we have, volume creation and attachment is still done by the Cinder volume driver; no change there. The only new thing available to that driver, as Mark talked about, is the lookup service now provided by the zone manager. If the array driver needs to be fabric-aware about where to make storage visible across different SANs, it can get visibility into its connectivity onto the different SANs from the zoning manager and do lookups to find out what connectivity it has to the different SANs that storage is being attached over. So that's a service that can help arrays that do specific mapping to specific ports or interfaces on the array to orchestrate dual-fabric, resilient environment presentations. So that's now available as well.
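To show what the lookup service buys an array driver, here is a small continuation of the earlier sketch. The names are still our illustrative stand-ins, not the shipping interface.

```python
# An array driver can ask which of its ports share a fabric with the
# host's HBAs and build a resilient dual-fabric initiator-target map.

def build_initiator_target_map(lookup, host_wwpns, array_wwpns):
    """Pair each host HBA only with array ports on the same fabric."""
    i_t_map = {}
    for fabric, devs in lookup.get_device_mapping(host_wwpns,
                                                  array_wwpns).items():
        for initiator in devs['initiators']:
            # Each initiator is zoned only to targets reachable on its
            # own fabric (SAN A or SAN B), never across fabrics.
            i_t_map.setdefault(initiator, []).extend(devs['targets'])
    return i_t_map
```

The map this returns is the initiator-target map that the volume driver's connection information carries into the zone manager's add_connection call in the earlier sketch.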
So this is the environment that we use from a reference standpoint. We have it set up and working, so if anyone is interested in setting up this kind of environment, certainly we'd love to talk to you more about any issues you might be seeing and how to get past them.

So, who was in the previous session that IBM gave? Anybody? IBM gave a DR-in-OpenStack talk, and truthfully, most of what they were getting at was array-to-array replication, using something like Global Mirror, for example, between the arrays. HP has similar technologies; actually, all the array manufacturers have similar technologies. And if you want to use that type of technology, the LUNs that the arrays are replicating have to come off Fibre Channel arrays. So that is a use case for Fibre Channel in an OpenStack environment: if you want to replicate the way IBM described in their last session, you have to connect to the array using Fibre Channel. That's a use case we don't have a slide for, but I was thinking about it when they gave that talk. Sorry, I didn't mean to cut that off. There is a session this afternoon up in the Cinder design area around supporting replication in Cinder. If you're interested, I don't remember the exact time, but let's look it up; that could be of interest.

So the way the arrays are today, they can support thousands of LUNs that can become an inventory to be connected to various workloads. Am I wrong? That's right. If you want workloads to have persistent disk like this and be able to connect up using high-performance Fibre Channel connections, this is enabling technology.

Let's open it up for questions. Yeah, so let's see, do we have one more? Okay, yep. So, a few places you can go. First off, if you want to be involved, certainly come talk to us; there's a lot of work going on in this area here in the Icehouse development cycle. Stop by the Brocade booth and talk to somebody there, or grab Mark. Varma is sitting quietly against the wall; talk to him, he's doing a lot of the development for Brocade. So anyway, if you're interested, we invite you to please come and talk to us, and we can point you in the right direction.

So, comments, questions? Yeah, that's a great question. The question is: are there any plans to instrument any of this so that troubleshooting, diagnostics, those kinds of things can be done through OpenStack? Obviously there's the Ceilometer opportunity. I think that's something that's being looked at; I don't think we have anything to talk about there right now. So right now, if you're a service provider, you're using your vendor tools to diagnose the equipment and troubleshoot. As some of these things get plumbed up into things like Ceilometer and can be monitored at that level, that will help. There will probably always be a place where you'll need to go down to your vendor device managers and element managers to find failures, isolate failures, and swap out equipment as it fails, that kind of stuff. So at least this Icehouse release isn't trying to solve that problem. The Icehouse release really is trying to make it so that you can use Fibre Channel in your underlying infrastructure; up at the tenant level, with volumes attaching to instances and such, there's no change.
But you now have a technology choice at the infrastructure level to use Fibre Channel in addition to iSCSI or the other technologies that are supported. So, good question, but our focus at this point is kind of at the crawl phase: let's get the infrastructure supported so it can be used in the environment, and then we can start looking at instrumenting the upper layers beyond that.

At Brocade we've identified some issues along the lines of what you're talking about. When you get into large multi-tenant environments, you don't want to get into a situation in which one tenant is causing head-of-line blocking for another tenant, right? So Brocade is in the process of engineering solutions for that. I don't have anything to report at the moment, but we are working on it, because if you're going to use Fibre Channel in an OpenStack environment, you're going to have multiple tenants. Say it's a service provider environment, though it doesn't have to be, it could be an enterprise environment as well: you can't have one client causing problems for another. And it's not scalable to just hook up more ISLs, and it's not scalable to use VSANs or logical switches, because if you're going to scale into the hundreds, which is what we anticipate, how can we make this work? So that's what we're working on.

Other questions, comments? Okay, yeah. So, when you make changes, how do you ensure that you aren't impacting all of the services that are running on top of the infrastructure? I think that's always a key question for any service provider doing any maintenance on their hardware. I think that's an excellent question; I don't think we've changed that problem. It's actually one of the benefits of Fibre Channel that you can at least do in-service software upgrades, or hot code load, depending on what you want to call it, without disruption. I wouldn't do it during the middle of prime time, but still, it should be disruption-free. And a lot of that's true for the array side too, right?
Yeah, this is enterprise-class equipment that does handle hot upgrades, rolling upgrades of firmware across the arrays and across the fabric. What we showed was a resilient fabric configuration where a service provider could take down one side of their SAN for reconfiguration and then roll over to the other, keeping things live during that. So that reference implementation we showed you was an enterprise-class resilient setup that does allow for continuous operation: it lets you roll your hardware, roll your firmware, and reconfigure with only some degraded areas but no outages. It doesn't change things; it's the same as what the data center admin has to deal with in a data center today. So we really haven't changed that problem or helped it, but the solutions that are there in the data center are still available to use with OpenStack.

Here you're making changes more frequently, though. So in terms of what changes are happening, if you're talking about rezoning the SAN frequently: the zoning that is done only impacts the involved host, but yes, there is zone modification being done whenever attachments are made to hosts from new or additional arrays, those kinds of things. We've looked at that as well, and engineering has proposed some ways to handle it, such as timers so that zoning is done in batches, less frequently; those types of solutions are being looked at. But in general, remember, it's the host and the array that are being zoned together, so not every volume is necessarily going to require a new zone. Only a path between an array and a host needs a new zone, and there may be 100 guests on that host, with all of the volumes they consume from that array sharing the same zone. So it isn't the case that every attach is necessarily creating or modifying a zone; it maybe isn't as bad or as frequent as what you're thinking. (A short illustration of this follows below.)

Time is up? Okay, great. Well, thank you, everyone. If you have any more questions, we'll be here for a few minutes.
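To close, here is the short illustration promised above. It reuses the FCZoneManager and lookup service from the earlier sketch (illustrative stand-ins, not the real Cinder code) to show that 100 volume attaches over one host/array path yield a single zone.

```python
# Many attaches between the same host HBA and array port produce the
# same zone name every time, so the zone set is only modified once.

class RecordingFabricDriver:
    def __init__(self):
        self.zones = {}

    def add_zones(self, fabric, zones):
        # Merging by zone name is naturally idempotent.
        self.zones.setdefault(fabric, {}).update(zones)


driver = RecordingFabricDriver()
zm = FCZoneManager(lookup, driver)
for _ in range(100):  # e.g. 100 volumes for guests on one host
    zm.add_connection(
        {'10:00:8c:7c:ff:52:3b:01': ['20:24:00:02:ac:00:0a:50']})

print(sum(len(z) for z in driver.zones.values()))  # -> 1 zone total
```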