We'll get started in a couple of minutes while folks find their seats. We have a couple of seats up front, if anyone wants to join us. Okay, I'm going to go ahead and get started. Good morning, everyone. My name is Andre Bosele, and welcome to the Orchestration of Fibre Channel Cloud Technologies for Private Cloud Deployments session. Thank you for joining this morning. This is one of our first sessions, and I realize it's also tax day, so I'm glad you're prioritizing what's important: Fibre Channel and the OpenStack community.

Here's the agenda I want to share with you today. We're going to talk about the Fibre Channel support we're adding to OpenStack, what we're delivering in Grizzly, and what we have planned for Havana and beyond. We'll also have a call to action: if you want to get involved in the effort we've started, we're hoping you'll partake. Can everyone hear me fine in the back? Good? Okay, great. Thank you.

So how did we get here? This journey actually started six months ago at the last OpenStack Summit, the Grizzly Design Summit in San Diego. I attended with the specific intention of meeting like-minded folks interested in pursuing Fibre Channel and getting it added to OpenStack. Sure enough, John Griffith, the project team lead for the volume manager initiative, Cinder, said, "There are a couple of guys from HP who are going to talk about Fibre Channel; you may want to sync up with them." I synced up with the HP guys, we went to the design summit and talked about extending to Fibre Channel, and we met up with some EMC folks and some IBM folks, and lo and behold, we had a quorum. After the summit, we held regular meetings on how to go about adding Fibre Channel to OpenStack, and here we are hosting one of the very first Fibre Channel sessions. I think we've made a lot of progress, and I want to thank everyone from the multi-vendor community who was involved. They deserve a round of applause for making this session possible.

Fibre Channel is not new technology. It has been around for well over 15 years, and a lot of data centers have defined footprints. What I want to stress is that although it's a mature technology, it's not dying and it's not going away. IDC projects growth through 2016, with Fibre Channel storage growing at about a 36% compound annual growth rate. So as we provide support for Fibre Channel, we're empowering folks to take advantage of what they have today, and as they grow their environments, they can continue to add Fibre Channel to their OpenStack deployments.

A question we often get from the community is: why Fibre Channel and OpenStack? This is something fairly new. I say, why not? The whole point of delivering OpenStack is to allow interoperability and extensibility, to incorporate and manage different types of storage within your environment. So we saw it as an opportunity to stay true to the OpenStack and open source community by allowing the incorporation of Fibre Channel.
We also realized that as a lot of companies and organizations deploy their private cloud environments, Fibre Channel is probably a good fit, and they have the expertise. They know what Fibre Channel delivers in terms of performance and resiliency, and those are things they want to take advantage of, particularly for the applications that require a Fibre Channel infrastructure. We all know that large database applications and Exchange servers require this type of infrastructure. So we saw it as necessary, and this is just the right time; we expect this to grow in the future. Now I'd like to bring Edgar St. Pierre up to the stage to go through a couple of slides, and I'll be back shortly. Thank you.

Good morning, everyone. So what are we talking about here? We're talking about choices. My dad was a carpenter, and he always said you use the right tool for the right job. Now, I use that as a great excuse to go out and buy new toys for every project I do around the house, but in the data center, what we're talking about is providing the right technologies for the kind of solutions you're trying to deliver in your environment. If you're running low-latency transactional workloads, you probably have existing implementations in your environment that leverage Fibre Channel, and you should be able to use them in your OpenStack environment as well. Whether you're migrating solutions off legacy hardware in a consolidation play, bringing them into a virtualized environment and into OpenStack, or migrating from a VMware environment into OpenStack for cost-reduction purposes, you'll be able to do that. If you're looking at storage plays you can actually implement in your OpenStack environment, you now have both iSCSI and Fibre Channel; it's a matter of options and choices. The benefit is that you can leverage your existing resources, not just the physical resources but your people and your processes: how you manage your Fibre Channel infrastructure, your highly resilient infrastructure, perhaps including remote replication capabilities you may already have in place. You'll be able to introduce all of that into your solutions as well.

So what have we done so far? There have been two blueprints, one completed and one proposed. In the Grizzly timeframe, we completed the initial blueprint, which introduced Fibre Channel connectivity from KVM into Cinder block storage. Coming up in the Havana timeframe, we'll be talking about creating connectivity, that is, zoning within the Fibre Channel fabric, on the fly, instead of relying on a pre-zoned or open-zoning configuration. What's important from the perspective of this session is that this is really the first formal session with any discussion of Fibre Channel within the OpenStack community at all. We'll be talking about the design summit sessions coming up later this week, but just the fact that we have a first Fibre Channel session here is significant. With that, I want to hand it over to Gary Thunquest, who's going to talk more about Grizzly. Great, thanks, Edgar.
So one of the goals we had when all of these different vendors came together at the Grizzly Summit around Fibre Channel was: how do we come up with one implementation of Fibre Channel that all of us can use and integrate different products into? We used the group that formed at the last summit as we created the first blueprint, reviewing, refining, and extending it so that it met all of our needs, and even as we went into implementation, we used the group to review progress and make sure everything was going to work for all of us. In really just a six-month period, we went from people shaking hands and saying "Hi, I'm Gary" to blueprints submitted, code submitted, code accepted, and shipping now in Grizzly. I think that's a good accomplishment.

Let me walk through what we've added to Grizzly and how you can take advantage of it. Before Grizzly, iSCSI was the only type of storage Cinder understood: there are drivers that connect to different devices, bring iSCSI storage into the environment, and attach it to VM hosts, all over iSCSI. What we did first in this first phase of Fibre Channel work was bring a new interface into Cinder to allow us to create Fibre Channel drivers. What exists today is a base class around iSCSI; we added a new base class around Fibre Channel that allows any vendor to subclass it and create a Fibre Channel driver for their device. So the first part is enabling the development of Fibre Channel device drivers in the environment.

The second piece is that Fibre Channel obviously has some very different characteristics from iSCSI, so we needed mechanisms to accommodate that, specifically around addressing and how SCSI is managed at the Fibre Channel level versus the iSCSI level. Today, the iSCSI mechanisms between Nova on the client side and Cinder on the storage side revolve around getting IQNs off the hosts and exchanging them with IQNs and IP addresses on the storage side to address the storage target. Fibre Channel is a little different: it's a lower-level protocol, you typically have multiple ports on your hosts, and multipathing is done above Fibre Channel, as opposed to iSCSI, where it's below. So it required some extensions to the exchange of initiator and target addresses between the client and storage systems, the Nova and Cinder management pieces. There was some Cinder work to accommodate that.
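To make that address exchange a little more concrete, here is a minimal sketch of the data that flows between Nova and Cinder at attach time, assuming the Grizzly-era field names as we understand them; treat it as illustrative, not a definitive spec. Nova gathers the host's HBA addresses into a connector dictionary, and a Fibre Channel driver answers with target WWPNs and a LUN instead of an iSCSI portal and IQN.

```python
# Illustrative sketch of the Grizzly-era attach handshake. Field names
# reflect our reading of the code and may differ by release or driver.

# Nova side: connector properties gathered from the host. For iSCSI
# this is a single IQN; for Fibre Channel it is the WWPNs/WWNNs of the
# host's (typically multiple) HBA ports.
connector = {
    'host': 'compute-01',
    'ip': '10.0.0.5',
    'initiator': 'iqn.1993-08.org.debian:01:abcdef',    # iSCSI identity
    'wwpns': ['500143802426baf4', '500143802426baf6'],  # FC initiator ports
    'wwnns': ['500143802426baf5'],
}

# Cinder side: what a Fibre Channel driver's initialize_connection()
# hands back, versus the iSCSI portal/IQN form. Multipathing is handled
# above Fibre Channel, so multiple target WWPNs may appear.
fc_connection_info = {
    'driver_volume_type': 'fibre_channel',
    'data': {
        'target_discovered': True,
        'target_lun': 1,
        'target_wwn': ['1122334455667788', '1122334455667789'],
    },
}
```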
Then there's a piece in Nova that needed to change. From the hypervisor standpoint, importing volumes via Fibre Channel and connecting them to VMs required some changes on the hypervisor side. So as part of this, we made the changes to KVM so it can accept and attach Fibre Channel storage to those hosts and attach it to the VMs. That was a piece of work done as well.

And lastly, we did some work bringing in a Fibre Channel storage device driver. Being at HP, we did the HP 3PAR device, so we submitted a Fibre Channel driver for the 3PAR array, and it can serve as a reference implementation for a Fibre Channel device driver. As our group was working together, a number of vendors were writing Fibre Channel device drivers, so we were able to share code quickly and easily and get it incorporated across some of the other vendor drivers as well. So there are now more than just the HP driver, even in the Grizzly release.

Those were the major components of what was done for Grizzly. What you end up with is an environment where, in your data center, you can use your Fibre Channel arrays and Fibre Channel SANs to connect to the hosts. The way we worked through this handles both single fabrics and multiple fabrics, so redundant and resilient deployments and infrastructure build-outs can be accommodated, and volumes created on the arrays can be created through Cinder, attached over to the hosts, and then attached to the VMs via Nova. All of that is in Grizzly.

So what can we do now? If you're a storage vendor and you have Fibre Channel devices that would be useful to people building clouds, you can take this, build a driver for your device, and integrate it into the system. You can use the HP 3PAR driver, or the other drivers coming in now, as reference implementations; there's a rough sketch of what such a driver looks like below. If you're building out a cloud environment, whether you're a data center building out a private cloud or a service provider that wants to use Fibre Channel, it's in Grizzly, it's working, and it can be taken advantage of. From a user perspective, the nice thing is that, as you would expect, this is completely transparent: whether users are on iSCSI storage or Fibre Channel storage, they use the same Cinder operations to create, delete, attach, detach, snap, all those things, and it happens regardless of whether the infrastructure uses iSCSI or Fibre Channel. So that's goodness there.
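As promised, here is a hedged skeleton of a vendor Fibre Channel driver, not the 3PAR code itself. The FibreChannelDriver base class is the Grizzly interface as we understand it, while VendorFCDriver and every backend_* call are hypothetical placeholders for an array vendor's own API.

```python
# Hypothetical skeleton of a vendor Fibre Channel driver for Cinder.
# Use the HP 3PAR driver as a real reference implementation.
from cinder.volume import driver

# Placeholder stand-ins for a vendor's array API (hypothetical).
def backend_create_lun(name, size_gb): pass
def backend_delete_lun(name): pass
def backend_export_lun(name, wwpns): return 1
def backend_unexport_lun(name, wwpns): pass
def backend_target_wwpns(): return ['1122334455667788']


class VendorFCDriver(driver.FibreChannelDriver):
    """Sketch of a minimal FC driver subclassed off the new base class."""

    def create_volume(self, volume):
        # Carve a LUN of volume['size'] GB on the array.
        backend_create_lun(volume['name'], volume['size'])

    def delete_volume(self, volume):
        backend_delete_lun(volume['name'])

    def initialize_connection(self, volume, connector):
        # Export the LUN to the host's initiator WWPNs, then report the
        # target WWPNs and LUN back so Nova can attach the volume.
        lun_id = backend_export_lun(volume['name'], connector['wwpns'])
        return {
            'driver_volume_type': 'fibre_channel',
            'data': {
                'target_discovered': True,
                'target_lun': lun_id,
                'target_wwn': backend_target_wwpns(),
            },
        }

    def terminate_connection(self, volume, connector, **kwargs):
        backend_unexport_lun(volume['name'], connector['wwpns'])
```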
Now, known limitations. There are some things to keep in mind if you're building out an environment using this Fibre Channel work. The first thing everyone asks is: what about zoning? What are you doing for fabric zoning? Zoning is an area where we really felt fabric orchestration should be separated from storage device orchestration, so in this first phase, we have not implemented anything for automated zoning in the fabric. That does bring some limitations: with what's in Grizzly, the SAN must either be open-zoned or pre-zoned. Since there's no automated zoning, one of those approaches is what you need to do. Or, I suppose, you could take one of the drivers, or build your own, and build zoning into it; that's certainly an option. But what's coming is a second phase of Fibre Channel work, the second blueprint Edgar referenced for Havana, that is going to address zoning. We'll talk about that in a couple of minutes, and Andre will go through it. But that's a limitation to know about in the current Grizzly implementation.

The second thing is security. iSCSI has the CHAP mechanism to authenticate clients and servers, initiators and targets, across SCSI: a secret is effectively injected into both the initiator and target sides, and it's used to authenticate the client at connect time, so you can reliably put access control around volume attachment to hosts and have that done securely. Fibre Channel doesn't have a directly analogous mechanism. It does have access controls at the fabric level, and typically at the server-target level in the masking capabilities that storage devices generally have. However, those mechanisms rely on the initiator's address being trustworthy, and with NPIV, where you can change addresses on the client side, that may not be the case. So one thing we want to stress is that when you're using this Fibre Channel solution as it exists in Grizzly, you need to make sure that end users and untrusted individuals don't have access to NPIV mechanisms on the host to change addressing, because depending on how you condition the fabric and build things out, they could get access to things they shouldn't. In most deployments, end users don't have access to that mechanism, and it's not an issue. But if you offer any kind of bare-metal capability where they do have access to it, you need to keep it in mind and work through it to make sure your environment is still secure.

Then lastly, hypervisor support. We mentioned we made the KVM changes to accept Fibre Channel storage and attach it to VMs; we did that only for KVM. There's some work started in Havana, where we're working with some teams to get this integrated with VMware, so it will be there, and there's also talk of some Hyper-V work going on. So today it's KVM only, with other hypervisors coming. Those are some of the limitations of what we have in Grizzly. Now Andre is going to talk about where we go from here with Havana.

Okay, thank you. Can everyone hear me? Okay. As everyone can see, this is a multi-vendor approach; we're doing these transitions to cover different aspects of the slides. By show of hands, who has a Fibre Channel private cloud deployment? Or you may not have a Fibre Channel cloud deployment; you may have a private cloud deployment and you're looking at adding Fibre Channel. Any other hands, anyone working on a project here? So we're planting the seeds. My goal is that by the next OpenStack Summit, we'll have at least a third of your hands up in the air, very high and very interested in doing some of these deployments.

If anything, I want to stress this particular slide, because this is really what we're planning on proposing and delivering in Havana. What it shows is that we want to extend Cinder's volume manager to automate the Fibre Channel zone management aspect when Fibre Channel storage is deployed and fabric zoning is enabled. This is going to be functionality that's transparent to the end user.
It's all being managed in the back end so that you can get access to that Fibre Channel storage. Gary talked about how, when we came out with Fibre Channel block storage, we assumed a couple of things around zoning: that it's either an open-zone environment or the environment has been pre-zoned to accommodate it. What we're doing in providing the Fibre Channel zone management capability is automating the addition of zone resources, updating the zone sets when you add compute resources to your cloud environment, and similarly, when you remove those resources, updating the zone sets as well, all in the background.

There are a couple of other things we want to add to the Fibre Channel zone manager. One is an API that allows for integration, similar to the Fibre Channel block API, with other vendor solutions, so they can take advantage of some of the southbound information; that way we'll be able to do zone management across the entire enterprise. And we realize that as you create compute resources, we'll need to provide context on the Nova side, so that's probably going to be a separate addition to the blueprint to support that Nova component as well.

Let's talk about some additional requirements. In coming out with the Fibre Channel zone manager, there are a couple of things we have to address to make this automated and transparent to the user. We'll have to come up with a zoning mode, which essentially identifies whether you have Fibre Channel fabric zoning enabled or not, or whether you have target-driven zoning; with that configuration in place, we can invoke the Fibre Channel zone manager functionality. Then there are zone grouping policies, which describe the relationship between the host system and the storage system, the initiator and the target, on Fibre Channel. There are a couple of different scenarios here: you may have a zone with just one target assigned to one initiator, or one initiator that has visibility to multiple targets, and we have to manage those zone set configurations accordingly. And of course, as we create and manage these zone sets, we want to be able to enumerate the SAN context as well as the fabric context, so we have that information readily available.

Lastly, I want to talk about what we're delivering in the Havana release, which is focused on more simplified zone management. What I mean by that is we're going to have one active zone set, and as you add compute resources to that zone set, we'll update its members. If you remove compute resources from a zone that has multiple members, we won't kill the whole zone; we'll just remove that member from it. If it happens to be a single-initiator, single-target zone, then when you remove the compute resources, we'll remove the zone entirely. So we'll have the ability to add, update, and remove. That's an important delivery for the Havana release.
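Since the zone manager was still at the blueprint stage at this point, here is a rough, purely illustrative sketch of the simplified behavior just described: a zoning mode that gates the feature, one active zone set, and add/remove operations that update members rather than blindly creating and destroying zones. None of these names are the eventual Cinder API.

```python
# Illustrative sketch of the proposed Havana zone manager behavior.
# ZoneManager, zoning_mode, and the method names are placeholders.

class ZoneManager(object):
    def __init__(self, zoning_mode='fabric'):
        # zoning_mode identifies whether fabric zoning is enabled
        # (versus open/pre-zoned, or target-driven zoning).
        self.zoning_mode = zoning_mode
        # Simplified model: one active zone set, mapping a zone name
        # to its member WWPNs.
        self.active_zone_set = {}

    def _zone_name(self, initiator_wwpn, target_wwpn):
        # Zone grouping policy: here, one zone per initiator-target
        # pair (single initiator / single target).
        return 'openstack_%s_%s' % (initiator_wwpn, target_wwpn)

    def add_connection(self, initiator_wwpn, target_wwpns):
        # When compute resources are added, update the active zone set
        # in the background; transparent to the end user.
        for target in target_wwpns:
            name = self._zone_name(initiator_wwpn, target)
            members = self.active_zone_set.setdefault(name, set())
            members.update({initiator_wwpn, target})

    def remove_connection(self, initiator_wwpn, target_wwpns):
        for target in target_wwpns:
            name = self._zone_name(initiator_wwpn, target)
            members = self.active_zone_set.get(name, set())
            members.discard(initiator_wwpn)
            # Single-initiator/single-target zone: remove the whole
            # zone; otherwise just drop the departing member.
            if len(members) <= 1:
                self.active_zone_set.pop(name, None)
```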
Now, post-Havana, we're going to look at widening this to support more enhanced zone management: full lifecycle management and multiple active zone sets, probably something a large enterprise can deploy in their environment the way they do today. So that's the stepwise approach we're taking, supporting both simplified and enhanced zone set configuration. What I want to do now is talk about what we're doing in the vendor community, and I'm going to bring Edgar back on stage to talk about what EMC is planning. Thanks, Andre.

Okay, so in Grizzly, EMC completed its iSCSI integration for VNX and VMAX. What we've done is simply build on top of that to include Fibre Channel capability. That didn't quite make it into the Grizzly release, but it will be there in the Havana timeframe, for both VMAX and VNX, and we'll have a whole host of other volume drivers as well that support the rest of the EMC product family, for iSCSI connectivity at least. Also in the Havana timeframe, we'll be looking at how we handle quality of service within a single volume driver across multiple pools within a single storage array, including multiple protocols, so both Fibre Channel and iSCSI connectivity. And finally, of course, we'll be testing with the Fibre Channel zone manager work going on in Havana. It's a very important integration for us, and we'll make sure it takes place.

A heads-up on design sessions for the developers who are here. I just heard Thierry say the design sessions are intentionally in small rooms, so please only attend if you're contributing. If you're a developer interested in Fibre Channel activity within Cinder: all the Cinder sessions are on Thursday, and there's also one Nova session associated with this. The first is the Fibre Channel SAN zone access control manager session that Andre was just talking about, late in the afternoon on Thursday the 18th; its purpose is to introduce the topic of creating zones and zone sets within a Cinder environment. There's also multi-attach and read-only volumes, earlier on Thursday morning; that's about providing for cluster configurations. There are other use cases as well, but cluster configurations are very important within a VMware environment running within OpenStack, for example. And finally, there's the VMware compute driver roadmap session, which unfortunately runs concurrent with the cluster discussion on Thursday, but it will also touch on Fibre Channel connectivity. Now, Gary, I think you're going to wrap this up for us.

All right, yeah. Just to summarize: what have we got? In Grizzly, we have the infrastructure in place for you to deploy your private cloud using Fibre Channel storage connected to your hosts and put that into a production environment. If that's of interest to you, it's something you ought to be able to do with Grizzly and what's just been released. If you're a vendor and this is of interest, the interfaces are now in place for you to create drivers for your product, bring them into the ecosystem, and support them in the OpenStack environment.
So hopefully, if that's of interest to you, that's a place where you can jump in and be a part of this as well. And as Andre talked about, there are new things coming in Havana to start orchestrating the fabric too. Really, for all of this, if you're interested in being a part of it, whether you're a developer or someone who wants to be even closer to what's going on in this area, feel free to contact me, Gary Thunquest at HP, or Andre, or Edgar. Our emails are up there. Plug in and contribute; we would love to have you be a part of it. So I think that's all we've got. Thank you. Any questions or comments?

Yeah. So it's just like with iSCSI: this is all management path, configuration path orchestration. Once things are configured and put in place, those infrastructure services are all using the data path, and nothing in OpenStack is in the data path there. This is all configuration orchestration.

Right. So I think the solution we're headed toward would enable those kinds of policies to be implemented. I don't know that that will be part of Havana, but the interfaces coming into play would let us start putting finer-grained access control mechanisms in there, if you have infrastructure you want to orchestrate and configure to that level of granularity. So the original question was: what about LUN-based zoning? The answer is yes. I don't know if you want to comment as well, Andre? No, I think you covered the potential. Any others?

Is there a roadmap for recording information and bringing it back to the OpenStack layer as well? So in a multi-vendor situation, do you see all the information through the same pane of glass? Is that on the roadmap? Well, the purpose and the goal is not really to put a monitoring system in for everything in the environment. The focus of monitoring right now is really at the infrastructure service level. Your physical resource consumption and utilization, troubleshooting, break-fix, all of that is still your physical infrastructure management problem, not your OpenStack solution. There will be, though: if you look at one of the other projects going on within OpenStack right now, Ceilometer, that raises it up one level and looks at it at a level that does relate to OpenStack. Now, how do you tie the relationship between events that might be detected within Ceilometer and the infrastructure, back and forth? I think that still has a long way to go before it's actually there. And roadmaps are worked out every six months, so I think that's more of a stay-tuned kind of question.

Yeah. So with SIOC, if you think about it, it goes through VMware, where VMware is in the data path and able to monitor different characteristics of the I/O. In this case, you'd have to add the same capabilities into something like the KVM hypervisor or Xen, or whatever the case may be. So SIOC is strictly a data path function, whereas what we're doing within Cinder is strictly a control path function.
So if you want to evolve some of the other hypervisors to monitor for latency and things like that, so they can react to it, that's going to have to be built into the hypervisors.

Yeah, I think our vision really is both. Typically, what we've seen is that they start with a small POC where it's just a slice of infrastructure. That could certainly grow from there to be everything, but there's no assumption that the entirety of the infrastructure is owned by the solution. It's all about options. And again, for environments with established Fibre Channel infrastructure, as they look to leverage some of those resources, we figure they'll take a piece of it, dedicate it to their cloud deployments, and start getting some play there. But we want to make this transparent; we want to open the doors to let them incorporate management of their Fibre Channel infrastructure, whether it's dedicated or a hybrid solution.

I'm not sure I caught all of your question, only part of it. So the question was about snapshots, for example: snapshots are supported. Cinder does support snapshots on a per-LUN basis, and at the LUN level, snaps and clones of volumes. Where Fibre Channel comes in, the changes are really around attaching those over a different type of storage network than we had before: iSCSI, and now there's the option of Fibre Channel as well. But the storage functions are retained; create, delete, snap, clone are all still in place.
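To underline that transparency point, here is a small sketch using the v1-era python-cinderclient, with placeholder credentials and names; the same calls apply whether the backend driver speaks iSCSI or Fibre Channel.

```python
# Sketch: the storage functions are transport-agnostic. Whether the
# backend driver speaks iSCSI or Fibre Channel, the calls are the same.
# (v1-era python-cinderclient; all credentials here are placeholders.)
from cinderclient.v1 import client

cinder = client.Client('admin', 'secret', 'demo',
                       'http://keystone.example.com:5000/v2.0')

vol = cinder.volumes.create(size=10, display_name='db-volume')
snap = cinder.volume_snapshots.create(vol.id, display_name='db-snap')
# Attach/detach go through Nova; create/delete/snap/clone stay the
# same no matter which storage network carries the traffic.
```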
Good question: can people get onto the cloud without developing for the cloud, so they can move workloads that are more Fibre Channel-centered, more vertically scaling? What do we see as the end game here? Is this just a bridge approach, and do we really see this going away? I think no. Our position, and as Andre talked about, looking at where Fibre Channel is going and what the adoption rate looks like, is that it's not something that's going to tail off; what the analysts are saying is no. This is a tool that has a place, and there are times when it's the preferred tool to use. We don't see that going away. In fact, what we see is the need to bring it into the cloud paradigm, so people can deploy the kinds of applications in the cloud that do want that type of predictability and resilience, those behaviors at the fabric level. So it's not a bridge or a stopgap; on the contrary, we see it as something very sustainable going forward.

I completely echo that. If you think about design-to-fail application implementation, there's not a tremendous amount of experience; there's definitely a growing amount of experience in the industry with cloud-scale application development. But the fact of the matter is that there are a ton of applications out there that are more transaction-oriented and need low-latency implementations in the infrastructure, and those are not going away anytime soon. So is this a stopgap? No. This is more of a "let's bring all these other applications into play" that haven't been discussed up to now, because it's been all about cloud scale, all about test/dev-type applications. Let's bring these other applications into the conversation as well. And that's one of the design sessions this week, so yes, there is discussion around that.

Right. It is an important paradigm that we need to bring into the mix so those applications can be a part of this environment. Okay. Well, thank you very much. We appreciate it. Thank you.