Hello, my name is Brendan Wolfe. I'm the marketing manager for OpenStack at NetApp. Welcome to our presentation this morning, 9 a.m. on a Thursday; I'm glad to see people coming in. I'll be co-presenting with Rob Esker, our product manager for OpenStack at NetApp. We're going to talk today about NetApp's commitment to the OpenStack community and what OpenStack means for NetApp's portfolio going forward.

NetApp has been a major contributor to OpenStack for a few years now. I think this is our eighth or ninth summit, something like that, and we've come a long way. We're the number one storage provider for OpenStack today, we have quite a few customer deployments, and we'll talk about some of those use cases further into the presentation.

NetApp's vision around OpenStack is about how we manage data across the multi-cloud ecosystem we're all dealing with today. People want their data available when they need it and where they need it, with strong data management controls, while still taking advantage of that multi-cloud ecosystem. That's what we're calling the NetApp data fabric vision. To enable it, you need a common set of APIs across all these different cloud services, which is why OpenStack is such a good fit and a strategic place of investment for NetApp today. Depending on the provider, NetApp storage can sit under or alongside each of these clouds. We've done a lot of work in our driver set across the portfolio to pass through many of our enterprise features, making them available through this cloud ecosystem and enabling the data fabric vision I mentioned.

So sometimes the question comes up: why does enterprise storage matter for OpenStack? We'll go through several stories today to explain why we think it matters, but essentially we can categorize most of it into four quadrants: your data and your storage need to be flexible, efficient, secure, and proven, and all of these things need to be built with OpenStack in mind.

We also classify people into a few categories to understand what's driving OpenStack adoption today. Hopefully most of you in the room will see yourselves a little in each of these pillars. They're not entirely comprehensive, but they give us a reference point for the conversation, because people approach us for different reasons. You might be an enterprise customer whose major concerns are availability, TCO, and avoiding vendor lock-in. You might be building a web-scale architecture, a large system where ease of operation and density become the primary concerns. If you're a service provider, it might be SLAs and differentiation: how do you compete against the other players in the market? Or you might be in DevOps, building cloud-native applications, where you care about open APIs, elasticity, and the agility to bring products to market faster. OpenStack addresses each of these pillars in a slightly different way, and on the storage side we're targeting these potential customers and building these kinds of features into our products.

So with that, I'm going to hand it over to Rob, who will talk about how we've been successful so far. Thanks.

Hey, good morning.
As I mentioned, I'm Rob Esker, product manager, and I do a bit of strategy work around OpenStack. I started the effort at NetApp in 2011, actually in 2010 technically, so I've been at this for a while. I think this is my ninth or possibly tenth summit, so it's been interesting to watch the evolution. And by the way, I commend everybody here in the room at 9 a.m., the morning after a typical night of parties, so I appreciate your attention.

I want to point out the OpenStack Foundation's user survey. It's published every six months, usually in the week just ahead of the summit, and that was true this cycle: at the end of last week the Foundation published the latest results, and they're consistent with what we've seen before. Among the commercial, enterprise-grade storage vendors, we're consistently number one, and we actually increased our lead this time.

Now, you'll note that Ceph is certainly very prevalent, and for a lot of sensible reasons; it's usually very easy to get started with. We're starting to see an interesting trend, though, which goes something like this: I've got OpenStack stood up, I have a certain set of cloud-native applications deployed atop it, and Ceph performs within a particular set of characteristics. But to fulfill the complete vision of our cloud mission, we need an additional set of capabilities that augment what Ceph can do. Ceph is perhaps better aligned with a no- or low-SLO tier, or a particular performance tier, but I need to move more of those "pets" into my cloud (if you're familiar with the pets-versus-cattle metaphor; apologies if not). I need to move those classic, high-SLA applications into it.

I also want to point out that while we're explicitly called out in the survey, we know the LVM and NFS numbers contain some of our systems as well, where deployers are using those generic drivers instead of the NetApp-specific ones. And we actually have a somewhat more empirical way of measuring this; the user survey is, of course, self-reported. We've instrumented our Cinder drivers (this is certainly true of our ONTAP-based systems, if you're familiar with them, and it will be true of our E-Series systems in coming OpenStack releases) such that if you as a deployer have elected to phone home to NetApp for support purposes, a little bit of telemetry tells us the system is being used with Cinder. So we can measure adoption rates fairly empirically, and a couple of things are happening. We've seen really steady growth in the total number of deployers (folks come and go, so this is an aggregate), a pretty significant rise in the number of folks going to production deployment status, and, more significantly, and I think this really speaks to where OpenStack is at, the ratio of systems to deployments is now growing substantially. We're seeing organic capacity growth, which is generally an indication that this is becoming more and more of a production quantity. If I go back further in time, by the way, the curve is more or less consistent.

So, briefly, I want to touch upon the nature of our integrations.
I should probably mention that we have a full slate of sessions that go into more specific detail on each of these topics, some of which occurred earlier in the week; if you're not already familiar, all of these sessions are on YouTube. Later today we've got a number of sessions that focus on these particular topic areas in greater depth, so what we want to do here is provide more of an overview. Our capabilities range across block, object, and file, and also augment the creation of Nova instances in a more efficient and performant way.

Let me start, though, with a new project we're adding to OpenStack, namely Manila. Manila, whether or not you're familiar with Cinder, is essentially the same model, albeit for shared and distributed file systems. This is a capability that NetApp originally conceived of and proposed within the Cinder community, and in the process of discussion, in places like this and at the design summits, it was settled that it really required its own separate service. So we've been building a prototype, enhancing and expanding upon it, and building a community around it. Now there are quite a few Manila driver options beyond NetApp's own Manila NFS, Manila CIFS, and a couple of other derivations; you also see drivers from other common NAS vendors, and we expect that to expand again in the Liberty timeframe.

Manila fills a pretty critical gap. OpenStack has firmly established itself as an open infrastructure-as-a-service framework, but there has been a critical gap in that infrastructure, namely the treatment and delivery of shared and distributed file systems in an as-a-service construct. Manila addresses that. By the way, here's an interesting point, and it might be the very first example of its kind: Amazon announced a capability called EFS, Elastic File System, within roughly the last month. That's essentially Manila for Amazon. It follows the evolution of Manila; I'm not suggesting they're using Manila, but it's the same construct and it fills the same set of gaps. So this is perhaps the first instance of Amazon following OpenStack instead of the other way around.

A common question is: where is Manila at? You may remember that once upon a time within OpenStack, capabilities existed in a broad, nebulous set of projects without specific distinction; a project could then move to incubated status, from there to integrated status, and then potentially be designated as core. That's gone; it doesn't exist anymore. You may have heard of the "big tent" approach. Manila achieved incubated status at the end of the Juno cycle. During Kilo we significantly advanced its capabilities and matured it a great deal, and it would have been ready to move to the integrated state, but in the meantime all of this changed underneath us, moving to more of a tagging methodology. It's beyond the scope of this session to get into what all of that actually is, but just so you're aware, four of those six tags are already assigned to Manila, and one of the remaining tags is a legacy tag that will never apply to it.

Over time, I think you'll see Manila proliferate via its availability in common distributions. We've been working with the major distribution partners for a number of months, and there's been a lot of advancement just within the course of this week in describing how those will start to appear. I believe Red Hat has already announced that Manila will be part of their RHEL OSP distribution in the future; look for similar announcements from others. So, to answer the question of where it's at: it's ready for some use cases now. At the end of the Liberty cycle we're going to try to achieve what you can think of as the equivalent of a commercial 1.0, appealing to a broader set of use cases and better rounded out in terms of installation experience, documentation, and ongoing care and feeding. That's where we're going with Manila; look for it in distributions in the not too distant future.
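If you haven't seen Manila in action, the tenant experience is deliberately Cinder-like. Here's a minimal sketch using the Manila CLI; the share name, size, and subnet are illustrative, and details such as share types and share networks are omitted for brevity:

    # Create a 1 GiB NFS share
    manila create NFS 1 --name devshare
    # Grant access to a tenant subnet
    manila access-allow devshare ip 10.10.0.0/24
    # Look up the export location, then mount it from an instance
    # like any ordinary NFS export
    manila show devshare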
So, which brings us to Cinder. I should start by saying that our strategy is to provide best-in-class integrations across the entirety of our portfolio, whether that's our ONTAP-based hardware systems or, not depicted here, ONTAP in a virtual form. For example, we have a derivation of our operating system, the one that has classically run on the filers you may have heard of from NetApp, that actually runs in clouds; the first example runs in AWS. The Cinder integrations I'm talking about work with that just as they work with clustered Data ONTAP and with 7-Mode. E-Series is another platform, E-Series and EF-Series, EF being the all-flash version, for which we've also provided Cinder integrations. And there's a further platform we've announced but not yet started shipping, called FlashRay, for which we have a prototype Cinder enablement as well.

You can see there are different protocols we support; a notable recent addition is Fibre Channel, for the types of environments where that's required. And when we enable a platform via these protocols, within our drivers we're also unlocking the distinguishing characteristics of that platform. Some of those are inherent to the box, and some are specifically configured. Perhaps you have an E-Series system with the largest available, slowest-spinning, high-capacity drives; you'd probably arrange that into a Cinder catalog item that's more archival. Or perhaps you have an EF-Series, which is all flash, because you need vast throughput; you might align that with more of a streaming workload or an analytics workload. It's highly dependent on your needs. I should mention that in Cinder, all of these catalog capabilities are arbitrary; this is something we helped evolve within the Cinder community over time. You can establish whatever makes sense for you: classically gold, silver, bronze, or maybe some more granular notion of back-end capabilities. So those are the protocol options on the right; on the left-hand side, we're driving those unique capabilities into our Cinder drivers so that you can create this catalog, distinguish between the different options and capabilities, and deliver that to your tenant base.
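To make the catalog idea concrete, here's a minimal sketch of how a deployer might publish two tiers. The back-end names, tier names, and sizes are illustrative, not a prescribed NetApp configuration:

    # cinder.conf: two back-end stanzas (abbreviated)
    #   [DEFAULT]
    #   enabled_backends = ontap-gold,eseries-archive
    #   [ontap-gold]
    #   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    #   netapp_storage_family = ontap_cluster
    #   netapp_storage_protocol = nfs
    #   volume_backend_name = ontap-gold
    #   [eseries-archive]
    #   ...

    # Publish the tiers as volume types tenants can request:
    cinder type-create gold
    cinder type-key gold set volume_backend_name=ontap-gold
    cinder type-create archive
    cinder type-key archive set volume_backend_name=eseries-archive

    # A tenant then simply asks for a tier:
    cinder create --volume-type gold --name db-vol 100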
One capability that's kind of interesting in the ONTAP space is the ability to integrate with Glance, where you can take advantage of our deduplication capabilities and get 90-plus percent deduplication rates, if you're not already familiar. Glance images are typically guest virtual machine images, golden images if you will, and since we're talking about common OS bits, they deduplicate aggressively. Likewise, once you've done that, we can create new instances in a very efficient manner. If, for example, I have a RHEL image that I want to boot, and I want to boot eight instances from it, by default Nova will copy that image out, in the extreme case making eight separate copies on eight separate compute nodes. Instead, we boot from volume and clone from Glance: that's an instantaneous operation, and it's space-efficient. So that's a cool capability we see used in a lot of places.

As a case in point, we recently did a performance characterization in the process of validating something we call FlexPod (we'll show you a little more on that in a bit), comparing our results for creating new instances against another well-known competitor within the OpenStack community, who had published similar results. You can see a very dramatic improvement in overall instance creation time, mostly attributable to that cloning capability. I should mention that the system on the left was all flash; the NetApp system, in fact, wasn't.
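From the CLI, the boot-from-volume flow looks roughly like the following sketch; the flavor, names, and size are illustrative, and the UUIDs are left as placeholders:

    # Create a bootable volume from a Glance image; on a cloning-capable
    # back end this is a near-instant, space-efficient clone, not a copy
    cinder create --image-id <image-uuid> --name boot-vol-1 20
    # Boot the instance from that volume instead of copying the image
    # down to the compute node
    nova boot --flavor m1.medium --boot-volume <volume-uuid> instance-1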
So, briefly, on Swift and object storage: there are basically two options at present. One is StorageGRID Webscale, which is a new and emerging capability within our portfolio specifically aimed at many of the use cases Swift would be deployed for. I'm not suggesting StorageGRID Webscale itself is entirely new; it's a product that has matured over time and that we've refactored in a pretty significant way. Today it can already be used as a Glance back end. Swift API and Keystone support are coming; I can't get into specifics, but it's not the distant future. It already supports S3 and CDMI interfaces, so if you have application workloads atop OpenStack that just want to talk S3, you don't even need that Swift API support. We deliver it as a Webscale appliance, actually a kind of unique marriage between the StorageGRID Webscale capabilities and the characteristics of E-Series. We deliver a geo-distributed erasure coding capability, and the scalability you can achieve is outstanding, especially when you look at some of the scaling limits you hit with Swift proper.

That said, I think Swift proper makes sense in a lot of use cases, and we've had a lot of success there as well. I'll talk about one notable example in just a minute where you take upstream Swift and make some tuning modifications. It's not modified Swift; we're not talking about changing the code. You reduce the replication count, based on a unique characteristic of that E-Series platform I mentioned before, something we call Dynamic Disk Pools. You can think of it as erasure coding within the frame. So what you get is the ability to run more of a classic parity scheme rather than relying on the relatively inefficient full-replica approach of the consistent-hashing ring. Long story short, I can think of one example where a customer needed to deploy six petabytes of object storage. By default in Swift, that would be 18 petabytes of raw capacity; with the parity overhead of our system, it comes to about 7.8 petabytes. And there are advantages beyond the obvious fact that you don't have to deploy as much storage overall: it can also improve the scaling limits of Swift, because you've eliminated a lot of the east-west replication traffic. Again, there's another session later in the day with a more detailed treatment of this.
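For a sense of where that tuning lives, here's an illustrative sketch with the ring builder. The part power and, above all, the single-replica count are examples keyed to the numbers in this talk, not a general recommendation; your own durability requirements should drive the choice:

    # Default Swift durability is three full replicas:
    #   6 PB usable x 3  ->  ~18 PB raw
    # With parity protection inside the array (e.g. Dynamic Disk Pools),
    # a reduced-replica object ring:
    swift-ring-builder object.builder create 17 1 1
    #   6 PB usable x ~1.3 parity overhead  ->  ~7.8 PB raw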
So let me get into some of the specifics of the use cases Brendan alluded to. This is just a sampling; I wish we could talk about all of that 484% growth, including some really interesting names that, as you might imagine, aren't necessarily interested in sharing their stories, perhaps for competitive reasons.

Thomson Reuters: we have a joint session with them later today, where we'll talk about their use of utility cloud capabilities. In this particular case, I really want to highlight that they've been able to adapt existing database-as-a-service and classic, high-SLA application capabilities and requirements and make them available in an OpenStack construct. This is somewhere we've done collaborative work, notably around Manila, and they now have Manila in a production state, which is actually among the first in the world. So it's quite interesting to learn more about that in that separate session.

Deutsche Telekom is another interesting case. They're going down the NFV path very aggressively. We've long been a primary provider to various arms of Deutsche Telekom, and there are a number of different OpenStack initiatives there that they've discussed publicly. There's a separate session where we talk about a collaboration there; likewise they're using Manila, though really it's the whole suite of capabilities, predominantly block and file, I should mention. They'll talk about the results of that collaboration, and where they intend to go with it, in that other session.

NHN Entertainment in Korea has an interesting set of requirements, and I've seen this pattern a few times now: they'll typically deploy an application into a public cloud by default. What it boils down to is this: I want to deliver a game title to consumers around the world, and I don't necessarily know how much traction it will get, so I want to put it in a place with the lowest-latency connections to my potential consumer base. The public cloud is broadly distributed and very elastic, and elasticity is quite important when you can't accurately predict demand, so that typically ends up being one of the public infrastructure-as-a-service providers rather than their own on-prem environment. But when that same game title reaches more of a steady state, the economics do not favor keeping it there, and the effect is quite dramatic. Not in the case of NHN, but in another case I can't get into specifics on, the cost differential is about 10x between leaving it in the public cloud and repatriating it on-prem.

And OpenStack is used as the means to provide a common API between the public infrastructure-as-a-service capability and the on-prem environment. You've probably seen a lot of this at the summit here in Vancouver: folks increasingly talking about the ability to use OpenStack as a facility to move data, or as a common abstraction across many clouds. That might be Amazon Web Services, where OpenStack's support for many of the AWS APIs (EC2, S3, EBS, and so on) lets you essentially impersonate AWS with OpenStack, or it might be OpenStack cloud to OpenStack cloud. So this is an example of that. They do, of course, deploy some of their titles on-prem first, in which case the inverse of what I just described is true: they burst out to that public capability. And by the way, this was an interesting scenario in that we hadn't talked to them beforehand; we started seeing them in some of the auto-support data I mentioned earlier, and it ballooned, or perhaps you could say bloomed, into a pretty significant deployment for them.

At the University of Melbourne, in particular the NeCTAR Research Cloud, they're making use of the full set of NetApp capabilities to deliver a full portfolio of cloud services: block, object, and soon file. To harken back to my earlier discussion of Swift on E-Series, they have a multi-petabyte deployment with the same scenario: instead of deploying 3x the capacity of the objects they want to deliver, it's about 1.3x. That's been a successful deployment since some point in 2014. They started, however, with block storage, with Cinder, and that has proliferated; they have both 7-Mode and clustered Data ONTAP deployed. And they're also going to be, I think, among the first production Manila deployments; that's the intent they've expressed to us, and we've been working with them on it. So they're using the full suite of capabilities I've described: block, object, and file.

And then NetApp itself; I'd like to talk about us for a second. We come from the development organization, and there are in fact two really distinct internal IT organizations at NetApp: an engineering services organization we refer to as the global engineering cloud, and a classic corporate IT function. In both cases, we have OpenStack initiatives underway. In the first case, the intent is to move a fairly significant footprint to OpenStack over time; I should mention that the 10,000 VMs figure is actually within a single site, so the overall effort is much larger than that. This is an intent to move from a classic enterprise virtualization stack to more of a cloud construct, and in the process, perhaps, to create a more advantageous licensing scenario, wherein, in fact, we're avoiding lock-in. So this is a big effort, and NetApp is itself, you could say, an OpenStack-on-NetApp deployer, and that will only grow going forward.

So, to cover briefly some of what we've been up to: I'm not going to go through every one of these bullet points; I really just want to focus on Kilo.
Some of the capabilities we delivered in that timeframe: a significant effort to mature Manila, and in particular to make it as modularly independent as possible. We're seeing an increasing number of folks who deploy elements of OpenStack independently of the whole. I'm not suggesting that's the common case, but it's a growing population. For example, there's a large online auction house (they've been on stage with us before talking about this, so I can mention the name: eBay) that will use Cinder in places independently of the rest of OpenStack, as a standardized block storage abstraction. The idea is: I want a vendor-neutral, open-standard block storage API, so that I can switch and swap out the implementation that makes the most sense without having to re-platform my provisioning automation logic. We want to enable the same thing with Manila.

We also added Fibre Channel support; that's in Kilo for most of the platforms. For E-Series specifically, it will be available on the NetApp GitHub repo in back-ported form for Kilo within the next couple of weeks. We've done the engineering work; we just haven't pushed the backport up yet, but it's essentially done.

We also delivered a generic Cinder NFS backup driver. If you're not familiar, Cinder has a basic backup facility where you can take a Cinder volume and push it to Swift, and I think there are some vendor options as well, TSM for example. Well, now there's a generic NFS capability, and in the future we'll offer a more optimized means of using, for example, NetApp's ONTAP product to provide a more efficient transport underneath it. But it's a cool capability: any NFS-capable box can be a Cinder backup target now. That's something we developed and made available to the entire community; it's not just a NetApp-specific driver.

We also contributed a pretty significant enhancement to the security model for Cinder. If you're using any of the NFS-derivative drivers, this is definitely something I'd look at long and hard as a reason for moving to Kilo; it's a much more secure-by-design construct than it had been before. We added what we refer to as storage service catalog enhancements: earlier I described how in Cinder you can advertise unique back-end characteristics to the Cinder scheduler, and the same construct applies to Manila; in both cases we've added enhancements, in particular for E-Series with Cinder and with Manila. We added a lot of enablement for our clustered Data ONTAP Manila drivers, we added manage/unmanage capability, and we improved live migration stability in the Kilo release.

And these are just highlights; quite a lot more happened. Case in point: these metrics are about a month and a half old, but I think they mostly cover the Kilo development cycle. If you're not familiar, there's now a requirement for vendor-driven continuous integration, an automated testing harness that has to live in relation to the upstream. So this is just an example of the scale of the effort for our Cinder activity alone; none of the Manila activity in that timeframe is included.
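Coming back to that generic NFS backup driver for a moment: wiring it up is just a few lines of configuration. A minimal sketch, with an illustrative export host and path:

    # cinder.conf (Kilo):
    #   [DEFAULT]
    #   backup_driver = cinder.backup.drivers.nfs
    #   backup_share = backuphost:/export/cinder_backups
    #   backup_mount_options = vers=3

    # Any Cinder volume can then be backed up to that export:
    cinder backup-create my-volume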
Now, NetApp is not a full-stack provider, and we don't intend to be. So we partner pretty aggressively with best-in-class vendors. We're upstream, so we'll work with any of the distributions that are compatible with upstream OpenStack, but that's not to say we don't work more closely with a number of parties; certainly Red Hat and Mirantis, and SUSE and Rackspace, have been longstanding partnerships in the OpenStack ecosystem. We've built reference architectures for Red Hat and SUSE, we have a validated architecture that combines Cisco with Red Hat, which I'll talk about in just a second, and I want to point out that our generalized reference architecture is applicable to Ubuntu as well. We're also doing work with automation partners, presently Puppet Labs and Chef, and we'll possibly expand upon that going forward.

The thing I've been alluding to is FlexPod. FlexPod is fairly well known in traditional enterprise IT circles: it's a converged infrastructure architecture, a validated architecture for which we provide technology and architectural updates. In this particular case, it's a collaboration underway between Red Hat, NetApp, and Cisco. It's not yet shipping, so I don't want to get into the specifics of when, but it's sufficient to say that the validation exercise is underway and we expect some news soon. So, Brendan, please take us away.

Thanks, Rob. That was really useful. I'm going to wrap up the presentation here. I just want to reiterate that we feel very strongly that enterprise storage has a role in OpenStack, and that's why we've been investing so heavily: contributing to the community and working with partners to build those reference architectures. Looking forward, this is going to become more and more important to our business. Some takeaway points before I finish: we have a broad portfolio, which Rob went through, that is fully OpenStack-enabled; we're continuing our community commitment and our partner engagement in the ecosystem; and we'll keep adding features in future releases for future OpenStack versions and summits.

If anybody wants to learn more about what we do, or follow us, we're putting all our information on our GitHub site; that lets us be a lot more agile and responsive to the community. We have a community engagement forum there, and we have our deployment guide, which I encourage everybody to download and take a look at; we'll keep adding more material there as we move forward. You can follow us at OpenStack NetApp. I'd like anybody with questions to either come to us now or visit us on our website. One last thing I do want to say: I look forward to seeing everybody in Tokyo a few months from now. Thank you very much. Do we have any questions?

[Audience question about Ceph versus enterprise storage.]

So, I like the way you characterized it, because I think that's a good way of putting it: Ceph versus enterprise, meaning Ceph is not nearly enterprise-ready in many ways. Ceph is a more nascent capability. For those not familiar, it's a distributed storage capability, some of which actually exists in upstream Linux: I can take a pool of compute systems, knit them together into a collective, and expose a couple of different protocols. So it's popular for getting started in a lot of OpenStack clouds, because there's not a lot of friction in acquiring it and putting it into place.
I liken Ceph to the classic Gartner hype cycle of innovation. If you're not familiar, there's a peak of inflated expectations, followed by a trough of disillusionment, followed by a slow ramp toward productive use, and there are different people in this community at different places on that curve. In fact, I think we've heard stories from some who are on that ramp to productive use, and they've characterized where it does and doesn't fit. There are some real performance considerations, things it does not do well from a performance perspective. And there's a certain type of SLA, a resiliency requirement; resiliency is probably not the right way of putting it, but continuous availability and durability of data are architecturally provided for in Ceph yet have in reality proven problematic. For examples, I encourage you to Google the term "Cephpocalypse." For that matter, I can offer anecdotes of folks who've struggled with a corrupted CRUSH map that got replicated everywhere, which equates to: poof, my data's gone. So Ceph is real, and Ceph has relevance in a certain stratum. Within that catalog construct I talked about, you could reasonably associate certain volume types with it, but for anything you put a higher SLA on, I would advise a lot of caution.

Anybody else? All right, thank you very much, everybody, and I hope you stick around. We have 10 more sessions today, I think eight more in here plus a few more elsewhere. So I hope to see you around. Thank you very much. I appreciate it, thanks.