Good afternoon. My name is Rob Esker. I'm the Product Manager for OpenStack at NetApp. I've been working with OpenStack for about three and a half years, so I've done this a few different times, and I thought we'd just go ahead and launch into it. We've got a total of about 15 minutes, so I just want to give you an overview of NetApp's integration story with OpenStack and a couple of the things we have coming, and do a couple of short demos. So let's go ahead and get into it.

To lay a little groundwork on the way we think about something like OpenStack at NetApp: we believe we have a number of the technologies in place to create a common data fabric across endpoints, whether that be private cloud, public cloud in the hyperscale sense, hosted private cloud, or any number of other boutique service providers. Of course we're familiar with some of today's hyperscale providers, and there's the potential to host workloads that might have been intended to land at an Amazon, or perhaps in the future in Azure or Google Cloud Platform. Most of this is shimmed presently in the form of AWS and OpenStack, and any number of different OpenStack equivalents, some of which may over time in fact achieve some of the same scale that you see from a Google, an Azure, and perhaps over time even an Amazon. As I expect most folks here are already aware, there are some high-profile, large public clouds already based on OpenStack, or at least availing OpenStack APIs, and a number of other smaller locales as well. OpenStack becomes that sort of common plane, or at least runtime, for infrastructure as a service above that common data fabric that NetApp is executing upon.

Just briefly, on the prior slide: our Data ONTAP operating system is by most measures the single most prevalent commercial storage operating system in the world. NetApp is not the single largest provider of storage, but all of our platforms, with a couple of exceptions, are based on Data ONTAP, and as such it's a good place to start in building a common data fabric, to achieve the largest number of nodes possible. You're probably familiar with Metcalfe's law: a given network is only as useful as the number of nodes within it, so it's a good place to start.

A bit about open source at NetApp. The Data ONTAP I referred to derives originally from BSD, and we're also the primary employer of some of the Linux maintainers around NFS, so OpenStack was an organic thing for us to settle upon.

There are a number of different things that NetApp differentiates on within the market at large. I have no intention of going through all of these, but when we start in on something like a Cinder integration exercise, we want to make sure none of it is left behind. We're not a commodity storage device; there are a lot of differentiated capabilities. Something like Cinder is an abstraction: it allows you to write application logic that addresses a single API for all block storage. But precisely because it's an abstraction, we have to make sure that the things we do differently, whether it's various qualities of data protection, storage efficiency, uptime and availability, security, and so forth, are explicitly accessible through that abstraction. That's actually where we start when it comes to Cinder development. We've been at this, like I said, for a little while; I've been working with it for three and a half years.
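As a minimal illustration of that "single API" idea (a sketch with placeholder IDs, not output from the demo): the same client calls serve any backend, NetApp or otherwise, because the driver sits behind the abstraction.

```bash
# The same Cinder calls work regardless of which vendor's driver is
# behind them; the backend is chosen by the scheduler, not the caller.
cinder create --display-name demo-vol 10       # request a 10 GiB volume
cinder list                                    # no backend details leak through
nova volume-attach <instance-id> <volume-id>   # attach it to an instance
```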
Our first integrations debuted in the Essex release, and we've iterated upon them successively and expanded. In fact, here in Icehouse we've debuted support for an entirely new platform, namely our E-Series and EF-Series, the latter being an all-flash version of our E-Series systems. Clustered Data ONTAP is a programmable capability that's resident in many different modalities and contexts, whether on premises, in a hosted private cloud, sitting in front of foreign storage (meaning non-NetApp storage), or, increasingly in the future, as an endpoint at some of those hyperscale providers I mentioned earlier.

So let's actually get into a little bit about OpenStack, and since I'm already talking at length, let me speed along here.

Let's talk about Glance. The first thing is that when you back-end it on NetApp systems, you get to take advantage of deduplication. Since we're talking about OS bits, deduplication tends to be very aggressive; 90-plus percent is not uncommon. You can do that with either the object or the file backend, but from a simplicity perspective, file tends to be the path of least resistance.

When it comes to object storage, we have an interesting reference architecture on our E-Series platform, which possesses what I guess you could call a node-level erasure coding capability. It's actually an alternative implementation of the CRUSH algorithm, for those who might be familiar. That allows you to mitigate the effect of the long rebuild times classically associated with RAID. Why mention this? Well, Swift by default uses a consistent hashing ring, and that ring makes three copies by default within a single site, and of course more as you extend over multiple sites. You wouldn't want to reconstitute over the WAN, so you're talking about at least two copies at the other end of it. A traditional RAID or parity scheme has those long rebuild times; since we've mitigated that with the Dynamic Disk Pools technology I just talked about, we're able to dramatically reduce the actual consumption associated with storage of a single object. It goes from 3x within a site to 1.3x, and that single site also becomes immediately consistent. What you ultimately see is a pretty significant reduction in cost of operation: power, cooling, floor space, and the management associated with it.

On the topic of block storage, which is of course where we started: it goes without saying that this is a control-plane activity; it's not in the data path. One of the things we've worked on within the community, and in fact there's a session concurrent with this one on the topic, is the use of volume types within Cinder, which allow you to construct a catalog of capabilities. I alluded earlier to the fact that our systems do things differently from commodity storage, and commodity storage does things differently from us. So how do you get at those differences? Frankly, you establish what's referred to as a volume type; the name is arbitrary, and you compose it with what are referred to as volume type extra specs, which map to the unique capabilities that a given Cinder backend can deliver. From there, a requester of Cinder storage speaks to the API server, and the Cinder scheduler attempts to levy the request against the backend that's most appropriate given the characteristics of the type you requested.

So, a brief demonstration of establishing some... well, actually, we've already established the extra specs. This is an indication of some of them that have been established, as well as quality of service attributes.
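As a rough sketch of how such a catalog gets established from the CLI (the type names mirror the demo, but the extra spec keys are examples drawn from NetApp documentation of that era, and QoS keys are interpreted by the backend, so treat the specifics as illustrative):

```bash
# Create the catalog entries; the names themselves are arbitrary.
cinder type-create gold
cinder type-create silver
cinder type-create bronze

# Compose types with extra specs describing backend capabilities
# (illustrative NetApp driver keys; exact keys vary by driver/release).
cinder type-key gold set netapp_mirrored=true netapp_dedup=true
cinder type-key bronze set netapp_thin_provisioned=true

# Give gold a QoS ceiling so a tenant can't exceed what they bought.
cinder qos-create gold-qos consumer=back-end maxIOPS=5000
cinder qos-associate <gold-qos-id> <gold-type-id>

# The scheduler then matches a typed request to an appropriate backend.
cinder create --volume-type gold --display-name db-vol 100
```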
And I'm sorry, I moved a little too fast with the fingers there. What you'll see is that we've established a gold, silver, and bronze catalog, if you will, with different attributes. There's quite a lot of optionality there. In this particular case we've aligned, for example, bronze to a sort of lowest-common-denominator storage option; perhaps it's for ephemeral-type use cases where you really don't care about the qualities of the aligned storage. And of course with silver and gold we've aligned other attributes.

You probably saw that we assigned quality of service attributes, a ceiling associated with a given type, such that you prevent a given tenant from exceeding it. There are a lot of different reasons why you'd do so: prevent the noisy-neighbor syndrome, but also, frankly, if you are indeed a service provider, don't deny yourself the option of selling them something that's more aligned to what they're actually consuming.

This is just sort of a depiction of what that looks like with our E-Series driver. And then with our clustered Data ONTAP driver: in this case, silver was composed with a replication characteristic, so when the request occurred, it went ahead and provisioned into a container that has a replication policy. There are a number of different options depending upon which of our drivers you use; we avail both NFS and iSCSI.

We helped deliver NFS originally within the community; we wrote the reference driver, the generic NFS driver if you will. And you might ask, why NFS? After all, we are talking about a block storage service. It's simply vastly more scalable: you run out of initiators and LUNs well before you would run out of files in a given export. There are a variety of cloning advantages as well that we'll get into in a second. We also support parallel NFS (pNFS) for the first time in Icehouse, used by default where available, and of course NFS itself has been around for quite a while. On our E-Series systems it's iSCSI presently, and we'll look to expand upon that in the Juno timeframe.
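For context, a hedged sketch of what enabling a clustered Data ONTAP NFS backend looked like around the Icehouse era, using option names from NetApp's deployment guide of the time; the hostname, credentials, and Vserver here are placeholders:

```bash
# Illustrative only: append a NetApp NFS backend stanza to cinder.conf.
# (With multi-backend, [DEFAULT] also needs enabled_backends=netapp-nfs.)
cat >> /etc/cinder/cinder.conf <<'EOF'
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = placeholder-password
netapp_vserver = openstack-svm
nfs_shares_config = /etc/cinder/nfs_shares
EOF

# /etc/cinder/nfs_shares then lists the exports, one per line, e.g.:
#   192.168.1.10:/cinder_vol_1
```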
So, as it applies to creating new instances in OpenStack Compute: most of you may be familiar with a day in the life of a given virtual machine. Compute interrogates the fleet of hypervisors and determines the most appropriate locale given the flavor and the image selected, and once that's done, if the image is not already there, it copies it over; it actually has to curl it over HTTP and copy it to the target location, which can be quite expensive. Now, it is the case that a subsequent request has the advantage of that one copy cached, but it's only local to that specific hypervisor. If you have, like I said, a fleet of them, that copy operation occurs on the next one over, and if you have lots and lots of images, this can be particularly expensive.

What we've delivered is a capability where we can collocate Glance and our Cinder backing store on the same system. If it so happens that they are not on the same system, perhaps Swift is elsewhere and is the Glance backend while Cinder is on our systems, we'll make that one copy, but that copy actually ends up being global to the entire fleet of hypervisors. I should also point out that it's a boot-from-volume activity we're looking at, which means your bootable volume is in fact not ephemeral by default; it's persistent by default. If you have use cases that want an ephemeral instance, then select "delete on terminate" and you get the same effect (there's a short sketch of this below). And I'd argue that it's far more effective to go from persistent to ephemeral than it is to try to go from ephemeral to persistent when your needs for a given instance have changed along the path.

It's also significantly faster. We clone aggressively, by virtue of NetApp's cloning technology: no additional space is consumed until there's a net-new write or overwrite. The effect is that, at least on the storage component of the boot process, creating new instances is essentially instantaneous. That's not to say the rest of the boot process doesn't proceed and take time, but the storage component of it is dramatically sped up.

When it comes to the array of storage services within OpenStack, we think there's a critical omission. In 2012, I believe it was IDC that estimated something like 65% of all storage sold in the market at large was for the deployment of shared file systems. So if you think of OpenStack as the de facto open infrastructure-as-a-service capability, there's a critical gap when it comes to infrastructure as a service, shared file systems as a service specifically. So we've endeavored to create a new service, and to build community around it, called Manila, which does for shared and distributed file systems what Cinder does for block storage. If you know what Cinder does, then you know what Manila does. There are additional considerations for it, and a little bit of additional complexity, but conceptually you're there. This is just a depiction of the addition, and a quick demo to show you that this is in fact real.
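Here's the boot-from-volume sketch I mentioned, hedged against the nova CLI of that era (the image ID and sizes are placeholders): persistence is the default, and ephemerality is an explicit opt-in.

```bash
# Boot from a volume created from an image; the root disk is a Cinder
# volume and survives termination (shutdown=preserve).
nova boot persistent-vm --flavor m1.small \
  --block-device source=image,id=<image-id>,dest=volume,size=20,bootindex=0,shutdown=preserve

# Want ephemeral semantics instead? Opt in to deletion on terminate.
nova boot ephemeral-vm --flavor m1.small \
  --block-device source=image,id=<image-id>,dest=volume,size=20,bootindex=0,shutdown=remove
```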
On Wednesday, I believe at 10 a.m., we're going to have an hour-long session that is in fact a community activity: we have a number of other vendors on stage with us demonstrating their capabilities around Manila. So while NetApp conceived of, prototyped, developed, and contributed the capability, we are, to be clear, working with the broader community, and we encourage folks to join. You'll see some of those other folks on stage with us on Wednesday.

Again, not to go into the depths of Manila, since we don't have the time here, but it does exist; those are the Horizon interfaces, and that's not smoke and mirrors. It is presently on the path to incubation; we go in front of the Technical Committee, I don't have the exact date, but sometime after the summit, to get the official nod on it. You can play with it, you can deploy it today; it's on StackForge, and I'll show a brief client sketch below. Like I said, on Wednesday there's a pretty in-depth session.

So that's kind of a summary of what we've done. I mentioned that in Icehouse we debuted a couple of capabilities: support for our E-Series and EF-Series (an all-flash array version thereof) systems in the way of Cinder; that reference architecture I talked about, where Swift is in fact predicated on E-Series underneath; and we default to parallel NFS where available, meaning the host OS on the compute node, the location of the hypervisor, must of course support it, and we'll negotiate down to whatever version is in fact available. I talked about the rapid, efficient instance creation; we did an additional optimization there, a copy offload capability, where if Glance and Cinder are in fact not on the same NetApp system, we can do an out-of-band copy between them directly instead of having to wash it through the host that's running the OpenStack services. And then we have a variety of reference architectures, and we endeavor to deliver Puppet, and in the future Chef, automation with each of them, so it's not just a simple exercise in reading it; it's an exercise in making it so. Juno is a whole topic unto itself, but more to come.

I've got a limited amount of time, so I just wanted to get into a little bit of the reasons why we see folks deploying NetApp underneath OpenStack. There are a variety of different ones. One is: I've got a variety of cloud-native applications and a variety of classic POSIX applications, and I want to deploy them on a single, highly available, highly reliable infrastructure. Even if I'm only building an entirely cloud-native, ephemeral application, there are reasons why you might do so: storage efficiency, and the total cost of operation from an environmental sustainability perspective. More often than not, customers have a collection of both, and having one stovepipe infrastructure for each style of application tends to be rather expensive. We also see a lot of folks moving towards this hybrid cloud model: they want to repatriate from an AWS when their workloads become more steady-state and it's frankly too costly to keep them there, or maybe to deploy first on-premises and then be able to burst out. OpenStack's qualities of essential API compatibility with its equivalent AWS services are attractive in that sense. I'm running out of time, but there are a variety of other reasons. We have a number of reference architectures at netapp.com/openstack, with more to come. We'll see you in France, hopefully.
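Since Manila mirrors Cinder conceptually, here is the hedged client sketch I promised, roughly as the StackForge tooling of that period worked (the protocol argument and access commands may differ slightly by release):

```bash
# Create a 1 GiB NFS share: Manila's analogue of "cinder create".
manila create NFS 1 --name demo_share

# List shares and their export locations.
manila list

# Grant a client network access to the share.
manila access-allow <share-id> ip 10.0.0.0/24
```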
Follow us at @OpenStackNetApp, and like I said, netapp.com/openstack sends you to our deployment and operations guide, our community, and some of the reference architectures I discussed. There are some other sessions in these ensuing days that are of relevance: three of them on Monday, I'm sorry, on Wednesday, and then a few others at other times in the week, including NetApp featuring in the Nebula keynote, and likewise the TripleO session, where we address NetApp systems via Ironic. Thanks very much; I ran over just a little bit, so I greatly appreciate it. Have a good day.