Good morning. I'm Robert Esker with NetApp. I wanted to talk to you a little bit about NetApp's integration with OpenStack, in particular in the Havana release, what we've been up to, and give you a little sneak peek into capabilities we're introducing for Icehouse and beyond. By the way, it's loud here. Perhaps you can hear me; I certainly can't. So if you have any questions, let's hold them until after, and I'll be happy to answer them.

Just a little bit about NetApp and open source. We have a long history of working with, around, and in the open source communities. Our core operating system, Data ONTAP, is itself BSD-derived, and we employ some of the primary maintainers of NFS in Linux, for example. So something like OpenStack was very organic for us, and it allowed us to join as the first major storage vendor, as early as we did. We've been involved for over two and a half years now; this is the sixth design summit we've been part of and sponsored. Our first integration started showing up in the Essex release, and I'll talk to you a little bit about Manila here in just a bit.

In a nutshell, and this is certainly highly distilled and perhaps oversimplified, our core competencies, the things that we do best, are shared and dedicated storage solutions, delivered to you in a flash-accelerated, cloud-integrated fashion. Obviously, in this context, we're talking about OpenStack. Our clustered Data ONTAP operating system, which I would expect most of you probably aren't familiar with, is a fundamentally horizontal scale-out capability: clustered as in a horizontal scale-out cluster, scaling out for performance, for availability, and so on. That lends itself very much to the OpenStack design center, so it's where most of our emphasis is placed.

You've probably seen a variation of this; it's certainly not a NetApp-specific way of looking at the world. But increasingly, future IT looks like a broker of services, whether that's something you build on your own premises with OpenStack, or perhaps some form of interaction with a hyperscale provider or another kind of service provider. So how do we enable that? The notion of hybrid cloud, I think, is still very much unrealized. How do I actually connect the endpoints? How do I actually move data between them? And is there a common runtime amongst them? Of course, OpenStack significantly eases that, to the extent that it's adopted and that you could potentially have an OpenStack API available from those providers and certainly run it yourself.

A very common and growing use case that we see at NetApp is the use of OpenStack as a way to repatriate from Amazon Web Services. Or perhaps it's not to bring it all back, but to establish a more rational relationship, so that on an ongoing basis you can burst into it. What does OpenStack do? It provides you, among other things, API compatibility: the ability to take code that was originally written against AWS and have the runtime on your own premises. That still leaves the remaining problem that data is most unlike a power utility in the cloud; it's more like the proverbial water utility. This is something I'm borrowing, and I can't recall who said it first or how to attribute it, but the basics of it are that data has the qualities of mass, of gravity. It's hard to move from one location to another. It takes time.
The width of the pipes matters, and format is a consideration as well. So how do we actually address that? Basically, look for us to start unveiling capabilities that connect these endpoints, that allow you to establish a common data fabric amongst them and ease the movement between those endpoints in an opaque, thin way. Again, OpenStack is critical in providing that runtime compatibility atop all of this underlying data fabric.

Here's a very intentionally simplified view of OpenStack services and where NetApp's product portfolio maps to them. We see folks deploying the OpenStack Image Service, aka Glance, to great effect using our deduplication technologies. That required zero lines of code to enable, but I'll talk in just a minute about some enhancements we've made. We've placed most of our development emphasis on OpenStack block storage. On the object side, I'll talk a little bit about deploying Swift on a product we call E-Series in a kind of unique way that helps save you some money when scaling it at the extreme. And there's something missing when you consider OpenStack as the leading open infrastructure-as-a-service capability; that's the mystery guest, and I'll talk about it in just a second.

NetApp does lots of interesting things well. My point isn't to go into all of those, but that's where we start: make sure that the capabilities that power your business or power your cloud, that power what you actually need to deliver for your tenants and to meet your SLAs, aren't hidden by the abstractions of OpenStack. You want the abstraction to an extent, in the sense that it keeps things open and allows you to move from one vendor or one implementation to the next, but at the same time you don't want to dilute the value of those capabilities behind the abstraction. So that's where we start: make sure you can get NetApp's unique capabilities through something like the OpenStack block storage service, aka Cinder.

So just briefly, let me show you a little bit about how we attempt to do that. This is, of course, Horizon, and I'm going to show you how you can establish volume types. Here we've created a Hong Kong volume type and then a Macau volume type. It's entirely arbitrary; whatever you want to call it, red, blue, green, dogs, cats, birds, whatever makes sense to your tenant base. Once you've established those, you're able to take unique characteristics of a backend, that is, a Cinder driver instance. Some of those are listed here: thin provisioning, deduplication, compression. QoS isn't listed there; that's another capability. And you're able to compose those into these types.

In the earlier example, I showed you a volume type of Hong Kong and a volume type of Macau. Now I'm going to actually attach... so you'll see that, oops, sorry. You'll see the volume types we created, and then you'll see volume type extra specs. And of course, since we haven't actually established or associated any with the volume types, we'll need to go ahead and create them. In Havana, we have basically taken most of the gamut of NetApp's unique capabilities, whether they be efficiency, availability, or performance assurance, that is, quality of service, and expressed them in the form of volume type extra specs. You can take those capabilities, and here we're going to take mirrored, we're going to take compression, and I can't remember, I think I used dedupe as the last one.
And we're going to associate them with the volume type. You'll see that here just in a second; sorry for my slow typing. And now you see that, in fact, those are associated with the volume type.

Here's maybe a slightly clearer depiction of how this all works. So, hey, I need to fire up eight CentOS instances. Perhaps I'll select an image that has a LAMP stack on it, and I'm going to select volume type silver. In that prior example, not the demo one but the prior slide, we had defined silver as having a replication policy. So what basically happens when you select it is that the volume that's created is automatically replicated; it automatically becomes a mirror to a remote destination.

I also want to talk to you a little bit about the creation of instances, in particular where you might want to boot from volume. One of the reasons you might want to do that is if you actually want a persistence model by default. If you're not familiar with OpenStack, by default an instance is ephemeral; there's no assurance of data being retained over a boot lifecycle. That's certainly appropriate for a certain style of application, but if you're moving any sort of classic infrastructure into an as-a-service kind of model on OpenStack, then you will almost certainly want to provide a persistence model for that application stack.

So let's boot from volume, but do it in a more efficient way. Here we're actually booting from an image, and I want to show you an optimization we put into Havana where we're able to take some of NetApp's unique block-sharing technology, our cloning capability, to speed the creation of new instances. By default, an image is copied out to the compute nodes, and if it so happens you want to light up another instance based on that same image that's been copied out, it'll take that cache and use it. But if Nova says it wants to put the instance on another compute node, then that copy operation has to occur again. Instead, we're going to use boot from volume, and where the Glance image repository is located on NetApp, rather than copying the image out the first time, we'll clone it, which is instantaneous. It doesn't consume any additional space until there's a new write or an overwrite. That's the first instance, and thereafter we do the same thing: we'll take that first cloned version and create a hierarchy of clones. The net effect is that you're able to create instances significantly faster than otherwise, and there isn't the associated storage burn. It also enables models, when you're using boot from volume, where you might be able to boot stateless; you don't necessarily have to have a local disk if you don't care to.

So there's a variety of options in Havana for deploying NetApp with OpenStack block storage. The first question you ask is whether you want our classic 7-Mode or the next-generation horizontal scale-out capability I just described, clustered Data ONTAP. The next is whether you want to inform the provisioning decision through some middleware capabilities, NetApp products that allow you to do some higher-order things beyond what we've described here, or to have a more direct model of interaction, which tends to align better with hyperscale requirements and is also simpler to deploy. And the last is iSCSI or NFS.
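For reference, here's a minimal sketch of the same volume type and boot-from-volume workflow using the command-line clients rather than Horizon. The type names, UUID placeholders, flavor, and instance name are purely illustrative, and the NetApp-specific extra spec keys shown follow the Havana-era driver documentation, so check them against your release.

    # Create two volume types (names are arbitrary, e.g. the demo's Hong Kong and Macau)
    cinder type-create hongkong
    cinder type-create macau

    # Associate backend capabilities with a type as extra specs
    # (netapp_mirrored, netapp_dedup, netapp_compression are illustrative keys
    #  from the Havana NetApp driver docs; verify against your version)
    cinder type-key hongkong set netapp_mirrored="true" netapp_dedup="true" netapp_compression="true"
    cinder extra-specs-list

    # Create a bootable volume of that type from a Glance image,
    # then boot an instance from it instead of from an ephemeral disk
    cinder create --image-id <glance-image-uuid> --volume-type hongkong --display-name web01-boot 20
    nova boot --flavor m1.small --block-device-mapping vda=<volume-uuid>:::0 web01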
One of the things that we contributed back in Folsom (I believe, yeah, Folsom is when we debuted it) was an NFS reference driver and then a corresponding NetApp-specific NFS driver. You might think, well, a block storage service, how could NFS have anything to do with that? It's quite simple: the majority use case for consumption of a standard volume is an instance; it's persistent, additional capacity for an instance. We basically rely on the mediation of the hypervisor, of a component called libvirt, to mount the NFS export on the compute node and treat files as virtual block devices. And we do that for reasons of scalability: you can have vastly more files in an export than you can have iSCSI LUNs per storage system. That said, we don't have any preference here. There are obviously uses of Cinder where you in fact want a real LUN back, perhaps bare-metal, non-virtualized types of uses.

Here's a summary of the things I just talked about. We also collaborated within the community, with IBM specifically, to enable a migration capability to go from one backend to the next.

And the thing that I was referring to as missing is specifically support for shared file systems. In going down this path, we've come to understand why it takes some additional hard work. If you accept, and I don't mean for this to be controversial, that OpenStack is very much built in the image of Amazon Web Services, well, AWS doesn't offer a shared file system as a service or a distributed file system as a service, so thus OpenStack doesn't either. However, we think OpenStack has the ability to grow beyond being that image. I'm not going to get into the API debate that occurred in the community over the last couple of months, but we very much believe that the native APIs of OpenStack are valuable unto themselves, and so, in fact, is the shim support for the large hyperscale providers.

One of the things we're adding, though, is support for shared file systems, via a new project called Manila. In a nutshell: hey, I've got instances X, Y, and Z, and I want to provide them access to an existing CIFS share to do gainful work, or I want to create a net-new NFS export. Manila is basically the orchestration of that activity. We built Manila to be sufficiently abstract to support any shared or distributed file system type, so we've got folks like IBM looking at this from a GPFS perspective, Red Hat from a Gluster perspective, and we're actively trying to build community around it. We've just recently submitted the formal application for incubation. Unfortunately, we were a little late in getting that decided ahead of this design summit, but in mid-November the technical committee will render a decision. We're already fully integrated in Stackforge, so we welcome anyone who is interested to join the community and look at implementing your shared file system of choice.

I do want to briefly mention that NetApp has a leading converged infrastructure solution called FlexPod; we work with Cisco on their UCS and Nexus lines. The very first OpenStack version of that is what we're unveiling here this week, in the form of a Red Hat Enterprise Linux OpenStack Platform (RHEL OSP) based version. There's a reference guide, a reference architecture, that's been published on netapp.com/openstack. That's in preview form, but the full release will appear in the coming months.
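To make the Manila workflow described above a bit more concrete, here's a hedged sketch using the Manila command-line client. Because the project was still applying for incubation at the time of this talk, the command names and arguments shown follow later releases of the client, and the share name, UUID, and subnet are illustrative.

    # Create a 1 GB NFS share (protocol and size are required; the name is arbitrary)
    manila create NFS 1 --name shared-data

    # Grant instances X, Y, and Z access to the share, here by IP subnet
    manila access-allow <share-uuid> ip 10.0.0.0/24

    # List shares to find the export location, then mount it from inside an instance
    manila list
    #   mount -t nfs <export-location> /mnt/shared-data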
I also want to mention that we have an E-Series product, which is a very dedicated, high-throughput storage device. We already have a prototype Cinder driver for it, and we'll have that upstream in Icehouse, so look for that as well.

And I did want to briefly talk about Swift. Let's see how much time... I've got one minute, so I'm going to speak very quickly. Swift uses a consistent hashing ring for data placement, which makes perfect sense given its initial design goals. We happen to think that one of the qualities of our E-Series product is that it allows you to deploy Swift in a much more efficient manner, reducing the total amount of disk and equipment that needs to be deployed. Classically, Swift has not been aligned with a parity protection scheme because, frankly, you're exposed to very long rebuild times. We've taken the same CRUSH algorithm that Ceph uses, the same academic work, and implemented another version of it in our E-Series line, within the array, in a way that mitigates the effect of those long rebuild times so that you can use a parity scheme again. Basically, it reduces the replication count significantly, it reduces the pressure of the ongoing replication, and it removes an inhibitor to the ultimate scale you can achieve with Swift. There's also a reference architecture for that, which will be published on netapp.com/openstack in the next week.

I really appreciate everyone's time here. Come see us at the Manila unconference session. And likewise, if you're attending the design summit, we're trying to work within the Cinder community to establish a common notion of volume type extra specs, whether they come from the deployer or from the request. If you have any opinions on that matter, I'd love to see you there. Thanks very much, appreciate it.