All right, welcome everyone. Happy Wednesday. I'll say hey, y'all, since we're in Texas. The name of this session is Jumpstart Your Production OpenStack Deployment with FlexPod.

But first, a little bit about me. My name is David Cain. I'm a reference architect and technical marketing engineer at NetApp. I've been there about a year and 11 months now. Prior to that, I spent about 10 years in a data center environment touching all things storage, networking, and compute for a very large IT organization. I have a BS in computer science from NC State University. And if you'd like to follow me on Twitter, my handle is up there.

For this presentation today, I have just four agenda points I want to take you through. First, some enterprise challenges we hear at NetApp about OpenStack adoption. Then a little bit about why NetApp, and specifically NetApp and FlexPod, for OpenStack. I've got a little announcement that I want to take everybody through. And then we'll talk about some infrastructure and integration proof points that we have. One of my favorite parts of my job is not only giving presentations in front of customers like you, but also doing real work in the lab, because I don't like to get too far away from the technology. Talking is good, but doing is better. We'll get to that toward the end of the presentation.

So, a couple of key challenges I hear as a TME when I talk to customers, whether through VIP meetings or conversations we have ourselves. Operationally, OpenStack is an extremely complex piece of software to deploy. It has a six-month release cadence that's too hard to keep up with. There are too many knobs, too many buttons to push. "I don't even know how to get started." "I may have had an experience deploying OpenStack, and it was quite challenging." There are risks in implementing OpenStack for me and my organization: How do I operate it? How do I support it? How do I scale the infrastructure? Because the infrastructure has to be there to support the cloud. And how do I rapidly provision instances for the eventual consumption of my customers?

Also, from a design perspective: How do I get efficient and scalable cloud resource utilization? How can I ensure I have scalable infrastructure, as I alluded to earlier, that can support my ambitions in the cloud? Do I have a compute platform that I know and can rely on? A storage platform that I know and can rely on? Can it scale beyond my initial deployment? Because of the age-old adage: if you build it, they will come. You'd better be ready for when that happens. And high availability: for the enterprises I speak to, that's an extremely important tenet. How can I ensure full HA of all the components, both the infrastructure and the OpenStack components?

So why converged infrastructure? For those who may not know, "appliance" is one of the synonymous terms I've heard applied to converged infrastructure in the OpenStack community. Basically, it's compute, storage, and networking resources bundled together in a pre-validated, pre-tested platform. This reduces risk. Going back to the design and operations concerns from earlier: this is a proven platform, something you know you can rely on. It also reduces total cost of ownership.
So whenever you buy one of these and roll it into your data center, it's something you can rely on and scale with. It has independent scale points; contrast that with a hyper-converged infrastructure. You can scale to just what you need: scale the storage if you need it, scale the compute if you need it. And it increases speed and ease of deployment.

A little bit about FlexPod. FlexPod is a converged infrastructure platform, a partnership between Cisco Systems and NetApp that recently celebrated five years. In that time, we've had about $5.6 billion in revenue as a result of that joint technical engineering effort, and we've produced about 100 validated designs across many different platforms: enterprise applications, certainly OpenStack, VMware, Hyper-V, various platforms. So it's an ideal platform for virtualization and cloud infrastructures. What does it comprise? The Cisco Nexus family of switches — the 9K and 5K variants. From a storage standpoint, NetApp FAS and/or E-Series storage at the storage layer. And at the compute layer, the Cisco Unified Computing System, B- or C-Series servers.

So why run OpenStack on FlexPod? One thing I've seen a lot at the summit here is both developers and operators. Purely as a developer, why do you care about infrastructure? Why is that important to you? Well, with OpenStack on FlexPod, you get a private cloud infrastructure-as-a-service much faster. You don't have to worry about setting up any of the infrastructure bits, because those are taken care of for you. When you write your applications and workloads — whether traditional applications on OpenStack or these new cloud-native applications — block, file, and object storage are ready to be utilized in the resulting deployment right away, out of the box. You can concentrate on developing your applications.

Specifically as an operator, this is a complete data center in a single rack: all of the components necessary for you to deploy OpenStack, or any other reference workload, on. All the robustness of FlexPod comes from having highly available components, with everything deployed in pairs at the networking layer, the storage layer, and the compute layer. Everything comprising the infrastructure is HA on day one, including the OpenStack components. As I'll show when I talk about the distribution components in a couple of subsequent slides, those as well are fully HA in the resulting architecture.

And you're hybrid-cloud ready on day one with FlexPod. We have a vision at NetApp called the data fabric. What this really means is you taking control of your data: being able to burst to a public cloud, to burst to different environments, however you choose to do so, through our SnapMirror protocol. You can run a version of our operating system called Cloud ONTAP in Amazon, Azure, or IBM SoftLayer today, so you can seamlessly connect to these clouds using FlexPod as a basis.

Now, why NetApp and FlexPod for OpenStack? At NetApp, we're a charter Gold member of the OpenStack Foundation. In fact, we were a sponsor of the original Diablo summit. If you look at the timeline down at the bottom here, we have elected representation on the OpenStack Board of Directors.
If you look at the OpenStack user survey results, consistently NetApp and SolidFire — our acquisition from February of this year — are the number one commercial storage offering for production deployments, purely by user survey results. We're a huge deployer of OpenStack internally as well. In fact, I just gave a presentation with my colleague Mansi Prabhakar about our internal OpenStack deployment: we have about a 70,000-VM capacity today in that internal cloud, and FlexPod serves as the basis of that deployment inside NetApp. Community and project leadership: we ship all of our drivers upstream, straight to the open source community, so they can be consumed with any distribution you choose, or if you choose to roll your own. There's nothing to download; it's all there upstream in open source.

Comparing and contrasting those do-it-yourself deployments with running OpenStack on FlexPod, I think this chart illustrates some of the facets. Rolling your own, you don't really get a product roadmap with upgrades taken care of for you. By choosing a converged infrastructure like FlexPod, with the integrated relationship we have between NetApp, Cisco, and Red Hat, we consistently refresh our solutions to accommodate the ever-changing ecosystem and six-month release cadence of OpenStack. So instead of the six-month lifecycle of an upstream release, you can use a distribution like Red Hat OpenStack Platform, which has a three-year lifecycle. Download it, install it on a FlexPod today, and you've got that ease of use, knowing it will be supported for three years if you need to reach out to Red Hat. With OpenStack on FlexPod, you have a more accelerated production timeline: the reference architectures and design and deployment guides I mentioned let you quickly and effectively set up the infrastructure using best-of-breed components, along with architectural guidance and advice. And it's an integrated infrastructure platform: storage, compute, and networking all bundled together and supported. It's a validated design that lowers your risk.

Let me take you through some of the components represented in the tight integration between FlexPod and Red Hat OpenStack Platform. From a Nova perspective, we can rapidly clone instances with FAS. This is a differentiated offering when you deploy OpenStack on FlexPod: we have a lot of storage efficiency features that make cloning of instances very fast in the resulting cloud. More on that in some of the proof points in a few minutes. From a Glance perspective, your image repository contains a collection of operating system images you want to deploy into the resulting instances. We can turn on our storage efficiency features on the NetApp FAS platform. I'm reminded of the presentation I just gave: internally at NetApp, we have about 65 terabytes' worth of storage being used for OpenStack, and when we turn on that space efficiency, we're only using about five terabytes. That's roughly a 92% reduction in the physical space consumed, just by employing that technology.
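Just to make that last point concrete, here's a rough sketch of turning those efficiency features on from the ONTAP command line. The cluster address, SVM, and volume names here are made up for illustration; your deployment will differ.

```
# Enable deduplication on the FlexVol backing the Glance image store
# (cluster address, SVM, and volume names are illustrative)
ssh admin@cluster-mgmt "volume efficiency on -vserver openstack-svm -volume glance_vol"

# Run an initial efficiency pass over data already in the volume
ssh admin@cluster-mgmt "volume efficiency start -vserver openstack-svm -volume glance_vol -scan-old-data true"

# Check the resulting space savings
ssh admin@cluster-mgmt "volume efficiency show -vserver openstack-svm -volume glance_vol"
```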
From an object storage standpoint — I mentioned that as a developer benefit — our solution features the NetApp E-Series platform storing the Swift object store: both the objects themselves and the associated metadata live on E-Series. As disk sizes increase to six and eight terabytes, traditional Swift on DAS takes quite a long time to rebuild a failed disk. Well, the rocket scientists over in NetApp E-Series engineering came up with an evolved RAID algorithm called Dynamic Disk Pools technology, and that actually speeds up rebuild times by a factor of eight. So it's much faster at rebuilding those failed drives for the Swift object store implementation.

Manila, our shared-file-systems-as-a-service project, is featured and highlighted in Red Hat OpenStack Platform. That's where a file share hosted on a NetApp FAS device can be spun up for you automatically. For those who may have run departmental file shares in a previous life as an IT administrator, this provides the ability to do that in the resulting OpenStack cloud. Your users don't have to file a ticket with you saying "I need a file share"; as a service, that project takes care of it. That's integrated in Red Hat OpenStack Platform.

From a Neutron standpoint, we feature Cisco's Nexus 1000V virtual switch, specifically running on KVM. That addresses a common pain point I hear from customers: Neutron is complicated, I don't know how to administer it, and I still have a network team that needs visibility and control over the resulting networking in the cloud. We can deploy the NX-OS operating system inside the resulting OpenStack deployment, redundantly, through VMs that run NX-OS. So it's a common look and feel for those network administrators familiar with that technology, letting them automate and orchestrate the networking and get a viewpoint into the networking in the cloud. From a Cinder perspective, we highlight our block storage service integration — that's basically storage as a service — and I'll come back to it.

Now, I mentioned cooperative support. This is an overlooked facet of owning a converged infrastructure platform that I wanted to stress. Some of you may have had a support experience with a vendor where it turns out the issue is different from the one you originally called about, and you get the phone-tag situation: you're told, "Oh, that's not our problem. Call the other vendor." Well, that's not so with FlexPod. We have support built into the offering: if you're most comfortable calling NetApp about any problem you may have, and it turns out to be a problem with the OpenStack services, or with the networking, or with some detail of your implementation, it's a seamless handoff. We have a team of experts across the three companies who coordinate our support, so you can have that seamless experience and not be bounced back and forth. We have a lab where we can recreate the problem you may have, collaboratively among the three companies, and get to a solution faster.

So, integration matters. I've been talking a lot about OpenStack on FlexPod. I'm happy to announce we've just published a new NetApp technical report: Red Hat OpenStack 8 on FlexPod. For those who may not know the Red Hat OpenStack numbering, 8 is based on the Liberty release and came out a few weeks ago.
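By the way, coming back to Manila for a moment: once it's wired up, the tenant experience really is just a couple of commands. Here's a minimal sketch; the share type, back-end name, and subnet are made up for illustration.

```
# Create a share type that maps to the NetApp back end
# ("netapp-fas" is an illustrative back-end name from manila.conf)
manila type-create general_purpose false
manila type-key general_purpose set share_backend_name=netapp-fas

# Carve out a 10 GB NFS departmental share -- no ticket to IT required
manila create NFS 10 --name dept-share --share-type general_purpose

# Allow a subnet to mount it, then look up the export location
manila access-allow dept-share ip 192.168.100.0/24
manila show dept-share
```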
So that technical report is a comprehensive, validated deployment guide you can use to implement Red Hat OpenStack Platform 8 on a FlexPod in your lab, and there's a link to it. Along with the paper — which contains exactly what you need to set up all of the infrastructure bits and deploy Red Hat OpenStack Platform through its director — we've also contributed code upstream to NetApp's GitHub site (I'll put a link to that in a later slide), where you can get Cinder, Swift, and Manila completely automated and installed for you. You can pull down the templates we've put up on GitHub — specifically Heat templates. The director, being TripleO, takes Heat templates as input, consumes them, and feeds them to Puppet, which does all of the customization in the overcloud. More on that in a second. The point being: there's no manual configuration required to get NetApp E-Series up and running for the OpenStack installer to take advantage of, once the initial infrastructure is up.

Also, something I thought was worth mentioning here: I started playing around with OpenStack around the Icehouse release, and doing the common operations through the Horizon dashboard was tricky. It was a little challenging. In this technical report, we've actually been able to demonstrate common operations entirely through the Horizon dashboard. Now, if you have a large cloud environment, you're normally going to be doing everything with the REST APIs anyway, or automating it, and less so through Horizon. But if you're new to OpenStack, or you're just not sure how to write those commands or do some of those atomic actions, everything is demonstrated via Horizon. So kudos to the Horizon developers: in Liberty, I could do everything. I could spin up a project, create a user, create a volume, create a share, even upload objects to the object store, create networks, routers, all that stuff — and all of that is demonstrated in the deployment guide.

Afterwards, we had a resulting cloud — a fully HA-capable deployment — and we decided to test it a little. We used OpenStack Rally, a benchmark-as-a-service project written for OpenStack, and did some comparisons, which I'll share with you in a few minutes.

Specifically from a deployment perspective, let me quickly take you through what the paper shows. It shows you exactly how to configure all of the physical infrastructure bits — the Cisco UCS servers, the Cisco Nexus networking switches, the NetApp FAS storage, the NetApp E-Series storage — all ready to go. Then it takes you through deploying Red Hat OpenStack Platform director, which is based on the upstream TripleO community project. Red Hat used Foreman in OSP 6 and switched to the director in 7 and 8. The director could also be called the undercloud node. TripleO is kind of an interesting name: it's "OOO," which stands for OpenStack On OpenStack. It's using OpenStack to actually deploy OpenStack — we have such a great ecosystem, why not use it to deploy the environment itself? So the director is a DHCP server, a TFTP server, a lifecycle-management tool. It runs Heat, specifically to build out the overcloud through Puppet. And the overcloud is, for lack of a better term, the resulting OpenStack deployment.
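To give you a feel for it, here's roughly what feeding those templates to the director looks like from the undercloud. The repository URL and environment file names here are illustrative, not exact; the technical report has the precise steps.

```
# On the director (undercloud) node, as the stack user
source ~/stackrc

# Pull down the NetApp-contributed Heat templates and tweak the
# environment files for your site (repo URL is illustrative)
git clone https://github.com/NetApp/openstack-flexpod.git
# ... edit the environment YAML to reflect your subnets, VLANs, etc. ...

# Feed everything to the director; Heat drives Puppet to build the overcloud
openstack overcloud deploy --templates \
  -e network-environment.yaml \
  -e cinder-netapp-config.yaml \
  -e swift-eseries-config.yaml \
  --control-scale 3 --compute-scale 4
```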
In our design, that overcloud contains a controller node, which runs all of the inherent OpenStack services — the Cinder scheduler and API, the Manila share service, basically everything except running the VMs; that's the compute host profile. The platform director takes care of everything involved in configuring the systems that make up the OpenStack deployment, automated, and all in a highly available manner.

So after you deploy the director, you download those Heat templates I mentioned to a local directory — again, all of it is demonstrated in the technical report — and you can customize them to reflect your own network infrastructure, whatever subnets you use, or anything else. Very little customization is needed there. Hopefully, to help accelerate your journey in deploying OpenStack on a FlexPod, we've tried to limit the number of manual configuration entries, so the automation is there to help you. Then you deploy the overcloud: you take that automation, feed it to the director, and it boots up all of the servers. They boot from the network specifically, storing all of their operating system and configuration information on the SAN — boot from SAN, stateless computing, a tenet of FlexPod. And I'm happy to report that in our lab, a three-controller, four-compute-host deployment took us 35 minutes with this tooling. Extremely fast, ready to go, ready for work.

After that, we launched the post-deployment scripts for the Manila project, so you can get Manila in the resulting cloud deployment. Unfortunately, we couldn't yet take advantage of upstream TripleO Heat template community integration for Manila. At NetApp, we're really very passionate about the Manila project, and we want customers to have the file-share-as-a-service project inside OpenStack, so we've contributed shell scripts in that same GitHub repository that help you deploy Manila in an automated fashion, non-disruptively integrated into the Pacemaker cluster — the software HA bits that comprise the OpenStack deployment — automatically for you. And as I just mentioned: Cinder using NFS backed by our NetApp FAS platform, Swift using iSCSI backed by NetApp E-Series, and Manila are all configured automatically for you. You don't have to do anything in the resulting deployment.

Now, along with the paper, I want to bring up a couple of things I mentioned at one of my last sessions. One of the things that's differentiating about NetApp storage in a resulting OpenStack deployment is something we call our storage service catalog. What that really means is that you can define volume types or share types — Cinder has the volume type capability, and Manila has the share type capability. You can define classes of service based on the underlying storage features you want to surface up to the tenants that utilize services in OpenStack. Really, that's storage as a service. By default, the Cinder scheduler will round-robin through a selection algorithm — basically, whichever back end has the most space. Well, that's not really helpful if you want to surface enterprise-class features to the tenants and make sure their Cinder volumes or Manila shares land on specific volumes that have solid-state disks or space efficiency features enabled. So what does that look like? With this capability, we can actually define where those volumes will end up.
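Concretely, it's a couple of commands to define a class of service and consume it — and the boot at the end is where the enhanced instance creation I'll talk about next kicks in. A sketch: the extra-spec keys follow the NetApp unified driver's conventions, but treat the exact names and IDs here as illustrative.

```
# Define a "dev-test" class of service keyed to back-end capabilities
# (extra-spec keys per the NetApp unified driver; names are illustrative)
cinder type-create dev-test
cinder type-key dev-test set netapp_thin_provisioned=true \
                             netapp_dedup=true \
                             netapp_compression=true

# The scheduler now only places volumes of this type on back ends
# that report those capabilities
cinder create --volume-type dev-test --image-id <fedora-image-uuid> \
              --name dev-vol-01 60

# Boot from that volume; with the NetApp driver, the image-to-volume
# copy above is a FlexClone, so it takes seconds rather than minutes
nova boot --flavor m1.small --boot-volume <dev-vol-01-uuid> dev-instance-01
```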
Instead of the silver/gold/bronze monikers we usually use for classes of service, maybe think of it more as aligning your volume types to actual workloads. Say you have a dev/test use case where you want to make sure thin provisioning, compression, and encryption are enabled on the underlying storage volume that backs those Cinder volumes. You can do that through Cinder volume types by associating extra specs with them. Our unified Cinder driver reports statistics and capabilities up through the Cinder scheduler as it polls the available back ends — things like "this volume exposed to OpenStack services has thin provisioning enabled" or "it's backed by SATA disks or solid-state disks." When you define those volume types and align those extra specs to them, the scheduler places your volumes on the right storage back end based on what you've asked for. So if you're a service provider, or an enterprise customer who provides chargeback services to your respective organizations, you can effectively bill based on intent — chargeback meets intent. You can align those workloads, whatever they may be, with the underlying enterprise-class feature sets our customers know and expect from enterprise-class storage in an OpenStack environment. There are a couple more examples here: another use case is VDI, where we have data protection and deduplication enabled. You set those extra specs on the volume type, and the scheduler does intelligent placement to ensure those volumes or shares land in the right place on that storage.

Another thing that's differentiating about having NetApp as the back end for your storage is rapid provisioning: instances that are both provisioned quickly and efficient in the amount of space they consume. From a Glance-image-to-Cinder-volume perspective, we at NetApp have many years of great snapshotting technology behind us. Pairing our storage platform with our unified Cinder driver enables a feature called enhanced instance creation. What does that really mean? It means that in your resulting OpenStack deployment, with the NetApp unified Cinder driver employed in tandem with our storage platform, you can get rapid point-in-time copies of instances using our FlexClone technology. What FlexClone really means is employing snapshots to clone those images out — say, for the dev/test use case we mentioned earlier. If you need to spin up 100 instances rapidly from the same Glance image, we can do that quickly with FlexClone: rapid, persistent instance creation through the Cinder volumes that underlie those instances. Also, as I mentioned, there's the space efficiency of having deduplication enabled on the Glance image store. We've certainly seen up to a 90% reduction in the amount of physical space used in resulting deployments, and as I said earlier about our own internal deployment, we see about a 92% savings — and that's not just OpenStack, that's other platforms like VMware and Hyper-V. And that's not just a best-case lab number; it's what we see in the field, too.

Okay. Enough talk. Talk is great, but doing is better.
One of my colleagues used to tell me that all the time. So, as I said, after the deployment was done, we decided to employ the OpenStack Rally benchmark-as-a-service project to see exactly how our cloud performs. We configured Rally with a concurrency of 35. For those not familiar with Rally, that means at any one point in time there are 35 concurrent requests hitting the OpenStack control plane. So we're effectively stressing the control plane, because if the control plane is down, no user can do anything with any of the services in OpenStack. Why not build that into the pipeline and make it part of this test?

We started off by saying: why don't we create 2,000 instances? Again, our deployment was four compute nodes and three controllers. We created a flavor — one vCPU, 256 MB of RAM, and 60 GB of disk. We uploaded to Glance a real, live image rather than the standard CirrOS image, which can't do very much, and we launched a Rally job right there. Part of what we wanted to do was compare and contrast the generic NFS Cinder driver versus the NetApp NFS Cinder driver in the resulting deployment. This would tell us whether all the integrations I mentioned earlier — the enhanced instance creation, the space efficiency — really hold up.

So, with the generic NFS Cinder driver, how much time did it take to create those 2,000 persistent instances — that is, creating 2,000 persistent volumes, booting them, and then deleting them afterwards? In our lab, it took about 68 minutes. Then we flipped the switch and turned on the NetApp NFS Cinder driver, and it took 20 minutes. That's a reduction in time of almost 70%.

What about how much space those images took up? Let's talk about the generic Cinder driver first. The Fedora 23 image is, I believe, a few hundred megabytes, something like that. We looked at the amount of space used with the generic NFS Cinder driver — again, the Rally atomic task creating those volumes. How much space did that take up? Almost 1.2 terabytes. What about the NetApp Cinder driver? 42.9 gigabytes. It was amazing — roughly 96% less physical space allocated for those 2,000 instances, with the only difference being the NetApp unified Cinder driver versus the generic one. I think that goes back to what I was saying earlier about the snapshotting capabilities and FlexClone. If we FlexClone 2,000 instances from an image in Glance, there's really not that much data that differs from the original image, so we don't have to store that duplicate information on disk. That's a testament to that technology being employed in an OpenStack environment.

Okay, that was interesting. What about a bigger image — something that has a lot more data associated with it? Instead of just a generic image, what difference does it make if the image itself is a lot larger?
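Before I get to that, for the curious, here's roughly what one of those Rally task definitions looked like for the first test. The scenario and runner follow Rally's conventions; the flavor and image names are made up for illustration.

```
# Rally task: boot 2,000 volume-backed instances, 35 at a time,
# then delete them (flavor/image names are illustrative)
cat > boot-from-volume.json <<'EOF'
{
  "NovaServers.boot_server_from_volume_and_delete": [
    {
      "args": {
        "flavor": {"name": "rally-flavor"},
        "image": {"name": "fedora-23"},
        "volume_size": 60
      },
      "runner": {
        "type": "constant",
        "times": 2000,
        "concurrency": 35
      }
    }
  ]
}
EOF

rally task start boot-from-volume.json
```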
We took the Fedora 23 Core image, downloaded it, and filled it with 35 GB of randomized data — we catted /dev/urandom into a file inside the image. This simulates something along the lines of a large database or the like. The exercise was simply to get a much larger image, to demonstrate what difference it makes to the FlexClone technology and to how long it takes to spin up an instance. We used the same flavor, but this time we ran Rally with a concurrency of one. We weren't as interested in stressing the control plane at that point; we had already done that, with a 100% success rate no matter which driver we used. So, to avoid muddying the proof points, we lowered the concurrency to one to accurately measure how much time each driver takes in the comparison.

With the generic NFS driver, copying that one image — 35 GB on disk — from Glance to a Cinder volume took 12 minutes. Twelve whole minutes to make that one initial copy. With NetApp, it took 32 seconds. That's another testament to the cloning capabilities we get inside OpenStack by using the NetApp NFS Cinder driver: instance creation is that much faster, and that's less time your customers are waiting to get instances booted up.

Now, what about space utilization in this case? I know we did 2,000 instances before, but this time — and I know it sounds funny and corny, but we're a storage company — we ran out of space. So we had to reduce the number of instances we were creating to 100. With the generic NFS Cinder driver, with those 60 GB volumes being copied from Glance to Cinder, it took a full 6,000 GB — 6 terabytes. Sixty gigs times 100, simple math, 6 terabytes, even with thin provisioning enabled on the generic NFS driver. What about with the NetApp driver? 87 GB. So even with this huge image containing 35 GB of randomized data, it still takes up very little space, similar to the earlier exercise with the 2,000 instances. Again, that's a direct result of the efficiency of our cloning technology: because that 35 GB of randomized data is the same data on disk in every clone, we can deduplicate those blocks and store only the resulting deltas on disk for those 100 images.

You might be asking: what was the point of this exercise? Well, we would have liked to benchmark this against different storage providers, but our lawyers would come after us if we did that. The point was not to tarnish or rip into the generic NFS Cinder driver — it's still a perfectly valid option for deployments. We're just trying to stress that infrastructure and integration really do matter: not only the physical infrastructure bits, but also the software integration in the NetApp NFS Cinder driver. It really makes a lot of sense, and it saves a lot of time and space in the resulting OpenStack clouds.

So, a couple of key takeaways to leave you with today. Of course I'm going to say it: FlexPod is the ideal converged infrastructure platform for deploying Red Hat OpenStack Platform in production environments. This is not trial by fire; this is a proven, verified architecture that you can use to get going and speed up your OpenStack deployment for true production workloads, so you can
concentrate on getting those workloads into the environment and less on the infrastructure bits — let us take care of those for you. You can spend more time, like I said, developing your applications, and less time designing and deploying the infrastructure that comprises an OpenStack deployment. And with FlexPod, our cooperative support program provides peace of mind, with an infrastructure you really can count on. So if you run into a problem or an issue — because not everybody has rocket scientists inside engineering who know OpenStack inside and out — you can use FlexPod cooperative support to help you, should the need arise.

A couple of resources I want to leave you with, too. That TR is at the same link I showed earlier; it's a 150-page document that I hope is useful for your respective deployments. We also released a solution with Red Hat OpenStack Platform 6 back in the October timeframe; that's a Cisco Validated Design, and both the design and deployment guides are there as referenceable sources. And if you just want something a little more introductory — I know the deployment guide is quite lengthy — you can download the solution brief here. That's a four-page quick introduction that hopefully summarizes a lot of the integration pieces I mentioned today, and why FlexPod is an ideal converged infrastructure platform for OpenStack. You can definitely follow us for all things OpenStack at NetApp at the Twitter handle @OpenStackNetApp.

I also want to highlight a couple of other sessions occurring here at the summit — this is Wednesday, and these sessions are still to come. Shared file systems management: Sucia and NetApp are both presenting on that Wednesday, on the same level, in meeting room 12. The Open Container Initiative, the Kuryr project, and the CNCF: IBM and NetApp are co-presenting on that integration Thursday, 9:50 to 10:30. And we have another session led by NetApp on using the Magnum project — the container project in OpenStack — to do big data rapid prototyping; that's Thursday at 11 o'clock. You can always find me or any of my colleagues down at the NetApp booth; we'd love to talk to you about all things OpenStack at both NetApp and SolidFire. And as I said, you can always visit us at netapp.github.io. There's all kinds of great, distribution-agnostic information there on enabling NetApp storage with OpenStack — very good documentation, a one-stop shop for that stuff.

We've got a couple of minutes left if there are any questions. All right, I'll be available here afterwards. I did bring some printouts from our booth that list all of the sessions NetApp and SolidFire have done or will do at the summit; I encourage you to take one — I'd be happy to give you one. And if you want to watch any of the sessions that have already occurred, they're all up on YouTube, freely available. Thanks, everyone.