Okay, hi. Hi and welcome to the session today. This is FlexPod with Red Hat Enterprise Linux OpenStack Platform 6. My name is David Cain. I'm a technical marketing engineer for NetApp. I have a Bachelor of Science in Computer Science from North Carolina State University, and I have about 10 years of data center experience. If you want to follow me on Twitter, my handle is D. Dave Cain. My name is Eric Rayline. I'm also a technical marketing engineer at NetApp, part of the Converged Infrastructure team focusing on private and hybrid clouds. I've been in the industry about 20 years: a long time in internal IT in a variety of infrastructure roles, then for value-added resellers in pre-sales architect and post-sales implementation roles, and now here at NetApp. You can follow me on Twitter as well at @erayline. Okay, as we heard in the keynote session, you don't need a PhD to run OpenStack. Well, is OpenStack too hard? We frequently hear that it's too complicated, that there are too many manual steps, too many log files to go through to figure out what's wrong and what's broken, and that it's changing too fast to keep up with. This puts folks in an analysis-paralysis mode, not making a decision as to whether they want to begin an OpenStack deployment. All of the above can lead to more investigation than actual implementation, and customers we see spend more time second-guessing a deployment than actually getting production workloads running in OpenStack. We think there's a better way. And that way is FlexPod. By deploying your OpenStack environment on a tried-and-true infrastructure, you don't need to think about the infrastructure; you can just focus on the OpenStack. For those of you who don't know what FlexPod is, FlexPod is a joint collaborative technical engineering effort between NetApp and Cisco.
It's basically comprised of NetApp FAS and/or E-Series storage as well as Cisco UCS servers and Cisco Nexus switching. Again, these are engineering efforts between the two companies to ensure that we've got the best practices, that we know the right ways to integrate the components together and the best way to deploy them for you. This is all about trying to reduce the risk and lower the total cost of ownership of deploying OpenStack, or any internal infrastructure, for your environment. It's about increasing the efficiency of your administration, increasing the efficiency of deployment, and especially increasing the speed and ease of deployment: enabling you to roll the FlexPod in, or build the FlexPod out, very quickly and begin deploying your OpenStack environment. One of the big things here is that this has been a long, ongoing relationship between the companies. We've got a lot of joint engineering effort over the last seven years, and that has resulted in about 64 of what we call validated designs: Cisco Validated Designs (CVDs) or NetApp Verified Architectures (NVAs). These are design and deployment guides that are lab tested, vetted out, and very prescriptive, giving you explicit guidance on how to deploy a FlexPod, or how a value-added reseller or distributor might create a FlexPod for you that you can just roll into your data center, plug in, power on, and get moving. So talking is good, but doing is better. The proof is in the pod. I know we could stand up here and talk marketing speak to you, but let's get to the meat of it. Let's talk about what we were able to do in the lab. We took an existing FlexPod in our lab at NetApp in RTP, North Carolina and rolled it together in a rack. And the takeaway from this is that it's reasonably specced hardware. It's not a lab queen. These are about one-year-old components, but still a validated FlexPod.
And we took and deployed Red Hat Enterprise Linux OpenStack Platform 6 on this hardware, and we took advantage of FlexPod and NetApp integrations and enhancements that we're about to go through in more of a deep dive. We want to take you through some lessons that we learned as we scaled up the number of instances in the resulting OpenStack cloud. And really, to paraphrase it, to ask the question: how far were we able to scale with this? This diagram here illustrates the components of FlexPod. Pay attention to the chassis itself. We started with eight nodes to begin with. How far were we able to scale with those eight nodes, with four of them being compute or hypervisor nodes? A thousand? Two thousand? Five thousand? More? Well, stay tuned and find out. So let's lay a little groundwork first and talk about what we're going to be doing in order to get there. What integrations are we going to be taking advantage of? NetApp has a long-standing history of contributions to the open source community and to the OpenStack community. We've been a contributing member of the OpenStack community since about 2011, and we've actually been a sponsor of every summit since 2011. Since then, we've released significant amounts of code upstream to provide Cinder drivers and other integrations, as well as constantly adding new features and new value into our contributions. Again, we try to develop all of these as upstream contributions. You don't need to go and download most of the code from our site. You just get it from whoever is providing your OpenStack distribution. We're going to say Red Hat's a good one, but there are plenty of others that will have our code included in them.
And we are really grateful for the fact that, because of our contributions to the community and because of our involvement, we've been included in the latest OpenStack user survey results as the number one commercial enterprise-class storage system being used in OpenStack environments. This is not the first year, but what's even better is that we're actually growing. The percentage of NetApp utilization inside of OpenStack environments has increased from last year to this year. And we really appreciate that, and we think that's a testament to, again, our contributions, and to how well the OpenStack technologies and our technologies work together. But it's not just about standing here and saying, hey, great, come buy our stuff, come use our stuff, we're great, we're cool. We're also a consumer of OpenStack. We're a customer of OpenStack. We use OpenStack a lot internally for multiple environments: production, test/dev, engineering efforts. So we really are using OpenStack, or involving OpenStack, at every layer, at every possibility. And from this slide, we joined the OpenStack Foundation in 2011 and we've been contributing code since then. The point to take from this is that even here at the Liberty Summit, we've announced Fibre Channel protocol support in our Cinder driver. We've been here for a long time and will continue to be here. So I want to talk about some of the integrations that lead into the scale numbers we're going to talk about in a few minutes. One of those items is Glance with clustered Data ONTAP, the operating system that runs on our FAS platform. Two things here. One, copy offload. That's a piece of technology that will eliminate the initial network copy from a Glance image repository to a Cinder repository. So if you use our FAS platform and clustered Data ONTAP, instead of having that network copy between those two different NFS FlexVol volumes, we can avoid the network copy entirely.
So instead of copying through the network, we instantiate that first volume, or template as we like to call it, through the storage system. It's very fast, and it avoids that first copy through the network. The other is space efficiency. Our deduplication technology enables common 4K blocks to be coalesced into a single block, which enables a lot of space savings on the underlying volume that holds all of your images. In fact, most of the images that live in that Glance data store are renditions or different variations of operating systems, so they can share some of the same blocks. And with deduplication technology turned on, we use pointers to only store the deltas that have changed between the different images stored inside of that Glance image store. So it's very space efficient. We've seen with other hypervisor platforms, VMware, Hyper-V, almost 90% deduplication rates. And that's not just internally; that's out in the field. So Glance with NetApp storage is a very space-saving measure. Cinder: we've contributed a lot of code to Cinder over the years. Most of the code that we have from a Cinder standpoint has been there since the beginning. One differentiating thing you can do with Cinder is create what we call a storage service catalog. In Cinder, when you create Cinder volumes, you now have the ability to specify a volume type, and that volume type can have a name. If you look at the chart here, I have three examples. One is a transactional database. One is disaster recovery. The other one is test/development. Really, you could use any arbitrary name that you choose. You could call them cat, duck, whatever; silver, gold, bronze. It's really a differentiating feature where we can enable either the tenants or the users that request Cinder volumes to take advantage of our NetApp technology on the back end when you use it for a Cinder deployment.
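The storage service catalog just described boils down to a named volume type with extra specs behind it. A hedged sketch with the Cinder CLI: the type name, the backend name, and the `netapp_disk_type` extra spec value here are illustrative placeholders, and the commands assume an admin session against a live cloud, so treat this as the shape of the workflow rather than our exact lab commands.

```shell
# Create a named volume type and attach extra specs that steer it to a
# flash-backed NetApp backend (names below are hypothetical).
cinder type-create transactional-db
cinder type-key transactional-db set volume_backend_name=netapp-nfs \
    netapp_disk_type=SSD

# A tenant then simply requests the type by name:
cinder create --volume-type transactional-db --display-name db-vol 100
```

The scheduler matches the extra specs against what each backend reports, so the tenant never needs to know which aggregate or disk type actually serves the volume.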
So we'll talk about the first one there, a transactional database. You may have a customer workload where the customer wants that data to be specifically backed by solid-state disks, or flash disks, in the NetApp storage device. So we can go in through the extra specs library, create these volume types, and associate those specific extra specs with that volume type, so that whenever a customer actually uses Cinder and specifies a volume type, either through the command line or through the Horizon dashboard, they get Cinder volumes that are exposed on those back-end NetApp storage systems. So it's differentiating in the fact that if you have three different FlexVol volumes that have different features associated with them, whenever the tenant or the user requests those Cinder volumes, the Cinder volumes that are created will be backed by those provisions on NetApp storage. We align those volumes to workloads. Instance caching. This is another very, very good feature that we provide through the Cinder driver. Let me take you through it for a minute. Once the Cinder volume is created from the Glance image, we cache it in what's known as an NFS image cache. So if I look from the Linux system's perspective, if I do an ls on that directory after that first image is instantiated, hopefully through the copy offload technology, I can see that there's a cache inside of the volume there that's backed by the NetApp storage system. Future volumes are then cloned from that image cache. So whenever the user or the tenant requests more Cinder volumes to be created from that image in the image cache, they're not copies through the storage system. They're actually clones. We instantiate, through our Cinder driver, back-end calls to our API that actually clone those instances out. And we found that that's very fast, from a dev/test perspective, in rapidly instantiating instances that have persistent volumes attached to them.
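On the controller side, the copy offload and NFS image cache behavior described above comes down to the NetApp backend stanza in `cinder.conf`. A minimal sketch, assuming the Juno-era driver option names; the hostname, credentials, SVM name, and paths are placeholders, not our lab values:

```ini
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = secret
netapp_vserver = openstack-svm
nfs_shares_config = /etc/cinder/nfs_shares
# Path to the downloaded copy offload binary; this is what enables the
# storage-side Glance-to-Cinder copy instead of a copy over the network.
netapp_copyoffload_tool_path = /etc/cinder/na_copyoffload_64
```

With the tool path set, the first volume created from an image is copied on the storage system, lands in the NFS image cache, and subsequent volumes are FlexClones of that cached copy.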
And it shares the same blocks as the cached image. So again, only the deltas take up new blocks on disk, and it's very efficient. So pairing what I said earlier about Glance and Cinder, with the copy offloading, the instance caching, and our FlexClone technology, we can dramatically reduce the creation time of Cinder volumes that are used for persistent instances. We'll talk about Swift as well. The last two features we've been talking about in terms of project integration with OpenStack have really been focused on our FAS storage. We also take advantage of our E-Series integration with Swift, which I think is a really good match. When it comes to object storage, the primary concerns really are around resiliency and scalability. You want to make sure that when you're writing your objects, you're going to be able to retrieve them later. You're not going to have to worry about losing them. You're going to be writing a lot of objects, and you're going to start scaling pretty large. So there are various concerns you have as you start trying to scale these things up. In a typical Swift implementation, you've got your Swift nodes, and each Swift node has its own local storage. So as you're scaling up Swift, you're scaling up the compute for the Swift nodes, but also scaling up all that local storage. And all that local storage, of course, is comprised of disks. And as time goes on, disks, of course, are getting bigger and bigger. With the rebuild times associated with a failed drive, as you start talking about one terabyte to two terabytes to four terabytes to six terabytes and larger, that rebuild time can become significant. And because of the way that Swift works with replicating those object copies, when you have a disk failure, you're actually getting increased network traffic in order to be able to do that rebuild.
So the rebuild is going to take time, and during that time you're exposed to potential data loss depending upon what other drive failures might occur in the environment, and you're increasing load on the environment. By using our E-Series with Dynamic Disk Pools, you're able to dramatically reduce that rebuild time, by a measure of about eight-fold, reducing your window of exposure, as well as offloading much of that traffic to the back-end storage system instead of going across your front-end network. As well, when it comes to replication: Swift stores multiple copies for data protection across the nodes. By default, it's going to store three copies. By doing that, of course, it requires a lot of network traffic to handle that replication, so it's more load. By offloading things onto our E-Series with DDP, you can reduce that data replication from three times to about 1.3 times. And the big thing is that you're not going to have to have as much of this local storage attached to each Swift node. So you're going to be able to dramatically reduce the amount of hardware required, the amount of storage space, the amount of rack space, the amount of power, the amount of cooling, generally giving you a much greater TCO by using E-Series for your Swift environments. Now, lab validation. I mentioned earlier that we took a FlexPod and built it in our lab in RTP. I just want to take you through some of the components of that FlexPod and dive into some metrics that we were able to measure from it here in a minute. From a storage perspective, we had one NetApp E-Series device, which has dual controllers for high availability. We took a NetApp FAS8040, which has two nodes for HA. And the important point to mention, if you don't remember anything else from this slide, is that on the NetApp FAS device we used a quantity of 24 900-gig SAS disks with one shelf. No flash in here.
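The reduced Swift replica count Eric describes above is expressed when the object ring is built. A sketch, assuming the standard `swift-ring-builder` tool; the part power and min-part-hours values here are illustrative, and the rebalance step assumes devices have already been added to the ring:

```shell
# Rings are normally created with 3 replicas:
# swift-ring-builder <builder> create <part_power> <replicas> <min_part_hours>
swift-ring-builder object.builder create 10 3 1

# With DDP handling disk protection on the back end, the replica count can be
# lowered to a fractional value, then the ring rebalanced:
swift-ring-builder object.builder set_replicas 1.3
swift-ring-builder object.builder rebalance
```

A fractional count like 1.3 means most partitions keep one replica while a fraction keep two, which is where the roughly 1.3x storage figure comes from.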
From a networking standpoint, we used two Cisco Nexus 9396 switches with 10-gigabit throughput throughout, configured highly available. And from a compute standpoint in this initial validation, we used eight Cisco UCS blade servers. So that's what we built. Now, I just want to take you through what it means to instantiate RHEL OSP 6 on FlexPod, just the steps involved here. Number one, configure the physical infrastructure. Obviously, take all the components, mount them in the rack, cable them up. And from a physical infrastructure standpoint, get onto the Cisco Nexus switches, configure them with a vPC together in high-availability mode, and instantiate subnets on there that we'll consume later from the perspective of the RHEL OSP installer. Get onto the Cisco UCS fabric interconnects through the Cisco UCS Manager application. What we're going to do there is create a service profile template. For those not familiar with Cisco UCS, it's just a basic abstraction of the compute below, down in the lower corner of the rack right there. Basically, I define variables for my intended deployment, and I can scale out from there. It's very easy to take the compute nodes that are down there and define important attributes from a hardware perspective as to what those entail: network interfaces, what the boot order is. In our case, we're going to boot from the network via PXE in order to instantiate the RHEL OSP installer. Other attributes, like whether local disks are used; in our case, no. We're doing stateless booting via iSCSI to the NetApp FAS device. So later we can take that template and clone compute nodes out from it. If we scale up the environment from a compute node standpoint, it's just a couple of button clicks. It's very easy.
From the NetApp E-Series standpoint, we need to instantiate, through iSCSI, IQNs that are going to be mounted for Swift, and then physical data stores, like Eric mentioned, with Dynamic Disk Pool technology; instantiate those ahead of time through the SANtricity tool that we have. The NetApp FAS device, similar to the rest of the components, you configure and set up ahead of time. You can either use the command line, if you're familiar with that, or other orchestration tools. Create two different volumes, one for Cinder and one for Glance, that will be used later for the installation. And then that's it from a physical standpoint. Next, we're going to deploy the RHEL OSP installer. We're going to take one of our service profiles, or compute nodes that we have, and pick it for the installer. For all intents and purposes, that's a management node. It has a PXE server, DHCP, DNS. It basically manages the lifecycle, and it has a Puppet master server. It really coordinates the relationships and the deployment of RHEL OSP 6 on this hardware. So that's the management node. You're going to log into it and associate subnets with it. You're going to set up your deployment to match what you physically set up before. The installation includes an easy button, so you will select NetApp as the back-end storage system for Cinder, and then for Glance we use the NFS volume backed by the volume that we created earlier. Once that's done, we're going to take compute nodes that have been booted up via PXE and discovered by the RHEL OSP installer, and drag them into a controller node relationship, which has several services that will be instantiated by the installer later through a subsequent boot. We do the same with compute nodes. Once we're done with that, we hit the launch deployment button.
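Carving out those two volumes ahead of time is a couple of commands from the clustered Data ONTAP CLI. A sketch under stated assumptions: the SVM name, aggregate, sizes, and junction paths below are placeholders, not our lab values.

```shell
# From the cluster shell: one NFS-exported FlexVol for Cinder, one for Glance.
volume create -vserver openstack-svm -volume cinder_vol -aggregate aggr1 \
    -size 5TB -junction-path /cinder -state online
volume create -vserver openstack-svm -volume glance_vol -aggregate aggr1 \
    -size 1TB -junction-path /glance -state online

# Deduplication on the Glance volume is what lets similar images share blocks.
volume efficiency on -vserver openstack-svm -volume glance_vol
```

The junction paths become the NFS exports that the installer's easy button wires up for Cinder and Glance later on.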
Everything reboots, everything boots to network, and everything talks via PXE and gets Red Hat Enterprise Linux 7 delivered down to it. Through Puppet, through orchestration, Anaconda lays Red Hat Enterprise Linux 7 down on all of the compute nodes. Once the deployment is done, everything reboots, Puppet takes over, and it starts installing OpenStack services on there for you. No manual configuration files. Everything is automated. All those services are instantiated, and then whenever the deployment finishes, Cinder will reach out, as configured through the easy button, to the NetApp FAS device. The Glance NFS volume is instantiated and mounted inside of the construct of the recently built cloud, and then we have it up and running. After it's running, to do Swift, unfortunately that's not orchestrated through the installer, but you can either use Puppet manifests, or you can install the packages yourself, or use Ansible, whatever automation piece you choose, and then have that be orchestrated through iSCSI over to the E5524 NetApp E-Series device, and then you're off and running. You have a cloud built; now you can start to create projects and tenants and spin out instances. A very simplified process, but very powerful in the fact that the resulting deployment is a truly HA-capable OpenStack distribution. So I want to take you through what we did with that. Once we had all of that stuff up to date and built, what were we able to do with it? Through this chart right here, just base out of the box, I was able to do 200 and then 1,000 instances and volumes. And when I say 200 and 1,000: running through the first line really fast, I create either 200 instances or 200 volumes, and then I do boot commands through some scripts that were written that instantiate the Python Cinder client and the Python Nova client libraries.
So right from the first line right there, we were able to clone 200 volumes from Cinder in a time period of 73 seconds. Very fast. And then, after the volumes are spun up, another script goes through and I just say boot: just boot these instances from those persistent volumes, and that took about four minutes. So if you add up that amount of time right there, that's about five minutes to do 200 instances and volumes. So I did a thousand next. The thousand-instance creation right there: the same thing again, instantiate through those scripts, create one thousand Cinder persistent-backed volumes on the NetApp NFS, and that took about six minutes. Remember, that's very fast because we're taking advantage of the FlexClone technology that's inherent in the NetApp storage system. The boot command right there, to have all those systems booted up and on the network ready to be accessible, we did in about 31 minutes. So that's a thousand available, accessible Fedora 21 cloud images on the network in about 31 minutes. And again, this is using a very modest storage configuration of all SAS. There is no flash involved whatsoever. No hybrid, no all-flash, strictly SAS. So, not satisfied with that, we looked around in the lab and found some more hardware. We noticed that in the previous two tests we could create Cinder volumes at about the exact same rate no matter how many volumes we created. About three a second? Yep, about three a second. And so we decided to add more compute to this. Very easy: mount them in the rack, have them cabled back to the Cisco fabric interconnects, and they show up in our deployment. We already have our service profile template defined and active right there. It's just a simple matter of cloning those service profiles, which are compute nodes if you're not familiar with UCS, and bam, we're in the environment.
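Our actual scripts used the Python client libraries, but the core loop amounts to something like this dry-run shell sketch. The image name, flavor, and volume size are placeholders, and the function only prints the `cinder`/`nova` commands it would issue rather than calling a live cloud:

```shell
# Emit the bulk volume-create and boot-from-volume commands for N instances.
gen_commands() {
    count=$1
    image="fedora-21-cloud"   # placeholder Glance image name/ID
    i=1
    while [ "$i" -le "$count" ]; do
        # Each volume is cloned from the NFS image cache (FlexClone on the backend)
        echo "cinder create --image-id $image --display-name vol-$i 4"
        # Boot a persistent instance from that volume
        echo "nova boot --flavor m1.small --boot-volume vol-$i instance-$i"
        i=$((i + 1))
    done
}

# Dry run for 3 instances; with real IDs you could pipe the output to sh
# to drive a 200- or 1,000-instance run.
gen_commands 3
```

Because the volume creates are back-end clones rather than data copies, the create phase stays at a roughly constant rate no matter how many volumes the loop requests.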
This time we have 20 nodes available, so just through the scripts and such from earlier, I decided let's scale it out some more. Let's see how much more we can get out of this. And again, this is instantiating all the pieces of OpenStack, FlexPod, and NetApp technology that we talked about earlier from a Cinder and Glance standpoint: the copy offload, the cloning, the template using FlexClones, using the NFS instance caching, booting an instance with each new volume that's created, then measuring the amount of time until all the instances are up and running, and then some cleanup statistics right there, just terminating the running instances and then deleting all those Cinder volumes. We were able to do 5,000. This is just a picture of the Horizon dashboard right there, where we've highlighted the number of instances and the number of CPUs; going back to the earlier slide, that's one CPU and one gig of memory per instance. As you can see, we've highlighted that box on the right, and I know it's kind of hard for me to show 20 nodes of hypervisors in there, but we were able to scale that out to between 260 and 280 instances per compute hypervisor node. So, very impressive. And the numbers match similarly with the volumes, because remember, these are all persistent instances; they're not ephemeral instances. They're backed by persistently stored volumes. So again, 5,000. That's pretty impressive. I mean, it's five times what we were doing before. It's five times what we were planning to do for this infrastructure. But I don't know, is that good enough?
Well, wait, there's more. We did 10,000. I did this the Saturday before flying out here to Vancouver: we did a run of 10,000 instances. As you can see in the column on the right, in some cases we were able to get over 500 instances per compute node, and you can see this is an updated picture from the Horizon dashboard. Then I got a clever idea. Well, they're online, they're active, they're available, at least according to Horizon. What about from a network standpoint? Are these things actually accessible on the network? Can they ping? Do they respond? Well, I did a cute little ping sweep with Nmap and ICMP, and you can see that the tunnel subnet that I used from a VXLAN standpoint was a /18, so it's about a 16,000-address subnet, and I booted, like I said earlier, 10,000 instances on there. Well, 9,600 of them responded. Now, 10,000 persistent instances: we almost got to that point. If you look in the background at the limit summary, where it says instances, 9,888, and then 9,600 actually responded to ICMP through a ping sweep, that's about 97% of those instances available on the network. So, very impressive. One single shelf of 24 900-gig SAS disks backing all of this. This is a quick and dirty test, and we have not completed the rest of our engineering efforts. Who knows where these numbers will end up before this project is over with, but it's pretty impressive. So, just some of the things that we had to do to the resulting RHEL OSP 6 deployment to get to this level of scale. To be clear, the 200 and 1,000 instance runs that were done earlier with the four compute nodes, that's available out of the box. But when you start scaling out to 5,000, 10,000, or even greater numbers, and I'm sure we'll play around a little bit more with the scale and get back to you, there are some things that need to be done. From a controller node standpoint, you've got to increase the collection of resources returned in single responses. So if you're doing a cinder list or a glance list, that's
instantiating the Python client libraries, and by default that only returns a thousand results. Obviously, we have to scale that a little higher, so bump those variables up in Cinder and Nova to 10,000. We had to increase the maximum number of open files a process may have open at a time; I increased that to 64,000, kind of an arbitrary value, but enough that it wasn't a bottleneck. For the Galera database, I believe the default was 1,024 connections. I just wanted to make sure from a database perspective that we could scale out, so we increased the maximum number of connections to 10,000 to accommodate that. From a Neutron standpoint, we disabled L2 population in Neutron ML2. At around 5,500 to 6,000 instances, it looked to me like there was a race condition that became apparent whenever some of those notification commands come from L2 population back to Neutron and OVS. L2 population is there to prevent the problem overlay networks have with unknown broadcast traffic, whenever I do that ping and it ARPs for all the responses, all the destination MACs, but I had to disable it. I enabled ipset: whenever you instantiate an instance, it's applying iptables security groups in the background. I didn't disable security groups; I just enabled ipset, which is supposed to enable faster processing of that. Going from 5,000 to 10,000 instances, I had to increase the timeouts on HAProxy. I was seeing that as I got up into the 6,000-instance range, timeouts started to occur. I believe the default is 30 seconds; I bumped it up to 900 and basically said, yeah, wait for this. And it did. Probably the biggest difference was RabbitMQ. I was noticing, once I scaled up to about 5,000, or a little bit before that, things were starting to fall off the truck from a Cinder client or Nova client perspective. The log files indicated that there were response times that weren't being met, and so the
booting was failing, so increase those variables there. Then Ceilometer: we just made sure not to fill up the disk there. Compute nodes: obviously enable debugging; we need to know what's going on with Nova as we instantiate this many instances. We also tell it that the virtual interface, which is instantiated through ML2 and OVS, should not fail the boot if it can't be instantiated in time. So I said, you know, if you can't instantiate that, keep trying; five minutes is fine; it will eventually go on. Just for context, in that 10,000-instance run I think I had maybe 10 instances fail to boot for one reason or another. Disable L2 population, and then of course the quotas: from a default OpenStack installation you need to increase those, because there's just a limited quota right there for Nova, Cinder, and Neutron. So that's great, some of the lab testing results, but what else does FlexPod buy you? Yeah, so we just talked about what we saw from a scale perspective and how easily we were able to deploy everything and actually produce the OpenStack environment on FlexPod, but there are a lot of other things that FlexPod provides to an OpenStack environment, to an infrastructure environment, that we only touched on briefly or didn't touch on at all. The server abstraction with UCS service profiles we've talked about several times, and I think it's a really good example of how you can have the underlying infrastructure scale as quickly and as easily as the OpenStack components themselves do: being able to easily add, in this case tripling the number of compute servers that we had for the environment, very quickly and very easily. The same way, we can scale it up or scale it down. Then there's the SAN booting technology. In conjunction with the service profiles and the server abstraction, you're able to make everything stateless. If you have a server node fail, you can easily just reapply that service profile to a
different server, in whatever chassis in the environment you care to, and all of that, including the boot profile, will map to it, and the server will boot automatically. No additional downtime besides the time it takes you to actually re-associate that service profile. Great for DR scenarios as well. From a networking perspective, we're talking about the de facto industry standard of Cisco networking: Nexus switching, the command-line interface, the feature sets that everyone is used to, a very large amount of familiarity, and usually cutting-edge features. We're also talking, from an OpenStack networking perspective, about taking advantage of the ML2 VXLAN or the ML2 Nexus modular drivers inside of the RHEL OSP installer. From a storage perspective, we've talked a lot about the Cinder integration especially, but also the Swift and Glance configuration, most of which is being done automatically for you inside the RHEL OSP installer, so there's not a lot of manual configuration you need to worry about after the fact. But we didn't really talk too much about the underlying storage itself; if anything, we've probably talked more about the compute side of things, but the storage deserves a notable mention as well. We're talking about NetApp FAS, we're talking about clustered Data ONTAP: the only unified scale-out storage system supporting block, supporting NAS, supporting block and NAS at the same time, as well as being able to support just spinning media, hybrid with spinning media and flash, or all flash, all of them within the same scale-out cluster if you want, with the same pane of glass, the same single management interface. No one else can touch that today. No one else is able to do that. And it makes it very, very easy for you to implement whatever kind of storage services you need for your environment, and to be able to share the same management plane and the same storage services
with your OpenStack environment as with your existing environment, a hybrid environment, bare-metal servers, whatever it is. We're going to be able to support any of the services that you require.

Part of how those storage services are actually being presented is through what we call storage virtual machines, which is a way of taking your physical array and virtualizing it. So whether it's a single HA pair or whether it's 12 HA pairs, you can take those 12 disparate arrays and present them up to your hosts either as a single entity or as multiple entities. The concept behind the storage virtual machine is that, to the host, it's connecting to a distinct physical box, regardless of whether it's being done on one node or 12 nodes or 24 nodes. This includes all the MAC addresses, IP addresses, WWPNs, any of those things. This works really well from a tenancy perspective, where you can map an SVM to a particular tenant, to the admin tenant, to an individual tenant, and between the SVMs the storage cannot be modified. So if you've got tenant A's storage on its own SVM, nothing that happens with that SVM from any management interface is going to impact the storage being presented for tenant B or tenant C, all the way down.

The other thing we do is a lot of high availability on the box. A cloud platform is fantastic, but you need to make sure the infrastructure is going to be there underneath it to support it. So we make sure that all of the basic stuff is there: redundancy everywhere, multipathing everywhere, so that you can withstand multiple failures without having any degradation to the environment. And being able to withstand failures is great, but it's kind of old hat; we're expecting that you're going to be able to withstand failures. You also have to be able to do things like online upgrades, whether that's upgrades of your UCS firmware, upgrades of the Nexus firmware, upgrades of Data ONTAP from an OS perspective, or for any of the
subcomponent firmware: disk firmware, shelf firmware, etc. You can do all those things online, without impacting services, while everything remains running, and then extend that to any other maintenance operation. Again, this is a scale-out cluster, so we can move things as we need to, whether it's from a load-balancing perspective or from a hardware-refresh perspective. We can move around the logical interfaces that the tenants or the hosts are actually accessing the storage with, between physical nodes, wherever we want. We can move the volumes that those Cinder volumes or the Glance images are all living on to any node in the cluster. We can do this, again, for rebalancing, or to give more performance: perhaps that Cinder volume was fine before on SATA, but now it needs to be on SSD. It can all be moved in the background, live, to give the required capabilities needed for the application, without ever impacting the tenant, without ever impacting the actual workload itself, other than it suddenly getting better performance.

And again, because we're able to do all this abstraction at the compute and the storage level, we can do online expansion or online contraction as you need to. Whether you're growing your private cloud, or maybe shrinking your private cloud as you're turning it into a hybrid cloud and starting to burst into hyperscalers, either way you can do that up or down seamlessly, online, again without having any impact.

And talking about scaling up and scaling down: the infrastructure ultimately needs to be able to scale with OpenStack. We've already shown you that we could do almost 10,000 on a pretty small configuration; we're talking about 40 CPUs for the compute hypervisors. In a single UCS domain we can actually scale to 160 half-width servers, which equates to 320 CPUs with over a dozen cores per CPU. So you can scale up a tremendous amount just from the compute perspective, from the CPU and core count, and from memory. And if you need to
scale beyond 160, you can go beyond that as well and still maintain a relatively single-pane-of-glass kind of management by using things like UCS Director and UCS Central.

From a storage perspective, again, FAS scales up and out. We can scale up to over 8 petabytes in a single HA pair with our largest FAS8080 EX system, or out to as much as 100 petabytes in a fully scaled-out NAS cluster. So whatever kind of scale you need, whatever services you need, we can support that within FAS. And just touching again on the storage virtual machines and the way that we're able to carve up the presentation of the storage arrays: we can actually scale up to 250 SVMs in a SAN cluster, or 1,000 SVMs in a NAS cluster. That really is more a function of how many Fibre Channel interfaces and the like you're actually going to have out there to connect to the end hosts, and a factor of how many storage controllers we're supporting inside each of those clusters, where a SAN cluster can scale up to 8 nodes and a NAS cluster scales up to 24. So that's quite a bit of capacity and quite a bit of performance to support, from both the compute and the storage side, and a cluster is a single point of management. If you choose to, you can actually delegate management at the SVM level, but otherwise, as the administrator, as the operator, you can do everything from the same pane of glass, the same GUI, the same CLI. Everything's consistent from the smallest system to the largest.

So we'll wrap up. We really appreciate you listening to us, and if there's anything we want you to take away, it's that FlexPod is the proven infrastructure that you can use to stand up your OpenStack on top of. Take advantage of all the joint engineering efforts, all of the decades of experience that we have with storage and that Cisco has on the networking and the compute side, to be able to maintain a very
consistent environment, a very resilient environment. Both FAS and E-Series are actually rated for over five nines of uptime. And with all the integration we've provided between the companies and between the communities we take advantage of, we basically turn that around into validated designs and documentation for you to use to build your infrastructure, or that someone else will use to build that infrastructure for you. Either way, you're able to build off of the sweat, tears, and late nights other people have spent.

And speaking of all that blood, sweat, and tears, and the ongoing collateral: for what we've been talking about today, this architecture and this testing, there is documentation coming out in the very near future that you'll be seeing. This is all the result of this joint engineering effort between NetApp, Cisco, and Red Hat in providing this implementation. Already today, we actually have the installer video available, showing you how we went through and did an installation in this environment. You can come check it out at our booth; it's about a seven-minute-long video, or you can just go online and check it out yourself. It very simply shows you how easy it is: despite all the disparate parts, all the moving pieces, it's very simple, quick, and easy to get RHEL OSP up and running on a FlexPod. And scalability is not on here, but for those that say that OpenStack does not scale: well, it does on FlexPod.

So we've got a lot of sessions here at OpenStack. We actually have 17 sessions, 15 of which we haven't had yet. We could put all 15 sessions on here, but you wouldn't be able to read it, and I wouldn't have time to tell you about them. But there are two to point out specifically; they're both focused on FlexPod and OpenStack. We've got two different customers who are going to be speaking about their experiences. We've got Telus, who are going to be speaking tomorrow, as well
as Verde Data, who will be speaking on Thursday. So feel free to check those out, and check out any of the other sessions that we have online while you're here.

Speaking of online, we've always got our OpenStack Deployment and Operations Guide available. It's continuously updated out there on netapp.github.io, which is also where all of our code is actually housed, so you can see all of that upstream code living up there. You can also follow us at @OpenStackNetApp on Twitter. And certainly come see us here in the ballroom, right down there in S13, just down here on the right. Come down, stop in, take a look at the demos, and ask any questions. If you have any questions, we'll be down here below the stage. Thank you for coming. Thank you very much.
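The Cinder integration discussed in the session ultimately surfaces as a backend stanza in cinder.conf. The following is a minimal sketch of what the RHEL OSP installer might generate for a clustered Data ONTAP NFS backend; the hostname, credentials, and SVM name are placeholders for illustration, not values from the talk:

```ini
# Hypothetical example: Cinder backend for clustered Data ONTAP over NFS.
# All names, addresses, and credentials below are placeholders.
[DEFAULT]
enabled_backends = netapp-cdot-nfs

[netapp-cdot-nfs]
volume_backend_name = netapp-cdot-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = changeme
netapp_vserver = svm_openstack
nfs_shares_config = /etc/cinder/nfs_shares
```

The `netapp_vserver` option is what ties Cinder provisioning to a specific storage virtual machine, which is how the per-tenant isolation described in the talk is carried through to OpenStack.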
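The per-tenant SVM isolation and the live, non-disruptive volume moves described in the session are driven from the clustered Data ONTAP CLI. A sketch, with hypothetical vserver, aggregate, and volume names:

```
# Create an SVM dedicated to tenant A (all names are hypothetical).
vserver create -vserver svm_tenant_a -rootvolume root_a \
    -aggregate aggr1_node01 -rootvolume-security-style unix

# Later, rebalance: move a Cinder volume's backing FlexVol from a SATA
# aggregate to an SSD aggregate on another node, live, with no tenant impact.
volume move start -vserver svm_tenant_a -volume cinder_vol_0042 \
    -destination-aggregate aggr_ssd_node02
```

Because the SVM abstracts the physical nodes, the tenant's NFS or iSCSI endpoints stay the same while the data moves underneath.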