All right, ready? All right. Hello, everyone, and welcome to the session. Today we're going to talk about OpenStack at scale inside NetApp: our trials and tribulations. Mansi and I both work in the NetApp product operations division, and hopefully you're in the right room today; it's Wednesday, April 27th, 9:50 AM. So, a little bit about us before we get started. Good morning, everyone, and thank you for showing up this morning for this session. I'm Mansi Prabhavalkar. I have been working as a systems architect at NetApp for the past two years now. I basically deal with all things OpenStack inside our Engineering Shared Infrastructure Services organization, or ECIS, as we call it. Before NetApp, I did my master's in computer science at NC State University. You can follow me on Twitter; my handle is @MansiP11. And hey, I'm Dave Cain. I'm a reference architect and technical marketing engineer, among many other hats, at NetApp. I have a Bachelor of Science degree in computer science from NC State University. Go Wolfpack. Go Wolfpack. And I have about 12 years of data center experience. My role at NetApp is a little bit different: I evangelize NetApp storage products. Specifically, I sit in the FlexPod Converged Infrastructure Engineering division, but we're both in the core engineering division at NetApp. If you'd like to follow me on Twitter, my handle is there: that's @thedavecain. So rather than a generic, boring agenda slide, we have a timeline that we'd like to take you through for this presentation today, and the takeaway from that timeline is about a year and a half's worth of great work internally at NetApp. We've segmented the presentation into four distinct sections. First, we're going to give you a little background about our internal organization that runs OpenStack: how we got started with OpenStack, what we do, and a little bit about some of the infrastructure bits we have there. We're going to talk about Puppet and some of the automation we initially wrote and deployed in Research Triangle Park, North Carolina, where we started our journey with OpenStack. Once we got there, we automated our way through non-disruptive upgrades to the Kilo release. Then we went big and globalized OpenStack at NetApp; we're a global company with labs all over the world. And once we did that, we did a global upgrade to the Liberty release. After all that, we'll give you some future steps of where we're taking our cloud next. But first, a little bit of history and a little bit about the organization. So let me tell you briefly what we do as an organization inside NetApp. We are ECIS, a global engineering shared infrastructure services organization that supports around nine R&D lab sites across the world and a customer base of about 5,000. Our customers are NetApp engineers, QA testers, and software developers who exercise NetApp products inside our infrastructure, so that the products that ship are as bug-free as possible and our external customers benefit from that. Out of the 130 team members I work with, eight of us currently support OpenStack inside our organization. Again, we are customer zero, so we consume NetApp products before they ship to our customers, and we try to stay at the bleeding edge of technology by implementing pre-release NetApp and partner solutions.
We also provide seamless resource delivery so that we can drive innovation inside NetApp. With that in mind, we introduced our internal private cloud, GEC, back in 2013. GEC is our Global Engineering Cloud, and it's operational today at five different sites across the world. It serves as a self-service, one-stop location for our users to request resources, instead of the clunky ticketing system we had before. Those requests are processed by a workflow engine on the back end that talks to the various hypervisors we have in our environment: currently VMware, Hyper-V, and the recent addition of KVM on OpenStack. Right, so that's a little background on the organization. Now, what about the infrastructure that underpins the cloud we're talking about here? A little more about the Global Engineering Cloud inside NetApp. At a high level, we have about 75,000 total VM capacity in that cloud, spread across the three hypervisor platforms we mentioned earlier: VMware, Hyper-V, and KVM. Of that total, about 9,000 VMs' worth of capacity is stood up worldwide inside that cloud today. We checked right before we came out here to do this session exactly how many live VMs we had running right now, servicing real workloads and being used by our devs and testers: we had about 3,600. Of the entire capacity inside the Global Engineering Cloud, 15% is KVM, 45% is Hyper-V, and 40% is VMware. And this is an infrastructure-as-a-service play. Remember the portal that Mansi mentioned? That's the single place where the developers, the QA testers, and everyone else goes to select resources for their various job functions and roles. They don't care which hypervisor they get; they just want their resources, and we provide them. Right. So, some of the hardware bits that underpin it, because without solid infrastructure underneath, your cloud doesn't function very well. To that end, we internally at NetApp utilize a FlexPod Datacenter-like architecture. And what's a FlexPod, for those who may not be familiar? It's a joint technical engineering effort between Cisco and NetApp: a converged infrastructure platform, so storage, networking, and compute. On the compute side, it's the Cisco UCS (Unified Computing System). On the storage side, it's our NetApp FAS and/or E-Series storage platforms. And on the networking side of the house, it's Cisco Nexus 5000 or 9000 series switches. So what about the automation bits inside this cloud? As I mentioned earlier, we utilize Puppet. And why did we choose Puppet over the various alternatives, Chef, SaltStack, Ansible? We were most comfortable with it. We had a previous investment internally orchestrating some of the VMware and Hyper-V bits, so it was a natural choice to begin our journey there. We utilize Jenkins for continuous integration and continuous delivery, and of course Git, where we store our Puppet manifests as code, infrastructure as code, to be consumed later by the eight folks who work on OpenStack internally. Why OpenStack? Well, as Mansi mentioned, we're customer zero. Our customers do development and deploy solutions on OpenStack, so we need to provide those same infrastructure services for all of NetApp internally as well.
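(To make that manifests-in-Git idea concrete, here's a minimal sketch of the kind of pre-merge gate a Jenkins job could run against a Puppet control repository. The repo path and job wiring are illustrative assumptions on our part; `puppet parser validate` and `puppet apply --noop` are standard Puppet commands.)

```python
# Sketch of a CI gate for Puppet manifests stored in Git. Paths are
# placeholders, not our actual pipeline layout.
import subprocess
import sys
from pathlib import Path

REPO = Path("puppet-control")  # hypothetical checkout of the manifests repo

def validate_manifests() -> bool:
    ok = True
    for manifest in REPO.rglob("*.pp"):
        # 'puppet parser validate' syntax-checks a manifest without applying it
        result = subprocess.run(
            ["puppet", "parser", "validate", str(manifest)],
            capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAIL {manifest}: {result.stderr.strip()}")
            ok = False
    return ok

def noop_apply(site_manifest: str) -> bool:
    # '--noop' simulates a run and reports what would change, changing nothing
    result = subprocess.run(
        ["puppet", "apply", "--noop", site_manifest],
        capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if validate_manifests() and noop_apply("manifests/site.pp") else 1)
```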
And from a development perspective, NetApp has been a charter gold member of the OpenStack community and has been involved in continuous driver development since the Folsom release. All of our drivers are contributed upstream, so you can take advantage of them today. So it was a natural extension to begin this journey and have those resources internally at NetApp. Now that you've heard about the bits inside, a little about the architecture we've deployed. Now the technical stuff. So it all started back in August 2014, when I first joined NetApp; this was my first project ever, so I'm very close to this one. We had decided to implement OpenStack inside our organization, but before we got started, we wanted to define a clear use case for it in our environment. We didn't want to do too many things at the same time and just get lost. Our goal was to deploy OpenStack as part of the GEC workflow, in order to deliver VMs via the KVM hypervisor. We still wanted our GEC portal to work as the user front end, with OpenStack as just the packaging for the KVM hypervisor on the back end. So now we had to come up with an architecture that was production worthy. We started off with phase one, the all-in-one controller. In this phase, we hosted all of the components of OpenStack on a single machine, and we had our Zenoss monitoring in place to evaluate the load on this architecture. We found that the Keystone and Horizon components were very chatty, due to token creation and the like, and were overloading the controller node. So we went to phase two. In this phase, we decided to split out the Keystone and Horizon components and host them on dedicated, separate machines. This relieved the controller node of all that load, so it could focus its processing power on the more important VM operations. Now that we had a stable architecture, we wanted to make it fault tolerant as well. We achieved this partially by going to phase three, HA services. In this phase, we spread the Keystone and Horizon components across three different machines and put them behind a pair of active-passive HAProxy load balancers. The only concern that remained was the controller, which was still a single point of failure for us. That takes us to the architecture we have in production today. So far, we've discussed the Keystone component and the Horizon component, and we also added a highly available Galera database cluster to our environment. Now, to address the controller issue, we decided to adopt a regions architecture. Let me explain what regions are in OpenStack. A region is its own deployment of OpenStack that shares a central authentication service and a shared dashboard. We tailored this architecture to our needs by also adding that highly available Galera cluster, as I mentioned before. We stamped out a region with one controller node, one database node, one MongoDB node, and 15 compute nodes. This made our architecture modular and allowed us to scale by adding more regions to the environment, all sharing the same Keystone and Horizon components that were pulled out into the shared region, which we termed region zero.
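(As a minimal sketch of that layout, the following models region zero plus stamped-out Nova regions in plain Python. The numbers mirror the talk; the class and field names are purely illustrative.)

```python
# A data-model sketch of the regions architecture: a shared "region zero"
# (Keystone, Horizon, Galera) plus Nova regions of 1 controller + 1 DB +
# 1 MongoDB + 15 computes, each good for ~1,000 instances.
from dataclasses import dataclass, field

@dataclass
class NovaRegion:
    name: str
    compute_nodes: int = 15
    vm_capacity: int = 1000   # one /22 of instance addresses per region

@dataclass
class Cloud:
    shared_services: tuple = ("keystone", "horizon", "galera")  # region zero
    regions: list = field(default_factory=list)

    def scale_out(self, name: str) -> None:
        # scaling out = stamping another region against shared region zero
        self.regions.append(NovaRegion(name))

    @property
    def total_capacity(self) -> int:
        return sum(r.vm_capacity for r in self.regions)

gec = Cloud()
for i in range(5):
    gec.scale_out(f"region-{i + 1}")
print(gec.total_capacity)  # 5000 -- five regions at 1,000 instances each
```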
So what does that Galera cluster host in our environment? It hosts our Keystone database, which again is pulled out into region zero, and it also hosts our Glance and Cinder databases, because those are shared across the different regions in the environment. This gives us a shared image store as well as shared Cinder volumes across all the regions. And as you can see, the Glance and Cinder services still reside in each of the Nova regions, but they have the same database back end, which is pulled out into our shared region zero. Now let's talk about the Nova and Neutron services. Those are hosted within a region, and their databases are hosted within the region as well, which allows us to give each region a /22 network slice, for a capacity of roughly 1,000 instances per region. How does this help us? It lets us scale out by 1,000 VMs at a time, and it gives us a starting point of 1,000 instances of capacity for each new region. Also, our 15 compute nodes are backed by the same shared NetApp NFS back end, which allows us to live migrate VMs between the different compute nodes within a region. That helps greatly during our non-disruptive upgrades, as well as when we want to put a compute node into maintenance mode. Let's talk about how this is highly available for us. As mentioned before, our users go to the GEC portal to request resources, and they don't really see the OpenStack dashboard, so they don't know there's this region thing going on in the back end. When an OpenStack request comes in, it gets routed to any of the regions in the environment depending on its capacity score as well as its health score (we'll show this as a small sketch at the end of this section). So if a region is in maintenance, or is full of instances and can't process anything further, the request gets routed to the other operational regions, keeping things highly available for our customers. Okay. So now that you're familiar with our architecture, let's talk about what we learned during this phase. First and foremost, we learned that some of the OpenStack services, Keystone especially, can be a bit chatty, and we made the early decision to host them on separate servers. Our Zenoss monitoring tool greatly helped us make that decision, and helped us get to the production stage very quickly as well. Also, using the regions architecture, we can scale out in capacity by adding more regions to the environment, as well as scale up in capacity by adding more compute nodes within a region. It also gives us a good starting point, with 1,000 VMs of capacity in each of our regions, and a predictable scaling model: we know that if we want to add more capacity, we can always scale by 1,000 instances in our environment. And the most compelling reason we went to the regions architecture was its similarity to the VMware and Hyper-V architecture we already had in our environment. Operations is a huge part of our engineering organization, and we wanted them to be comfortable with what we were doing with OpenStack. A region in OpenStack is analogous to a vSphere or Hyper-V cluster of 15 compute nodes, which we already have in GEC, and this made our operations team a bit less skeptical about adopting OpenStack as a new addition. We also get segmented operations: a region is a smaller OpenStack deployment in itself, so it's easier to support and troubleshoot when something goes wrong. You only have to worry about those 15 compute nodes if something goes wrong within a region, which is a good place to start.
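(Here's that routing idea as a small sketch. The GEC back end is internal, so the scoring fields and function below are hypothetical stand-ins; the point is just that unhealthy or full regions drop out of the candidate list.)

```python
# Sketch of region selection: the portal back end picks an operational
# region by health and remaining capacity, so a full or in-maintenance
# region simply stops receiving new requests.
from dataclasses import dataclass

@dataclass
class RegionStatus:
    name: str
    healthy: bool          # health score collapsed to a boolean here
    capacity: int          # total instance capacity (~1,000 per region)
    used: int              # instances currently placed

    @property
    def free(self) -> int:
        return self.capacity - self.used

def pick_region(regions):
    candidates = [r for r in regions if r.healthy and r.free > 0]
    if not candidates:
        raise RuntimeError("no operational region has free capacity")
    # favor the region with the most headroom
    return max(candidates, key=lambda r: r.free)

regions = [
    RegionStatus("region-1", healthy=True, capacity=1000, used=990),
    RegionStatus("region-2", healthy=False, capacity=1000, used=100),  # maintenance
    RegionStatus("region-3", healthy=True, capacity=1000, used=400),
]
print(pick_region(regions).name)  # region-3
```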
Regions architecture also allows us to mix multiple generations of hardware within our environment. Being an engineering organization, we keep getting new gear and keep replacing our old gear as well, so some of our newly added regions are quite a bit faster than the older regions. The regions with the latest hardware have more processing power, which lets us add more compute nodes and gives us more VM capacity within those faster regions as well. And the multi-region approach allows us to do non-disruptive upgrades, but I don't want to spoil that right now; we'll talk more about that later. Okay, so some advice for you. I know Mansi touched on a lot of these points and some of the lessons we learned at this initial stage, but really, from a monitoring perspective: set expectations early, and adopt a crawl, walk, run mentality. Hopefully the timeline of this presentation accurately reflects that. In the first stage, test things out, and monitor the services through the processes you may already have in place where virtualization is deployed internally in your organization today. Monitor them with tool sets like Zenoss or Nagios; those are a couple of good ones, but you may have something already deployed. Don't reinvent the wheel there: break out those busy services, but monitor them at the same time. And as I mentioned, infrastructure matters. For us, a converged infrastructure like FlexPod works hand in hand with the segmented multi-region architecture of the OpenStack services: it's very easy to do the same thing with the hardware underneath the compute nodes. We can easily add compute nodes to the environment, which I'll talk about on the next slide, selectively to those OpenStack deployments or regions that we have there, and the same goes for storage. It shouldn't be rocket science to add storage to an OpenStack deployment: just add a shelf. FlexPod is a modular architecture, so you're up and ready to go, and a lot of that really helps with the constant upgrades we do. We upgrade firmware on the compute nodes, we roll new storage operating system code, and we even bring in new hardware, like Mansi mentioned. The architecture needs to be able to support all of that. And most importantly of all, if your users don't notice that you're constantly doing upgrades, there are fewer phone calls, fewer tickets, and less headache. If you design for upgrades and scaling at the initial onset, you'll sleep better at night. So now let's talk about automation and upgrades. As I mentioned, converged infrastructure, FlexPod: we've got to start with a base before we enable some of that Puppet automation we hinted at earlier. As Mansi mentioned, there's a /22 network that we augment and implement on a per-region basis. We already had VMware and Hyper-V in the environment, and we had a series of automation there that spins up 802.1Q VLANs in the resulting architecture to support the eventual instances we're going to create. So the first step is to create those VLANs on the top-of-rack networking switches (a sketch of that step follows).
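(A hedged sketch of that VLAN step, using the netmiko library to push configuration to Cisco NX-OS switches. The switch addresses, credentials, and VLAN numbering here are placeholders, not our internal automation.)

```python
# Sketch: push an 802.1Q VLAN definition to the Nexus top-of-rack switches
# ahead of a new region. Hosts, credentials, and VLAN IDs are illustrative.
from netmiko import ConnectHandler

def create_region_vlan(switch_ip: str, vlan_id: int, vlan_name: str) -> None:
    conn = ConnectHandler(
        device_type="cisco_nxos",   # Nexus 5000/9000 top-of-rack switch
        host=switch_ip,
        username="admin",           # placeholder credentials
        password="changeme",
    )
    # send the VLAN config stanza to the switch
    conn.send_config_set([f"vlan {vlan_id}", f"name {vlan_name}"])
    conn.disconnect()

for tor in ("10.0.0.2", "10.0.0.3"):  # both top-of-rack switches
    create_region_vlan(tor, 1101, "gec-region-1")
```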
Next, we spin up NetApp FlexVol volumes for Cinder and Nova storage. A FlexVol volume can be thought of as an abstracted, virtualized storage container, similar to how VMware abstracted compute resources in the data center; at NetApp we have FlexVol volumes for storage. That storage is presented to both the Cinder and Nova nodes for eventual consumption, but it also hosts our boot LUNs. We do stateless booting, for those who may not be familiar with that: we boot all of our compute nodes via iSCSI, all hosted on the NetApp storage inside that FlexVol architecture. And here's what we can do: we've been really good at snapshots at NetApp for a number of years, and our copy-on-write snapshot technology creates instantaneous copies of the golden boot LUNs we keep. Think of it like VMware, where you create a golden template for your VMs and then customize it afterwards; we do the same thing in this infrastructure. We have a golden RHEL 7 image that we deploy onto the compute nodes, and with that FlexVol technology I mentioned, we can clone a 20-gig image in 0.3 seconds. It's very fast; it's an instantaneous clone. Then we assign the Cisco UCS service profile, which represents the compute node, to that boot LUN, so the node can come up seamlessly. Now Puppet takes over. We use open source Puppet for automating our OpenStack deployments. Our Puppet master represents the knowledge and the code necessary to spin up a production environment based on the architecture we discussed before. Based on that, we have eight different roles in our Puppet master. The first four roles, web, load balancer, Keystone, and Galera, make up our shared region zero, whereas the last four roles, controller, compute, database, and MongoDB, make up our Nova regions. When a node is fed to Puppet, after being assigned its boot LUN and service profile, it gets one of these roles assigned to it and is configured by Puppet for us. Thus, all of the nodes in our environment go through these two phases of automation, hardware as well as software, and get transformed into a controller node, a compute node, a Keystone node, and so on, finally giving us a production-ready OpenStack environment backed by NetApp storage. Then all we have to do is plug it into our GEC portal and make it available for our customers to use. Thanks to our Puppet automation and our modular architecture, we were able to deploy OpenStack Juno to production in just 90 minutes, with 45 compute nodes and a VM capacity of 3,000. And I believe we did all of that in just two months from starting on the architecture; in two months we were in production with OpenStack Juno, the latest release at the time. Okay, so now that we had done our deployments, it was time for upgrades. As mentioned earlier, we wanted to achieve non-disruptive upgrades while in production, and we were eager to go from the Juno to the Kilo release of OpenStack. We already had a modular architecture; now we needed a strategy that was repeatable for subsequent OpenStack releases, automated using Puppet, and non-disruptive for our end users. So we segmented our architecture into three different sections and decided to tackle them one at a time. First was region zero, the shared services. We decided to upgrade the Keystone nodes first, since Keystone is the central authentication service in the environment. We did that serially, which allowed us to maintain service continuity with no downtime for our end users; the sketch below shows the general pattern.
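(A minimal sketch of that serial pattern, assuming three Keystone nodes behind HAProxy with its runtime API socket enabled, and Puppet manifests already pinned to the new release. Hostnames and the socket path are illustrative.)

```python
# Sketch: take one Keystone node out of the HAProxy pool, let Puppet
# upgrade it, put it back, then move on to the next node.
import socket
import subprocess

HAPROXY_SOCK = "/var/run/haproxy.sock"  # runtime API socket (admin level)

def haproxy_cmd(command: str) -> None:
    # 'disable server' / 'enable server' are standard runtime API commands
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(HAPROXY_SOCK)
        s.sendall(command.encode() + b"\n")

def upgrade_node(host: str) -> None:
    # Puppet converges the node onto the new (Kilo) manifests;
    # 'puppet agent -t' exits nonzero when it applies changes, so no check=True
    subprocess.run(["ssh", host, "puppet", "agent", "-t"], check=False)

for node in ("keystone-1", "keystone-2", "keystone-3"):
    haproxy_cmd(f"disable server keystone/{node}")  # drain from the pool
    upgrade_node(node)
    haproxy_cmd(f"enable server keystone/{node}")   # back in rotation
```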
So once Keystone was upgraded, the Kilo Keystone continued to work with the Juno components, because of backwards compatibility in OpenStack. Then we moved on to the Horizon component and upgraded it serially as well. But our users don't see the dashboard at all; they go to the GEC portal, so this was non-disruptive again. Now that the shared region was done, we moved on to the region controllers. We toggled a region off in the GEC portal to stop any new deployments to it, then we upgraded the region controllers serially, and once a region was upgraded successfully, we toggled it back on in our GEC portal. So again, while the region controller was down, the VMs in that region continued to work, because they are hosted on the compute nodes; that caused no downtime for our end users. And any new requests that came in during that window were routed to the other operational regions in our environment, making it non-disruptive for new requests as well. Now it was time for our compute nodes. We upgraded the compute nodes serially within a region, and in parallel across regions, to cut down the upgrade time (we'll show this pattern as a sketch in a moment). We took the first compute node in each of our regions, live migrated all of the VMs off of it, and only when it was empty did we upgrade it to the Kilo release. As you'll remember, we had a shared NetApp back end for each of these compute nodes, which made live migration much easier and more efficient for us, and that really cut down our upgrade time too. So this was our upgrade strategy. We tested it rigorously in our staging environment, thanks to continuous integration and automation using Jenkins, Git, and Puppet again, and once we were comfortable with it, we rolled it out into production. Thanks to all of that testing, we were able to upgrade our environment with zero service interruption for our end users. Yes, we pulled it off. Okay, so that was our first experience with deployments and upgrades of OpenStack, and we learned a lot along the way. First and foremost, we experienced firsthand that you actually can do a non-disruptive upgrade in production. All you need to do is plan: define a strategy that suits your architecture, and then just roll with it after you're done testing it in the staging environment. Also, things were not all happy. Some things went wrong during the upgrades, but they were on the compute-resource side: we had some bad memory and DIMM errors that caused roadblocks during our upgrade process. We learned that next time we do an upgrade, we should investigate and fix the hardware first; once we're sure about the hardware underneath, then we roll out the software upgrade on top of it using Puppet. We also forgot to take into account the VMs that were in the powered-down state and needed to be cold migrated. We took all of these lessons and documented them, so we could apply them to our next upgrade cycle and make it a smoother experience. We documented the good things and the bad things alike: the good things feed our best practices for deploying and upgrading OpenStack in our environment, and the bad things remind us that we need to keep improving our strategies as we go. Absolutely. And some advice for you. The takeaway from this section is really to define a strategy for upgrades and non-disruptive operations in OpenStack.
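(To make that compute procedure concrete, here's the serial-within-a-region, parallel-across-regions drain as a sketch, using openstacksdk calls as we understand them. The region names, hostnames, and clouds.yaml entry are assumptions, and a real run would also poll each migration to completion before moving on.)

```python
# Sketch: drain each compute node with live migration, then let Puppet roll
# it to the new release. Serial within a region, parallel across regions.
from concurrent.futures import ThreadPoolExecutor
import subprocess
import openstack

def drain_and_upgrade(conn, hypervisor: str) -> None:
    for server in conn.compute.servers(all_projects=True, host=hypervisor):
        # shared NFS back end keeps this cheap; the scheduler picks a target.
        # A production script would wait for each migration to finish here.
        conn.compute.live_migrate_server(server, host=None)
    # only once the node is empty does Puppet upgrade it (exit 2 = changes)
    subprocess.run(["ssh", hypervisor, "puppet", "agent", "-t"], check=False)

def upgrade_region(region: str, hypervisors: list) -> None:
    conn = openstack.connect(cloud="gec", region_name=region)
    for hv in hypervisors:              # serial within a region
        drain_and_upgrade(conn, hv)

regions = {"region-1": ["compute-1-1", "compute-1-2"],
           "region-2": ["compute-2-1", "compute-2-2"]}
with ThreadPoolExecutor() as pool:      # parallel across regions
    for region, nodes in regions.items():
        pool.submit(upgrade_region, region, nodes)
```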
As we're all well aware, there's a six-month release cadence, but it doesn't have to be a pain point in your OpenStack deployment if you plan ahead and have automation that suits your user segments and can scale with both the deployment of OpenStack and the upgrades of OpenStack. A lot of that goodness that Mansi took us through really helps us do that with Jenkins. That automation also needs to support your user segments, so don't try to reinvent the wheel with your own automation; there are plenty of great tools out there in the community that you can take advantage of. And one thing to take away from that: whenever you scale this out globally with a global team, the team is better served by known automation, something that exists out there and is customized for you, versus a roll-your-own type deployment. And infrastructure matters. As I said, that really helped us take the OpenStack deployment internally to the next level, because it could scale with our architecture, and the non-disruptive operation of that infrastructure really helped us be successful and take it global. Now, I'm a numbers guy, and I mentioned earlier some of the storage efficiency we utilize, the fast, space-efficient clone copies we make inside of NetApp. As an exercise, we added up all of the storage that we're using in our OpenStack environment, and it came to about 64 and a half terabytes. That covers all of the Glance image stores and the Cinder volumes in the architecture, as well as the boot LUNs I mentioned earlier. Then, through the space-efficiency technologies we have enabled on those volumes, thin provisioning and autogrow, we looked at exactly how much space all of our OpenStack volumes actually consumed: 5.26 terabytes. That's a 91.8% savings. And that included our 50 Glance images as well as the roughly 4,000 VMs we had in production. The savings were awesome. Yeah. All right. So we've talked about what we did locally inside of RTP; now it's time to go big. Yeah. So, towards the end of 2015, we already had an OpenStack environment running in production at our local site in North Carolina. It had five regions, 75 compute nodes, and a VM capacity of 6,000; we had scaled it out by then. And by this time, our operations team was comfortable enough to support OpenStack on a daily basis. We had successfully deployed as well as upgraded OpenStack while in production, and we were constantly refining our automation and documenting all of the lessons we were learning while gaining experience with OpenStack. All of these factors motivated us, and we decided it was time for us to go big. We are a global organization, so we wanted to expand the scope of OpenStack globally as well. We had our core Puppet master here at RTP, North Carolina, representing all of our learnings, our experiences, and our best practices with OpenStack, and we decided to use it as the basis to spin up three more Puppet masters at each of our other sites. The first one was in Bangalore, India; then Sunnyvale, California; and then another one in RTP, North Carolina. Each of these Puppet masters was then used to spin up its own OpenStack environment. The smallest one was in Bangalore; it was pretty small, with one region, seven compute nodes, and a 100 VM capacity.
That was deliberate: we didn't want to do too much at the same time. We wanted people to get a feel for OpenStack and get acquainted with it, and after that, scale it out to the massive proportions we had in RTP. So Bangalore was a starter environment, as was the one in California, which had two regions, 14 compute nodes, and again a smaller 600 VM capacity. And the largest one in the globalization effort was a smaller site in RTP, North Carolina, with three regions, 37 compute nodes, and a VM capacity of 2,000. So now it was time for global upgrades to the Liberty release. As mentioned earlier, we now had four different sites, not just the one we had before. The smallest was the Bangalore one, which we decided to do first, because we were nervous about doing it in production again. That one had just 14 total nodes to upgrade, and it took us just an hour to bring the environment to the newer Liberty release. Then we moved on to a slightly bigger environment in California, with that 600 VM capacity and 22 total nodes to upgrade; it took us just an hour and a half to move it to the newer Liberty release. After back-to-back successes upgrading in production, it was time for us to come home to North Carolina. We started with the smaller site, which had 30 total nodes to upgrade, and it took us just two hours. And for the biggest site we have, there were 86 total nodes to upgrade, six times larger than the smallest production environment we had in Bangalore; but thanks to our serial and parallel approaches, it took just four hours to upgrade that entire environment to the Liberty release. Also, as I mentioned, or maybe I didn't mention this, we use the RDO distribution internally at NetApp; we are all about open source. So you also have to define a strategy for the release cadence of RDO. Take the Mitaka release, which just came out recently: I'm sure Mansi is very excited to employ the same methodology once we get back home, and to globally upgrade all of the sites we have to Mitaka. If you look at this circular chart right here: we start with a staging-type environment, where we download a release candidate of RDO and test our Puppet automation against it first, making sure that it both deploys cleanly and upgrades from the previous release, Liberty to Mitaka, and we do that in staging. We refine the automation there. Once the GA release comes out, we roll it into a segmented region, where specific users can go in and we can make sure it functions properly with the Global Engineering Cloud front end we've been talking about. Once two weeks pass there and we're confident with it, we roll it out to a smaller site like Bangalore, monitor that, and then roll it out to all the rest of the sites (we've captured this cadence as a simple staged pipeline in the sketch below). So, a couple of lessons learned from this section. For us, Kilo to Liberty was much easier. Once we had documented those lessons learned from the previous upgrades, like I said, this keeps getting easier and easier. Definitely kudos to the community for a more polished release cadence and for making the releases easier for us to take advantage of internally. It keeps getting easier and easier for us, but really, take this away: set expectations. OpenStack is a lot different from what we've experienced internally in the past. It's not VMware, it's not Hyper-V, but that's okay.
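(That cadence, captured as the staged pipeline we just described: each stage has to pass a gate before promotion. The gate checks below are stand-ins for the real criteria, CI green, soak time elapsed, no new tickets.)

```python
# Sketch of the RDO rollout pipeline: staging -> segmented region ->
# small site -> global. Stage names mirror the talk; gates are stand-ins.
STAGES = [
    "staging: deploy + upgrade the release candidate, refine Puppet",
    "segmented region: GA release behind the GEC portal, limited users",
    "small site (Bangalore): first production rollout, monitored",
    "all remaining sites: global rollout",
]

def gate(stage: str) -> bool:
    # stand-in for real checks: CI green, two-week soak, no new tickets
    print(f"validating: {stage}")
    return True

def roll_out() -> None:
    for stage in STAGES:
        if not gate(stage):
            raise RuntimeError(f"halt rollout at: {stage}")
        print(f"promoted past: {stage}")

roll_out()
```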
Adopt a crawl, walk, run methodology. Test things out locally before rolling them out, and have the backing and support of your management as well; I can't stress that enough. And with common automation like what we've demonstrated, you can really set yourself up for success there. And of course, as I mentioned, the NetApp storage we had played a really positive role in the deployment and orchestration of our OpenStack environment. It's non-disruptive and very easy to scale, and one thing we found with instance creation: the NetApp unified Cinder driver is actually 50% faster than using the generic NFS driver in tandem with NetApp storage (a sketch of how you might reproduce that kind of measurement follows this section). Go ahead. Okay, so time for some advice. As Dave said, always test your upgrade strategy in the staging environment before you roll it into production, and make use of known automation and continuous integration (CI/CD) tools, so that your operations team and the other folks on your team are already familiar with them and there isn't a huge learning curve just to test your strategy. Also, read through the release notes carefully. We missed this a bit while doing the upgrade to the Liberty release: there was a new parameter added that limits the number of concurrent live migrations in an environment, and by default it was set to one (we believe this is Nova's max_concurrent_live_migrations option, introduced in Liberty). That can really bog down your upgrade times, so you should know about these parameters before you do your upgrades. Also, we are a globally dispersed team, so we have global peers who need to be brought up to speed with OpenStack in our environment. As Dave mentioned again, you should always test your automation in your local geography before you roll it out globally. This makes you more prepared and confident when rolling it out to different production environments, and if something goes wrong, you're there to brief your peers on whatever challenges they might face. And training sessions also play a huge part in globalization efforts. We went through that: we converted our months of documentation and our experience with OpenStack into resources to bring our global peers up to speed and get them acquainted with OpenStack. What this enabled was that each site took full control of its own Liberty upgrade, and we spent just a week upgrading all four of our OpenStack environments to the new Liberty release, thanks to those training sessions and the enthusiasm of our operations people. All right, now a little bit of a reflection on next steps and where we're taking things internally. We already have some OpenStack projects in the pipeline. Ironic is really hot right now: through the GEC portal we provide only virtualized resources today, and we want to expand that scope to bare-metal resources as well using the Ironic project. The second is Trove, which is database as a service. Database servers are very popular with our test engineers, and we want to use the Trove project to provide them with a wide variety of databases in the form of OpenStack images through our GEC portal. And then there's the OpenStack Manila project. That's a project NetApp spearheaded about four years ago, and it was recently certified production ready in the Liberty release. We're certainly going to take that internal for consumption by our developers and testers. That's the file-shares-as-a-service project, for those who may not be familiar.
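(Stepping back to that driver performance claim for a moment: we're not publishing our harness, but if you want to run a similar comparison in your own cloud, a minimal timing loop with openstacksdk might look like this. The dominant step for boot-from-volume instances is cloning the boot volume, so timing volume-from-image creation under each volume type is a reasonable proxy. The cloud name, image name, and volume-type names are placeholders.)

```python
# Hypothetical harness: compare Cinder back ends by timing how long it
# takes to create a bootable volume from the same Glance image under
# each volume type. Assumes a clouds.yaml entry named "gec".
import time
import statistics
import openstack

def time_volume_clone(conn, volume_type: str, runs: int = 5) -> float:
    samples = []
    for _ in range(runs):
        t0 = time.monotonic()
        vol = conn.create_volume(size=20, image="rhel7-golden",
                                 volume_type=volume_type, wait=True)
        samples.append(time.monotonic() - t0)
        conn.delete_volume(vol.id, wait=True)  # clean up between runs
    return statistics.median(samples)

conn = openstack.connect(cloud="gec")
for vt in ("netapp-nfs", "generic-nfs"):  # one volume type per back end
    print(vt, round(time_volume_clone(conn, vt), 1), "seconds (median)")
```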
Also, we're in the planning and implementation stages of deploying Cisco Application Centric Infrastructure (ACI). We're going to utilize a lot of the orchestration and policy engine that ACI provides to unite physical and virtual resources, and to create multi-tenant, segmented lab environments for our users to take advantage of in an OpenStack context. We're also going to repatriate some of our workloads from the public cloud. As I mentioned, our OpenStack development team has some resources in the public cloud today, and given the speed and ease of this automation and the success it's brought us internally, we're going to repatriate some of those workloads back internally, to handle the scale the development team needs at NetApp. Not all of the workloads, just some of them. And we're looking at bursting to the public cloud as well. That portal Mansi mentioned, with the ease of selecting resources regardless of hypervisor or region, we're going to extend to the public cloud too, to the likes of Amazon AWS or Microsoft Azure, so our engineers and developers can run those workloads in the cloud as well. So, as I mentioned earlier today, we're at 15% KVM, 45% Hyper-V, and 40% ESXi right now. And I'm proud to say that, through all of the good work we've done internally at NetApp, we're planning by the end of September 2018 to be 70% on the KVM hypervisor, 15% on ESXi, and 15% on Hyper-V. We're all in on OpenStack at this point, and through all of the goodness of automating the deployments and the upgrades, this is something that really makes sense for us, and we're going to continue to invest in this space. Some other collateral, while I have a couple of minutes left. If you'd like to take advantage of some of these infrastructure bits to be successful in your own environment, here's some collateral I want to make sure you're aware of. We at NetApp just published a new technical report based on the Red Hat OpenStack Platform 8 release, which came out a week or so ago and is based on the Liberty release of OpenStack. It has prescriptive guidance, deployment instructions, architectural diagrams (we love diagrams), and code collateral contained within the document to help you accelerate your production deployments of OpenStack. And we had a previous solution released in October of last year: Red Hat Enterprise Linux OpenStack Platform 6, based on the Juno release, on FlexPod. It's a Cisco Validated Design; those links are here. And if you'd like to hear more about how FlexPod can accelerate your production deployment of OpenStack, I'm giving another session today at 11:50 a.m. on this floor, I think in meeting room 19 over there, if you'd like to hear more about that. And some key takeaways we want to leave you with. Make sure you have a good foundation, as we've been stressing: a foundation you can count on from both an infrastructure perspective and an architecture that makes sense on top of that infrastructure. FlexPod-like converged infrastructure provides a scalable and highly efficient platform for us internally at NetApp, and it can for you too. Also, with OpenStack, you should always set your expectations right, plan ahead, and document well. These things really come in handy when something goes wrong in production. Not that it does, but sometimes.
Automation and non-disruptive upgrades also played a huge part in our success with OpenStack. The non-disruptive upgrades allowed us to pick up all of the recent developments and goodness of OpenStack without causing any downtime for our end users, and the automation allowed us to stay consistent across the various global sites in our organization. And last but not least, thanks to the trials and tribulations we went through with OpenStack, our Global Engineering Cloud today is backed by an OpenStack ecosystem that is highly available, upgradable between releases, and provided at scale across different geographical regions. Okay, unfortunately, I want to be respectful of time. If you want to come up and talk to us afterward, down at the bottom of the stage right here, we'd be happy to answer any questions you have. And I can even give out some pamphlets from our booth that list the OpenStack sessions; a lot of them have occurred already, but, for lack of a better word, they're all listed in here, and you can of course watch them later on YouTube. And that's our presentation. Thank you so much. Thank you.