Yeah, wasn't too bad. Good morning, and thank you for joining us today. My name is Sumit Kumar, and I'm a TME at NetApp. We also have with us today Andrew Sullivan, who is a TME as well, and Jim Hall, who is a principal engineer at NetApp. We're going to try and keep things a little interesting today, so Andrew is going to play the part of a developer and Jim is going to pretend he's from IT. To be clear, I have no business developing anything. And Jim fits this role very well because he's very serious. Yes, absolutely. So we're going to talk about how we're starting to use OpenStack and Docker to accelerate development and deployment, some of the key areas of impact we see for Docker, and then some potential solutions to problems that IT and dev teams usually run into. All right, so talking of problems, Andrew, you weren't very happy yesterday. Tell us what happened. You know, my guys are working really, really hard. They really are. They're writing all kinds of things. The problem is when we go to build, when we go to actually test these things, we're waiting for weeks on the IT guys. Hey, don't blame us. It's like we're sitting around waiting for you to ask us to spin up a VM or a database or something. We have actual work to do. I'm pretty sure that's what we pay you for. OK, guys, calm down. So NetApp IT faced a similar challenge, and we found a solution that works for us. The approach we took was a hybrid cloud architecture focused on self-service and automation. Let me try to explain that visually. At the top, we have the self-service portal that serves as a gateway for our devs to go in and request compute, storage, and networking resources. Think of it, in very simple words, as a graphical user interface where you can go and set your requirements and request, let's say, a specific flavor of a virtual machine. 
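To make that concrete, here is roughly what a portal request for a VM can translate to underneath, expressed with the OpenStack CLI. This is a sketch only: the flavor, image, network, and server names are made up, and a real portal would drive the same thing through the OpenStack APIs rather than the command line.

```shell
# Hypothetical example of what a self-service portal request becomes
# under the hood. All names (flavor, image, network, server) are
# illustrative, not from the actual NetApp IT deployment.
openstack server create \
  --flavor m1.medium \
  --image ubuntu-14.04 \
  --network dev-net \
  dev-workspace-01

# The portal can request storage the same way, e.g. a 50 GB Cinder volume:
openstack volume create --size 50 dev-workspace-01-data
```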
Now, next, we have the cloud management platform, which takes over in the next phase. Based on the type of workload the developer has requested, it decides which cloud offering is best suited for that kind of workload. So for example, take a disaster recovery scenario: it's a use case that you're going to exercise very rarely. So it may not make much sense to spend on space, storage, power, and hardware maintenance for that kind of use case. And if you're only using it once in a while, and those are the times when you need a lot of compute resources, it might make more sense, from both an efficiency and a performance point of view, to just burst onto the public cloud. Next, we have the enterprise private and public clouds, all augmented by NetApp storage. OpenStack plays a key role in keeping our costs low, and it also gives us the control and flexibility that we need. And at the bottom, we have the NetApp Data Fabric, which gives us the control over our data that we need. It allows us to be flexible, to move our data around, and to not have to worry about paying huge sums of money to public cloud providers. OK, so let's recap and summarize the solution. We had to change how we did things: we moved from a traditional IT model to a hybrid IT model, and that included how IT delivers its services. We moved from being a builder and operator of services to being a broker of services. And like I said, we also did not want to be tied down to a single solution. We wanted to leverage all the different platforms and the best things they had to offer, so we went with a hybrid cloud approach. And the approach I showed you in the previous slide allowed us to stay in control of our data. 
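The placement decision the cloud management platform makes can be reduced to a toy sketch like the following. The workload categories and the mapping are invented for illustration; they are not how any particular CMP is configured.

```shell
#!/bin/sh
# Toy sketch of the placement decision a cloud management platform makes:
# rarely-used, compute-heavy workloads (like DR tests) burst to public
# cloud; steady-state workloads stay on the private OpenStack cloud.
# The categories here are made up.
place_workload() {
  case "$1" in
    dr-test|burst-compute) echo "public" ;;   # rare, bursty, compute-heavy
    *)                     echo "private" ;;  # steady-state, cost-sensitive
  esac
}

place_workload dr-test   # a disaster recovery test bed
place_workload web-app   # an everyday business application
```

A real CMP weighs cost, performance, and data locality, but the shape of the decision is the same.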
Now, this philosophy also allowed us to embrace new technologies and move with pace. Another thing we did to keep things consistent and easy for our developers is utility-style chargeback, even for our private cloud. For example, if someone's already used to how AWS or Azure charges you, you pay depending on how long you use a specific resource. So if I use compute for five hours, I only have to pay for five hours. That makes sense, and we have that kind of chargeback even with our private cloud. Now, automation played a very key role because it allowed us to move from a ticket-based system to a self-service system. And that really helps, because think about it: if a developer, let's say Andrew, wants to go in and request a database or a specific environment, he no longer needs to submit a ticket and wait hours, even days, for someone else to set it up for him. He can request it, and it gets provisioned for him. That's a big win. So, Jim, I know you love automation. Oh, I like automation, right? Let's make all this stuff take care of itself so I can relax and get my real work done. Okay, and? You know, that sounds really great, Jim, but VMs are easy, right? VMs are a piece of cake. We've been automating the stand-up of VMs for years. What about my storage? I've got a lot of stuff that we've got to move around. Can we automate that? Absolutely, we've got to automate it the same way we do our virtual machines. Yeah, we'll talk about that. So, before we jump into that, let's talk about VMs and VDI a little more. You can have instant workspaces with OpenStack and NetApp; that's no surprise. Now, if you want to set up a virtual desktop infrastructure, totally end-to-end, automated and secured, you can do that as well. We won't jump into the details here, but there's a company called Leostream, and they're just one of the providers. 
If you use their services, for example, then you have a cost-efficient and flexible VDI service on your OpenStack cloud, and that's great. Now, if you tie that in with, let's say, Manila snapshots or enhanced instance creation from NetApp, you can literally provide instant workspaces for your teams, and that cuts quite a lot of time from your entire development cycle. Basically, your teams can get started instantly on actual work and not have to worry about requesting a specific environment from IT, waiting for it, and so on. All right, so, again, we talked about hybrid cloud. What do you feel about it, Jim? How do you think it's going to affect migrating workloads from one cloud to another? Am I going to be on call, so whenever a developer wants to migrate something, I have to immediately be able to respond with a tight SLA? And if I am, how am I supposed to maintain standardization across different clouds? Right, so one very simple solution — well, I shouldn't call it simple because there are complexities involved — but one potential solution would be Docker. Now, how many of you have heard of Docker? I'm going to assume most of you have. Perfect. Have you run the docker run command? Okay, perfect. So, for those of you who are not aware of it, think of it as a lightweight alternative to traditional virtual machines that focuses on abstracting away the underlying architecture and the associated complexity. You basically take an application, you put it into a Docker container, and now it's portable. You can run it anywhere, and you don't have to worry about setting up different environments or even compatibility between different clouds. And to be honest, I think that Docker and OpenStack work better together. Now, I might be biased, because when I started working for NetApp, I was part of the OpenStack team, but I was working on Docker. 
That was my initial area of focus. But like I said, I've always seen these two technologies working together, and I think they work better together. The reason I say that is because it's very common to find OpenStack being used in conjunction with another cloud, whether private or public. Docker comes into the picture and provides us with a way to really make these applications portable and run them anywhere. So think about it for a second. I mentioned mobility and how you move things around in a hybrid cloud. If you use Docker to make these applications portable, you don't have to worry about, one, maintaining different environments, and two, compatibility between these different clouds. For example, you can use purely OpenStack in your development environment when you want to keep your costs low, and then in production, when you need additional compute resources, you can burst onto the public cloud with ease. So we have a quick demo now that will show you how you can containerize a Java application. This is a Java application, but the process is more or less the same for any other application stack. All right, so we start off by starting our Docker Machine, which serves as a host for our containers, and it gets assigned an IP address. Note that this might not be the same depending on the host operating system that you're using, but this is a Mac, so this is how it works. Now we run the docker ps and docker images commands to check what images and containers we have right now, and as you can see, we had none. So we'll pull a Docker image that has Java already configured on it. We can use tags to pull a specific flavor, so I can have the latest version of Java, or a specific version, or a container with just the JRE or the JDK. So you have that flexibility. 
So we can see that we indeed have the container image with the latest version of Java pulled. We go ahead and run the container now. You can do that using the docker run command, providing the container with a name if you want to, giving it the image that you want to use, followed by the command that you want to run. In this case, we're just running java -version. Very similarly, you can also bring up a bash shell. In this case, for example, you just add bash to the end, and then you can do whatever you want from within the container's shell. So all that's good. Now, this is one obnoxious issue you can keep running into, especially if you use a lot of tabs in the terminal; I do that all the time. It'll basically tell you, hey, I can't find a Docker daemon, what's going on? You get around it by reconfiguring your shell. Once we're done with that, we're going to try and see if we can mount a volume and do some cool stuff. All right, so mounting a volume is pretty straightforward. You just use the -v command-line argument, and then you provide it with the directory that you want to mount and also where you want to mount it within the container. In this case, what we're doing is mounting the project directory within the container so that it's accessible, and then we can package it and, let's say, run the application from within the container. Okay, so it was mounted. We're going to make sure that it mounted correctly. To check that, we create a test HTML file within our project directory and then go inside the container to make sure we can see it. So there we see that we created the test HTML file, and once we're inside the container, we can see it over there as well. So our volume mounted correctly. Now let's try to run the containerized application. There are two ways to do that. 
One, obviously, is you can do it from within the container itself. But the interesting way to do it is, once you have everything set up, you can create an image, and now you can run it externally. And this is really useful, especially if you want to automate the entire process; we have another demo where we show you what you can do with that. So this, again, is running your application from outside while it's just sitting in the container. It's very simple: as you can see, we're providing it with very simple parameters. And once we have finished running the application, we try to access it. Here we are using the IP address that was assigned to us in the first step, when we started Docker Machine. So, like I said, that was a very simple demo, but this is the starting point. This is how you make things better. Okay, so: persistent storage with Docker. The volume that we mounted is not going to stay persistent, and I'm going to let Andrew explain that to you, because I know, Andrew, you play around with Docker a lot in your dev environment. So tell us why persistence matters. "A lot" would be an understatement. Okay, so Sumit summarized it really succinctly: Docker is a tool for decoupling the application from the underlying operating system. It's a new method of deploying our applications. We take the binaries, we take whatever is associated with our application, its dependencies, and we put it into a package, the Docker image, and we can deploy it wherever we need to. And this probably sounds familiar, right? This is the premise behind OpenStack, behind Nova: automated deployment of the application into that environment. But you'll remember that one of the original projects inside of OpenStack was Cinder — back before it was even called Cinder. So applications need persistence. It doesn't matter what you're doing, you're processing data. 
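Before we leave that demo behind, the whole flow condenses to just a handful of commands. Treat this as a rough sketch rather than a verified script: the image name, port, and machine name are illustrative, and the exact docker-machine invocation depends on your setup.

```shell
# Rough sketch of the containerization demo; names and ports are illustrative.
docker pull java:latest                           # image with Java preinstalled
docker run --name demo java:latest java -version  # run a one-off command
# Mount the project directory into a container and work inside it:
docker run -it -v "$PWD/project:/project" java:latest bash
# Bake the prepared container into an image so the app can be run externally:
docker commit demo myapp:latest
docker run -d -p 8080:8080 myapp:latest
# On a Mac, the app is then reachable at the Docker Machine IP:
curl "http://$(docker-machine ip default):8080/"
```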
All businesses have data. All businesses need data. We thrive on data. Data is money. So how do I persist my storage when it comes to containers? Because if I take my application and move it from a Nova instance into a container, that need doesn't go away. But it changes a little bit. The first thing that changes is that now we have to attach our storage directly to the container. There are a number of different ways we can do this, and one of the first ones that NetApp worked with was a company called ClusterHQ. They have a product known as Flocker, and, particularly when we're talking about OpenStack, they have Cinder integration. So if you are running a Docker cluster — Swarm, Kubernetes, Mesos, whatever it happens to be — and you want to leverage Flocker, you can use it to automatically provision and attach Cinder volumes. It manages those volumes regardless of where the container instance gets created. So if I instantiate it on host A or virtual machine A, and I move that container to B, Flocker manages that movement for us as well. So from a direct Cinder integration perspective, Flocker is a really great tool. But it's not the only tool we have at our disposal. Very recently, our engineering team released the NetApp Docker volume plugin. These guys worked really hard to create a native integration for clustered Data ONTAP with the Docker volume paradigm. The Docker volume paradigm decouples persistent data management from the container, much like containers decouple the application from the operating system. So now I can independently create a Docker volume — docker volume create — and it exists outside of the construct of our container. With the NetApp plugin, I can use that to create LUNs and NFS shares coming from clustered Data ONTAP. 
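In practice, that workflow looks something like the following. The driver name and the size option follow the plugin's documentation at the time of this talk, but check the current docs, since option names may have changed; the volume and image names here are made up.

```shell
# Hedged sketch of the NetApp Docker volume plugin workflow. The volume
# is backed by an NFS export (or a LUN) on clustered Data ONTAP.
docker volume create -d netapp --name pgdata -o size=1g
# Attach the volume to a container and write something into it:
docker run --rm -v pgdata:/data busybox touch /data/hello
# The volume outlives the container, so a fresh container sees the data:
docker run --rm -v pgdata:/data busybox ls /data
```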
And in the very near future — it's checked into our build tree, and we're waiting for the last few approvals — we'll have E-Series as well. And as soon as I can break John Griffith away from OpenStack Summit, we'll have SolidFire built in too. So all of this is moving very quickly. And the greatest part of it: it's open source. That means anybody can go use it, see it, improve it. We love to get feedback. It's really easy to find: github.com/netapp. So please, if anybody's interested, go check it out. A couple of additional details about that, as I already touched on: it's a plugin system itself that allows us to have a bunch of different storage back ends, so we're able to include all of NetApp's portfolio products. But not only that, we can customize that storage based on what the user wants to do. For example, if I want to provision against an all-flash aggregate, when I configure that Docker volume plugin, I can specify an all-flash aggregate. If I want to customize my snapshot schedule, I can do that on a per-volume basis. And if I have the snapshot directory turned on, which it is by default, that means the user can now self-serve their own recoveries. They don't have to go to anybody else. They can go into that snapshot directory directly from inside of their containerized application and recover the data if they want to. And those are things that we can change at any point in time. If you want to add a volume option, you can. Again, it's open source; that's the great part about open source. So I have a very brief demo — about 40 seconds — of what this actually looks like, and it should be pretty self-explanatory. What we're going to see on the top side of the screen is instantiating a container leveraging a Docker volume created from NetApp. 
The bottom side is simply a watch command that shows us what volumes exist on the NetApp that match our naming pattern. So as we look at this — I broke it. There you go. Thank you. Sure. So as we watch this: pretty easy, docker volume create. As Sumit showed earlier, Docker is a pretty self-explanatory, pretty easy-to-use binary. If you haven't used it before, it's really easy to pick up. Don't be intimidated by the massive documentation that they have. All I'm showing here is that I have created a volume and now I'm going to attach it to a container. I show that there's nothing inside of that container — I'm sorry, in that volume — and then I create something inside of there with the touch command. We then exit the container, destroying that container, then instantiate a new container with the same volume, and effectively prove that, hey, our data persisted. And if you look down below — I spoke too slowly, so the volume was already destroyed — but if we look below, we can see that it is actually a volume over on the NetApp, being mounted through NFS. This demo is configured with NFS, but you can also use iSCSI if you choose to. Right, so we talked about Docker. We said all of those good things, but you have to understand that Docker might not be the best solution for you. There are so many things to consider, and you want to make sure that you're making the right decision. Now, there are lots of good resources online that can help you understand whether something works for you, and if not, why not. We at NetApp are active participants in the Docker space as well, and we constantly try to create content that we think is going to be beneficial for you. For example, we recently published a blog post that walks you through whether or not Docker makes sense for an enterprise use case. So go check it out and let us know if you find it useful. And if you have any questions, please feel free to reach out to us. 
We are happy to answer those at any point in time. It's really easy to get a hold of us, too: opensource@netapp.com. So please, by all means, don't hesitate to ask questions after the session or reach out over email. Okay, so we talked about Docker and we talked about higher infrastructure utilization through Docker. So Jim, how do you feel about — hold on, yeah. The utilization is great for the bean counters. But if the utilization gets too high, it gets really hard to manage. I mean, I'm going to be missing SLAs all over the place if you crank the utilization all the way up to 100%, right? Yeah, that is one challenge. If you think about it, deciding to go with Docker puts us in an interesting dilemma. On the one hand, you have higher infrastructure utilization — you have more applications running on the same hardware. But at the same time, your compute capacity remains the same. So you now have more applications demanding compute resources, and there's going to be a point in time when you run out of compute. How do you deal with that? One potential solution, if you don't want to spend money and buy more compute, is to burst onto the public cloud. So imagine I have my private cloud, it runs on OpenStack, I have my compute nodes, and I've run out of compute. There is a new driver from a company called ThoughtWorks which adds Amazon EC2 as a compute node, and then you can put it in a different availability zone if that makes sense for you. It's just AWS for now, but you could also extend it to, let's say, Azure or some other platform. The driver is open source, and it provides you with automation and control tools. 
For example, you can automate it to burst onto the public cloud, let's say AWS, when you have reached 90% of your private cloud's compute capacity. It also allows you to set limits so that you're not hit with a surprise bill — you don't want to go, oh my God, I've got to pay all this money now. So that's always good to have. It's available on GitHub, and it's open source, so you can go contribute to it. I've played with it, and it's really cool. So we've got a very quick demo now that will show you how you can burst onto the public cloud. In this case, we're going to imagine I have a LAMP-stack-based application — in this case, WordPress — and I have run out of compute resources. I'm going to walk you through a bunch of manual steps, but like I said, all of this can be automated, so you don't even have to know that it's working under the hood. Okay, so we log into Horizon on our OpenStack side, and as you can see, we chose the public cloud availability zone, and everything else remains the same. Now we go into the post-creation step and add some scripts, just to make sure that our environment is set up properly on the public side as well. If I go into my AWS dashboard — this is after I clicked create instance — I can see that my instance, my VM, is being created here. And once that's done initializing, we go ahead and copy the public IP so that we can access our application. But before we check our application, let's check the system logs to make sure our environment was set up properly. And like I said, you can automate all of this; once you configure it, you don't have to do any of this manually. 
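That automated policy — burst at 90% utilization, but never past a spend cap — is simple enough to sketch in a few lines of shell. The threshold and budget numbers here are placeholders, not anything from the actual driver.

```shell
#!/bin/sh
# Toy version of the burst policy described above: go to public cloud
# only when private utilization crosses a threshold, and never past a
# spend cap. All numbers are illustrative.
should_burst() {
  util=$1     # current private cloud utilization, percent
  spend=$2    # public cloud spend so far, dollars
  threshold=90
  spend_limit=500
  if [ "$util" -ge "$threshold" ] && [ "$spend" -lt "$spend_limit" ]; then
    echo "burst"
  else
    echo "stay-private"
  fi
}

should_burst 95 100   # over threshold, under budget
should_burst 70 100   # plenty of private capacity left
should_burst 95 600   # spend cap already reached
```

A real setup would feed this from monitoring data and have it trigger the EC2 bursting automatically, rather than printing a decision.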
So there we go: our environment was set up properly. Now, once we go to that IP address, I can see my Apache server running, and then I can access my WordPress application. So again, here my application was utilizing compute from AWS while being managed and run on my private cloud on OpenStack. You know, this is all well and good, although honestly, I'm a little disappointed, because we would always have internal competitions to see which builds could generate the most phone calls for Jim. So I'm a little disappointed in that respect. But it's all well and good — what about the data? I can't reliably build these applications, I can't reliably push to production, if I'm not using production data during the test process. So how can we address that? Sure, we can do that as well. Again, another demo — let me start that. So that's a regular scenario: the developer pushes code to GitLab, that triggers Jenkins, and you build and deploy and so on. We're going to bring NetApp Snap Creator into the picture and create a clone of a production database, and it's going to be efficient and fast. And then we can run tests against that cloned database. So if you're familiar with Jenkins, this is a pretty basic step. We're just going to create the first phase, the build phase, of our project. We provide it with the URL — I've already got it configured — and then, for the goal, we're going to try to deploy our application in a container. So it will get deployed in a container, and it will get hosted on our internal Docker registry. So let's run that and see if we can do all of that. Okay, it finished successfully. We'll now jump into our internal Docker registry and make sure that we can see the container with our application that we just deployed. All right, so there we see it. 
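What that Jenkins build phase amounts to is roughly this: package the app, bake it into an image, and push the image to the internal registry. The registry hostname, image name, and the use of Maven are assumptions for illustration, not details from the demo.

```shell
# Hypothetical equivalent of the Jenkins build phase shown in the demo.
# Registry hostname and image name are made up.
mvn package                                            # build the Java app
docker build -t registry.internal.example/myapp:1.0 .  # bake it into an image
docker push registry.internal.example/myapp:1.0        # host it on the internal registry
```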
You can find a lot of information about NetApp Snap Creator online, but in essence, it's a tool that allows you to have NetApp's functionality with third-party operating systems, applications, and databases. In this case, we're going to use it to create a clone of our production database, and do it quickly and efficiently. That's my project directory: I have build, deployment, and test. I have my NetApp SDK folder in my test folder, and that's where I downloaded it and made sure everything was in the right place. Again, it's very simple to do. We are using a very simple script, and I have my configuration, my usernames and passwords, but that's the main script. Like I said, it's very simple: all we are doing is creating a clone of our database, and most of it is being handled by the NetApp SDK. So that's easy. We also make sure we are able to mount that volume on our host. Next, we're going to make sure that our containerized application is able to access that volume, and we can automate that using the Docker Compose file. So I added the other two phases, deployment and test, to my Jenkins build. The first one is green because we ran it previously. So we're going to run through the entire build cycle together. It tested and deployed successfully, and I can very easily check the log for a specific step. Just to reiterate: what we did was use Snap Creator to create a clone of our production database, and we ran tests against it, quickly and efficiently. Now, if you combine that with, let's say, Manila, Sahara for big data, or Magnum for containers, then you really have so many possibilities. And we have a demo later on in the presentation where, when I say efficiency, I'll show you exactly what we can do and how much more you can do with it. So if you're interested in this domain, Andrew's actually doing a presentation tomorrow. 
It's called Big Data Rapid Prototyping by using Magnum with Cinder and Manila, so go check it out if you find this domain interesting. So, by using this combination of tools, processes, and philosophies, we were able to make our build pipeline reliable, faster, and without overhead. What about making it smarter? Jim, when we talked yesterday, you were complaining: hey, I'm not impressed with this demo or how the CI works. Yeah, well, I mean, making the CI super fast, that's great for Andrew, okay? But the really important work is the root cause analysis when he inevitably breaks everything. I don't know what you're talking about. We don't create bugs, we create features. Well, you can make it smarter as well. One potential solution is to configure your Snap Creator to automatically take a snapshot of your entire environment every time a dev checks in code and everything works as expected. And if, let's say, Andrew or his team — no offense — checks in bad code, you can revert back to the previous snapshot, and that way you prevent anyone from breaking your build at all. So that's really cool. Okay, so that makes it easier for my guys to find out which individual check-in caused the problem and revert it. But it would be even better if they didn't have to do that at all. Can we automate that? Yes, of course. Same process: since you're using NetApp's Snap Creator, you can use scripts through the SDK, so it can all be automated. That's got potential. So yes, it can be made smarter. And we can use the same ideas in the deployment phase as well. Automation with self-service serves as the key, and you can use NetApp's cloning technologies to create multiple copies of your staging or deployment environment. And we always have Docker to make it easy to move our applications from one environment to another. 
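The snapshot-on-green, revert-on-red idea described above is, at its core, a tiny decision in a post-build hook. The actual Snap Creator calls are elided here; this sketch only shows the control flow, with the hook reporting which action it would take.

```shell
#!/bin/sh
# Toy sketch of a CI post-build hook: snapshot the environment after a
# good check-in, restore the last good snapshot after a bad one. The
# real Snap Creator SDK invocations would go where the echoes are.
after_build() {
  if [ "$1" = "pass" ]; then
    echo "snapshot"   # e.g. trigger a Snap Creator snapshot of the env
  else
    echo "restore"    # e.g. restore the last known-good snapshot
  fi
}

after_build pass
after_build fail
```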
Well, what next? What should we start thinking about once we have already deployed our application? What do you think, Jim? What should we talk about next? Well, you've got to solve the backup problem, right? I know the devs don't care, but I have to make sure that all of our data is protected and we can get it back. Yeah, I've suffered because of not having good backups too, so I can agree with you there. What about you, Andrew? I mean, I know you're all about doing things bigger and better. We created MySpace 2.0. We expect it to be globally, ridiculously popular. 30 trillion users. So scaling is huge for us, right? We need to be able to accommodate all those selfies, quite frankly. Okay, so let's talk about backups. I'm not going to jump into details here, because we have other sessions that focus on backups. But in that sense, you want your backup systems to be efficient, reliable, and seamless. So we now have a demo. I've been bragging about NetApp's efficiency for a while now, so let me show you what it looks like. We start off by going to our OnCommand System Manager, and that is actually the volume I've been using to underpin my Manila deployment. As you can see, we have been using about 1.81 terabytes of space. We're now going to create a clone of our Manila share. But first, we create a snapshot, and we use that snapshot as the data source for our clone. So we just select our data source as the snapshot that we created. Now, this clone is using about 0.88 terabytes of data. So when you add that to the previously used 1.81 terabytes, the total should be around 2.69 terabytes, right? Simple math. But when we hit refresh, we'll see what the actual number is. All right, so it went up by just 0.01 terabytes. So imagine what you can do with the savings you make. There are just so many possibilities. Okay, so let's talk about scaling. 
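For the record, the arithmetic from that clone demo, using the numbers shown on screen:

```shell
#!/bin/sh
# Space math from the clone demo. A full copy would add the clone's
# logical size; the snapshot-backed clone added almost nothing.
before=1.81          # TB used before the clone
clone_logical=0.88   # TB the clone appears to consume
after=1.82           # TB actually used after cloning (what refresh showed)

awk -v b="$before" -v c="$clone_logical" -v a="$after" 'BEGIN {
  printf "a full copy would use: %.2f TB\n", b + c  # naive 1.81 + 0.88
  printf "actual increase:       %.2f TB\n", a - b  # the real delta
}'
```

So the clone consumes new space only as its data diverges from the snapshot, which is where the roughly 0.88 TB of savings comes from.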
Now, there are lots of good official resources online that talk about how you can scale your OpenStack deployment, but I'm going to talk about how you should scale your storage, and the way I see it, there are two aspects to it. First, you want to keep it cost-efficient, and one potential way to do that is through a storage service catalog. The concept of a storage service catalog is very simple: you are basically creating tiers of service. So let's say you create a gold/silver/bronze type hierarchy, with gold as all-flash, and then silver or bronze as a flash or hybrid solution with compression and deduplication turned on. It makes sense to use my all-flash solution, my gold standard, for my latency-sensitive applications, and then use my bronze or silver for the home directories, where I want to make sure I'm not spending a lot of money and there are going to be lots of duplicate files in my data. So you can really slice and dice your storage and make sure that, one, it's cost-efficient — you're not spending money where you don't want to — and two, you're getting performance where you need it most. Talking about performance, you want to go with a storage solution that can keep up with your cloud, that can provide you with efficiency, and that's built for scaling and performance. So again, a quick demo that shows you how NetApp storage performs under pressure. This is basically my performance analytics dashboard: at the top, I have stats from my application layer, and at the bottom, I have stats from my NetApp layer. We're going to push the workload until we start to see some latency. So we saw about 12 milliseconds of latency. Now, the storage cluster I've been using so far relied on a hybrid storage solution, so we're going to go ahead and add two All Flash FAS nodes to my cluster. 
And once that's done, latency dropped back to about a millisecond, and the great thing about it is that there was no downtime, right? On my application layer, my customers don't even know what's going on. They don't have to suffer. Now, keep an eye on the total number of IOPS and the total number of snapshots, because we're pushing those through the roof right now. Now, let's say I've reached my cluster's limit and I want to scale, I want to go bigger, right? So how do you do that? Very simple: the same process that we followed previously, we do again. We add more nodes, and that's very simple to do, and it's non-disruptive. Any new workload that we add automatically gets distributed across my cluster, so I don't have to worry about, let's say, hey, I bought this new storage and now it's not being utilized. Also, the average latency is still holding at 0.9 milliseconds, the total number of IOPS is way above a million, and the total number of snapshots and clones has reached a big number as well. All right, so let's talk about results. What have we achieved by following these processes, tools, and ideas? Then you can decide for yourself whether it makes sense. Now again, this is part of the results of a survey that we did. For our corporate hybrid cloud, which powers our corporate business applications, we were able to see that we got things done about 98% faster, and we also saved a lot of money because we're not spending on, let's say, maintaining hardware that we didn't really need. Now, the global engineering cloud, as the name suggests, is what's used by our engineering and dev teams to request their test beds and whatnot. And since it's built on a bigger scale, the numbers are even better, especially for the cost savings. All right, so key takeaways. It is possible to do better.
I mean, regardless of what phase of your build or development cycle we're talking about, from fetching your workspace to deployment to build and test, there's always room for improvement. So go out and find how you can do things better. Make sure you embrace OpenStack, Docker, and other new technologies as soon as you can, because that helps you keep up with the rest of the world and attain that edge. Also make sure you leverage smart solutions and partners, because they increase your potential and can help you reach success. So I also wanna jump in and say that all of this really is real. We have done all of this inside of NetApp. We have, for example, our internal private cloud, and we have the experts here with us, the people who are implementing and maintaining this. We can talk about how CodeEasy is used internally at NetApp to reduce workspace cloning time for developers by something like 95%; I think it went from 80 minutes down to two minutes. So let's see, who else do we have here? Oh, the internal continuous integration process which Simit alluded to, right, where every 20 or so commits to our main product code lines, it does a build. And if it finds a bug, it figures out which commit created the bug, automatically kicks it out and sends a message to that developer, but moves the other 19 or however many it was forward seamlessly. All of this is real. We're happy to talk about it. Right, and we also did some sessions around this. So the two above haven't happened yet: one is today and the other is tomorrow. The other three, the one about backups, the one about how we use OpenStack internally at NetApp, and the one about how you can make Docker and Cinder work better together, I guess are already done, but you can still go and watch them on YouTube since they're publicly available.
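The batched CI process described above, build every ~20 commits, and if the batch fails, isolate the offending commit and kick it out, can be sketched as a binary search over the batch, much like `git bisect` automates on commit history. The sketch below is illustrative: `builds_ok` is a hypothetical predicate standing in for a real build-and-test run, and it assumes exactly one culprit in the batch.

```python
# Sketch of batched CI culprit-finding: given a failing batch of
# commits, binary-search for the commit that introduced the failure.
# builds_ok(prefix) stands in for building/testing the code with only
# that prefix of the batch applied (hypothetical predicate).

def find_culprit(commits, builds_ok):
    """Assumes builds_ok(commits[:0]) is True (baseline is good) and
    builds_ok(commits) is False (the whole batch fails)."""
    lo, hi = 0, len(commits)  # invariant: commits[:lo] good, commits[:hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if builds_ok(commits[:mid]):
            lo = mid          # failure is introduced later in the batch
        else:
            hi = mid          # failure is already present in this prefix
    return commits[hi - 1]    # first commit whose inclusion breaks the build

commits = [f"c{i}" for i in range(20)]
culprit = find_culprit(commits, lambda prefix: "c13" not in prefix)
print(culprit)  # → c13
```

With ~20 commits per batch this needs only about five builds to isolate the bad commit, after which the remaining 19 can move forward, matching the process the speakers describe.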
So another interesting thing: if you wanna see what NetApp brings to the table, or if you wanna get started with OpenStack, you can take a test drive. It's free and it's available on our website, netapp.github.io. So it's really cool if you wanna get started. Thank you for listening. If you want more information about some of the sessions and our presence, we have some flyers with us. You can also come meet us at our booth; we've got some giveaways. Thank you for listening and have a great week. And if you have any questions, I think we're out of time, but we're gonna hang around here or outside, so you can always come talk to us. Thank you. Thank you. Great.