Is everybody feeling well rested, not like Wednesday morning? I mean, I know it's the crack of 11, and it's so early in the morning I can't see anything up here. Has everybody been having a good OpenStack Summit? Yeah. There's been lots of good content. I've only been able to make it to a few sessions, but they've been really good sessions. So, I know we've got about two minutes before we start, but I figured I'd come up here and say hi and banter a little bit so that I feel a little more comfortable in front of you guys. Anybody find any good music last night? I heard that there was a band playing two blocks, I don't know what direction, two blocks away from the Hilton that was pretty good. So, this is Austin, the center of live music, right? No, I would imagine not, yeah. I don't know, we walked 6th Street at one point last night, and it was very quiet, surprisingly. There were not a lot of people out. It's Wednesday, or it was Wednesday. All right, just a few seconds. So, I'll go ahead and start since I don't see a line at the door or anything. First of all, thank you everybody for coming out. It's not early anymore, but I do appreciate you coming out to the second set of sessions for the day. So, thank you very much. My name is Andrew Sullivan. I'm a technical marketing engineer with NetApp. Effectively, that means that because I have both "technical" and "marketing" in my title, nobody actually takes me seriously on either side of the house. The analogy I like to use, if anybody saw Office Space, is the guy who says, "I deal with the customers, I take the requirements to the engineers." We kind of do the inverse. We go and ask engineers a lot of really stupid questions and then take that and turn it into things that are consumable by the rest of the world. So, today I'm here to talk about a couple of interesting things. First and foremost is going to be containers. Containers are an interesting topic.
Containers are something that a lot of people are really interested in, and in particular, when we look at the OpenStack environment, containers are sort of a natural transition. Containers are not virtualization, and we'll talk about that more in a few minutes, but in particular, the focus of this session is rapid prototyping with your data, the data that's on Manila or Cinder, and the container ecosystem as managed by Magnum inside of OpenStack. Rapid prototyping is something that I think we all do, or should do, at some point in time, but I'm not limiting it to just rapid prototyping; that's just what was in the title. Really, what we're talking about is test and dev. We want the ability to leverage our actual data, or as close to production data as we can get, when we're doing any operations that are non-production. Think about doing a test. I go and I implement a new feature. I modify something in the database, and when I do that, how do I know that when I push it to production it's going to behave correctly? Now, this is something that we can do, and have been doing, I would have to assume, inside of the OpenStack ecosystem for some time. But how long does it take to push an application when you're using Nova instances? Maybe a few minutes, maybe a few hours, depending on what you're doing and what's all having to happen inside of there. Well, with containers that changes. Containers instantiate in seconds. We can create an entire application very, very quickly. And what this means is that when I'm doing that prototyping, when I'm doing that testing, when I'm doing development work, I can iterate much faster over what I'm trying to do. So from an agenda standpoint today, I'm going to talk about a number of different things. First and foremost, the components: what are we using inside of OpenStack in order to make this easier? I also want to talk about containers.
So if anybody's unfamiliar with containers, it'll be a very high-level overview. You're welcome to ask me questions at any point in time. We'll talk about container orchestrators, in particular the ones that Magnum is aware of and abstracts away for us. We'll talk about the application that we used in order to create the demo that's coming in part three. And then we'll actually demo what this looks like: what does it look like to go through and clone our data, bring it into that environment, and then begin redeploying our application in multiple instances? And finally, a brief summary and conclusion. So I would feel remiss if I didn't talk about something that is commonly referred to as continuous integration, often abbreviated as CI, and usually lumped together with continuous delivery or continuous deployment. CI is a concept. CI is a notion. CI is not a toolchain. CI is not something that you can buy, much like DevOps. But the goal here is to constantly be building. We want to find out as quickly as we can when there's an error in the thing that we're creating, in the application, in the code, so that we can just as quickly fix it. "Fail fast" is the phrase that's often used. So we're taking this concept and we're bringing it into our OpenStack environment. And more importantly, well, more importantly to me, we're layering it on top of containers as well. So continuous integration is something that has become more and more prevalent. It has become more and more interesting and more and more popular amongst many organizations because it's becoming easier. OpenStack is taking off. OpenStack is becoming more and more popular. So the toolchains around it are becoming easier. So the components that we're looking at today: I hope that at least none of these first few are a surprise to you. These are things that have existed for a long time. We're just using them in maybe a slightly new or slightly different way.
And the first one I want to talk about is Manila. Manila is the shared file systems as a service offering. Cinder's been around; it was one of the original projects inside of OpenStack, and it provides block storage. Manila, on the other hand, provides NFS, CIFS, HDFS, whatever type of shared file system we happen to need. And this is important for a number of different reasons. Sometimes you need to have one file system that is shared amongst many different clients, right? Many tens or hundreds, potentially: Nova instances, container instances, whatever that happens to be. Additionally, one of the things that we like to take into account is the Neutron integration. We want to ensure that only our tenant network has access to it. But in our particular case, we're really concerned about a couple of things. Notably, Manila, just like Cinder, supports snapshotting and cloning of those datasets. And most modern storage systems, most enterprise storage systems, and yes, I am wearing a NetApp shirt, I do work for NetApp, I might be a little biased, but almost all of the enterprise storage systems support the thin cloning concept, where we're not actually copying our data. We are instead creating a snapshot, and then creating a new volume, effectively, that only contains the changes and the new data. So it doesn't matter if I have one gigabyte or one terabyte or one exabyte worth of data. Having that thin clone feature means that I can rapidly create those clones without consuming additional space. So going forward, if you don't have a storage system that's capable of doing that, evaluate whether or not it's something that's necessary. And there are lots of ways that you can work around that as well. Even if you're using local storage, something like a copy-on-write snapshot leveraged through LVM is capable of doing this, or very similar, type of work. So the second thing we want to talk about is, of course, Cinder.
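The thin-clone concept described above can be sketched in a few lines of Python. This is purely an illustration of the idea, not any vendor's actual implementation: a clone references its parent's blocks and consumes new space only for the blocks that change after the clone is taken.

```python
# Conceptual sketch of thin cloning: a clone references its parent's
# blocks and stores only the blocks written after the clone was taken.
class Volume:
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks or {}   # block number -> data written to THIS volume
        self.parent = parent         # parent snapshot/volume, if any

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def read(self, block_no):
        # Fall back to the parent chain for unmodified blocks.
        if block_no in self.blocks:
            return self.blocks[block_no]
        if self.parent is not None:
            return self.parent.read(block_no)
        return None

    def space_used(self):
        # Only locally written blocks consume new space.
        return len(self.blocks)

# Pretend this is a large production dataset...
prod = Volume({i: f"data-{i}" for i in range(100_000)})
clone = Volume(parent=prod)      # instant, consumes no space
clone.write(42, "schema-v2")     # only the delta is stored
print(clone.read(42), clone.read(7), clone.space_used())
```

However big the parent dataset is, `space_used()` on the clone reports only the one changed block, which is why it doesn't matter whether the production volume is a gigabyte or an exabyte.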
I'm not going to bore you with the details of Cinder. I hope by now we all know and love Cinder pretty well. But much like with Manila, there are a couple of things that we care about here. First is that same concept of snapshotting, then creating a new volume based on that snapshot. We don't want to clone all of the data. We don't want to literally copy the data. That is slow. That is expensive. But also important is what many of the vendors are doing with the extra specs, where we're able to pass additional information down to the storage so that we can capitalize on other features. Now, again, I'm a NetApp employee, and I'm most familiar with our feature set, so we have things like QoS; I know we're not the only one. And QoS means that, well, what if I'm running my production datasets on a set of disks that have their performance limited? Maybe they're 10K drives, or 7.2K drives. I don't want to impact my production dataset, so I can put a QoS policy in place that effectively limits the secondary copy, the secondary dataset, and prevents it from interfering with the production dataset. These are critical things because, well, we don't want to inhibit the actual production application. We don't want to impact the business. We don't want to impact the things that are happening up there. So get familiar with the extra specs that your storage offering has. Make sure that you intelligently apply those in order to maximize what's happening on your cloned dataset as well as your production dataset. So the third OpenStack component, project, that we want to look at is Magnum. And Magnum is particularly interesting in that it is not itself a container orchestrator. A container orchestrator is responsible primarily for two things. The first one is orchestration, right? Scheduling. If I have a thousand containers, I want you, the scheduler, the orchestrator, to figure out where to execute them.
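As a rough sketch of how an extra spec like QoS gets applied, here is what that might look like with the Cinder CLI. The volume type name and QoS policy name are made up for illustration, and `netapp:qos_policy_group` is the NetApp driver's key; other vendors expose their own keys, so check your driver's documentation for the equivalent.

```shell
# Create a volume type for dev/test clones and pin a QoS limit to it
# (the extra spec key is driver-specific; this one is NetApp's).
cinder type-create dev-clone
cinder type-key dev-clone set netapp:qos_policy_group=limit-dev-io

# Volumes created with this type inherit the QoS policy, so the clone
# can't interfere with the production dataset:
cinder create 100 --volume-type dev-clone --name fractals-dev
```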
Think of this as the way that Nova operates. I don't choose which host my virtual machine is running on; Nova figures it out for me. But Magnum is not an orchestrator. Magnum abstracts the creation of those orchestrators for us, in particular Docker Swarm, Kubernetes, and Mesos. Technically, it's Marathon on top of Mesos, but we'll cover that in a minute. Magnum is interesting because not only does it abstract the creation of these, it also abstracts the interaction with them. I can choose to interact directly with Kubernetes or Swarm, or I can choose to interact with them through Magnum. So it's a very powerful tool. But it's the gateway into what we're actually trying to do here. In the demo that we'll have in a few minutes, what we did was deploy Swarm on top of Nova instances. Interestingly, if you look at most of the research being done by companies like Datadog, who are on the Expo floor, they publish periodic reports. The vast majority of containers are running on virtual machines, and the vast majority of them are short-lived. I think it's something like 29% run for less than five minutes, and 40-something percent for less than an hour. This tells us that containers are being used for these ephemeral type purposes: I want to test something, I want to instantiate it and quickly destroy it. So I also think it's important to talk about containers themselves. Containers are confusing, particularly for those of us that have been long-time infrastructure administrators, where we're familiar with how virtual machines work because it's pretty obvious. And I think it's important to realize that a container is not a virtual machine. A virtual machine is exactly that: a logical abstraction of physical hardware. There's a virtual motherboard with a virtual BIOS and a virtual network adapter plugged into it and virtual hard drives attached to it. And all of those things have overhead. All of those things have additional needs. But a container is a process.
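For reference, creating a Swarm bay through Magnum at the time of this talk looked roughly like the following. The model, bay, image, and keypair names are hypothetical; the point is that a baymodel is defined once and bays are stamped out from it.

```shell
# Define the cluster template once...
magnum baymodel-create --name swarm-model --coe swarm \
    --image-id fedora-atomic --keypair-id demo-key \
    --external-network-id public

# ...then create (and later destroy) bays from it at will.
magnum bay-create --name dev-bay --baymodel swarm-model --node-count 2
magnum bay-show dev-bay    # status: CREATE_IN_PROGRESS -> CREATE_COMPLETE
```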
We're simply creating a process in the host kernel, just like any other process, and then we leverage a couple of features that Google introduced about a decade ago: namespaces and cgroups. Namespaces fence off that process. They isolate it so that it is by itself. And cgroups layer resource constraints on top of that. But namespaces aren't limited to just the process. We can do things with namespaces for the file system, for the user space, and this is where Docker comes in. You see, most people use the terms Docker and containers interchangeably, but in fact, Docker is not containers. Docker is an abstraction for containers. They make it so that mortals like me can consume a container at any point in time using a simple command, docker run. It instantiates everything that we need in order to do that. And that namespacing is what makes us think, what makes it look, taste, smell, and feel like we're inside another operating system in that container. You see, that container process, whether it's bash or Java or JBoss or whatever you happen to be running, that process can have a new file system attached to it. That's what the namespace does. When you think of a Docker image, a Docker image is nothing more than a folder that contains all of the files that represent that file system. When I instantiate the process, it gets namespaced off, we attach a new file system at the root inside of our container, and that process, bash, can now look at root and see maybe the Ubuntu file system layout, maybe CentOS, maybe Red Hat, whatever that happens to be. It's not executing Ubuntu or CentOS or whatever you're using in that scenario. It's that process with a familiar file system layout, a familiar toolchain, a familiar library set. That's all a container is. So it's not a virtualization technology; it's a process isolation technology. So containers themselves are just a small portion of this.
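The "a container is just a process" point is easy to see for yourself on any Docker host; the image and the container name below are arbitrary choices for the example.

```shell
# Inside the container, / is just the image's file tree:
docker run --rm centos:7 ls /

# From the host's perspective, the container is an ordinary process:
docker run --rm -d --name napper centos:7 sleep 60
ps -ef | grep '[s]leep 60'    # shows up in the host's process table
docker stop napper
```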
As I mentioned before, imagine running your OpenStack environment, dozens, hundreds, thousands of Nova instances, and having to choose, yourself, where each one of those runs in real time. That would be terrible. And this is what happens with containers as well. If I'm just using vanilla Docker on an individual host, I have to instantiate each container manually. I have to choose where to run it. And this is where the orchestrators come into play. And the orchestrators, as I mentioned before, bring two primary things. One, scheduling. They all have some sort of scheduling mechanism that allows me to create a pool of resources and then distribute the containers across them. And the other one is, basically, additional discovery services. This can be in the form of overlay networking, so that containers on separate hosts can communicate with each other as though they are on the same network; think of this as Neutron. They also have things like discovery services, so either literally DNS or something analogous to it. When I instantiate a container, I can give it a name on that container network, that overlay network, and now I can talk to that individual container, that service, just as though it were on a real network. So the first of the orchestrators that I'll talk about is Docker Swarm. Docker Swarm came out of an acquisition that Docker made about a year and a half ago. And it is arguably the simplest, which is both a good and a bad thing. It's good because that means we can quickly spin these up and use them. And that is exactly what has been happening. Many of our development teams, whether or not you realize it today, are probably using Docker Swarm. When I talk to customers, I like to ask, particularly of infrastructure teams, has your development team stopped making so many requests for new virtual machines? Has the number of requests for new Nova instances or VMware virtual machines suddenly dropped?
Because what's happening is they request a handful of resources, they put some sort of container orchestrator on there, and then they deploy on that. So instead of constantly churning through virtual machines, they do it in containers. So the important thing to remember about Docker Swarm is that it is effectively a pass-through to Docker itself. I can take a number of Docker hosts, something up to a thousand or so in the current version, and I can talk to them, I can treat them, as though they're a single Docker host. Using the exact same Docker CLI as I have when I'm talking to an individual host, I can talk to the Swarm and say, run this container, or run these containers, and it will do exactly that. Docker Swarm uses the Compose application definition, and that's something that I should have mentioned earlier. Each one of these orchestrators has an application definition language, if you will, where I have to describe what my application looks like: it has this many of these containers, right, this many that are running maybe Apache, this many that are running MySQL, this many that are running Python, and here's how they communicate with each other. So it uses that to build, to create, to instantiate the resources around it. Swarm is, again, arguably the easiest of these to consume. Now the second one that I'll talk about is Kubernetes, and Kubernetes is a particularly interesting project to me personally, because it is built off of Google's Borg. Now, Kubernetes was recently donated to the CNCF, the Cloud Native Computing Foundation, which is a Linux Foundation project, so it is now under their management. So Google is most definitely playing the good corporate citizen here, open sourcing the whole thing, but it has a really, really good pedigree. Borg is what Google uses in their own data centers. They took all of those lessons learned, and they're applying them to Kubernetes.
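To make the Compose application definition mentioned a moment ago concrete, here is a minimal two-service example in the same spirit as the demo application. The service names and the `example/...` image are purely illustrative, not the actual demo's Compose file.

```yaml
# docker-compose.yml -- describe the app; Swarm decides where it runs
version: '2'
services:
  api:
    image: example/fractals-api:latest   # hypothetical application image
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: secret
```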
So we know that there are a lot of really good things happening inside of there. Now, the primary difference, well, at our level, the primary difference is simply how we define applications. There are lots of differences at the underlying level with the scheduler and all of that type of stuff, but we don't need to go into that here. So within Kubernetes, instead of using Compose in order to define our application, Kubernetes uses the concept of pods and services. A pod is one or more containers. I say, please execute this pod. We leverage a replication controller to say, make sure that there are at least 10 instances of that running at all times, and it does exactly that; think of it as a high availability mechanism. And then we have a service, which abstracts the access to that. So if I need to access that underlying pod, I give it an IP leveraging a service. If anybody watched the keynotes with Craig McLuckie and Alex Polvi on Tuesday, they were showing Kubernetes running the OpenStack services. When he was destroying, when he was killing those containers, it was the replication controller that restarted them inside of that cluster. So this is technology that is very much in use today. So the third one is a little bit confusing. Mesos is a resource framework. Mesos doesn't actually schedule things. It is simply there to say, I have this much in the way of resources, this many CPUs, this much RAM; what do you want me to do with it? Somebody tell me what to do with this. And what we end up with, when we stay inside of the Apache foundation, is Marathon. Marathon is layered on top of Mesos, and it acts as that container scheduler. It has its own application definition language as well. And the two of them work in conjunction with each other in order to execute those containers, in order to go through and ensure that, well, enough of container type A and container type B are working together. Now, interestingly and confusingly, Mesos is, again, a generic resource framework.
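Backing up to the Kubernetes terms for a moment, a pod kept alive by a replication controller and fronted by a service looks roughly like this; the names and image are illustrative, not from the actual keynote demo.

```yaml
# Keep 10 replicas of the pod running, and give them one stable address.
apiVersion: v1
kind: ReplicationController
metadata:
  name: fractals-api
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: fractals-api
    spec:
      containers:
      - name: api
        image: example/fractals-api:latest   # hypothetical image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: fractals-api
spec:
  selector:
    app: fractals-api
  ports:
  - port: 80
```

Kill any of the 10 pods and the replication controller notices the count dropped and starts a replacement, which is exactly the restart behavior described in the keynote demo.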
Kubernetes and Swarm can both use Mesos as their underlying execution engine. There are other companies, Mesosphere for example, who recently open sourced their Data Center Operating System, DC/OS, where you can execute other frameworks, other tasks, directly against that Mesos cluster. So things like Cassandra nodes: I don't necessarily need to put them into an application definition using Marathon. I can execute them directly against Mesos, or Kafka, or Hadoop, or any number of other things inside of that shared resource cluster. So Mesos tends to be the largest, scalability-wise; there are clusters that are tens of thousands of nodes in size. It's also the most flexible of the solutions that are available. We see very, very large customers using this. And there are some who have publicly stated that they're using it. Apple has come forward and said that they use Mesos with Siri. When you instantiate Siri, when you press that button, it stands up a container running on top of Mesos somewhere in the Apple cloud. So for the demo, we sat down and started creating an application. One of my teammates and I sat down and we created a fictitious application, which we fondly referred to as Project Obstinance, and walked through and started to write the code inside of here and create all this other stuff. And then it occurred to us that it doesn't really matter. The application is technically unimportant. Everybody has a different application. Every application has a different architecture. So instead, we cheated. We went with the most highly scalable, most robust, most widely used application we could think of: Faafo. If anybody's looked at the sample developer application on the OpenStack website, Faafo is the sample application that they distribute, that they use. And it turns out it's actually not a bad example use case. We have multiple services running inside of there. We can treat them as though they are a service-oriented-architecture type system.
If we want to, we could break those out into microservices. So it makes for an interesting play. And it was also already done; we didn't have to put a whole lot of effort into it. We did make some minor modifications to make things easier to read, easier to use, for example. But again, the point that I'm trying to make here is that ultimately the application that we use in this demo is unimportant. This applies regardless of the application that you're using, regardless of how much data you have. We've got a few tens of gigabytes of data inside of here. The exact same principle applies if you have a few tens of terabytes of data. The concept doesn't change. So what's going to happen when we look at this demo? There are a number of interesting things that are going to happen here. And the first one is just looking at our application running. Hey, it really is running, and it's running inside of Nova instances. And then we want to take the underlying data store, and it doesn't matter if it's Manila or Cinder; in this instance we use Manila, and we're going to clone it: snapshot, then create a new volume based off of that snapshot. Once that's done, we instantiate a new Swarm cluster leveraging Magnum. Again, that only takes a few minutes. We're simply instantiating Nova instances and deploying software on top of them. We introduce that Manila share, and at that point we can deploy any application we want inside of containers and provide access to that data. Now, there's an interesting workflow that evolves, that comes out of this, in that snapshots give us that point-in-time recovery. So one of the examples I like to use is a database. Say I'm a developer who's going through and creating the next version, and I have to do a database schema update. Well, I can write out the SQL in order to do that, and I can execute it in production and cross my fingers. Or I can go in and I can iteratively develop it. So take the concept of, okay, I created this environment.
I now have a separate, snapshotted, isolated copy of my data. I do the first set of tasks. It worked. Create a new snapshot. All of this is intrinsic to the technology. I can go forward and take the next phase on. Maybe I fail miserably. I can roll back. So I can now treat my data as an asset during the development phase. Likewise, when I'm going to do my production rollouts, leverage those snapshots. Leverage snapshots so that you can roll back quickly. So finally, after the data has been reintroduced and we have it inside of a containerized environment, we simply prove that, no, it really is the exact same application. And at this point, we can do whatever we need to with it. From a data standpoint, from a storage standpoint, again, modern enterprise storage systems are only going to store the deltas. So yes, if you go in and you accidentally delete the entire file system, you're probably going to create a very large delta. But generally speaking, we're not deleting terabytes of data during an application update. So it's a very small change. It consumes a very small amount of space. These snapshots are very quick, and there is no reason why we should not be using them. So let's hope that I can get our video here to work. There we go. So during this, and I'm not going to narrate exactly what's happening, but during this phase, what we're looking at is that we have Nova instances, and we have a Manila share that has been provisioned. In this instance, I used awk in order to limit the fields that are returned, because it is a gigantic, very difficult to read, very wide output. Once we have our share, or in this instance, once we're proving that we have our instances and our shares, all I'm doing is jumping into our application services Nova instance. In my deployment, this is running the AMQP server as well as the database server.
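The snapshot-per-step development workflow described above can be sketched conceptually in Python. This is a toy illustration of the pattern (snapshot before each change, roll back on failure), not a real storage integration; `DevDatabase` and the migration steps are invented for the example.

```python
# Toy illustration of "snapshot before each migration step, roll back
# on failure." A real system would snapshot a Manila share or Cinder
# volume; here we just deep-copy an in-memory "schema".
import copy

class DevDatabase:
    def __init__(self, schema):
        self.schema = schema
        self.snapshots = []

    def snapshot(self):
        self.snapshots.append(copy.deepcopy(self.schema))

    def rollback(self):
        self.schema = self.snapshots.pop()

    def migrate(self, step):
        self.snapshot()          # cheap point-in-time copy first
        try:
            step(self.schema)
        except Exception:
            self.rollback()      # the failed step leaves no trace
            raise

db = DevDatabase({"users": ["id", "name"]})

db.migrate(lambda s: s["users"].append("email"))   # step 1 succeeds

def bad_step(s):
    raise RuntimeError("migration bug")

try:
    db.migrate(bad_step)                           # step 2 fails...
except RuntimeError:
    pass

print(db.schema["users"])   # ...and was rolled back: ['id', 'name', 'email']
```

The failed step disappears entirely, while the successful step survives, which is exactly the "iterate, fail, roll back, try again" loop the thin clones enable.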
We're going in and we're just showing that there is indeed a real database inside of here, one that has 150,000 rows, as we'll see in just a second. We created a lot of fractals. At one point, I spun up several dozen instances of the worker threads in order to create all of those. So our MySQL database is 15 or so gigabytes in size. A pretty good size database: fairly large for a web application, fairly small for an enterprise type application, typically. We'll see here there are 7,500 pages of fractals; that's after I modified it to display 50 at a time instead of 5 at a time. So there's lots and lots of data inside of here. Now at this point, what we want to do is deploy our Magnum bay, our Swarm bay. I did this first because it does take a couple of minutes. I will note that all of this is done in real time. I'm not using any video magic; I'm not speeding anything up here. I did go through and edit out where I made typos, so I'm not a perfect typist, for the record, but aside from that, all of it is done in real time. So we can see it creating these nodes. Creating a Magnum bay is pretty simple. It just uses Heat. And because it's just using Heat, we can go through and instantiate this, we can see all of the resources that are associated with it, we can see what's happening with each one of them and whether there are errors inside of any of those, and it's very, very easy to destroy and re-instantiate these. The concept behind containers is that the application is portable. Wherever I instantiate that application, it's going to execute the same. So we see here the bay that I deployed has one master and two Swarm nodes. This is a demo, so I'm showing all of this manually, because it would be really, really boring for you to watch me execute a Python script that does all of this. That's just not fun, right? But all of this can absolutely be automated.
In this instance, again, because we wanted to show it for demo purposes, this is what Swarm looks like. You'll notice there are two nodes running inside of there, and you'll notice I'm using the standard Docker interface to work with it. In this instance, what I'm doing is pulling from my internal git repository and showing that there is a Docker Compose file that defines our default application. So we instantiate that application, going through and creating our services. I browse to that particular container, or I'm sorry, container host, that's running inside of there, and we see that we have an instance of the application that does not have our data at this point, right? We have not mapped that Manila share into the Magnum environment. So let's bring down that application, which does take a second. And after we get through this phase, what we'll do is go and actually create the snapshot of our Manila share. And this is something that takes very, very little time. Remember before I was referencing the very long output; so there's our very long output of the manila list. We create a snapshot. After that's done, we will go through and we will create a new volume, a new share, based off of that snapshot. And then the last phase is to map it over into the network for our particular application. Now, Manila shares are interesting in that, in this instance, we're using NFS, which means that we have to enable a subnet or an IP range to be able to access it. Because this is a lab, I went with the highly secure "allow everything," but it is one of those things to be aware of, particularly if you're using application networks. You want to make sure that your tenant network, your application network, is the one that has been enabled for access to that. After that, that little listing there was simply showing that we really did create a second share. And now we're jumping into our Kubernetes host again. I'm sorry, our Swarm host again.
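The Manila steps in this part of the demo map to roughly the following CLI calls. The share names, size, and CIDR are hypothetical, and `<snapshot-uuid>` is a placeholder for the ID returned by the snapshot-create call.

```shell
# Snapshot the production share, then create a new share from it
# (a thin clone: only deltas consume space on capable backends).
manila snapshot-create prod-share --name prod-snap
manila create nfs 20 --snapshot-id <snapshot-uuid> --name dev-clone

# Allow only the tenant/application network to mount the clone,
# rather than the lab-style "allow everything":
manila access-allow dev-clone ip 10.0.0.0/24
```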
Now, the interesting thing here is that we're using a Docker volume plugin to create our volume. In particular, this is the NetApp Docker volume plugin, although there are other storage companies that have the same thing, EMC in particular with REX-Ray. So what we're doing is, because we have the name of an existing share, we can map that in to Docker, and now it can be managed just like any other Docker volume. So what we see here: we created that volume using the same name as the share. We see it twice in that output because each of the two nodes has it mounted. And now we can go in, and all I'm doing here is showing that, yeah, that data really is there. I created a temporary container, mounted the volume in there, and showed the database tables. So at this point, we're going back into the application folder and showing a second Docker Compose file, which leverages all of that data. We're creating new containers that take advantage of our data share. And now we create the application again. So the point here is that we have now recreated our entire application inside of containers with the exact same data. And there is no tie between the production data and this application's data, right, this test data. Whatever I do inside of here will not affect the production dataset. So there's nothing that I can do to harm it. I can do all of the testing, I can mess up, I can remove the database, I can do all of those things worry free. So as a developer, and in particular as we're going through the QA or the acceptance phases, it's really good to have production data, right? Now, this is also interesting in that we could also do something like test migrating from a Nova-based deployment to a container-based deployment, all without having to worry about affecting our data. I can reset, and I can rapidly iterate over that application inside of containers.
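The volume-plugin step shown in the demo looks roughly like this; the `netapp` driver name follows that plugin's convention, and the volume and image names are illustrative.

```shell
# Surface the cloned Manila share as a named Docker volume...
docker volume create -d netapp --name dev_clone_share

# ...then mount it into a throwaway container to verify the data:
docker run --rm -v dev_clone_share:/var/lib/mysql mysql:5.6 \
    ls /var/lib/mysql
```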
Again, you saw how quickly it stood up, much, much faster than the Nova instances, which on average took about a minute forty-five when I was doing all of the testing and setup for this. The containers took about 15 to 20 seconds to instantiate the exact same thing. So big time savings. So a quick summary of what we've seen here, some of the key takeaways that we want to think about. First and foremost, applications themselves aren't getting any smaller. NetApp's very own Val Bercovici gives a presentation (I meant to grab the slide from him before setting this up) that shows applications are growing in number of lines of code as well as in data that's being generated. If we look at the Apollo missions, the Apollo lunar landers, there were a couple dozen thousand lines of code running inside of there. If we look at this watch on my wrist, which is a smartwatch, there are a couple million lines of code inside of here. There are dozens of millions of lines of code in modern cars, and I'm not even counting Tesla. Applications are getting more and more complex. And we also have an affliction, or an affection, depending on your perspective, for data. The concept of big data. The concept of "I want to save everything because I don't know what I'm going to need." But that also introduces risk. What happens if I screw it up? I've worked with customers before who are afraid of automation. They will not automate their infrastructure because, well, if I have one guy who's doing it and he messes something up, he just broke three servers. If I'm using automation, I just broke 3,000 servers. So we want to be careful. We want to de-risk the process as much as we possibly can using the tools and technologies that we have at hand. And this is getting easier and easier as time goes on. And this speaks to the second bullet: testing and validation with real data is critically important, particularly for business critical applications.
I've had a personal mission for the two-plus years that I've worked at NetApp to remind people that IT does not exist for the sake of IT. IT exists for the business. IT exists to help the business do whatever it's doing, whether that's making money by selling goods or helping people with whatever they happen to need help with. In our case, we're in the business of selling things; IT exists to help the business, so make sure you're protecting the business. The other thing I like to remind people of is that data is money. Data is the life of your business. It's what you use to make decisions, and it's what you use to, well, in our case, go sell things and make better things. So losing data is not an option.

And finally, containers are a tool. They're not the answer to everything; they are not a panacea. They are a tool that can be leveraged to improve overall operations. Just leveraging containers does not automatically mean that you are doing DevOps or that you have deployed microservices. But they're a tool that can help with that testing and validation process, in conjunction with this cloning concept and reintroducing the data into the containerized environment.

So I'd like to thank you for your time. I'm happy to take questions, whether here in the group or if you want to come up to me afterwards. I'll be in the booth today; I think the booths are open until 1:30 or so. I noticed our booth manager scheduled himself for duty from 1:30 to 2:30, which is awful convenient. So anyway, I'm happy to take questions at any point in time. And I would very much like to thank you again for your time. I hope everybody has had a great OpenStack Summit.

So, a question from the audience: what strategies would you use for dev/test? This whole idea of moving production data into dev/test is great, but a lot of times there's private, confidential, secure information in our production databases. What about scrubbing that before we make the transition?
So, I have done this before when I was a customer. We did it through an automated system: we gave our developers the option of using a nightly snapshot, so every night at, I think it was 1 AM, we would create a snapshot, and we had a task that would go through and sanitize the data. It took about five minutes. Alternatively, depending on the project and the quantity of data, they could request a clone of that data in real time, through a storage-as-a-service portal rather than other means. Now, inside of the OpenStack ecosystem, it's something that you'll have to address with the developers directly, because in this demo in particular we natively used Manila, or you could natively use Cinder to do the same thing. So unfortunately, if they have enough access to do that and that's what they want to do, then at least as far as I'm aware, I can't prevent them from doing it. If they have the permissions to clone it, there's no mode where they can clone it but can't access the data; that doesn't make sense. So a storage-as-a-service portal or something like that, where they can request those services to happen, may be the best way. I've had success personally with that before.

Any other questions? It's hard to see up here. Thank you again. Oh, yes, sir. The question is, what type of access and security controls are we putting in place in NetApp products for containers versus anything else? Just to be sure I'm clear: containers introduce another layer into the paradigm, and what you're asking is how we secure at the container layer versus at the Docker host layer, for example. Right now we do that through the Docker volume plugin and the authentication mechanisms inside of it. By that I mean, if I have my Docker volume plugin instance, I have a username that I authenticate with against clustered Data ONTAP, for example, and you have yours. I cannot see your volumes, because we are using different users.
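As a rough illustration of the kind of sanitize task I'm describing: this is a hypothetical sketch, not the system we actually ran. The table, the columns, and the masking rules are made up for the example; a real scrub job would be driven by your own schema and compliance requirements.

```shell
# Hypothetical nightly scrub step, run against the cloned copy before it is
# handed to developers. Here the "data" is a small fake SQL dump.
cat > users_dump.sql <<'EOF'
INSERT INTO users VALUES (1, 'Alice Smith', 'alice@example.com', '555-0101');
INSERT INTO users VALUES (2, 'Bob Jones', 'bob@example.com', '555-0102');
EOF

# Mask e-mail addresses and phone numbers in place. Real names, SSNs, and
# so on would get the same treatment with their own rules.
sed -E -i \
    -e "s/'[^']+@[^']+'/'user@masked.example'/g" \
    -e "s/'555-[0-9]{4}'/'000-0000'/g" \
    users_dump.sql

cat users_dump.sql
```

The point is that the scrub runs once, automatically, against the clone, so developers never see the confidential values but still work with production-shaped data.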
Now, from an export-rule standpoint and things of that nature, it's still being done at the host level. And this is where things like Magnum are really, really useful, because if we're separate tenants, your Kubernetes, Swarm, or Mesos cluster is going to be instantiated on Nova instances associated with you, and mine will be on mine; the clusters exist separately. So with the products themselves, we're not changing anything about how that authentication mechanism works. It's more about relying on that layer above, along with simple, regular authentication and authorization at the user layer with storage virtual machines. We have the same concept in SolidFire, and E-Series is slightly different, but not significantly. Well, thank you very much everybody, and have a great rest of the day.