Very right. All right, is everybody ready to get started? I think we broke the 20 barrier, so maybe we're ready to start. My name's Scott Piper, Director of Product Management at NetApp, owning tech marketing, solutions, and OEM and business development relationships. I'm here with Matt Tangvall, who runs our solutions team, and we're gonna talk about how to move things from kind of a proof of concept, an early advancement in the enterprise, into more of a DevOps deployment. So today we're gonna learn a few things that we've been working on, and hopefully there's some feedback that can help accelerate your deployment of applications. And more importantly, if the idea resonates, make sure we catch up afterwards and figure out how to continue to move things farther through the process. All right, so we're gonna talk about the evolving marketplace. There's a couple of analysts out there that use different terms here, but it's really the old school versus the new school. Maybe it's David versus Goliath, and Goliath has been out there in traditional SAN platforms: they're focused on resiliency and data protection, we've been doing this forever, and it's our world. And then there's David, who's like, hey, this is cool, I've got some new stuff and I'm gonna change the game a little bit. IDC calls it the first, second and third platforms. I think Gartner calls it mode one and mode two. So we're gonna talk a little bit about how that changes between the two worlds, and how to find ways to integrate the two worlds into a more cohesive way to deploy things today. We've got some pretty cool ideas and cool applications, but actually deploying them is really the challenge. So whether you call it mode one and mode two, or platform two versus platform three, you can see that most of the growth over the next few years is gonna be in this third platform space.
And that is the new economy, the new world: software-defined storage, software-defined everything. And that's why we're here, because the rules are changing and we want to accelerate and drive innovation. So that's all goodness. Oops, hit the wrong way. So if you look at the traditional platform two, this is the SAN group, and there's a few people in the room that are in that more traditional storage space. There was a great session earlier today, a podcast session, talking about enterprise storage and the ecosystem. If you get a chance to listen to that, it actually had some really good conversation around how you take advantage of the investment of the last 30 years. Not a commercial for enterprise storage, but it's how do I leverage that investment? Now if you look at the third platform, the new world, you've got new ways of doing things. It's focused on mobility, quickness of deployment, things to really drive your innovation. How do I take something new and bring it in quickly? And I wish we had a third era, and I was thinking about putting it in there, but maybe there's a 2.5 platform or a mode 1.5, and that is taking all this great innovation, the 6,000 people that are here, and marrying that with enterprise storage to create an answer that gives me something that my DevOps team, my IT procurement team, my CIO wants to do today, that they're comfortable doing today. So that's what we're gonna spend the time on. So, navigating the data center. Now sometimes you have a really cool idea, and that idea is something that not everybody's quite bought into yet, but you know it's gonna be great, and you gotta do unnatural things to get there. Personally, I would not want to tightrope walk between two hot air balloons. And if you look at the guys, they're like, oh yeah, do it, Joe, you can do this, this is really cool, go. And the other guy's like, yeah, come on, you can do this. But somebody's gotta step out there and do it, and that's hard.
So back in 1991, there was a book called Crossing the Chasm. It talks about the early adopters, the people who really want things to happen and buy into it. Everybody has a smartphone now; when they first came out it was kind of a fad, but now everybody's got one. So think about that. Think about how Linux has made the transition. Now OpenStack is making the transition. This third platform is making the transition, but somebody's gotta step out onto that tightrope and do the walk, and it's not easy. So as you start to think about it, what are those unnatural or crazy things that you're being asked to do? How do I navigate that chaos that's called the IT department? How is that different from the traditional things that I've done in the past? It's a different world, it's a different set of requirements. I think John Chambers at Cisco had a great comment: when companies and countries go digital, IT becomes a board-level concern. Now think about that. A board-level concern means it matters to them how I do things. Not necessarily how great that application is, or is it the best one out there, but these are things about running my business, making business-critical things stay up and on time. So we did a little engagement with ESG and they did some work. The right side of the chart is the actual data, the left side is the summary, because it's easier to read. But if you look up there, between total cost of ownership, price, features, et cetera, innovation is not number one, it's not number two, it's not even number nine. IT shops are really worried about a few things. They're very traditional, very safe people. Think of your insurance salesman, right? They wanna know that things are gonna work, and they wanna know that they're gonna work consistently, the same way every time. So this ESG paper is gonna be posted, hopefully within a month or two, but it really brings out the idea that cost, price, features, vendor relationship, it's all in there.
There are customers that we have from a NetApp perspective that have stopped buying spinning disks; they're only buying flash, and they're repurposing things. They've got gear, they wanna know what to do with it. They wanna know that when I'm doing an upgrade or when I'm doing a patch, it's gonna work. So think about that: as you're trying to navigate that move from kind of a proof of concept into the mainstream enterprise, I have a new set of suitors that I have to go convince to use my application. So it's a different world. And the way I look at it, there's really two options to take. There's the roll-your-own kind. Really cool, I've got a bunch of smart guys that create something, and they're doing it with a bunch of white box servers. And historically that was my job in a previous company, building white box servers. Love the concept, love the theory, but it does have limitations. So with a white box server with a bunch of disks under it, you can start to build things and you look really smart. I'm taking advantage of this new world, this new architecture, but I do have a few limitations. I've got a few extra things to learn. Then there's the version of enterprise DAS, or enterprise direct-attached storage. So I'm gonna marry the two ideas, and I'm still gonna do them both in the OpenStack or open source construct. So, you pick your flavor, but I can take those disks and create a different world that has built-in data protection. And you can pick your data protection type. You can use your EMC version, your NetApp version, your XYZ version, you can use erasure coding. You can use whatever flavor, but there's a level of built-in data protection that's not replication. It's been kind of hashed out for years. No single point of failure, purpose-built hardware. The next one would be, what about performance and protection beyond hardware? So predictive drive failures: what is my drive doing?
This drive is kind of showing up a little bit slow every once in a while. Maybe I need to replicate the data, or evacuate the data off of that drive. There are things that enterprise storage is gonna bring that are not yet there in the community. So we're gonna talk about that theory and say, all right, Scott, give me some proof points. Give me some value that says this isn't just a really cool marketing pitch to make EMC or NetApp or HDS or somebody else have a reason to be here. So we've got an example here. We took a stack of white boxes, fat servers, and we made a Swift cluster. So fat servers have the CPU, the memory, the networking, the storage disks. They have all these things together. And as I need more, I just stack another one. It's a pretty cool concept. It does have some limitations, though. When you look at it from an enterprise DAS perspective, you have an HA pair and you can start to look at no single point of failure. You have things like compute, networking and storage that I can scale independently. I can say I need more CPU, I need more networking, or I need more storage, and buy the right components at the lowest common denominator there, independently. I also get no single point of failure, so the design in the box is for HA, or high availability. I mentioned the features that are protecting you beyond just the hardware failures. You can do things like predictive drive failures and more data analytics there. You also get better serviceability. When a drive fails, the enterprise storage system will tell you which drive it is, and by the way, a disk is probably already being sent to you. And there's a cost to that, but we're gonna look at some of the other things I do get. I get things like density: because I'm using a different level of data protection, I can use higher density, I can use fewer copies. I also get, in this case, I went from 10 2U servers to one or two 1U servers.
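The predictive drive failure idea described here can be sketched as a simple heuristic. Everything below, the function names, the latency baseline, the strike thresholds, is illustrative and assumed for the sketch; it is not NetApp's actual monitoring logic:

```python
def is_suspect(latency_ms, baseline_ms=8.0, factor=3.0, min_strikes=3):
    """Flag a drive whose recent I/O latencies repeatedly exceed a
    multiple of the healthy baseline -- a stand-in for the predictive
    monitoring an enterprise array does before a hard failure."""
    strikes = sum(1 for s in latency_ms if s > baseline_ms * factor)
    return strikes >= min_strikes

def drives_to_evacuate(samples_by_drive):
    """Return the drives whose data should be proactively copied off."""
    return [d for d, s in sorted(samples_by_drive.items()) if is_suspect(s)]
```

For example, `drives_to_evacuate({"slot-3": [31.0, 44.0, 27.5, 9.0], "slot-4": [7.9, 8.2, 8.0, 7.5]})` flags only `"slot-3"`, the drive that keeps "showing up a little bit slow."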
So I went to a thin server architecture on the right and then I put the external storage underneath it. So we get a lower footprint, a better carbon footprint, on the right side. I also have a lower total cost of ownership, and I'm gonna show you some details on that in the next slide. But I also have about a 20 to 50% reduction in disk footprint. And how do you do that? Well, you have to pick your favorite flavor of data protection. And that, again, could be leveraging some existing infrastructure, or your NetApp and EMC or Hitachi or whatever flavor, or you buy into CRUSH, or you buy into striping, or you buy into something else. Using different schemes allows you to meet different objectives. Okay, so we are going to talk about cost. So if I ask you what is the most expensive storage to deploy, what's the answer? Fibre Channel SAN, that'd be EMC, right? No. They're the leaders in the Fibre Channel SAN. You'd think it would be external storage, wouldn't it? So we did some work and we started to look at it, and actually it depends on how you do your replication layers. So if you do a one-layer replication, you know, one-X high availability, I'm gonna buy into NetApp or EMC or some high availability storage, that's the blue line, the low one on the right. It costs a little bit on your acquisition cost, but over time it's actually pretty cheap. Not a lot of people wanna put everything on one copy. So what if I go to two copies? That's the red line: high availability, two copies. And what we did was we compared it with the JBODs, or white box. Now white box had a really, really good initial acquisition cost, my CapEx cost, but over five years that green bar continues to grow, because there's more cost in it associated with higher server count, higher footprint, more network traffic, more disk failures.
So there's a lot of things that go into this, everything from bodies, what does it cost to have my people on the floor doing this, to things like what's my power, my rack space, everything that we could think of throwing in there. And this is not including the software stacks on top, but this is maintenance and the whole thing. So it's important to note that as you start to look at the total cost, the things that are number one or number two for your IT shop, what they care about, is your total cost. Question? Is that usable? That's usable, right, Matt? Yeah, it is usable. This is a three-replication-copy setup, and I think we used Swift in this example, wasn't it? So it's three copies of Swift versus two copies of enterprise storage. You're looking at 192 usable replicated twice versus 192 usable replicated three times. Yeah, it's two versus three, okay? Yes. You do replication on the JBOD; we would use high availability mirrored, or high availability with two copies, on the enterprise DAS, okay? Yep, all right. So with that, you used the term erasure coding. Everybody thinks they have their favorite tool. It's usually a hammer, and everything looks like a nail. I've got a white box, I've got this, I've got clustered Data ONTAP, I've got VMAX, I've got XYZ. So everything looks like the one thing I can do, and I can hit it with a hammer. IT shops don't really care about just a hammer. They wanna know that we have a tool belt, and sometimes I need a screwdriver. Sometimes I need to do things a little bit differently. So as you're starting to look at moving your proof of concept into the other application or into a DevOps environment, what are the tools required? It might be reliability. It might be footprint or power or something else. So keep in mind that we have to look at more than just a hammer. We have to look at all the different tools that are gonna get us to a deployment.
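The copy-count arithmetic from that exchange can be made concrete. A rough sketch in Python; the 8+2 RAID 6 geometry under the enterprise DAS is my assumption for illustration, not a figure from the talk:

```python
def raw_tb_needed(usable_tb, copies, raid_data=None, raid_parity=0):
    """Raw disk capacity needed for a target usable capacity.

    copies                -- application-level replicas (e.g. a Swift ring's count)
    raid_data/raid_parity -- optional RAID stripe under each copy,
                             e.g. 8 data + 2 parity for RAID 6 (assumed geometry)
    """
    raw = float(usable_tb) * copies
    if raid_data:
        raw *= (raid_data + raid_parity) / raid_data
    return raw

# The 192 TB example: three raw copies on white box JBODs versus
# two copies, each sitting on RAID 6 protected enterprise DAS.
white_box = raw_tb_needed(192, copies=3)                               # 576.0 TB raw
enterprise = raw_tb_needed(192, copies=2, raid_data=8, raid_parity=2)  # 480.0 TB raw
```

With these particular assumptions the enterprise layout needs roughly 17% less raw disk; the 20 to 50% range quoted in the talk presumably reflects different RAID geometries, drive densities and copy counts.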
Okay, so standard white box replication, I believe Matt's gonna share a few ideas on how that application works. Right button. Thank you, Scott. So really when we start with this, so again we've got these three servers and in this case we have JBODs underneath them but they're dedicated JBODs to each one, to each server. But in order to actually connect them, we're using high performance ethernet. Originally we started with one gig, now we've moved to 10. And you can see that each of these servers has the same components in it. So I have CPUs, I have RAM and I have disks. Well, in this case when we have a disk that fails, well first of all when we have IO, we wanna copy it in multiple places to make sure it's there. But when we have a disk that fails, we have to resilver from somewhere else. And when we have to do that, there's a huge tax because one, I'm using this data network and really the goal of that data network should be just to write data or to read data. But now I'm having to suck up a significant portion of it for reconstruction. So I'm potentially impacting the performance, the read write performance, while I'm resilvering. And the time to actually fully copy some of these drives, especially at six terabytes, isn't measured in minutes, it's not measured in hours, it's measured in days. So this is a real non-trivial process to actually rebuild that. Well what does it look like? And again, not only are we abusing the network, sorry there, we're also abusing the CPU and the RAM. I need places to store that data, I have to take CPU overhead to handle all of the IO. So again, other ways that are gonna degrade the overall performance, provide an inconsistent performance footprint for the environment. Well, when we look at it and just have two servers and two copies, we have the same thing. Now in this particular case, we're gonna use a standard RAID level, let's say five or six, and we do have a hot spare. 
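The claim that recopying a failed drive over the shared data network "is measured in days" is easy to sanity-check with back-of-envelope math. The rates below are assumptions chosen for illustration:

```python
def recopy_hours(capacity_tb, rate_mb_per_s):
    """Hours to push one drive's worth of data at a sustained rate
    (decimal units: 1 TB = 1,000,000 MB)."""
    return capacity_tb * 1_000_000 / rate_mb_per_s / 3600

# A 6 TB SATA drive, with replication throttled to ~20 MB/s so the
# shared 10 GbE fabric can still serve client reads and writes
# (the throttle value is an assumption):
throttled = recopy_hours(6, 20)      # ~83 hours, i.e. 3-4 days
# Even with a dedicated ~1 GB/s of network to itself, it's still hours:
dedicated = recopy_hours(6, 1000)    # ~1.7 hours
```

The point is that the replication traffic competes with client I/O on the same network, so in practice it runs throttled, and the rebuild window stretches from hours into days.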
Now we have multiple different types of RAID technologies; everyone in here has been exposed to it. Well, there we go. Just making sure everyone's awake. Excellent. So again, in this case, this is just looking at it with a traditional RAID level. But there's multiple different ways to do it. We actually have an implementation of CRUSH with RAID 6, we call it Dynamic Disk Pools, that allows us to grow and be flexible. And there's gives and gets with that type of technology. For us, we have a lower write performance envelope. However, when we have a media failure, we have significantly less impact on our IO during rebuild. And at the same time, we can rebuild faster. But in this case, let's just assume I've got RAID 5 or 6 in a regular situation. We've got IO, I lose a disk. Well, now I'm gonna spend time inside the box recreating that data. Now, that was completely transparent to the application environment. I'm still getting the benefits that we'd have, say, in a Swift environment. I've got two copies here, so I'm able to balance them across the two servers. I've got dual proxies for dual interfaces. But when we have the simple case of just a media failure, instead of having to burden the entire system with copying over the primary network, I can reconstruct the data underneath the covers. When we talk to most IT managers, one of the things we find, especially storage admins, well actually application admins, more so, is they really don't like yellow indicators on things. So we've looked at some very large scale Swift deployments. And one of the things that we've found is that some of our larger customers have actually stated that 50% of their data is in flight at any time, because it's being recopied, because they have thousands of disks and they're serial ATA disks. They're not enterprise serial attached SCSI disks.
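The in-box rebuild trade-off described here, a single hot spare versus reconstruction spread across a pool of drives, can be modeled roughly. All the bandwidth numbers and the pool size are assumptions, and this is a simplification of what a real Dynamic Disk Pools implementation does:

```python
def spare_rebuild_hours(capacity_tb, spare_write_mb_s=150):
    """Classic RAID: the rebuild is bottlenecked by the write speed
    of the single hot-spare drive (rate is an assumed value)."""
    return capacity_tb * 1_000_000 / spare_write_mb_s / 3600

def pool_rebuild_hours(capacity_tb, pool_drives=24, drive_mb_s=150,
                       rebuild_share=0.25):
    """Pool layout: reconstructed data lands on spare capacity spread
    over every pool member, so the drives share the work.
    rebuild_share caps how much of each drive's bandwidth the rebuild
    may consume, leaving the rest for client I/O (assumed value)."""
    aggregate = pool_drives * drive_mb_s * rebuild_share
    return capacity_tb * 1_000_000 / aggregate / 3600
```

With these numbers, rebuilding a 6 TB drive drops from roughly 11 hours onto one spare to under 2 hours across a 24-drive pool, and none of that traffic touches the client-facing network.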
So it's amazing that we have to have all of this data flying around, independent of even bringing new data into the cluster in and of itself. Well, in our case, this is completely transparent. The application says it's in a healthy state. We'll get an actual amber light that will show up, which is something you don't get on your white box typically. If you've got our AutoSupport technology, or call home, phone home, which most enterprise products do, a drive will show up and someone will simply replace the drive. And this will go back to fully normal, and that'll be repurposed as a spare. But it didn't impact the application at all. It was completely transparent. At the same time, though, I still get to maintain all of the good benefits, because while I can scale vertically and add more disks, I get the horizontal infinite scale because I can just add more servers with more enterprise storage underneath it. So we refer to it as a better together story. So what I'd like to talk about for a couple of minutes is how we actually proved this out. When we started looking at this a while ago, our folks inside of our team that were working with OpenStack, one of whom was one of the founders of OpenStack, came to us and said, we think this Swift thing could work on E-Series, but we don't know, so let's test it out. And so we tested it in our lab and it worked, and so we started talking about it. And it turns out that our field team came to us with a customer, the University of Melbourne. In Australia they actually have a federal mandate to have research clusters available country-wide. And so if you think about it, you've got millions of people that wanna have access to infinitely scaled storage and compute resources, and what they wanted to do was make sure that they had the most reliable implementation of this that they possibly could.
And so they're like, hey, we saw that you guys said you could use Swift on top of enterprise storage, tell us more. And I'm like, well, here's what we know, here's what we've tested. I haven't really put this out in the field yet, but it looks good, the cost numbers look really good, the performance is good, we simulated some failures. And they're like, well, why don't you come down to Australia and talk to the University of Melbourne about this? And I was like, well, hold on, who's actually gonna manage their OpenStack environment for them? And they're like, well, they have a partner, AppTira. So I'm like, let's talk to AppTira, because what was more important was that the people who were actually going to have to set up, deploy, manage, and deal with all of the day-to-day headaches understood this opportunity, this implementation, the differences, and whether it made sense. And when we met with the partner, AppTira, at first they were a little bit like, well, wait a minute, you're talking about these things we don't talk about in the white box world. We're talking about RAID and high availability and redundant controllers and fans and proprietary hardware, all of these things that we're not supposed to really do. But we went through the value prop, and as it turns out, folks at AppTira had actually used E-Series for a more traditional database deployment. And so we had a couple of conversations and they really got behind it. And in fact, we deployed a two-petabyte Swift cluster with them. And actually we have almost four petabytes on the floor there, because we have Cinder as well on one of our other platforms. And we're looking at another four petabytes for other scale object and block storage with them, because the first deployment worked out so well. They've got information on the project up on their website. AppTira has also published a case study on their website.
So this was very exciting, because of all of these different things that we've talked about, really about the proven reliability, the fact that customers know. I mean, in our case, we've sold almost a million units of this particular platform. And so there's not really a question of whether or not our technology works; it's will it perform and deliver in this particular environment? And what are the sacrifices or challenges with it? And what we've found is that it's better together. So we're very excited about it. Now, is two to 10 petabytes the largest sort of thing you're gonna see? No, not really. I mean, in object storage land, it's pretty much the base. But it's a great starting point. And it's really about how you can repurpose or bring in enterprise expertise that's aligned. AppTira had a whole bunch of storage admins that already knew how to use enterprise storage. And so instead of having to get a new DevOps team or train up people to deal with the white box disk problems, they were able to use their existing resources. It saved the partner money, because they're managing it too. I think Scott wants to come and talk about another use case. Well, as we keep going here, any questions on the case study or the concept of enterprise storage in this environment yet? All right. Have you guys tried it on your standard NAS or just on E-Series? So, have we tried it on FAS as well as E-Series? Yeah, so you certainly can. I mean, that's what's beautiful about having software-defined storage. You absolutely could run it on top of FAS. But here's the challenge that you have with that: Swift is doing the replication, Swift is doing the load balancing. Well, clustered Data ONTAP does all those things for us. So you would essentially be over-provisioning yourself. You would be buying a hardware, integrated appliance platform technology and then a software-defined one, and then you're gonna have to figure out how you wanna split between those.
The beautiful thing with E-Series is it's about reliability, performance and scalability. And we really, really mesh well in an environment where the data management layer is inside the application stack. So software-defined storage technologies are a fantastic fit for your classic block SAN, block enterprise DAS environment. In fact, it's our heritage; we've been doing this for a long time, almost 40 years. With this platform that we've sold almost a million of, we have the world's largest parallel file system installation, 200 petabytes running Lustre at Lawrence Livermore National Labs. So this isn't the first time we've done it. It's the first time we've done it with object storage, but it's actually core to our DNA. Once we realized what actually was going on with Swift, we sort of got over the allergic reaction of, ooh, that's white box stuff, we don't wanna talk about it. And the white box guys got over, ooh, enterprise storage, we don't wanna talk to you. All right, so I'm gonna take that concept of enterprise storage and white box storage back and forth and bring in one more case study. In this case study, we had a customer that went to deploy an analytics application. Pick your favorite one; they had theirs, but it was business intelligence. And they needed to do something in the about 300 terabyte space. And it was something that they really wanted to deploy with direct-attached storage. They wanted white box. The application vendor said, hey, you should do this on white box. And the POC guys said, we wanna do this on white box. But the people that had to support it and deploy it said, no, no, no, we're gonna go with enterprise storage. So there was a little bit of a challenge there, because which one do you follow? The traditional Goliaths that pay the checks, or David, who's come in saying, I've got something cool and I want to do it on white box? So we went with the enterprise DAS solution.
In this case, it's the E-Series 5600 using a bunch of six-terabyte drives, about 300 terabytes per site usable, and it had complete DR. What was really cool about that is that we won against white box because we had better performance, hands down. Now, in our labs we have actually tested a number of different open source solutions on our E-Series platforms, with up to two and a half times better performance. Now obviously you have to tune and tweak and do things, but we've got some pretty smart architects that are playing with this stuff, showing that we can actually do better performance, better reliability and better uptime. Those are really important messages to an IT shop. They want to know, can I afford this and will it work in that environment? And if you put that cool app on top of enterprise gear, you get the best of both worlds. As Matt called it, better together. We thought about handing out Reese's Pieces to everybody to see if that would make the story stick, but we didn't want to get our fingers dirty. Okay, the competitors here: HP, HDS and Cisco. There's a lot of ways to go after this segment, but enterprise storage in that environment is going to give you the values and the support that you need to accelerate those applications in the new environments. All right, we're almost done guys, hang with us. So a little recap: proof of concepts. Typically I bring in something that's really simple. I've got low CapEx cost, I have my white box servers, I'm a Dell shop or an HP shop or whoever, and I'm going to deploy these new applications on top of that. On the left side, I look at decentralized storage management, and it's okay if there's a little risk, because I've got a bunch of smart guys who are going to help me get this deployed. Moving to the right side, in an environment optimized for production, the game changes. I've got people who are worried about upgrades, uptime. Is my network going to be there? Who's going to replace the disk?
Who's going to do all these things if something doesn't work? What about zero-day currency? So everything's cool on your side for your new app, but I also have something else running on that network. Are they compatible? Who's done that compatibility check? So it's really about TCO, and the amount of risk that you're willing to take. How do you take those cool things in with the least amount of risk? So I'm going to come back to my original fun slide. You see him walking across there; he's thinking, I'm going to make it. He's got his little GoPro on his head and everything's going to be great. Doesn't look too bad, right? What's unique about what he's doing, though? Boo. He's wearing a parachute. Nobody's dumb enough to try to do that without a parachute. So what's your parachute? Is it enterprise storage in that environment? Do you have a plan? Because with a parachute, I might even try that. Don't try to go into an environment where you're not set up for success. The community's eventually going to get there to help accelerate the adoption of applications, but today we have things that are maybe a 2.5 platform, a mode 1.5, or whatever terminology you want. A lot of those challenges of buying into the enterprise environment have been solved and the comfort is there. So get yourself into that environment, and look at working with your enterprise storage partners to solve some of those problems. And I think we're done. Any questions before we wrap up? Right here. Security, another big one. Back to the Cisco comment, right? When you're looking at a global environment, security of all my IT assets is a big deal. Yep. Other questions? That red shirt. Well, it depends on which application, which deployment. So we do have Cinder drivers for both the NetApp FAS side and the E-Series side, and depending on what your primary objectives are: is it performance or is it scale? Do I need certain replication engines?
We will decide which platform to recommend based on IT requirements. One of the obvious cases where FAS absolutely shines: FAS's data retention, deduplication, Snapshot, SnapMirror replication and cluster technology is fantastic for virtual machine images. I mean, it's just a slam dunk to think of that. So when you have super critical data that absolutely has to be available at all costs, and you want to just put it there and know that it's there and have it all managed on the back end, that's where you would use FAS. But how much of the data is that super critical inside of an entire private cloud? I think, you know. What about financial institutions? Well, I mean, yeah, there are financial institutions, right? But when we talk about things like object storage, or actually other cases when you want persistent performance with scale, like if someone tells you they need 100 terabytes that they want to be able to persistently mount and have access to with multiple virtual machines, again, E-Series with something like Dynamic Disk Pools over an iSCSI fabric makes sense. Later this year, we're gonna add Fibre Channel support for the E-Series, and then there's a whole conversation about slicing up flash LUNs if you want to spin up database instances, for example, over a Fibre Channel Cinder connection. It's not so much availability. With FAS, you get the opportunity to replicate and do your snapshots with no penalty. With E-Series, you're gonna have different things. You'll have consistent performance. So as you're doing things like mirroring, or through disk failures, that performance is persistent. It's consistent. It's expected. Okay, other questions? Right here. Did you get paid? No? Okay. Well, then we gotta discount his comments then. You've given security, stability, safety and handholding, and a parachute, but is that really agility?
But there are people now, the hyperscalers, that have the ability to continuously roll quick releases into production, which is what we need. That's something that doesn't quite happen in the enterprise. So I'm gonna be interested in your response, philosophically. Philosophically, here's my response. So philosophically, you mentioned the terms agility and hyperscale. Hyperscalers can typically buy a whole lot of engineering people, developers, to go do this. Most IT shops don't. And so they're more traditional. And you're gonna bring that in, and you want me to do what? What do I have to learn? But I've got all my scripts and all my stuff that works. And so you're really looking at the investment: let's say you're the guy who developed that really cool thing and you want to go get it deployed. Do you go with the application to deploy it? Or are you going to have it so that it's deployable, and you can continue to develop? And that transition is something that a lot of developers need to think about. Hyperscalers, they can afford that. They have that army; service providers, hundreds of people. We're in both. We do, well again, from an E-Series perspective, not a product pitch, but we are part of the hyperscale solutions group, so we can scale there. But a lot of those partners will go through a business partner, an integration partner, where they have the resources to do it themselves. And if they can do it themselves, I'm a fantastic building block for them to shortcut a few of the other things that they don't want to have to go develop. Question? And I would like to accelerate it, because we are giving you the security. We're giving you security, we're gonna give you data protection, we're gonna give you those other features that say, all right, now I need to work on what is my number one concern with that new application. And I would take it one step further, just as an example of taking this to the enterprise.
One of the things, when we started this about a year and a half ago, everyone's like, well Matt, FlexPod. Where's the FlexPod version of all this stuff? And I'm like, wow, I am not selling to Enterprise 100 accounts yet on this. This is something that we're responding to; we're being reactive to the market, and we're really taking our heritage in parallel file systems and moving it forward. Well, we actually are working on a FlexPod, and it's gonna have phases, with a Cinder target for critical data management. And then, because it's a set of blade centers, we're actually using iSCSI to connect to E-Series and a SAN fabric. If you had told me a year and a half ago that we'd have implementations like that, it would have been mind-blowing to me that we're getting there. But that's because Cisco, NetApp, and Red Hat see the value in it, because we have customers that are asking for deployable rack infrastructure based on all tier-one, proven enterprise hardware. Well, it's a reference architecture, and so the lesser agility is the only part that I would disagree with, because I think in some ways it's faster: if my IT people already know how to manage Cisco UCS servers, Nexus-class switches, and E-Series enterprise SAN configurations, then they don't have to learn how to deal with white box. I can actually get this to happen faster. And so we're not saying it's a one-size-fits-all approach. We can scale in a lot of directions. And in fact, our own object platform, StorageGRID Webscale, which scales to over 70 petabytes and 16 locations with geo-distributed erasure coding, uses hierarchical erasure coding, running E-Series as the data protection underneath it.
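The Cinder wiring mentioned here, pointing an OpenStack block storage backend at E-Series over iSCSI, looked roughly like this in a `cinder.conf` of that era. The option names come from the NetApp unified driver; the section name and all values are placeholders, and this is a sketch rather than a drop-in configuration, since exact option sets varied by OpenStack release:

```ini
[eseries-iscsi]
volume_backend_name = eseries-iscsi
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
# SANtricity Web Services Proxy that fronts the E-Series controllers
# (hostname, port and credentials are placeholder values):
netapp_server_hostname = proxy.example.org
netapp_server_port = 8080
netapp_login = admin
netapp_password = secret
# Both controllers of the HA pair:
netapp_controller_ips = 10.0.0.10,10.0.0.11
```

The backend is then activated with `enabled_backends = eseries-iscsi` in the `[DEFAULT]` section of the same file.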
So I mean, we can take this in almost any direction, but it's really been about being able to adapt and respond to customers that have questions, that already have these existing skill sets, and determining how they can get the benefits of DevOps and elasticity while still maintaining some of that core value set that they already know, to accelerate deployment. All right, guys, we've got one minute. Any last questions? All right, thank you very much for joining. And if you have any comments, we'll be around for a few minutes. Thank you.