So, first off, thank you for coming today. It's Thursday morning, last day of the show — die-hards. This happens every year; I do EMC World next week too, and in the Thursday sessions you see everyone that was hungover from the party the night before. Luckily the party here was Tuesday night, so hopefully not as many hungover people this morning, but we'll see how it goes. So, I'm Jason Brown, and this is Ben Silverman. I work for EMC on ScaleIO, and Ben works for Mirantis, and we put together today's session as kind of a discussion. We're not going to try and kill you with slideware or anything like that; we just want to talk about software-defined storage and OpenStack. If you are looking at software-defined storage, if you're already using it today, or if you're POC-ing it, we want to give you some tips and hints on decisions you should think about when you're going to deploy it — with OpenStack or not. Obviously this is OpenStack Summit, so there will be a little bit of an OpenStack flavor to it, but the discussion applies in general: what is SDS, why SDS, what are the benefits, what does it look like, why are people interested in this, what are the analysts saying. We'll use our products as examples, but we're not going to get too hardcore into it, because I know you don't want to see a product pitch, so we'll try not to bore you too much with that stuff. Questions are great — I want to keep it interactive. Like I said, it's Thursday morning, the last day of the show, so if you have a burning question, feel free to get up and ask it. We're happy to interrupt the session for that; we don't want you all falling asleep. Otherwise, we'll do some questions at the end. All right.
So, I'm Jason Brown. Like I said, I work at EMC, and I've worked at EMC for 15 years, believe it or not. Everyone looks at me with big bug eyes when I say that, because I have the babyface going on. I've worked in engineering, product management, and now product marketing. My involvement with ScaleIO and OpenStack really revolves around education: coming to trade shows like this, talking to customers at executive briefings, and telling them that if you're looking at this third-platform thing, ScaleIO is a great solution, because it can work for Platform 2 — traditional Oracle, SAP, Microsoft, et cetera — but it also works great in OpenStack clouds, works great with Splunk, and all that other stuff. And that's my product pitch. So yeah, there's the shirt. You can see ELECT 2016 on it; if you know what that is, awesome, because not many people do, even at EMC, but I'm not going to go anything beyond that. All right, thank you, Jason. My name is Ben Silverman. I'm a cloud architect at Mirantis. You can all read my mission: I make magic from unicorn tears and kitty glitter, and out the other side come some great OpenStack architectures. Prior to Mirantis, I architected and engineered the first OpenStack cloud at American Express, where we had quite a few storage challenges and were able to get to about 5,000 instances. Today, I know they're further along than that — some members of the audience here are with that company, so congratulations to them. My real passion is performance and scale, and obviously that dovetails with architecture very nicely. I've been in tech for many years, and I'm a real fan of software-defined storage, so feel free to ask me any questions as we're going through.
Obviously, I work for Mirantis, so I'm very familiar with Mirantis OpenStack and the Fuel installer; if there are any questions on that, I'll be glad to answer them. Might as well stand — this is easier. So obviously: non-technical guy, technical guy, if you were wondering. But I have some ScaleIO folks in the crowd, too, who can answer technical questions if they pop up. So why are we here? I could have mentioned this before: we want to talk about software-defined storage. It's a passion of mine, it's a passion of Ben's, and hopefully we can share some of that passion with you and get you excited about it if it's not something you're familiar with. We want to talk about what it is, why automation of software-defined storage helps with optimization, and the various ways of deploying it — because some people think software-defined storage just means commodity hardware, and it's a little more than that; it can get really, really complicated or not, depending on what your desire is. And then let's have some fun, right? It's OpenStack Summit, it's supposed to be a fun show, it's Thursday, it's the last day. So first and foremost, we want to start with the traditional storage challenges — these legacy arrays. I know many of you sell these, so hopefully it's all right if there are EMC sales guys in the room. Essentially: what are the challenges today, why was SDS even thought of as a thing, and why is it gaining so much traction out there? It's because of the traditional challenges we keep finding with these monolithic arrays, SANs, things like that. Obviously, the high spend on storage hardware: you need to buy a big box. You can't buy a small box; it usually has to be a big box.
And you're probably not going to use all the storage or compute inside that box until three or four years down the line, because you need to buy up front what you'll eventually need. So you have this huge capex cost up front, and you just pray to your capacity-planning gods that it's going to work out by the time you need to do a tech-refresh data migration — and migration is obviously a dirty word as well. And they're pretty rigid. You don't get much flexibility: you don't get to decide what I'm going to run it with, how I'm going to deploy it, what it's going to look like, what the form factor is, any of that. You just get sold it. Then you have to have a professional-services person implement it, a global-services person support it, and you just kind of run it. And of course, the administration aspects can be pretty complex. And then there's scale — one of the biggest things. With software-defined storage, you get a lot more capability from a scale perspective: scale out, scale up, but scale down and scale in as well. A traditional array, as you're probably familiar, is essentially scale-up — and with some of the all-flash products that have come out, scale-out as well. But is that optimal for your workloads and your needs? Maybe not. Vendor lock-in, blah, blah, blah. Agility as well — I think agility is a big one. You see this move to the public cloud because of the agility, right? You can get on-demand storage right away by just putting your credit card number into the computer, and all of a sudden, hey, I've got Amazon. Sweet. I can do stuff with it, and if I need more, I'll just add more. In the traditional model, you obviously don't get that. You get the resiliency, which is great for private cloud, but that public-cloud agility is pretty lacking.
And then, of course, the best part: you save money, right? TCO. On a five-year TCO of a software-defined storage platform versus a traditional array, you can see anywhere from, say, 20% to 60% savings, depending on whether you run a storage-only model of SDS or a hyper-converged model of SDS — and I'll talk about that in a little bit. So you can read the slide. What does IDC say software-defined storage is? It's really two pieces. It's some kind of controller software — and this doesn't mean just management software; it means the software with the data services, the persistence, the management tools, everything like that — on some kind of, I won't say commodity, because that word is being overused, but some kind of industry-standard x86 or even ARM-based server. That's how they define it: no proprietary hardware components, using off-the-shelf white-box servers — no proprietary custom ASICs and CPUs, et cetera. They really try to get as generic as possible there. And it runs on both physical and virtual instances. So whether you're running hypervisors like ESX or KVM or XenServer or Hyper-V, or whether you're running plain Linux or Windows, the ability to be flexible and give you some of that agility on that platform is really what it means. And it's a standalone system; it's autonomous. You've got storage services, you've got data persistence — it's a system, right? So that's their definition. What's your definition of SDS? You know, I think this is all very much on point. And if we think about even the converse of that — what SDS isn't — in my mind, it comes back to the monolithic storage platforms. It isn't a huge truck roll. It isn't a huge professional-services engagement.
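To make that 20%-to-60% five-year TCO claim concrete, here's a toy back-of-the-envelope model. Every dollar figure is a hypothetical placeholder, not EMC pricing; the point is only the shape of the comparison — a big capex buy plus a forklift refresh versus incremental server purchases:

```python
# Toy five-year TCO comparison. All numbers are made-up placeholders
# for illustration -- plug in your own quotes and opex data.

def five_year_tco(capex, annual_opex, refresh_cost=0):
    """Up-front spend, five years of operations, plus any mid-cycle
    tech-refresh/migration cost (typical for a monolithic array)."""
    return capex + 5 * annual_opex + refresh_cost

# Traditional array: large up-front buy plus a tech refresh around year 4.
array = five_year_tco(capex=1_000_000, annual_opex=150_000, refresh_cost=250_000)

# SDS storage-only: commodity servers plus software, no forklift refresh.
storage_only = five_year_tco(capex=550_000, annual_opex=120_000)

# SDS hyper-converged: storage rides on the compute you already run.
hci = five_year_tco(capex=400_000, annual_opex=100_000)

print(array, storage_only, hci)  # 2000000 1150000 900000
print(f"storage-only saves {1 - storage_only / array:.1%} vs. the array")
print(f"hyper-converged saves {1 - hci / array:.1%} vs. the array")
```

With these placeholder inputs, storage-only lands near the low end of the savings range and hyper-converged near the high end, which matches the storage-only versus hyper-converged split mentioned above.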
It's something that a customer can do on their own, using their own operational staff, their own hands and eyes in their data centers. No more making 15 meetings to figure out when we're going to get this huge new platform in, and 15 more meetings to figure out how we're going to pay for it and how to amortize the cost of this monolithic platform over a long period of time. And what else isn't it? It's also not a platform that is rigid in its connections and its agility. Now the community and customers are driving the development of SDS. With the different SDS platforms, they're saying: you know what, I want to connect this way; I want to be able to use the storage in this manner. With your large monolithic appliances — your old type of storage — companies decided the way you'll use it, the way you'll interface with it, the way you do operations. This goes hand in hand with OpenStack: it's a grassroots effort to get people more involved, to hear what people want, and to let them have input into the process. And as a result, these software-defined storage platforms, whether they're open source or closed source, are catering to you — the customers of OpenStack, the users of OpenStack, the operators, the administrators — giving you full agility, full options that you may not have had before. Yeah, right. Makes sense to all of you out there; I see some head nods from what I can see. Oh, I see a question. Yeah, absolutely, go for it. Would you go to the mic so everyone can hear it, if you don't mind? Right behind you there. So what may be some perceived or real disadvantages of SDS as you compare it with traditional storage solutions? Sure. So my take on the question was: what's the disadvantage of SDS?
I think it's not necessarily a disadvantage — it's a challenge — that the flexibility of SDS can lead to complications in deciding what the heck am I going to do with this thing. All of a sudden, you go from that rigid architecture without a lot of agility that you're used to, and now you've got options: I can deploy hyper-converged nodes, I can deploy storage-only nodes, I can run some nodes with Windows, some nodes with Linux, I can run half my system with 20 servers and the other half with a different configuration, I can have five applications in the same single cluster. When you start to look at all the flexibility and agility — which is a big pro — it can also be a challenge, right? What do you think? I think another disadvantage, and I'll touch on this a little later on another slide, is that traditional companies, the traditional C-suite, love the one-throat-to-choke model. So there's a cultural adaptation that has to take place when you are putting software — again, as I said, either open source or closed source, from a company or a group of individuals — onto what could be disparate hardware: a little bit of Dell, a little bit of HP, a little bit of Cisco UCS. It doesn't matter; with software-defined storage, it's white label — bring what you've got and it'll work. The C-suite looks at that and it's a threatening proposition, because the question is: all right, well, who do I contact when my storage is down? That changes once the cultural adoption happens, when they're told: hey, we have the technical expertise in-house; we've always had it.
What we haven't had is the cultural expertise in-house to manage these large monolithic appliances. That's why your bill from XYZ company is multiple tens of millions of dollars a year: they had to put four guys on site to run this, upgrade it, build it, ship it, put it together, expand it. But a fraction of that investment can go into people we already have to run this — the same people who are running our OpenStack platform. So it's an advantage and a disadvantage, depending on who you are and where you sit in the company. And one thing you brought up there — and you'll talk about this later too — is that it's a cultural shift within the organization to go to SDS. Think about a traditional three-layer architecture: you've got your server team and application team, your Fibre Channel networking team, and then your storage team. Those are three disparate teams that do their own jobs, sit in their own little silos, and do their thing, without a lot of cross-collaboration or talking. In an SDS world, there's a cultural shift in the IT organization: all of a sudden I only have two layers, because I've gone from servers, Fibre Channel, and storage arrays to just servers — though I still have application servers and storage servers, so there's some delineation there. And my networking team is now working on an Ethernet, TCP/IP-based network. Then, if you go all the way to a hyper-converged architecture with SDS, you've got the application, the storage, and the compute all together in a single box, and all of a sudden the IT organization — especially the storage admin — goes: well, what am I going to do now? You know? So I don't call it a disadvantage, but it's food for thought when you're looking at an SDS architecture. It's not only a technological shift; it's an organizational shift as well.
I know you're going to touch on that a little more in a bit. Any other quick questions? Sure. Definitely. I mean, I see your case against monolithic storage, but they do stay ahead of the curve in terms of the storage technology itself. If you take flash, or 3D NAND, or XPoint, they are ahead of the curve, and server-based storage is going to — I don't know when it can get to that point. So some workloads and applications that need performance from technologies that can support it will be locked out if we go entirely to server-based storage. Yeah, absolutely. So Ben's going to touch on that a little later. I'm not saying this traditional Platform 2 architecture is going to go away. Obviously, I work for EMC; EMC still sells a lot of SAN and NAS and will continue to. It's like mainframe, right? Mainframe is still around and will be around forever. So there will be specific workloads — depending on your risk tolerance — where these traditional monolithic arrays will still have a place. And on the technology curve, Ben will talk a little more about that as well. But yeah, absolutely, it's a valid point. I'm not trying to say, ooh, SANs are bad, arrays are bad. I'm just saying that in the industry and in my world, we're seeing a shift where customers are more interested in being flexible and agile, while still getting the kind of resiliency they're used to, in a new model. They don't want to buy arrays anymore. They're not interested for their new Platform 3 workloads, their OpenStack environments. They don't want to put an array there, because then they feel like they're taking a step back. They want to move forward with some of this new technology that SDS helps open up. We kind of covered this already, right? SDS benefits — you can probably just tell me what they are based on what we've been talking about. Obviously, the decoupling of the software from the hardware.
So now you're not locked into any specific hardware. You have the flexibility to choose what your hardware looks like. Are you going to buy servers with SSDs in them? With some PCIe flash? Are you just going to run SAS drives? Are you going to have a hybrid model? You've got that flexibility, et cetera. The best part, I think, is the cost savings, like I mentioned before. If you look at a five-year TCO of traditional architectures versus these next-gen SDS architectures, and if you do it right, there are significant cost savings to be achieved over that five-year period. Anything else from you there? All right. So this is just to show you that SDS is like the Ben & Jerry's of IT and of storage: there are a lot of flavors, a lot of things you need to think about and decide, because, as I said before, of the flexibility and agility. If you're building your own, or if you're just looking at what's out there today, there are a lot — a lot — of things to think about, starting with the data organization and architecture. What storage are you going to run on this? Are you going to be a multi-purpose Swiss Army knife that can run file, block, and object? Or a purpose-built system that just runs native block? What's your goal there? For your persistence layer, are you going to be a mesh-mirroring system? Are you going to use erasure coding? Are you going to provide options there? A lot of decisions need to be made. What kind of technology will I have inside in terms of snapshotting, backup, how I work with other products already in the data center, et cetera? Data services, of course: do I want to put dedupe and compression in there and take the CPU hit because I don't care about performance? What's important to me when I'm looking at these architectures? What features and data services are critical?
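Those persistence-layer choices — mesh mirroring versus erasure coding — trade raw capacity for protection in very different ways. Here's a minimal sketch of the usable-capacity math; it deliberately ignores the spare capacity, rebuild reserve, and metadata overhead that real planning must add, and the scheme names are illustrative labels:

```python
def usable_tb(raw_tb, scheme):
    """Rough usable capacity for common SDS persistence schemes.
    Ignores spare/rebuild reserve and metadata overhead on purpose."""
    if scheme == "2x-mirror":        # two full copies (ScaleIO-style mesh mirroring)
        return raw_tb / 2
    if scheme.startswith("ec-"):     # e.g. "ec-8+2": 8 data + 2 parity fragments
        k, m = map(int, scheme[3:].split("+"))
        return raw_tb * k / (k + m)
    raise ValueError(f"unknown scheme: {scheme}")

print(usable_tb(100, "2x-mirror"))   # 50.0 TB usable from 100 TB raw
print(usable_tb(100, "ec-8+2"))      # 80.0 TB usable, at a CPU and rebuild cost
```

Mirroring burns half your raw capacity but keeps rebuilds and reads cheap; erasure coding returns much more usable space but costs CPU on writes and rebuilds — exactly the performance-versus-efficiency decision described above.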
The operating model, right? Am I a DIY shop where I want to just buy the software and go haywire with it? Or do I want someone to give me a turnkey solution that I can just plug in and get going, so I'm already on this journey? There's lots to think about. And never mind all the other stuff here: whether it's cloud-based or not, whether I'm using it for Platform 2 applications or just for Platform 3, whether this is brownfield or greenfield. Again, a lot of choices. All right. So, a little bit of a build slide here to reinforce what I've been talking about. The CEO and CFO goals are pretty straightforward: make money, save money, and reduce risk. And the trends we're seeing are around explosive growth of information and data, et cetera — I don't want to kill you with the marketing. What we've seen with today's three-layer architecture is that you've got the server layer, where you've got some protection, management, and sharing, but you're not using local disk; you've got the Fibre Channel network; and you've got the SAN. And what do you do when you need more storage? Well, you add another box, right? It's proprietary, purpose-built hardware, and you buy a bigger box — when the clicker works. So you can see the trouble here: you're just doing data migrations and tech refreshes and things like that over and over again. And what we've seen is that these SANs have become siloed. If you remember why SANs were created, it was to get rid of the islands of DAS that were out there. That worked well for a little while, but now all of a sudden we're creating islands of SANs, where we have one application per SAN. The data center is becoming overloaded with this hardware and architecture, and the complexity of managing it all together is cumbersome. And, of course, it costs money. It's expensive.
You've got all these opex costs around tech refresh, migration, et cetera. So what we've seen is a shift to hyper-converged architectures, where I just want to use my servers — and this is what SDS allows you to do. This is where I say: okay, now my application is going to sit on the same server where I've got storage and compute. And I love that, because it gives me the maximum cost savings, it gives me a lot of flexibility, and it reduces my administration headaches, because it's all together now in a single thing. When I want more storage or more compute — or in this case both, because they're in the same box — I just add more servers. It's flexible; it grows how you need it to grow. I mentioned the operational change, and yeah, there are some cost savings you can realize. When I talk to service providers, they love this model, right? Because what's the goal of a service provider? To make money. How do you make money? Well, you charge a lot — but then you're not competitive — or you reduce your costs. And a hyper-converged, software-defined storage architecture is the most optimal way to get the best TCO. That's why we see a lot of service providers implementing this model. But the enterprise is a different story. Remember that organizational shift I talked about in the IT organization? Enterprises have built huge, huge organizations in the IT department to manage three layers. So when you go to this model, they're like: oh, they're all generalists now — what does that mean, right? So in the enterprise, what we see is something called storage-only, where they like the idea of software-defined, they like the flexibility it provides, they like the agility, but they're not ready to go all the way to that hyper-converged architecture.
So they do what we call a storage-only architecture, where you use software-defined storage but keep the delineation: your app servers still stay just app servers, and then you bring in other servers to create what we call a storage-only model, where those are the nodes providing the storage. You still have that separation. You've moved to a software-defined architecture, you're using TCP/IP-based networking, but the organizational change hasn't been as severe or as hardcore, right? And this gives you cost savings as well. A lot of enterprises — because they're like the Titanic, and it's going to take a while to make that turn to hyper-converged — like the storage-only model better. And what's great about SDS in general, and especially with ScaleIO, is that you don't need to choose one or the other. You can have some nodes in your system that are storage-only and some that are hyper-converged. You can mix and match. So the power of SDS gives you these options, but it can go above and beyond that as well and give you a lot more. One marketing slide: what is ScaleIO, just in case you don't know, if you didn't attend the Monday session or it hasn't been clear. It's three lightweight software applications installed on industry-standard x86 servers to create a virtual pool of storage and compute across those servers. It's scale-out, scale-in, scale-up, scale-down: you can add servers, remove servers, add disks, remove disks, et cetera. But essentially, when we acquired ScaleIO in 2013, we saw that the founders had really found a way to give you that public-cloud agility with private-cloud resiliency. Customers want the best of both worlds: they want that flexibility, that agility, that on-demand storage.
But we're EMC; we sell to enterprises and service providers, so they need that resiliency, that peace of mind, to be able to store tier-one and tier-two data on the system. It's a native block system — remember, this is block storage — so you're going to have high-performing applications running on this thing. They needed something like that in the environment. And then, from a consumption-choices perspective — this goes back to the slide I talked about before — we try to package it so that customers again have options in how they consume it. Are you a DIY customer who just wants to go haywire because you've got a relationship with Supermicro or Quanta or Dell — we love Dell, right? Dell's great, go EMC, right? We can do that for you: we can just sell you the software, and you go do whatever you want with it, and that gives you the ultimate flexibility and agility. But then you go talk to a hospital or someone out there, and they say: well, that sounds like a lot of work, and we don't really have the resources for that. Just give me something I can put in the corner and turn on. And that's where, on the other end of the spectrum, the VCE VxRack System 1000 Flex — I did not come up with that name, trust me; it's a mouthful — can give you a turnkey solution, where VCE, in conjunction with EMC, gives you the software, the hardware, the switches, the rack, the management software, everything. That's a true hyper-converged, rack-scale architecture, and that's great for some customers. And in the middle, that may be too much, right? We have customers that say: well, that sounds great, but that's too much and it's too expensive. I just want hardware and software. Give me the server, give me the ScaleIO software, and I'll do the rest: I'll put on the OS, I'll put on the hypervisor, I'll get the switches and networking, et cetera. So we've got that option in the middle too.
So, from an SDS best-practices perspective, and around optimization, I'm going to let this guy talk a lot about that. Thank you. Thank you, Jason. So far we've heard what SDS is, what some of the options for SDS are, about ScaleIO, and about some of the architectures and turnkey solutions. But how do we get to this point? How do we decide: do we want SDS? Do we need SDS? How do we make this an optimal decision? So, what drives your storage needs? Your storage needs aren't driven by the way your data center looks, or by whether one of your employees thinks the technology's cool — actually, some companies are like that. It's really your workloads. Your workloads drive your choice of storage, your choice of network, even your choice of platform. Without that data — say, average IOPS or sizing data — you can't even go down the road of considering a storage option, because you're not making an informed choice. That's why, in the best practices, I always have to say the first thing you need to do is visit with your current platform administrators and with your business teams, and find out their needs for performance. How performant does the storage need to be? Do they need 150 IOPS per VM, or are they some sort of Trove-style database or real-time scanning app that will really suck up 5,000 to 10,000 IOPS per VM? Because that makes a difference in what storage you choose. SDS really isn't for everything. We're here promoting it, but the reality is there is a time and a place for appliances. There's a time and a place for flash. There's a time and a place for monolithic appliances that provide specialized performance and feature sets catered to your apps. And I think those are corner cases.
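That "gather the workload data first" advice boils down to simple arithmetic once the numbers are in hand. A sketch, using made-up survey figures, of how per-VM IOPS profiles roll up into the aggregate the storage platform must sustain:

```python
def cluster_peak_iops(profiles):
    """Aggregate peak IOPS demand from per-workload profiles,
    i.e. the survey data gathered from application owners."""
    return sum(vm_count * iops_per_vm for vm_count, iops_per_vm in profiles)

# Hypothetical survey results: (number of VMs, peak IOPS per VM).
profiles = [
    (200, 150),     # general-purpose web/app VMs
    (20, 5_000),    # busy database VMs
]
print(cluster_peak_iops(profiles))   # 130000 -- what the platform must deliver
```

Note how the 20 database VMs dominate the total: a handful of 5,000-IOPS workloads outweigh 200 ordinary ones, which is exactly why the assessment has to be per-workload rather than an average across the estate.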
I think a lot of general-purpose three-tier apps, cloudy apps — your Node.js, your Apache, your Tomcat, all of these types of cloudy apps — can fit on some form of SDS. A lot of people are quick to dismiss it and say: well, I have these requirements, so SDS won't meet them. I think that's a little naive. But without the data on what you need, how do you know what storage you need? So if you're going to highlight anything from the best practices, this is the most important. Yeah, I really love that: define your mission. What is your mission? What do you want to do? If you don't go in with a clear goal — and your goal can shift as you learn more — you can get into trouble. When Ben and I were talking about this session, we agreed on that: define your mission. If that's the one thing you take away from this, we'll be happy, honestly. And in defining your mission, as I said, we talked a little about performance, but there are other things. Replication: how much replication do you need? Do you have policies in your enterprise that say we need triple geo-replication with one copy on Mars? Really: do you have data that is under FedRAMP, PCI, or Sarbanes-Oxley control? What are your security requirements? What are the things you need to pay attention to when considering a different storage platform? And location: do you have a two-region data center? Do you need replication between the SDS platforms in the two regions? Because that really does make a difference, not just with an appliance, but also in the SDS realm. Some SDS platforms can't do replication. And if there is replication, what kind is it: block replication, object replication, or even file-storage replication?
So again, these things all need to be mapped out before you can even begin to decide what storage platform you're going to run on. I've been in IT for a few years, and I've seen people putting things on what we used to call DASD, back in the mainframe era. And the storage they chose — they didn't do this assessment. All they assessed was the meal the salesperson bought them, and then they went out and put the platform in. Later on, they ran into problems: they had a feature set they needed going in, they knew about it, and the storage wasn't able to deliver it. So I don't want to hype this up too much and say this is a revolution, we're disrupting the industry with SDS — that's good marketing. Yeah, yeah. And as our CMO said, this is a disruptive time. This is a chance for you as operators, administrators, thought leaders, and business leaders to come in and say: the way we're doing storage today may not be right. The way we've always done it is not always the best way to continue doing it. And here, based on the information we have from looking at our mission and our requirements, is why. So those two play a part together. People also forget about networking. There's a lot of talk about software-defined networking, but remember, we still have physical networking. Data is bits; it goes across a wire. We're talking about moving data, even within a data center — and some of these SDS clusters can get big. I've seen an SDS cluster with an exabyte on it at a large service provider. So this isn't playtime. These things can get really, really big and still be performant, still deliver the IOPS you need, but you have to keep things in mind. And I have here: create separate front-end and back-end networks. A lot of people forget about that. When these things are doing replication, if you were to lose part of the cluster, it needs a network to replicate across.
And a lot of people put their access and their replication on the same network. So from a technical standpoint, these are some of the things we need to look at. We need to look at networks and how the networks are set up. Are they private? Are they routable? How is all this done? Then there's forecasting initial capacity and growth. Growth is something we'll talk about in the last bullet with capacity planning, but initial capacity: when you're building out your first SDS cluster, you need to think, what do I need to get started right now? What is my minimum viable storage to get started? Don't overbuy, don't underbuy. Use your mission, use your data to figure out what you need to start. And then we talked about cultural shift, very important. Make sure that you have buy-in from everyone, storage, network, business leaders, on what you're gonna do. And then create ongoing capacity planning. Again, traditionally there's a 12 to 15 week lead time for a new appliance. Almost by the time you put it in, you have to start planning for the next one. And again, that's a large cash outlay. With SDS, capacity planning can be done on a monthly, a weekly, almost a daily increment, based on your churn, based on the elasticity of your storage use. So it's much easier to understand and forecast your capacity planning with SDS. But again, you must understand your applications. You must understand your workloads. So as Jason said, if you don't take anything else away from this, it's that storage planning, SDS decisions, storage decisions, whether it's a multi-tier strategy of appliances, SDS, and others, all begin with your workloads and assessing what you have. Anything else? No, that was great. Any questions about that? You know, we do have five minutes left, so I'll make sure if there are any questions about what Ben just said, that it's clear. But it's really true. You gotta know your mission. Yeah, SDS is cool.
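The ongoing capacity planning described above can be sketched in a few lines. This is a minimal illustration with made-up numbers, assuming you sample daily consumption; it is not any vendor's tooling:

```python
# Minimal sketch of rolling SDS capacity planning: average the recent
# daily growth samples and flag an expansion when the projected fill
# date falls inside your hardware procurement lead time.

def days_of_headroom(capacity_tb, used_tb, daily_growth_tb):
    """Days until the cluster fills at the observed growth rate."""
    if daily_growth_tb <= 0:
        return float("inf")          # flat or shrinking usage
    return (capacity_tb - used_tb) / daily_growth_tb

def needs_expansion(capacity_tb, used_tb, growth_samples_tb, lead_time_days):
    """True when it's time to order more nodes (commodity servers,
    not a 12-to-15-week appliance cycle)."""
    avg_growth = sum(growth_samples_tb) / len(growth_samples_tb)
    return days_of_headroom(capacity_tb, used_tb, avg_growth) <= lead_time_days

# 100 TB cluster, 80 TB used, growing ~1 TB/day, 30-day lead time:
# only ~20 days of headroom left, so it's time to expand.
print(needs_expansion(100, 80, [0.9, 1.0, 1.1], 30))   # True
```

Because SDS grows a node or a disk at a time, a loop like this can run daily against real telemetry instead of being a once-a-quarter procurement exercise.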
Yeah, it's cutting edge. Yeah, it's the next big thing. As a marketing guy, you know, I love saying that it's disruptive and things like that, but you gotta be smart about it. You can't just jump in blindfolded, because then you'll get into trouble. Another point, if you wanna make a good start: optimization through automation. So thank you, Jason. So, optimization through automation. Again, I'm not gonna kill you with marketing for Mirantis. However, EMC and Mirantis are very close partners. We use Mirantis Fuel as our orchestration engine, as our deployment management tool for our platform. EMC has worked carefully with us to create what we call a plugin, which extends Fuel, so no more will you have to install the platform and then go in and fat-finger all your configs and then wonder why it doesn't work. Through our platform, the plugin is installed, and in one of the GUI menus inside the platform installer you can put in all of your ScaleIO parameters, and then your whole platform is installed with ScaleIO. If you wanna add additional capacity, it's done through the Fuel installer, and guess what? All of these parameters are saved. They're all part of your platform installer, and they continue to be populated without admin intervention. Nobody can fat-finger it. Yeah, it's about being smart, right? If you wanna do it yourself, DIY, and spend the time, you can write your own scripts and things like that. Perfect. But if you want consistent, smooth, known outcomes, then automation is critical, and Mirantis and EMC can help you do that, basically. So, last thing before I leave: what about Ceph? I was gonna ask about Ceph, because ScaleIO is native block; it competes with Ceph. If you were in Tokyo, then you may have seen Randy Bias and Jeff Thomas do a live battle between ScaleIO and Ceph. But it really comes down to what Randy wrote in a blog post, probably a year ago now.
Are you multi-purpose or are you purpose-built? That really is what it comes down to. Are you a Swiss Army knife like Ceph, where you have block services, file services, object services, and underneath the covers it's an object storage layer sitting on a file system? Or are you looking to get best-of-breed technology for your specific workload? So going back to what Ben said: what's your mission? Is your mission multi-purpose or is your mission purpose-built? Are you an object store or are you native block? Because we're block, we look at performance as really the key differentiator when comparing Ceph and ScaleIO. There are trade-offs within Ceph, because it's an object storage layer to begin with and then builds up from there, and because it has multiple access points; it takes more of a Swiss Army knife approach than a traditional "I just do block, I do performance, great, this is what I do" approach. So when I get asked about it, really, this is the high-level answer. They're both great. Obviously, Ceph is open source, so it gives you more flexibility if you want to tinker with it. For ScaleIO, we have a free and frictionless download, so you can get a non-production version of ScaleIO for free with no time limits, no capacity restrictions, no feature limits, but it's for non-production use only, with no support. So if you do want to do a bake-off on your own, you can do that; we've enabled that for you. And if you want to watch a shorter version of the battle, the battle itself was like 40 minutes on YouTube, we've created a six-minute video for you if you want to check it out, just to see what's there. But it's really all about multi-purpose versus purpose-built. I'm not going to bash Ceph, because I don't need to. ScaleIO is built for block, it does block well, and we can get awesome performance numbers that compete with all-flash arrays. But then, that's kind of our mission, isn't it?
If it's your mission, then it could be a good fit. And if you don't believe anything I say, you think I'm full of crap, and you think Ben's full of crap, please see what the analysts are saying. This server SAN and software-defined storage market is going to explode. It is exploding; it's happening in real time. I used to say the future is coming; now, the time is now. So we're seeing, across the board, whether you're EMC or not, that the investment in traditional arrays is decreasing. You can see the negative CAGR here. And where is that money going? Where are those dollars going? They're going to these software-defined storage technologies, whether they're appliances, whether they're racks, whether they're pure software. That's where they're going. And you can see that the growth rate here is tremendous over the next 10 years or so. If you want to learn more about ScaleIO, you can go to emc.com/scaleio. I didn't put a Mirantis slide in, sorry, I should have plugged a Mirantis slide in, but it's surely mirantis.com, right? Yeah, www.mirantis.com. You can check out all of our plugins and check out our latest OpenStack release with the Fuel installer, which is now part of the foundation, now fully part of the big tent. So not only can you download it and use it, but you can contribute to it, contribute with your own plugins and your own code. And if it wasn't clear, ScaleIO does work with OpenStack; we have Cinder and Nova drivers. Just to make that clear, I didn't put a slide on it because I thought it was self-explanatory; we're here at OpenStack Summit. But if you didn't know, then of course, you can come down to the booth on the show floor to learn more about other EMC products, or just what EMC is doing in general with OpenStack. You know, we're a gold member of the foundation, and we've got an EMC {code} team creating all sorts of cool stuff for open source technologies. So EMC is here to build, extend and optimize your OpenStack experience.
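For reference, wiring ScaleIO into Cinder was, at the time of this talk, a matter of adding a backend stanza to cinder.conf roughly like the following. Option names changed across OpenStack releases and every value here is a placeholder, so treat this as an illustration and check the driver documentation for your version:

```text
[DEFAULT]
enabled_backends = scaleio

[scaleio]
# All values below are placeholders; exact option names vary by release.
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = 10.10.0.100                  # ScaleIO Gateway / REST endpoint
san_login = admin
san_password = secret
sio_protection_domain_name = default
sio_storage_pool_name = default
```

Once the backend is declared and a matching volume type is created, Cinder volumes land on the ScaleIO pool and attach to Nova instances like any other block device.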
So we can definitely help you with that if you have any questions. So I know time is up, but any questions? Did you all fall asleep? Yeah. Thank the Mirantis Bear for showing up. The bear showed up; you want a picture with the bear? About this time of the week, the bear usually can't make it to the ballroom, can't really make it far enough away from the vodka to get down here, so I'm very proud of the bear. If you want pictures with the bear after the session, feel free. And I've got some ScaleIO experts here too, so if you have specific ScaleIO questions, we can help answer those afterwards as well. No questions? Well, I think we have some out in the audience. Go over to the microphone. Yeah, just go to the mic if you don't mind, so we can hear you and see you. Kind of. Yes. It's like, "Free Bird." Yeah. Should wear sunglasses. I know, right? Hello. How you doing? Just a definition of converged versus hyperconverged? Sure. So it's all about the architecture. In a converged infrastructure, you're bringing together the bits and pieces, the networking, the storage, the compute, but the application is still separate, right? The application is still gonna sit in an app server somewhere and then talk to that converged infrastructure. In a hyperconverged infrastructure, you're bringing that application into the infrastructure itself. So it's actually running and living on the same hardware that's providing the storage and compute as well. That's kind of how I see it. Yeah, I mean, I agree. There are some platforms out there, for example the UCS platform, the converged platform, that have compute, network, and a control plane to control all of that. But the applications that live on it are separate from it. Whereas with ScaleIO, you're bringing storage, network, the orchestration model, everything, into the servers themselves that are also running the applications.
So it's like super hyperconverged. You're not splitting anything out to separate control panels or things like that. Yeah, and it varies too. Talk to different people: some people think hyperconverged means an appliance, right? Like a Nutanix, because you've got the VM or the hypervisor included with it, and then the application as well. Bringing that all together means it's hyperconverged too. So depending on who you talk to, it can vary. That's the point I was trying to make. Is it hypervisor-converged? Is that why it's called hyperconverged? No, that's not the definition. That's good though, that's good. I think I'm gonna work on something with that. No, but it's not. Hyperconverged doesn't mean you're running a hypervisor. Hyperconverged just means that the application sits on the same exact infrastructure as the storage and compute, basically. And I think it's a semantic game. Because with scale, we talk about, we're gonna run this at scale, and oh no, now we've reached hyperscale. Yeah, what does that mean? How many compute nodes is hyperscale? Where's the line? Yeah, you know, 999 nodes is scale and 1,000 is hyperscale. Yeah, exactly. No, again, it's just a semantic game, how we use the words. Any other questions? Going once? Going once, going twice. Going twice. Thank you, everyone. I appreciate it. Like I said, I know it's the last day. I know everyone's a little tired. Congratulations for making it here and still being upright. Thank you. And thank you for the questions, too. I really appreciate it.