Good afternoon, everyone. Can you hear me in the back? Okay. How is your day going so far? Awesome, awesome. So everybody has been talking about containers and microservices and Platform 2 applications and Platform 3 applications. I'm going to talk about storage, because at the end of the day everything sits on storage, right? And in the context of private cloud it's often an afterthought, but it can be a real bottleneck if it isn't thought through properly. So I'm going to talk about EMC XtremIO, our scale-out all-flash storage, really designed for this virtualized era and optimized to take advantage of everything flash has to offer.

So what has happened in the data center? If you think about the last five to ten years, data centers have undergone a dramatic transition. Storage, however, hasn't fundamentally changed. First, environments have gotten extremely dense; there are hundreds, if not thousands, of VMs in a typical enterprise data center. So what happens when a whole bunch of VMs sit on top of a hypervisor and try to access the storage layer? The hypervisor essentially randomizes all those requests, chops them up and interleaves them; it's called the I/O blender effect. Storage was not designed for this kind of small-block, highly random I/O, so it can become a bottleneck if it cannot serve those requests in a timely manner.

Secondly, there is provisioning and cloning of VMs, which is something entirely different from provisioning a new hardware machine. These provisioning and cloning activities have become a lot more automated, so it's not only the VM admins who do them anymore. If you are running a private cloud, your end users are cloning VMs and provisioning VMs, and as a VM admin you have no control over when that is going to happen. These cloning activities put an extremely high I/O load on the storage, and they tend to kick in at the wrong time of day. If your mission-critical Oracle database is running and somebody says "clone me into 20 VMs," your Oracle is going to come to a crawling halt. That's another reality.

Next, you often have policy-based automation: you load-balance your VMs and rebalance your environment in a very dynamic manner, so again you don't know when that is going to happen, and it is really stressful for storage. Traditional storage architecture was simply not designed for this. Storage fundamentally hasn't changed in the last few years; it is still the dual-controller architecture that has been prevalent for the last 15 to 20 years. There have been evolutionary, incremental changes to how storage is optimized, but fundamentally it hasn't changed for these new realities: massive small-block random I/O, cloning multi-gigabyte VMs on the fly, vMotion moving VMs across the compute tier as well as the storage tier. Storage really hasn't changed, and it can often become a bottleneck in these new environments. So what do you actually need to make sure your private cloud,
your OpenStack private cloud, runs optimally and predictably on storage? If you think about the storage requirements, the first and most basic one is acceleration. Essentially that means you want to be able to deploy your private cloud quickly; time to value has become extremely important, and it has become a competitive factor for many organizations. So you want to be able to deploy new services on your private cloud quickly. The storage also needs to provide the kind of performance that your new applications, which use a whole lot of data, need. So acceleration not only from a deployment standpoint, but also from an application performance standpoint.

The storage platform also needs to consolidate multiple different kinds of workloads very effectively. A private cloud is by nature a mixed workload environment; you have no control over what it is going to host. Sometimes it may be an OLTP kind of environment, it may have VDI on it, it may have any number of other workloads, and you have no control over that. So you need a storage platform that can handle these diverse workload characteristics efficiently. You want a storage environment that not only provides the performance those diverse needs require, but also consolidates them efficiently. You have hundreds and hundreds of VMs that are inherently duplicates of each other, so how do you architect storage that can take advantage of that duplication and deduplicate it away? That's also a critical consolidation consideration. And then lastly, integration: although storage sits at the bottom of the stack, it needs to integrate with everything above it, with compute, management, and application orchestration, all of those layers. That's what you need in the storage platform, and I'm going to demonstrate how XtremIO fits into this and how it can help you achieve all of these things really optimally, really efficiently, and in a really simple manner.

So I'm just going to run you through a few slides that explain how XtremIO was architected, what the philosophy behind it is, and how it's really designed for today's reality. Flash has been on the market for a few years now, and when flash first came to market, everybody's first instinct was: now we can get a whole lot of IOPS, now we can lower the latency, we can go a lot faster. Of course, those are table stakes; flash will get you to go faster. But what else can you do with flash? If you really design and architect your storage with the new realities and with flash in mind, you can actually get a lot smarter, and that was the philosophy behind XtremIO: not only get fast but also get smarter, and put together, that enables some unique business value. XtremIO was designed with that in mind, and it's kind of evident. XtremIO has been on the market a little over two years; it was pretty much the last all-flash storage to come to market, and many competing products were already there. Despite that, XtremIO is the number one all-flash storage today, with 34 percent market share in 2014, and for 2015 we are estimating somewhere close to 40 percent market share. That's greater than numbers one, two and three of the rest put together, with a whole lot of data hosted on XtremIO across some 1,400 customers. So although we took our time to design this storage, right?
We were late to the market, but our approach worked. So what is XtremIO at the heart of it? I think of these four quadrants as the four defining pillars of XtremIO. The first one is the scale-out architecture; that's absolutely essential in today's world. XtremIO scales in a building-block manner, brick by brick, and as you add bricks you are adding capacity as well as performance in the form of controller resources. So it scales evenly and performs predictably because of this scale-out architecture. With this scale-out architecture we can offer all these data services, and I'm going to go into more detail on how they work, but all of these data services help reduce your storage footprint, and they help protect the data in a way that is optimized for flash. Encryption and the writable snapshots, which are XtremIO Virtual Copies, are all done inline, all the time, and have absolutely no performance implications. There is no performance degradation when you use them; in fact there is no way to turn these services off, and there is no performance penalty for using them. Then, how you take those snapshots, how you manage the lifecycle of those copies, and how you integrate with applications, that becomes integrated copy data management. It's something new that XtremIO has pioneered. Copy data management as a bolt-on solution has been on the market, but there are benefits to integrating it with the storage, and XtremIO did just that with this new architecture. I'm going to show you how it really changes the way you manage copies, whether they are copies of VMs, copies of databases, or copies of volumes. And lastly, XtremIO has a strong ecosystem of data center services that EMC offers, whether for data protection, continuous availability, things like that, and it also has a strong ecosystem of applications that it works really well with.

So what is an X-Brick? An X-Brick is the building block, and I'm going to show you what's inside one. You scale in units of X-Bricks: you start with one X-Brick, you can add another to have two, you can add two more to have four, and you can scale up to eight X-Bricks in a cluster. Inside an X-Brick there are two active-active controllers. Each controller has a huge amount of RAM, plenty of CPU cores, and two Fibre Channel and two iSCSI ports for host connectivity, and each X-Brick has 25 MLC flash drives. X-Bricks are available in 10-terabyte, 20-terabyte, and 40-terabyte versions; those are raw physical capacities, without factoring in any data reduction. No matter what size it is, it uses 25 drives, and that lets us give you consistent performance no matter what the size of the brick is.

So, how many of you know what scale-up and scale-out are? All right. Scale-up architectures have been around for quite some time. Essentially they work like this: you have two controllers, one active and one passive, and a bunch of drives; when you need more capacity, you add more disk trays or disk enclosures, right?
But your controller resources don't change. In a scale-up architecture you are also wasting half of your resources: one controller just stands by in passive mode in case the active one fails, so you're really not utilizing your controller resources very well. Moreover, as your capacity grows, the ratio between the performance the controllers can provide and the capacity those controllers have to support deteriorates over time, so you eventually run into a bottleneck. This architecture was kind of okay when the media was spinning media, because spinning media was usually the bottleneck and the controllers rarely were. In certain situations it would not work, but in most situations it did. As soon as you change that media to flash, the bottleneck moves up to the controllers, and what's the point of having fast media if you're going to choke it at the controller level? That's the problem with a scale-up architecture in today's all-flash world, where scale-out is absolutely necessary.

So look at this: this is XtremIO's scale-out. You can put eight X-Bricks in a cluster, so you have 16 active controllers working together all the time. They are closely coupled, tied together with an RDMA fabric on the back end, and they work like one cohesive cluster. With 16 controllers in a cluster, even if one fails you're not going to lose a whole lot of performance, and they are all active all the time. This active-active kind of architecture is really hard to get right, and that's why XtremIO was late to the market, but it came with a good architecture. So with XtremIO you can scale capacity and performance together. Now, the numbers you see here are realistic numbers, not hero numbers, and they are achieved while keeping latency below a millisecond. With an 8 KB block size, you can achieve up to 2 million read IOPS and up to 1.2 million mixed read/write IOPS, and they scale really linearly: two X-Bricks will provide you twice the IOPS that one X-Brick would, exactly twice, while compromising nothing on latency, which stays below a millisecond (there is a quick back-of-the-envelope version of that scaling right after this passage).

Another issue with the scale-up, dual-controller architecture is the controllers themselves. You cannot keep all the metadata in just two controllers, so for one thing the metadata cannot be very granular; it becomes very chunky. Secondly, the metadata has to be de-staged to disk. Metadata is really the key if you want to do anything smart with what's inside the storage, and if you're going to de-stage that metadata to disk, then every time you need it you have to move it back from disk into the controller, work with it, and put it back on disk, which messes up your performance. You also cannot do things like inline deduplication, because you can't tell whether data is a duplicate just within the controller; you have to access the disk to figure that out. Not good. That's one of the reasons XtremIO's scale-out architecture works so well: we keep all of the metadata in the controllers. In this example you have four X-Bricks and eight controllers; every X-Brick has 25 drives, so you have 100 drives supported by eight controllers, and everything is in memory. You want to take a snapshot? It's done right in the metadata.
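A quick aside on the linear-scaling claim above: here is a minimal back-of-the-envelope sketch in Python that simply divides the quoted eight-X-Brick figures evenly per brick. The per-brick numbers are derived from the cluster figures in the talk, not measured per-brick specifications.

```python
# Back-of-the-envelope check of the linear-scaling claim, using the talk's own
# numbers for an eight-X-Brick cluster at an 8 KB block size and sub-millisecond
# latency: up to 2M read IOPS and up to 1.2M mixed read/write IOPS.
READ_IOPS_PER_BRICK = 2_000_000 // 8     # roughly 250K read IOPS per X-Brick
MIXED_IOPS_PER_BRICK = 1_200_000 // 8    # roughly 150K mixed IOPS per X-Brick

def estimated_cluster_iops(n_bricks: int) -> dict:
    """Estimate cluster IOPS for 1 to 8 X-Bricks, assuming linear scale-out."""
    if not 1 <= n_bricks <= 8:
        raise ValueError("clusters in this talk scale from 1 to 8 X-Bricks")
    return {"read_iops": n_bricks * READ_IOPS_PER_BRICK,
            "mixed_iops": n_bricks * MIXED_IOPS_PER_BRICK}

for bricks in (1, 2, 4, 8):
    print(bricks, estimated_cluster_iops(bricks))
```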
A stream of data comes down to the storage cluster, and you can figure out whether there is duplicate data in that stream right in the controller; you don't have to touch the disks at all. And all of these controllers are tied together with a 40 Gb/s RDMA fabric that facilitates this inter-controller communication without any performance degradation. Now, you put it all together, and that's really what you need if you want a consistent, predictable platform that delivers high performance day after day under any load conditions. Flash by itself is not enough, and scale-out is important but not sufficient either. On top of that we lay fingerprint-based uniform data placement. What that does is this: no matter how small a stream of data you write, it gets broken up and spread over all available drives. Any volume you create goes on all available drives; in the last example there were 100 drives, so a volume you create goes on those 100 drives, and if you write one megabyte of data, it gets cut up across all of those drives. So you are actually getting read and write performance from all of those drives, no matter what the data stream looks like. And all of these data services, because we have the metadata in memory and because we have abundant controller resources, CPU cores as well as RAM, we can deliver inline, all the time. There is no way to turn them off, and you don't want to turn them off, because there is no performance penalty.

Thin provisioning: any volume you create is thinly provisioned by default, so no space is consumed until data is written. Inline deduplication: it's global, not only within that volume, not only within that brick, but across the cluster. If that stream of data exists anywhere in any of the volumes in that cluster, it will not be written again; it becomes just a metadata operation, and no disk I/Os are spent. Compression: things that don't dedupe very well, like databases, compress really well, so we see something like 2.3:1 to 3:1 compression ratios on databases. XtremIO Data Protection, XDP, replaces RAID and really simplifies things: you don't have to figure out what RAID level you want. XDP is optimized for flash; you can lose up to two drives in one X-Brick, so in a full cluster you can lose up to 16 drives simultaneously and still keep going. It has extremely low overhead, about 8 percent of the capacity spent on data protection, and it gives you performance better than RAID 5. Encryption is done right on the drives, so there is no performance penalty, no bolt-on solution, no added cost; encryption is right on the drives. And then once you have data, whether it's a VM or a database or a LUN, you can take XtremIO Virtual Copies. Again, copies by definition are duplicates, and we have everything in metadata, so it's a metadata, control-plane operation; there is no data-plane operation when you take a snapshot. You're not spending or wasting any capacity, and you're not really creating any disk I/Os when you make snapshots. So what are the practical implications of that? What can you do with it?
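To make the fingerprint-based placement and global inline deduplication described above a bit more concrete, here is a minimal conceptual sketch in Python. The 8 KB chunk size and the idea of an in-memory fingerprint table that spreads chunks across all drives come from the talk; the SHA-1 fingerprint, the modulo placement and the class itself are illustrative simplifications, not how the array is actually implemented.

```python
import hashlib

CHUNK_SIZE = 8 * 1024  # the 8 KB block size described in the talk

class ToyContentAddressedStore:
    """Conceptual model only: fingerprint each 8 KB chunk, keep the
    fingerprint-to-location map in memory, and spread chunks over all drives."""

    def __init__(self, num_drives: int):
        self.num_drives = num_drives
        self.location = {}      # fingerprint -> (drive, slot), the in-memory "metadata"
        self.refcount = {}      # fingerprint -> number of logical references
        self.next_slot = [0] * num_drives

    def _fingerprint(self, chunk: bytes) -> str:
        # Stand-in fingerprint function; the real array's hashing scheme differs.
        return hashlib.sha1(chunk).hexdigest()

    def write(self, data: bytes) -> list:
        """Write a stream of data; duplicate chunks become metadata-only operations."""
        fingerprints = []
        for off in range(0, len(data), CHUNK_SIZE):
            chunk = data[off:off + CHUNK_SIZE]
            fp = self._fingerprint(chunk)
            if fp in self.location:
                # Global dedupe hit: nothing written to the drives, only a reference bump.
                self.refcount[fp] += 1
            else:
                # Uniform placement: the fingerprint decides which drive holds the chunk.
                drive = int(fp, 16) % self.num_drives
                self.location[fp] = (drive, self.next_slot[drive])
                self.next_slot[drive] += 1
                self.refcount[fp] = 1
            fingerprints.append(fp)
        return fingerprints

store = ToyContentAddressedStore(num_drives=100)
vm_image = bytes(i % 251 for i in range(CHUNK_SIZE * 3))  # three distinct chunks
first = store.write(vm_image)    # new data lands on the drives
second = store.write(vm_image)   # identical data is deduplicated in metadata
assert first == second and len(store.location) == 3
```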
So before that: iCDM, integrated copy data management. You put it all together: you have a hardware platform designed to deliver these functions without performance degradation, you have the smarts within the storage array to create these XtremIO Virtual Copies, you have application integrations with Oracle, with SAP LVM, with Microsoft, and then you allow end users, your DBAs and your application owners, to use that storage functionality, the XtremIO Virtual Copies, from their own application interfaces. That is integrated copy data management. You don't need a bolt-on solution, whether it's for data protection or for DevOps; you can do it right from XtremIO.

So what are the implications? XtremIO Virtual Copies, and the way we do them as in-memory metadata copies: why are they so important in today's world? This is how a VM is conventionally cloned. The hypervisor says, I want to make a copy of a VM, which is how VMs are usually provisioned: you have templates, and you say, from this template make me 20 copies, or five copies, or whatever. When that happens, in traditional setups the data is actually read and written to the disks on the data plane. So five copies will take five times as much space, and it creates a whole lot of I/Os; if it's a 20 GB VM, that 20 GB turns into 500 GB of data being read and written, lots and lots of disk I/Os wasted. It's a brute-force copy: capacity sprawl, disk I/Os, a performance hog, and it takes a whole lot of time. With XtremIO you change all of that. You have your template VM, say a Linux VM that has Oracle in it, really well optimized, fine. You want to make a copy because you want to give it to your DevOps team, so you say, copy that VM. It's done right in the metadata. The top tier in this picture is the metadata in the controllers; the bottom tier is the disks on the data plane. Nothing actually happens on the data plane; it's only a metadata operation, and new pointers are created pointing to the same data. The implications are that the copies are immediate and the copies are free. You're not using any space unless you start changing data, so there is no capacity penalty, and there's absolutely no performance penalty, because you're not wasting any I/Os on reading and writing the data. Reading and writing is a dumb operation if your controllers are smart enough; why would you waste those valuable I/Os, which are better preserved for the workloads that demand them, on dumb copy operations?

Another example: take any database or application environment. Say you have a 10-terabyte database. Usually that translates to more than 70 terabytes, and with something like SAP to about 150 terabytes, by the time you're done making copies for test, dev, QA, staging, whatever it is, and that doesn't even include the multiple copies for your DevOps teams. So 10 terabytes, and you want to make six copies of it: it's extremely slow, it's a brute-force copy, lots of I/Os on the disks, lots of data on the network, and the copies are extremely rigid. If you want to refresh those copies, it doesn't really work; it's a brute-force copy all over again. Now take that on with XtremIO Virtual Copies. You have a 10-terabyte database; let's conservatively assume you can compress it 2:1, so it becomes five terabytes. You want to make six copies, or 60 copies? It doesn't matter: they're free and they're immediate. And a month from now your developer says, hey, I want to look at fresh data. It's not a brute-force copy again; it's just assigning new pointers that point to the new data. That's it.
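To see why a virtual copy is free until data changes, here is another small conceptual sketch, continuing the same toy model and assuming the same simplifications: a volume is just a map of logical blocks to fingerprints, a copy duplicates only that map, and only overwritten blocks ever consume new capacity. The names are invented for illustration; this is not XtremIO's actual metadata layout or API.

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

class ToyVolume:
    """A volume is just metadata: logical block number -> fingerprint."""
    def __init__(self, block_map=None):
        self.block_map = dict(block_map or {})

    def write_block(self, lbn: int, chunk: bytes, store: dict):
        """New or changed data gets a new fingerprint; unchanged blocks
        keep pointing at chunks that already exist."""
        fp = fingerprint(chunk)
        store.setdefault(fp, chunk)       # physical write only if the content is new
        self.block_map[lbn] = fp

    def virtual_copy(self) -> "ToyVolume":
        """A snapshot or clone copies only the pointer table, nothing on the data plane."""
        return ToyVolume(self.block_map)

# Shared physical store: fingerprint -> chunk (stands in for the drives).
physical = {}

template = ToyVolume()
for lbn in range(4):                                   # a tiny "template VM"
    template.write_block(lbn, bytes([lbn]) * 8192, physical)

clone = template.virtual_copy()                        # immediate, consumes no new chunks
assert len(physical) == 4

clone.write_block(2, b"\xff" * 8192, physical)         # only the changed block costs capacity
assert len(physical) == 5
assert template.block_map[2] != clone.block_map[2]     # the template is untouched
```

In this model, refreshing a dev copy is just rebuilding its pointer table against the current production map, which is why the refresh described above is instant rather than a brute-force recopy.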
So: very efficient, instant refreshes, instant copies, and near-zero cost. I say near zero because as soon as you start changing that data there is going to be an incremental cost, but how much change is that going to be, realistically? In database environments, if you count the copies, we see people achieve something like 5:1 or 6:1 data reduction with these copies.

And then the last thing I talked about was integration. We have native integrations and tools for application integration, but we also have open REST APIs that we ourselves use in many different ways; the plugins that we develop use them. And if you as an enterprise have custom workflows, which most of our customers do, you can use these REST APIs to use XtremIO's storage capabilities very effectively and integrate them with your own workflows (there is a small sketch of what such a call can look like at the end of this section). So that's essentially the platform, and architecturally, whether you're using it for OpenStack or for any other purpose, this architecture is always going to benefit you. From an OpenStack standpoint, what we do is use the REST APIs I mentioned: we have a standard Cinder driver that uses these REST APIs, XMS is the XtremIO Management Server, and that's how the driver invokes native XtremIO functionality for the operations that happen in OpenStack. I'm not going to go through each of these; I just want to show you that we have been committed to OpenStack for a while, and over time we've added more and more advanced functionality to our Cinder driver. This is what's in Juno, this is what we added for Kilo, this is what we added for Liberty, and this is what we're working on for Mitaka. Here is one of the reference architectures; we also rigorously test XtremIO with many different OpenStack distributions, and this example is Red Hat. We're working on a paper that should be out in about a month; if you go to emc.com/xtremio in a month's time, you should be able to find it. But we've also done a lot of work with VMware on VMware Integrated OpenStack, and Trevor is going to help explain the work we've done there.
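Before the handoff, here is a rough sketch of the kind of REST-driven automation just described: a script asking the management server for a metadata-only copy, much as the Cinder driver does on a snapshot request. It assumes Python with the requests library, and the host name, credentials, endpoint path and JSON fields are placeholders rather than the documented XMS API, so treat it as the shape of such an integration, not a working recipe.

```python
# Illustrative only: the URL path, payload fields and credentials below are
# placeholders standing in for the XtremIO Management Server (XMS) REST API,
# not its documented interface.
import requests

XMS_HOST = "xms.example.com"     # hypothetical XMS address
AUTH = ("admin", "changeme")     # placeholder credentials

def create_virtual_copy(volume_name: str, snapshot_name: str) -> dict:
    """Ask the management server for a metadata-only copy of a volume,
    roughly what a Cinder driver would do for a snapshot-create call."""
    url = f"https://{XMS_HOST}/api/json/v2/types/snapshots"   # placeholder path
    body = {"snapshot-set-name": snapshot_name,                # placeholder fields
            "volume-list": [volume_name]}
    resp = requests.post(url, json=body, auth=AUTH, verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # For example, refresh a dev copy of a production database volume.
    print(create_virtual_copy("prod_oracle_vol", "dev_refresh_01"))
```

In the OpenStack case you would not call this yourself; the Cinder driver makes the equivalent calls against the XMS whenever volumes, clones or snapshots are requested through Cinder or Horizon.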
All right, thank you, Vikram. So we are working actively with all of our storage partners, including EMC, to make sure that whatever kind of OpenStack experience folks want to have on vSphere works well with their platforms. We do storage a little bit differently with vSphere: as you may or may not know, we leverage the concept of datastores, and all the management of the drivers and all the carving up of LUNs is done in the background and then presented to OpenStack in a simple way. So what we wanted to do was build a reference architecture with XtremIO, and we worked on this particular project to see what kind of performance we could get with the benefits Vikram mentioned before, including extremely fast deployment and the ability to eliminate the overhead of VM copies. You'll see some of the performance numbers we got in a few slides. Okay, I'll just switch to the keyboard.

So before I get into the actual test results, what exactly is VMware Integrated OpenStack? You take your existing vSphere environment and you combine it with open-source OpenStack. We're not making any change to the OpenStack code; we're DefCore compliant, 100 percent. We then configure VMware drivers to support vCenter and VMware NSX API calls. Optionally, if you'd like some visibility into what your environment is doing, you can add vRealize Operations, where we have free plugins to report on what's going on in the environment, as well as vRealize Log Insight for your syslog aggregation. And the cap to all of that is that it's a fully validated architecture, so we test it and make sure that our code actually works properly together and with all of our partners like EMC. On top of that, there's optional support: if you want a support agreement with us, we can support you from OpenStack all the way down to vSphere. These are the projects included with VIO, including Nova, Neutron, Cinder and Glance, which are of course the basic building blocks you need, plus Keystone and Swift, and they correlate to vCenter, NSX and so forth. And I'll pause for people to take pictures of this beautiful slide; I worked long and hard on it, so please take as many pictures as you'd like. All right, all the cameras are down, I'll move on.

Okay, so let's get back to the actual test results. You can see our reference architecture at the link that Vikram put at the bottom of the slide, so it's a good one to take a picture of, and you can check out the paper later. But here are... oh, let me go back, sorry, I was too fast for the cameras. Okay, cameras down. All right, moving on to the test results. For our first test, we wanted to see how long it would take to stand up the control plane on an all-flash array, especially with the benefits XtremIO provides. Just the VM cloning process for our control plane took less than a minute, and that's about 13 VMs, and these are production-sized workloads, not test environments where I scale down the vCPUs or the amount of RAM or storage. So in less than a minute I had the 13 VMs for my control plane already cloned and ready to go, and from there it was a matter of running the Ansible playbooks. As Vikram points out, flash does accelerate that, but it's not only flash; it's also because the copies are done right in the controller RAM, so it's happening at the speed of RAM, with no data copy happening. I'm sorry, Vikram, I didn't mention all of the XtremIO goodness there, but yes, we're definitely taking advantage of all the in-memory operations that XtremIO is able to perform.

So that was the control plane. What about when I deploy instances? I created two images, one 64-gigabyte thin-provisioned and the other thick-provisioned, and despite one being thin and the other thick, the performance was very good. I've seen thick-provisioned instances on other infrastructure platforms take up to an hour for this size of disk, and we were able to get it done in under 10 minutes, both for thin- and thick-provisioned storage. Then I did a deployment of a fairly large number of instances: 50 instances took less than two minutes, and a hundred instances took about three minutes, so a pretty much linear progression in how performance scales with the number of deployments. And last but not least, image imports can also be a time-consuming exercise, and now the slide is complete, feel free to take pictures. We have the 64-gigabyte thin image versus the thick image, and I was really surprised to see
It was all fiber channel, but still it was pretty Good to see how quickly I was able to import my workloads and Vikram had to stop me from running out the building with an X brick And I challenge him to find out if whether I took one or not But I think I did I was able to get away with it But I just showing you the power of working with OpenStack on top of Xtreme.io All right. Thank you, Vikram. Thank you So I guess we have one more summary slide That's it So, you know, we talked about you know what happens in a new new world You know, it's it's entirely a different world. The storage hasn't really changed, you know, you got to look at architecture You know in a more innovative manner if you want to be able to meet today's realities Xtreme.io is designed for this new normal, right? So the way it scales out that it's all flash is you know Flash is becoming the new normal that you know We're expecting the the price of flash is going to be at parity with disk Anytime this year In memory mirror data makes a whole lot of difference and enables this zero penalty data services and copy services Which are tremendously applicable in today's world compared to your ten years ago and then Xtreme offers exceptional simplicity Consistency performance. There's absolutely no tuning, you know, think about you don't have to worry about your raid Think about you and there are no tears to think worry about, you know You're not tying your volume to a certain drive So you don't have to count your spindles how easy the life would be if you don't have you put the box in there And there's nothing you want to do with it. That's it and Comprehensive OpenStack opens OpenStack support and Commitment we've been working with OpenStack since you know Consistently because it consistently adding more and more functionality and then lastly VIO provides good enterprise car class Support and reliability around vSphere put together that combination can be what enterprise may find really appealing. Thank you any questions Iops did you won't have any so 1.2 million IOPS is it was it with deduplication so deduplication has no implications on IOPS So you say for example, you're running a slab at 8k Size block size mixed, you know 50 50 read write That's what you would get in a 16 a 16 controller 8x brick cluster And the point is that you know one next break would give you an eighth of that So you can scale your performance predictably, you know, two will give you twice the one Which is really rare to see in a storage environment traditionally any other You talked about zero cost snapshots, right in your presentation. Yeah, so obviously there's nothing at zero cost, right? So it's it's really You are delaying the cost for later so yes, so if I may ask it's it's like Once thought that VM starts changing and some of those VMs would be changing very fast So what is the price that? The delta the delta that that you you know you have to cater for so if it's a you know You take a Windows VM that has Oracle or say Linux VM that has Oracle running in it You clone that out of the gate. It'll have no cost, right because it's exactly identical After that what's gonna change your Oracle database may change Oracle application is not going to change the Linux operating system is not going to change So it really is depends on what's what and how much the rate of change you're looking at And I know if you take the VM Uninstall Oracle and put something else on it, then it's probably going to be a whole lot more change. I Agree with that statement. 
What I'm asking is, what are the mechanics behind that? Do you have some kind of watchdog that monitors, relative to the time you took the snapshot, what has been changed since then, and keeps writing it somewhere? So the snapshot is treated as a separate entity, and you can monitor it from your XtremIO GUI. You will see the capacity; it's a volume that you're snapping, right? So from a capacity standpoint you will see how much extra space that specific volume has taken.

You mentioned that your data is distributed using some kind of fingerprint; I guess that's some kind of hash function? Mm-hmm. Is that done on the client side, or is it done once the data gets to the array and then over the RDMA fabric? It happens when the data comes into the array, and then the controllers take care of it; they will basically assign the fingerprints, and if the data looks like a duplicate it will get the same fingerprint. So does that mean you're fully symmetric on your front end? What do you mean by that? That I can do I/O access from any controller to any volume. Yes, absolutely; it doesn't matter whether you have four controllers or eight controllers, because the volume is going to be striped across all of them. Yes, and that's why we need that 40 Gb InfiniBand fabric.

What kind of recovery time would you have if you lost a disk and needed to replace it, let's say in a fairly active scenario? So you can lose two disks and there won't be any downtime. But what about the rebuild time once you put the new drives in? Unfortunately, I don't have the answer for that. Thanks.

A question on the deduplication: you mentioned the 1.2 million IOPS remains constant, so if I have a 5:1 dedupe ratio, it's only a capacity play at that point, with no penalty whatsoever on latency or the I/O rate? Yes; dedupe or compression has no implications for IOPS or latency, absolutely none, no matter whether it's 5:1 or a VDI environment, where we'll see something between 10:1 and 20:1. It's only about capacity. And the thing is, no matter how small the data is, it's going to get broken into 8 KB chunks, that's our block size, and it's going to get spread over all available drives. So the data stream can be fairly small, but it's going to sit on all of those drives; no matter how much you dedupe, it's still going to sit on all of those drives. Any more questions? Thank you.