Come on in, sit down, I think we're about ready to get going. Ready? Yep. Hi, I'm Randy Bias. Probably many of you know me; for those of you who don't, I might have been involved with this OpenStack thing for a little while, since about summer of 2010. During that time I've spent a lot of time talking to customers and people who are deploying clouds, and there's one thing I tell them very consistently, over and over again. I say there's sort of the old way of doing things (call it legacy, call it enterprise computing, call it second platform, whatever you want) and then there's the new way, the cloud native way, third platform. And when I tell them this, I do it because I want them to understand that you can't really cross the streams. If you take second platform, non cloud native applications and you try to stick them on Amazon, just lift and shift, they have a problem: they can't manage their own availability. I don't know if you've ever tried to put Chef or Puppet around Oracle. I have, and it's not pleasant. So the whole thing is, it's very difficult to apply DevOps and those kinds of things to a platform 2 app and stick it on platform 3 infrastructure, a cloud native infrastructure. And it's also difficult to go back the other way, but for different reasons: when you take a cloud native application and you stick it on something like VMware or a Vblock, it's very expensive to run, and a lot of what we're trying to do is reduce costs, so you don't get the value out of it. True story: there was a risk analytics company in New Jersey that was telling me very proudly about their Hadoop deployment, and they had put Hadoop on top of their Cisco UCS B-Series blades attached to a Fibre Channel SAN. So they didn't get the value of Hadoop, because they were running a modern cloud native application on gold-plated infrastructure. But if we stop and think about it, at some point in the future the cloud native systems, Amazon Web Services, Google, OpenStack, all those things, become good enough to run at least some of the third platform, or excuse me, some of the legacy workloads. So I did a blog post on Friday to kind of tee up this session, and I talked about this thing called the rancher's dilemma, which is that you might be thinking pets, you might be thinking cattle, but if you're a rancher you've got to deal with both whether you like it or not. And that's where most enterprises are today. So what I thought would be really cool is if we could show you some technology that EMC has, namely ScaleIO, think of it as a competitor to Ceph, that does some of these things, that gives you certain capabilities you couldn't have in sort of more modern cloud systems, so that you can afford to run legacy applications. A good example of that is Oracle RAC. Oracle RAC is a distributed version of an Oracle database, and it requires writing to a multi-master disk, which means every database instance has to write to the same disk. ScaleIO can actually allow for that particular use case, and that's very difficult when you're running a scale-out model. So today we're going to show you a very high performance Oracle workload, and then live migrating it, and it's completely enabled by ScaleIO. Thank you, Randy. Can you hear me?
Cool. So just as far as the flow goes, I'm going to talk a little bit about ScaleIO for those that aren't familiar, how we've integrated it into OpenStack, talk a little bit about what the demo is built on, and then I'm going to do the demo, so there'll be periods where you can ask questions while we're waiting for things in the demo to fail over. And then at the end we'll have a lottery to give away a free Amazon Echo. That's what you all came for, right?

A little bit about myself: I'm the ScaleIO product manager, and my goal is to make sure that ScaleIO is integrated as well as it can be into OpenStack, through our vendors or directly with OpenStack as well. So please do reach out to me if you see any gaps in anything we've implemented, in our Mirantis Fuel plugins or in the Canonical charms that we're developing right now. I want to hear it, and I want to make it as good as it can be, because I think ScaleIO is an absolutely natural fit for OpenStack as you play with it and download it. It's free to download and play with, with no limits, no time limits, just non-production. Let us know what you think; we want that feedback. We want to improve the product to make it the best scale-out block storage for OpenStack.

So what is ScaleIO? ScaleIO is a scale-out block storage solution that runs over Ethernet or, sorry, Ethernet or InfiniBand; it just needs a TCP/IP connection between the nodes. It takes the local disks in each node and shares them out, and it does it with an extremely small amount of overhead. We're talking 5 to 15 percent of the CPU, and just a few megabytes of memory on each one of these hosts, which enables you to deploy this in a hyperconverged manner if you want to, or you can do two-layer, keeping your application and your storage separate. So again, it aggregates all those disks and scales linearly: we can go from three hosts all the way up to 1,024 hosts, and you're getting scaling of IO, not only the performance but the capacity, as you grow it. And we can do this on the fly: we can add hosts on the fly, we can remove hosts. You can actually use this as a migration tool. If some great new storage technology comes out, you can buy some new servers with it, add those to the cluster, remove the old ones, and the data will migrate automatically. Or I can say I just need more compute: I can buy nodes with no disks, add them to the cluster, and have them just consume the storage that's being provided, if I've already got enough performance or capacity from what I've built. Or I can buy pure storage nodes with very small CPUs, very little memory, and similar disk configurations, and add those to the cluster. Or I can buy systems that do both, and scale my storage performance and capacity as well as the applications I can run. So it's extremely flexible; I honestly don't know how you could make storage more flexible. We can run on anything from Windows to Ubuntu as an operating system.
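(As a rough illustration of that elasticity, growing or shrinking the cluster is just a couple of CLI calls against the MDM. This is only a sketch: the scli flag names, node names, IPs, and pool names below are assumptions for illustration, not commands captured from this demo environment.)

    # Log in to the MDM cluster first (credentials are placeholders)
    scli --login --username admin --password 'changeme'

    # Add a new storage (SDS) node and contribute one of its local devices
    # to an existing protection domain and storage pool (names are made up)
    scli --add_sds --sds_ip 10.0.0.58 --sds_name sds-new-01 \
         --protection_domain_name openstack --storage_pool_name ssd_pool \
         --device_path /dev/sdb

    # Drain and remove an old node; data rebalances onto the remaining SDSs
    scli --remove_sds --sds_name sds-old-01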
We support all the major hypervisors, and we support all the major cloud infrastructures: Docker, VMware, OpenStack. And we can work on any media, hard drive, SSD, NVMe, it doesn't matter; whatever comes next, it'll work, because it's software that's abstracted from the hardware but fully leverages the hardware. We take full advantage of the RAID controllers you have in your boxes to optimize the performance of those devices. And what's really amazing is these can all be in the same cluster: I could have a hundred servers, ten Windows, ten Linux, ten Red Hat, whatever it is, and they can take their local disks, share them out, and then consume them back, aggregating all of that performance and capacity without having to plan ahead of time for the points where I'm going to have to go buy more storage. I can make that decision on the fly, and it doesn't even matter what OS I need to put on that box. That also means you could have it attached to your legacy systems, which could consume it, for example VMware clusters, and also OpenStack simultaneously. Absolutely. Even bare metal: if you've got a bare metal application that still needs block storage, you can just add a client to it, just like you would an HBA driver, but without the actual Fibre Channel card.

Logically there are three major components to ScaleIO. One is the client, which we refer to as the ScaleIO Data Client, or the shorthand we use, SDC. This is just a driver you load on any system that you want to provide storage to. The ScaleIO Data Server, or SDS, is the component you load on any box where you want to take the local drives and share them with the pool. And then the metadata infrastructure is managed by what we call MDMs, or metadata managers, and that's a cluster of three or five depending on how much resiliency you want. Again, all of these have very low footprints, so I can run all three on a single box and take very little of that box's resources while doing that work. And the cool thing about this architecture is that the metadata manager is not in the data path in any way, shape, or form. When I'm reading and writing data, the client knows exactly which hosts have the blocks of storage it needs, and in a massively parallel way it goes out to all those SDSs to read and write that data; the metadata manager is not a bottleneck in that path. The only time the metadata manager does any work is if there's a drive failure or a node failure, where it takes action and communicates to the other components, or when I create a volume and map it to a host. Other than that, it's not doing anything other than monitoring the system.

For integration, we've actually had an OpenStack integration since Havana, but we got upstreamed in the Liberty release, and right now we have beta plugins, which Randy's development team has been working on with me, for Mirantis Fuel and for Canonical. I've got links at the end; please do download and try them if you're using those distributions. If you're using other distributions, we have Ansible and Puppet scripts out there to try, but if you do a manual install you'll see it's actually very simple; you could take any automation framework and automate this deployment yourself. So the integration with OpenStack is really just a Cinder driver calling our API. We've got a gateway service that we run; it exposes a REST API that talks to the MDM to create a volume, delete a volume, or grow a volume, and then the Nova compute component attaches that volume once it's been provisioned.
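(For reference, wiring that driver up is just a backend stanza in cinder.conf pointing at the ScaleIO gateway. Treat this as a sketch: the option names follow the Liberty/Mitaka-era driver as I understand it, and the IP, credentials, pool, and domain names are placeholders.)

    # /etc/cinder/cinder.conf (excerpt), hypothetical backend definition
    [DEFAULT]
    enabled_backends = scaleio

    [scaleio]
    volume_backend_name = scaleio
    # ScaleIO Cinder driver as shipped in the Liberty/Mitaka tree
    volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
    # ScaleIO gateway (REST API) and credentials; values are placeholders
    san_ip = 192.168.10.10
    san_login = admin
    san_password = changeme
    # Where new volumes land
    sio_protection_domain_name = openstack
    sio_storage_pool_name = ssd_pool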
Just a couple of notes on what we did in Mitaka, which just came out: we added full QoS support. We've actually had QoS support before, and I think ScaleIO has pretty cool QoS: you can take a volume and say this volume gets this much IO to this client, or this volume gets this much bandwidth to this client, or I can do both, using both metrics to limit how much that client is going to consume. Prior to Mitaka we could do this, but you had to create a volume type and use extra spec definitions, which totally works, except that it was very specific to ScaleIO, and that's not the OpenStack way. The OpenStack way is to make it generic, so if I ever change storage vendors I don't have to go do that again. So now we've got full support for QoS in OpenStack. We also added support for consistency groups, which is really useful when you're running something like an Oracle database, like we're going to show here, where you have several volumes and when you take a snapshot you need to capture all of them simultaneously. And finally, we added the ability to bring volumes under management or take them out of management, to make OpenStack aware of existing volumes or not.

In the lab, what we've got is an Oracle VM with 64 GB of RAM and 16 virtual CPUs. We're going to use the Swingbench tool as the client to generate load; it's a free tool you can download. The demo flow: first I'm going to show you a really heavy IO load, so I'll run an fio job just to create some massive load and show you a migration during that, just to show how it works. Then we'll do a 20 database user load, and then a 200 database user load, and we'll do migrations for each of those, so we'll have some time while that's kicking off to take questions.

As for hardware, I'm going to connect remotely to gear in Massachusetts. We've got eight servers; seven of them are going to take their local capacity and share it out. They're using three SSDs each, so 21 SSDs are going to be doing all the work in this demo. Now, we also have hard drives in these boxes; some have six, some have ten. We've added them and they are a separate pool; I could put volumes on them as well. Again, that just shows the flexibility of the system: it doesn't matter, I can have a mix of nodes with different numbers of drives, and I can still use them, still allocate storage to them, and still take full advantage of them. On the compute side, I'm going to use the same seven nodes; that's where the applications will run. And then I've got one other client that's purely a client attached to the storage, but not sharing any storage. So as you can see, we're doing hyperconverged and two-layer all on the same system. Two-layer meaning that the storage is on dedicated, separate storage nodes, separate from the compute, right? Yeah, the compute is separate.
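(To circle back on the Mitaka features mentioned a moment ago, here is roughly what QoS, consistency groups, and manage/unmanage look like with the stock Cinder CLI. It's only a sketch: the QoS key names, type names, sizes, and host strings are illustrative, and back-end QoS keys in particular vary by driver.)

    # QoS: create a spec, attach it to a volume type, create a volume of that type
    # (key names here are illustrative; they depend on the backend and consumer)
    cinder qos-create oracle-qos consumer=back-end total_iops_sec=20000
    cinder type-create oracle-ssd
    cinder qos-associate <qos-spec-id> <volume-type-id>
    cinder create --volume-type oracle-ssd --name oracle-data-01 200

    # Consistency groups: group the database volumes so one snapshot captures
    # all of them at the same point in time
    cinder consisgroup-create --name oracle-cg oracle-ssd
    cinder create --consisgroup-id <cg-id> --volume-type oracle-ssd --name oracle-data-02 200
    cinder cgsnapshot-create --name oracle-cg-snap <cg-id>

    # Manage / unmanage: bring an existing backend volume under Cinder's control,
    # or release one without deleting it on the backend
    cinder manage --name imported-vol cinder-host@scaleio#ssd_pool <existing-volume-name>
    cinder unmanage imported-vol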
There are some applications, and Oracle tends to be a prime example for customers, where they want to truly isolate the compute, because Oracle license costs can be so high that they want to separate it out and get every bit of CPU they can out of the boxes they've paid Oracle a lot of money to run on.

The software we're using is Oracle 12, OpenStack Liberty, CentOS 7.2, and ScaleIO 2.0. Here's a little map of how we've wired everything up. Here are our eight nodes. The first one is running our OpenStack controller, and that's what I'll be SSHed into for most of my tasks; that box is running the ScaleIO client and the server, so it's sharing storage and consuming it. The next three boxes are sharing storage and consuming it, as well as running the metadata infrastructure. The next three boxes are sharing storage and consuming it, and the last box is purely consuming it. Each of these hosts has two 10 gig connections for everything it's doing, so all traffic, storage and application, runs over those two 10 gig interfaces. And as we do a live migration, I'm pushing 64 gigs of memory across the wire, so we'll see a little bit of impact on ScaleIO, but it should hold up like a champ. It better.

So you see my three instances here. ProdDB1 is the instance where Oracle is installed; it's not running right now, I don't want it running at the moment, we'll start it after we do the fio load. You'll see a few volumes attached for the database and the application. We also have one called BigIOPS, which is a volume we've allocated just for running fio raw against the device so we can generate some load.

On the ScaleIO side, this is our user interface. You've got the raw capacity. By the way, ScaleIO is licensed by how much capacity you allocate, how much capacity you give to the SDSs; that's how the licensing works, so it's very easy to predict as you go forward and license more if you want to purchase more. I've got my IO. You can see I've got eight SDCs, eight clients attached. I've got seven volumes mapped, so that means I've got seven volumes created that are mapped and attached to hosts right now, but I've actually got 19 volumes defined and I'm only actively using 17. I've got seven SDSs, and among those there are 79 devices being used; those are physical disk drives. Yep. And then on the management side we have three MDMs, two replicas and one tiebreaker in that architecture, and the warning is purely the "I'm using the free version" warning. The last one here is protection domains. ScaleIO has a concept of protection domains for isolation: once you grow to, say, a thousand nodes in a cluster, you want to isolate where your data is being written, so you can take a hundred nodes and create a protection domain of those, and another hundred nodes as another, and within this protection domain we have two storage pools, a hard drive pool and an SSD pool. The protection domain is largely because if you start to have bad behavior or problems within one area, you don't want them spilling over and suddenly crossing your entire thousand node cluster. Yeah, it's all about isolation.

So if we look at the front end, we have this concept of a front end: you've got volumes, clients, and snapshots. If you look at the clients, we can see here that C6 has six volumes mapped to it.
So that's obviously the node that is currently running my database instance. If I look at the back end, we have a protection domain that we've named openstack, and there are two pools here, one SSD and one hard drive. There's really nothing going on there right now, but we can see that we have three devices on each of these nodes, and if I quickly look at the hard drive pool, we'll see this one node has ten hard drives, and if I scroll way down here, we'll see some nodes actually only have six. But those are all still one pool, and I could read and write and create volumes on them right now, so you don't have to have a homogeneous system.

So let's go ahead and fire up fio. There we go. I'm SSHed as root into the system that is running that instance, and we'll see the IOPS grow here. How big are these blocks? That's a good question; I think they're 8K. So we're getting about a hundred and fifty thousand IOPS off of the seven nodes with three SSDs each.

While that's running, I'm going to loop some logs so you can see what's going on as we do this migration. Right now we can see that we're on C6; the instance is active there. I'm going to SSH to C6 and tail the nova-compute log, which is actually not all that exciting, but you're all geeks, you can handle it. And then I'm going to run a loop with virsh to look at just the memory. What you'll see once I start a migration is that it'll start counting down the memory: it's trying to move the memory from one host to another, counting down the memory it has migrated, and it's constantly calculating whether it can get the move done within the default threshold, which I believe is about a second. I haven't gotten confirmation on that from anyone, but it seems to be about a second as the tolerance for the quote-unquote downtime of a migration. So it's calculating constantly as it moves this memory over the wire: can I get it over in a second, can I get it over in a second, and once it can, it does the flip.

So let me do a live migrate. I'm just doing nova live-migration and the name of the instance. This is all the built-in OpenStack live migration support. Yep, nothing special here. We now see the status is migrating, the nova-compute logs up top should start to refresh, and on the right here we should start to see a memory countdown. Did you pray to the demo gods in advance? I did, I even sacrificed a chicken at lunch. It was delicious barbecue.

So you can see the memory counting down, and what we'll find is that with fio there's almost nothing really active in memory, so this will flip over pretty quickly. Once we start to do database loads, where there's a lot going on in memory, this will take longer, because it has to keep moving new writes over while things are still happening in memory. In the VMware world, do they still recommend that you have a billion network interfaces so you can dedicate them to vMotion and all that? Honestly, I don't know; it's been a while since I took my VMware training. They do. Yeah, so this is a bit of a worst case, because we're running the transfer over the same network as the storage. Yeah, we're doing this all with two 10 gig NICs for everything. So we're basically migrating a pet. Yep.
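(For those who want to reproduce the sequence, the demo boils down to a handful of commands. The fio parameters, device path, and instance names here are placeholders; the exact job file and domain names used on stage weren't shown.)

    # Generate a heavy 8K random IO load against the raw "BigIOPS" volume
    fio --name=bigiops --filename=/dev/sdb --direct=1 --ioengine=libaio \
        --rw=randrw --bs=8k --iodepth=128 --numjobs=4 --time_based --runtime=600

    # Kick off the standard OpenStack live migration of the instance
    nova live-migration prod-db-1

    # Watch the remaining memory being copied; when it fits under the downtime
    # threshold, the migration flips over to the target host
    watch -n1 'virsh domjobinfo instance-0000001a | grep -i remaining'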
Yeah, and we wanted to show you Oracle RAC today, but that's coming; we need a little more time in the lab to get that going. That'll be good; that's the ultimate pet. And there, it's flipped over, so now on the left it should switch from migrating to active, and it should tell me which host it's moved to. So it's now active on C5. I've done the migration, and we went from about 150,000 IOPS down to about 117,000 at the very lowest point in there, and now it's bouncing back. So that's pretty good.

I'm going to go ahead and start up the Oracle database, because that takes a minute. And that number under the IOPS, where it says 1.1 gigabytes a second, that's the throughput. Yep. Okay, let me go kill the fio that's still running. And those network interfaces are bonded? No, just two separate networks. Okay, two VLANs. So the database is up; let me go ahead and start the listener service, and now that that's up, let me go to our web interface, if I can find my mouse, and start Swingbench. I've got a console here on the other instance that I'm using for load generation. This is a little bit of a finicky tool, so I may have to stop and start it a couple of times to get it to go, just a heads up; it's not us. Is this an official Oracle load tool? I don't believe so; I think someone at Oracle wrote it on the side. Okay. It's not clear to me whether it's Swingbench or whether I'm still waiting for the database to get ready to do things. When I was first automating Oracle databases with Puppet, around 2006, I found out that when you create an empty Oracle database, it takes 30 minutes to complete. I don't know what it's doing. With MySQL you say create database and it's done in a split second; with Oracle an empty database takes 30 minutes. Nice.

All right, so that works. Right here, these are literally out-of-the-box Swingbench settings; all I did was change the users to 20, and I've got 20 active users. What you'll see on the right is that it says 20 of 20, so if any user loses its session that number will go down. In our migration, if something went really horribly wrong, that number would go down. And here I can see my transactions per second are averaging about 200, and my IO is actually not that high; I've only got about 4,000 IOPS for 20 users doing this workload.

Now, while that's running, I'm going to SSH to C5 to run those logs again and start the memory loop. Okay. So our load gen has ramped up, a little over 200 transactions per second still, so we've stabilized. This is just default settings; that's why it's 200 TPS. So I'm going to run the live migration again and arrange the windows so you can see them. Well, I think you'll be able to guess what happens, right? It moves; there shouldn't be any problems. The important thing is that you shouldn't see any significant dips in the transactions here, and I don't expect to at 5,000 IOPS. But I think it's interesting, because one of the main problems I've seen people have is trying to put pets on cattle clouds and cattle on pet clouds, and I do think there's a way forward where we can start to do some of that. I think there are limits on what kind of pets you can put on a cattle cloud, and there are a lot of gaps, right? We don't really have VMware DRS and HA type capabilities in OpenStack yet. We have some HA type stuff, and we've got live migration tools.
There's a big dip there, but we don't have quite all the pieces that are needed for things like SAP deployments or ERP systems, some of the really classic workloads that can't manage themselves, so this isn't the only tool that's required. But we thought it'd be really interesting to showcase it and show how it works. We did performance in Tokyo; we talked about the performance of ScaleIO there, but you can get performance out of almost any kind of system at this point. Supporting things like live migration of heavy transactional workloads, and supporting things like the multi-master write disk for Oracle RAC, that's actually pretty tricky. So, I don't know, does anybody have any questions or thoughts or feedback? Go to the mic if you have questions.

So that dip is actually expected; it's just when it first starts, because it was queuing the migration, but once it starts you get this brief dip and then it comes right back up. Name and company, and then the question. Good question. So on the back end here, at the pool level, we can see the IOPS and what each device is doing, and there's a whole bunch of advanced views in here; we can actually see device latencies as well. Oh, we can put the fio back up; fio's not running, but we could run it again afterwards. Honestly, I'm not sure of the best way to get that; I think they want to know whether you get the right behavior under pressure. In this subsystem you can see the response time at the application level too, and when it was doing that first migration it went up, but other than that we're averaging 34 milliseconds. But that's a pretty light application load, only 200 TPS. Yeah, I agree. Well, we'll go up to 200 users here in a minute.

So you'll see it's kind of fighting; there's more memory work going on here, so it'll go up and down and up and down, and it should finish without help. Once I get to 200 users it's not going to finish without help; that default one second is not going to be enough time. I find that it needs about one and a half seconds, given the delta of new in-memory changes versus the time it takes to copy them over the wire to the other host. So it's finished flipping over there, and we saw again just a brief blip; the sessions never dropped and the application kept running.

Oh geez, I know you. Identify yourself regardless. Mark Heckman, Ubisoft. This is a boot-from-volume VM? Yes, okay. And so will this work with Mitaka? Because I believe the support has been in Mitaka as a standard Cinder volume. What, live block migrate? Are you talking ephemeral or persistent? I mean the instance itself; the root disk of the instance would be on local instance storage, so ephemeral. Right, so you'd do a live block migrate, not a standard live migration but a block migration, and your ScaleIO volume would just be a regular Cinder volume, right? So, two things. One is, you can't mix your local ephemeral with a persistent volume on Cinder and do a migration with it. You can today. No, you cannot. Why not? We do it. So you can do boot from volume, which is what this is doing.
The root volume is on ScaleIO; we created it as a persistent volume and told the instance to boot from it. We are contributing code to Newton to add support for ephemeral on ScaleIO, so that when you provision ephemeral storage you can say you'd prefer it not to be local but on ScaleIO. We actually have the code today, and it works with the plugins; the plugins automatically apply that patch, but it's not built into OpenStack yet. Because live migration of attached Cinder volumes, as far as I know, is only freshly supported in Mitaka. Yeah, but these are multiple volumes attached to the Oracle database via Cinder that we're live migrating, and we're doing that right now with Liberty. Okay, these are all attached with Cinder on Liberty, except that your actual OS, the boot disk, is boot-from-volume, right? Right, that's different. So it's not Nova ephemeral. Yeah, correct. But we'll take this up afterwards. We can tell you that we have Nova ephemeral drivers working, and my team is working on upstreaming them now in Newton, so there will be support for ScaleIO both for Nova ephemeral and for Cinder volumes. Now, I actually don't know what happens if... do you know what happens? It works. I mean, it's supposed to work as of Mitaka, and even on Icehouse, with some special Red Hat OpenStack patches, you can make it work as well. Okay, cool. Thanks, Mark.

What's the next demo? So now we've got 200 users, we're doing about 1,900 transactions per second, and we're on C6. So let me start the logs.
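(Since the boot-from-volume point caused some back and forth, this is roughly how the instance in the demo would have been created. It's a sketch: the image ID, volume type, flavor, network ID, and sizes are placeholders, not values from the demo environment.)

    # Create a bootable root volume from an image on the ScaleIO backend,
    # then boot the instance from it (no local ephemeral root disk involved)
    cinder create --image-id <oracle-image-id> --volume-type oracle-ssd \
           --name prod-db-1-root 100
    nova boot --flavor oracle.xlarge --boot-volume <root-volume-id> \
         --nic net-id=<network-id> prod-db-1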
So it's just accessing it over the network from from the source correct Yeah, it's really just doing a mapping change and say okay This volume is now available to this host and just doing that flip so that API call is immediately saying okay Change the mapping from host 5 to host 6 so once the live migration is done is there an attempt made to Get the data over or does it always it's already distributed across all of the disks within the pool So all seven nodes are hold holding pieces of that volume all the clients can see all of the disk potentially They just have to have the probate permissions and mapping all the data is shared And one more question on the QoS when you said you have a At what level do you maintain the QoS? meaning as the data is coming out of the VM it travels through your What your scale IO client? Where where all do you enforce the Q it's at the client level? So the client side driver basically says you you're only allowed to have this much IOPS or this much bandwidth Okay, thanks for this volume. So it's specific to the volume All right, so we can see it's kind of struggling. It's fighting Going up and down so let me let me change the threshold. So the command is VIR SH Migrate dash set downtime Instance name and then the millisecond so I'm going to give this a little help and set it to allow it to go up to Five seconds for a downtime. It really only needs about one and a half two seconds to do this flip over and again, it's spiking but The the sessions never never die So they're just not getting as responsive They're not getting as fast as a database response out of the database system So you should you just set a file so we can get into that just Jones question But I remember I was talking to SAP recently I don't know if they're in here, but they have they have a special boxes for SAP HANA in memory database They're like one terabyte Don't do this with that. Don't let my great one terabyte around More questions Yes, sir. Would you mind going to a mic or taking this mic from me? And I'm no FIO expert. So you tell me how to get the latency data I Just look at the disk. I think might be a good start. Yeah, so Richard college GDT Do you support any type of like data locality or anything like that with with scale IO? So we have a concept of fault sets So if you're concerned that say multiple nodes inside of a single chassis like a 2u4n chassis or you're concerned about rack availability You can set a fault set to say this node is one this rack is one fault set And what that tells scale IO is that to create its mirrors outside of that rack? So that entire thing can go down in your your systems available You can also use that for maintenance as to if I'm gonna do reboots on all the hosts inside a rack I can I can tell the system. I'm gonna go into maintenance for this fault set my default If you don't define a fault set, it's every node is a fault set So it's basically telling the system mirror my data outside of this node is the default behavior But you can make it whatever logical concept you want. 
Let me answer that just slightly differently. Part of the reason for the protection domains is not only to have a bounding of faults but also a bounding of performance, because you don't want noisy neighbors or whatever to hurt somebody next door. But the protection domains are arbitrary; they're however you want to set them up. So one way you might do it is to have a protection domain per rack, and what you're essentially ensuring is that all of the data is spread across all the disks in that rack and you're on a single switch or a couple of switches. At that point, whether you're talking to a local disk or to another disk in the rack, there's essentially zero overhead from the network, so from a performance perspective you have locality: you're not adding additional latency with east-west traffic across racks. And you can do that on a per-pool basis, I believe, right? So you could set up a pool that's per rack and another one that goes across racks, depending on what you're trying to accomplish. So you can get the effect that you want, but there isn't a concept of "make sure all the data this database is writing is on this set of disks and also copied somewhere else." Makes sense.

All right, so I've got the fio job running, so we'll see if we can figure out how to get the latency; we'll at least look at the latency on the disk. And if you want to go into the back end there, this gentleman can go ahead and identify himself and ask a question real quick. Yeah, Ryan Nicometo, Concurrent. So you talked a little bit about protection domains; could you talk about data protection in general? Do you have an erasure coding equivalent, or do you rely on individual nodes for that? I'll let the expert answer; I think I know the answer, but... So we basically do a mesh mirroring algorithm. It mirrors across all the SDSs within that storage pool, within that protection domain. It's mirroring outside the node, and that's what makes it very fast and allows us to have a very small footprint, and allows you to run hyperconverged, running these applications on the same systems, because we're not doing all that extra work; we're simply taking the raw performance of the disk, with very little overhead, and giving you full access to it. It's two replicas, though, inside a protection domain? Yeah, it's only two replicas, and that's to optimize rebuild speed. Right, and so if we lose a disk, how long does it typically take to rebuild? It's a matter of minutes; it really is fast. And to answer your question, the rebuilds are massively parallel: we have seven nodes, and when a drive fails all seven nodes are involved in rebuilding that data in a very efficient way, so for an SSD the time is really a matter of seconds. So it's been optimized for tier one performance and throughput, certainly not for extreme data protection; you should still use backups and all that good stuff.

One thing: in your fio job the IO depth is 128, which I think is quite unrealistic. What is the performance when the IO depth is one or two instead of 128? You have a problem with us running 128 for IO depth? Yes. I mean, you should be able to get 32 to each disk, right? Why would you only run with one or two IO depth?
That's just to understand the latency in that case? To better understand the latency, yeah, okay. You want me to run the IO depth at one or two? Two it is. Okay, and I don't know if there's a great way to see the latency like this on the disk. Yeah, has anybody got vmstat or sar or iostat in their back pocket? That's who I should have asked. Thank you. And what number am I looking at? I think it's one of the wait columns. There you go. Was that microseconds or milliseconds? So we're at about half a millisecond, under a millisecond, around 500 microseconds, right? So that's pretty good. I mean, in our tests we're basically a very thin layer on top of the SSDs. If you run ScaleIO across a whole bunch of scale-out SSDs, you basically get a near-linear performance increase across the entire cluster, up to a hundred plus nodes; I think we've tested far above that. So it's pretty impressive, because a lot of times you would go to a classic SAN, like XtremIO or another all-flash array, I'll even mention my competitors SolidFire and Pure, whatever, and you've got two boxes and you're getting a lot of throughput, basically as much as the SSDs in there can give you. But we can get that same speed out of regular commodity SSDs and commodity hardware, scaled out.

So, any final questions? I think we're at our time. Yeah, we're out of time. In summary, I like to say: don't wait. If you're going to be looking at OpenStack, those deployments take time, but either way you need to find a storage technology you're comfortable with that scales out. You don't want storage to be your bottleneck when you're providing cloud services; you don't want to tell your customer, I'm sorry, you've got to wait until I get another rack of X in to provide more compute for you. So I think it's important to think about what your storage plan is and get comfortable with that technology.

ScaleIO is useful for building any cloud. You saw the technologies we support, and you can download it for free. Give us feedback; we've got beta versions of the plugins. You'll have access to these decks later. Chad's blog has a posting on it, and there are some recent reviews from StorageReview where they hammer our technology. And there's a session shortly after this by Swisscom; Swisscom built their cloud on ScaleIO with OpenStack and Docker. Didn't Swisscom start with Ceph? I think so, yeah, I heard that too. And there's a session on Thursday, and if you want the Ceph versus ScaleIO debate, you can watch our session from the Tokyo summit; that's online as well. Our best customer is an ex-Ceph customer, so if you're using Ceph today, that's fantastic; please let me know when you have problems with it and we'll be happy to help you out. Thank you for coming.

And yeah, I stand by it: we're giving away free goods. Free goods, free goods. That's why we kept you all here. I was wondering why you didn't just escape. All right, the number is 305 310 310. Bueller? Bueller? Oh, we've got one. Come on down. Let me verify, trust but verify. That's it. Thank you.