Hello. In today's session we wanted to show you how to design an all-flash Ceph setup for a high-performance storage pool. My name is Marco, and I'm the technical lead for software-defined storage and OpenStack at QCT.

Hi everyone, good morning. My name is Tushar Gohad, and I'm a cloud software architect at Intel. Let me quickly go over the agenda. The emerging theme for software-defined storage with Ceph is all-flash back ends, so we're going to go through a quick journey of our experiments with all-flash Ceph, some tuning outcomes from a cluster that we set up, and some architecture discussion on how you go about designing a Ceph node for best performance on all-flash.

On these slides I want to tell you who we are and what we do. You may already know QCT: QCT is a global data center provider. Quanta, as you may already know, is the hardware manufacturing company, and QCT is a subsidiary of Quanta. We deliver total, turnkey solutions for customers, and as a technology partner we work very closely with Intel to deliver software-defined solutions and platform optimizations.

Before we get to today's topic, I just wanted to get a quick show of hands. How many of you are familiar with Ceph or have worked with Ceph? That's awesome. How many of you have played with all-flash Ceph, like an all-SSD back end for Ceph? Okay. And one last question: how many of you are familiar with NVM Express as a storage interface? Super, so this is going to go smoothly today.

A lot of you may have seen this slide. The emerging theme is the media transition from spinning-hard-drive-based media to all-flash. As this slide points out, by 2020 we do see the media cost, the dollar per gigabyte for solid-state media, coming into parity with hard drive media. That's why this becomes an important theme to discuss: how does Ceph do today with all-flash or all-solid-state storage, and where do we go from here?

If you look at the spectrum of use cases for Ceph today, Ceph remains the block storage of choice for most OpenStack deployments, with RBD. In terms of workloads, this chart shows you the spectrum, with the performance scale on the y-axis and the capacity scale on the x-axis. As you can see, the more capacity-oriented use cases still tend to sit on generic hard-drive media with dense capacity nodes, whereas the non-volatile memory and solid-state focus is basically the other bubbles, which are mainly block workloads. In the exercise that we did jointly with QCT, we focused on databases as the workload.
So that's what this slide set is going to walk you through. On these slides, we actually built a cluster: a five-node all-NVMe Ceph cluster. You can see the spec: Intel Xeon E5-2699 v4 processors, which gives 88 virtual cores with hyper-threading, running RHEL 7.3 with Red Hat Ceph Storage 2, which is the Jewel version, and of course it is FileStore-based, not BlueStore. On the far left side, we paired this with 20 of the 2TB Intel P3520 NVMe drives, based on 3D NAND technology, with 2x replication, and the fill level of the cluster is about 82 percent. On the far right-hand side, at the bottom, we drive it with 10 clients and 100 RBD volumes. So that's the architecture we designed in the QCT lab. By the way, those 20 NVMe devices are populated across five QCT D51BP servers.

These are the test results we want to share with you; this is 4K random write. The blue line is the default configuration: in that scenario we don't do any optimization in ceph.conf, nor at the operating system level. For the orange line, we did a lot of tuning in ceph.conf as well as on the operating system, like the kernel and the TCP window size, all those kinds of things. You can see the big jump between the default and the tuned configuration; after the tuning, the performance is near the top.

Before we go on to the next slide, I just wanted to point out real quick: the optimizations that Marco described, going from the default, which is the blue line, to the tuned one, which is the orange line, involved very well-documented performance tunings in ceph.conf. Together, Intel and QCT have contributed quite a few of those tunings at the BIOS, OS, as well as Ceph level. So this result was with software-based tunings only, at the OS level.

There is actually another aspect that comes into the picture when you go beyond just the software. When you look at a standard high-volume server today, it's pretty much analogous to a dual-socket Xeon-class server, where instead of using a single memory pool you have the concept of local and remote memory. That is basically non-uniform memory access, where your latency goes up as you cross the QPI bus over to the other socket, and that affects your performance. Core pinning or socket pinning the Ceph OSDs has been the traditional way to deal with this, but what we're trying to say here is that there is actually more to be had by paying attention to your node design from a NUMA point of view. If you are able to keep your storage and networking affine to your Ceph OSDs on the same socket, you could actually get quite a bit of performance benefit, and here is a proof point that we're going to show.
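To make the NUMA point a bit more concrete, here is a minimal sketch (an editorial illustration, not part of the talk or the benchmark setup) of how you might check which NUMA node each NVMe controller and physical NIC sits on. It assumes a Linux host with the standard sysfs paths /sys/class/nvme and /sys/class/net; the device names are simply whatever the host exposes.

```python
#!/usr/bin/env python3
"""Report the NUMA node of each NVMe controller and physical NIC.

A minimal sketch for checking whether storage and networking are
balanced across CPU sockets. Assumes a Linux host exposing
/sys/class/nvme and /sys/class/net.
"""
import glob
import os


def numa_node(class_device_path):
    """Return the NUMA node reported by sysfs, or None if unknown."""
    node_file = os.path.join(class_device_path, "device", "numa_node")
    try:
        with open(node_file) as f:
            node = int(f.read().strip())
        # sysfs reports -1 when no affinity information is available.
        return node if node >= 0 else None
    except (OSError, ValueError):
        return None


def main():
    print("NVMe controllers:")
    for path in sorted(glob.glob("/sys/class/nvme/nvme*")):
        print(f"  {os.path.basename(path):<10} NUMA node: {numa_node(path)}")

    print("Network interfaces:")
    for path in sorted(glob.glob("/sys/class/net/*")):
        # Virtual interfaces (lo, bridges, bonds) have no physical device entry.
        if os.path.exists(os.path.join(path, "device")):
            print(f"  {os.path.basename(path):<10} NUMA node: {numa_node(path)}")


if __name__ == "__main__":
    main()
```

If all the NVMe controllers report one node and all the NICs report the other, you are looking at the unbalanced picture described on the next slide.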
So these are some guidelines that have traditionally been documented and talked about at several OpenStack Summits, but what we wanted to add was point number one, which is to balance network and storage devices across CPU sockets. And here is what that does for you. This diagram quickly shows the "before" scenario, where we had all of the NVMe drives, marked here in blue, on CPU zero, and all of the networking on the other socket. When we go from this picture to a more well-balanced picture, where the NVMes and NICs are evenly split across the CPUs so your OSDs don't have to cross sockets to do their IO or networking, this is what you get. Again, the orange line is the software-tuned-only line, where we relied only on OS, BIOS and Ceph tunings; the other line is where we actually split the storage and networking across sockets. As you can see, even at queue depth 8 you're able to get 40 percent better IOPS. And the latency portion is what I wanted to draw your focus to: the average latency was almost a hundred percent better, and there was a big improvement in the 99th-percentile latency as well.

Okay, so that's the server we are using for these architectures, and on the right-hand side are the NVMe devices we applied to these architectures. For Ceph-enabled high-performance workloads, if you are IOPS-intensive, you had better be looking at the right servers with the right drives, so you can get the maximum performance out of your clusters. And here QCT doesn't only deliver IOPS-optimized solutions; we also deliver throughput-optimized and capacity-optimized SKUs for your reference. So if you are interested in the solutions QCT delivers, you are welcome at our booth at B5; I will be at the booth to answer all the questions you have.
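To round out the socket-pinning guideline from the results above, here is one more simplified sketch, again an illustration rather than anything shown in the talk: it pins an already-running OSD process to the CPUs of a chosen NUMA node, with the PID and node number supplied by hand. In practice this is more commonly done with numactl or a CPUAffinity= line in the OSD's systemd unit; the sketch just shows the mechanism.

```python
#!/usr/bin/env python3
"""Pin a running process (e.g. a Ceph OSD) to the CPUs of one NUMA node.

Usage: pin_to_node.py <pid> <numa_node>
A simplified sketch; production setups usually use numactl or a
CPUAffinity= setting in the OSD's systemd unit instead.
"""
import os
import sys


def cpus_of_node(node):
    """Parse /sys/devices/system/node/node<N>/cpulist (e.g. '0-21,44-65')."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpulist = f.read().strip()
    cpus = set()
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


def main():
    if len(sys.argv) != 3:
        sys.exit(f"usage: {sys.argv[0]} <pid> <numa_node>")
    pid, node = int(sys.argv[1]), int(sys.argv[2])
    cpus = cpus_of_node(node)
    # Restrict the target process to the CPUs of that node (needs privileges).
    os.sched_setaffinity(pid, cpus)
    print(f"Pinned PID {pid} to NUMA node {node} CPUs: {sorted(cpus)}")


if __name__ == "__main__":
    main()
```

The intent matches the balanced diagram discussed earlier: keep each OSD on the same socket as the NVMe drive and the network port it uses.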