Good morning, good morning. Well, between what you heard earlier with the band and the free coffee, I can guarantee you there is no one in this room who's got a Super Bowl hangover at this point. Those guys left about half an hour ago. All right, welcome, and thanks for being with us here today. I'd like to point out that this, if you like, is the live studio audience. We have many thousands of people following this event; we are broadcasting this live on emc.com. Along those lines, for those of you online, you will have a chance to interact with us this morning. If you have questions for Pat Gelsinger, please feel free to tweet those questions on the #VFCache hashtag. Try saying that after a couple of drinks. But if you can tweet on that hashtag, then later on today we will have a chance to ask Pat a few questions.

Okay, so 12 months ago, what did we do about this time last year? We had a big, what we called internally, mega launch. We announced 41 new products. We announced new products at the high end, we announced new products in the mid-range, we announced new products at the low end, and over the last year we've really taken share in every single segment. So we're pretty pleased with the way that went. But we start this year really innovating along a new vector. Many of the products that we released about a year ago were refreshes of existing products. This year we want to kick off the year really talking about innovation and a new disruptive technology, that disruptive technology obviously being flash. And just like it brought Flash Gordon back to life a second ago, we're hoping that for many of the applications in the data center, it'll bring those applications back to life as well.

Now flash is already really dominating the consumer world. Okay, so the company that has been singularly responsible,
I would say, for driving that in the consumer world, is Apple. We're all beneficiaries of flash technology, I'm sure, in the audience here today, with our iPhones and iPads and everything else that we've got. We hope to do a similar thing in the enterprise, and in fact we believe that flash will have a similarly profound effect in the enterprise. So over time we actually feel like flash is going to transform the look of the data center. There's still very much a role for hard drives, but we think some of the high-performance drives are clearly going to be replaced by flash technology, to deliver the kind of performance that those mission-critical applications require.

So at EMC we've been driving the adoption of flash in the enterprise. And in fact, what I'd say at this point is, if you look at the enterprise storage players, we probably ship more flash in our arrays than the entire rest of the storage industry put together. And we feel that we've only just gotten started. What you'll hear about today is a whole new wave of technology. You're gonna hear a roadmap of new features, new capabilities, that we're gonna release not just today but over the next 12 to 18 months.

So everything that you're gonna see today is coming from our new flash business unit, and we have a number of folks from that business unit with us here today. We have an interesting model to drive innovation within EMC. We very much have a divisional structure, and we try to apply the same focus on specific areas of technology inside EMC that many pure plays would apply outside. That allows us to compete, and I think EMC is one of the few big corporations that has been able to compete effectively, with new innovative technologies, amongst all of the pure plays that are out there. So the singular focus of this unit is to really drive the innovation and adoption of flash.

Now with us today we have a number of folks from EMC, most notable of which we have Pat Gelsinger, who'll be up here in a
second to talk you through the story. The agenda today is pretty straightforward. Pat's gonna come up here; we're then gonna split the room. There are a number of folks in the room who have one-on-one briefings; they're gonna go behind the curtain, so to speak, and do those one-on-one briefings. And then Dan Cobb, who's the CTO from our flash business unit, is gonna come up here and give us all a deep dive on the technology. And I just want to reiterate one last time: we will do a Q&A session at the end of Pat's remarks. Please tweet on the #VFCache hashtag if you would like to ask Pat a question over the next 45 minutes. So with that, let's get on with the show. Let's invite Pat on the stage. Pat.

Thank you very much. I know you're disappointed that I'm not wearing the blue tights like Flash Gordon, but it just wasn't in my attire this morning. So let's dive right in. Since the Pentium Pro, which was really the beginning of the standard high-volume server, the Intel roadmap has just continued unabated to deliver increasing server performance capabilities as Moore's Law has scaled. And then the advent of multi-core x86 CPUs, which, you know, I personally participated in for almost 30 years, has continued this approximately doubling every two years, or about a hundred x every decade, and that continues unabated into the future. Against that, the IO capabilities have largely stagnated. If you look in terms of IOPS per drive, since the 15K drive in 1998, sort of like nothing has happened. And this has created an enormous IO gap that has throttled the performance capabilities of the overall data center, this lack of ability to keep up with those multi-core x86 CPUs. And against this, flash technology really provides a fundamental, disruptive capability to step in and deal with that.

When I first started at Intel in 1979, I was really annoyed, because we had the EPROM guys in the lab right next to us, and I was part of the tail end of
the business at that point, the money-losing microprocessor business. And the EPROM guys, which was the precursor technology to flash, used UV light to erase, and they were making all the money. I was really annoyed because my bonus was nothing and they were making money. I figured maybe I'd shown up on the wrong side of the lab. It turned out okay over time. And then that led to the idea of an electrically programmable device, and then Toshiba took that a step further, resulting eventually in the kind of technologies that today are represented by flash. And as Jeremy indicated, the company in the consumer electronics industry that most adopted, grabbed, and took that technology forward has been Apple: the iPod and the iPad and the MacBook Air. All of those have really brought forward the disruptive capabilities of flash in consumer electronics.

Against that, here's exactly what EMC is doing with respect to flash for the enterprise and enterprise data centers. We were the first company, in 2008, to bring forward the enterprise flash drive, really the first embodiment of a technology utilizing flash, separate from disk, in array technology. That was followed by early adopters who started to utilize that technology, but at that point you still had to specifically provision it, or take LUNs and specifically organize certain working sets onto it. It wasn't a broad-market technology. And that led to the second fundamental innovation, which was: how can we bring flash technology to broad adoption across a wide range of enterprise applications? What crossed that gap was FAST technology. FAST, Fully Automated Storage Tiering, allowed a broad range of applications to take advantage of flash technology, because essentially we said FAST would take care of getting the right data to the right spot in the storage hierarchy. And thus, with a very small amount of flash, you were able to
actually boost the performance of entire arrays in very, very fundamental ways.

Now, as we look at this, our FAST technology has allowed us to drive an explosive amount of flash shipments: last year, 24 petabytes of flash shipments. As Jeremy said, we believe this is larger than the rest of the industry combined. But what's even more powerful about FAST is that a small amount of flash, maybe one to five percent of the array, enables us to accelerate the whole array. Last year our FAST software technology was supporting over 1.3 exabytes of storage, utilizing just those 24 petabytes of flash shipped last year. And this shows the very powerful capability: just a few percentage points of flash enables the performance increment of very, very large arrays of storage. This is why FAST technology has been such an extraordinary value proposition for our customers.

But what if we could do much, much more? What if we could do another order of magnitude improvement in performance? That's exactly what VFCache is about: can't we make a step to see a dramatic improvement in IO rate and in latency by moving to the other side of the wire? You know, there have been these dislocated steps of performance improvement. We saw hard disk drives, then array flash, and now with VFCache a 4,000x improvement versus the performance capability of spinning media, electromagnetic drives. And this is exactly the gap, and the opportunity, that we see with VFCache today.

Now, there are some early PCIe-based flash products on this side of the chasm, but much like the early uses of enterprise flash drives,
they need to be specifically provisioned. They're islands of flash; the application environments and the operating system environments need to do all the management and the app awareness associated with those technologies, and fundamentally it can't be a broad use case for many of the enterprise applications that exist today. Crossing that bridge is exactly what VFCache is all about. It's enabling broad utilization across essentially every application that data centers are taking advantage of today, with management, with persistence, with integration, with support, with sales capabilities: all those things that enterprise data centers expect. That's exactly what VFCache is about.

EMC is clearly the number one storage provider for enterprise data center use cases today, across a broad range of applications: number one against each of these applications and against each of these data center use cases. This has afforded us great insights, great customer relationships, and great partnerships. And as we've begun engaging with those customers about how to continue innovating with flash technology, they've given us an extraordinarily positive response to VFCache and what we can do with this technology to further extend their applications with performance and latency improvements.

VFCache is built on three technology objectives: first, unparalleled performance; second, intelligence; and third, protection. Performance: this 10x kind of improvement. Intelligence: extending the FAST architecture and technology not just within the arrays but all the way through the storage hierarchy and all the way to the server. And protection: making it safe, making it integrated with the overall storage hierarchy, such that we're taking advantage of all the protection and data services that exist elsewhere in the data center.

Let's start by looking at performance. EMC in this category is taking a multi-vendor approach, but our preferred
partnership is with Micron, and we have Glenn and Greg here from Micron with us today. The P320 card is the prime launch vehicle for the VFCache technology set.

I'm sure all of you saw over the weekend the incredibly sad news about Steve Appleton, the CEO of Micron. Steve was a personal friend for almost 20 years, and really a powerful, aggressive, brazen leader of Micron Technology. If you'd just join me in a moment of silence, out of respect to Steve and the great contributions that he brought forward to the industry.

The Micron technology has truly been a great enabler for VFCache, and we're very excited about the performance capabilities that the P320 card brings forward. If you look at the specs: a 300-gigabyte drive using SLC technology, best-in-class performance capabilities in terms of read IOPS, latency, and access, built on PCIe Gen 2 x8 technology. So extraordinary IO performance capabilities. And if you look at those in a direct comparison to the industry first, with Fusion-io's product, you see our specs stand up very well.
Simply put, it's not just that it's from EMC; this is also the best technology in the industry to deliver server-side performance capability. We're not just an innovator; we also have to consistently deliver the best products in the category, and unquestionably VFCache is that product today.

The results of that are some extraordinary improvements in performance. Looking at Oracle workloads: a 60% better response time, a critical factor in terms of database transaction responses, or a tripling of the total throughput of Oracle databases. Looking at SQL Server: similarly, 80% better response times, or throughput improvements of three and a half x. Unquestionably extraordinary gains in critical, performance-centric application categories.

But VFCache is an extension of our storage hierarchy, and this data shows the powerful complementary nature of VFCache: by itself, a huge performance gain, but it also shows the incremental performance when done in conjunction with flash inside of the storage arrays. In this sense it's not one versus the other; clearly "and", or both, gives by far the best performance results overall. And again, you'll see over and over this idea of extending the storage hierarchy and making it a complement to, building on, those storage arrays: it very much gives the best of both worlds.

One of our early customers, PPG, was a great example. They were one of our early beta customers for VFCache. They got very, very excited about the performance capability, and in fact they held our VMAX sales guys hostage, because we weren't shipping the product yet, and they said the only way I'm actually going to give you this PO is if in fact you bundle VFCache with it. So somehow we were able to make an exception and find a way to include it in the PO even though it wasn't shipping yet. You can see PPG, a large industrial firm, as huge advocates of the VFCache
technology: crazy-fast IO, but done in conjunction with their industry-standard data protection environments and the VMAX arrays.

But it's not just performance; it's also intelligence. FAST, Fully Automated Storage Tiering, has been this idea that, hey, let's get the data in the right spot. If the data is performance-hungry, let's move it up to the flash tier, and if it's not, e.g. everything else, let's move it to a lower-cost tier. In this sense we're giving our customers the best of both worlds: we're giving them cost savings and better performance. Hard to complain. Over time we expect that increasingly it'll be less and less Fibre Channel drives, less and less of these 15K drives, and increasingly the utilization of lower-cost or higher-performance tiers. That's the essence of FAST, and it has been very successful for us.

Today we extend FAST, and now we have this ultra-performance tier with PCIe-based flash, which gives us another alternative for the tiering technology: move data to the ultra-fast tier, or to the cost-saving tier. To us that's very much this intelligence, and we're building VFCache as an extension of our FAST technologies. This is where intelligence comes in: the ability to get the data to the right spot where applications can most benefit from it.

Another customer, Heritage Auction Galleries, one of the largest high-value auction sites in the industry: you see again a huge excitement for VFCache and the performance capabilities it gives them on their application tier, but integrated with the persistence and large data storage environment they have as well. In this sense they're able to improve performance and don't have to do application rewrites; it fits exactly into the database architecture they were already using. A great early customer, the Heritage auction site.

Performance, intelligence, and finally protection. VFCache is architected as a write-through product, and in that sense it's taking advantage of all of the
data services in the arrays, and doing so in a way that doesn't create a flash island somewhere in the data center that needs to be separately configured, managed, and serviced. In that sense it's taking full advantage of the data arrays behind it, while still giving the performance benefits of having a read cache on the server side: performance without giving up protection and the broad set of data services that are available.

We also come to the market with a broad set of server support. Ultimately this is a server-side card, and while we have great experience in delivering server-side products, with millions of HBAs that we deliver, install, and support for our customers today, it is a new product for us on the server side. And thus we have Cisco, Dell, HP, and IBM certifying, and this represents by far the majority of the server volume in the industry today. In particular, Cisco has been a great partner with us in this area, and Satinder and Paul from Cisco are here to join us for the launch today. So thank you both very much: a great partnership, broad support by the server industry, and a great partnership with Cisco, our partners in VCE and numerous other areas in the industry. So thank you very much to Cisco.

This is v1 of VFCache, and we have a rich roadmap of capabilities that we will be delivering over the next year.
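The write-through design just described can be illustrated with a minimal sketch. This is not EMC's implementation; the class and method names below are invented for the example. The key property is that reads are served from local (flash-like) cache when possible, while every write goes straight through to the backing array, so the array stays the authoritative copy and keeps all of its data services intact.

```python
class WriteThroughReadCache:
    """Minimal sketch of a write-through read cache (illustrative only).

    Reads are served from the local cache on a hit; writes always go
    straight through to the backing store, so the backing store remains
    authoritative and no dirty data ever lives only in the cache.
    """

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # dict standing in for the array
        self.capacity = capacity
        self.cache = {}                # block address -> data

    def read(self, addr):
        if addr in self.cache:         # cache hit: served at flash latency
            return self.cache[addr]
        data = self.backing[addr]      # cache miss: fetch from the array
        self._fill(addr, data)
        return data

    def write(self, addr, data):
        self.backing[addr] = data      # write-through: array updated first
        if addr in self.cache:
            self.cache[addr] = data    # keep the cached copy coherent

    def _fill(self, addr, data):
        if len(self.cache) >= self.capacity:
            # naive FIFO eviction; a real cache would use LRU or better
            self.cache.pop(next(iter(self.cache)))
        self.cache[addr] = data
```

Because nothing in the cache is ever dirty, losing the card or the server loses no data, which is exactly the protection argument being made here.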
We will increase the performance, and one way we'll do that is with server-side dedupe technology in our software stack. We'll also be delivering more intelligence: enhanced array integration. We'll be taking these FAST interfaces and doing pre-fetching, tagging and hinting between the storage array and the server-side capabilities. We'll also, for multiple flash cards, have distributed cache coherency across multiple server cards. Also deep integration into the VMAX and VNX management suites and support suites, making it an extension of the VMAX and VNX arrays. Also extended flash options: we'll have different sizes of products, we'll have SLC and MLC versions, we'll have mezzanine cards as well as SSD drives. So a whole range of form factors, sizes, and configurations to cover a broader and broader set of use cases in the industry. So over the next year, a rich roadmap of VFCache technologies coming forward.

But when lightning strikes, a few seconds later there's always thunder, and I wanted to give a sneak peek of the Thunder technology that we'll be bringing forth to the marketplace. Project Thunder is out to attack this question: how do you scale PCIe server flash today?
In many cases, maybe a high-density blade environment, you don't have the slots to put in the amount of flash you might want in the server environment. You might also be in a situation where you have applications that want to share that flash workload, or share the workload across a set of flash resources. Those two problems specifically are what Thunder is aiming to address. What Thunder is, is server-networked flash that allows us to combine the flash cards, the Lightning cards, into a server-networked flash appliance. And there are again these use cases: high-density servers that don't have slots available; or maybe it's an Oracle RAC environment where the working set is shared across a small number of servers. That's specifically what Thunder is off to deliver against: server-networked flash for those shared configurations. It also provides not just read but also write caching capabilities inside of the server-networked flash appliance. So overall, that's the next innovation in this space, in addition to the roadmap elements I just described: Thunder. And we'll be doing the first technology previews of this with our customers beginning next quarter.

So what you've seen is that we have storage architectures utilizing flash at every element of the data center, beginning inside of our arrays, and we've had great success. High-capacity hard disk drives: while hard disk drives haven't kept up in performance, their areal density improvements continue to make them unquestionably the cost-per-bit winners; they are the place that data will be stored in high volume. To complement that, array flash, which beginning in 2008 has been extraordinarily successful for us, with EMC seen as a leader in both technology and volume in that category. Server PCIe flash, today's VFCache announcement, again bringing flash innovations to the data center in new and powerful ways. And
finally, the first glimpse at Project Thunder, bringing server-networked flash capabilities into a shareable, server-side networked configuration. So overall, unquestionably, EMC is the leader in enterprise flash technology for every layer of the data center and for every class of application.

In summary: EMC was first to market with flash technology in the array, with FAST technologies. Today, we're very excited to bring forward the VFCache technology, a breakthrough capability for the industry and for our customers. And finally, a first glimpse at Project Thunder, server-networked flash, which will have our first customer engagements beginning in Q2 of this year. With that, thank you very much, and I think we have some time for Q&A.

Okay, so Q&A. Obviously, if you're in the physical world, that would be all of you: good old-fashioned put your hand up, that will work fine. I just want to remind folks following this online: if you tweet your question on #VFCache, we will endeavor to get your question answered as well. Pat clearly can't answer every question that's coming through, but we will get the team on it and make sure that we answer all the questions online as well. So, would anybody like to ask Pat a question? I have a microphone, so if you just give me a minute I will be able to run around and do my duty.

Hi, curious: what's the pricing for VFCache?

The pricing for VFCache: it will be competitive. If you look, Fusion-io is the category leader. We're going to be competitive versus their pricing, let's say at a modest discount to where they are in the marketplace as we bring it forward. So we'll be price competitive.

When do we expect Thunder products to come to market? We'll begin customer engagements in Q2, and today we're not announcing a GA date for them yet.
So customer engagements will start in Q2, and we'll be bringing it broadly available somewhat after we go through some of the customer and beta processes, but we're not announcing a GA date yet today. This year? You know, normally everything I'm talking about is within the next year, or we wouldn't be talking about it, but again, we're not trying to be too specific with an announcement date quite yet. You know, we did our first preview of Lightning at EMC World last year, so May of last year is when we gave our first preview of Lightning. So, if history is any guide...

We have a question online: Is the disk drive dead? Isn't an all-flash array the way to go? Is the disk drive dead?

Well, as an old semiconductor guy, I always look at things against Moore's Law. So it doubles every two years, and how does storage do, how does compute do, how does networking do, the big three, versus that? Basically, networking is way sub-Moore's Law, computing is approximately Moore's Law, and data density is super-Moore's Law. So we're seeing data increase faster than Moore's Law going forward, and we expect that to sustain over time. And, you know, IDC did a study last year: a 44x increase in data storage before 2020. So we continue to see the storage volume explode. Against that, the hard disk drive today is anywhere from 30 to 50x less expensive per bit, if you look at it down to the bit level, and that's an extraordinary cost gap. So there's no way, no way, that flash can possibly fill that gap. In fact, I've seen some studies saying, if we just wanted to do that, it'd be something over a hundred billion dollars of incremental flash fabs that would be required tomorrow to satisfy that demand. Not gonna happen.
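The back-of-envelope economics in that answer can be sanity-checked with rough numbers. The figures below (HDD cost per gigabyte, total HDD capacity shipped per year) are illustrative assumptions, not figures from the talk or from EMC; the only input taken from the answer is the 30-50x cost-per-bit gap. The point is just that, at that gap, the incremental cost of replacing capacity drives with flash lands in the hundreds of billions of dollars.

```python
# Back-of-envelope check of the HDD-vs-flash cost gap.
# All absolute numbers below are assumed for illustration only.
hdd_cost_per_gb = 0.05            # assumed: dollars per GB for a capacity HDD
cost_gap = 40                     # the talk cites a 30-50x gap; take the middle
flash_cost_per_gb = hdd_cost_per_gb * cost_gap

hdd_shipped_eb_per_year = 300     # assumed: exabytes of HDD capacity shipped yearly
gb_per_eb = 1_000_000_000

# Extra media cost per year if all that capacity shipped as flash instead
extra_cost = hdd_shipped_eb_per_year * gb_per_eb * (flash_cost_per_gb - hdd_cost_per_gb)
print(f"Replacing {hdd_shipped_eb_per_year} EB/year of HDD with flash adds "
      f"~${extra_cost / 1e9:,.0f}B per year in media cost alone")
```

Even before counting the fab construction Pat mentions, the media cost alone comes out in the hundreds of billions of dollars per year under these assumptions, which is the shape of the argument for why flash complements rather than replaces capacity drives.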
So we see that flash will always be a complement to hard disk drives going forward. Performance goes to flash; large persistent stores will always be on hard drives. And if you look at areal densities, if anything the hard drive industry is out-pacing, in terms of price per bit, the cost per bit of flash technologies. So we see it really as tiering, as those two technologies working in combination, for decades to come. We just don't see any change to that in the future. So I guess that was a long answer to simply say no, the disk drive is far from dead. The one caveat: we do see that high-performance drives could be dead in the future, because performance will move to flash, but volume drives will be the mainstay of high-volume storage for decades to come.

What's the go-to-market plan for this solution, and potentially the future Thunder solution? Are you OEMing through the server vendors, or are you gonna sell through your direct sales force?

We will sell primarily through channels as well as through our direct sales force; we'll be doing both of those. We don't have any OEM agreements that we're announcing today, but those are possible in the future. The two prime areas are channel sales and EMC's large direct sales force into the enterprise.

Hi Pat, a couple of questions. First, classy act, the moment of silence for Micron. Can you talk about Micron's role in the relationship, and also talk about
the coopetition with the server vendors, Dell and HP in particular, and IBM, who are competing with you on storage?

So first, Micron. We are truly delighted with our partnership with Micron, and I was definitely just as shocked as anybody seeing the news over the weekend. We have worked very closely with Micron as our preferred hardware partner in bringing this forward: working through the software stack, the drivers, all those types of things, and the details of the hardware that they're doing. And it simply is a better hardware card today. I showed you some of the specs: the way the control functions are done, it utilizes hardware-based control capabilities, so less load on the CPU. So just a number of performance characteristics, the Gen 2 x8 characteristics, the way the controller is done; it's just the best-in-class product today. So we're very excited to be working with them.

It is a multi-vendor hardware approach on the EMC side. You might have seen some of the LSI comments; we expect that we'll be using them as well. So we expect that we will always be taking the best hardware technology. We'll be working closely with Micron to hopefully have them always be the best as we go forward, but we do have a multi-vendor hardware approach as our strategy overall, just like we've done in the disk drive industry. And I think I forgot the second half of your question. Oh, yes.

Fundamentally, this is a PCIe card; it's our software that goes into the stack. Just like we deliver literally millions of HBA cards today that go into lots of people's servers, this is sort of just another card that goes into the servers. It's a very standard business practice for HP, IBM, and Dell to have a broad range of cards that are certified for their servers. And I showed you the data here: we're not in the server business.
We are extending the storage array into the server side; we're participating in the IO stack of the server. We're not going into the servers. We have Cisco as partners, and obviously HP, IBM, and Dell, and you can expect that we'll have others going forward. So from our perspective, this is truly cooperation; we're not competing with them, so there is no coopetition in that regard. We're very much complementing the server capabilities of each of those partners.

A question online: Does VFCache complement VPLEX or work around it? Does VFCache complement VPLEX or work around it?

Well, VFCache sits in front of it, on the server side of the IO stack. So essentially it is before the VPLEX layer in the IO hierarchy. In that sense it complements it; it would work with it. It could cache workloads that go through a VPLEX into some distributed storage environment over distance, the unique capabilities of VPLEX. So it's clearly complementing what VPLEX does. At this point we haven't done any unique integration between VFCache and VPLEX, and you can certainly expect that we will do those types of things in the future. A question in the back.

Could you talk a little bit about how you see PowerPath being a competitive differentiator here around VFCache, and what you think that means to the future of VFCache when it comes to unified storage?

Yeah, so PowerPath is a great footprint for us on the server side. In fact, we leveraged some of the PowerPath technologies in building the IO stack for VFCache. So the PowerPath architecture is just another opportunity for us to leverage in the IO stack.
We are utilizing some of that technology in this first product that we're bringing forward, and we expect to do that even further as we go forward with PowerPath. Obviously it's just another aspect, like I answered on the complement to the server side, of the leverage that we have from our large HBA presence. PowerPath is another proof point: we're already doing a lot of server-side software integration with PowerPath. So again, it's another aspect of the safety, security, and momentum we already have in delivering a server-side software stack.

This is David Floyer from Wikibon. What operating system environments are supported by VFCache?

We've certified it on Windows, on VMware, and on Red Hat Linux.

Any others?

Oh, yes: Windows and Hyper-V, so Windows Server as well as Hyper-V, VMware, and Red Hat Linux at this point, and over time we'll continue to expand that list. You'll also see us doing more work with VMware and Hyper-V in terms of deeper integration with the virtualization environment. We think there are some very clever optimizations that can be further done inside of the virtualization environment as well.

One more online here: Is Project Thunder a server? Will it be able to run server workloads?

Project Thunder will certainly have some CPUs in it, obviously, to run the management stack, et cetera, but its purpose is really to be seen as an IO appliance. In that sense, we're not intending it to ever run virtualized server workloads. We're not expecting to expose ESX hosts or anything like that in it. It is very much a server-networked flash appliance that's simply optimizing a large set of flash resources; I mean, we're gonna be putting terabytes of flash in this server-networked flash Thunder appliance. So it's not intended to run any compute workloads; it's fully intended to be optimizing the
It's fully intended to be optimizing the IO Shared IO performance that's possible from a large array of flash Is this a protocol agnostic sand solution or is it dependent on the type of sands that are deployed So remember it's server side Right, so in that sense, it's not seen in the sand side at all. It's on the server side network I'm sorry. Is your question on lightning or on thunder? Okay, so lightning sits essentially it's seen as an ice guzzy accelerator, right? It's how it looks on the server side partner but so but agnostic at that level and Danny will be up here shortly you can ask you know, he'll give a bit more details on that thunder right as appliance We expect it's a high performance server side device So it'll be 40 gig or infiniband are you know how we expect thunder to be seen as a high performance low latency Server network flash device very you know with very minimal overheads into that shared array Customers putting flash inside their servers with it Would there still be a need for flash and storage systems or will there be less petabytes of flash Shipped in a v max and vnx and that's exactly you know if you look at that one graph And maybe I didn't explain it as well as I should have right, you know What we showed was sort of 1x right with just disk drives It was like 3x right with vf cache and it was like 5x with flash in the Right array, and it was like 8x with flash and vf cache And in that sense we see it as highly complimentary workloads different workloads will vary You know a smaller working set clearly vf cache will work much better right in that regard if it's a extremely large working set Then you're going to want much more of the array side benefits. 
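Pat's 1x/3x/5x/8x figures come down to a basic property of caches: the benefit depends on how much of the working set fits in the fast tier. Here is a back-of-the-envelope model; all numbers are illustrative assumptions, not EMC measurements.

```python
def avg_read_latency(hit_rate, hit_us, miss_us):
    """Expected read latency for a cache with the given hit rate."""
    return hit_rate * hit_us + (1 - hit_rate) * miss_us

# Small working set: a server-side cache can hold most of it, so reads
# average out close to flash latency.
small_ws = avg_read_latency(hit_rate=0.9, hit_us=100, miss_us=1000)

# Huge working set: the same cache catches only a sliver, so reads stay
# close to array latency and array-side flash matters far more.
large_ws = avg_read_latency(hit_rate=0.05, hit_us=100, miss_us=1000)

print(f"small working set: {small_ws:.0f} us, large: {large_ws:.0f} us")
```

With these assumed latencies, the small working set averages roughly 190 microseconds per read while the large one stays near 955 — which is why the server-side cache and array-side flash end up complementary rather than interchangeable.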
So the degree of complementarity we see is high across many workloads, but the better configuration — more flash in the server versus more in the array — will clearly depend on the characteristics of the workload being accelerated. We fundamentally see it as complementary; we don't think one replaces the other. And part of the power of VFCache and the FAST technology is that we're going to have hinting going back and forth between the array and Lightning. So Lightning will be telling the array, "I'm caching this — you don't have to," or the array can be pre-caching things that are seen as hot, or soon to be hot, and bringing them up to VFCache as well. That's the intelligence we expect to put across the array and into the VFCache capabilities on the server side: that ongoing optimization. On the workloads we've tested so far, we've seen it really being additive, not one replacing the other. I'm sure there will be some outlier workloads where it's one versus the other, but in everything we've tested so far, having both really enhances the performance.

Okay — all right, so thanks to Pat.

Thank you.

Okay, and as I said, there are many more questions, quite a few of them technical. Well, you know, Pat will be mortified that I didn't ask him all the technical questions that are online, because I know he likes to take a run at anything and everything. But I've noticed that Chad Sakac, our Virtual Geek, is already online answering questions, and we'll make sure that all of the questions asked online get an answer as well.

So here's what we're going to do now: the folks in the room who have one-on-one briefings are going to migrate behind the curtain for those briefings, and the rest of the folks are going to remain here — because, as the questions online have gotten a bit more technical, it seems to me we want to do a bit more of a deep dive into the product. So what I'd like to do now is invite Dan Cobb on stage, the CTO of our flash business unit. Dan.

So — good morning, good afternoon, good evening to the physical and virtual audience out there. Following up on the "what" of VFCache, I'm going to take a little bit of time to go into the how and the why: why you should care, what's important about it, and — for some people who might not have been paying attention in Computer Science 101 — a little bit of caching theory, a little bit of how low-latency, high-throughput devices can really have a significant impact on real-world applications.

So, Pat described VFCache for you, and really the design mantra, if you will, of the team was: how can we deliver high performance for real-world applications? How can we build an intelligent IO path between server and storage and provide that as an integrated service that delivers enterprise capabilities? And how can we give
customers the ability to leverage and extend the capabilities — the high availability, the snapshots, the data services, the replication, the quality of service, the SLAs — of the storage platforms they've built so much of their workflows and so much of their IT structure around? So we're going to dig into a little more of that today in this session, maybe get a little more technical, and certainly cover a couple more topics.

So, the backdrop of what's been happening in flash. You saw Pat refer to the Apple situation and to what's happening as people take the fundamental NAND that folks like our friends at Micron have been building and innovating around for a long time, and turn it into IT value. In the past five, six, seven years, venture capital funding for turning NAND into IT value has exceeded a billion dollars. That's a lot of money, and the joke back at the office is that while we're doing a presentation or a briefing, another flash startup might come out of stealth during the meeting. A lot of activity — a really exciting time to be in this space.

So that billion dollars is now turning into IT value as delivered by our friends at the smaller companies, and they like to position themselves as moving fast, as being aggressive: "we're the only ones who get this cool technology," and everything else. I just thought it made sense to put this into context a little bit.

In 2008, EMC delivered enterprise flash drives. Well — in 2005 we started looking at enterprise flash drives, and essentially, to paraphrase Thomas Edison, we taught the flash guys a thousand ways not to make an enterprise flash drive. All those learnings, all that capability, all that understanding translated into being able to ship enterprise flash in 2008, followed up by significant software investments: FAST, fully automated storage tiering; then caching functionality; all-flash configurations last year; and this year, VFCache. So I don't think EMC has to take a backseat to any of the startups or any of the venture guys out there, because I think our investments, taken in total, probably exceed what other folks in the industry — venture or otherwise — have invested in leveraging this particular technology for IT value.

If you then polish your crystal ball a little bit, a lot of our friends who forecast where all this technology will end up give you an eye into two major categories of where flash gets deployed. The orange here is flash deployed in a storage platform: it might be a two-and-a-half-inch, electromechanically compatible device that slides transparently into a slot — the software stack doesn't change, nothing has to notice, but suddenly your device has gotten much faster and enabled much higher throughput. Other people might say the right place to put it is in the server, so you see the server as a significant deployment option for people going forward.

If you look at these things, though, you say: hmm. If I only put flash in the server, then I have the kinds of data protection and operational challenges that result in stranded capacity — the data is only as available as the server is available. If I only put flash in the array, then I get the benefits of flash — and we've seen a lot of significant benefits there in terms of delivering IOPS to real-world applications — but the distance of that flash from the processor limits the amount of performance gain I can actually get. So no doubt you can see there's an in-between answer here: there's a way to bring these technologies together so that you're deploying flash across the ecosystem, and you have the right data in the right place at the right time — no excuses on performance, and no compromises on data services or data protection.

If you're one of the orange guys, or one of the yellow guys, you might look at this and say, well, my customers are storage administrators, or my customers are server administrators, and they talk about this as if it's open warfare between server admins and storage admins. I'll tell you, the great thing about my job is that I get to talk to a lot of people in the IT world — a lot of CIOs and IT staff — and they don't want open warfare between server and storage. What they want is an intelligent ecosystem that delivers real value to their lines of business. They don't want people fighting over turf, over budget, over ownership of who gets to deploy the latest cool things. They just want this problem solved, with technologies that provide the right return on investment and satisfy the needs of the real business problems and real applications they have today. So pitting one against the other is probably the wrong thing technologically, and it's certainly the wrong thing organizationally.

So we're going to take a look into the VFCache architecture and show you some block diagrams and some of the pieces here, and let me set a little bit of a backdrop for that. One of the questions that came up recently was about PowerPath and its role. Taking a step back, one of the things that you need in a
caching solution in the IO stack is a very lightweight IO inspection technology. What we were able to do is leverage literally thousands of developer-years of being in every single enterprise kernel IO stack — all the care and optimization that happens when you're in the IO stack on every enterprise operating system, on every enterprise platform — capture those learnings, capture some of that intellectual property, and turn it into this lightweight IO inspection layer, which was inspired in large part by the PowerPath team.

Combine that with what really is EMC's bread-and-butter DNA: EMC knows caching. EMC has analyzed millions of workloads, billions of IOs, and from that analysis understands access patterns, understands sizes, understands the impact of the various caching characteristics and algorithms that might be employed. Our ability to take that DNA and marry it with the PowerPath experience really gave us a way to leverage some key EMC resources in building VFCache.

The last part of VFCache is the advanced hardware platform. Once the software pieces are done, you need a hardware platform that's optimized for throughput — high throughput — optimized for latency — very low latency — and highly efficient. It shouldn't require you to spend 10, 20, 30 percent of your application's CPU time just doing error correction and wear leveling and those kinds of things for flash devices; you ought to be able to actually use your server for real-world applications. We'll talk a little more about each of these as we go.

So the architecture for Lightning — see, I still want to call VFCache "Lightning" — is embodied in this picture, and I dare say this is probably one of the few pictures you'd ever see from EMC where the server box is bigger than the storage box. Why? Because most of the components that VFCache manages — and that we're giving some insight into today — belong in that server box. Obviously the most important thing here is the application: real-world applications, real-world workflows, real-world business processes running on today's servers. The VFCache driver contains the inspection technology and the cache management technology: it manages what data is stored on the PCIe flash, transparently passes IO requests back and forth to the storage array with no additional overhead, and in a sense manages which data can be rapidly returned to the application and which data needs to be passed through to the array. We'll tackle three important use cases for this stack.

If you're familiar with caching, you'll hear all the caching people talk about cache hits, cache misses, and cache fills. A cache hit is when the data you need is in the cache: you get it back very quickly. A cache miss is when it's not in the cache, and you've got to go get it somewhere else. And a cache fill is: hey, I just had a request for some data I didn't have — maybe I ought to put it into my cache so it's available next time. We'll talk about all three.

So in the first case, an application issues a read. The application is totally unchanged — it doesn't even know VFCache is there; the beauty of VFCache is that it's transparent to the application and transparent to the underlying storage. The VFCache driver takes a look inside the cache, decides "hey, I have this block," and very rapidly returns it to the application — in microseconds, not milliseconds. This is where the performance acceleration comes from: very low latency, very high throughput read responses from cache hits.

Sometimes the data is not in the cache; we call that a read miss. The same thing happens: the application issues a read, and the VFCache driver takes a look and says, "I don't have the data; I've got to go back out to the storage array to get it." It goes out to the storage array and comes back to the application — again, no additional time delay, no additional hops introduced; it's the same IO request that would have gone out to the array without VFCache present. Then, in the background, the data that was provided to the application is copied into VFCache, so that the next time it's needed, it's there and can be returned to the application very quickly. That's the cache miss case.

The other piece is what happens when the application issues a write. As you might have guessed, the write passes straight through — read caches don't accelerate writes — so VFCache really isn't involved in the write path here; that's why Pat called it a write-through cache. The data goes to the storage array, and the acknowledgment is passed back up to the application — again minimizing latency as perceived by the application issuing the write: the write is acknowledged immediately once the data is safe and protected on the storage array. Then, in the background, just as in the read miss example, we copy the data into the PCIe flash card so it's available for those application scenarios where I come right back and read the data after I've written it — and those are fairly common in the database world.

The design principles we talked about for the software really carry over to the hardware as well, and I want to take my hat off to the Micron team for partnering so closely with us as we focused on throughput, on latency, and on the efficiency of this stack. What you see in the column on your left represents throughput: how many IOs can I get in and out of the hardware device per second? In real-world application scenarios, applications issue 16, 32, 64, 128, 256 IOs at a time, so having a highly parallel data path delivers two to three times more IOPS than the most popular solution in the marketplace right now — and those IOPS translate directly into better application throughput.

Having the IOPS is one part of the story; being able to deliver two to three times as many IOPS with only one third the latency is the second part. While those IOPS are being delivered — in this case over 200,000 8K IOPS — they're being delivered with very low latency compared to the competitive offering: one third the latency. So three times as many IOPS at one third the latency, and that translates directly into improved application response. What we see is that this kind of response — when applications are very busy, when there's a lot of IO happening in parallel — has a real net impact on elevating the performance of the application's reads and writes.

Lastly — I hit the button too fast, and they told me I can't go backwards — but lastly is the CPU impact.
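The read-hit, read-miss-and-fill, and write-through behavior described a moment ago can be sketched in a few lines. This is an illustrative toy, not the VFCache driver: a dict stands in for the PCIe flash, and `backing` stands in for the storage array.

```python
class WriteThroughReadCache:
    """Toy model of the three cases above: hit, miss + fill, write-through."""

    def __init__(self, backing):
        self.flash = {}         # stands in for the PCIe flash card
        self.backing = backing  # stands in for the storage array

    def read(self, addr):
        if addr in self.flash:          # cache hit: microseconds, not ms
            return self.flash[addr]
        data = self.backing[addr]       # cache miss: fetch from the array
        self.flash[addr] = data         # cache fill: keep it for next time
        return data

    def write(self, addr, data):
        self.backing[addr] = data       # write-through: the array owns the
        self.flash[addr] = data         # data; the cache keeps a copy for
                                        # the read-after-write case

array = {0: b"cold"}
cache = WriteThroughReadCache(array)
assert cache.read(0) == b"cold"   # first read misses, then fills
assert 0 in cache.flash           # subsequent reads are hits
cache.write(1, b"hot")
assert array[1] == b"hot"         # safe on the array first
assert cache.read(1) == b"hot"    # read-after-write served from cache
```

Note how the write path touches the array before the cache, which is exactly why a write is only acknowledged once the data is protected on the array.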
I mentioned this earlier, but the whole notion of running the flash management stack off-system, on an intelligent PCIe card — the wear leveling, the flash management translation (essentially log-based flash IO activity), the error correction, the elimination of errors through extra redundancy on the flash card — all of that happens off-board, on the PCIe flash card. So what you see is that while delivering three times as many IOPS, and delivering each of them at one third the latency, the CPU impact is really negligible compared to the popular competitive card that's out there right now. You end up doing more work, more quickly, with less impact on the server — and frankly, you bought your server and all those cores to run your application, not to run your storage stack. So we have a very hard focus on these particular architectural tenets, and they'll also apply to Thunder when we look forward to that, as Pat previewed.

So I call this one "once upon a workload." We've talked about the stack, we've talked about efficiency, we've talked about latency and everything else. At the end of the day, if you're talking to someone who's looking at an application and how it gets its data — from a file system, from a set of volumes, from storage — you'll hear them talk all day long about the trade-off between achieving low latency and achieving high throughput. Take a step back: the best way to achieve the lowest possible latency is to do the least amount of work — give the system one IO and get it in and out really fast. That's the lowest possible latency you can get. That's great for one IO, but what about all the other IOs the application wants to issue? So you start issuing multiple IOs — 2, 4, 8, 16, and so on — and you notice that your throughput is going up. At 32 IOs it's still fairly low latency with much higher throughput. At some point, on just about any system — in this example we ran into the wall at 64 — you notice that you've run out of the system's capacity, its ability to do more work in parallel. Eventually all these incoming IOs stack up: latency starts to increase, and you no longer get significant increases in throughput. This behavior is canonical — it exists on every storage platform, on every workload, on every device. So you can look at these kinds of things and ask: where is my curve? How do I trade off between latency and throughput?

Why did I spend 35 seconds of your valuable time and this room's oxygen discussing this? Because the VFCache benefit — the VFCache effect — is to move that entire workload curve down and to the right: with no changes to the application and no changes to the back-end storage platform, you achieve significantly lower latency and significantly higher throughput through this intelligent application of server-side flash.

Okay — so now let's look at what happens in the real world. On this slide we're looking at a TPC-C-like application, a before-and-after of VFCache. The brownish line is the baseline: the system running the same workload without VFCache. The workload starts, time goes left to right, and the y-axis is how long each transaction takes inside the database layer — the average transaction latency. You can see that in this case it's about a hundred milliseconds, and that's pretty standard all the way across this particular workload.
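As an aside, the queue-depth curve described a moment ago follows directly from Little's Law: throughput equals outstanding IOs divided by average latency. A quick sketch, with made-up numbers rather than measured VFCache figures:

```python
def iops(outstanding_ios, avg_latency_s):
    """Little's Law rearranged: throughput = concurrency / latency."""
    return outstanding_ios / avg_latency_s

# While the device has headroom, more outstanding IOs means more
# throughput at roughly the same latency: queue depth 1 at 100 us is
# about 10,000 IOPS; queue depth 32 at the same latency is ~320,000.
low = iops(1, 100e-6)
high = iops(32, 100e-6)

# Past the saturation point, extra IOs just sit in the queue: latency
# grows in proportion to queue depth, so throughput stops improving.
assert iops(64, 200e-6) == iops(32, 100e-6)
```

That final assertion is the "wall" in the chart: doubling the queue depth while latency also doubles buys no additional throughput, only longer queues.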
So now the workload runs on — we're at an hour forty, an hour fifty; the workload runs for upwards of two and a half hours. The pink line is the same application, same storage, same computer, same everything else, now with VFCache turned on. At the beginning there's not a lot of data in the cache, so we're getting a lot of cache misses — maybe they're read misses, maybe they're writes, as we talked about earlier. But as the cache fills, we start getting more and more cache hits — and these are the cache hits taking microseconds, not milliseconds, you'll recall. The fuller the cache gets, the greater the benefit in reduced latency. So we've moved from a hundred milliseconds, to 60 milliseconds, to 50 milliseconds, sometimes to 40 milliseconds of latency. This is the 60% improvement in application-perceived latency that really is the impact of VFCache on real-world applications.

The other way to look at it is that I can get more work done in the system. If the baseline for the same workload is 1.0 — think of that as transactions per minute as delivered to, and perceived by, the application — then with VFCache I can get 3.1 times the same amount of work through the system: tripling the work through the system while halving the latency of each piece of work. Pretty significant application gains.

VFCache is one of those solutions — and I think this is true of any read cache — that adds significant value, but adds it in the places that are amenable to read caching. There's no black art here, no special science; you need to understand a couple of things. First, locality — the working set. In caching terms, what that means to storage people is: do I have the type of application where 100% of the IOPS are on 100% of the data? I'm reading an entire data set, or writing an entire data set, or doing a backup, or my IO is spread totally randomly across all the gigabytes or terabytes in the system. If so, you're in this lower-left category. What we learned with FAST and our tiering strategy is that in most workloads 80% of the IOPS are on 20% of the data — or 90/10, or 95/5, or 70/30. That type of IO pattern means you do have some locality, you do have a working set, and that type of workload is amenable to a caching solution.

The other thing you look for in your workloads is: are they mostly read or mostly write? Write-mostly workloads — maybe log files and journals and those kinds of things — well, if most of the time I'm writing, the benefits of a read cache are negligible: it wouldn't slow anything down, but it certainly wouldn't speed anything up. Fortunately for us, most workloads in a data center are read-mostly — 50/50, 60/40, 70/30 and on up. So if you have read-mostly workloads with well-defined working sets, they're excellent fits for VFCache — and that describes the vast majority of workloads in an average data center today. Real-world applications, real-world IOs — 4-kilobyte, 8-kilobyte, 16-kilobyte IOs; today's OLTP systems, email systems, and things like that are all great candidates for VFCache.

Of course, there was a question earlier about VMware support, and I thought it would make sense to share an architectural view of how VFCache functions in VMware. The blue box here is essentially the server again, running ESX or ESXi, so you've got the ESXi storage stack — the vSCSI layer, VMFS, and drivers for network devices and HBAs and the like. The VFCache hardware card is installed into the box, and a native ESX driver is installed to control that card. Inside each guest — Linux or Windows, running the same applications as before, using the same system services as before — the VFCache filter driver is inserted: the exact same filter driver we saw on the earlier diagram, inserted into the IO stack, doing the lightweight inspection, doing the cache management, sitting transparently inside each VM. So I can cache an individual VM, I can cache a group of VMs, I can give one VM more cache than another, and I can start to manage applications in a virtualized world against service levels and against the sharing that's inherent in a VMware vSphere-like architecture.

The control path pulls all this information up into vCenter. So there's a vCenter plug-in — the same vCenter plug-in technology we use with our VNX and VMAX and Isilon products — and you can do cache management, control, and monitoring of the virtual world in the same plug-in where you do hardware monitoring and events in the physical world, all pulled together into one consistent user experience.

So, Pat did a little bit of previewing.
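As an aside on the per-VM cache management just described, here is a hypothetical sketch of budgeting one flash card's capacity across guests. The class and its allocation API are illustrative assumptions, not the VFCache interface.

```python
class FlashCardBudget:
    """Toy model: divide one PCIe flash card's capacity among VMs."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.slices = {}  # VM name -> GB of cache reserved

    def allocate(self, vm, gb):
        free = self.capacity_gb - sum(self.slices.values())
        if gb > free:
            raise ValueError(f"only {free} GB free on the card")
        self.slices[vm] = self.slices.get(vm, 0) + gb

card = FlashCardBudget(capacity_gb=300)
card.allocate("oltp-db", 200)  # the hot database VM gets the bigger slice
card.allocate("mail", 100)     # a lighter workload gets less
```

The point is simply that the filter driver sits per guest, so cache capacity becomes one more resource that can be carved up against service levels, like CPU shares or memory reservations.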
I wanted to say he stole my thunder, but that's a bad pun for this particular product. He covered where we're going, but the high point I'd like to mention is the leverage VFCache gets from being able to apply, refine, and reuse EMC's incredible history of innovation. We talked about the PowerPath stack and the things we were able to take from there for this lightweight inspection technology. We talked about caching, and how the caching DNA and the analysis of data access patterns significantly shaped how we addressed caching in VFCache. The IP that comes from Data Domain and Avamar for deduplication and high-speed compression will help us deliver larger effective cache sizes in future versions. There's the ability to engage in deeper metadata sharing with storage arrays — to pass hints and tags back and forth about what information should be cached and what is being cached up here. You can imagine that if VFCache is holding a piece of information, there may be no reason for the array to hold it in its own cache any longer than necessary. So we expect this deepening integration — some of which exists today, some of which is coming — to allow the most efficient use of both array and VFCache resources going forward. There's the ability to do cache coherency with technology leveraged from what we do in VPLEX. And there's management integration, so that administrators used to managing a set of LUNs on their storage platform can double-click on a LUN and see the VFCache statistics, understand what VFCache is doing in the server and the impact of these technologies on the storage array, and get an end-to-end view of a single, holistic IO stack — as opposed to having to go chase it component by component by component.

That operational benefit is huge. No one wants to log into multiple servers to query multiple things and then bring it all together into a meeting — in fact, you'd have to bring six or seven people into the room just to look at one operational issue. With this kind of management integration, you'll be able to look at the storage path itself, the LUN itself, and then the related technologies that apply to it.

And then, Pat mentioned the need for more form factors, making this more consumable: new deployment technologies like SSD form factors, larger cards, the use of MLC technology, mezzanine cards for blade servers, and things like that. So you'll continue to see the hardware side of VFCache evolve into a family of related offerings that apply at all levels of the IT ecosystem.

And now we get to something completely different: Project Thunder. One of the most important things I'd say about Project Thunder is that it is being designed by, with, and for some of our leading-edge customers. It's a highly participative design process, where we've essentially adopted leading-edge customers who are looking for the best-in-class capabilities I described earlier: best-in-class throughput, best-in-class latency characteristics, and best-in-class efficiency — offload the host CPU and let it run the application while the rest of the ecosystem does its job of managing high-speed, low-latency storage.

So think about Thunder as the performance of PCIe flash — with the advances being driven there by Moore's Law — in incremental units of server flash deployed to fill an appliance: scale up within a box, scale out between boxes, attached to a server network over high-speed FDR InfiniBand or 40 Gigabit Ethernet and beyond. Attaching them to the network brings in the power of Metcalfe's Law: now multiple servers can see them, multiple servers can access them — scale-out workloads, clusters, those kinds of technologies; the agility of workload mobility, being able to vMotion workloads from server to server to server while keeping the data footprint on very high-speed flash.

And it's the perfect complement to blade systems. The blade guys have done a wonderful job of increasing compute density, memory density, and connectivity-option density in blade form factors. But that focus on density has left them few options for internal expansion — few options to physically put a PCIe flash card in a blade server. Sure, there are some custom solutions and some ways to do that, but it's somewhat unnatural given the design goals of today's blades. A networked server flash device is the perfect complement to those dense-pack blade server environments.

So what it really looks like going forward is a hardware platform and a software stack, and you want to think of this in terms of some pretty big numbers: a 2U or 4U appliance — something that fits neatly into a rack and delivers very dense IOPS to go with the dense compute and dense connectivity that are already there; a shareable, scale-out deployment model; terabytes of PCIe flash and the performance that comes with it; tens of gigabytes per second of throughput; millions of real-world IOPS — and these are not "on a sunny day, downhill, with a tailwind I might achieve a certain number"; these are real-world IOPS, measured in millions — and the kind of low latency one has come to expect from PCIe flash: loaded latency, not best-case latency, back to that curve I showed earlier. The software stack goes back to the design principles we laid out for VFCache: we care about high throughput, we care about low latency, and we care very much about high efficiency. So we're moving to a remote DMA (RDMA) stack, optimized for minimal impact on both the host CPU and the appliance CPU, and we'll be allowing Thunder to serve as the flash footprint for Lightning — for VFCache — in some of these scenarios going forward.

So, wrapping everything up. Today we have VFCache: we've got the performance, we've got the intelligence, we've got the protection. We've talked about the intelligent IO stack from server to storage, and we've talked about no excuses on performance and no compromises on data protection and data availability. We've previewed a little bit of Project Thunder and the whole notion of networked server flash: how it applies to dense-pack blades, how it applies to scale-out applications — clustered applications like RAC and others — and how it's aimed at the ultimate in performance, in throughput, in latency, and in server efficiency.

And I think what we've also tried to do here is take a step back and say: as far as EMC and the flash business unit are concerned, it isn't about any one particular technology. It isn't just about NAND. It isn't just about SLC or MLC. It isn't just about flash in the storage array, or flash in the server. It isn't just hardware, and it isn't just software. It's real-world applications; it's real-world IT ecosystems. It's real server administrators and storage administrators putting their heads together, figuring out how to get their jobs done and meet business needs. And it's the ability of EMC to engage at all of those levels — solutions, services, support; participative design with engineering, as we're doing with Thunder; consulting around deployment and application models; our deep partnerships with the ISV ecosystem; and our ability to engage deeply and meaningfully with the data management platforms driving so many of these workloads. That's really what it's about: our ability to put the whole thing together, today, starting with VFCache version one, and going forward into the type of ecosystem that I think our customers need and expect from EMC. So thank you very much.

All right, Dan — thank you. Those of you who have questions for Dan: Dan will be around afterwards. What I'd like to do now is formally say thank you to everybody who is with us online. Thank you for your questions; we will endeavor to answer all of them over the coming minutes and hours. For those of you who attended in person today, thanks very much. I hope we've given you some insight into what we're doing in the realm of flash, and as you've heard, over the next 12 to 18 months we'll be back with more. So thank you very much, and we'll see you again soon. Thanks.