All right, very pleased that we edited the John McGuinness video at the pertinent point there; you'll remember what I mean. So, day three today, and day three is emerging technologies day. We have an absolute bumper session for you. In just a moment we're going to be joined by Jeremy Burton, our President of EMC Products and Marketing and my boss, along with Chad Sakac and CJ Desai to kick off this session. We have a slew of special guests joining them on stage today: Bill Moore, who is the president of our DSSD team, and Randy Bias, who many of you will know, VP of the Emerging Technologies Division. We have a special guest panel with Bev, Kumar, and Dietmar from Intel, Verizon, and SAP, so very, very pleased to welcome them to EMC World this year.

After the general sessions we're running our final guru session, the Information Generation guru session. It's happening today at three o'clock in Venetian F with Jake Porway and Jason Silva. Don't miss it; this session is really going to be fantastic.

The #BeatBurton challenge is still going. The good news is that Henry, while still in the top ten, is no longer at the top of the leaderboard. If you want to go beat Burton, the challenge is still running down in the village. And you've pumped a ton of water; I wouldn't say you finished the job, but you did a lot yesterday. Over the last two days we've pumped 5,400 liters, which translates to about 280 jerry cans of water, so definitely go down there; obviously we'll be matching all the donations from there.

Now, today we're also going to be doing a huge giveaway. Anybody in the room would like an Apple Watch? A couple of people. I can tell you, you should get an Apple Watch. Oh, I feel the envy. But what's better than one Apple Watch?
Two Apple Watches. All right, I know it's very early in the morning, but commit a dozen characters to memory: #EMCWinAWatch. Because we're not giving away one, and we're not giving away two; if you commit these twelve characters, #EMCWinAWatch, to memory, fifty of you will get an Apple Watch. By the way, if you're a member of the EMC Federation, you're not eligible for this, you know. Okay, so here's the key thing: to get the opportunity to win an Apple Watch, you need to tweet #EMCWinAWatch during the general session, and you need to be present at the end of the general session to collect your watch.

Finally, as a reminder, the Solutions Expo is open all day today, so you're able to go and check that out until five o'clock, when it closes. And then this evening we'll be wrapping up with a huge party. Another first for EMC: two bands on stage. We're very, very excited to be welcoming Fall Out Boy and OneRepublic to the EMC stage tonight at eight o'clock. So with that, let's get the show on the road. Thank you very much.

Thank you, absolutely fantastic. Let's take our minds back a couple of days.
It's been two days already, so let's get back into the zone of David's talk on Monday and really start drilling down from there. We said 2020 is only five years away; we're going to have seven billion people connected to the internet, 30 billion devices, and 44 zettabytes of data. And we can start to see that the journey towards that world has already begun. We also believe very firmly that the key enabler for these devices and these new experiences is going to be software; that's really what Paul talked about yesterday. And these devices are forming a network that we call the Internet of Things.

Now, when you think about things, you think about a sensor, or maybe that's me carrying my mobile phone, or maybe it's my car. But it could also be a cow. It's going to be one of those days. Smart farmers, believe it or not, are already connecting their cows to the internet, and the stream of data coming off a cow can be 200 megabytes a year. Now of course nobody owns one cow, right? You have a herd of cows. An EMC World first: a herd of cows. Believe it or not, there are two billion cows in the world. That means by 2020 we could be generating hundreds of petabytes of data a year from the cows on planet earth. That's another perspective on the Internet of Things.

Now, the software that we're going to write to enable these devices is going to be very, very different. We've already talked about this: millions on the internet in Platform 2, billions enabled with Platform 3. And the data volumes: when I first came to EMC five years ago,
we had our first petabyte customer, and today we've got over a thousand. In fact, today we have our first exabyte customer, and I'd bet that by the time we get to 2020 we'll have over a thousand exabyte customers.

The way applications are architected is fundamentally different. No longer is it a monolithic application with a single relational database as the system of record; we're moving to a world of composite applications, microservices, and many, many different databases and data types. If you look at the development methodologies, certainly from an infrastructure provider like EMC, we think software in the new world is going to move much more towards open source and community-based development, a fundamental change from the Platform 2 world. If you look at the deployment methodologies, to date everything has been VMs, and over the last year we've seen the rise of lightweight Linux containers, which promise to be a new deployment methodology. Look at the organizational model: waterfall-based development and ITIL processes handing off to IT operations. The new world is really about agile development, continuous development and integration, and DevOps, the topic of our conference on Sunday.

But we're not done there. If you look at the management model, you've got to consider your Platform 2 apps as being pets and your Platform 3 apps as being chickens. We're going to stick with farmyard animals all morning; you're going to love this. What do I mean by this? Well, think about it: when your pet gets sick, what do you do? You treat it with a lot of tender loving care; you take your pet to the vet and nurse it back to health. It's the same with your Platform 2 applications: the management infrastructure takes good care of those applications, because those apps have to scale up, and we have to protect them so they can recover in any eventuality. Not so with Platform 3. If your Platform 3 app gets sick, what do you do? You shoot it and spool up a new instance.
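The pets-versus-chickens remediation model can be sketched in a few lines of code. This is an illustrative sketch only, not anything from EMC's management stack; the class and function names are invented for the example.

```python
# Illustrative sketch (not EMC code) of the "pets vs. chickens" model:
# a Platform 2 "pet" is nursed back to health in place, while a
# Platform 3 "chicken" is discarded and replaced with a fresh instance.

class Instance:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

def remediate(instance, model):
    """Return the instance that should keep running after a health failure."""
    if instance.healthy:
        return instance
    if model == "pet":
        # Platform 2: repair the one instance we have, in place.
        instance.healthy = True
        return instance
    # Platform 3: shoot it and spool up a new instance.
    return Instance(instance.name + "-replacement")

pet = remediate(Instance("db-01", healthy=False), "pet")
sick = Instance("app-01", healthy=False)
chicken = remediate(sick, "chicken")
print(pet.name, pet.healthy)          # the same instance, repaired
print(chicken.name, chicken.healthy)  # a brand-new instance
```

The design consequence is the one CJ draws next: pet-style infrastructure invests in protecting individual instances, while chicken-style infrastructure invests in cheap, fast replacement.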
It's a very, very different management model. So what we're talking about here is a clean-sheet design. Today we're not going to be talking about moving to a hybrid Prius model of the world; we're going to be talking about going straight to the Tesla, Platform 3 model of the world. And we think this is less of an evolution and much more of a revolution.

To drill down on these technologies, we said earlier that most of this world is going to be software driven, not all, but most, and I'm delighted to invite on stage a guy I've known for many years who has always been one of the software guys: CJ Desai. So let's get started.

Good morning. It's great to be here today. So Jeremy talked a lot about devices and all the new trends, and there was a lot of talk about Tesla and Apple Watches. To be frank, I still haven't figured out what I'd use the Apple Watch for. So I was coming up with ideas for my next startup that I could do part time, and I had a brilliant idea, if I say so myself. I have a teenage daughter who is going to be driving soon. It would be a really good app: when she goes over a certain speed limit, my watch gives me an alert; when she goes into an area she's not supposed to go, my watch gives me an alert. And my wife gives me a lot of action items, which show up on my calendar; with this Internet of Things, somehow all of those get done. So that was my idea.

Good idea. So let's talk about some of the core design principles behind the products we are building in the Emerging Technologies Division. I'll go into detail on some of them: software defined, scale out, using commodity hardware; we want to leverage open source and next-generation flash. These are some of the design principles, or characteristics, we are using as we build modern infrastructure for these applications.

On scale-out architecture: whether you have performance-driven demands, or more file or object needs,
we want to make sure we give you choice for those applications. You want scale-out block, starting small, for a high-performance database where you store metadata; you want capacity-driven object; or for a certain type of file protocol, you want file. We want to make sure we give you that choice.

Analytics: in the 90s there was a lot of talk about data warehouses. Nowadays there is a lot of talk about real-time analytics; we want to make decisions fast, and we want information up to date. So the analytics need to be built in; they cannot be bolted on. And the storage systems need to support all the native protocols. It should be shared storage that can scale out linearly and work with all the leading big data analytics applications out there.

Now let's talk about software-defined storage. This is one of those terms that gets overused. Last year, about 42 percent of storage capacity sold was on commodity hardware, but it contributed only about 10 percent of storage revenue for the vendors. What this means is that when you are at scale, with a lot of unstructured data, whether that's petabytes and petabytes all the way up to hundreds of petabytes, with software defined on commodity you will realize a lot of savings.

Cloud aware: this is a given. The next-generation object platform should be able to get data in and out of the public cloud seamlessly, and should support all the cloud stacks out there.

On open source: this is a new thing for the division and a new thing for EMC. Open source has been around a long time; it started with operating systems in the 90s, and a variety of applications, browsers among them, went down that route over the last 25 years. Open source is not seen as a risk. It is a development model where we want to leverage the community to bring innovation into the product.

And last but not least, we have seen scale-out all-flash arrays, like the great all-flash array we have, XtremIO.
I owe With innovation in flash you can now scale out storage with compute however, when you look at Massive-scale storage transport system, which is highly scalable you look at eliminating operating system overhead You want to take advantage of flash media to do read and write make sure cooling and everything is taken care of You can recognize much higher performance than an all-flash array will give you So we are working on the next generation set of flash technologies, which we'll talk about soon So this were the core design principles. I wanted to quickly touch on them and Let's talk about What's new? So the first thing I would say is we recently announced data lakes and we have two product family Isilon and ECS as part of the data lake foundation So let's talk about Isilon first Isilon natively supports all of these protocols the data is already there is scale out is protected It can linearly go as high as you want And what we are seeing is amazing amazing results with Isilon We have now six thousand plus customers globally, but most importantly about 700 of them are using Isilon for analytics running analytics workload on top of Isilon We also had very recently just in a single transaction one of the largest deals which was for a hundred petabyte and When you look at Hadoop shared storage, we are the market leader In terms of having Isilon capabilities and on the scale piece We just recently released HD 400 and you can have up to 50 Kettabytes in a cluster and the new introduction to the data lake family is ECS ECS is a cloud scale Storage platform that supports both object and HDFS and we are very excited about the capabilities of ECS The cost beats the public cloud cost and we are really looking forward to customers using this product in the future Now let's talk about software defined storage And specifically ECS scale Ivan Viper controller So when I look at the announcement we made yesterday, which we'll talk about in detail soon Project Copperhead. 
This was a historic moment for EMC: we open sourced ViPR Controller. That project is called CoprHD, where the community can contribute and bring innovation into the product, and the commercial version will be called ViPR Controller. In addition, we are working on the next release of ECS, which has amazing features when it comes to geo-replication and improved performance with geo-caching; we have made the product easily deployable and able to scale out to hundreds of petabytes.

I'm going to spend a little time on ScaleIO. ScaleIO is a software-defined block storage solution, and we have had phenomenal success with it over the last year; many large enterprise customers and service providers are using ScaleIO to run storage on commodity hardware. Now, what's so great about ScaleIO? First, it is flexible: it can run on bare metal, and it can run on multiple hypervisors, whether that's VMware, Microsoft, or others. Second, as you add servers, those servers help with processing I/O, and hence the scale-out is linear: as you keep adding servers, the performance you get scales linearly. The design point has been parallelism, and that's why there are no choke points as you add hundreds or thousands of servers to ScaleIO. And next, we did a study on ScaleIO: we attached a PCIe flash card to a server and were able to get 220K IOPS on one card. We multiplied that up to 28 of them, and you got very, very high performance; then we went to 100, to 125, and you get the math here. So whether you look at reads, reads and writes, or just writes, the performance of ScaleIO, with its design point, scales linearly.

The next thing is, I get a lot of questions about Ceph: how does ScaleIO compare to Ceph? What's going on here when it comes to block? Ceph is still block storage running on top of object.
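Both of CJ's points here, the linear IOPS math and the cost of layering block on object, can be sketched as a back-of-the-envelope toy model. The 220K figure is from the talk; everything else (one card per server, the per-write operation counts, the replica defaults) is an assumed cost model for illustration, not how ScaleIO or Ceph actually work internally.

```python
# Toy models (not EMC or Ceph code) for two claims from the talk.

# 1) Linear scale-out: each server contributes its own PCIe flash card
#    (~220K IOPS in the study CJ cites), and with no central choke point
#    the aggregate simply adds up.
IOPS_PER_CARD = 220_000  # per-card figure quoted in the talk

def aggregate_iops(servers: int, cards_per_server: int = 1) -> int:
    # Linear model: no shared controller to saturate, contributions add.
    return servers * cards_per_server * IOPS_PER_CARD

# 2) Block on object: an assumed cost model in which one logical block
#    write becomes an object read-modify-write plus extra replica writes,
#    versus a native block path that just writes each mirror copy.
def backend_ops_block_on_object(writes: int, replicas: int = 3) -> int:
    read_modify = 2  # read + rewrite of the object holding the block
    return writes * (read_modify + (replicas - 1))

def backend_ops_native_block(writes: int, replicas: int = 2) -> int:
    return writes * replicas

for n in (1, 28, 100):
    print(f"{n:>3} servers -> {aggregate_iops(n):,} aggregate IOPS")
print("1000 block writes via object layer:", backend_ops_block_on_object(1000))
print("1000 block writes via native block:", backend_ops_native_block(1000))
```

Under these assumptions, 100 servers deliver 100 times the single-server IOPS, while the block-on-object path issues roughly twice the backend operations per write; the real systems differ in detail, but this is the shape of the argument being made.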
So when you make a block request against Ceph, it results in a transaction that turns into multiple I/O requests, and hence the performance is very, very low compared to ScaleIO. Number two, Ceph is very resource intensive: it takes a high percentage of CPU to run, and it requires a larger amount of memory. And third, Ceph is very complex to manage. What happens is, you practically need a PhD in Python before you download it, and it's doubtful you'll get it working, whereas with ScaleIO you can get the product running very, very fast.

So when I talk to some of you, I ask: then why did you download Ceph? Why did you work on Ceph? And almost always the answer is, "CJ, it was freely available. When I go to download ScaleIO, I don't know where to find it, and then it asks me a lot of questions; it's time-bound, and that process takes too long." In this new world, as Jeremy said, we want to make sure our storage software is available freely and with as little friction as possible. So we announced yesterday, historic moment number two, that ScaleIO will be available for free by the end of this month, for unlimited time. You can play with it as long as you want, the software will have a community, and you can get all your questions answered there. So this is a big, big change compared to our previous stance, and it will be going live by the end of this month.

Some ScaleIO features are scheduled for the later part of the year. Number one, we want to make sure it's highly available: if you have a scale-out, high-performance block system, we are doing a lot of innovation around non-disruptive upgrades, so you can seamlessly upgrade software, rebalance data when you bring a server down, and more. Second, security is important in this Platform 3 world, so we'll have support for IPv6, network encryption, and other features. And what I'm excited about for disaster recovery: we'll also have integration
with RecoverPoint, as we announce this product towards the second half of this year.

As you saw in David's announcement on Monday, we're super excited about VxRack, our collaboration with the VCE team. You can start at a quarter of a rack and scale up to thousands of nodes; that's the power of ScaleIO, and that's why we are very, very bullish on this product and what it will be able to do.

So look at all the announcements: the data lake, the new release of ECS coming out, ScaleIO, ViPR Controller going open source. We also have two new product announcements. At last EMC World we announced DSSD, so we'll go into the details of the DSSD product, and also of Project Caspian, which David announced on Monday.

Before I do that, I'm sure all of you are wondering: "Hey CJ, these are a lot of new technologies. You're trying to create new products and new platforms. How do you do that? What kind of engineering team do you have?" The first thing I would say is that over time ETD has done a great job, whether it was bringing in a team that helped build Azure in 2012, the ScaleIO acquisition in 2013, or DSSD and Cloudscaling in 2014. What I'm excited about today is that we just hired somebody from Apple; it's his first day today. On stage, I would like to welcome Josh Bernstein, who is joining EMC and the Emerging Technologies Division today.

It's great to have you here. Great to be here. All right, so Josh, my first question: give us the scale of what you built at Apple, and tell us which application it was for. It's a great question. Apple has huge scale; we were probably in the 50 to 60,000 node range for Siri. Did you just say 50 to 60,000? 50 to 60,000 nodes, that's right. All right, just making sure. And a lot of lessons learned; we learned all kinds of stuff there.
I mean, when you go big... everybody's very excited about getting their Apple Watch, and we were challenged to build an environment to support that kind of product. And I told you before, I'm still not convinced on the watch. I know you're still not convinced. Okay, so the second thing is: tell me, why EMC, coming from Apple? I get that question a lot. I think that EMC, and the industry in general, is on the verge of doing something really incredible. EMC is incredibly well positioned, they have incredible products, and I think they're really on the verge of making a dent in this industry, really supporting their customers and doing a phenomenal job. So I'm excited to be here, I'm excited to participate in that, and I'm excited to drive it forward with you. Excellent, thank you. Welcome to EMC. Thank you.

So we talked about bringing great engineering talent into EMC. And as we talk about Project Caspian: around last summer, EMC acquired a company called Cloudscaling. Randy Bias, the last time I checked, still has the highest number of Twitter followers, and Chad needs to catch up there. Randy was on the OpenStack Foundation board; he was a founding member. We are very proud that he's part of the Emerging Technologies Division at EMC, and I would like to welcome Randy Bias to the stage. Please give him a big round of applause.

Thank you, CJ. Well, it's really great to be here. This is my first EMC World ever, so it's been pretty exciting, and I have been learning a lot talking to folks, reporters, and analysts. I'm really excited to talk to you today about open source and OpenStack. Just by way of a little background, I was previously the CEO of Cloudscaling.
I like to think we were number one at building production-grade OpenStack systems in OpenStack land, and I also sit on the OpenStack Foundation board of directors. One of the ways I know Cloudscaling was very successful before EMC took us out is that we had two major deployments for two of the Fortune 15, about 20-plus racks each. That's a very large deployment in OpenStack land, because it's early days, and we learned a lot from those experiences. The other thing you should know about Cloudscaling is that it was a 100 percent open source solution. Our customers were demanding open source; they were demanding commodity hardware. So it was really interesting when EMC came in and said, hmm, we'd like to buy you. I said, EMC, Cloudscaling... how does that equation work? But over the course of a very short period of time, a week or two, I realized that EMC actually sees that future and wants to get there. So today I'm really excited to talk about EMC's first forays into open source.

The challenge is, I came on board at EMC and was helping people understand what open source is about, and I ran into a number of challenges right out of the gate, where people were thinking about open source in terms of it being software, or free software, or a particular kind of license, the open source licensing model. It took me a while to get across to folks that open source is actually about three key things, at least in the mind of the open source consumer, the enterprise consumer: one, community; two, control; and three, vendor neutrality.

We're entering a new era, and what that means is that customers adopting new technology platforms want to know that other people just like themselves are around the table, having the same kinds of problems, and that they can engage with them in public forums like bug-tracking systems and IRC, the places where developers go.
They want to know, in terms of control, that if the features they want aren't on the roadmap of an open source project today, they can directly influence that: they can hire their own developers to contribute code back, they can hire outside engineering firms to help them put those features in, and so on. And finally, time and time again as I've talked to customers over the last several years, they want vendor neutrality. They want multi-vendor solutions. They want to know that if one of their vendors doesn't work out, they can let them go and replace them with another vendor. Now, that might be scary to some businesses, but to EMC it's an opportunity, because we know that we win on innovation and on customer service, and we'll continue to do so in the future.

As you might expect, there was a ton of resistance at EMC to open sourcing our first product. However, what was amazing to me is that once we pushed through the resistance, the EMC execution engine just kicked right in. People had really internalized what this was about and were really going to try to do it the right way.

So that brings me to the punchline. Historically, within the Federation itself, VMware and Pivotal have been very, very active in open source software; they have multiple open source products and have been leading the way, and it's really great that today EMC proper is joining the party. So what are we open sourcing? Well, we could have chosen to do some little thing; we've already got some drivers up in EMC {code}. But we wanted to go big. What product do we have that's already multi-vendor, that we could get a community going around immediately, that we could give customers control of by making it open source as quickly as possible, and that is already a leader in its space? Well, as I looked through all the options, ViPR Controller rose to the top. So we really focused on ViPR
Controller, delivering a fully open source version of ViPR Controller this year. ViPR Controller, for those who aren't aware, is the world's number one software-defined storage controller. It has a number of northbound APIs that plug into the variety of cloud stacks available today, as well as a bunch of southbound APIs that plug into different storage systems, including non-EMC storage arrays. And the great thing is that it's already an extensible framework, so as we open source it, pretty much anybody can be part of the party with us. You probably saw the announcement yesterday of Project CoprHD, very, very exciting: the first EMC open source product ever. And I was directed very clearly by CJ to help people understand that this is not the last; it's the first, but not the last. We are going to do more. We believe in open source, and we believe it's part of our future. With that, I'm going to hand it over to CJ, who's got a special guest panel that's going to tell us more. Thank you, Randy.

All right, so it's me again. Let me introduce my guest panel. We have Bev Crair from Intel; she's Vice President and General Manager of their Storage Division. We have Dr. Dietmar Reynolds, I've tried to pronounce it many times, VP of Infrastructure for SAP, who took a long flight to be here with us today. And then we have Kumar Vishwanathan from Verizon; you can talk to him about any complaints with your cell phone service or bills right after this, but he's VP and Chief Technologist at Verizon. Glad to have you here.

I'll start with you first, Bev. The CoprHD announcement, open sourcing ViPR Controller: what does that mean to you?
We've been working with the industry for a while, talking about the software-defined storage controller environment, and it's been really clear from end users that they require an open, interoperable, heterogeneous management infrastructure. So getting ViPR Controller open sourced gives us the opportunity to really accelerate that controller infrastructure. We're really excited about what EMC is doing. Excellent.

Dr. Dietmar, you have been using ViPR Controller since day one, one of our early adopters, and thank you for that. When you heard about us open sourcing ViPR Controller, what were some of the first thoughts that crossed your mind? Well, if I'm honest, first I was surprised; I was surprised that you really did it. But it's the right thing to do. It removed one of the biggest challenges, or concerns, we had when we moved to ViPR initially: that we would be vendor locked in, that we would lose speed of innovation, and that we would potentially have challenges keeping up and retaining the choice to use the storage solutions we need. So it's the right thing to do, and you did it precisely correctly. Thank you.

So Kumar, you have a big role at Verizon. One of the applications you have told me about is easy backup. As you build these next-generation applications that could scale to hundreds of millions of users, what are some of the design points you are looking at for the overall infrastructure? Sure. If you take a step back, all of our applications were built in silos; you had silos all the way from the application to the operating system to the hardware. And the silos create stranded hardware capacity.
It creates an operational nightmare for the operations team, having to maintain so many different versions of everything. So the whole goal was to drive from what we call redundancy-based fault tolerance to resiliency-based fault tolerance, which means you need a homogeneous layer at the bottom, commodity hardware, and you need to get all of your software into chunks, microservices in containers, that we can start deploying at scale and really run our applications on. And again, we picked the application that had the biggest impact, petabytes of storage, and decided to start playing with that rather than pick a small one. So the goal is to completely move from silos to data centers that are distributed across not just one physical location but multiple locations; much more distributed.

So coming back to you again, Bev. With this CoprHD announcement around ViPR Controller, how can we work better with Intel, and how can Intel help us get industry adoption? How do you see Intel contributing to CoprHD? Intel has been working in the open source industry for 20 years, right? We do a huge amount of work in the Linux kernel, in OpenStack, and in a lot of the other open source infrastructure organizations. What we will be doing in the software-defined storage controller space is really working on open, interoperable APIs, those southbound and northbound APIs. We're really excited to work with CoprHD and to continue to move forward that open, interoperable standards space for storage controllers. Excellent, thank you; thank you for your partnership.

Coming back to you, Dietmar. You have been using ViPR Controller in production, we've now open sourced it, and hopefully the community will soon start contributing. From SAP's side,
how do you now think about your storage infrastructure given this announcement, and how would engineers from your team contribute to the controller? First of all, this announcement fits perfectly into our overall strategy of having an open environment to manage our heterogeneous storage landscape. As much as you would probably like me to have only EMC storage in my data center, it's never going to happen; it's always going to be extremely heterogeneous. SAP embraces open source; we are an active contributor to OpenStack, and I'm pretty sure we'll be able to share our experience with the community, maybe even actively contribute, and last but not least work with our suppliers and partners to jump in and build exactly the big community we, and you, are looking for, so that we have all the new storage solutions in that community and the ability to make the choices we need to pick the right solution for our services. Thank you, thank you very much.

And Kumar, last question to you. When you looked at your application infrastructure, and I know Verizon has very ambitious plans to scale further over the next couple of years, tell us about a couple of technologies from EMC that you liked, and what your experience was working with the engineering team. Absolutely. I think you mentioned in your talk that some of your technologies are difficult to find, like ScaleIO, and you had a nice chart about ScaleIO versus Ceph. We actually went through that complete journey: we started by trying to pick certain open source pieces and make them fit, but frankly we've been working very closely with ECS and ScaleIO for the past few months. We picked up some of those builds at early alpha and have been running the software in our labs, and quite honestly I think it's been a mutually beneficial relationship.
Some of our use cases drove a lot of different testing on your side, but we learned a lot about software-defined storage on our side: what we should do on commodity hardware, what we need to take advantage of, what we need to be wary of. And quite honestly, this open, collaborative development has been very beneficial to us and to my team. Excellent. And in talking with Josh at Apple, and in talking with your team, it's also a reminder that for these kinds of applications, continuous development and continuous integration, with both teams working together, is absolutely critical as you scale out your entire environment. Absolutely, because if you really think about it, if you're talking about containers being deployed in the data center, the containers can fail over from one machine to the next, from one data center to the next. You talked about geo-redundancy; for all of those features there is some amount of work to be done for us to take advantage of them, and we've been working on that as we launch. Because when we move petabytes of storage, we obviously want to leverage everything you do on geo-caching and geo-redundancy, all the way up to how we scale from a few terabytes to hyperscale. Makes sense.

So, the last question for the panel. When we told you about open sourcing our core IP in the controller, were you skeptical? Open source is hard; going from closed to open source is hard. It's a hard thing to do. But I think that as long as we keep in mind the end-user value of making sure you can manage a heterogeneous environment and really engage that controller across the infrastructure, we'll do fine. Excellent. So, a big round of applause for the panel. And back to you, Randy.

That was fantastic, CJ.
Thanks to you and all the rest of the panel. That last part reminded me, could we back up a slide, please, of the panel that we did on Monday. We had about a full room, and I asked everybody in the room to raise a hand if they were expecting EMC to announce open-sourcing a product coming into EMC World, and I got one taker. Then I asked the room how many people think EMC is going to be successful at building, launching, and running an open-source project, and I got most of the room. I was surprised, but it was a testament to EMC. And although I helped drive, organizationally, some of the changes and initiatives around ViPR Controller, there was a big crew of people at EMC who actually did all the work; I basically just made all the noise and broke all the glass. So let's give a round of applause to the EMC team that got ViPR Controller open-sourced. Thank you so much. Switching gears a little, but staying in open-source land, I thought we should talk about OpenStack. If you haven't heard of OpenStack, I'm sure you will, or you're going to hear about it now, at least a little. OpenStack is an open-source project that was designed to be a clone of Amazon Web Services. It's about five years old now, and it's the fastest-growing open-source community in history. More than 3,000 developers have committed to OpenStack, and 400-plus, maybe even 500-plus, companies are deeply involved with it. The folks on the OpenStack Foundation board, the members, and the gold and platinum sponsors are a who's who of the industry: Cisco and Dell and EMC and Red Hat and Canonical, really a laundry list of all the major players in the enterprise world. People are using OpenStack today to solve a key problem around the third platform, which is that they've got next-generation applications.
They're moving toward a DevOps model, and they're trying to get a lower cost basis for those applications and faster, more nimble iteration. OpenStack is really the centerpiece people are using in their data centers to modernize them and take them to the next step. To date, EMC has actually been very active in OpenStack land. VMware, of course, is the number-six contributor, and EMC proper is also a major contributor. We have a whole bunch of drivers and plug-ins in the trunk, plus reference architectures and solutions we can help customers with, and we're working closely with partners like Canonical, Red Hat, and Mirantis on those things. And of course ViPR Controller and Project CoprHD plug directly into OpenStack as well. So there's a whole bunch of stuff out there today for OpenStack. But we asked ourselves: if we want to take this to the next level, if we want to help our customers adopt OpenStack and be really successful, what are the requirements? We came up with four key areas. First, we needed to build something that would be scalable, nearly infinitely scalable. You start with a rack; you grow to a hundred. When I was at Cloudscaling, every customer wanted to get started with OpenStack, but they wanted to start small, so you've got to be able to start small and grow big. Second, you've got to focus on the third platform. Systems like OpenStack are predominantly designed for those next-generation applications; they don't really make sense for the old world. That's still a hard thing for people to see, but when I see successes in today's OpenStack deployments, they're all focused on the third platform. Third, we wanted to deliver something hyper-converged. Customers want an easy, turnkey system, but they also want to disaggregate compute and storage.
They want to be able to scale compute and storage separately, and they want a lot more flexibility. They don't want the kind of hyper-converged solutions you've seen to date, which are software-defined but very, very rigid in their deployment models. And finally, they want it to be easy. They want all the operational goodness of the hyper-converged model, they want an open and transparent system where it's all open-source software, and they want to be able to do day-to-day operations in a relatively effortless manner. So you've probably heard about this: today, or at least recently this week, we announced Project Caspian, and we're going to do a sneak peek of it now. I'm really excited that we'll have Chad Sakac on stage in a few minutes to run us through it and show you our answer to those four requirements for customers embracing OpenStack. And to get us to Chad, I'm going to tee it up to Jeremy, who will take us there. Thanks, Jeremy. [Video: "Always on, always connected, always stressed. How do you keep up, stay healthy, and reduce stress? By staying connected with the Medi app. Know when you're on the right track, and when you're not." Ladies and gentlemen, Chad Sakac.] Jeremy! I didn't see you there. Hey, these people... Chad, I've got to talk to you about your lifestyle choices. I don't understand what you're talking about. Well, I am on the board of Novium, and we've got some big issues; you might be our number-one issue. We'll get to that in a second; I've just come from the board meeting. There are three things we need to do in the next 30 minutes. No problem. Number one: our Medi app, you've seen the Apple Watch application, is going great. We have thousands of customers, and we think it's going to go to millions. We need to scale our infrastructure, elastic infrastructure. That's number one. You got that? Number two:
We're buying a company, and we're going to migrate their back office onto our infrastructure. Now, I'm sure this acquired company uses the same technology stack we currently have, right? You know... okay: they use KVM on Linux, and we are Windows and VMware. Fantastic. All right, but that's not all. Third: we're going to get into the realm of big data analytics. It's a huge opportunity, Chad; we've got to build the infrastructure. Jeremy, I've got your back, buddy. You got that? Piece of cake. All right, so let's talk about how we're going to solve the first problem. We need to be able to build an elastic infrastructure to support that app, and this is a technology preview of what we've been working on with Caspian. Okay, this is Caspian you're looking at right now. The first thing you need is an elastic physical-infrastructure layer. So as I go in, I can take a look at the physical infrastructure. Here we have racks, bricks, and nodes; just visualize these as industry-standard servers, very similar to the VxRack hardware people saw on day one. Now let's see: I'm just going to go in and say, okay, we want one full rack here, and that's it. It's that easy to select. Actually, I think we might need more, so I'm going to select another rack, and you can see the physical capacity increase. Now I'm going to say I want to deploy, and what you can see is that it's deploying all of the OpenStack components. That's it! Because I was led to believe this would take weeks and months; I have worked with customers, with very smart people, who have been at OpenStack deployments for 18 months. Like Randy said, we need to industrialize it: make it simple and easy, but still elastic.
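The elastic flow in this demo, select racks, watch capacity grow, redeploy when you need more, can be sketched in a few lines. Everything below, from the nodes-per-rack figure to the class itself, is an invented illustration of the idea, not Caspian's actual interface:

```python
# Toy sketch of rack-granular elasticity, loosely following the Caspian demo.
# All names and numbers are invented for illustration; this is NOT Caspian's API.

NODES_PER_RACK = 16      # assumed industry-standard servers per rack
TB_PER_NODE = 24         # assumed raw capacity per node
RED_THRESHOLD = 0.85     # assumed utilization level that raises the "red" alert

class ElasticInfra:
    def __init__(self, racks=1):
        self.racks = racks

    def capacity_tb(self):
        # capacity grows linearly with the racks you select
        return self.racks * NODES_PER_RACK * TB_PER_NODE

    def utilization(self, used_tb):
        return used_tb / self.capacity_tb()

    def expand_if_red(self, used_tb):
        """Add racks until utilization drops back below the red line."""
        while self.utilization(used_tb) > RED_THRESHOLD:
            self.racks += 1    # "I need some more physical infrastructure"
        return self.racks

infra = ElasticInfra(racks=1)
infra.racks += 1                        # "I think we might need more": one more rack
print(infra.capacity_tb())              # 768 TB across two racks
print(infra.expand_if_red(used_tb=700)) # red alert fires, a third rack is added
```

The point of the sketch is only the shape of the workflow: capacity is a linear function of selected racks, and the alert-then-expand loop is a simple threshold check.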
Okay, fantastic. Now if I go to the dashboard, you can see that we've got the capacity deployed. But frankly, the infrastructure is just the first part; we're going to go in and really look at the power of these Platform 3 applications. I'm actually going to use this to do a `cf push`, to push the Medi app up onto the infrastructure. So it's a Cloud Foundry app running on top of our racks? Exactly. Ultimately this creates small containers running inside the Nova instances, and it could do it on bare metal as well. Now we see the instances starting to power up. The users are starting to put load on the web front end and on the app nodes, and you can see the load increasing. All right, fantastic. Still increasing... it looks like it's fine, don't worry about it. This is good; the purpose of the beta is to increase the load. It's still increasing, Chad; we're going to be in trouble here. Fine, man, I told you we were going to be wildly successful with this app. Well, I'm starting to get a little bit worried, Chad. Red is bad in any data center, right? So we're getting an alert here that says we need more capacity. It's as simple as going in, saying I need some more physical infrastructure, expanding the OpenStack components onto it, and boom, we have more. We've demonstrated this concept of elasticity at the physical layer. Now, that's just the infrastructure, though. Surely the app?
We have to do something similar there, exactly. Now again, the power of these third-platform apps is that you can just issue a `cf scale` command and say I want an additional 350 instances of the actual code. Boom, done. And you can see that the load is increasing, but it looks like we've hit the sweet spot for the current state of the beta. Okay, we are back under control. And the last part is that we can take the information from Heat and Ceilometer and start to offer customers better visibility into their consumption of their pure Platform 3 infrastructure. Maybe, as we build out a range of apps, we can charge the business units back for the infrastructure they consume. Exactly. So, first problem solved. Done, done. All right, we scaled our infrastructure to potentially millions of users. So net this out for me: how could you do this with Caspian? Caspian is designed to give you that elasticity at the physical-infrastructure layer. We've developed some amazing technology to leverage industry-standard hardware and make it elastic and easily provisioned. Number two, we took OpenStack, a great open-source project that everyone's contributing to but that is complex to deploy, and made it very, very simple. And to be clear, although the design point is industry-standard Apache OpenStack, it could potentially use any OpenStack, even VMware's VIO for customers going down that path. So again, this idea of federation and choice. And then the last part, which I think is perhaps the most important: you could hear it from Josh, and you could hear it from Randy.
It's ultimately about elasticity at the application layer, and we demonstrated how it's designed to be a simple, easy way to deploy Cloud Foundry on- or off-premises to give you that elasticity. Okay, problem number one: tick, done. Done, done. Now, about that acquisition. I am really worried about this, because the number-one issue with acquisitions is integration of systems, and it doesn't sound easy. If only we had a freely available, easy, elastic, transactional storage model that could power an open environment with both VMware and the Linux/KVM environment... Let me have a think. You've got one right there. I do: ScaleIO. So what I'm going to do here is log into the ScaleIO UI, and what I've configured here, you can see the dashboard. Now, first things first: I know that not everyone here knows about ScaleIO, but trust me, you will. Let me orient you a little. An SDS is a server, a thing that contributes storage, which gets aggregated, pooled, and spread out. We started pretty small here; there are only six nodes. And a client is something that's consuming that storage; it could be anything. Now, what I've done is create a model for this new tenant. This is a storage pool: this is the newly acquired company, and this is our current environment, the target for our migration. Just like any company, they've got their own app-dev environment, they've got their management environment, and they have their own OpenStack Cinder environment; again, that is Linux and KVM. What I'm going to do is take a look at this and say, you know what, let's add some SDS nodes. So here's the first node I'm going to add; these are just physical servers that I'm adding into the pool. I'm going to give it an IP address, in fact two, for redundancy. Okay, there's the first one, then the next one. And then I'm going to add the physical devices; these are local devices on that physical server.
So this first one is a magnetic drive, a traditional, old-school HDD. Then we're going to add flash, an SSD, exactly, and then an individual high-performance PCIe flash drive, and I'm going to assign them to their different purposes. So this is an interesting kind of flexibility: you can actually use it for many different things. And I'm not just going to add one; I'm going to add a total of three nodes into the cluster, each with its own IP addresses. What we're demonstrating here, frankly, is how easy this is, and of course it can all be automated via APIs, so you could script all of this if you wanted to. But what we're showing here is that it's very, very easy to add physical nodes with all sorts of disparate configurations and different hardware. Okay, and there we go, we're now done. Now, if I go and check whether these are being used, because we're actually presenting to the clients in advance, you can see the load coming in; the load is starting to ramp up, which is great. Notice this is a Chad-optimized demo; the world revolves around me, dude. So look, one thing we found is that for many people, this concept of software-defined, distributed storage raises the question: where is everything, and how do I figure that out? How do I see which nodes what I'm running is actually running on? This is our production database, running for the pre-acquisition part of the company; you can go in and see how it's distributed across all of these SDS nodes. And the other thing that I think is really cool is that there are deep abilities to look into things like device latencies. This is the inter-node latency between each of the SDS nodes serving up this particular environment: a couple of hundred microseconds. So what does this also tell you? When you think of software-defined storage on commodity hardware,
you think capacity-optimized, and that would be ECS, right? This, though, is performance-optimized, transactional. Very, very cool stuff. And actually, if we look here, we can see that we've gone up and we're now driving almost 80,000 IOPS, about a 30% increase from what we had before. Exactly, and notice that the SDS node count increased by the three that you added. Now, by the way, we've got a bit of a problem here: the newly acquired company is driving a fair amount of demand on the overall pool, so they could potentially be a noisy neighbor and impact our system performance. What we've got the ability to do is apply unbelievably cool QoS policies. Here I'm doing it for the whole pool, saying: hey, 25 megabytes per second per SDS node for just that pool, that tenant. We can do this at the tenant level, and we can do it at the individual workload level. Now you'll see that go from 300 down, down, down. So really, the quality of service and the multi-tenancy: actually, the design point of ScaleIO was much more for the service-provider world than anything, and that's how our work with ScaleIO started with our friends at Verizon. Interesting stuff. Now, if I go and take a look at the dashboard: here we are, cooking along, everything is great. Cooking... I notice this rebuild thing. I guess we had a failure; one of the nodes is dead. So wait a minute, this could be the world's longest demo, because it's going to take hours to rebuild. What's amazing is that the power of this distributed, performance-oriented ScaleIO model means that massive rebuild rates are possible. This is just a simple nine-node SDS cluster; at a thousand nodes, these rebuild rates go well above gigabytes per second. Done already, and that wasn't a rebuild of a drive; that was a rebuild of a node. A whole server needed to be rebuilt. And by the way, I'm relaxed.
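The rebuild claim above, that rates grow with cluster size, follows from declustered rebuild: every surviving node contributes a slice of bandwidth. A rough back-of-envelope, with an assumed (not measured) per-node figure:

```python
# Back-of-envelope for declustered rebuild: every surviving node contributes
# bandwidth, so aggregate rebuild rate grows with cluster size.
# PER_NODE_REBUILD_MBPS is an assumption for illustration, not a ScaleIO spec.

PER_NODE_REBUILD_MBPS = 100   # assumed rebuild contribution per surviving node

def rebuild_hours(nodes, failed_node_tb):
    """Estimate hours to re-protect a failed node's data."""
    surviving = nodes - 1
    rate_mbps = surviving * PER_NODE_REBUILD_MBPS
    seconds = failed_node_tb * 1_000_000 / rate_mbps   # TB -> MB
    return seconds / 3600

# A 9-node cluster rebuilding a 10 TB node vs. a 1000-node cluster:
print(round(rebuild_hours(9, 10), 1))     # a few hours at 9 nodes
print(round(rebuild_hours(1000, 10), 2))  # minutes at 1000 nodes
```

Whatever the real per-node number, the shape holds: rebuild time falls roughly in proportion to node count, which is why a whole-node rebuild can finish during a demo.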
I knew the cluster was highly available the whole time. So, incredibly, incredibly potent. And we're done; we just solved problem number two. So explain it to me again, give me the highlights. ScaleIO, you know, we just migrated all of our applications over from our acquired company; how can ScaleIO do this so easily? What we demonstrated is that this SDS model, running software on commodity hardware, which is kind of a Platform 3 idea, can be applied to Platform 2 transactional workloads. We demonstrated the elasticity: you can grow and shrink ScaleIO clusters to infinity and beyond. We demonstrated rich multi-tenancy and rich openness: it's the industry's only SDS layer you can use with KVM, with Linux bare metal, with CoreOS, with VMware. And I really think that to date it's been almost the biggest secret inside the EMC portfolio. But if only we would do something like give it away for free for anyone here to use... oh, wait a minute, we just did. So there's no excuse: everyone in the room can take this today, download it, run it up, and see what it can do for themselves. Exactly. Chad, well done; you're doing well. Two down, one to go. This is the big one, because this is going to make an impact on our business. We've got streams of data; we've got information about consumers that we never thought we could have. Yep. We're going to get into the realm of big data analytics, Chad. This is going to be a real money-spinner. Can you help? Dude, after those two, this one is a piece of cake. You know why? We've been investing in a data lake, and I've got this monster 40-node Apache Hadoop cluster running on DAS that will blow away your requirements. Oh, man, there's something going on over here. Who's that dude? Well, it's good to know that you have 40 nodes of DAS there, Chad, but to tell the truth, that's like bringing a donut-fueled tricycle to race a Tesla. Oh, come on.
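The SDS pooling model recapped above, servers contributing heterogeneous local devices into one shared pool, can be sketched in a few lines. Node names, device types, and sizes are invented for illustration; this is not the ScaleIO API:

```python
# Sketch of pooling heterogeneous local devices across SDS-style server nodes,
# in the spirit of the ScaleIO demo. Names and capacities are invented.

pool = {}   # node name -> list of (device_type, capacity_gb)

def add_sds_node(name, devices):
    """Add a server's local devices into the shared pool."""
    pool[name] = devices

def pool_capacity_gb(device_type=None):
    """Total pooled capacity, optionally filtered to one device tier."""
    return sum(cap for devs in pool.values()
               for (kind, cap) in devs
               if device_type is None or kind == device_type)

# three nodes, as in the demo, each mixing HDD, SSD, and PCIe flash
for i in range(1, 4):
    add_sds_node(f"sds-{i}", [("hdd", 4000), ("ssd", 800), ("pcie_flash", 400)])

print(pool_capacity_gb())         # total across all tiers
print(pool_capacity_gb("ssd"))    # just the SSD tier
```

The takeaway is the shape of the model: disparate local devices, once contributed, are just entries in one aggregate pool that clients draw from.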
He's baiting you, man. What you really need for doing high-speed analytics is something like rack-scale flash. Rack-scale flash? I think he's smoking something. Indeed. So rack-scale flash is exactly what the name sounds like: shared, PCI-connected flash that connects to a rack of servers to help you solve your high-speed analytics workloads. It used to be the case, in the Platform 2 and 2.5 worlds, that it was always the usual suspects who had these huge, monstrous, real-time data problems, whether it was the financial institutions doing their high-speed trading, your government agencies, or your standard HPC guys. Either way, it was the usual suspects. Today, though, with Platform 3, with billions of connected devices out there, and even herds of cows generating data, every company has this much data somewhere inside of it that it wants to analyze, to get real answers to real business problems. So that's where rack-scale flash comes in. What we've done is engineer a shared, PCI-connected fabric that connects to a rack of servers and allows you to deliver the full potential of raw flash all the way to a shared environment. So you get all the benefits of PCI-connected storage that is closer to the CPU in the memory hierarchy, faster and lower latency, to help you get your job done, while retaining all of the operational benefits of shared storage. That is: you have an easy service model, so your data is always available; even if a server goes down, other servers can still access it. You have serviceability, you have pooled storage and pooled capacity, all of the operational efficiencies you get from a shared appliance, with all the performance advantages of PCI-connected storage. So that's rack-scale flash for you, Chad. Okay... I get you're in trouble, buddy. No, no, no: 40-node Hadoop cluster on DAS, man. And look, he's making stuff up.
I don't know who to believe, but I think I know a way to settle this once and for all. You're going down. Now, ladies and gentlemen, you thought you saw the big fight on Saturday night, but let's get ready to rumble! In the blue corner, weighing in at 155 pounds, he drives a Honda, he drinks Red Bull: Chad! And in the green corner, weighing in at 230 pounds, he drives a Tesla, and his only weakness is the speed of light: Bill Moore! Gentlemen, this is a world-championship contest. It's one round of a Hive query. Keep it clean, gentlemen. Let's settle this like real men: over a computer terminal. So here we both have a nice web interface to do some Hive analytics on a Hadoop cluster. I hear Chad's got his rockin' 40-node DAS, and over here we have, of course, a smaller system connected to rack-scale flash. We'll just pull up some saved queries here. On this first one, we're going to be querying Novium Healthcare's 50-terabyte database to figure out, over the last several years, which conditions have caused the highest readmission rate among patients seen in the last 30 days. So here you can see the query up on the screen, and just so we have an idea of how we're progressing, we'll bring up a little IOPS meter. Any time you're ready there, Chad: on your marks, get set, go! Okay, here we go. So Chad's over there with his 40 nodes; I've got a paltry 10 nodes here. So you're done, man. Look at this, I'm starting to ramp up. Two percent progress... Chad's in trouble. Could I maybe get a little extra time? Chad, you're way behind, buddy. Well, Chad, I think your time's about... oh, ladies and gentlemen, I think we have a winner! There we go, dude. Yeah, it's done.
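The actual HiveQL being raced wasn't shown on screen, but a readmission query of the kind described, which conditions drive the most readmissions among recently seen patients, might look roughly like the SQL below. It's sketched here against SQLite with an invented toy schema and data, purely to show the shape of the query:

```python
# Toy version of the demo's readmission analysis. Schema, data, and numbers
# are invented; the real demo ran a comparable query in Hive over ~50 TB.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE admissions
               (patient_id INT, condition TEXT, readmitted INT)""")
con.executemany("INSERT INTO admissions VALUES (?,?,?)", [
    (1, "heart failure", 1), (2, "heart failure", 1), (3, "heart failure", 0),
    (4, "pneumonia", 1),     (5, "pneumonia", 0),
    (6, "diabetes", 0),
])

# which conditions cause the most readmissions, highest first
rows = con.execute("""
    SELECT condition, SUM(readmitted) AS readmissions
    FROM admissions
    GROUP BY condition
    ORDER BY readmissions DESC
""").fetchall()
print(rows)
```

The same GROUP BY / ORDER BY shape scales from six toy rows to the 50 TB case; what the D5 changes is only how fast the scan underneath it completes.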
I think we would call that a technical knockout in the first round, Bill. While we're waiting for Chad's, I mean, it would be the right thing to do to let him finish, but while we're waiting, why don't we see some of the cool things that DSSD can do? Sure. So we've got another saved query here. This one is a lot more complex, as you can see. What it's doing, again over the top four diseases in this 50-terabyte database, is trying to figure out which common diseases and syndromes seen among senior citizens are not present in the 50-to-64 age group. So once again we'll bring up the little IOPS meter and start that running. And maybe we should check on Chad. Yeah, Chad, did you finish? It's like, stop, I'm right here, you know. This is kind of embarrassing; I've got a lot of friends in the audience. Chad's almost at 5%. Go easy on me, fellas; single digits. Wow. I don't have your newfangled rack-scale flash thing. And Bill, how many nodes are in your cluster, just as a matter of interest? About 15. So less than half of Chad's. Yeah, well, you know... Done already! Powerful things come in small packages, man. All right, Bill, this is fantastic, but let's take a look under the covers, so to speak, inside this beast here. Certainly. So this is our D5 storage appliance; let me just turn it on here real quick. Press the button. This 5U rack-scale flash appliance has 36 flash modules in it, so today that's 144 terabytes; next year that'll be 288 terabytes, growing from there. And if you look at the front, you can see the 36 flash modules. Now, you can pull these flash modules out, and these are custom-engineered flash modules: we buy raw NAND and engineer a flash module from scratch around it. Now, you may ask why we would do such a crazy thing. Yeah, Bill, why would you do such a crazy thing like that?
Well, because we're crazy. But besides that, we actually have some real reasons. If you look at raw NAND, it's got a certain amount of capacity per die, but it also has a certain amount of potential performance per die, and you have to be able to actually power and cool the flash at scale. Inside this one module there are 512 NAND dies, so even though each one only consumes a small bit of power, multiply that out and it's 45 watts of power to drive the NAND at the performance potential it's capable of. In addition to that, we wanted it to be hot-pluggable and PCI-connected, again because that's the fastest, lowest-latency interconnect on the planet right now, and there were no form factors out there that did that. So this is dual-ported, PCI-connected, hot-pluggable, and the highest-performance thing we have. Yeah, and unlike traditional SSDs, where you're trying to do wear leveling, media management, and the flash translation layer on tiny little microprocessors, all of which takes horsepower, what we did was migrate all of those complex algorithms up to a dual-socket Intel motherboard that we put into the appliance, where we can run far more complex algorithms and do a far better job, at the system level, not just at the single-SSD level. So we can make optimal choices across all 36 modules, rather than looking through a soda straw and trying to optimize only a single module. And then I know there's really cool stuff under the covers here, so can we take a look inside? Of course we can. As part of this, since it's all PCI-connected, we had to figure out how to make the world's largest PCI fabric to connect these 48 servers to all of these PCI flash modules, and this is it. You see each of these little black triangles? There are actually 12 of them, some hidden here, and each one is a 64-lane PCI switch. So this is the largest PCI fabric anyone has ever built before, and the reason
that's necessary is that on one side you have 48 cables going to your rack of servers, each of them PCIe Gen 3 x4, so capable of 4 gigabytes a second, and there are 48 of these on a single module, fanning out to the 36 flash modules in front so they can all be aggregated and consumed as a whole. That's how you get direct-connect but shared storage. Exactly right. And of course there are two of these; this being an enterprise product, you have to have redundancy, resiliency, and serviceability, so there are two, and each server connects to both sides for full resiliency and redundancy. Bill, when people think fast, to date they probably think memory, or they might think SSDs. That's right. I think we're all dying to know: how does this stack up against SSDs, or even memory? Well, that's a good question. We actually asked it of ourselves early in the project, and what we did was write a benchmarking program that does 128,000 random 32K I/Os against whatever device you point it at. So let me connect this to a server here. What this is going to do is issue those random I/Os over a four-terabyte data set; it actually starts at 16 gigabytes and goes to four terabytes, and you can run it here and see how it goes. Now, in the interest of time, we've made it so we can accelerate this, but this is real data that we collected. As you can see, for doing these random I/Os, DRAM is pretty fast, on the order of seconds. And there you go, that's what we typically think of as almost certainly the fastest. Indeed, it is fast, but it's also volatile.
So it's not exactly the best place to keep data you want to analyze later. All right, so here we go for SSDs. Now, with SSDs you can see it's taking a little bit longer and the dots are moving slower, so let me accelerate that again. And again, you can see that the same workload that took seconds in memory actually took over a minute on fast SSDs. So now everyone is dying to see: all right, show us the D5, and where does that fit? Indeed. So here we connect to a D5, and we'll see how it winds up. As you can see, it's much closer to memory speed, because again, it's connected directly to the CPU through PCI Express, and that's how we can get closer to memory speeds than any SSD on the planet. I think most people find that a bit of a shocker, because you would have expected it somewhere in the middle. When you talk about bridging this gap, you think, all right, it'll hit the middle ground, but actually it's much closer to memory. That's right, and that's because, again, we can aggregate so many NAND devices in a small appliance and get that density and that performance onto the PCI fabric. So now net this out for us, the business value of DSSD, the bottom line. Yeah.
They're trying to match, you know a custom drug to a particular Patient's genome to see what works best for them right now That takes 11 days to run that query because it's such a massive data set And if you're a stage for a cancer patient, you might not have 11 days So being able to do that same analysis in a matter of hours is truly a life-saving skill Yeah, it truly could be saving lives Speaking of saving lives, I've just noticed out the corner of my eye over there Chad is looking pretty hot and bothered He's also using the Medi app I think we Is there a way we can see his you know state of being at this level just to finish with Well, I'm not happy about this man So in an always connected world, of course, you can do analytics on people even when they're right in front of you Oh, geez All right, we'll log in here Chad, the new Medi app, this is our analytic front end that we've been working on That's right. So this has been analyzing what Chad's been up to the last couple days collecting data on him And it looks here like uh 130 a.m. There was a little irregular cardio rhythm What's the irregular baby? Winner's the grabs table All right, well we're glad to hear that Chad And uh, let's see what else it says below recommended level sleep says here You've been only getting about three and a three point two hours of sleep there Chad. What's up? Your system is wrong. That is not abnormal. That is normal emc world sleep cycles We'll even red bull and donuts can only get you so far Chad Oh, wait a minute. Something bad's going on here Chad Uh, I feel okay, but this thing is saying call 911, uh What is that all about? Chad, how do your arms feel? My left arm feels a little funny I think he's got big problems here. I look at his heart rate. It's escalating. Can we get some help? Chad, you need help. I don't need any stinking help from you. Mr. 
Rack-Scale Flash! Oh, man. All right, it's time to say goodbye to Chad, ladies and gentlemen. Chad Sakac! Rematch next EMC World? I kind of feel sorry for the guy. I mean, Bill, that could be the end of his career. Yeah, well, I'm sure he'll figure it out sooner or later, and with higher-performance analytics capabilities, I think Chad will find a way to recover. All right. Well, look, ladies and gentlemen: Bill Moore, CEO of DSSD. Bill, thank you very much. All right, I'd better go check on Chad. Thanks, Jeremy. Seems like Bill's got some work to do reviving Chad backstage; he doesn't look healthy in that picture. Okay, wrapping up then: what did we talk about today? We have a great business today, Platform 2 transitioning to Platform 2.5, and you've heard about many of those products over the last couple of days. But I hope what you take away today is that we are absolutely fixated not just on continuing to lead in the second platform, but on bringing the right talent and the right technologies to lead in the third platform. Again, the key attributes, scale-out, software-defined, open source, rack-scale flash, run through everything we're going to be doing. So the final thing I need to do, obviously, to keep everybody happy, is to determine who is going to win the 50 Apple Watches. On the click of this mouse, if you see your name up on the screen, you have won an Apple Watch. I have no idea where you go to collect them; it's a detail I forgot. They'll be in the mail. Trust us, everything's going to be okay. So the winners are... Okay, I've been told: if your name is on the slide here, you meet down front, and we owe you an Apple Watch. Ladies and gentlemen, check out the screens. Thank you for being a great audience today. We'll see you next time. Thank you. Hi, everybody, so that concludes our general sessions.
We will be back in here at eight o'clock this evening for the concert. Enjoy your day of sessions, go check out the village, and thank you very much for joining us today.