from San Jose, in the heart of Silicon Valley, extracting the signal from the noise. It's theCUBE, covering OCP US Summit 2016, brought to you by OCP.

Welcome back to the Open Compute Project Summit 2016. I'm Stu Miniman with Wikibon, and this is SiliconANGLE Media's theCUBE. We go out to all the events and help extract the signal from the noise. Happy to have on the program two first-time guests. Sitting next to me is Eric Enderbrock, VP of storage marketing with Micron, and we've also got Mark Glasgow, VP of enterprise sales with Micron. Gentlemen, thanks so much for joining us.

Thanks for having us. I appreciate it.

So, in the keynote this morning, Facebook walked through the expansion of the project. They said OCP really helped transform compute, networking was the next one, and storage is the next piece where they see lots of opportunity to improve efficiencies. And I think about Micron: you guys are at the center of a lot of the transformations going on there. So, Eric, maybe to start with you, tell us a little bit about your background, what you do at Micron, and what brings Micron to the OCP event.

Yeah, you bet, Stu, thanks. First of all, it's great to be here, such an exciting event, as we were saying before the camera. My role is running the marketing team for what we call our storage business unit at Micron, really focused on NAND flash and emerging non-volatile memories, and putting all of those together into systems and solid state drives. So it's pretty exciting. And as you mentioned, with the talk about networking and all these elements coming together to drive the efficiencies of the data center at scale, that's where we really think Micron is coming into its own. We've been known as a DRAM company, but there's so much more to us. The storage business unit we're focusing on brings together both the memories that drive the compute platforms today and the NAND flash for storage. It's really about bringing those two closer to the processor, closer to the CPU, so that we're driving more efficiency out of the whole platform. It's an exciting time. And if you think about what's in a server, that server being the core element of all computing today: it's becoming the storage element, it's becoming the networking element, and it's always been the compute. Memories are core to all of those pieces, and that's our future.

All right, so Mark, why don't you do the same?

Yeah, so thank you, first of all, for having us. It's really exciting. I've been in the storage industry now for a couple of decades. I was running storage for North America for a large company and was recruited away by Micron to start an enterprise storage sales division. Basically, Micron is a 37-year-old memory company that makes all manner of volatile and non-volatile memories, as Eric just mentioned. We really felt the need to get closer to the end user, especially as you see some of these hyperscale accounts consolidating and aggregating so much compute and storage. There needed to be a mechanism, a sales team, that would extend the brand that is Micron, not just through the OEMs, but actually down to some of the bigger end users.
So my team is a globally focused team talking to the biggest customers, everywhere from Amazon, Baidu, and Alibaba on down to some of the more traditional companies like Goldman Sachs and Bank of America.

Yeah, Mark, maybe to start: one of the things I like about this show is that it's a great example of taking the learnings from the largest companies and pushing them down market. What are you seeing? What are some of the interesting conversations you're hearing from users today? What are they asking from Micron? What are some of the big disruptions?

Well, if you wind things back about 10 years, it really has changed dramatically. I started selling frame-based arrays in the 90s and on into the early 2000s. For some of these bigger companies, like Google, like Facebook, their model just broke if they were going to base their business on a frame-based array, not the least of which were the maintenance fees they would have to pay. So it's all about driving cost. Every time I meet with any one of the big eight hyperscalers on the planet, I'm blown away at just how focused they are on driving cost out of the data center while keeping the flexibility there. And while they were working on that, an interesting thing was happening on the semiconductor side of the world: NAND flash prices kept dropping and dropping, justifying the use of NAND in many, many more workloads. Those two factors converge, and you get these hyperscalers aggregating and buying an awful lot of NAND and DRAM to reduce latency, drive out cost, and increase flexibility.

Yeah, it's interesting. Eric, you were talking a little bit about the transition happening with DRAM and flash. I think back to the 90s: memory was memory and storage was storage, and now it's more of a continuum of price points and latencies and IOPS. If I design something today, I can have a very different mix of those components based on price than if I'd designed it just two years ago. So how does Micron help your customers, and how do you manage? There's so much change happening. How do we keep up with it?

Yeah, let me take that one to start with, and then we'll go to Mark. A couple of different ways, and Mark actually mentioned probably the key piece, which is the workload. We spend a lot of time not focusing on the technology but really understanding the dynamics of those applications. It can be a hyperscale workload or a more traditional database workload, but in our Austin facility we've set up a pretty big interoperability and partnership lab where we're running those workloads and testing them, getting a better feel and understanding for the dynamics of exactly what you said: what is the right recipe across the sets of workloads, and how do they drive it? And if you think about the amount of data coming out there, the speed and bandwidth of memory and flash technologies are really becoming almost a necessity for the application. Still, maybe 5% of storage today is flash, if you think about it per gigabyte.
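To put rough numbers on the continuum Stu describes, here is a minimal Python sketch of picking a media tier against a latency budget. The latency and cost figures are hedged, order-of-magnitude assumptions for the 2016 era, not Micron specifications, and `cheapest_tier` is a hypothetical helper invented for illustration.

```python
# Order-of-magnitude 2016-era figures, for illustration only.
# (name, ~latency in microseconds, ~cost in $/GB) -- assumptions, not specs.
MEDIA_TIERS = [
    ("DRAM",      0.1,      7.00),
    ("NVMe NAND", 100.0,    0.60),
    ("SATA NAND", 500.0,    0.30),
    ("HDD",       10_000.0, 0.03),
]

def cheapest_tier(latency_budget_us: float) -> str:
    """Return the cheapest media tier that still meets the latency budget."""
    candidates = [t for t in MEDIA_TIERS if t[1] <= latency_budget_us]
    if not candidates:
        raise ValueError("no tier meets this latency budget")
    return min(candidates, key=lambda t: t[2])[0]

print(cheapest_tier(50_000))  # batch analytics tolerates ms  -> "HDD"
print(cheapest_tier(1_000))   # latency-sensitive database    -> "SATA NAND"
print(cheapest_tier(1))       # in-memory workload            -> "DRAM"
```

The point of the sketch is Stu's: the right mix falls out of the workload's latency budget and the price curve, and as NAND's cost per gigabyte keeps dropping, the answer to the same question keeps shifting.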
So it hasn't actually penetrated that deeply into the workloads across the data center. But as we move forward with artificial intelligence, whether it's autonomous cars, machine learning, or even the most cutting-edge in-memory big data applications, with the speed at which the data is flowing in and how quickly you have to update it and feed those in-memory applications to get real-time answers, it's really all about memory and how it comes together.

Yeah, just to keep going on that workload topic, it's funny: my sales teams don't ever sell the speeds-and-feeds aspects of NAND and DRAM. Could it get any more boring than that? My guys are focused on solving the bigger workload problem. We know that if we do that right, then ultimately the right answer will come out and they'll see what the smart way to go is, which is precisely why we're here at OCP, because this whole movement is helping drive exactly that: getting the data closer to the CPU, reducing latency, reducing cost, increasing flexibility. My guys are very, very focused on that. Another interesting thing: we did a pretty exhaustive study. We looked at all the servers going out the door in support of what we call performance-sensitive, or high-value, workloads. It is not uncommon for about 70 to 75% of the BOM cost of a server going out the door in support of a performance-sensitive workload to be made up of NAND and DRAM. So it begs the question: who's the rightful owner of that workload conversation, he who has 25% of the value in the server or he who has 75%? Micron likes the 75%, and honestly, when we start talking to the guys in the trenches trying to solve these workload problems, they like knowing that we can have those deeper technology conversations with them about the workload.

Well, that's a really good point. It's interesting: in the storage industry, we've talked for so long about just storage growth, but really it's new workloads, new demands. Facebook, when they started OCP, it was, ah, photos. They used to just use filers from some of the traditional enterprise vendors, and remember, the scalability wasn't there, the performance wasn't there, so they had to build a new architecture. Now, photos were kind of tough, but video, oh my God, streaming high-def video. Zuckerberg's going to do a broadcast and a million people want to watch it. Heck, I'm surprised it was only a million they said, since they've got a billion users. So we're talking orders of magnitude more for some of these guys. What do you see putting those pressures on the data center, and what new platforms and technologies follow?

Yeah, absolutely. We've talked a lot about them, but the key one I see, maybe the macro trend if you will, and it doesn't matter if it's Facebook or a hyperscaler or even advertising companies, is this move from "I used to do everything in batch" to "now I do it in real time." Think about the pressures of real-time video delivery, capturing it, producing it, and pushing it out: the things you're doing here on theCUBE, amazing technology that can produce this and get it out to the masses immediately.
But then you take that a step further and say, okay, I actually have to drive a car, steer it around the streets, and react to different traffic conditions. I've got machine-to-machine communication that has to happen. The speed and flow of that data matter: even if the data is only relevant for, say, a minute and then it's garbage to you, you had to transact it, crunch it, and come up with the right answer almost in real time. And it's that real-time-ness that we see as probably the most exciting part of what Open Compute is all about and where it comes together.

Yeah, when we deal with the large hyperscalers, it's very easy to fall into the trap of having what I feel is a meaningless conversation around cost per gig or cost per IOP, or IOPS per BTU, God forbid. And it happens. But we're working really well with three-fourths of the big super eight, getting into those deeper conversations 26, 30, 40 months down the road, where they're trying to solve some of these bigger problems. And when we have those deep-dive engineering conversations, I'm blown away by this whole idea of the Internet of Things, by how we haven't even begun to scratch the surface on big data, and by the need: every single one of those applications has to have instantaneous access. And I'm sorry, but spinning media just doesn't cut it. It's just not going to get there. One of the funny things I was thinking about the other day: if you opened up a server, or any compute device really, and looked at it, I think there's only one thing in it that would be recognizable from a machine 20 years ago, or even 15 years ago. And it's this one little platter that spins around. I just keep questioning it: the processor doesn't look the same, the motherboards are different, the memory, everything is so dramatically different, yet we're still spinning media inside them. It's kind of baffling. So we take it as a big challenge to get the cost right, get the technology right, and move us forward.

Yeah, in the conversations we've had with most users these days, it's not "why flash," it's "where, how much, how fast can we adopt?" And we're definitely seeing that. At this conference, not only is there the discussion of open source, but there are all the partnerships happening, which you both speak to. The storage industry isn't really well known for open source, and partnerships kind of come and go. Talk about that aspect of what's happening.

Sure. Nobody can do it alone in the enterprise. Both Mark and I come from pretty varied backgrounds, working at some big companies that were pretty big in storage, as you have yourself. And even those big companies had to partner up. So as we look at this, in fact coming up through April, we're making some pretty exciting announcements about how we're partnering to grow a bigger ecosystem. We're actually huge proponents of the OCP model, which is that through collaboration we'll move these things a lot quicker. Some of what we're really focusing on today is that if you think of flash, we treat flash like spinning media. And that's not necessarily optimal, because it isn't spinning media. We do that for interoperability and ease of use and deployment.
So we're doing a lot with our software brethren to say, all right, how do we strip out some of those layers of abstraction that we added for convenience's sake, but that really lower the amount of benefit you get out of using memory technologies in general? I kind of think of it like anybody's laptop: we can all look at the specs and say an SSD is 100 times faster than a hard drive. But your laptop isn't 100 times faster when you put an SSD in it. There are a lot of layers of stuff in between, and it's the same thing in the data center. So we're working really hard with a lot of the key open source partners, as well as industry-standard commercial software players, to see if we can't break some of those barriers down.

Yeah. Now, if you look at everything we make, we make a lot of stuff, but the main two things are NAND and DRAM, so non-volatile and volatile memory. Everything we make requires a server. So first and foremost for me is making sure we're very close to our server brethren, and I don't care what form that comes in. We love them all, right? Because they need us and we need them. So from a partnering standpoint, we really do work hard with them to optimize architectures, especially going forward. And then beyond that, software-defined X, whether it's networking, storage, or data center: inevitably, if you're going in and not talking about speeds and feeds but actually talking about workloads, the software services layer is going to come up. Well, we don't do that. So we have many partnerships, VMware being one of them, where, as Eric mentioned, we've got some very interesting announcements coming on April 12th.

All right, so we're looking forward to the April event. Just for our audience: people probably know Micron because they might have opened up a box and seen your logo in there, and I see your stuff everywhere. When we think of Micron, what's the brand? What should we be thinking of Micron as, and how do you fit in the ecosystem?

We have a really long history of innovation through memories. Now, memories traditionally have been pretty commodity, so that's one way to think of us. But when you start moving forward into this Open Compute model, really it's the innovation we're doing with things like 3D XPoint, where we're now taking memories to the next level. Certainly what we're doing in flash and DRAM comes along with that too, but really it's going to be the innovation that becomes the cornerstone for, I think, the next generation of server technologies, and therefore application technologies. So we're definitely interested in being a greater part of the value chain, working more with partners as you described, but also taking that innovation direct to customers in a way that I think Micron hasn't done before, so there's maybe a greater recognition of what's inside and the value we're bringing.

I'm going to answer just a little differently. Being storage guys who came to Micron, when people refer to storage as memory, our skin sort of crawls. So I want to make sure we establish, from a branding perspective, that Micron is a storage company, a very large one actually. And we sell a lot, however you want to measure it, dollars or terabytes out the door.
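Eric's laptop example is essentially Amdahl's law: if the raw media is only one piece of the end-to-end time, a 100x faster device cannot make the whole system 100x faster. A minimal sketch, where the 30% and 90% fractions are illustrative assumptions rather than measurements from the interview:

```python
def amdahl_speedup(fraction_accelerated: float, component_speedup: float) -> float:
    """Overall speedup when only a fraction of total time gets faster."""
    return 1.0 / ((1.0 - fraction_accelerated)
                  + fraction_accelerated / component_speedup)

# Assume raw media access is 30% of end-to-end time, with the rest spent
# in the layers Eric mentions: file system, block layer, drivers, application.
print(round(amdahl_speedup(0.30, 100), 2))  # ~1.42x overall from 100x media

# Strip the layers away so media access dominates (90% of total time),
# and far more of the device's speedup shows through.
print(round(amdahl_speedup(0.90, 100), 2))  # ~9.17x overall
```

That is the quantitative case for the abstraction-stripping work Eric describes: the fewer layers sitting between the application and the media, the larger the fraction of time the faster media can actually accelerate.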
So we are absolutely a storage company, one that, as Eric mentioned, has some crazy interesting stuff about to hit in the next 12 to 18 months, and it's going to enable applications that people haven't even thought of yet. We're super excited about that. And again, to Eric's point about partnering, we really are getting deeper, not only with the big hyperscalers, but with companies engaged in that never-ending process of finding ways to optimize across what is a very complex equation of network, compute, and storage, and making it all work together perfectly. So for me, the brand recognition I hope to achieve: for 37 years we've been an OEM provider, and we're going to keep doing that. We're not competing with our OEMs; any demand we create, we pull through those OEM channels. But we also want to be seen as a company that's out meeting with end users and talking to them specifically about what their needs are, not just today, but 12, 18, 24 months out.

All right, well, Eric and Mark, really appreciate you joining us here at the Open Compute Project Summit. Exciting stuff going on, with storage, as Facebook said, kind of the big challenge to tackle, and a lot more to look forward to. We'll be right back with lots more coverage here from OCP 2016. This is theCUBE. Thanks.