Live from San Francisco, it's theCUBE. Covering Micron Insight 2019, brought to you by Micron. Welcome back to San Francisco, everybody. We're here at Pier 27, the sun is setting behind the buildings in San Francisco. You're watching theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host, David Floyer. We've been here all day, covering Micron Insight 2019. Tom Eby is here as the Senior Vice President and General Manager of the Compute and Networking business unit at Micron. Tom, great to see you again. Great to see you. So you've got Compute and Networking, two of the big three in your business unit. There you go. But we're going to talk about 3D XPoint today, so anyway, you know. Absolutely. We're kind of bringing you outside your swim lane, or maybe not, but tell us about your BU. What's new, what's the update? Yeah, so we sell primarily memory today, DRAM, although in the future we see 3D XPoint as a great opportunity into the data center, both traditional servers and the cloud players, PCs, graphics, and networking. Yeah, so you've got some hard news today. Why don't we dig into that a little bit? We really haven't covered much of it, but why don't you take it? Okay, absolutely. Yeah, so a couple things of interest. Probably most directly, we announced our first 3D XPoint storage device. It's the highest-performance SSD in the world, and compared to other 3D XPoint-based solutions on the market, it offers anywhere from three and a half to five times the performance on a range of both sequential and random reads and writes: two and a half million IOPS, and read and write bandwidth north of nine gigabytes a second. And also- So super fast. Super fast. Super fast, and similarly very positive comparisons up against NAND SSDs. Okay. And so we're excited about that. So where's the fit? What are the use cases? Who are you targeting? Sure. 
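As a quick sanity check on those figures, the quoted IOPS and bandwidth are roughly consistent if you assume 4 KiB transfers (a common benchmark block size; the actual transfer size isn't stated in the interview). A back-of-the-envelope sketch:

```python
# Back-of-the-envelope check: do 2.5M IOPS and ~9 GB/s line up?
IOPS = 2_500_000          # quoted random IOPS
BLOCK_BYTES = 4 * 1024    # assumed 4 KiB per IO (not stated in the interview)

throughput_gb_s = IOPS * BLOCK_BYTES / 1e9
print(f"{throughput_gb_s:.1f} GB/s at 4 KiB per IO")   # ~10.2 GB/s, same ballpark

# Amortized time budget per IO at that rate (not per-command latency):
latency_us = 1e6 / IOPS
print(f"{latency_us:.2f} us per IO amortized")          # 0.40 us
```

So the two headline numbers describe roughly the same device envelope rather than two independent claims.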
Yeah, so one way to think about it is that anytime you introduce a new layer into the memory and storage hierarchy (historically it was SRAM caches, and then it was SSDs going in between DRAM and rotating media, and now it's 3D XPoint sitting in between DRAM and NAND), the reason another layer is a benefit is that it's higher density and greater persistence than DRAM, and it's greater performance and greater endurance than NAND. And when you do that, you do nibble away at either side of that layer. So in this case, it nibbles away a little bit from DRAM and a little bit from NAND, but it grows the overall pie, and Micron is the only player in the industry that provides DRAM, 3D XPoint, and NAND. We think that's a great opportunity. Add some color to the economics, because it's more expensive than NAND, less expensive than DRAM, higher performance than traditional flash, but well under the performance of DRAM. Yeah, so again, the benefits, like I said: it offers greater density and greater persistence than DRAM, and that's the advantage there. And it offers much greater performance on things like bandwidth and IOPS, and much greater endurance, than NAND. And certainly our preliminary results are in applications like databases, in certain AI and machine learning workloads, and in workloads that benefit from low latency; I think financial services is one specific example. We think there's good value brought there. So a Columbo question, if I may. Yeah. So SAP would say, no, throw everything in memory in HANA. And of course, you're selling DRAM, so you'd say, okay, that's okay with us. So you mentioned databases. How should we think about this relative to in-memory databases? 
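The layering Eby describes is, in effect, another level of cache between DRAM and NAND. A minimal sketch of that read path, with tier names from the interview but purely illustrative latencies (these are not Micron figures):

```python
# Illustrative tiered read path: check the fastest tier first, fall through
# to slower ones. Latencies are rough orders of magnitude for illustration.
TIERS = [
    ("DRAM", 0.0001),      # ~100 ns
    ("3D XPoint", 0.01),   # ~10 us
    ("NAND SSD", 0.1),     # ~100 us
]

def read(key, tier_contents):
    """Return (tier_name, latency_ms) for the first tier holding `key`."""
    for name, latency_ms in TIERS:
        if key in tier_contents.get(name, set()):
            return name, latency_ms
    raise KeyError(key)

contents = {"DRAM": {"hot"}, "3D XPoint": {"warm"}, "NAND SSD": {"cold"}}
print(read("warm", contents))  # ('3D XPoint', 0.01)
```

The "nibbling" point falls out of this structure: data that was previously forced into expensive DRAM or slow NAND can now land in the middle tier instead.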
Sure, I mean, I think that if you can afford it, and of course it will be more expensive, we would love to provide our highest-density DRAM modules on the highest-end server platforms. And you mentioned HANA, a database into terabytes and terabytes of DRAM. That would be great. That is not free. If it were free, you'd do it. Right, exactly. And so if you have the need for that performance, that's what you will do. But we see an attractive range of workloads that cannot afford the cost of that very high-end solution. And so this affords something that gives good database performance benefits at a somewhat more economical price point. I know you want to jump in, go. Oh yeah, sure. If you compare yourself with Intel, which has obviously got the same raw technology: they have gone for consumer-type Optane SSDs, but they've put all their effort into combining it with a DIMM, an NVDIMM, and have combined that with the processor itself and made a combination which is very good for storage controllers. Yeah. So the question, and you've done very well in the SSD, much, much more than they have: are you looking to go into that NVDIMM space, given that you don't have the processors themselves to manage it? Yeah, I mean, to be clear, what we're offering today is a product that runs on standard PCIe NVMe, and while there may in the future be opportunities to further enhance performance with software optimization, it runs out of the box without any software optimization. But I do think there are opportunities both to use this technology in more of a storage type of configuration and, looking forward, there are also opportunities to use it in a memory configuration. What we're announcing today is our first storage product, and with regard to additional products, stay tuned. So if I think about the storage hierarchy, the classical pyramid, and focus on the persistent end of that spectrum: this is at the tip, right? 
Is that how we should think about this, or not necessarily? I mean, it is at the storage tip, but I think we tend to think a little more holistically: that triangle extends from DRAM traditionally to SSDs to rotating media, and we're now inserting a 3D XPoint-based layer in between. And so from that perspective, it is the tip of the storage triangle, but it does sit below DRAM in the overall hierarchy. And the reason for my question was sort of a loaded question, because if we eliminate the DRAM piece, now you've got that tip. So NAND benefits from the volume of consumer; thoughts on how you get volume with 3D XPoint? Sure. Again, at a lower performance point, you can get higher-density, more cost-effective storage solutions with NAND. And we certainly don't see NAND going away; we're quite bullish on that. Yeah, you like NAND. We announced both SATA and NVMe 96-layer TLC NAND-based products today, so that continues to be a major area of investment. But from a value and opportunity point of view, we see a better opportunity applying this technology, again, into this layer in the server or data center hierarchy, as opposed to what one might be able to do in the consumer space. And your OEMs are saying bring it on, right? I mean, they want this; we're talking about the server manufacturers, data center, cloud guys. Yeah, we're in limited sampling with select customers, so more to say about our go-to-market at a future date. But certainly we're bullish about the opportunity in the marketplace. So just asking a question about volume again. Sure. If you look at the marketplace, ARM has been incredibly successful and has driven a huge amount of memory and NAND volume for you. And that seems to be where the volume is growing much faster than on most other platforms. Are you looking to use this technology, 3D XPoint, in that environment? Even as memory, as in DRAM itself, at a much lower level? 
I'm just thinking of ways that you could increase volume. Sure. I mean, just to be clear, you're talking about what's driven overwhelmingly by the cell phone market, and obviously it's proliferating into IoT and automotive and lots of others, yes. Again, our view of the first and best opportunity is in the data center, which is still today an x86-dominated world. I would say, in terms of opportunities, like I said, for memory-based solutions in the data center and for how we apply this in other areas, stay tuned. Let's talk about this FWDNXT acquisition. It's really interesting to see Micron making moves in AI. Why the acquisition? Tell us more about it. Sure, yeah, so it's a small startup, a handful of people, although fairly experienced; as I believe Sanjay mentioned, they're on their fifth generation of their architecture. And so what we've acquired is both a hardware architecture that currently runs on FPGAs, along with the supporting software that supports all the common frameworks, the TensorFlows, the PyTorches, as well as the range of network architectures that are necessary to support, again, primarily the inference side; we see the best opportunities in edge inferencing. But in terms of what's behind the acquisition, first of all, there is an explosion of opportunity in machine learning. We see that in particular in edge inferencing. And we feel that in order for us to continue to optimize and develop the best solutions, both overall as a deep learning platform that includes memories, and also just memories that are best optimized, we need to understand the workloads, we need to understand the best solutions. And so that's why we made this acquisition. We integrated it with our team that has for some time developed FPGA-based add-in cards, and that's actually the basis of the technology for some of the dialogue that you saw, for example, with OHSU. 
When you talk about edge inferencing, we're envisioning this sort of massively scalable, distributed system that of course comprises the edge. You want to bring the compute to the data wherever the data lives; obviously you don't want to start moving data around. Now you're bringing AI to that data, which is data, AI, and cloud, all these superpowers coming together. So our premise is that the inferencing is going to be done at the edge, and much of the data, if not most of the data, is going to stay at the edge. And so this is what you're enabling through that integration. So it's a heterogeneous combination of technologies. Correct. I mean, to use the extreme example that we talked about on stage earlier, CERN has this massive amount of information that comes from, I think it's 40 million collisions a second or more, I may have my figures wrong. And you cannot possibly store, nor do you want to transmit, that data. And so you have to be applying AI to figure out what the good stuff is. And in the stream itself. Exactly, and that solution exists in a myriad of applications. The very simplistic one: you're not going to send the picture of who's at your front door to a core data center to figure out if it's somebody in your family. You're going to want to be doing that, maybe not in the camera, but certainly a lot closer, because the network simply can't handle the capacity. All right, we've got to go. But last word: what are the takeaways from today? What do you want our audience to remember from this event? Well, I think it's that we continue to build on our memory and storage base to move up the stack and add value, whether in storage subsystems like our NAND SSDs and 3D XPoint that go a little further up the stack, in gaining greater expertise in machine learning solutions, or in the example with Authenta of providing a broader solution, including key management, for how we secure the billions of devices that are going to be at the edge. 
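The CERN and front-door examples above boil down to one pattern: run inference where the data is produced and transmit only what passes the filter. A hypothetical sketch of that pattern (the scorer and threshold are stand-ins, not FWDNXT's actual API):

```python
# Edge-side filtering: score each event locally and forward only the keepers,
# so the raw stream never has to cross the network.
def is_interesting(event, threshold=0.9):
    # Stand-in for an inference model running at the edge device.
    return event["score"] >= threshold

def filter_stream(events):
    """Yield only the events worth transmitting upstream."""
    for event in events:
        if is_interesting(event):
            yield event

raw = [{"id": i, "score": s} for i, s in enumerate([0.2, 0.95, 0.5, 0.99])]
kept = list(filter_stream(raw))
print([e["id"] for e in kept])  # [1, 3]
```

The bandwidth argument in the interview is exactly this ratio: only two of the four events leave the edge, and at CERN-like event rates the discard fraction is far larger still.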
Touching all the bases, Tom. All right, congratulations on all the hard work, and it was great to see you again. Thanks for coming on. Likewise, Dave and Dave. And keep it right there; we'll be back to wrap Micron Insight 2019 right after this short break from San Francisco. You're watching theCUBE.