Live from the Palace Hotel in San Francisco, it's theCUBE at the HGST Press and Industry Analyst Briefing, brought to you by headline sponsor HGST. Here are your hosts, Stu Miniman and Jeff Frick.

Welcome back, everyone. I'm Jeff Frick. You're watching theCUBE. We're at the Sheraton Palace in downtown San Francisco at the HGST Press and Industry Analyst Event. A lot of exciting news going on, and we're sitting down, as we like to do, with the smartest people in the room, asking them the questions you wish you could have asked and getting all the right insight. Joining me for this next segment is my co-host.

Hi, I'm Stu Miniman with Wikibon.org, and joining us for this segment is Currie Munce, who's the Vice President of SSD. Currie, thank you so much for joining us.

Thank you.

All right, so you're the Vice President of SSD, but in this segment we're also going to talk about shingled magnetic recording, or SMR, which of course is a hard disk drive technology. Can you give our audience a little bit of insight? What is SMR?

Sure. With SMR, our challenge is that we continue to increase the density of data on the surface of the disk; that's how we've increased capacity, and that's how the industry has grown over 55 years. We've reached a challenge in how small we can make those bits, and we're having some challenges in how we write bits that small. Shingled magnetic recording is a change in the way we format the information on the disk. It impacts the way we write the information, but it allows us to get a higher density, so we can take the same magnetics, the same mechanics, and turn an eight-terabyte drive into 10 terabytes. That's really exciting: higher capacity, lower dollar per gigabyte, and it's all enabled by that change in how we format the information on the surface of the disk.

Will that propagate throughout your whole disk line?

Yeah. The issue is that shingled magnetic recording affects the way we write the information, and that has some implications for performance in certain applications. So we're starting out with what we call host managed, where the host writes the information down in a sequential fashion, and we'll evolve that by putting more intelligence on the drive, becoming more self-managed and hiding that complexity. But this is a foundational technology. We're starting out with a 10-terabyte product aimed at cold or cooling storage; we'll move into our mobile products and eventually propagate it across the full product line alongside other technologies.
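To make that host-managed constraint concrete, here is a minimal toy sketch of the rule an SMR zone imposes: writes must land sequentially at the zone's write pointer, and updating data in place means resetting and rewriting the whole zone. The class and method names are invented for illustration; this is not a real ZBC/ZAC driver.

```python
# Toy model of a host-managed SMR zone: each zone keeps a write
# pointer, writes must land exactly at that pointer, and rewriting
# data means resetting (reclaiming) the whole zone.

class SMRZone:
    def __init__(self, zone_id, capacity_blocks):
        self.zone_id = zone_id
        self.capacity = capacity_blocks
        self.write_pointer = 0          # next block that may be written
        self.blocks = [None] * capacity_blocks

    def write(self, lba_in_zone, data):
        # Host-managed drives reject writes anywhere but the pointer.
        if lba_in_zone != self.write_pointer:
            raise IOError(
                f"zone {self.zone_id}: write at {lba_in_zone} != "
                f"write pointer {self.write_pointer} (must be sequential)")
        self.blocks[lba_in_zone] = data
        self.write_pointer += 1

    def reset(self):
        # In-place overwrite isn't allowed; the host must reclaim the
        # whole zone and rewrite any live data elsewhere first.
        self.write_pointer = 0
        self.blocks = [None] * self.capacity

zone = SMRZone(zone_id=0, capacity_blocks=4)
zone.write(0, b"a")
zone.write(1, b"b")
try:
    zone.write(0, b"a2")                 # in-place update: rejected
except IOError as e:
    print(e)
zone.reset()                             # reclaim, then rewrite sequentially
zone.write(0, b"a2")
```

This is why the first products are host managed: the software above the drive must absorb the sequential-write rule, and the "self-managed" evolution Munce mentions would hide that bookkeeping inside the drive.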
All right, so the other technology that got discussed here is HAMR. Could you explain what that is, and how SMR and HAMR fit together on a roadmap?

Sure. Again, shingled magnetic recording changes the way we write down the tracks and lets us pack narrow tracks closer together, but we also need to keep pushing the density even higher. I said that SMR takes us from eight to 10 terabytes; we need to go beyond 10. We need to keep going. And so the thing we're looking at to go beyond that is technologies like heat-assisted magnetic recording, or HAMR. HAMR changes the way we write the information in that we now use heat. We can use a disk that is magnetically very stiff, very difficult to write, apply a very tiny focused spot of heat that causes the magnetics to get softer, flip the bit, and cool it down in nanoseconds. It becomes hard again and stays very stable. We can still do the shingled tracks, which lets us put them close together, but we combine that with heat in the writing process to continue taking us beyond 10-terabyte hard disk drives.

And then you add helium on top of that.

Then you throw in some helium, because that's going to help you with a lot of the mechanics and the power dissipation. When you spin those disks, some of them are spinning close to 100 miles an hour, and they're spaced less than a millimeter away from a stationary shroud. So those disks are spinning very fast, they're causing a lot of shear, and they're dissipating a lot of power. Because helium is a much lighter molecule, the drive dissipates a lot less power, and now when I put it in racks, I consume a lot less power. In a data center, one of the biggest limitations is not just how much capacity I can get but how much power I have to put in. So helium is a great benefit in terms of reducing power and reducing vibration, and it allows us to continue to go to higher densities as well.

So, Currie, it almost sounds like magic tricks to keep making these jumps. How far can we push this? Is there a wall down the line that we're going to hit?

Well, the hard disk drive has been around for over 55 years, and there's always been a prediction that we're reaching a limit. Every time we get close to a limit, we find a way to engineer ourselves around it. So my comment is: anybody who thinks there's a limit, it's time for them to retire. What happens is the rate of progress changes. As things get more difficult, they don't stop; they just slow down, and we throw more innovation, more things, at the problem. So we see technologies including HAMR and, beyond HAMR, combining that with patterning individual bits on the disk, what we call bit-patterned media. We see SMR building on top of heat-assisted recording. We see two-dimensional magnetic recording coming in, helping us on the reading process just as SMR helps us on the writing process. And then we see patterning the disk with a fairly inexpensive nano-imprint technology to get bit-patterned media as an extension. So we think that when we combine heat-assisted recording, bit-patterned media, two-dimensional magnetic recording, and SMR, we can take capacities out to, say, 10 times where they are today.
All right, so you mentioned the pace of innovation. Let's shift gears a little and talk about SSDs, your primary focus area. The capacity gap between SSDs and HDDs has been shrinking. You've got 6, 8, and 10 terabytes on HDDs, but you've now got multiple terabytes of flash, whereas it used to be almost an order of magnitude difference. Where is the innovation happening, and how fast is flash going to close that gap?

Well, I think the key message about SSD and HDD is that they're really complementary. We build out tiers of storage. Traditionally we had SRAM, which was very high performance and very expensive, then we had DRAM, then we had HDD, and then we had tape, which was the cheapest but lowest performance. What's happened is that HDD has gotten cheaper and really encroached upon tape, made life difficult for tape, and a lot of the new data centers don't put tape in anymore. But the problem is that because the HDD is a mechanical device, it didn't scale with semiconductor speeds. So we opened a gap in performance, and that's where flash and SSDs come in. And it's still blossoming in that space: how do I take advantage of something much faster when my software, my architectures, my backplanes were all built for the speed of HDD? So SSD is coming into that space, bringing performance. It's more expensive in dollars per gigabyte than HDD, but it's much higher performance, and applications can run at lower power. So we're going to see a tiering: where I need the performance close to the processor, with a lot of analytics going on, I want that in my flash. Where I have the longer tail, the data I'm not changing as much and don't need to access as often, I want to move that down to HDD. I want to balance those to get the optimum combination of performance and cost-effective storage. And while we're seeing flash get cheaper, HDD still remains substantially cheaper than flash, except for the very highest-performance HDDs, which are slowly being replaced by SSDs over time. So it's the evolution of that storage stack and those architectures that's changing, and we're only halfway at most through that change in architecture and software to really support SSDs.
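The tiering Munce describes, with performance-hungry, frequently accessed data on flash and the long tail on HDD, can be sketched as a simple promotion policy. This is an illustrative toy, not an HGST product behavior; the class, threshold, and tier names are invented.

```python
# Toy illustration of hot/cold tiering: new objects land on the
# cheap HDD tier and get promoted to flash once they prove hot.

class TieredStore:
    HOT_THRESHOLD = 3        # accesses before promotion (arbitrary)

    def __init__(self):
        self.flash = {}      # hot tier: low latency, higher $/GB
        self.hdd = {}        # cold tier: high capacity, lower $/GB
        self.hits = {}

    def put(self, key, value):
        self.hdd[key] = value              # new data starts cold
        self.hits[key] = 0

    def get(self, key):
        self.hits[key] += 1
        if key in self.flash:
            return self.flash[key]
        value = self.hdd[key]
        if self.hits[key] >= self.HOT_THRESHOLD:
            self.flash[key] = self.hdd.pop(key)   # promote hot data
        return value

store = TieredStore()
store.put("log-2014-09", b"...")
for _ in range(3):
    store.get("log-2014-09")               # third access promotes it
print("on flash:", "log-2014-09" in store.flash)
```

Real tiering engines add demotion, aging, and capacity pressure, but the cost/performance trade-off being balanced is the same one described above.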
And then beyond that, we'll see new solid-state memories. The world is always an evolution: as soon as flash thinks it's entrenched, there's going to be a new solid-state technology that's faster, coming in and challenging it as well.

Currie, if I can, I want to poke at that: the next generation of non-volatile memory technology. One of the things that really accelerated flash adoption was its use in consumer devices. There was a big Apple announcement today; if it weren't for Apple putting flash in laptops and tablets and phones, I'm not sure we would have seen the enterprise adoption we did. Looking at where some of the next-generation solutions are, is there some place they can get that foothold and be that disruptive technology? Because we don't see it today. Everything is going to the cloud, you're building HDD and SSD solutions for the cloud, and those are entrenched. So how does a next-generation technology gain a foothold and potentially displace what we have today?

Yeah, that's a very good question. I think we need to understand that these new technologies take a long time to come in and a long time to gain a foothold. Flash memory has been around for 30 years, right? SSDs have been around since 1991. You have to have that opportunity, you have to have that leverage, and indeed only a small amount of total NAND is actually consumed in SSDs; mobile handsets, tablets, PCs, flash drives, and other things really enable all the R&D there. The new technologies probably are not going to come in cheaper than flash. I think they're going to come in on performance: they're going to try to fit into a niche up there and compete more against DRAM. Can I get a non-volatile memory as cheap as, or cheaper than, DRAM? Think about what happens when you shut down your tablet: when the power drains down, you have to plug it back in. Why is that? Because I'm refreshing the DRAM in there. So what if I had something fast enough to replace the DRAM, maybe cheaper than the DRAM, and, by the way, when I powered it off, it didn't consume any of the battery? I think it's going to be those things. It's also going to be about application acceleration again: how do I get even lower-latency memory even closer to the processor? With hard drives, we talk about latencies measured in milliseconds. With flash, latencies are measured in hundreds of microseconds. There are new technologies out there where latency is measured in single-digit microseconds. What will that do to speed up in-memory compute and new data analytics applications? So I think it's going to be in in-memory compute, and in mobile devices for longer battery life. If it can gain that traction and foothold, then perhaps when flash slows down, it can play a bigger role in the storage segment.
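Those latency tiers translate directly into how many strictly serial operations a device can complete per second. A back-of-the-envelope sketch, using the round numbers from the conversation as assumed representative latencies:

```python
# Serial operations per second implied by each latency tier quoted
# in the discussion. The latencies are illustrative round numbers,
# not measurements of any particular device.

latencies = {
    "HDD (milliseconds)":          5e-3,    # ~5 ms seek + rotation
    "NAND flash (100s of us)":     100e-6,  # ~100 microseconds
    "new NVM (single-digit us)":   5e-6,    # ~5 microseconds
}

for tier, seconds in latencies.items():
    print(f"{tier:28s} -> ~{1 / seconds:>9,.0f} serial ops/sec")
```

The three orders of magnitude between HDD and the emerging memories are the headroom in-memory compute and analytics would exploit.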
The other piece: looking at your background on LinkedIn, you're the longest-tenured guy here at HGST. Talk about the evolution of software in what was traditionally a hardware company, and how that's grown in importance and in the strategic value you're delivering to your customers.

Well, we clearly see a change in the customer and in the data center, and therefore it's more about how I integrate solutions and provide higher levels of integration; you need software to do that. At the same time, we've traditionally been through a period where the areal density growth rate on disk grew faster than the demand for storage. We're now at a point where the areal density growth rate has slowed down and big data has sped demand up.

So you were adding density and capacity faster than demand was growing.

Right. In the late 90s we were growing areal density 100%, doubling every year. In other periods of the 90s we were growing 60%. From 2003 to 2008 we were probably growing at a 45% rate. So we were keeping up with demand, or sometimes even exceeding it. As areal density growth has slowed down and big data analytics has grown, we're getting to the point where we're not. There are analysts who would say that 2005 was the first year when more data was created than there was storage available to store it; up to then, we shipped more storage than there was data being created. So we've got that dichotomy happening. That's driving us to ask: how do I make up the gap? This is where we talk about device affinity: how do I use not just the raw areal density, but knowledge of the features, the performance aspects, how these devices are put together, to enable larger capacities and easier integration, to make trade-offs, to keep standard interfaces while still innovating more at the device level? How can I put more of the design aspects of the hardware behind a more intelligent software interface? How do I get a higher level of aggregation and integration so people can integrate the storage platform more easily? That's where I think a lot of this is going, and that's where a lot of our focus has been: on acquisitions, on some of the product announcements today, and really on the direction of what we're trying to do differently as a company. Move up the value stack to higher levels of aggregation, add more value in the system, and deal with the fact that the underlying hardware is not progressing as fast as the storage demand out there. How do I close some of that gap by putting more intelligence on top of the devices I'm making?

Yeah, what a changed world, huh? So, Currie, talking about those increased demands on all of the equipment, can you talk a little bit about how HGST looks at research? How much do you work with your customers during the R&D process? Do you have partnerships with academia? Help us understand that.

So, I go back to the IBM Research days; I spent a lot of time in research. Investment in research, the value of research, the value of innovation, has really been one of our fundamental commitments. We've also believed that collaborating with universities is key, both to get access to some of the brightest minds and to get access to the brightest students as they graduate and become employees. And we believe in industry collaboration, such as the Advanced Storage Technology Consortium: working with the other companies to help set standards, drive technology, drive investments, and create standard roadmaps so the supply base and the customer base understand where we're going and we get commonality. This industry is about innovation, so you have to invest in research, you have to invest in universities, and you have to invest in some level of industry collaboration to be successful going forward.

Yeah, and clearly you've got some other interesting partnerships that were talked about today, with Intel and some of the other firms.

I'm excited. With Intel, we started the partnership in 2008, really marrying their best-of-breed understanding of NAND technology and capabilities with our understanding of the SAS market, the storage market, the customer requirements, and the software integration: how you need to build that SAS stack, how you have to qualify that product, how to integrate it and how to support it. That's been a great combination of the strengths of two strong partners collaborating, and that's been exciting. I've talked about partnerships on software and how I integrate more, and it's going to be more and more about the ecosystem. We talked about the new data centers being built: it's about how I make all the pieces work together. You've got to collaborate, you've got to build an ecosystem, because this is not "I'm going to make one solution and you're all going to come to me." It's "how do I make my solution work and interoperate with everybody else's stuff?"

So, Currie, one of the challenges you have is that it's like a train. You might have the technology ready, but there's something from the device driver or the supply chain that has to happen, and you can only go as fast as the slowest car. How does the ecosystem help pull that train forward a little faster?

Well, that's a little bit about those higher levels of aggregation as well. We talked about the active archive, and what you're seeing there is that I'm putting intelligence and software in front of a bunch of hard drives. Why do I do that? It means I don't have to get all of that standardized, because now I can do it uniquely behind my software. I can innovate at the device level without having to get it standardized, and make the outside a standardized object store interface. While I would like to do more standard things, standardization is just a slow process. If I create higher-level virtualizations or abstraction layers, I can do more innovation below that layer without having to do as much standardization. That's also, I think, some of what you saw in the announcements today: creating those abstraction layers, creating higher levels of aggregation, which gives me the device affinity to do a little more innovation below the standardization level and create more value.
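The "innovate below a standard interface" pattern can be sketched as a thin object-store abstraction. This is an invented illustration of the idea, not HGST's Active Archive design; the interface and backend names are hypothetical.

```python
# Sketch of device affinity behind an abstraction layer: expose one
# standard object interface, innovate freely in the device-specific
# backend below it.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The standardized interface customers integrate against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class SMRArchiveBackend(ObjectStore):
    """Device-specific logic (e.g., an SMR-friendly sequential
    layout) stays hidden below the standard interface."""
    def __init__(self):
        self._log = []                 # append-only: SMR-friendly
        self._index = {}
    def put(self, key, data):
        self._index[key] = len(self._log)
        self._log.append(data)         # always written sequentially
    def get(self, key):
        return self._log[self._index[key]]

store: ObjectStore = SMRArchiveBackend()
store.put("photo-001", b"\x89PNG...")
print(store.get("photo-001")[:4])
```

The backend can change its on-disk behavior without waiting for a standards body, because only ObjectStore is the contract.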
All right, well, speaking of standardization, NVMe is one that caught our eye. Where do you see its adoption helping the industry?

Well, I think it's going to be fantastic for SSD adoption. People are looking for that level of standardization. It's going to provide more ability to scale out, it's going to provide interoperability and multi-vendor integration, and it's going to enable earlier adoption and faster scale-up. It's going to help the customer in the end, because they're not going to have to worry about what kind of driver they loaded on what kind of OS, compiled at what point in time. So it should be a great benefit to the industry.

Yeah, can you actually give us a little insight: how many devices are out there that are going to be NVMe compliant right now? I know over at Intel Developer Forum they've got an area where you can go touch and feel a bunch of hardware. Give us a little bit of what to expect.

I generally expect that very soon all SSDs going on the PCIe bus will be NVMe compliant. I think that's the key message. It's going to become more and more difficult to have a, quote, proprietary interface. All the major SSD folks are moving in that direction, and all will have products announced. We obviously announced our first NVMe-compliant SSD, coming out and available in the first quarter of next year. But we expect that all the SSD manufacturers in the PCIe space have to be there; that's really the clear direction. There are plugfests where everyone gets together to test and make sure they're compliant, and we see all our major competitors showing up at those plugfests.
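One practical consequence of that standardization: on a Linux host with the in-box nvme driver, every compliant controller appears under the same sysfs tree, so a few lines can enumerate them with no vendor-specific tooling. A minimal sketch, assuming a Linux system (it prints a fallback message elsewhere):

```python
# List NVMe controllers via Linux sysfs. Works because the standard
# nvme driver exposes every compliant device the same way under
# /sys/class/nvme, regardless of vendor.

import os

SYSFS_NVME = "/sys/class/nvme"

def read_attr(ctrl, name):
    try:
        with open(os.path.join(SYSFS_NVME, ctrl, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

if os.path.isdir(SYSFS_NVME):
    for ctrl in sorted(os.listdir(SYSFS_NVME)):    # nvme0, nvme1, ...
        print(ctrl, read_attr(ctrl, "model"), read_attr(ctrl, "serial"))
else:
    print("no NVMe controllers found (or not a Linux host)")
```

That "one driver, any vendor" property is exactly the interoperability benefit described above.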
So with the benefit of hindsight and perspective, how have you enjoyed this crazy explosion of data, not only on the consumer side and in hyperscale, but now moving into a whole other phase with the Internet of Things and all these connected devices? You've been around for a while; you just said you delivered more storage capacity than data was being created up until 2005, which is absolutely amazing sitting here in 2014. Give us a little of your insight and feeling for what's going on.

Well, there have been times of great excitement and innovation, but in my career it's hard to look back and find a time that's been more dynamic than right now. More dynamic in that we have SSDs and solid state, and we have rotating magnetic recording. We have new data centers and traditional OEM customers. We have big data analytics taking off. Data and storage are becoming the lifeblood of business, becoming more important to the business. It's less about the operations and more about how I create value from that data. So storage is moving to the forefront in how people build data centers. There are so many moving parts, we're all getting interconnected, and it's all about digital data. I've never seen a time when there was more opportunity, more innovation, and more change, and I just love the excitement of the change. It's an opportunity to do new things, and it makes me very excited.

That's great. All right, super. Thanks for joining us. A lot of excitement, and they're going to keep coming up with new and creative things like HAMR to extract more value out of the spinning disk as well as the flash. So it's been terrific here. Thanks for watching. I'm Jeff Frick, along with Stu Miniman. You're watching theCUBE. We're in downtown San Francisco at the Sheraton Palace at the HGST Press and Industry Analyst Event. Thanks for watching, and tune in next time. We'll see you later.