Hey, welcome back everybody. Jeff Frick here with theCUBE. We are in our Palo Alto studio. The conference season hasn't really kicked into full swing yet, so we can do a lot more kind of intimate stuff here in the studio for a CUBE Conversation. And we're really excited to have a many-time CUBE alum on and a new guest, both from Western Digital. So Dave Tang, Senior Vice President of Western Digital. Great to see you again, Dave. Great to be here, Jeff. Absolutely. And Martin Fink, he is the Chief Technology Officer at Western Digital, a longtime HP alum. I'm sure people recognize you from the great Machine keynotes we were just talking about. So great to finally meet you, Martin. Thank you, nice to be here. Absolutely. So you guys are here talking about, we've got an ongoing program actually with Western Digital, Data Makes Possible, right? With all the things that are going on in tech, at the end of the day there's data, it's got to be stored somewhere, and then of course there are processes and things going on. And we've been exploring media and entertainment, sports, health care, autonomous vehicles, you know, all the places that this continues to reach, and it's such a fun project because you guys are a rising-tide-lifts-all-boats kind of company, and we really enjoy watching this whole ecosystem grow. So we really want to thank you for that. But now there are some new things we want to talk about that you guys are doing in that same theme, and that's the support of RISC-V. So first of all, for people who have no idea, what is RISC-V? Let's jump into that, and then what is the announcement and why is it important. Sure, so RISC-V, you know, the tagline is that it's an open source instruction set architecture. So what does that mean? Just so people can kind of understand. So today the world is dominated by two instruction set architectures.
For the most part, what we'll call the desktop and enterprise world is dominated by the Intel instruction set architecture. That's what's in most PCs, what people talk about as x86. Right, right. And then the embedded and mobile space tends to be dominated by ARM, held by Arm Holdings. Both of those are great architectures, but they're also proprietary. They're owned by their respective companies. So RISC-V is essentially a third entry, we'll say, into this world. But the distinction is that it's completely open source. So everything about the instruction set is available to all, and anybody can implement it. We can all share the implementations. We can share the code that makes up that instruction set architecture. And very importantly for us, and part of our motivation, is the freedom to innovate. So we now have the ability to modify the instruction set, or change the implementation of the instruction set, to optimize it for our devices and our storage, our drives, et cetera. So is this the first kind of open source play in microprocessor architecture? No, there have been other attempts at this. OpenSPARC kind of comes to mind, and things like that. But the ability to get a community of individuals to rally around this in a meaningful way has really been a challenge. And so I'd say that right now RISC-V presents probably the best sort of clean slate, something new to take to the market. Right. So open source, obviously, we've seen take over the software world, first in the operating system, which everybody is familiar with through Linux, but then we see it time and time again in different applications, like Hadoop. I mean, there's just a proliferation of open source projects. The benefits are tremendous, and pretty easy to ascertain in a typical software case. How do you think that's going to be applied within the microprocessor world?
So it's a little bit different when we're talking about open source hardware, or open source chips and microprocessors, because you're dealing with a physical device. So even though you can open source all of the designs and the code associated with that device, you still have to fabricate it. You still have to create a physical design, and you still have to call up a fab and say, will you make this for me at these particular volumes. And so that's the difference from open source software, where you create the bits and then you distribute those bits through the internet and all is good. Here, you still have a physical need to fabricate something. Now, how much more flexibility do you get in the output when you can actually impact the architecture, as opposed to just creating a custom chip design on top of somebody else's architecture? Well, let me give you a really simple, concrete example of some of our motivation behind this, because that might help people internalize it. Think of a very typical surveillance application: you have a camera pointed into a room or a hallway. The reality is we're grabbing a ton of video frames, but very few of them change, right? The typical surveillance scene almost never changes, and you really only want to know when stuff changes. Well, today, in very simple terms, all of those frames get routed up to some big server somewhere, and that server spends a lot of time trying to figure out, okay, have I got a frame that changed? And so on, and then eventually it'll find maybe two or three or five frames that have got something interesting. So what we're trying to do is to say, okay, why don't we take that find-the-changes work and push it right down to the device? We basically store all those frames anyway.
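The change-detection idea Martin describes can be sketched at a toy level. This is a hypothetical illustration, not Western Digital's actual design: frames are flattened pixel lists, and the `threshold` value is an arbitrary assumption. Only frames that differ meaningfully from the previous one get forwarded.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def frames_worth_shipping(frames, threshold=10.0):
    """Yield only the frames that changed enough to be interesting.

    The first frame always ships (there is nothing to compare against);
    after that, a frame ships only if it differs from its predecessor
    by more than the (illustrative) threshold.
    """
    previous = None
    for frame in frames:
        if previous is None or mean_abs_diff(frame, previous) > threshold:
            yield frame
        previous = frame

# An empty hallway (identical frames) with one event in the middle:
static = [0] * 64    # dark, unchanging frame
event = [200] * 64   # something moved through the scene
shipped = list(frames_worth_shipping([static, static, event, static, static]))
```

Here five frames arrive but only three ship: the initial frame, the event, and the return to static. In the real scenario the filtering runs on the device's own core, so the server never sees the discarded frames at all.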
Why don't we go figure out all the frames that mean nothing, and only ship up to that big bad server the frames that have something interesting, something you want to go analyze and do some work on. So that's a very typical application that's quite meaningful, because we can do all of that work at the device. We can eliminate shipping a whole bunch of data to where it's just going to get discarded anyway, and we can allow the end customer to really focus on the data that matters and get some intelligence. And that's critical as we get more and more immersed in a data-centric world where we have real-time applications like Martin described, as well as large data-centric applications like, of course, big data analytics, but also training for AI systems, or machine learning. These workloads are going to become more and more diverse, and they're going to need more specialized architectures and more specialized processing. So big data is getting bigger and faster, and these real-time, fast-data applications are getting faster and bigger. So we need ways to contend with that that really go beyond what's available with general-purpose architectures. So that's a great point, because if we take this example of video frames, now if I can build a processor that is customized to only do that, that's the only thing it does, it can be very low-power, very efficient, and do that one thing very, very well. And the cost adder, if you want to call it that, to the device where we put it is a tiny fraction, but the cost savings of the overall solution are significant. So this ability to customize the instruction set to only do what you need it to do for that very special purpose, that's gold. Right, so I just want to, Dave, we've talked about a lot of interesting innovations that you guys have come up with over the years, like the helium launch, which was, I don't know, a couple, two, three years ago, and we were just at the MAMR event, really energy-assisted recording.
So those are really kind of foundational, within the storage and the media itself, in how you guys do better and take advantage of the evolving landscape. This is kind of a different play for Western Digital. This isn't a direct improvement in the way that storage media and architecture work, so I'm going to ask you, what is the Western Digital play here? Why is this an important space for you guys in your core storage business? Well, we're really broadening our focus to develop and innovate around technologies that help the world extract more value from data as a whole, right? So it's way beyond storage these days. We're looking for better ways to capture, preserve, access and transform the data. And unless you transform it, you can't really extract the value out of it. So as we see all these new applications for data and the vast possibilities for data, we really want to pave the path and help the industry innovate to bring all those applications to reality. Right, it's interesting too, because one of the great topics always in computing is you've got compute and store, and which has to go to which, right? Nobody wants to move a lot of data; that's hard, and it may or may not be easy to get compute there, especially in these IoT applications with remote devices, tough conditions, and limited power, which we mentioned a little bit before we went on. So the landscape of the need for compute, store and networking is changing radically from either the desktop or the consolidation we're seeing in the cloud. So what's interesting here is, where does the scale come from, right? At the end of the day, scale always wins. And that's where we've seen, historically, the general-purpose microprocessor architectures dominate what used to be a slew of special-purpose architectures. But now there's an opportunity to bring scale to this. So how does that scale game continue to evolve? So it's a great point that scale does matter. And we've seen that repeatedly.
And so a significant part of the reason why we decided to go early with a significant commitment was to tell the world that we were bringing scale to the equation. What we communicated to the marketplace is that we ship on the order of a billion processor cores a year. Most people don't realize that all of our devices, from USB sticks to hard drives, have processors on them. And so we said, hey, we're going to basically go all in and go big. That translates into a billion cores that we ship every year, and we're going to go on a program to essentially migrate all of those cores to RISC-V. It'll take a few years to get there, but we'll migrate all of those cores. So we were basically signaling to the market, hey, scale is now here. Scale is here. You can make the investments. You can go forward. You can make that commitment to RISC-V, because we've essentially got your back. So just to make sure we get that clear: you guys have announced that you're going to migrate over time the microprocessors that power your devices, to the tune of approximately a billion, with a B, cores per year, to this new architecture. That is correct. And has that started? So the design has started. We have started to design and develop our first two cores, but the actual manifestation in a device is probably in the early part of 2020. Okay, okay. But that's a pretty significant commitment. And again, the idea, as you explicitly said, is it's a signal to the ecosystem: this is worth your investment, because there is some scale here. That's right. Yeah, pretty exciting. And how do you think it's going to open up the ability for you to do new things with your devices that before you either couldn't do, or that were too expensive in dollars or power? So we're going to step and iterate through this. And one key point here is a lot of people tend to want to start in this processor world at the very high end, right?
I'm going to go take on a Xeon processor, or something like that. That's not what we're doing. We're basically saying we're going to go at the small end, the tiny end, where power matters, power matters a lot in our devices, and where we can achieve the optimum combination of power and performance. So even in our small devices, like a USB stick or a client SSD, if we can reduce power consumption and even just maintain performance, that's a huge win for our customers. Think about your laptop: if I reduce the power consumption of the SSD in there so that you have longer battery life and you can get through the day better, that's a huge win, right? And I don't impact performance in the process. That's a huge win. So what we're doing right now is developing the cores based on the RISC-V architecture. Then, once we've got that design complete, we want to take all of the typical client workloads and profile them on it. We want to find out, okay, where are the hotspots? What are the two or three things that are really consuming all the power, and how do we go optimize, by either creating two or three new instructions or by optimizing the micro-architecture for an existing instruction, and then iterate through that a few times so that we really get a big win, even at the very low end of the spectrum. And then we just iterate through that over time. So we're in a unique position, I think, in that the technologies we develop span everything from the actual media where the bits are stored, whether it's solid state flash or rotating magnetic disk and the recording heads. We take those technologies and build them all the way up into devices and platforms and full-fledged data center systems. And if we can optimize and tune all the way from that core media level up through the system level, we can deliver significantly higher value, we believe, to the marketplace.
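The profile-then-optimize loop Martin outlines (run the client workload, find the two or three hottest operations, then target those for new instructions) can be illustrated with a toy tally over an instruction trace. The trace and opcode names here are invented for illustration; real profiling would use hardware counters or an ISA simulator, not a Python list.

```python
from collections import Counter

# Hypothetical dynamic instruction trace from a client SSD workload.
# The opcode mix is made up: a CRC inner loop dominates, as checksum
# work often does in storage firmware.
trace = (["load", "xor", "crc_step", "crc_step", "store"] * 200
         + ["load", "add", "store"] * 50)

def hotspot_candidates(trace, top_n=3):
    """Return the top-N opcodes by dynamic frequency -- the first
    candidates to consider for a fused or custom instruction."""
    return Counter(trace).most_common(top_n)

candidates = hotspot_candidates(trace)
```

In this made-up trace, `crc_step` dominates, so it would be the first candidate for a custom instruction or a micro-architecture tweak; after optimizing, you re-profile and repeat, which is exactly the iterate-through-it loop described above.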
So this is the start of that. It enables us to customize command sets and optimize the flow of data, so that we can allow users to access it when and where they need it. So I think there's another actually really cool point, which goes back to the open source nature of this. And we try to be very clear about this: we're not going to develop our cores for all applications. We want the world to develop all sorts of different cores. And so for many applications, somebody else might come in and say, hey, we've got a really cool core. One of the companies we've partnered with and invested in, for example, is Esperanto. They've actually decided to go at the high end and do a machine learning accelerator. Hey, maybe we'll use that for some machine learning applications at the system level. So we don't have to do it all, but we've got a common architecture across the portfolio. And that speaks to the open source nature of the RISC-V architecture: we want the world to get going. We want our competitors to get on board, we want partners, we want software providers, we want everybody on board. Right, it's such a different ecosystem with open source, in the way the contributions are made, the way contributions are valued, and the way that people can go find niches that are underserved. It's this really interesting kind of bifurcation of the market, really. You don't really want to be in the big general-purpose middle anymore. That's not a great place to be. There are all kinds of specialty places where you can build competence, and with software and with, you know, thank goodness for Moore's law, the decreasing price of compute, and now the cloud, which is basically always available, it's a really exciting time to develop a myriad of different applications. Right, and you talked before about scale in terms of points of implementation that will drive adoption and drive this to critical mass.
But there's another aspect of scale, relative to the architecture within a single system, that's also important, and I think RISC-V helps to break down some barriers there, because general-purpose compute architectures assume a certain ratio of memory to storage to processing to interconnect bandwidth. And if you exceed those ratios, you have to add a whole new processor, even though you don't need the processing capability; you need it just for scale. So that's another great benefit of these new architectures: with the diversity of data needs, where some are going to be large data sets and some are going to be small data sets that need high bandwidth, you can customize and blend that recipe as you need to. You're not at the mercy of those fixed ratios. Yeah, and I think, you know, so much of what cloud computing is, and the atomic nature of it, is that you can apply the ratios and the amounts you need as you need them; you can change them on the fly, tune them up, tune them down. And I think the other interesting thing that you touched on is some of these new workloads, which are now relatively special-purpose but are going to be general-purpose very soon, in terms of machine learning and AI, and applying those to different places, applying them closer to the problem. It's a very, very interesting evolution of the landscape. But what I want to do is kind of close with you, Martin, especially because, again, kind of back to the Machine, not the Machine specifically, but you have been in the business of looking way down the road for a long time. So, I looked at your LinkedIn, you retired for three months. Congratulations on that, I hope you got some golf in. But you came back, to Western Digital. So, you know, why did you come back? And as you look down the road a ways, what do you see that excites you, that got you off that three-month little tour on the golf course? And I'm sorry I had to tease you about that. But what do you see?
What are you excited about that made you come back and get involved in an open source microprocessor project? So the short answer is that I saw the opportunity at Western Digital to be where data lives. I had spent my entire career at, we'll call it, the compute or the server side of things. And the interesting thing is I had a very close relationship with SanDisk, which was acquired by Western Digital, and so I had, we'll call it, an insider view of what was possible there. And so what triggered me was essentially what we're talking about here: given that about half the world's data lands on Western Digital devices, taking that real position of strength in the marketplace and saying, what can we go do to make data more intelligent, rather than starting at the server end? I saw that potential there, and it was just incredible. So that's what made me want to join. So, Dave, good get. Yeah, we're delighted to have Martin with us. All right, well, we look forward to watching this evolve. We've got another whole set of events we're going to do together with Western Digital that we're excited about, again covering Data Makes Possible, but kind of uplifting into the application space, and a lot of the cool things that people are doing in innovation. So Martin, great to finally meet you, and thanks for stopping by. Thanks for the time. Dave, as always, and I think we'll see you in a month or so. Always a pleasure. He's Martin Fink, he's Dave Tang, I'm Jeff Frick. You're watching theCUBE. Thanks for watching, we'll catch you next time.