and we're here with Partha Ranganathan. I know I'm going to do this, Rannagatan, right? Ranganathan. Ranganathan. Okay, welcome to theCUBE. Yeah, nice to be here. So this was your baby, this product, Project Moonshot, which is the server technology, HP's hyperscale server technology, your baby. How did this all come about? Yeah, I think we're very excited about this. There's a lot of work that went into this. I don't think I can take full credit for it, but it was definitely something that HP Labs had a lot to do with. So we've had almost a decade of work that we've done on this so far. We've been looking at energy-efficient computing and energy-efficient servers, and about five years back, we looked at what we could do that's very similar to what HP did with blades. And if you remember, when we came in, it was all pizza boxes, and then we introduced this new category of blade servers, and now we are the leading vendor in that market. And so when we were looking at our research, we said, well, we have all these mobile processors that have been designed from the ground up for energy efficiency, can we apply them to the server market? And that's really where the genesis of this idea came from. We wrote a very well-received paper about it, and then as we started learning more, we realized there were a lot of hard problems to be solved. And really, I think the simple message that I usually give here is that it needed a lot of out-of-the-box thinking. And what I mean by that is very literal: we had to think not just about an individual server, but about what happens when you have thousands of these servers, right? And then we had to think about the entire infrastructure, the shared power, the shared cooling, the fabric, the storage, the management, how do you bring all of these things up, and so on. And so those are the kinds of things that we developed at HP Labs, which you see in the Moonshot announcement today.
In addition, we also developed a lot of tools, and one of the things we realized again was it's not just about the hardware, it's about the partner ecosystem, it's about the software, it's about understanding when energy efficiency works in the hyperscale market, and when you want to go back to your traditional ProLiant stuff as well. And so we spent some time on that, and that's again part of the announcement today. So if you look at the HP Discovery Lab and the HP Pathfinder program, those are elements which capture our insights, in terms of being able to say, let's bring all the technologies and tools we have developed at Labs to the broader community, to help them make the same kinds of trade-offs we've been making. So the hyperscale market, obviously we know some names like Facebook, these are people who are building their own web apps, if you will, and that's one of the use cases in the announcement. They want to own their own stack, so there's this notion of purpose-built that they want, and they buy a ton of servers, so there's a lot of density, both in the physical product as well as on the boards. When did you guys learn that this was a trend? When did you say, wow, they really don't care so much about the hardware, but they care more about the power, they care more about the software, the stack? Correct. So at what point, a couple of years ago, did you guys make that connection?
I think about five years back. One of the big things that we've been doing at HP Labs is to say that it's not really about performance, it's not even really about performance per watt, it's about really understanding the net value the customers want, and we call it service level agreements, SLA by TCO, the total cost of ownership. And so we've been systematically looking at all elements of the total cost of ownership. And going back to what you said about Facebook wanting to own their stack, it's not so much that they want to own their stack; they ultimately want very, very aggressive cost savings in their infrastructure, because they have so much at scale, and a single dollar on a single server can literally translate to a million dollars somewhere else. And so they want to be very, very focused on cost, and that's what's very unique about the hyperscale market: it's high volume, low cost, right? So do they need to own a large portion of that stack, is that the premise to effect that, or not necessarily? Not necessarily. I think again, it definitely helps, but the key again is you need to look at it from a holistic point of view, and you need to really look at the total cost of ownership, which goes back to the out-of-the-box thing that I was talking about. You can't really look at one component and then squeeze costs out; you have to really look at what happens in the overall picture. So the purpose of my question is, there was obviously a line of thinking that, oh, IT will just be outsourced, like a utility, and that's really not happening wholesale. You're seeing mega scale, or hyperscale, critical mass. What do you see there in terms of the trend? Do you see that companies are going to increasingly own more and more of their own stack, or is that not a way to sustain competitive advantage?
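The SLA-by-TCO framing above can be made concrete with a toy model. This is a hedged sketch, not HP's actual model; the purchase prices, wattages, PUE, and electricity rate below are purely illustrative assumptions.

```python
# Toy 3-year TCO model: purchase price plus energy cost.
# All figures are illustrative assumptions, not HP numbers.

def tco_3yr(capex, watts, pue=1.2, price_kwh=0.10):
    """3-year total cost of ownership for one server:
    capex plus facility-level energy cost (wall power x PUE)."""
    hours = 3 * 365 * 24
    energy_cost = watts * pue * hours / 1000 * price_kwh
    return capex + energy_cost

# Hypothetical traditional server vs. a mobile-SoC-derived server.
traditional = tco_3yr(capex=2500, watts=300)
low_power = tco_3yr(capex=1800, watts=40)

print(round(traditional), round(low_power))  # -> 3446 1926
```

Multiplied across a hyperscale fleet, even a modest per-server delta like this is why "a single dollar on a single server can literally translate to a million dollars somewhere else."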
I think again, you've got to understand that the market has multiple components to it, right? I mean, look at HP's ProLiant: we've led the market for 61 quarters, we've been punching away at it, a very, very solid product. That market is going to exist, and it is going to continue to do that; there are going to be customers who want that. The hyperscale market is about 10%, and it's a very, very fast growing market, a very important market, and that market is going to have a certain set of dynamics. There's the cloud, which is what you talked about with people outsourcing things to a backend, and that's again a very fast growing, emerging market as well. And then I think the key, and that's what we tried to do with Moonshot today, is to really acknowledge the fact that there is an interesting market category that needs a radical new approach. And the phrase I used earlier this morning is, do you want to keep squeezing blood out of a stone, right? So you could do a bunch of traditional things and eke out additional benefits, or you could understand the specific dynamics the hyperscale market brings to the table, and then start thinking about radical approaches, and that's really what we tried to do with Moonshot, with the research of the last five years that we announced today. You said this morning you used to call this Blades++? That was the research project that we had, exactly, Blades++, yeah. So talk about some of the research you guys were doing in the labs. Let's back up a little bit, because you guys obviously are the storied HP Labs, and people don't always know what comes out of the labs and what goes to market. So this is one of those great success stories where you guys really have a relevant product in a market that seems to be in demand, big data, cloud, mobile, converging together. Big impact. Big impact. There are obvious use cases out there now.
No offense, but bigger than StoreOnce, right, which is going to reduce some data, and that came out of HP Labs, a big poster child, but this is bigger. It's pretty big. Go back and talk about some of the projects that came out of this. Actually, it turns out HP Labs is behind a lot more innovations than we publicize, and that's something that we should get better at. But I think if you look at the enterprise power and cooling portfolio, the EcoPod, for example, the marketing pitch is the world's most efficient data center. It has a PUE of 1.04 or something of that sort. There's not much left there, is there? Exactly, right? Talk about squeezing blood from a stone. And that kind of leadership came out of a lot of work from Labs. You talked to Chandrakant Patel, and he talked about the work that we had done. So we've had a very strong track record of working with the businesses on making impact. The enterprise power and cooling portfolio, being able to have power and cooling as a first-class citizen in the portfolio, came out of work from Labs. Power capping came right out of the research I had done, and ensemble power management came out of the work we had done in our group. So there's a whole bunch of work we've done, and this is part of that. And I think we are making a bigger deal about the microblades work partly because, like you said, we are at the right window of time: big data, cloud, energy efficiency. A lot of stuff is combining together. But I think the message is, there's a very strong innovation pipeline coming out of Labs, and we've been chugging away at moving things along. And Dave and I talked to a lot of people on theCUBE this year.
I mean, I talked to Jonathan over at Facebook, who runs operations, and there are other folks, at Apple and at Netflix, who run really big infrastructure, and the candid, off-the-cuff comments are, we really don't care about vendors, meaning the HPs, the EMCs of the world, and the NetApps; they care about the product and they care about cost. And so for them, their holistic view is only those two things. And what they told me is that some of the vendors just aren't meeting it, so you see legacy vendors getting bounced out, and the dollars are significant. Like we heard, they're writing checks for 20,000 servers like it's a no-brainer P.O. And I think 20,000 servers is a small order, right? Small order, yeah. And that's why you saw in the announcement today this architecture. Do you agree with that? That what matters is the product and costs? That's why I said it's SLA by TCO, right? It's more technical, but absolutely. At the end of the day, for any sale we make, it's what is the value the customer wants and how can we deliver it at the maximum efficiency we can? So if you look at the numbers, we talked about 90% savings in energy, 60% savings in space, so on and so forth. Those are, again, reflective of the fact that we went ground-up and asked, what can we do that is a new category, optimized for hyperscale? And that's really what you see here as well. What's the roadmap? Obviously, what I like about this announcement, and I think Gabe and I were talking about this, is the game change, and we think you guys don't take enough credit for it. HP's kind of like the Boy Scouts: they do the right thing but they don't toot their own horn enough. But this is a product that's very relevant in a market and could be the beginning of massive growth, because we think the value proposition for this product really is for all servers. Correct.
If the architecture shifts towards purpose-built or operating-system-like functional elements. Correct. So what's your vision of the data center operating system? Because we heard from Chandrakant talking about power, and he only mentioned joules once, by the way; I had a bet with him. We did talk about the second law. We talked about the first law of thermodynamics and the second law today, so I'm very happy. There's a holistic view, and we heard from someone earlier about air conditioning repair, a skill people might want to learn in college these days, because of power and cooling. But these are all now elements. Now, in a holistic fashion, that's an operating environment. Software is going to tie it together. It's not just your classic throw gear at it, turn the power on. Correct. So first of all, going back to your comment about HP not tooting its own horn, you're probably right. I think, again, if you look at blades, I mean the amount of innovation that has gone into where we are in blades, we don't make a big deal about it. But I couldn't agree with you more that this is a very fundamental inflection, and I'm very excited about that. But again, the results will come when we deliver. So that's why we have the Discovery Lab, the customer enablement, and as we start doing that, I think the results will speak for themselves, hopefully. So now, in terms of the operating environment and what's up ahead, there are two questions there. On the operating environment: when we first presented this vision, we talked about disaggregated, and Chandra likes to say dematerialized as well. So we have very, very efficient disaggregated building blocks, and then we talk about an ensemble management layer. And again, HP is incredibly good at the management infrastructure and converged infrastructure.
And so what you do is you basically take apart all these things, build very efficient building blocks, and then you have a composition layer that brings them back together again. We call it the ensemble operating system, and we've shown that management layer to be a very powerful tool in helping us. So that goes back to your operating environment. So provisioning and unprovisioning resources on that. That's one element of it. But it's about holistically looking at compute, storage, and networking together, understanding the management, thinking about the software issues, the whole shebang, right? The last question you asked was about where next, what's the future stuff, right? And again, this is something that I get a little bit excited about; I could spend hours talking about it. But one of the big things that we are looking at is something that we call nanostores. Now, what we announced today was energy-efficient processors. And the best metaphor that I can think of is a three-legged stool, right? If you look at any basic computer, and I teach computer architecture, I usually tell my students, a computer isn't anything fancy. There is something that computes, there's something that stores, there's something that moves data around. Those are the three elements of a computer. And what we announced was a disruption in one leg of the stool, the energy-efficient processors that you were seeing. But the other two legs of the stool are also very interesting. And we have some very, very cool HP Labs innovations that we are bringing to bear there. One leg is the storage layer, and that's where we have the memristor. And the other leg is photonics, and that's optics, right? So now let's talk about memristors. I know you wanted to have Stan earlier, and he couldn't make it because he's sick today. But Stan does a really great job presenting it. So I'm going to do my best Indian-accented impersonation of Stan here.
So if you look at a memristor: there are the basic circuit building blocks, resistance, capacitance, inductance. And the fourth element was the memristor. Professor Leon Chua, back in the 70s, postulated that based on the three fundamental elements, there had to be a missing fourth element. And he drew the graph and said, this is how the curve should look, pinched hysteresis and all sorts of fun stuff. And since then, nobody had really reduced it to practice. Then my colleagues in the nanotechnology group here started playing with nano-level stuff. They shrunk the technology to really, really small dimensions, and they found that when they applied voltage, something really interesting happened, right? Normally when you apply voltage, electrons move from one end to the other end, and that's your transistor. At the nano scale, for the particular material they looked at, titanium dioxide, when they applied voltage, material started moving. So literally the voltage started pushing material from one end to the other end: ions, oxygen vacancies started moving. So what you now have is, when you apply voltage, the material morphs. And when you take away the voltage, it stays that way, right? And that's why there is a memory of the resistance: memory plus resistance, memristor. Very, very exciting. And so they found that this had the properties of the mythical memristor, and the pinched hysteresis curve was exactly like a memristor's. Very cool. And the memristor, Stan will tell you, has a whole bunch of potential applications. Starting from, it has the word memory in it, so we've got to say something there. It's obviously very logical: it's memory, right? It can be used as a flash replacement. It can potentially be a DRAM replacement. But it also turns out it can be an integrated compute-memory element. And that's fantastic for modeling the human brain and doing synaptic architectures. Really cool stuff, right?
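The behavior described above, voltage pushing material (the doped-region boundary) back and forth and the device remembering where it stopped, can be sketched with the well-known linear ion-drift memristor model. This is a simplified illustration, not HP's device physics; the parameter values (resistances, thickness, mobility) are assumed for demonstration.

```python
# Minimal linear ion-drift memristor simulation: the internal state w
# (width of the doped region) drifts with current, and the memristance
# is a weighted mix of R_ON and R_OFF. Parameters are illustrative.
import math

R_ON, R_OFF = 100.0, 16_000.0   # fully-doped / undoped resistance (ohms)
D = 10e-9                       # device thickness (m), assumed
MU = 1e-14                      # ion mobility (m^2 / (V*s)), assumed

def simulate(steps=10_000, dt=1e-4, freq=1.0, v_amp=1.0):
    """Drive the device with one sine-wave period and record i-v points."""
    w = 0.5 * D                 # start half-doped
    vs, cur = [], []
    for n in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * n * dt)
        m = R_ON * (w / D) + R_OFF * (1 - w / D)  # memristance
        i = v / m
        w += MU * (R_ON / D) * i * dt             # linear ion drift
        w = min(max(w, 0.0), D)                   # boundary clamp
        vs.append(v)
        cur.append(i)
    return vs, cur

vs, cur = simulate()
# Plotting cur vs. vs gives the signature pinched hysteresis loop:
# the current is ~0 whenever the voltage is ~0, on both sweep directions.
```

The "memory" shows up because `w` (and hence the resistance) at the end of the sweep depends on the charge that has flowed, and stays put when the drive is removed.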
But again, Stan would do a much better job talking about that. I'm going to focus on the memory stuff, right? So when you start thinking about memristors as a memory replacement, and you look at the existing memory technologies, whether it's DRAM, whether it's flash or disks, memristors have a very unique opportunity to provide a very fundamental disruption. And the reason is memristors are non-volatile. So you turn the power off, and it still remembers, which is like a disk. But it also has properties like memory. It has competitive latency: we are talking about nanosecond latencies, as opposed to the millisecond and microsecond latencies that you see with disks and flash. You're talking about significantly better energy efficiency: we are talking about two picojoules per bit, which is orders of magnitude better than anything we have with memory, disk, and flash. So we now have a technology that has properties like a disk but also has properties like memory. And that's a fundamental disruption. And again, if you go back through the memory era, we had magnetic cores and we had DRAM. The last time we invented a new memory technology was back in the 70s. And we are ready for a new technology coming up soon. When? I'm not going to predict, but it's going to happen. But it's exciting, it looks good. And it is looking good: we have the Hynix partnership at the labs level, and we are exploring the technology, and every day we are learning more and getting more excited about this. This is the kind of hard problem that Prith talks about. You guys are solving these hard problems. Absolutely, right? And so that's the fundamental technology. And then on top of that, you think about optics. I can talk for ages on optics, but the big thing for this discussion is that optics provides the energy-efficient, low-latency, high-bandwidth communication channel that is very different from traditional copper.
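The disk-versus-memory comparison above can be laid out side by side. Apart from the ~2 pJ/bit figure cited in the discussion, all the numbers below are rough order-of-magnitude assumptions for illustration, not measured specifications.

```python
# Order-of-magnitude traits of storage technologies. Values are illustrative
# assumptions, except the ~2 pJ/bit memristor energy cited in the interview.
technologies = {
    #  name:      (latency_s, energy_j_per_bit, non_volatile)
    "disk":      (5e-3,  1e-9,  True),
    "flash":     (50e-6, 1e-11, True),
    "DRAM":      (50e-9, 1e-12, False),
    "memristor": (10e-9, 2e-12, True),   # nanosecond-class, ~2 pJ/bit
}

# The claimed disruption: the only entry that is both non-volatile (like a
# disk) AND memory-class latency (sub-microsecond, like DRAM).
disruptive = [
    name for name, (lat, _energy, nv) in technologies.items()
    if nv and lat < 1e-6
]
print(disruptive)  # -> ['memristor']
```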
And so now you have energy-efficient processors, which Moonshot talks about. We have fundamental memristor technology for storage. We have optics for communication. And you put these three things together, you have what I call a black swan event. A black swan event, for most of the viewers. A perfect storm, if you're from New England and you have no power. Well, a black swan event is a little bit more than that, right? It's a hard-to-predict event which is very disruptive, but in hindsight, you hit your head and say, of course, I should have seen it coming, right? The real estate bubble, whatever, right? And so we have a black swan event where you see the confluence of these. I have a feeling that in five years, everybody's going to look back and say, of course, what Partha said was very obvious, hopefully. And so what we are looking at is saying, well, here's a black swan event, and then you combine it with another very interesting change, which is the emergence of big data. And we don't just think about big data. Everyone knows data is big. You mentioned you have a big data solution that you're thinking about. And I think the big thing there is data is growing faster than Moore's law, which to me is just an incredibly interesting statistic. I mean, I make a living out of Moore's law, and to know that there is a bigger exponential gets me pretty excited. But it's also not just big data. We talk about big, fast, total, deep data. And what that means is we are now starting to use the data to get streaming, real-time insights. We are looking at unstructured, structured, streaming, all kinds of fun data. We are looking at deep analytics. We are looking at more and more sophisticated insight generation from data that we never looked at before. Today a customer said, every time I answer a question, I have three more questions I want to answer. Deeper analytics, right? So you take the technology. At the speed of business. At the speed of business.
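The "bigger exponential" point, data growing faster than Moore's law, can be sketched by comparing two compound-growth curves. The doubling periods below are illustrative assumptions (Moore's law taken as ~2 years; data assumed to double faster), not sourced figures.

```python
# Two exponentials: compute capability (Moore's-law-style doubling) versus
# data volume, assumed here to double faster. Periods are illustrative.
MOORE_DOUBLING_YEARS = 2.0   # assumed transistor-density doubling period
DATA_DOUBLING_YEARS = 1.2    # assumed data-volume doubling period

def growth(years, doubling_period):
    """Multiplicative growth after `years` given a doubling period."""
    return 2 ** (years / doubling_period)

for y in (2, 6, 10):
    gap = growth(y, DATA_DOUBLING_YEARS) / growth(y, MOORE_DOUBLING_YEARS)
    print(f"after {y:2d} years, data outgrows compute by {gap:.1f}x")
```

The gap itself grows exponentially, which is why a faster-doubling data curve eventually overwhelms any fixed compute trajectory.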
At the speed of business. In real time. Exactly. And so you combine these two things. That's the genesis of our work, right? We looked at this and we call our project the Data-Centric Data Center. So we are bringing the data back into the data center, right? And one of the instantiations is the nanostore. The name nanostore is a play off of microprocessor. The microprocessor was micro-level technology, all about processing. Nanostores are nanotechnology, all about storage, right? And we think nanostores have the same potential as microprocessors in being a new building block for the next generation of data-centric computing. So what is a nanostore? Now that I've gotten you all excited, you're going to say, tell me more, right? The nanostore is basically taking our memristor stuff, doing a 3D sandwich of memristors, and then putting in one level of logic as well. So you put the compute and the data together, and that's your nanostore building block. And you can do that because the memristor technology works very well with CMOS, and it has some very beautiful properties that I'm not going to go into in too much technical depth. Once you have that, you take this nanostore, duplicate it, and you build your system with that, right? So what have we done? We have moved away from a traditional system. Right now, if you wanted to build data-centric systems, you have the processor, a first-level cache, a second-level cache, a third-level cache, first-level memory, second-level memory, maybe flash, and then the disk: a very deep hierarchy. Our hypothesis is that as you go further into the future, such deep hierarchies are going to start reaching the limits of their benefits.
In fact, we were part of a DARPA study where we looked at extrapolating current data 10 years into the future, and we found that 80% of the power in a future system can be spent just storing and moving data between all these hierarchy levels, right? The overheads of moving across that hierarchy. At some point there's a tension between the benefits and the overheads, and you're going to start backing off. What the nanostore does is keep the data as the queen of the world and surround it with compute. And if you think about it, most people, the first time they hear about a compute hierarchy surrounding data, as opposed to a data hierarchy surrounding compute, say, of course it makes sense. It's the data-centric world; you're flipping the model on its head. Absolutely, right? And so the nanostore architecture is large-scale distributed, which is very, very well matched with the kind of stuff that you hear about with Web 2.0, MapReduce, large-scale distributed workloads. It has the notion of a shallow hierarchy, which is very, very energy efficient. It has the notion of a compute hierarchy backing it up, so you can actually do a lot more. It has some very, very cool properties. And then you're going to say, well, all this is nice, do you have anything backing it up? So we have run experiments, we have run models. And what we're finding is that this architecture has incredible potential. Some of our results indicate that it can be about 100 times more energy efficient than extrapolated, aggressive implementations of traditional approaches. And so that's where we are really pushing very hard for the next breakthrough. And we think it could potentially be game-changing. So we start with Moonshot, one leg of the stool, add the other two legs, look at the whole thing together, change the world. Is there a big assumption there on the cost to store a bit?
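The deep-versus-shallow hierarchy argument can be illustrated with a toy data-movement model. The per-level energies below are invented illustrative numbers (not from the DARPA study or HP's results), chosen only to show how traversal costs compound across a deep hierarchy.

```python
# Toy data-movement energy model: a request traversing a deep hierarchy pays
# each level's per-bit movement cost; a flat, nanostore-like design pays only
# a logic layer plus its local store. Energies in pJ/bit are invented.
DEEP_LEVELS_PJ = [1, 2, 5, 10, 50, 200, 1000]  # caches, DRAM, flash, disk
FLAT_LEVELS_PJ = [1, 5]                        # logic layer + local memristor

def movement_energy_pj(levels_pj, bits):
    """Energy (pJ) to move `bits` through every level in `levels_pj`."""
    return sum(levels_pj) * bits

deep = movement_energy_pj(DEEP_LEVELS_PJ, bits=1e9)
flat = movement_energy_pj(FLAT_LEVELS_PJ, bits=1e9)
print(f"deep/flat data-movement energy: {deep / flat:.0f}x")
```

The exact ratio depends entirely on the assumed per-level costs; the structural point is that collapsing the hierarchy removes whole terms from the sum, which is the intuition behind the shallow-hierarchy efficiency claim.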
We have pages and pages of reports on the cost model. The short answer is we believe we can handle that. The longer answer... That's a big wow, if you can handle that. I'm not saying it's a trivial problem. I'm saying we can handle it. So you should be careful about how I'm answering the questions. But it is an important point. And this is where I think there are a lot of business issues as well. It's not just a hard technical problem. There are volume dynamics. One of the things we really look at, and this goes back to Moonshot, is that the volume of energy-efficient processors in the mobile market is what helps keep driving energy efficiency all the time. And I think there is a volume play to be made here as well. And so we have a very sophisticated model. Just like with Moonshot, where one of our contributions was understanding the trade-offs and not just saying this is for everyone, but this is where it works, we have a very nice model for memristors, and there's a certain magic inflection point on dollars per bit where this makes sense. And we know exactly where it is. I'm not going to tell you. Yes. The premise is basically that the granular data hierarchy will give way to a granular compute hierarchy where data is more homogeneous. That's the premise. And we are pushing very hard on seeing if it works. We have early evidence that it can. And hopefully... This decade? Oh, much earlier. I'm too old to wait for a decade. Good. Great. So we'll keep track of you then. Absolutely. All right. Four Johns 50? Don't even go there. 46 next month. Thanks so much for coming inside theCUBE. Thank you, my pleasure. Great to have you. See you later. My brain is full now. Thanks for coming in. No, my pleasure. We'll definitely go back and re-look at that tape. Take some notes. Thank you. Sounds good.