Our next guest is Dwight Barron, Chief Technologist for Hyperscale Computing. Dwight, great to see you again. We talked yesterday. Step up to the microphone. We're here inside theCUBE, our flagship telecast, for a special exclusive post-event on the moonshot project announced by HP this morning. The Wall Street Journal, Reuters, all the major press were here. We're doing a special edition of theCUBE where we're going to do the commentary and analysis to break down what this means. It's a new announcement for HP, and I'm here with my co-host, Dave Vellante. Dave, what's your take so far before we jump in with Dwight? What's your feeling?

Well, Dwight, we've been talking all day about how this feels like a bigger inflection point than even, say, ProLiant blades, which were obviously a big deal and drove a lot of revenue, or pods, which were kind of evolutionary. This feels like a major sea change: the consumerization of IT really driving into the data center. So we're interested in the secret sauce and the value-add that HP brings, right? In the old days you'd do a processor and an operating system, and that's really changed. You guys have proven that you can make a lot of money doing other things and building other value. So maybe talk about that a little bit.

Yeah. You mentioned blades, and I was part of the team that worked on blades and that transition. It caused us to look at a larger scope of the problems, right? More than just the servers: here, go put them in a rack, put in a bunch of cables, have a nice life, that kind of mentality. We integrated more of the solution. We brought in the connectivity, we looked at the management pieces, and we solved a lot of the problems that we were creating, to help customers scale up on enterprise workloads.
I think in the four or five years that we've been in the hyperscale business, since we started incubating it, we've learned a lot, and it's taught us to look at the whole problem from the data center level. You start with two or two and a half megawatts of power and work backwards from there. How much energy goes into cooling? How much goes into the IT equipment? How do you cut the cooling cost out? When we started this a few years ago, we were burning two watts on cooling for every watt we were burning on IT equipment. Now with pods, we burn just a fraction of a watt for every hundred watts we put into IT equipment. So we've learned a lot by being in this business and looking at the holistic problem.

So pods clearly swung the pendulum of the PUE. Does this new architecture maintain that, or does it swing back the other way so that you have to bring pod-like thinking in to maintain it?

What we've learned with the pods, looking at the data center level, is that we got a tremendous amount of improvement in a short amount of time. Now we need to look at what's left over, right? The IT equipment itself. How much work are you getting done for every watt you put into the IT equipment? There are no more wasted watts going into the cooling, and very, very little loss is going into the power distribution. So now we're going to look at the IT equipment: how much work are we getting out of it for every watt that we put in? Finally, you attack the real problem; the cooling was the low-hanging fruit.

So I want to take us back to a higher-level conversation around the evolution of microprocessors. We were talking earlier, Dave and I, about when we were growing up in the computer business: the PC was very disruptive to the Unix marketplace, and you had the 8086 and obviously x86 going up through that. That spawned Wintel desktops, and then servers came out after that first generation.
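The cooling numbers Dwight cites map directly onto the standard PUE (Power Usage Effectiveness) metric: total facility power divided by IT power. A minimal sketch of that arithmetic (just the textbook formula, not HP's internal tooling):

```python
def pue(it_watts, cooling_watts, distribution_watts=0.0):
    """Power Usage Effectiveness: total facility power / IT power (ideal = 1.0)."""
    total = it_watts + cooling_watts + distribution_watts
    return total / it_watts

# "Two watts on cooling for every watt on IT equipment" -> PUE of 3.0
legacy = pue(it_watts=1.0, cooling_watts=2.0)

# "A fraction of a watt for every hundred watts of IT" -> PUE near 1.0
pod = pue(it_watts=100.0, cooling_watts=0.5)

print(f"legacy PUE: {legacy:.2f}")  # 3.00
print(f"pod PUE:    {pod:.3f}")     # 1.005
```

Under those figures, the pod design cut cooling overhead from twice the IT load to well under one percent of it.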
Why is this announcement important? You mentioned you learned a lot. When you looked at the problem, x86 is a phenomenon that grew up as the legacy, if you will. And there's all kinds of talk about the PC being dead, though that's more about the desktop being antiquated when you have mobility. So when people think about ARM, which is a big part of this announcement, Dwight, they think about the benefits of mobile. People can see that today when they have an Android phone or an iPhone; one's got better battery life than the other, and ARM is a big part of that. What does that say about this new evolution of microprocessors? And this is a server announcement, so people tend not to put mobile chipsets and technology into a server discussion. What's your take on this PC-to-servers, mobile angle? Can you elaborate from a technical perspective?

Sure. So we start with the premise of: gosh, we've got to get more computing done for less power, right? We look around and say, well, what are the leading technologies that know how to get the most computing done for the least amount of power? It's battery-powered things, right? Battery life is finite. Everybody wants smaller, thinner devices. If anything, they want a smaller battery, and they want more computing done. They want richer webpages, and they want them rendered faster. That's the area we see as the hotbed. It's so deep in the DNA of those designs: for every transistor we lay out, we're going to lay it out to do the most amount of work for the least amount of power. And when there's no work to be done, we're going to not burn power.
And so it's really coming from the mobile devices, the client devices, and that could be, you know, laptops, tablets, smartphones. It's that heritage from the mobile devices and battery power that's making these great strides in how to get the most amount of computing done.

So what's the trade-off there?

Well, the trade-off is in the peak speeds of the transistors you lay out: you don't go up to the top end of the high-speed transistors. But, you know, we learned a long time ago, five or six years ago, that the incremental benefit of speed versus power was not worth it. So we held the clock rates back and started going to more cores. The clock rates have stayed modest relative to what the silicon processes can do. And now you can actually dial the clocks back, and for a certain amount of power you can get more work done.

What's interesting, Dave, is the analogy: mobile driving this new server architecture the way PCs drove the Wintel server. I remember those days when the first Intel server came out; it was essentially a 386 and then a 486 processor from Intel. And they were laughed out of the market.

The LAN server.

It was laughed at in the market, but it had a specific use case, which was serving local area networking, Novell in those days, right? And it evolved. So to me this announcement is very similar, in the sense that this hyperscale announcement, with the use cases we heard specifically around cloud and big data, may look narrow. But if you look at the growth of what happened in the server business with x86, there was massive growth up to blades. And blades was like the glass ceiling, from what we're hearing. So take us through your vision. Do you see it the same way?

Yeah, we've been through a client-to-server transition before, you know, with the desktops.
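The clock-versus-cores trade-off Dwight describes falls out of CMOS dynamic power, which scales roughly with frequency times voltage squared; since supply voltage tends to scale with frequency, power grows close to cubically with clock speed. A rough illustration under that assumption (the cubic exponent is the standard textbook approximation, not an HP figure, and real silicon adds static leakage this sketch ignores):

```python
def relative_power(clock_ratio, exponent=3.0):
    """Dynamic power relative to baseline, assuming P ~ f * V^2 with V ~ f."""
    return clock_ratio ** exponent

def throughput_per_watt(cores, clock_ratio):
    """Relative work-per-watt for `cores` cores each running at `clock_ratio`
    of the baseline clock, assuming throughput scales with cores * clock."""
    work = cores * clock_ratio
    power = cores * relative_power(clock_ratio)
    return work / power

one_fast = throughput_per_watt(cores=1, clock_ratio=1.0)   # baseline
four_slow = throughput_per_watt(cores=4, clock_ratio=0.5)  # 4 cores at half clock

print(four_slow / one_fast)  # 4.0x work per watt under these assumptions
```

Halving the clock cuts per-core power to an eighth while halving per-core throughput, which is why "hold the clocks back and add cores" wins on work per watt for parallel workloads.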
And the really interesting thing to me is that the servers that evolved reflected what the clients were doing. The clients at the time were doing personal productivity, spreadsheets, and the servers at the time were holding the spreadsheets: the file servers, the print servers. They grew up to be the database engines, until finally they grew up to manage all of the workloads in enterprise business applications. But they mirrored what the clients were doing, which was business productivity and communications.

Today in the mobile space you look at this new class of clients coming out, and people are doing a whole different set of things with their communications devices, their information devices. So I think that as we transition those client architectures from these mobile devices into the server space, there's a lot they can reflect of those clients, and there are new workloads and new applications that will develop just because of the client environment we're serving. Now, that's not necessary for success, because today on the internet you speak internet protocol and anything can talk to anything. But think about what could be enabled in a client device if you could push state to the cloud from it: here's the stuff I'm interested in, this is the context, these are the valuable bits that mean something. Can I have those bits pushed up to the servers? Can I have them pushed anywhere, anytime, to anything I log into? Today we think of that as maybe data: an address book entry, or a picture, or a note you jot down. But what if it were the app itself? What if it were your mailbox, running on your client device, when you're not looking at your mail?

So more power's going to the cloud, obviously.

More power, yeah. More power's going to push to the cloud, and the cloud needs to have data centers.
Push it up there, let the app run there for a while. They've got more resources, they've got more data access, they've got more bandwidth.

So there are two major trends we're watching: obviously the edge of the network, meaning the device, the endpoint, the user with their mobile desktop or tablet or whatever they've got; and now a cloud environment, which is more SaaS, platform as a service, infrastructure as a service. What's the role of new elements in the architecture we've been hearing about from a systems view, Dave? You know, SSDs with flash enabled a lot more integration. How is that going to change this evolution, and how fast? Is there a Moore's-law-like effect in that market? How do you look at it as a technologist? What are you watching in that emerging hardware area?

Well, clearly solid-state memory: the things with flash, and the thing HP Labs is working on called the memristor, the whole category of persistent memories that are electronic and have a mixture of properties. They remember stuff when you turn the power off, but they're accessed electronically and they're fundamentally silicon devices. That's creating a new kind of middle ground between DRAM memory and rotating media. And of course we all see it in the portable devices we carry around with us. But back on the server end of it, a lot of the information, and the time you wait on information, and the power spent to store information, is in rotating media. And we believe that will change in the next few years. We've already seen a huge transition in basically all of what's called random IO, the things where you've got to move the mechanical disk head around and wait a few milliseconds. A lot of that is already going to solid-state media, and the answers come back a thousand times faster, in microseconds. The capacity stuff, where I've got gigabytes of files: it's still cheaper to put that on magnetic media, but even that's going to move to solid state.
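The "thousand times faster" figure is the gap between a mechanical seek, measured in milliseconds, and a flash read, measured in microseconds. A back-of-envelope sketch of serial random-IO rates at each tier (the latencies here are illustrative round numbers, not measurements from the announcement):

```python
# Illustrative access latencies per random read, in seconds
LATENCY = {
    "hdd_seek": 5e-3,    # mechanical head movement: milliseconds
    "flash":    50e-6,   # solid-state read: tens of microseconds
    "dram":     100e-9,  # DRAM reference: ~100 nanoseconds
}

def random_iops(latency_s):
    """Serial random operations per second at a given access latency."""
    return 1.0 / latency_s

for tier, lat in LATENCY.items():
    print(f"{tier:9s} ~{random_iops(lat):>12,.0f} IOPS")

# Flash vs. disk under these numbers: a 100x gap per device,
# and real arrays stack up to the "thousand times faster" Dwight mentions.
speedup = random_iops(LATENCY["flash"]) / random_iops(LATENCY["hdd_seek"])
```

The same arithmetic explains why capacity storage lags behind: sequential gigabyte reads don't pay the seek penalty, so magnetic media stays cost-competitive there longer.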
Within the environments where you're talking to your customers, and with the HP Labs guys you talk to, you probably have some insight into these emerging use cases. Share with us your vision and your experiences, either anecdotally or specifically, on the big data phenomenon. How real is it? I mean, we cover it; we have an opinion on it. How does it affect this whole equation?

Oh, it's huge. I mean, the customer experience that's enabled by big data: knowing what you want, when you want it, trying to lure you into buying it, figuring out how you're going to pay for it later. All of that is built on personalized big data analysis. It's basically the financial model that's driving all the commerce driving the websites. So that part is huge; it's already here. Now the question is how the rest of us get it, how enterprises get access to it, and we see that happening. We expect a lot of growth in these same tools and techniques that the web companies are using to look at their customer trends.

So basically big data is coming to the business. It's already there.

It's already here.

It's here. And these are the kinds of machines that they want. This is the kind of purpose-built big data powerhouse machine.

You nailed it on the head: purpose-built. It's: how do we optimize not only the compute architecture that goes with it, but the whole package, the solution, and make it easy to consume, and fit it into the framework that customers know how to deal with?

Well, I have one kind of different question, a change of gears, because we love to get chief technical guys on theCUBE to talk about trends. But more importantly, one of the things we're passionate about at SiliconANGLE and Wikibon is the new generation of computer scientists and engineers, whether they're data scientists or actual programmers, hardware and software.
And the business disciplines are changing from a personnel standpoint. Could you talk about the new requirements? Because we're talking about joules, we're talking about power, we're talking about chip-level, double-E, electrical-engineering-type discipline, but it's also a little bit of computer science. For folks out there in the younger generation, what would you advise them about the kind of curriculum or expertise they would need to master to have a career in this area?

Yeah, I think the biggest and most important thing is to realize that almost every problem we tackle today takes a multi-disciplinary approach. You know, we started off talking about the biggest challenges in advancing the state of computing in a data center, and it turns out to be the air conditioner. So, you know, I've partnered with the ITT Technical Institute, and it's going to be graduating some serious engineers. But we had to blow the dust off the old thermodynamics books and go back and say, oh yeah, I had a course in that once, what are they talking about? So when you say in college, "I'm never gonna use this," you know what? Actually, you could use it. It truly is a multi-disciplinary approach to the problems: the hardware, the software, the mechanical engineering from a packaging standpoint. You know, we'll even be converting electrons to photons very soon, and the primary communication means, when you think about chip-to-chip, won't even be electrons anymore. That's a lot of deep physics and a lot of work there. So I think the biggest thing is: study what you love, do what you love. There's no need to go through life being miserable when there are so many fun things to go do. But learn how to be a partner, learn how to team. I know many of the colleges and curriculums are really, really encouraging teamwork and cross-disciplinary work.
And we're encouraging them to do that, because there's individual creativity, but if you're too focused on one topic, you might not see a solution that requires thinking just a little bit out of the box. I've seen this time and time again in my career: people who get siloed, looking at a problem from just one dimension, can't tackle it nearly as effectively as a multi-discipline team that can wrestle it to the ground from multiple dimensions.

I'm interested in your standpoint, as a technologist, on the concept of differentiation. People think, oh, everything's commodity and it's all going commodity, but I've been writing down, just listening to our conversation, ways in which you can differentiate. Be first. There's the ecosystem. There's the packaging and the platform itself that you guys are building. What are the ways in which you see HP adding value, providing differentiation, and ultimately being able to drive a profit out of this business?

Well, let's start with this: I have yet to walk into a customer visit where they go, hey Dwight, all my problems are solved, you can go home, here's a check, I'm ready to buy this and I don't have any more problems that need solving.

So there's a lot of pain.

A lot of problems out there to be solved. I don't care who they are, how big they are, or how well they've got a handle on their operations; there are more problems to be solved. I've always subscribed to the theory that if we can help our customers lower the cost of delivering a transaction to their customer by an order of magnitude, they've got two orders of magnitude more things waiting in the wings that are now profitable to deliver as business transactions. Think about how much richer the websites have gotten. That's a business transaction: how much more they put into building a website or an e-commerce site for you.
And that's because we made them cheaper to deliver over time. Behind that comes increased complexity and increased node counts. So managing it, decreasing the complexity, making it reliable, making the parts show up on time and operate reliably: there's a tremendous amount of opportunity to add value here for our customers.

So we've got somebody who actually Skyped in a question. Will the applications developed for mobile clients move to these new servers in the same way the Microsoft client moved to Windows servers? In other words, is this the end of the Microsoft-Intel duopoly? What's your take on that?

No. Well, first off, this isn't the end of anything Intel. This is the beginning of everything HP. It's the beginning of a new round of experiments. We're going to start with new hardware and a new server architecture for the software that exists today, right?

Open source in particular.

Open source, you know, web servers and things. It's only four or five categories of applications that we think are good candidates to start with. But we've done our homework there; we've done quite a bit of homework over the last two or three years, measuring these things and building models and benchmarks. So we think those will have a high degree of success. You know, software and hardware, when you're trying to build something new, is always referred to as chicken and egg, right? So today's software is the chicken, and we'll start with the new hardware.

Well, you guys are putting this out there today. What impressed me today is that with these announcements you always kind of expect just another rah-rah announcement. But this is a use case that's really relevant, with big data, as you mentioned, and cloud bursting out at the seams. I mean, technically there are issues: growth issues, challenges. And it's a new generation.
So it's like, literally, the move from PC to servers, and now from mobile to a new kind of server. To me, I think it's a really good vision, and I totally think that's what the market needs. I mean, we know a little bit about the data, as we're doing our own little Hadoop stuff, and we know what it costs. It's $600 to buy a box, a pizza box, cheap. But I don't need three quarters of the stuff in there. I'd love to have four of them in one smaller box. So the market's changing, and if you can deliver performance at that level, it's a winner. And again, obviously the ecosystem's huge. You've got to have the right incentives, and we asked, you know, Mike Kimball. You kind of didn't really answer the question, because it sounds like it's just getting started, which makes sense.

Yeah. But we'll use today's software to build and support a new round of hardware, and then as that hardware gets established, a new round of software will get established.

I mean, we're hearing that. There's a tsunami of developers out there from a new, younger generation, what I call the third generation of open source. You know, we covered the passing of the guy who invented C recently, and that was a lot of the early guys talking about that, but really we're two generations on, and you have these software guys who don't really have any insight into port configurations or a lot of the stuff that goes on at the network level. All they do is code, and they're coding on open source. So porting code like that is really key. And if they can have a programmable web on top of it, I think that's a great foundation.

Okay, Dwight Barron, thanks for joining theCUBE. First time on theCUBE for the chief technologist.