Here inside theCUBE, live from New York City for a special CUBE presentation. siliconangle.com, wikibon.org, this is theCUBE. We're here covering HP's Moonshot, a big announcement changing the game in the data center, changing the game in the computing landscape, enabling what we're seeing today: cloud, mobile, and social, new application frameworks, software. Exciting times. I'm John Furrier, the founder of siliconangle.com, joined by my co-host, Dave Vellante of wikibon.org. Suresh Gopalakrishnan is here. He's the vice president of AMD's server business, and we are geeking out big time today with a system on a chip and talking about changes in the server market. Suresh, welcome to theCUBE. Hey, thank you for having me. Great to see you today. So first of all, you've got props here. Let's go right to it. We've been seeing all kinds of innovations all day. Yeah, hold it up nice and high. Perfect. So tell us what we're looking at here. So you're looking at one of our next-generation accelerated processing units, which is a combination of CPU and GPU built together. And this goes into the HP Moonshot system. This is one of the cartridges. So that's one of the cartridges, slides right in. Your IP is embedded in there, right? Correct, it's right here. It is four SoCs, or systems on a chip. Okay, and we asked earlier, but I want you to describe: a system on a chip, what does that mean? What makes it a system on a chip, and what are the attributes of a system on a chip? I think one of the previous guests also mentioned this, but if you look at a traditional server, you will see a lot of CPUs plus the memory associated with them, and then a lot of chipset components that talk to either the I/O or to memory. In this case, all of that, all of the devices that are needed to talk to the I/O as well as memory, is embedded in one of these chips. Hold it up a little higher so I can see it. So it's in one of those chips. We have four of these chips in here, and then the memory is attached to the back.
Okay, so the only things that you need to build a server, a basic compute node, are just the processors and the memory. So that's a packaging innovation. Obviously you've got to figure out how to make them at scale, make them reliable. Correct. There are other capabilities that you have to design in there. Why all of a sudden are you seeing such action in this space? What has been the technological breakthrough allowing the industry to develop such innovation? One of them is that as silicon geometries become smaller and smaller, you get to pack a lot more stuff into the silicon. And the other is that you can put a lot more cores into these SoCs themselves. So you're looking at anywhere from 8 to 16, and going to 24 and 32, cores coming into a single chip. At that point you're bringing enough compute capability into a single chip to build a server around it. If you have one or two cores it doesn't make sense, but when you have a large number of cores you have enough computing horsepower in there. That gives you the ability to say, hey, now I can bring some of the other peripheral chips into this. And we've been talking a lot today about power consumption and heat density, and several years ago Google wrote a paper quantifying some of the impact on what it was going to mean for their environment. You're starting to see that now trickle into the traditional data centers, aren't you? Where energy consumption, power, and cooling are becoming an onerous component of the operating expenses. Is that the main driver behind these innovations, or are there others? There are multiple things. Power is one that everybody talks about; space is another one. Most people have to operate within a given power and space constraint. If density didn't matter, you could just keep chips outside and build bigger boards. So that's one part. The other part is that when you have chips talking to other chips, you're going to expend energy.
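As an editorial aside, the core counts Suresh cites make the density argument easy to sketch. A rough back-of-envelope in Python: the four-SoCs-per-cartridge figure comes from the conversation itself, while cores per SoC and cartridges per chassis are illustrative assumptions, not published Moonshot specifications.

```python
# Back-of-envelope server density sketch.
# socs_per_cartridge is stated in the interview; the other two inputs
# are assumed values for illustration only.
socs_per_cartridge = 4        # stated above: "four of these chips in here"
cores_per_soc = 8             # assumed; Suresh cites 8 to 32 as the range
cartridges_per_chassis = 45   # assumed chassis capacity

cores_per_cartridge = socs_per_cartridge * cores_per_soc
cores_per_chassis = cores_per_cartridge * cartridges_per_chassis

print(f"cores per cartridge: {cores_per_cartridge}")
print(f"cores per chassis:   {cores_per_chassis}")
```

At the low end of the cited range that is already over a thousand cores in a single enclosure, which is the point of the "enough compute to build a server around a single chip" argument.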
So when you do it as an SoC, you reduce the amount of energy that is dissipated in that communication. So both of those things are reasons. Suresh, let me ask you a question, because we've been talking about this earlier. In our commentary this morning, the big trend everyone's talking about right now was the changing landscape of the data center, both from a physical plant perspective, a facility perspective, as well as the architecture around how servers are built. Obviously Google builds their own, and Facebook has the Open Compute summit. That brings up the question of customization. So what's your view on this whole changing landscape around customization? I'm on record saying, hey, I think it's great that the high-end guys might build their own, like Google. That's just a skewed data point in my mind, but the average big enterprise will assemble their own, not necessarily build their own. What is the trend here? With these kinds of components you've got programmability, you've got software that can do things on the chips. Is this whole thing overhyped, this whole build-your-own thing, building your own data center from scratch? Build your own, I think you hit some of the things right. I mean, you have to have a certain scale before you start customizing everything at your level. The examples you give, Google and Facebook, have the scale to go do that. And you see some of the trends from Open Compute, where people who are still spending a little bit more money than your small business are trying to say, hey, we want to procure it a little differently. So will they go and build their own server? Most likely not. That would require them to design everything from the physical plant up to the apps, right? So there's a little bit different mindset too, right? Correct. So they'll most likely go with an Open Compute kind of model, where they can buy these things, source them differently, and then figure out how to manage that as a common infrastructure.
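The energy point about chip-to-chip communication is worth a quick illustration. The sketch below uses rough, order-of-magnitude per-bit energy figures of the kind commonly cited in the computer architecture literature; they are illustrative assumptions, not AMD or HP data, but the two-orders-of-magnitude gap between on-die wires and off-chip links is the essence of the SoC argument.

```python
# Illustrative sketch: why integrating I/O and memory controllers on die
# saves power. Signaling between separate chips costs far more energy per
# bit than on-die interconnect. Both per-bit figures are assumed,
# order-of-magnitude values for illustration only.
ON_DIE_PJ_PER_BIT = 0.1     # assumed on-die interconnect energy (pJ/bit)
OFF_CHIP_PJ_PER_BIT = 10.0  # assumed chip-to-chip link energy (pJ/bit)

def traffic_energy_joules(gigabytes, pj_per_bit):
    """Energy to move `gigabytes` of data at a given per-bit cost."""
    bits = gigabytes * 8 * 10**9
    return bits * pj_per_bit * 1e-12  # pJ -> J

gb = 100  # hypothetical CPU-to-I/O traffic volume
off_chip = traffic_energy_joules(gb, OFF_CHIP_PJ_PER_BIT)
on_die = traffic_energy_joules(gb, ON_DIE_PJ_PER_BIT)
print(f"off-chip: {off_chip:.2f} J, on-die: {on_die:.2f} J")
```

Whatever the exact constants, pulling that traffic onto the die removes most of the communication energy, which is the second reason Suresh gives alongside density.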
So, you heard from our previous session, everyone likes to talk about the software-defined data center. Obviously that's a nice marketing tactic now, but it's a real destination for folks as they look holistically at the operating environment. How does all this innovation at the physical product level factor into what software-defined blank means? So, software-defined server. How would you tackle that question? I tend to keep away from everything software-defined because it's so hyped up and so... Controversial. So controversial. You can call anything software-defined, because software is what turns the hardware on. So... It's everything. You let your marketing guys worry about that. Yeah, so I have to do some marketing as well, but software-defined networking kind of started the whole thing, and then I think one of my favorite partners has now started the software-defined data center approach. There's an open data center community out there as well. Yeah, VMware, right? Yeah, so VMware started that, and then there's an open community out there as well that is trying to define various things around it. So one of the things is, there's a lot of programmability in these kinds of products, right? When you put your I/O and your networking into these things, you have to program all of those things, and the programming at the highest level would be: how do you manage that whole data center? So you start with software there. How do you provision? How do you connect your network together? How do you take care of reliability? How do you handle redundancy? All of those things happen at the software level. That's probably the better definition for everything software-defined. Everything software. That's why we like software-led infrastructure. So, you guys have had to compete over the years, and you've been around a long time.
You know the business. In the space that you're in, almost by definition you have to have a value proposition that is more compelling than the biggest player out there. What, specifically as it relates to Moonshot, is AMD's unique value that you're bringing to the table here? So what we are focused on is what we call the accelerated processing unit, which is bringing the right amount of CPU as well as parallel computing together on the same chip, like what we have done here. What that does is that when you look at highly parallelized workloads in these data centers, you now get to use the parallel processing capabilities that are available in GPUs without adding a separate GPU. It is on-die, it is lower power, and it can give you the parallel capabilities. And so we've been talking a lot about the hyperscale space as well, and John and I sort of envision this space where on one side is hyperscale and on the other side is the traditional enterprise, and they've been relatively separate up until recently. You're starting to see the two bleed together. And we certainly believe that the Googles and the Facebooks and the Amazons are sort of showing the way. Things like DevOps came out of that world, and certainly cloud as well, with Amazon's essential invention of the cloud. What do you see as the major trends driving the hyperscale space, and how fast are they driving into the traditional enterprise? It depends entirely on software. That's the gate. That's the gate. Because if you look at all the hyperscale players, all the cloud players, they pretty much own their software, either through open source or their own development. Enterprises depend a lot on buying software. So I think in your earlier meeting you were talking about, is it going to be SAP? Is it going to be Oracle? It's going to be gated by the software vendors. Yeah, so what's your take on open source? What's AMD's posture toward open source?
What do you do, if anything, to cultivate that open source environment? Are you a consumer of open source internally? Yeah, we are a consumer of open source. We participate in a lot of the Linux distributions. We work with various folks to make sure that our optimizations are in the open source compilers. We have people working on that. We are also part of Open Compute, which is on the open hardware side of things. Yeah, we covered that; they had the last summit, it was awesome. The question is about the software thing, because again, this is something that we really feel passionate about. We do love the software message. Obviously you can hear that in our conversations, but the developer communities are changing. You mentioned commercial software. Buying pre-packaged software from, say, Oracle or whoever is shifting to open source, sourcing that either directly from the communities themselves or putting kind of a layer on top of it. What does this mean for the developer community? And you guys work with developers, obviously at your level, at much lower levels, down to the chip level, then you go up and down the stack. Everyone in the cloud market wants to move up the stack and have a SaaS model, and have the on-prem SLAs and security. What's going on in the developer community that you can share from your perspective, that's relevant for people to understand? I mean, you can start with how the LAMP stack evolved. That's now supported on all kinds of processors at this point. It's supported on x86. It's supported on 32-bit ARM. It's now going to be supported on 64-bit ARM. So definitely that set of tools is available for developers to develop on. And what we have done on top of it is to make sure that when we introduce the accelerated processing units, there is a very simple model for them to program to.
So we're working with the developer community to get to the right compilers, so that you don't have to figure out whether there's an APU or a GPU underneath; you can just compile it. You mentioned the LAMP stack. Are there any particular languages that are cool, that you like, that are going with this more than others? Actually, there are surprising things when you try to introduce ARM and x86 and APUs into the market. There's a lot of HPC-related code that's available in public, as well as with companies who treat it as their IP. They're very interested in using a combination of APUs and ARM in developing their solutions, either for power efficiency or for GPU acceleration. So you guys are in the ecosystem of HP here. Talk about what AMD's bringing to the table with Moonshot and how that relates to the overall picture. So we have been part of the Moonshot Pathfinder program since 2011, and we've been working on this cartridge with HP for a while. Like I said, our primary focus is to bring the right kind of CPU and GPU technologies together so that for the hyperscale workloads, the HPC-type workloads, and media streaming, we have a great value proposition that we can bring to HP. You see the workload messaging; we heard that in the webcast. Obviously workloads drive a lot of the conversation around what's deployed. Do you agree with that? Absolutely. I think if you look at some of the workloads, you can see people trying to put in discrete GPUs, which we have and we sell, along with our Opteron general-purpose CPUs, and we are also seeing people looking at APUs too. Okay, final question, and then we'll get you the last word in here. Explain to the folks out there, from your perspective. You've been in this from the beginning with the ecosystem and with HP Moonshot. What does the Moonshot announcement mean to the industry, and what is it going to do? What ripple effect is this going to have for folks out there? What do they need to pay attention to, from your perspective?
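The "you don't have to figure out whether there's an APU or a GPU underneath" idea can be sketched in a few lines. In practice AMD pursued this through OpenCL and, later, HSA tooling; the Python below is only a stand-in illustration of the single-source model, where application code calls one entry point and a runtime picks a parallel back end if one is available, falling back to a plain serial path otherwise. The thread pool here stands in for GPU/APU offload; the kernel, function names, and chunking scheme are all hypothetical.

```python
# Sketch of a single-source, device-agnostic programming model:
# one kernel, two execution paths, identical results either way.
from concurrent.futures import ThreadPoolExecutor

def scale_pixels(pixels, gain):
    """The 'kernel': a data-parallel operation typical of media workloads."""
    return [min(255, int(p * gain)) for p in pixels]

def run_kernel(pixels, gain, parallel_backend_available=True, chunks=4):
    if not parallel_backend_available:
        return scale_pixels(pixels, gain)  # plain serial CPU path
    # 'Offload' path: split the data and process chunks concurrently.
    size = (len(pixels) + chunks - 1) // chunks
    parts = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        results = pool.map(scale_pixels, parts, [gain] * len(parts))
    return [p for part in results for p in part]
```

The point of the design is that the caller never branches on what hardware is present; the dispatch decision lives in one place, which is roughly what a compiler or runtime does for APU code.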
I think for folks whose workloads are very scale-out-oriented, this is the right platform to go with, because it allows new levels of scale-out within their existing power and space constraints. We heard the CIO say it's a zero-risk situation, because you can just play with it, and then if it works, you buy it. If not, you don't. So, all right, Suresh, thank you so much for coming on theCUBE. This is SiliconANGLE's live coverage in New York City of the HP Moonshot special CUBE presentation. We'll be back with our next guest after this short break.