Okay, we're back here live in New York City for the special SiliconANGLE Wikibon theCUBE presentation with HP Moonshot. There's a big announcement here, changing the game on the data center, disrupting cloud, mobile, social, and big data. I'm John Furrier, the founder of SiliconAngle.com, and I'm joined by my co-host.

I'm Dave Vellante of Wikibon.org. Mike Major is here, who's the Vice President of Corporate Communications. And you're the manufacturer of X-Gene, at Applied Micro.

Yes, we are.

So X-Gene is a pretty intriguing name. Tell us about Applied Micro first, not a lot of people know who you guys are and what you do, and then we'll get into the whole Moonshot space.

Yeah, well, we're a semiconductor company. We've been around for a long time; the company was founded in 1979. More recently, our really rich legacy is in connectivity products for telecom. In 2004, we got into embedded processor products. And then we've been working on this ARM 64-bit server-on-a-chip product since we conceived of it in 2009, and we finally are at silicon.

So hold it up for the folks there, we see the prop here. I want to ask you a question, hold the board a little higher. There, you got that? All right, you got that, Mick? OK, good. So my question is, we've got a lot of people on the Twittersphere asking how it's software-defined. Can you elaborate a little bit on how that plays into the power and cooling? Dave Donatelli said it's software-defined servers. Is that because of the software on the actual blade or cartridge, or is it more about enabling software developers?

Well, I'll leave it to HP to talk about their specific products. I can tell you where X-Gene is concerned: we have our first generation that is now in silicon. Along with the eight big 2.4-gigahertz cores on that chip, we've got four smaller ARM processors, and they're there to handle both storage and networking offloads.
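The division of labor described here, big cores for compute, small cores for storage and networking offload, can be sketched as a toy dispatcher. All names below are illustrative inventions for this sketch, not anything from an actual X-Gene SDK:

```python
# Toy model of heterogeneous core dispatch on a server-on-a-chip:
# eight "big" compute cores plus four smaller cores reserved for
# storage and networking offloads. Names are hypothetical.

BIG_CORES = [f"big{i}" for i in range(8)]        # 2.4 GHz compute cores
OFFLOAD_CORES = [f"small{i}" for i in range(4)]  # storage/network offload cores

def dispatch(task_kind: str) -> list:
    """Return the core pool a class of task would be scheduled on."""
    if task_kind in ("storage", "network"):
        return OFFLOAD_CORES   # offloaded I/O frees the big cores
    return BIG_CORES           # general compute stays on the big cores

print(dispatch("network"))
print(dispatch("compute"))
```

Because the offload cores are themselves programmable ARM processors, repurposing them in software (rather than fixed-function hardware) is the sense in which the platform is being called software-defined.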
So it would not be that difficult to use that capability for it.

So it's programmable.

Exactly.

So that's kind of what they mean by that. OK. And just in terms of, can I see the cartridge?

Sure.

What's been the feedback you've been hearing about some of the stuff you've been doing?

Well, the interesting thing about where we are, and where we are really differentiating ourselves in the ARM world, is that we designed this ourselves. We got an architecture license from ARM way back when; we were the first architecture licensee for 64-bit. And the reason we did that is we wanted to develop a product that would actually have the capability of the currently deployed infrastructure. In other words, what's out there now is the Xeon class, E3 and E5, and we have designed our product specifically to compete in that range, with that kind of capability.

And you focus on the high-performance sector, right? Can you talk about that a little bit? Because you don't typically associate ARM with high performance at broad scale, but in your little part of the world, you certainly do, right?

Right. And I've been watching the presentations today. They've been terrific in talking about bringing cell phone and tablet technology to servers. We've actually leapt beyond that to, like I said, the currently deployed level of capability. So while there is great growth in the cloud, and everybody's really excited about how the demand from handheld devices is going to cause the cloud to expand, the cloud already exists. And data center operators are not going to take out their currently deployed assets and replace them with something different if that different thing isn't set up to run the currently deployed software. They don't want to go backwards. They're not going to take a step down in terms of capability.
So we felt there was a great opportunity for us to enter the market with this high-level product that, for a data center operator, is really easy to plug in and play.

So can you hold this up again, if you would? Tell us what we're looking at here, and take us through the IP on this card.

Well, I'll talk about our chip rather than the card. This is our X-Gene server on a chip. It has eight cores clocked at 2.4 gigahertz. It's got four smaller cores for the storage and networking offloads. It's got four 10-gig pipes on it, and the idea there was not only to facilitate communication between nodes; you need that for big data. We were presenting at ARM TechCon last November, and Amr Awadallah from Cloudera was there. And he very clearly said, we are excited about this product coming along. Big data needs 10 gig.

Yeah, needs 10 gig. And what about low power? What's exciting about that from a big data perspective, specifically or generally?

Well, we've modeled this on what we expect are real-world cloud workloads. And for our first-generation product that you see here, we feel the power savings are going to be on the order of 50%. So we're not talking about the ultra-low, lowest-of-the-low end; we're talking about giving a very high level of performance, a very high level of reliability, all the server-class capability that's out there today, but doing it at a greatly reduced cost.

Mike, why would Amr Awadallah be excited about it? He's a friend of ours; he's been on theCUBE multiple times. We love Cloudera; scale-out open source is something we love to promote because it's relevant. But why was he excited about this, your comment to Amr?

Well, again, it's two things. One, it's 64-bit, so the addressable memory is not constrained to 4 gig. And secondly, with the big pipes, as you're moving large amounts of data in and out and doing analytics on that big data, it gets to it much faster.
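A quick back-of-envelope on the two claims above. Only the "on the order of 50%" figure comes from the interview; the node wattage, fleet size, and electricity price below are hypothetical inputs chosen just to make the arithmetic concrete:

```python
# Power savings: what "on the order of 50%" could mean at fleet scale.
# All inputs except the 50% figure are illustrative assumptions.

node_watts_x86 = 200          # assumed draw of an incumbent server node
savings = 0.50                # "on the order of 50%" per the interview
node_watts_xgene = node_watts_x86 * (1 - savings)

nodes = 1000                  # hypothetical fleet size
hours = 24 * 365              # one year of continuous operation
usd_per_kwh = 0.10            # hypothetical electricity price

delta_kw = (node_watts_x86 - node_watts_xgene) * nodes / 1000
annual_savings_usd = delta_kw * hours * usd_per_kwh
print(round(annual_savings_usd))   # annual energy savings for this toy fleet

# 64-bit addressing: why "not constrained to 4 gig" matters.
print(2**32)   # a 32-bit address space tops out at 4 GiB
print(2**64)   # a 64-bit address space removes that ceiling
```

With these toy numbers the fleet saves about $87,600 a year in energy alone, which is the kind of operating-cost argument being made here.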
Yeah, so the data pipelining is huge for them, because they're moving a lot of batch data around. Where do you see this going for you guys in terms of next-generation capabilities, and what other capabilities do you have in this architecture?

Well, with the background we have in connectivity and the analog mixed-signal area, we're used to having high-speed products. We have 100 gig in silicon right now, and so the speed capabilities that we're going to see on next-generation products are going to go up. You know, at Open Compute in January, our CEO mentioned that the next step for us is 100 gig.

So instead of bigger, faster, better, everything in this world is now smaller, faster, better. I pulled up the ARM website: ARM processor business model, system on chip. Obviously it's all the rage as to where everyone's going. Share with the folks out there things that you've heard that confuse people, and what you can do to clear up any misconceptions, around how complicated this is and what it means for computing, software-led infrastructure, and applications.

Well, I think there are two dimensions. One is that the cloud is going to grow substantially over time. I mean, everybody you've talked to today has said it's going to be extra innings, we heard. And as that growth occurs, I think what you're going to see is data center operators plugging in very specific, tailored configurations for whatever their workload happens to be. Obviously doing that is going to reduce operating costs. It's going to reduce power consumption. There's a lot of goodness that comes from that. But in terms of the more immediate opportunity, I think we're looking at one where the currently deployed infrastructure can be replaced with X-Gene, and specifically here on the HP cartridge.
The lower-level software that's operating on top of the operating system, the applications, can be recompiled fairly easily, because you're recompiling from 64-bit to 64-bit. And then there's another layer of software that's running on Java, and 64-bit Java development has been announced as well. So much of the application layer in data centers already runs on Java, it'll be really easy to port that over.

So let's talk about that a little bit, because you're basically putting forth this value proposition to IT managers that you don't have to rip and replace to take advantage of X-Gene, and you get the benefits: you maintain high performance and you get low power. So talk us through it. Say you recompile 64-bit to 64-bit, and Java's there. From an application development standpoint, what kind of cycle time, or elapsed time, are we talking to actually port applications to X-Gene, for example?

Well, I think what you'll find in the hyperscale data centers is that it will be easiest for them, because they mostly rely on their own code anyway, and I think they'll be the first movers. There's a second wave, we believe, of enterprise users, and they're going to, I think, wait and see how this stuff works at the hyperscale level. They're going to wait for fully supported software releases. So there, I think, we're looking at a little bit longer. But personally, I think this will deploy in test environments fairly quickly, and I think our customers will like what they see.

Okay, so you've cited web serving, big data analytics, media streaming on your website, Hadoop, and caching. So those will go first. And then how long do you think it'll take to trickle in as hyperscale bleeds into certain parts of the enterprise? How long do you think it'll take to actually start getting picked up in the traditional enterprise, and what apps will go first? Is it going to be SAP, ERP, Oracle, or some sort of fringe apps?
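The porting story above distinguishes layers of the stack by how much work the move from x86-64 to ARM 64-bit involves. A minimal sketch of that triage, with illustrative layer names of my own, not a formal migration guide:

```python
# Rough triage of a data-center software stack when moving from
# x86-64 to an ARM 64-bit server, per the porting story above.
# Layer names are hypothetical labels for this sketch.

def porting_effort(layer: str) -> str:
    if layer == "jvm_bytecode":
        # Java bytecode is architecture-neutral; only the JVM itself
        # needs to be an aarch64 build.
        return "runs as-is"
    if layer == "native_64bit":
        # 64-bit to 64-bit: pointer and word sizes match, so this is
        # largely a rebuild with an aarch64 toolchain.
        return "recompile"
    if layer == "native_32bit":
        # May hide 32-bit assumptions (pointer truncation, etc.).
        return "port, then recompile"
    return "unknown"

stack = ["jvm_bytecode", "native_64bit", "native_32bit"]
print({layer: porting_effort(layer) for layer in stack})
```

This is why the hyperscalers, who control their own code, are positioned as first movers: the "recompile" bucket is cheap when you own the source.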
Again, I think the enterprise customers tend to be a little more conservative by nature. So if you were to ask me how I see things unfolding, I would guess that it will be more of the fringe apps first, and as they bang on it and feel the reliability is there, they'll put more of their core applications on it.

Yeah, but the world never thought that x86 was going to run database applications, right? And look at it now. It looks like a water-cooled mainframe when you look at it, right? So a lot of people fully expect that ARM-based, low-power processors are going to essentially eat into the enterprise. You would agree with that, yes?

Yeah.

It might take decades, but...

Well, I don't think it'll take decades. It'll take years rather than decades.

There you go. We're honing in, Mike. So we're getting a little Twitter action going on here, and one of the comments that's getting traction is: anything that's measured by rack density and power consumption doesn't qualify as software-defined. And that's obviously coming from folks who are trying to compete in that software-defined data center world. But the software-defined data center is kind of the destination for a lot of the vendors, to get to this cloud operations model where you have orchestration in software. What do you think about that market, this whole software-defined world, where you have massive scale-out and hyperscale as the destination? Some are saying it's a race-to-hyperscale architecture, and whoever doesn't get there will have to buy someone else's infrastructure, the cloud, et cetera. What's your take on that marketplace?

Well, let's talk about the software-defined aspect of it. I mean, the most immediate and tangible, I think, is software-defined networking, and that exists today. And you've heard other vendors talk about their big switch, their switch fabric capabilities and all this stuff, right? So that is definitely here.
I mean, the way we look at it is, when you take that switching capability and bring it onto the same die where you have your processor and other I/O going on, all of a sudden it's more than a software-defined network. It's a software-defined server, a software-defined data center. So I think it's going that way. I also think that one of the things the data center operators want to have is flexibility and malleability with their deployed hardware, so to the extent they can go in at a software level and reconfigure and repurpose some of this stuff, there's value there. They definitely want to do that.

They want that malleability in their data centers. But that's the trend: the software-defined data center. Thanks for coming on theCUBE, we really appreciate it. That's the hot area, obviously, software-defined. There's a lot of marketing hype going on, but like OpenStack, which was a lot of marketing at first and now has a lot of legs, so it goes with software-defined networking and systems on a chip. All this is great stuff, low power. Again, I think this is a great announcement from HP, and I think this is going to be something that a lot of people are going to be following very, very closely. And we're going to continue to break it down. Thanks for coming on theCUBE. We appreciate it. We'll be right back with our next guest after this short break.