We're here in the middle seat, doing post-analysis of today's game-changing announcement: HP announcing Project Moonshot, which is going to change the game in server technology, chips, integration, footprint, energy savings, and power management for the data center. So you were on stage there. Is this the real deal?

Well, I think the quarterback was really on target today. Sorry, wrong sport. It's a West Coast offense for HP, taking a first step with ARM, using this new architecture at server scale.

So you're an analyst, you're out there in the industry poking around, talking to all the vendors. Is this the real deal? Tell us your analysis. We've got a big audience waiting for your comments. You know, it was a small, intimate event downstairs, but we've got 1,600 people watching. So take us through your take. Explain what happened. What does this mean?

Well, I think the significance is that the underlying problem has been around for a long time: servers consume lots of power, and there's continual pressure on people in big data centers to reduce power and improve workload efficiency. We've basically been, trapped is the wrong word, in an environment where it was assumed you'd have an x86 instruction set and a processor from AMD or Intel. That's basically the recipe people have been working with. What we've got here is a confluence of two or three really significant factors coming together. One is a really good underlying processor architecture, engineered by people who understand servers. Two is a system-level partner like HP that can take a good technology and push it out into the market with a tremendous base. And three is the convergence of all the software environment needed, things like a Linux distribution and tools. So yes, I think this is a significant event.
It's not going to instantly displace x86 servers in the data center, but I think it stands a pretty good chance of capturing a significant chunk of a rapidly growing niche, which is Web 2.0, for want of a better term: these very large-scale web-facing environments where people are running thousands and thousands of servers on a single application.

Why wouldn't you use a low-power chip like this for those applications? I mean, is it going to dominate, like 90% of that marketplace, or are there reasons why you wouldn't use it?

There are reasons why you wouldn't use it. Right now a lot of its workload characteristics are unknown, so there's a significant issue in characterizing the workloads, understanding what works and what doesn't. But once it's established that a given workload works well on this, particularly among the small number of very big customers who own their entire technology stack, there'll be tremendous pressure on them to start using it, because it's ultra-efficient. The other significant thing here, which was kind of quietly stated but is happening, is that HP's Redstone system architecture is redefining the layers of the system engineering stack. The architecture they've got with Redstone accepts the Calxeda modules at the fabric level, and HP is then responsible for the rest of the system. What that means is that this is a way for HP to integrate successive generations of technology, as long as they conform to that fabric definition. So HP has now created a platform where they have a tremendous amount of agility going forward in terms of integrating either new things like ARM or even x86, which they've said is on the roadmap.

Does this change any of the go-to-market plans for product lines? Obviously purpose-built technology like this has specific benefits, as you mentioned, but with Redstone and HP's experience, is this going to change getting products to market faster?
I think definitely. It has to, because if it doesn't, they will have been willfully negligent, and none of those organizations are like that. In fact, I actually worked for HP for four years; I was director of BladeSystem strategy from 2006 to 2010, and I know several of these people. It's obvious to me from the people I'm meeting that I used to work with that the hyperscale program has skimmed some of the best and brightest from the other organizations. So not only is this a serious program, but they've seriously staffed it with some impressive talent.

Well, you know well from your blades background that a lot of the messaging around blades has been shared resources. Absolutely. This is that on steroids, isn't it?

Absolutely. The notion of factoring out common elements so they can be shared goes back to the earliest iterations of the blade server cycle, and in an x86 technology platform that was a very strong message. This is taking it, as you say, on steroids. And the ARM architecture itself, for the processor geeks in the audience, is very interesting, because it was designed from the beginning to turn on and off at short intervals and to power on and off chunks of itself, so it's a great architecture for low-power servers. The software was always the barrier, and that's one of the elegant pieces of this program: the Discovery Center and the software partners. They're using it as a learning exercise, so HP is going to contribute to the industry-wide IP in terms of what works on these platforms, and at the same time they're going to be first to know about it.

So learning on one side, but also, on the commercial aspect, you mentioned specific use cases. Can you elaborate a little more on that?
First of all, like I said, it's a work in progress, but there are kinds of use cases that seem to make sense if you look at the performance characteristics of the ARM core. You take out the things that are heavily CPU-intensive: it's probably not going to be a derivatives calculation engine, and it's not going to do HPC structural analysis. But there's true lightweight web serving: static web pages, even lightweight dynamic web pages, the kind of thing you see by the hundreds when you log into a site. I'm not necessarily saying these are customers, but when you log into Facebook, Amazon, etc., you see hundreds and hundreds of objects served up on a lightweight web page. Some layers of the mid-tier, and that's where it gets a little dicey, because I understand some of the Java workloads work well and some don't. Static web; media distribution and streaming, which keeps the processor fairly lightly loaded because it's constantly just handing out pointers to blocks of I/O; Hadoop, big data, anything that operates embarrassingly in parallel. So there's a lot of stuff that's going to work well, and there'll be a thrashing period.

Okay, we have an analyst giving us the scoop here. Richard, thanks so much. We want to get the man of the hour here, who gave the keynote, Paul. Thanks so much for jumping inside theCUBE. Thank you.