So, early prototypes of Cortex-A9 processors from ARM, and now that we're getting access to the early build-out of the Calxeda processor, we're working with Calxeda. We've been working with them for about 18 months, across FPGAs; these are very early. Before you actually have silicon, you can do software modeling of what a processor will look like. So we've been working on this for a while.

So you've got your own kind of little lab going, sort of a virtual lab, is that right? Sort of what HP is setting up in Houston. Have you started that?

On a very small scale, but the great news about the Moonshot program is that we get access to hardware on a scale that really allows the whole ecosystem to start benchmarking and then optimizing. This is the beginning of a several-year journey.

So talk about how you're going to use that discovery lab. It sounds good; they showed a little video of a guy walking around, it looks like a real lab. Are you guys going to go there physically, or remotely?

Both.

What does it mean for you guys?

We've got the initial builds up, and there's a lot of planning going on at the moment, but our goal is to take the daily build of Ubuntu Server. When we're in development mode, we publish a build of Ubuntu Server every day with all the applications, and we'll be testing that in these labs. The goal in the next six months, with active benchmarking, is that every time we come across a problem, whether in the Linux kernel, in PHP, or in Apache, we identify it and automatically send that bug, with a fix, up to the upstream community. So this is really about accelerating the focus of upstream developers on issues in these low-power servers, and also giving them access so they can actually debug and fix things ahead of this finally coming to market.

Okay, so you'll be active in that lab for the daily builds?
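The loop described here, benchmark the daily build, catch regressions, push each one upstream with a fix attached, can be sketched roughly as below. This is a minimal illustration of the triage step only; the function name, result format, and component names are my assumptions, not Canonical's actual tooling.

```python
# Sketch of the daily-build triage step: given benchmark results for each
# component in the stack, produce an upstream bug report for every failure.
# The data shapes here are illustrative assumptions.

def triage_benchmarks(results):
    """results maps component -> (passed, detail).
    Returns a report dict per failing component, sorted by name."""
    reports = []
    for component, (passed, detail) in sorted(results.items()):
        if not passed:
            reports.append({
                "upstream": component,  # e.g. "linux", "php", "apache2"
                "summary": f"{component} regression on ARM daily build",
                "detail": detail,
            })
    return reports

# One hypothetical day's results from the lab:
daily = {
    "linux":   (True,  ""),
    "php":     (False, "segfault in opcode cache on armhf"),
    "apache2": (True,  ""),
}
print([r["upstream"] for r in triage_benchmarks(daily)])  # → ['php']
```

In practice the report would be filed against each project's upstream tracker; the point is that the daily cadence turns every build into a regression test for the whole stack.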
And we also have a large team in Texas; most of the server team is based there.

Can you talk about some of the dynamics around Linux right now? Obviously Linux, because of open source and commodity hardware, was a boon for developers: use the gear, make it work, rack and stack, get the processing power. Now you mentioned the scale issues. What are you learning in the program you've been prototyping? What are some of the things you're seeing in the use cases that you can share with folks?

Sure. The first question was how this differs from the old way, which was essentially rack-and-stack, versus some of the new stuff. I think it's a change in how Linux is being used. It's always been very popular with developers. The last 10 to 15 years have really seen Linux replace proprietary Unix on Wall Street and in big enterprise. And now we've got this new class of hyperscale customers who use Linux in a fundamentally different way, at a much larger horizontal scale. What are we doing, therefore, to address those specific issues? Well, there's the very basic work of making sure that all these apps and the OS work beautifully on the hardware. Then there's a separate set of management problems. If you're managing power across, I think it was 2,880 cores that Paul announced today as possible in a rack, you need a different approach to management software, so that as you drop to using only 50% of that capacity, you're literally turning off cores dynamically and then waking them up as necessary. So sure, there's a ton of work just to make sure the stack works beautifully on these new SoCs, but there's also a different way of thinking about the management of multiple nodes.
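The core-parking idea described here maps onto a real Linux mechanism: CPU hotplug, exposed through `/sys/devices/system/cpu/cpuN/online`. A minimal sketch of the decision logic follows; the headroom threshold and function names are illustrative assumptions, not any vendor's actual management software.

```python
# Sketch of dynamic core management: decide which cores to park at a given
# utilization, then (optionally) write "0" to the kernel's hotplug interface.
# Thresholds are illustrative; cpu0 typically cannot be hot-unplugged.

CPU_ONLINE = "/sys/devices/system/cpu/cpu{n}/online"

def cores_to_park(total_cores, utilization, headroom=0.25):
    """Return the ids of cores to take offline, keeping `headroom`
    spare capacity above current utilization."""
    needed = min(total_cores, max(1, int(total_cores * (utilization + headroom))))
    return list(range(needed, total_cores))

def park(core_ids, dry_run=True):
    """Offline the given cores via sysfs (dry_run just reports)."""
    for n in core_ids:
        path = CPU_ONLINE.format(n=n)
        if dry_run:
            print(f"would write 0 to {path}")
        else:
            with open(path, "w") as f:  # requires root
                f.write("0")

# At 50% utilization on a 2,880-core rack, a quarter of the cores can be
# parked while still keeping 25% headroom:
print(len(cores_to_park(2880, 0.50)))  # → 720
```

A production manager would also need hysteresis (so cores are not flapped on and off) and per-node coordination, which is exactly the "different way of thinking about management" the conversation points at.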
And I think you'll see both open source solutions to that and some proprietary solutions coming into the market.

So Paul this morning put some high-level numbers out about what we think is achievable.

I think it's too early to put a hard number on it; we want to do more benchmarking before we really nail our claims down.

But it's significant.

Oh, it's very significant. Because remember, it's not just power at the CPU, it's power in cooling. It's weight.

The numbers are fantastic: ten racks down to half a rack. You can maybe nickel-and-dime the numbers, but it's order-of-magnitude significant.

Thinking about massively parallel computing using low-power architectures, whether they're ARM or, frankly, Atom, used in this different way, we think can yield enormous savings. Frankly, if we were only talking about a 10 or 20% saving in power here, it wouldn't be worth the ecosystem thinking about a massive change in architecture. It's the fact that we're looking at such a huge potential saving that makes it worth everyone caring, even a developer who's responsible for Perl or PHP. This is such a big saving, it's worth everyone caring. And that's the good story.

What about the point you just made about power and cooling? Because on average, every dollar spent running the IT equipment is another dollar spent cooling it. That's probably not the case in these large web properties; they're probably a little more efficient than that. But still...

It's a big issue. I meet CTOs at banks sometimes, and they'll talk about where the new data center is going, and the biggest constraint is how close they can get to a piece of the grid that has the available electricity. So this is also about getting much more bang for the buck out of existing data centers.

We've been talking a lot today about customers innovating with IT.
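The dollar-for-dollar power-and-cooling point is effectively a statement about PUE (power usage effectiveness, total facility power divided by IT power): a dollar of cooling per dollar of IT implies PUE ≈ 2.0, so savings at the server multiply at the facility. A back-of-envelope sketch, with kW figures that are my own illustrative assumptions rather than numbers from the conversation:

```python
# Back-of-envelope: at PUE 2.0, every watt saved at the IT load saves
# another watt of cooling/overhead at the facility. Figures illustrative.

def facility_power(it_kw, pue=2.0):
    """Total facility draw (kW) for a given IT load at the stated PUE."""
    return it_kw * pue

legacy   = facility_power(100.0)  # say, ten conventional racks drawing 100 kW of IT load
moonshot = facility_power(10.0)   # same work on low-power nodes at 10 kW of IT load

print(legacy - moonshot)  # → 180.0
# 90 kW saved at the plug becomes 180 kW saved at the facility.
```

Hyperscale operators run closer to PUE 1.1-1.2, as the interviewer concedes, which shrinks the multiplier but not the underlying order-of-magnitude argument.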
You remember Nick Carr's book, Does IT Matter? And of course, to guys like Google, it matters a lot; they're printing money with it. I'm envisioning the data center as an ATM: the more you can get out of that ATM, the more money you're going to print. That's really what we're talking about here, isn't it? These data centers are basically profit centers.

I think density of compute within existing data centers is a big part of this. It's greenfield as well, but existing data centers are a big part. The other part, back to Nick Carr's book, is that whether you look at this type of compute architecture or at big data, these are solutions to problems that occurred first in hyperscale companies: at Google, at Yahoo with Hadoop. What we're now seeing is people taking big data and asking how it applies to the traditional enterprise. So innovation that occurred at a small number of companies is now flowing out to the mainstream.

Well, the premise of the book was that IT cannot give a sustainable competitive advantage, but data in many ways potentially changes that dramatically, doesn't it? Your ability to store more of it, to do more analysis on it, and to do that analysis faster: velocity and volume are the big drivers. It just shows you can get incredibly wealthy from writing a book whose premise is fundamentally flawed, as long as you package it right. You can be successful.

So, we're here with Christopher Kenyon of Canonical, who are behind the Ubuntu Linux distribution, and we're here at HP Labs. What's next for you guys in this space? Big question.

We have three big focuses. Cloud: we've just announced with HP that HP's public cloud is running entirely on Ubuntu, both as a guest and as a host OS. So expect to see more announcements around OpenStack and Ubuntu Cloud.

And it's on OpenStack.
If you're building out on OpenStack, the default OS underneath it right now is Ubuntu. There are a host of historical reasons for that, but expect to see lots more news around Ubuntu and Ubuntu Cloud, and then Ubuntu and big data.

Should we just talk about OpenStack? Because Ubuntu is fundamental to OpenStack, right? Is OpenStack ready for primetime, Christopher?

We know of many customers who are deploying on Diablo, and we certainly think the Essex release will be. It comes out just before our release; we work very closely with the OpenStack community, so as soon as their final version is out, it's ready in time to ship in the next version of Ubuntu. But the Essex release is certainly, I think, the one that everyone's putting the big money on to be production-ready. There are a lot of Canonical developers who now work at Rackspace and at Nebula, and there's a lot of great-quality talent moving around at the moment.

Yeah, there's a lot of energy behind it. A lot of Rackspace people here at HP now, from what I hear.

I hear the same thing. John Purrier and a few others.

John Purrier, not to be confused with Furrier. I'm John Furrier. Thanks for coming on theCUBE; appreciate your commentary.

Thank you very much for your time.

Have a great rest of the day.