Narrator: From Seattle, Washington, extracting the signal from the noise: it's theCUBE, on the ground at LinuxCon North America 2015. Now here's your host, Jeff Frick.

Jeff: Hi, Jeff Frick here with theCUBE. We are on the ground in downtown Seattle, Washington for LinuxCon North America. We came up to the Great Northwest to check out what's going on, and there's a lot of excitement. Excited for this next segment to be joined by Keith Packard, Distinguished Linux Technologist at HP. Welcome.

Keith: Thank you, sir.

Jeff: So you're working on some fun and exciting things. You're working on The Machine.

Keith: Yeah, in January I got an opportunity to move to HP and start working on Linux on The Machine.

Jeff: I've been to many HP Discovers and seen many presentations on The Machine, but for the people out there who aren't familiar with it, give them a quick overview. What is The Machine? What's the vision? Why should they start paying attention?

Keith: The Machine is actually a collection of technologies. The basic idea is that we have three pieces. We have silicon doing computation, our traditional CPUs. We have photonics, which is silicon photonics, an optical interconnect directly between integrated circuits. And then we have the new HP memristor, which is a high-density, low-power storage technology. We're combining those together into an enormous machine with a large amount of compute and a large amount of memory. The initial version of The Machine we're building right now will have 80 nodes with four terabytes of memory on every node, for a total of 320 terabytes and 80 CPUs.

Jeff: One more time on the stats there?

Keith: Yeah, 320 terabytes of memory and 80 CPUs in a single rack.

Jeff: So you guys are really talking about next-gen along all three cores of what makes up computing: compute, storage, and moving the data.

Keith: That's right. We're building massively parallel computation, a direct silicon-to-silicon photonic interconnect, and our next-generation non-volatile memory storage, the memristor.

Jeff: Fun and exciting stuff. You're here at LinuxCon. What percentage of the software that runs The Machine is going to be open source?

Keith: All of the operating system work we're doing on The Machine is going to be under the GPL, so everything is free software.

Jeff: Awesome. And what are some of the problems you're working on, even out to the application space, in terms of how you're going to use all this compute power? What are some of the things you guys want to tackle with this monster?

Keith: Well, one of the big opportunities we have is that we have a phenomenal amount of memory. 320 terabytes is more than most people put in their typical telephone these days, and the Linux kernel's not quite ready for that. So we have a lot of work to do in the operating system itself to get it ready to support that amount of memory. We're also building a massively parallel machine with up to 80 CPUs, and our optical interconnect is going to be very different from what people are used to right now, in that it's no longer cache coherent. So there are going to be a lot of software challenges in the application space to deal with the new memory architecture.

Jeff: So what does that mean, "no longer cache coherent"?

Keith: It means that when one CPU writes data to memory, you have to communicate with the other CPUs to tell them, "By the way, I may have changed something." You have to do a lot of work in software to make sure the data is actually transmitted between the CPUs correctly, and that there aren't any data collisions or data corruption.
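To make the coherence point concrete, here is a minimal sketch of what that explicit software bookkeeping can look like. It assumes an x86-style cache-line flush (_mm_clflush) purely for illustration; The Machine's actual fabric primitives are not described in the interview.

    /* A minimal sketch of explicit software coherence over shared,
     * non-cache-coherent memory. Assumes an x86-style cache-line
     * flush (_mm_clflush) purely for illustration; the real fabric
     * would expose its own flush/invalidate operations. */
    #include <stddef.h>
    #include <stdint.h>
    #include <immintrin.h>

    #define CACHE_LINE 64

    /* Push every cache line backing [buf, buf+len) out to memory. */
    static void flush_range(const volatile void *buf, size_t len)
    {
        uintptr_t p   = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE - 1);
        uintptr_t end = (uintptr_t)buf + len;
        for (; p < end; p += CACHE_LINE)
            _mm_clflush((const void *)p);
        _mm_sfence();   /* order the flushes before any later stores */
    }

    /* Producer: write the payload, flush it, then publish a ready flag.
     * Without hardware coherence, none of this happens automatically. */
    void publish(volatile uint64_t *payload, volatile uint64_t *ready,
                 uint64_t value)
    {
        *payload = value;
        flush_range(payload, sizeof *payload);  /* data reaches memory first */
        *ready = 1;
        flush_range(ready, sizeof *ready);      /* consumers poll this flag */
    }

A consumer on another CPU would similarly have to invalidate its own cached copies before reading, which is exactly the kind of bookkeeping that coherent hardware normally hides.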
Jeff: So, you're distinguished; you get to play with the toys before they're ready for GA. What is the timing on this thing? And again, what are some of the applications you envision that you can tackle with this type of horsepower that you couldn't tackle before?

Keith: This is for enormous data sets. We have the advantage that all the memory is shared among all the processors. So if you have a phenomenally huge data set, a large graph problem, a big sorting problem, a big data analytics problem, that's what this machine is really designed to tackle.

Jeff: But this is big, big, big data. This isn't your typical big data, right? This is a pretty specialized machine. This isn't what people are trying to do every day now with consumer sentiment and operational data and systems of intelligence; you guys are moving way, way beyond that. So I would imagine it's got to be weather, crazy serious physics, those types of problems.

Keith: Absolutely, anything with a lot of data. We're talking about being able to do data analytics on something like the engine data from every airplane in the world flying simultaneously.

Jeff: One more time?

Keith: All the engine data from every airplane flying all over the world, simultaneously.

Jeff: Yeah, because the jet engine gets brought up all the time; it's kind of the classic big-data use case, right? And we're close to Boeing, so I guess it's only appropriate. A 747 throws out so much data per hour, per trip, but you just said every airplane, all of the data.

Keith: Exactly. But we're also doing a lot of work in security to allow us to partition the machine efficiently and have multiple applications running simultaneously on the hardware, secure against OS and software problems. So we're doing a bunch of work in security and a bunch of work in scalability, obviously.

Jeff: From a technologist's point of view, what gets you out of bed in the morning? To have the power of this thing you're building, to see the world in a different way than you saw it before, and to tackle challenges that before just weren't even within the realm of possibility?

Keith: Well, in this particular case, The Machine is actually solving some very long-standing problems in software that we've had for years and years and years. If you look at what a typical application developer spends all of their time doing, they're spending it figuring out how to use their limited memory resources as efficiently as possible and getting data in and out of the machine. With The Machine's architecture, we don't have that problem anymore. Typically half to three-quarters of an application's runtime is just going to vanish, along with about half the development cycle. So we're talking about being able to change the software development paradigm, and being able to change the scale of problems that people are able to solve.

Jeff: Okay, I'm going to put you on the hook: when's GA?

Keith: Yeah, I'm an engineer. I'm not allowed to answer that question.

Jeff: You're distinguished; you get to play with it long before even the regular engineers do. All right, well, super.
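To illustrate the development-paradigm point, here is a hypothetical sketch of the memory-centric model: instead of shuffling a data structure through read() and write() calls, the application maps a large persistent region and works on it in place. The device path /dev/fam0 and the sizes are invented for illustration; they are not The Machine's actual interfaces.

    /* Hypothetical sketch: operate on a huge persistent data set in
     * place via mmap, with ordinary loads and stores instead of
     * explicit I/O. /dev/fam0 is an invented device name standing in
     * for some fabric-attached-memory interface. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = (size_t)1 << 40;        /* a 1 TiB slice of one node's 4 TB */
        int fd = open("/dev/fam0", O_RDWR);  /* hypothetical device */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        uint64_t *table = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
        if (table == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* No serialization step: the mapped region is the working data
         * structure, so "loading" and "saving" simply disappear. */
        table[0] = 42;

        munmap(table, len);
        close(fd);
        return 0;
    }

The design point is that the load/store path replaces the entire I/O path, which is where the claimed half to three-quarters of runtime goes.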
Jeff: Well, thanks for stopping by.

Keith: Absolutely. Thank you for the opportunity to chat with you today.

Jeff: Thanks for being here. Jeff Frick; you're watching theCUBE. We are live from Seattle, Washington at LinuxCon North America. Thanks for watching.