from Seattle, Washington. Extracting the signal from the noise. It's theCUBE, on the ground at LinuxCon North America 2015. Now, here's your host, Jeff Frick. Hi, Jeff Frick here. We are on the ground in Seattle, Washington at LinuxCon North America 2015. Wanted to come up here and get a little taste for what's going on. Really the granddaddy of them all in terms of enterprise open source projects. We're really excited to be joined next by Maury Whalen, VP and director of the Open Source Technology Center at Intel. Welcome. Thank you, Jeff. I'm glad to be here. Absolutely. So people who aren't familiar with Intel's open source stance might say, what is Intel doing in open source? I thought they're all about putting x86 chips in computers. We do both. Open source is extremely strategic to us when you have a whole community of people, and Linux started in, what, 1991, that has grown up on Intel architecture. We were extremely fortunate there that we made some of the key right decisions, giving out more documentation and more availability to our hardware, right? I think one of the things that people really need to know about how strategic open source could be for them, right, is how it scales. So if you take, for example, I have a team that does a lot of Linux kernel CPU and chipset development. When we're pushing things up to the Linux kernel, it's like, okay, we pushed it into one place. And then what happens is, anybody that's doing something on an Intel platform, they would have that code available in that one place, and the key is, they have the code. So if they needed to make modifications for some little thing, you know, they added a different peripheral to their platform, they have the code available so that they can do the support and the changes themselves. But then it scales. So again, pushing things to the kernel once, it scales to hundreds of millions of people, right?
So the opportunity for people then to take the kernel, make a product out of it, and support it on Intel architecture is a huge scale opportunity. And it really extends the core Intel architecture to a whole other level, if they can get in there to the kernel and make those changes. It does. And it also gives, you know, when you look at it too, if somebody's going to do something innovative, chances are they're going to give it back to the community. Or if they come back into the community where we're working and doing things as Intel engineers working out in the open, they might come back and, you know, give something very innovative. And then we can also take advantage of that, one, helping them maybe to drive something that's going to drive Intel architecture, or we can just participate with them on their cool, innovative new idea. Yeah, and talk about that in terms of using community-based innovation back inside of Intel that goes into some of the core products that you make, because that's different than having your own people heads-down, you know, working away on a project, pushing code, pushing gold tapes back in the day, right? Bringing the community in is very different, much quicker, a lot of different directions it can go. Right, you know, a couple of ways to look at this. I always say, if you're not out collaborating where the largest open source base is actually happening, if you're not out working with them, innovation is going to happen without you. And if you try to do something and not do it in the open way, if you're using open source software, you know, are you just going to be a one-hit wonder, and then everybody else is just going to sit there and innovate and progress again without you, right? So that can always happen.
And then you also see other things. So, you know, containers is an enormous buzzword, even though containers have been around for a little while. I think when Docker, and then, you know, CoreOS with Rocket, and now there's the Open Container Initiative that the Linux Foundation started, when all this starts you see, hey, there are new usages going on, because the cloud has brought in new usages, and it's just like, what are people doing with this? And, you know, we look at containers and we're like, okay, we know the good and the bad of them, so we've evaluated that at Intel, and then one of the things that we did, we said, you know what, we could probably take this usage that people are now using in the industry and do something, and we did something called Clear Containers. So it's part of the Clear Linux project, clearlinux.org, and we're using virtualization to bring a container usage that provides some of the security that you don't often get with normal container usages. So again, taking advantage of, hey, look what's going on in the community, right? Hey, that's kind of interesting. Maybe we could do it a different way, but we wouldn't know about it unless we put it out there and collaborated with people. So it's funny you talked about scale, scale, scale, because one of the big topics in computing right now, obviously, is data centers, right? Everything is moving to the data center. You've got a little local app on your phone, driven by mobile, but a lot of horsepower moving to data centers. You've got all the classic problems with compute and moving data, heat and cooling and this, that and the other, but that said, it's going to continue to grow. So talk a bit about what Intel's doing in the data centers these days, how that is a priority relative to before, when it was shipping PCs. Now shipping PCs is probably not at the top of the list. So a couple of weeks ago, we announced a project and an initiative called Cloud for All.
And really what Cloud for All is about is enabling people to use open software to deploy tens of thousands of clouds. And one of the big areas of investment that we're putting into this is OpenStack. We're starting an innovation center with Rackspace down in San Antonio, and it's gonna be open. The work that comes out of this is definitely gonna be pushed back to open source. So when you look at OpenStack and trying to make it, hey, how can we make this more usable, more installable? You know, when you look at the enterprise side of it, Intel is heavily involved in pretty much four working groups as part of the OpenStack Foundation. One of them is enterprise readiness. So when you're talking about live migration, when you're talking about rolling upgrades, and again, just the installability of it, right? We're involved in the telco working group, so meeting the needs and requirements of telco users of OpenStack. We're involved in the product working group, the product management working group. So when you look at, hey, OpenStack is a collection of products, or projects, what does that mean and what does it make, right? And so giving a little bit more focus into what these projects are actually making. And then we're also heavily involved in the diversity working group. And I always say, Intel did make a huge investment and an announcement last year into diversity, but what we really wanna do here is provide inclusion. So I don't want 20 people just like me running around. I wanna work with people. I wanna work with other geos, other genders, other companies that think differently and maybe challenge me in a different way. Which is about bringing in inclusion and making people feel safe in the environment that they're working in. But definitely OpenStack is something that we've put a lot of resources and investment in, and you're gonna see a lot more from Intel in the coming months. So are we gonna see you at OpenStack Silicon Valley next week?
Not there, but there is the OpenStack Summit that will be in Tokyo in October. That's right, okay. And we were at the one in Vancouver, and we were at the one in Portland before that. So we've been involved in OpenStack all along. It's a great project. So I would be remiss if I didn't at least touch on Moore's Law. It's the gift that keeps on giving. The Intel IA-32 architecture just continues to be more than good enough and scalable, and really the basis for a lot of these ongoing improvements. I wonder if you can talk about, really, at the core, right? At the core it's IA-32, and it's driving now this data center and cloud-based system, which at the end of the day, somebody said, I saw a bumper sticker: the cloud just means somebody else's computer. It's still got to be driven by compute, and how that just continues to really be at the core of delivering these data center solutions. Right, so when you look at the amount of feature-rich silicon that we deliver, it is really some of the foundational building blocks of what actually makes what I would call an open data center, right? When you look at things that maybe we're doing with compression, with vector rendering, those types of things. These things, when we enable them at the core from the hardware, so hardware features that get enabled up into the operating system stack, they can then get exposed at the upper layers. We're also doing things where, when you look at the different usages, when you look at what Intel silicon actually provides, it's just like, hey, how can we take this and do something a little bit different, and how are people going to use it? And things going on in the cloud are now really trending more into, hey, what's going on with networking? What's going on with storage? I mean, a lot of the emphasis that's really been going on with cloud, it's compute, compute, compute. But what happens with compute when you stick storage next to it, and what are you going to do with the latency?
So there are definitely a lot of different challenges and opportunities that we're still facing. It's just going to be an exciting area to keep working in. Yeah, great time. Well, Maury Whalen, thanks for stopping by from Intel. It's good to see you. Great, Jeff, thank you. All right. I'm Jeff Frick, you're watching theCUBE. We are on the ground in Seattle, Washington, at LinuxCon North America 2015. See you next time. I'm Jeff Frick.