theCUBE at IBM Edge 2014, brought to you by IBM.

Hi everybody, we're back. This is Dave Vellante with Jeff Frick, and this is theCUBE, our flagship program. We go out to the events, we extract the signal from the noise. Dylan Larson is here, the director of Xeon product lines at Intel. Cube alum. Dylan, welcome back, good to see you again.

Good to be here.

So tell us, what's new with Xeon?

Xeon is doing great. Business continues to do really well in the data center for Intel. We launched our four-socket product earlier this year. We've been on this relentless pace of driving more and more innovation into the x86 product lines, and things are going really well.

Do you ever get bored, like the UCLA Bruins on the run that they made, where they just kept winning and winning and winning?

Not at all, not at all.

You guys don't get complacent, right? You can't, because it's in your DNA: paranoid.

And I think we live by that. People wonder, do you really live by "only the paranoid survive"? We do. We don't get bored. We continue to fight for every design win, every engagement with customers. So for us, it's very much about continuing the pace.

Well, I would imagine, because there are so many disruptions going on now. You're seeing flash, you're seeing software-defined, you're seeing cloud, mobile, social, big data. All of that stuff has direct implications for how you guys have to behave, where you put your R&D money, how you evolve the ecosystem. So how have all these mega trends, these buzzwords that are filling our brains, affected how you operate, how you behave, how you go to market, how you spend on R&D? Talk about that.

Those are all really good things for us, because if you look at increases in I/O performance, things like flash memory, all those things unlock the pathway to the processor. So from that perspective, it's good for us.
Because now we can unlock all the potential that we can build into the CPU. So that part is all goodness. And the more innovation that happens around the product, around the core microprocessor, the better for us. We're also investing more broadly across the portfolio to take into account things like the transition to software-defined infrastructures, in lots of ways: things we can do in the CPU, but also in the other components that connect into those platforms. So from our perspective, these disruptions are great times of more and more innovation.

How about... go ahead, Jeff.

I was gonna say, before we came on camera, I was talking to Dylan. My first tech job was at Intel, and I was working on the 64-bit architecture that was gonna be so much better than the 32-bit architecture. But these guys just kept coming after us from behind. We couldn't get out ahead of the curve fast enough from the x86 guys, our own internal guys. So this relentless pace of innovation goes back to, like, '96, right? '97. We were trying to change the game, and our own internal guys kept up this relentless BKM, best known method, an Intel process. They run the business like they make a microchip: design it, shrink it, optimize it, redesign it. It's amazing. But the other thing we talked about is: can ARM do to x86 what x86 did to RISC? It's something I've talked to Floyer about, whether the sheer volume of ARM chips shipping in phones can carry over. Well, Floyer's premise is no way, it can't happen in the data center. Why not?

I think one reason is the huge installed base of software applications that people are used to developing on, a massive ecosystem that has worked on the existing microarchitecture. When we look at it, and we talked a little bit about this, the dynamics are almost inverted. Look at the number of ARM players trying to go after this market, fifteen-plus or something the last time we counted. So it'll be hard for them to get critical mass as well.
That doesn't mean we're not gonna keep up the relentless pace on lower power, on new designs to support these big-scale types of architectures. We're taking it very seriously. Like I said, we're the UCLA Bruins, but we're not giving up. We keep moving.

Now, you talked offline about some stuff you're doing with the architecture to enable private cloud. Can we unpack that a little bit? What are you doing specifically?

One of the things we've looked at for a long time is how much of the power profile of the system the CPU takes up. It's a large portion. It's also probably the most valuable component in the server platform. But there are ways we can drive more efficiency, either by adapting dynamically to the workload based on what's required, or by exposing, in the case of things like the software-defined infrastructure concept, low layers of instrumentation, telemetry, and control structures, so that converged infrastructure can be much better optimized. For example, I could tell you what it would take to deliver more service quality to a particular service on the system, or more power efficiency on the system.

One of the things we talk a lot about is this idea of the noisy neighbor: the VM that sits next to you, taking up all of the resources on the system and making your service suffer. What we've looked at is what we could do, at layers lower in the system architecture, to identify that noisy neighbor and mitigate its impact on the service quality of the overall system. So this idea of service quality, or quality of service, is a big area where we think we can do more. It's not a pure performance gain, but it is about optimizing the way workloads get deployed in the data center.

So that leads me to a question on security. David Floyer, again, the guy who predicts there's no way Intel can lose in the data center.
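The noisy-neighbor idea Larson describes can be made concrete. Below is a minimal, hypothetical Python sketch, not Intel's actual mechanism (which relies on hardware telemetry such as last-level-cache occupancy counters): given per-VM samples of some contended resource, flag any VM taking an outsized share. The `find_noisy_neighbors` helper and the telemetry numbers are invented for illustration.

```python
# Hypothetical noisy-neighbor detector. `samples` maps a VM name to a
# telemetry reading for a contended resource (e.g. last-level-cache
# misses per second, as a hardware counter might report it).

def find_noisy_neighbors(samples, share_threshold=0.5):
    """Return the VMs whose share of the total exceeds share_threshold."""
    total = sum(samples.values())
    if total == 0:
        return []
    return [vm for vm, value in samples.items()
            if value / total > share_threshold]

# One VM dominating the cache traffic gets flagged; the others do not.
telemetry = {"web-vm": 1.2e6, "db-vm": 0.8e6, "batch-vm": 9.5e6}
print(find_noisy_neighbors(telemetry))  # → ['batch-vm']
```

A real implementation would feed this from hardware counters and act on the result, for example by throttling or migrating the offender, which is the "mitigate its impact" step Larson mentions.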
Outside the data center, mobile, different story. So I'm just saying, he's not just an Intel bigot.

Sure.

When the whole virtualization meme started to take hold, he said this is a security nightmare. This is so bad, because you don't know what port is connected to what drive, is connected to what server; it's just really fuzzy. And the only way to solve this problem is at the core level: the microprocessor guys are going to solve this. And you guys went out and made some big acquisitions.

Absolutely.

So I wonder if you could talk about what you're doing in security, that basic premise. Because it's really hard; I don't feel more secure. If anything, it's getting harder and harder and harder. But is there potential light at the end of the tunnel through guys like you?

We take it very seriously, and there are a couple of different areas you can address in the security space. One is what you do to simply encrypt data, and processor performance is a fantastic way to make encryption pervasive. A few years ago we added the AES New Instructions, the ability to accelerate the mathematical operations used in block-based cryptographic processing. That's a place microprocessors can do really well. We don't need specialized silicon to do the work anymore; it's all within the core microprocessor. That's one place.

The other is around establishing this concept of trust. One of the things we launched a few years ago was Trusted Execution Technology, which was first defined on the client side. Then, when we started looking at what we could do in server land, we realized it's the same kind of problem: how do I look at this device, or this virtual machine, or the system that houses many virtual machines, and put it to work in ways that establish a sense of trust? And it starts with the low-level hardware.
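The AES New Instructions (AES-NI) Larson mentions are advertised by the Linux kernel as an `aes` entry in the `flags` line of `/proc/cpuinfo`, so checking for them is easy to script. A small sketch; the `has_aes_ni` helper and the sample text are our own, for illustration:

```python
# Check whether a /proc/cpuinfo dump advertises the AES-NI instructions
# ("aes" in the CPU flags line). Takes the file's text so it can be
# exercised without a Linux machine.

def has_aes_ni(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists the 'aes' feature."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            flags = line.split(":", 1)[1].split()
            if "aes" in flags:
                return True
    return False

sample = "processor\t: 0\nflags\t\t: fpu sse sse2 aes avx\n"
print(has_aes_ni(sample))  # → True
```

On a real Linux machine you would pass `open("/proc/cpuinfo").read()` to the function; crypto libraries such as OpenSSL detect the same flag and switch to the accelerated AES path automatically.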
The very first sets of instructions that CPU issues are about saying: here's who I am, here's where I'm located, here's what I'm capable of. You put those things to work and you get a model which says this platform is trusted, or this platform is not trusted. And you can make your virtualization decisions, where you deploy a new VM, a new service, based upon the level of trust at the hardware level. I think it's a multi-layered problem; you have a good point there. But what we've really tried to do is focus on the areas that, when you root them in hardware, make a very powerful addition to that security stack.

So it's good to be an arms dealer. You guys love everybody: software companies, even hardware companies, everyone except your direct competitors, right? I mean, Amazon uses your microprocessors, Google does, everybody does. Now you've got OpenStack coming along. What's your take on what's going on with OpenStack? Where does it fit? What are you guys doing to advance the ecosystem?

I think OpenStack is really exciting, because it gives us a new opportunity to take the open-source playbook and put it to work on a very difficult problem, which is provisioning new services in a cloud-like infrastructure. The exciting part about OpenStack is: what's the path to private cloud for the enterprise? How can the enterprise get the same kind of economics and efficiency that we see in the large cloud infrastructures, Amazon for example, and bring that capability in-house in a way that is cost-effective to get in the door? With OpenStack, one, we think it's just a great model, a good approach. There are other good models: VMware has its own for building cloud-based infrastructure inside the private enterprise, and Microsoft has work there too. I think they're going to do well in the markets they serve. But OpenStack is a new alternative.
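The trust-based placement Larson outlines is roughly what OpenStack's trusted compute pools do: a Nova scheduler filter consults an attestation service and only schedules onto hosts whose hardware measurements check out. A self-contained sketch of the idea, with an invented `eligible_hosts` helper standing in for a real scheduler filter:

```python
# Hypothetical placement filter in the spirit of trust-based scheduling:
# a VM that requests a trusted platform may only land on hosts whose
# attestation status (as a TXT-style attestation service would report it)
# is "trusted".

def eligible_hosts(hosts, require_trusted=True):
    """hosts: list of (name, attestation_status) pairs.
    Returns the names of hosts eligible to receive the VM."""
    if not require_trusted:
        return [name for name, _ in hosts]
    return [name for name, status in hosts if status == "trusted"]

fleet = [("host-a", "trusted"), ("host-b", "untrusted"), ("host-c", "trusted")]
print(eligible_hosts(fleet))  # → ['host-a', 'host-c']
```

The key design point matches the interview: the scheduler does not try to judge software state itself; it delegates to a measurement rooted in the hardware's first instructions.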
And because it's an open-source proposition, we can find ways to expose our value more effectively, by contributing our own inputs into the distributions, or by putting more focus on the kinds of capabilities that need to work in this world. I talked about things the CPU can do in hardware, from a security or an efficiency perspective. We envision taking layers of OpenStack and projecting that sort of low-level instrumentation northbound, so the provisioning services OpenStack exposes can take full advantage of it, and get to that quality of service, or service assurance, that I talked about.

So we were at AWS re:Invent and we had James Woodon on. He was talking a little bit about special purpose-built servers, as opposed to everyone using industry-standard servers within their cloud. And I think some of the other big hyperscale players are doing that as well. What's your view on how that ecosystem is changing? It seems to have gone from specialty servers to commodity servers, and now back to specialty servers, all fortunately powered by x86 processors, but it seems to be morphing a little bit.

Yeah, I think the interesting thing is that the guys who run the biggest data centers in the world know their workload and their infrastructure extremely well. So they've said: I can basically set a definition for what this infrastructure should look like. And the good news is, like you said, they're primarily looking at x86, or IA, to deliver those products and services. But they've also asked: what can I do to optimize my whole rack-level infrastructure inside my data centers? And they've engaged different players. They've said, hey, I will work with the big OEMs, but I'll also work with new players, or with established organizations like opencompute.org, which Facebook established.
So I think that model, for the guys who are going to manage the most dense systems on the planet, is going to continue to drive a lot of innovation. The difference between that and the traditional OEM model is that the OEM model builds in a lot more capability to manage the service and the platform, while the biggest clouds are all about managing that operation themselves. So they've been in a mode of saying: I'm going to strip out a lot of things and put just what I need into the system design. And they've been relatively simple in terms of the number of applications they run. Massive scale, but not a lot of complexity in the mix of workloads, where the typical enterprise has many different types. They know exactly what they run, right? And because they know exactly what they run, they can make decisions, down to the hardware level, about how to make it work.

And make it really dense.

So, Dylan, I've got to ask you: IBM is selling its x86 business to Lenovo. Are you hurt?

I've heard it, yeah, I've heard it.

No, are you hurt? Are you sad? Are you excited? Mixed feelings, mixed emotions about that? I mean, it happened before, many, many years ago, on the PC side.

I think the thing I've really enjoyed about working with IBM over many years is that they're an extremely innovative company, right? Super bright people work there. I'm not just saying that because I'm here; I genuinely believe it. But it's an interesting dynamic in the industry. China's our fastest-growing server market on the planet, right? So it's not a shock that there's a potential engagement in this fashion. But yeah, I love all my customers. Lenovo's another great company to work with as well. So for me, it's not about choosing. We're going to work with the best companies we can.
We're going to provide the best capabilities and services we can.

How do you expect the innovation that IBM provides to flow over time? Because it does spend a lot of resources on R&D. Do you have a sense of that?

I can't comment on anything specific, but I would say I believe we're going to work with a set of really capable and exciting people in this new world, the way we do today. So I don't think we're going to lose momentum at all. I really don't.

Excellent. All right, Dylan, well, listen, thanks very much for coming on theCUBE. It was a pleasure to have you again.

My pleasure.

Keep it right there, everybody. This is theCUBE. We're live at IBM Edge in Las Vegas, and we'll be right back with our next guest right after this. Thanks again.