Welcome back to HPE Discover 2021, the virtual version. My name is Dave Vellante, and you're watching theCUBE. And we're here with Guido Appenzeller, who's the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE, come on in.

Thanks, Dave, I appreciate it. It's great to be here today.

So I'm interested in your role at the company. Let's talk about that, you're brand new. Tell us a little bit about your background. What attracted you to Intel, and what's your role here?

Yeah, so I grew up in the startup ecosystem of Silicon Valley, came from my PhD and never left. And, you know, I built software companies, worked at software companies, worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me was, hey, you've got the wrong phone number, right? I'm a software guy, that's probably not who you're looking for. But, you know, we had a good conversation. I think at Intel there's a realization that you need to look at what Intel builds more from an overall systems perspective, right? The software stack and the hardware components are getting more and more intricately linked, and you need the software to basically bridge across the different hardware components that Intel is building. So I'm here now as the CTO for the Data Platforms Group, which builds the data center products here at Intel. It's a really exciting job, and these are exciting times at Intel, you know, with Pat, a fantastic CEO, at the helm. I worked with him before at VMware. So a lot of things to do, but I think a very exciting future.

Well, I mean, the data center is the wheelhouse of Intel. Of course, your ascendancy was a function of the PC and its great volume and how you changed that industry. But really, the data center is where it's at. I mean, I remember the days of people saying Intel will never be in the data center, it's just a toy.
And of course, you're the dominant player there now. So your initial focus here is really defining the vision, and I'd be interested in your thoughts on the future, what the data center looks like in the future, where you see Intel playing a role. What are you seeing as the big trends there? You know, Pat Gelsinger talks about the waves. He says if you don't ride the waves, you're going to end up being driftwood. So what are the waves you're riding? What's different about the data center of the future?

Yeah, that's right. You want to surf the waves, right? That's the way to do it. So look, I like to look at this in terms of major macro trends, right? And I think the biggest thing that's happening in the market right now is the cloud revolution, and I think we're halfway through or something like that. It's this transition from the classic client-server type model, where enterprises were running their own data centers, to more of a cloud model, where something is run by hyperscale operators, or it may be run by an enterprise themselves, or it may sit at the edge. So there's a variety of different models, but the provisioning models have changed, right? It's much more of a turnkey type service. And when we started out on this journey, I think we built data centers the same way we built them before, although the way to deliver IT had really changed; it's gone to more of a service model. And we're now really starting to see the hardware diverge, the actual silicon that we need to build in order to address these use cases. And so one of the things that's most interesting for me is to think through how Intel, in the future, builds silicon that's built for clouds: on-prem clouds, edge clouds, hyperscale clouds, basically built for these new use cases that have emerged.

So just a quick aside.
I mean, to me, the definition of cloud is changing, it's evolving. It used to be this set of remote services in a hyperscale data center. Now that experience is coming on-prem, it's connecting across clouds, it's moving out to the edge, it's supporting all kinds of different workloads. How do you see the cloud evolving?

Yeah, the biggest difference to me is that cloud starts with this idea that the infrastructure operator and the tenant are separate, right? And that actually has major architectural implications. I don't know if it's a perfect analogy, but if I build a single-family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room very quickly, if that makes sense. My house here actually has an open kitchen; it's essentially the same room. If you're building a hotel, where your primary goal is to host guests, you pick a completely different architecture. The kitchen for your restaurant, where the cooks are busy preparing the food, and the dining room, where the guests are sitting, are separate. The hotel staff has a dedicated place to work and the guests have dedicated places to mingle, but typically they don't overlap. I think it's the same thing with architecture in the cloud. Initially the assumption was that it's all one thing, and now we're starting to see a much cleaner separation of these different areas. I think a second major influence is that the type of workloads we're seeing is evolving incredibly quickly. Ten years ago, things were mostly monolithic.
Today, most new workloads are microservice-based, and that has a huge impact on where CPU cycles are spent, where we need to put in accelerators, how we build silicon. To give you an idea, there's some really good research out of Google and Facebook where they ran the numbers. If you take a standard system and run an application written in a microservice-based architecture, you can spend anywhere from, I want to say, 25% to in some cases over 80% of your CPU cycles just on overhead: marshalling and demarshalling the protocols, the encryption and decryption of the packets, the service mesh that sits in between all these things. It creates a huge amount of overhead. If 80% goes into these overhead functions, really, our focus needs to be on how we enable that kind of infrastructure.

Yeah, so let's talk a little bit more about workloads if we can. I mean, the overhead also grows as the data center becomes software-defined, you know, thanks to your good work at VMware. There are a lot of cores supporting that software-defined data center.

That's exactly right, yeah.

And as well, you mentioned microservices, container-based applications, but AI is also coming into play. AI is kind of amorphous, but it's really data-oriented workloads versus general-purpose ERP and finance and HCM. Those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift, and how is Intel playing there?

Okay, the trend you're talking about is definitely right. We're getting more and more data-centric; shifting the data around becomes a larger and larger part of the overall workload in the data center. AI is getting a ton of attention, right?
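The marshalling overhead Guido describes is easy to see in miniature. Here's a toy sketch: a service handler that does a trivial piece of "business logic" wrapped in the JSON serialize/deserialize step every microservice hop pays. The handler names, payload, and iteration count are illustrative, not anything from the interview, and the measured fraction will vary by machine:

```python
import json
import time

def business_logic(order):
    # The "real" work: compute an order total.
    return sum(item["price"] * item["qty"] for item in order["items"])

def handle_request(raw):
    # Infrastructure overhead: deserialize the request and reserialize
    # the response, as every hop in a microservice chain must.
    order = json.loads(raw)
    return json.dumps({"total": business_logic(order)})

order = {"items": [{"price": 9.99, "qty": 3}, {"price": 4.50, "qty": 2}]}
raw = json.dumps(order)

N = 50_000
t0 = time.perf_counter()
for _ in range(N):
    handle_request(raw)
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    business_logic(order)
t_logic = time.perf_counter() - t0

overhead = 1 - t_logic / t_full
print(f"fraction of time spent on (de)serialization: {overhead:.0%}")
```

Even this stripped-down example typically spends most of its time in `json.loads`/`json.dumps` rather than the logic itself; add encryption and a service mesh proxy and the overhead fractions Guido cites become plausible.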
Look, if I talk to most operators, AI is still an emerging category. We're seeing, I'd say, five, maybe 10% of workloads being AI, and it's growing. They're very high-value workloads, very challenging workloads, but it's still a smaller part of the overall mix. Now, edge is big too, and it's complicated, because the way I think about edge, it's not just one homogeneous market, it's really a collection of separate submarkets. It's very heterogeneous and runs on a variety of different hardware. Edge can be everything from a little fanless server strapped to a telephone pole with an antenna on top for an integrated microcell, to something that's running inside a car (modern cars have a small data center inside), to something that runs on an industrial factory floor, to the network operators. There's a pretty broad range of verticals that all look slightly different in their requirements. And I think it's really interesting; it's one of those areas that really creates opportunities for vendors like HPE to shine and address this heterogeneity with a broad range of solutions. We're very excited to work together with them in that space.

Yeah, so I'm glad you brought HPE into the discussion, because we're here at HPE Discover and I want to connect them. So my question is, what's the role of the data center in this world of edge? How do you see it?

Yeah, look, I think in a sense what the cloud revolution is doing is that it leads to a polarization of the classic data center into edge and cloud, if that makes sense. It's splitting. Before, this was all mingled together a little bit. If my data center is in my basement anyway, what's edge, what's data center? Same thing, right?
The moment I'm moving some workloads to the cloud and I don't even know where they're running anymore, then some other workloads, which have to have a certain sense of locality, I need to keep close. And there are some workloads you just can't move into the cloud. If I'm generating a large amount of video data that I have to process, it's financially completely unattractive to shift all of that to a central location; I want to do this locally. And will I ever connect my smoke detector with my sprinkler system via the cloud? No, I won't, because if things go bad, that may not work anymore. So I need something that does this locally. So I think there are many reasons why you want to keep something on premises, and it's a growing market, which is very exciting. We're doing some very good stuff with our friends at HPE. They have the ProLiant DL110 Gen10 Plus server with our latest third-generation Xeons in it for OpenRAN, the radio access network in the telco space, and the HPE Edgeline servers, also with third-generation Xeons. There are some really nice products there that I think can really help enterprises, carriers, and a number of other organizations with these edge use cases.

Can you explain? You mentioned OpenRAN, vRAN, so we essentially think of that as kind of the software-defined telco?

Yeah, exactly, it's software-defined cellular. Actually, I learned a lot about that over the recent months. When I was taking these classes at Stanford, these things were still done in analog: a radio signal would be processed and decoded in an analog way. Today, typically, the radio signal is immediately digitized, and all the processing of the radio signal happens digitally. And it happens on servers, right? Some of them HPE servers.
And it's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and run a number of different cells inside the same server. It's really complicated, because you have to have fantastic real-time guarantees with a sophisticated software stack, but it's a really fascinating use case.

You know, a lot of times we have these debates, and maybe it's somewhat academic, but I'd love to get your thoughts on it. The debate is about how much of the data that's processed and inferred at the edge is actually going to come back to the cloud. Most of the data is going to stay at the edge, and a lot of it's not even going to be persisted. So that's sort of the negative for the data center. But the counter to that is there could be so much data that even a small percentage of all the data we're going to create is going to create so much more data back in the cloud, back in the data center. What's your take on that?

Look, I think there are different applications that are easier to do in certain places. Going to a large cloud has a couple of advantages. You have a very complete software ecosystem around you, lots of different services. And if you need very specialized hardware (say I want to run a big learning task where I suddenly need a thousand machines, this runs for a couple of days, and then I don't need to do that again for another month or two), for that, the cloud is really great. It's on-demand infrastructure. At the same time, it costs money to send the data up there. If I just look at the hardware cost, it's much, much cheaper to build it myself, in my own data center or at the edge.
So I think we'll see customers picking and choosing what they want to do where, and there's a role for both.

Absolutely.

And so, you know, at the end of the day, why do I absolutely need to have something at the edge? Let me rephrase that a little bit: I think there are three primary reasons. One is simply bandwidth, where I'm saying, okay, my video data, like I have ten 4K video cameras with 60-frames-a-second feeds. There's no way I'm going to move that into the cloud; it's just cost-prohibitive. I might have a hard time even getting a line that allows me to do this. Then there's latency. If I need to reliably react within a very short period of time, I can't do that in the cloud, I need to do this locally. In some cases I can't even do this in my data center; it has to be very, very closely coupled. And then there's this idea of fate sharing: if I want to make sure that when things go wrong the system is still intact, anything that's sort of an emergency backup, an emergency-type procedure, I can't rely on there being a good internet connection. I need to handle things locally, like the smoke detector and the sprinkler system. And so for all of these, there are good reasons why we need to move things close to the edge. So I think there'll be a creative tension between the two, but both are huge markets, and I think there are great opportunities ahead for HPE to work on these use cases.

Yeah, for sure, a top brand in that compute business. So before we wrap up today, thinking about your role, part of your role is trend spotter, right? You're kind of driving innovation, right? Surfing the waves, as you said.
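Guido's first reason, bandwidth, can be sanity-checked with back-of-the-envelope arithmetic. The per-camera bitrate below is an assumption, not a figure from the interview: a compressed 4K/60fps stream commonly runs on the order of tens of Mbit/s, so take 40 Mbit/s as a rough middle estimate.

```python
# Back-of-the-envelope: can 10 4K/60fps camera feeds go to the cloud?
CAMERAS = 10
MBIT_PER_CAMERA = 40  # assumed compressed bitrate per camera, Mbit/s

# Sustained uplink needed, in Mbit/s.
uplink_mbit = CAMERAS * MBIT_PER_CAMERA
print(f"aggregate uplink needed: {uplink_mbit} Mbit/s")

# Data shipped per month, in terabytes (decimal TB).
SECONDS_PER_MONTH = 30 * 24 * 3600
tb_per_month = uplink_mbit / 8 * SECONDS_PER_MONTH / 1e6
print(f"data shipped per month: {tb_per_month:.0f} TB")
```

A sustained 400 Mbit/s uplink and well over a hundred terabytes of egress per month is exactly the "hard time even getting a line" and cost-prohibitive picture described above, which is why that processing tends to stay local.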
Skating to where the puck is going, all the...

I have my perfect crystal ball right here.

Yeah, go on, all the clichés, right? Puts a little pressure on you. But what are some of the things that you're overseeing, that you're looking towards in terms of innovation projects, particularly, obviously, in the data center space? What's really exciting you?

Look, there are a lot of them, and pretty much all the interesting ideas I get come from talking to customers. You talk to the sophisticated customers, you try to understand the problems they're trying to solve and can't solve right now, and that gives you ideas. Just to pick a couple: one area I'm thinking about a lot is how we can build, in a sense, better accelerators for the infrastructure functions. No matter if I run an edge cloud or a big public cloud, I want to find ways to reduce the amount of CPU cycles I spend on microservice marshalling and demarshalling, the service mesh, storage acceleration, things like that. Clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink it, to make this more efficient. So this basic infrastructure function acceleration probably sounds as unsexy as any topic could sound, but I think it's actually a really, really interesting area, one of the big levers we have right now in the data center.

Yeah, I would agree, Guido. I think that's actually really exciting, because you can pick up a lot of the wasted cycles now, and that drops right to the bottom line. But please.

Yeah, exactly. And it's kind of funny: we're still measuring so much with SPECint rates of CPUs, with raw performance. Well, maybe we're measuring the wrong thing, right? If 80% of the cycles of my app are spent in overhead,
Then the speed of the CPU doesn't matter as much; it's the other functions that I need to accelerate. So that's one. The second big one is that memory is becoming a bigger and bigger issue, and it's memory cost. Memory prices used to decline at the same rate that core counts and clock speeds increased. That's no longer the case; we've run into some physical scaling limits there, and memory prices are becoming stagnant. This is becoming a major pain point for everybody who's building servers. So I think we need to find ways to leverage memory more efficiently, to share memory more efficiently. We have some really cool ideas in that space that we're working on.

Yeah, and Pat, not to interrupt, but Pat hinted at that in your big announcement. I mean, you talked about system on package, I think is the term he used, for what I call disaggregated memory and better sharing of that memory resource. That seems to be a clear benefit, value creation for the industry.

Exactly, right? If this becomes a larger part of the overall cost for our customers, we want to help them address that issue. And then the third one is that we're seeing more and more data center operators that are effectively power-limited. So we need to reduce the overall power of systems, or maybe to some degree just figure out better ways of cooling these systems. I think there's a lot of innovation that can be done there, both to make these data centers more economical and to make them a little more green. Today, data centers have gotten big enough that if you look at the total amount of energy we're spending in this world as mankind, a sizable chunk of that goes just to data centers. And if we're spending energy at that scale,
I think we have to start thinking about how we can build data centers that are more energy efficient, that do the same thing with less energy in the future.

Well, thank you for laying those out. I mean, you guys have been long-term partners with HP and now, of course, HPE. I'm sure Gelsinger is really happy to have you on board, Guido; I would be. And thanks so much for coming on theCUBE.

It was great to be here, great to be at the HP show.

And thanks for being with us for HPE Discover 2021, the virtual version. You're watching theCUBE, the leader in digital tech coverage. Be right back.