Hello everyone and welcome back to theCUBE's live coverage of HPE Discover here in Barcelona, Spain. I'm your host, Rebecca Knight, along with my co-host and analyst, Rob Strechay. We are joined by Matt Foley. He is the director of EMEA field application engineering at AMD. Thank you so much for coming on theCUBE.

Thank you for having me.

And you are a guest who brought props, so you've got to tell us a little bit about what that is.

I think you might already know, but yeah, I love this thing, so let's show it off.

It's eye-catching, so tell us a little bit about it.

This is a mechanical sample of our fourth-generation EPYC processor. The code name was Genoa. It was launched a year ago, and it's the highest-performing general-purpose processor in the market today. What you see there is the way that we've really cut up the problem, or modularized it, if you will. There are 12 chiplets on there; all of those are the core dies. And then the big die in the middle is actually the IO die. So all of the pins on the back of the part go to the IO die to handle memory and PCI transactions.

And the chiplets have different personalities?

They could, conceivably. From that perspective, the modular design and the way that this works allows us a lot of flexibility going forward. It's why AMD participates in some of the evolving standards around chiplet integration, because there are options to do things in packaging that you used to have to do in ASICs. As a result, it's certainly interesting to us and, quite frankly, to the rest of the industry.

I think that, to me, is one of the classic problems: do I have separate pieces in the same case to do different things, or do I bring them all together?
And this is helping with the whole acceleration and AI story, and that's part of why this came about, right?

It really comes about for a couple of reasons. One is cost: smaller dies are better. The other is flexibility, in that there are different pieces and different components inside of the part, and not all of them benefit in the same ways from the different process technologies. So being able to implement them in different ways, on different parts and different processes, actually helps out quite a bit.

So speaking of AI, which is the dominant force in our conversation right now, not even just at this conference but really globally, how is AMD meeting the AI challenge in terms of its portfolio of products and services?

As AMD has gone forward, it has really assembled an impressive intellectual property portfolio of processing technology. We've had our CPUs historically; we've had GPUs really since the acquisition of ATI 15 to 20 years ago; and lately Xilinx has come into the family as well. So there are all different processing architectures for different times and places, use cases, and environments. We're really building our AI approach around that, because one size certainly doesn't fit all. We need the diversity of architectures that we have under one roof in order to help the customer select the best particular architecture, application, and really system to solve those issues. That's what we hope to bring to the AI conversation. It's certainly been a very hot topic on the graphics and GPU side, and we participate in that as well, but it's a broader story, a bigger story; it affects more things, and it can be solved in more ways.
I think one of the big stories that doesn't get a lot of hype, but has been an undercurrent here, and I think it's because of HPE's focus on sustainability, is how do you make AI, and chips in general, that help toward those sustainability goals? It's a hot topic here in Europe, to put it mildly.

Very much so, yeah. Here in Europe, of course, with energy prices what they are, and there has generally always been an environmental focus here as well. In terms of sustainability, we certainly have a long history with our corporate responsibility report; it's been over 25 years of publishing that, and we really try to work with realistic, fact-based goals and measurements. Beyond the baseline things you need to do to be a responsible company, though, the things we like to do are in design, and it's this kind of stuff here: splitting up the problem and solving it in different ways, being able to implement accelerators for different functions. And there are different types of integration, too. You can have it in the package; you can have chips stacked on top of each other. That has different implications for how much energy it takes to move bits around the part, for example. So considering all of that and working on it in design, for us that really is the exciting part from an AMD perspective: being able to use some of these advanced design techniques and advanced architectures to produce more and more sustainable and efficient solutions.

And that's part of what your partnership with HPE is doing, right? Bringing that kind of knowledge to them so they can package it up and bring it to the customers.

Certainly. And I would say, from that perspective, system vendors benefit from competitive component markets.
I saw this in my own history; I was certainly part of platform and system development in my past. And I can say that the systems developed when the component markets were competitive are better, because it forces everyone to listen to the opposing view. Do you really need that much space on the board? Do you really need that much power? Are you sure that's the best performance you can do? Having that conversation on more equal footing, because the component markets are competitive, is, I think, a key to building better and better systems with great system makers like HPE.

Well, I'm really interested in this partnership with HPE, particularly because you're an HPE veteran. You worked there for a long time, so you bring both the AMD and the former-HPE perspective to this, and your discussion about design thinking in terms of how you're going about solving problems. How do you work together to collaborate and innovate and make sure that you are bringing the best products to bear?

It's really an engineering-level collaboration. It's about the inputs that we get from the customers, and the inputs that HPE also gets from the customers, in terms of what features we need to see. There also has to be a degree of realism as well: what can you really do, and how should we implement this? It's not very useful to have a system that, for example, you can't deploy. So we've been pushing the industry toward single-socket server implementations instead of dual-socket implementations, because with something like this, we have enough cores. We have enough cores that you really wouldn't need two sockets. Often that's better performing, because you're not worried about cache coherency between the sockets, if you compare the same core count between both.
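The single-socket argument above turns on memory topology: with one socket there is a single NUMA node, so no memory access pays the cross-socket coherency penalty. A minimal sketch of how you might reason about that from a topology dump; the sample text mimics the node-distance matrix that tools like `numactl --hardware` print, but the dump itself and both helper functions are hypothetical, not AMD or HPE tooling:

```python
# Sketch: classify a box as single- or dual-socket style from a
# numactl-style topology dump. SAMPLE is a made-up two-node dump.
SAMPLE = """\
available: 2 nodes (0-1)
node distances:
node   0   1
  0:  10  32
  1:  32  10
"""

def parse_nodes(dump: str) -> int:
    """Return the number of NUMA nodes reported in the dump."""
    for line in dump.splitlines():
        if line.startswith("available:"):
            return int(line.split()[1])
    raise ValueError("no 'available:' line found")

def max_distance(dump: str) -> int:
    """Return the largest inter-node distance in the distance matrix."""
    dists = []
    for line in dump.splitlines():
        parts = line.split()
        # matrix rows look like "  0:  10  32"
        if parts and parts[0].endswith(":") and parts[0][:-1].isdigit():
            dists.extend(int(p) for p in parts[1:])
    return max(dists)

print(parse_nodes(SAMPLE))   # 2 -> two nodes, so remote accesses exist
print(max_distance(SAMPLE))  # 32 vs. local 10: remote memory is ~3x the cost
```

On a single-socket, single-node machine the distance matrix collapses to one local entry, which is exactly the coherency traffic the interview says you avoid.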
And by doing that, you address something particularly pertinent to a lot of the customers here at the HPE event: how do I implement the new technologies efficiently without having to rebuild my data center to fit them? That's really the question they're asking. By working on the single-socket implementation, we're able to deliver the efficiency benefits the architecture implies, but do it in a way that is supportable in the data center environments our customers, by and large, have already built. That's one area where we've been able to take what we did: you can make it smaller, you can make it more dense, but if the customer can't deploy it, it doesn't matter. As a result of those kinds of dynamics and discussions, it's certainly a partnership between the component and system vendors, and certainly between AMD and HPE.

Yeah, and I think it gets back to the workloads being put through those servers, and I think that's where a lot of this innovation comes in. Like you were saying, the fourth generation of the EPYC processors is out now, and you have an event next week around AI where you're going to make some more announcements around accelerators. Help us understand AMD's roadmap, where we go from here, and what that really means to customers, so they can have confidence in AMD.

From here, the first part is having multiple architectures to address these issues; that's important. Then tying that together with software, and working with the industry to make sure that software is available, open source, deployable, and common, really helps us build the ecosystem and the market behind it. I guess one way to look at it is that we're trying to open up a market and compete within it. That's how I would phrase it.
And by doing that, the types of thinking that drive us are: how can we make the systems more efficient? When is the right time to use acceleration, and when is not? When do you introduce it? In some of the architectures, we've talked about the merging of heterogeneous computing. Take the Frontier supercomputer, which was built with HPE and sits at Oak Ridge National Laboratory in Tennessee. If you look at that node, it's a single CPU plus four accelerators around it. But the secret is that they share a coherent memory space; the links are cache-coherent links instead of just standard IO links. As a result, it's a lot easier for the developers to use. As we go through different implementations of that closeness, with the different types of compute accessing the same memory, and as we put more and more of these things together, it first gives developers an easier way to experiment, because they're not worried about moving data back and forth. The other piece is that it allows some level of experimentation, and then customization, in terms of the results. We learn a lot when we do these things. Learning, finding these things out, is, I guess, part of the reason we've got such a deep partnership with HPE on high-performance computing as well.

One of the things for me in the European theater is that there are a lot of high-performance deals going on. In America it's one big country, one big government, whereas in Europe every country needs to have some skin in the game. As a result, we wind up competing for all these very difficult opportunities. I like calling it the computer science project from hell: these professors write this computer science project, and then we go and tackle it. That's what our team does, tackling those projects.
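The coherent-memory point about the Frontier node is really a programming-model point: over a cache-coherent link, host and accelerator code can touch the same buffer, so the explicit staging copies disappear. A tiny conceptual sketch of the difference; the "device" here is just a Python function standing in for a kernel, and all names are illustrative, not Frontier's actual HIP/OpenMP interfaces:

```python
# Conceptual sketch: explicit staging copies vs. one coherent shared buffer.
# "Device" work is an ordinary function; this models the programming-model
# difference, not real GPU execution.

def scale_on_device(buf, factor):
    """Stand-in for an accelerator kernel: scales buf in place."""
    for i in range(len(buf)):
        buf[i] *= factor

def run_with_copies(host_data, factor):
    """Non-coherent IO link: the host must stage data to and from the device."""
    device_copy = list(host_data)       # host -> device transfer
    scale_on_device(device_copy, factor)
    return list(device_copy)            # device -> host transfer

def run_shared(host_data, factor):
    """Coherent link: host and device share one buffer; no copies at all."""
    scale_on_device(host_data, factor)
    return host_data

data = [1, 2, 3]
print(run_with_copies(data, 10))  # [10, 20, 30]; original left untouched
print(data)                       # [1, 2, 3]

shared = [1, 2, 3]
print(run_shared(shared, 10))     # [10, 20, 30]; same buffer, zero copies
```

The second style is what a shared coherent memory space buys developers: the experiment loop shrinks because there is no transfer choreography to get right first.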
But in doing those projects, we learn a lot, right? Our roadmap is validated, or not. Sometimes you win, sometimes you lose, but the roadmap's validated. The science being done is groundbreaking, and out of it come a lot of the other innovations and other markets; things like AI grew up, in a lot of cases, in these advanced research institutions.

Because Frontier is still the top dog as the fastest supercomputer. And it's more than just gen AI; it's ML too, which gets lumped in now that everybody calls everything AI, though really it's a piece of AI. What you're seeing is a lot of these different organizations going down that route, alongside the craze for gen AI. Are you seeing a pickup with the gen AI crowd here in Europe in particular, saying, hey, we need to be building this?

We need to be building it. We need to be building it now. I can't get hold of the parts. Can you help me? We do see a lot of that. There is a groundswell; there is a demand, and we certainly are excited about that. And of course, stay tuned for the events next week.

Absolutely. We look forward to addressing those kinds of things. I was really intrigued by how you described trying to open up a market and then compete within it, and also the complexity of all these different European countries trying to have some skin in the game. How much are you talking to your rivals about this potential market opportunity, in terms of what you're seeing? Maybe not giving away the secrets of what you're building, but how your roadmap is progressing, what they're hearing. How much are you comparing notes?

From that perspective, really not much. I'm not sitting in those kinds of discussions; we certainly wouldn't see that.
But we do meet, of course, at industry conferences like this, and we sit on standards bodies together. For example, for procurement in Germany, there's a group called Bitkom, and we sit on there with our competitors to hash out the right way to do a vendor-neutral tender, those types of things. So we do see that. I think a lot of the activity, too, is around the base parts of the industry: some of the process technology, some of the inputs there, packaging technology. In fact, you can see how the industry has evolved over time. It's gone, over the last 30 years, from a vertically integrated one to a disaggregated one. So there are a lot of meeting points at those levels of disaggregation, where we share common processes at times, and other common inputs, and then put our own spin on them in designing the products that result.

I think that, to me, is where a lot of this comes in, because all of a sudden silicon became sexy again. With gen AI and all that's going on, people are looking at it, and people are starting to figure out what the ROI on it is as well.

Yeah, and that was actually the hardest part of my job when I joined AMD: convincing the folks I had been working with at HP and other places that it was worth looking at the silicon again. We had all come of age in an era where there wasn't much differentiation in raw compute performance. There was differentiation in management and a few other things around the systems, but the base system was largely the same. And so what had happened was the participants in the industry had moved their careers, right? They moved to the OS level, to the application, to the service level.
They started getting cloud certifications and all this other stuff, and that's where they all went. And we come back in and it's like, no, you really need to look at this. "Oh, all hardware's the same." No, no, you really need to look at this. It's important. And I think that point has certainly been proven: with the competitive pressure and the competitive products that we've been able to bring to the industry, it's accelerated the cycle of development, accelerated the speed at which we can make meaningful leaps in performance.

And that's because of hardware, right? It is back to the hardware piece in the end.

Software contributes an awful lot, but if you don't have the horses, it's tough to get going.

So true. I love it: silicon is sexy again. Matt, thank you so much for coming on theCUBE. A real pleasure having you on.

Thank you for having me here.

I'm Rebecca Knight, for Rob Strechay. Stay tuned for more of theCUBE's live coverage of HPE Discover here in Barcelona. You're watching theCUBE, the leader in high-tech enterprise coverage.