The market for enterprise servers is large. It generates well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digits. Right now, like many segments, the server market is slingshotting: organizations have been replenishing their install bases and upgrading, especially at HQs, coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs are tapping the brakes, sometimes quite a bit, and being cautious with both capital expenditures and discretionary OPEX, particularly in the cloud, where they're dialing things down and being a little more cautious.

The market for enterprise servers is dominated, as you know, by x86-based systems, with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful in competing with Intel because of its focus, its outsourced manufacturing model, its innovation and very solid execution. Intel's frequent delays with its next-generation Sapphire Rapids CPUs, now slated for January 2023, have created an opportunity for AMD. Specifically, AMD's next-generation EPYC CPUs, code-named Genoa, will offer as many as 96 Zen 4 cores per CPU when they launch later this month. Observers can expect three classes of Genoa: a standard Zen 4 compute platform for general-purpose workloads, a compute-density-optimized Zen 4 package, and a cache-optimized version for data-intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity in server platforms to handle different workloads, especially the high-performance, data-oriented workloads being driven by AI, machine learning and high-performance computing (HPC).
OEMs like Dell are going to be tapping these innovations and trying to get to market early. Dell in particular will be using these systems as the basis for its next-generation Gen16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone. You've got other OEMs, you've got HPE, Lenovo, you've got ODMs, you've got the cloud players. They're all going to be looking to keep pace with the market.

Now, the other big trend that we've seen in the market is the way customers are thinking about, or should be thinking about, performance. No longer is the clock speed of the CPU the sole and most indicative performance metric. There's much more emphasis and innovation around all those supporting components in a system, specifically the parts of the system that take advantage of faster bus speeds. We're talking about things like network interface cards, RAID controllers, memories and other peripheral devices that, in combination with microprocessors, determine how well systems can perform on compute operations, I/O and other critical tasks. These combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell building flexibility into their offerings and putting products in their portfolios that can meet the changing needs of their customers.

Welcome to our ongoing series where we investigate the critical question: does hardware matter? My name is Dave Vellante, and with me today to discuss these trends and the things you should know about the next generation of server architectures is former CTO at Oracle and EMC and adjunct faculty at the Wharton CTO Academy, David Nicholson. Dave, always great to have you on theCUBE. Thanks for making some time with me.

Yeah, of course, Dave, great to be here.
All right, so you heard my little spiel in the intro, that summary. Was it accurate? What would you add? What do people need to know?

Yeah, 100% accurate, but I'm a resident nerd, so just some clarification. If we think of things like microprocessor release cycles, it's always gonna be characterized as a rolling thunder. I think 2023 in particular is gonna be this constant release cycle that we're going to see. You mentioned the Genoa processors with 96 cores. Shortly after the 96-core release, we'll see the 128-core release that you referenced in terms of compute density. And then we can talk about what it means in terms of nanometers and performance per core and everything else. But the main thing I would say is people shouldn't look at this like a new car being released on Saturday. This is going to happen over the next 18 months, really.

All right, so to that point, you think about Dell's next-generation systems, they're gonna be featuring these new AMD processors. But to your point, when you think about performance claims in this industry, it's a moving target. It's, as you call it, a rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months?

So out of the gate, slated as of right now for a November 10th release, AMD is gonna be first to market, everyone will argue, but first to market with five-nanometer technology in production systems, 96 cores. What's important, though, is that those microprocessors are gonna be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster.
Again, going back to this idea of rolling thunder, we expect the Gen16 PowerEdge servers from Dell to similarly be rolled out in stages, with initial releases that address certain specific kinds of workloads and follow-on releases with a variety of systems configured in a variety of ways.

So I appreciate you painting a picture. Let's stay inside, under the hood, if we can. Share with us what we should know about these next-generation CPUs. How are companies like Dell gonna be configuring them? How important are clock speeds and core counts in these new systems? And you mentioned motherboards, what about next-gen motherboards? You mentioned PCIe Gen5, where does that fit in? So just take us deeper into the system, please.

Yeah, so if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture, and how quickly that interconnect performs is critical. Now, I'm gonna give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen4 to Gen5, which is going to be featured in all of these systems, we double the performance. So you can write that down: two, 2X. The performance is doubled, and the numbers are pretty staggering in terms of gigatransfers per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from fourth gen to fifth gen. But the reality is most users of these systems are still on PCIe Gen3-based systems. So for them, just from a bus architecture perspective, you're doing a 4X leap in performance. And then all of the peripherals that plug into that faster bus are faster, whether it's RAID controllers or storage controllers or network interface cards, companies like Broadcom come to mind. All of their components are leapfrogging their prior generation to fit into this ecosystem.
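The doubling Nicholson describes can be sanity-checked with a little arithmetic. Here's a minimal sketch in Python, assuming an x16 slot and the standard 128b/130b line encoding PCIe has used since Gen3:

```python
# Approximate PCIe bandwidth in one direction:
# transfer rate (GT/s) x lanes x 128/130 encoding efficiency, converted to bytes
def pcie_gbytes_per_s(gt_per_s: float, lanes: int = 16) -> float:
    return gt_per_s * lanes * (128 / 130) / 8  # GB/s, one direction

gen3 = pcie_gbytes_per_s(8)    # PCIe Gen3: 8 GT/s per lane
gen4 = pcie_gbytes_per_s(16)   # PCIe Gen4: 16 GT/s per lane
gen5 = pcie_gbytes_per_s(32)   # PCIe Gen5: 32 GT/s per lane

print(f"Gen5 x16: {gen5:.0f} GB/s per direction")  # ~63 GB/s one way
print(f"Gen4 -> Gen5: {gen5 / gen4:.0f}x")         # the 2X doubling discussed above
print(f"Gen3 -> Gen5: {gen5 / gen3:.0f}x")         # the leap from a Gen3 install base
```

The roughly 128 gigabytes per second cited in the conversation corresponds to the aggregate of both directions on an x16 link.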
So I wonder if we could stay with PCIe for a moment and just understand what Gen5 brings. You said 2X, and I think we're talking bandwidth here. Is there a latency impact? Why does this matter? And on this premise that these other components increasingly matter more, which components of the system are we talking about that can actually take advantage of PCIe Gen5?

Pretty much all of them, Dave. Whether it's memory plugged in, or network interface cards, so communication to the outside world, which servers tend to want to do in 2022, or controllers that are attached to internal and external storage devices, all of them benefit from this enhancement in performance. And PCI Express performance is measured essentially in bandwidth and throughput, in the sense of the number of transfers per second you can do, and it's mind-numbing. I wanna say it's 32 gigatransfers per second. And then in terms of bandwidth, across the lanes that are available, 128 gigabytes per second. I'm gonna have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe Gen4 was before.

So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical, because everything in that server is consuming power. So when you look at next-generation hardware that's driven by things like AMD's Genoa, the EPYC processors with the Zen 4 microprocessors, for every dollar you're spending on power and equipment and everything else, you're getting a far greater return on your investment. Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's gonna be a little bit of sticker shock at first, until you run the numbers.
People will look at an individual server and they'll say, wow, this is expensive. And the peripherals, the things that are going into those slots, are more expensive. But you're getting more bang for your buck. You're getting much more consolidation, lower power usage, and for every dollar you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and out to the end user's desire to get work done.

So I want to come back to that, but let me stay on performance for a minute. It used to be, when you'd go buy a new PC, you'd ask, what's the clock speed? So when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next-gen systems? And again, where does that supporting ecosystem play?

So if you are really into the speeds and feeds and what's under the covers from an academic perspective, you can go in and look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are. But really the answer is: look at the benchmarks that are created through testing, especially from third-party organizations that test these things, for the workloads you intend to use these servers for. So if you are looking to support something like a high-performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered, by the entire system. It's not just about the core. Yeah, it's interesting to look at clock speeds to compare where we are with regard to Moore's Law, whether we've been able to continue to track along that path. We know there are physical limitations to Moore's Law from an individual microprocessor perspective. But none of that really matters. What really matters is: what can this system that I'm buying deliver in terms of application performance and user requirement performance?
So that's what I'd say you want to look for.

So I presume we're going to see these benchmarks at some point. I'm hoping we can have you back on to talk about them. Is that something we can expect in the future?

Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next-gen architecture. If I had to guess, I would say we're going to see quite a few world records set because of the combination of things like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And as you mentioned, AMD is sort of starting off this season of rolling thunder. In a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have a portfolio of products that highlight the advantages of each processor set.

Yeah, I talked at my open, Dave, about the diversity of workloads. What are some of those emerging workloads, and how will companies like Dell address them, in your view?

So a lot of the applications that are going to be supported are what we think of as legacy application environments: a lot of Oracle databases, workloads associated with ERP. All of those things are just going to get better bang for their buck from a compute perspective. But what we're going to be hearing a lot about, and what the future really holds for us that's exciting, is this arena of artificial intelligence and machine learning. These next-gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do cost-effectively before. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it.
AI and ML are going to be the big buzzwords moving forward.

So Dave, you talked earlier about how some people might have sticker shock. Some of the infrastructure pros watching this might be thinking, okay, I'm going to have to pitch this, especially in this tough macro environment. I'm going to have to sell this to my CIO, my CFO. So what does this all mean? If they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management?

As long as you stay away from per-unit cost, you're fine. And again, I don't necessarily have insider access to street pricing on next-gen servers yet. But what I do know from examining what the component suppliers tell us is that these systems are going to be significantly more expensive on a per-unit basis. But what does that mean? If the server you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal. And anyone who looks at it and says, 10 bucks, it used to only be five bucks? Well, the ROI and the TCO, that's where all of this really needs to be measured. And a huge part of that is going to be power consumption. Along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year.

So it's consolidation, which means you can do more with less, or more with the same. It's going to be lower power, less cooling, less floor space and lower management overhead, which gets you into the staff. So you're going to have to identify how the staff can be productive in other areas.
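The per-unit versus per-work framing above reduces to simple arithmetic. A sketch using Nicholson's hypothetical five-bucks-versus-ten-bucks numbers (the figures are illustrative, not actual server pricing):

```python
# Cost per unit of work: sticker price divided by relative throughput.
def cost_per_unit_of_work(price: float, relative_throughput: float) -> float:
    return price / relative_throughput

old_server = cost_per_unit_of_work(price=5, relative_throughput=1)   # 5.0 per unit of work
new_server = cost_per_unit_of_work(price=10, relative_throughput=5)  # 2.0 per unit of work

# The "expensive" server is actually cheaper per unit of work delivered,
# before even counting power, cooling, floor space and management savings.
savings = 1 - new_server / old_server
print(f"{savings:.0%}")  # 60%
```

This is why the advice is to stay away from per-unit cost: the sticker doubles, but the cost of each transaction delivered falls.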
You know, probably not going to fire people, hopefully. But yeah, it sounds like it's going to be a real consolidation play.

I talked at the open about Intel and AMD, and Intel coming out with Sapphire Rapids. Of course it's been well documented that it's late, but it's now scheduled for January. Pat Gelsinger's talked about this. And of course they're going to try to leapfrog AMD, and then AMD is going to respond. You talked about this earlier. So that game is going to continue. How long do you think this cycle will last?

Forever. It's just that there will be periods of excitement, like we're going to experience over at least the next year, and then there will be a lull, and then there will be another period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. Specifically, you think about ARM, where the original design point was: okay, you're powered by a battery, you have to fit in someone's pocket, you can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the x86 model, which is: okay, you have a data center with a raised floor and a nuclear power plant down the street, so don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. And so you would think that over time, ARM is going to creep up, as all disruptive technologies do. And we've seen that. We've definitely seen that. But I would argue we haven't seen it happen as quickly as maybe some of us expected. And then you've got Nvidia kind of off to the side, starting out heavy in the GPU space, saying, hey, you know what? You can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, certainly gnawing at the traditional x86 vendors.

Yes. I'm glad you brought up ARM and Nvidia.
But maybe it hasn't happened as quickly as many thought, although there are clearly pockets and examples where it is taking shape. This, to me, Dave, speaks to the supporting cast. It's not just about the microprocessor unit anymore, generally, but specifically the x86. It's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds.

Yeah, 100%. Look at it this way. A server used to be measured, well, they still are, by how many U of rack space it takes up. You had pizza-box servers with a physical enclosure. Increasingly, you have the concept of a server, in quotes, being the aggregation of components that are all plugged together, that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage or certainly networks. You talk about HPC, it's not just one server. It's hundreds or thousands of servers. So you could argue that we're in the era of connectivity, and the real critical changes we're going to see with these next-generation server platforms are really centered on the bus architecture, PCIe Gen5, and the things that get plugged into those slots. So if you're looking at 25-gig or 100-gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet (RoCE) and what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision: okay, we're going to buy these systems with these microprocessors, with this number of cores and this memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like?
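To make the connectivity point concrete, here's a rough sketch of the ideal wire time for moving data at the NIC speeds mentioned, ignoring protocol overhead (the 10 TB dataset size is an illustrative assumption, not from the conversation):

```python
# Ideal bulk-transfer time: dataset size in gigabits divided by line rate.
def transfer_minutes(terabytes: float, gigabits_per_s: float) -> float:
    gigabits = terabytes * 8_000  # 1 TB = 8,000 gigabits (decimal units)
    return gigabits / gigabits_per_s / 60

for nic_gbps in (10, 25, 100):
    print(f"{nic_gbps:>3} GbE: {transfer_minutes(10, nic_gbps):6.1f} min for 10 TB")
# Moving from 10 GbE to 100 GbE cuts the wire time roughly from hours to minutes,
# which is why NIC choice can matter as much as the CPU for clustered workloads.
```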
So yeah, the definition of what constitutes a server, and of what's critically important, has definitely changed.

All right, Dave, let's wrap. What can our audience expect in the future? You talked earlier about the benchmarks we're going to be able to get so that we can quantify these innovations we've been talking about. Bring us home.

Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records, and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way, because our understanding is these things on a per-unit basis are going to be more expensive, and you're going to have to justify them. So that's really the detail that's going to come the day of the launch and in subsequent weeks. I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So hang on, it's going to be a fun ride.

All right, Dave, we're going to leave it there. Thanks so much, my friend, appreciate you coming on.

Thanks, Dave.

Okay, and don't forget to check out the special website we've set up for this ongoing series. Go to doeshardwarematter.com and you'll see commentary from industry leaders. We've got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time.