The ascendancy of cloud and SaaS has shone new light on how organizations think about, pay for and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, developers with expertise in microservices, container application development and the like. Even Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware, which begs the question: is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot and manage hardware infrastructure is shifting. At the same time, we've seen the value flow in hardware also shifting. Once a world dominated by x86 processors, value is flowing to alternatives like NVIDIA and Arm-based designs. Moreover, other componentry like NICs, accelerators and storage controllers is becoming more advanced, integrated and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs and the broader society? Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we've organized a special power panel of industry analysts and experts to address the question: does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of The CTO Advisor, and Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. Good to be here. Thanks for having us. 
Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter, a survey of about 1,200 to 1,500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here on this X-Y axis. The vertical axis is something called Net Score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area minus the percentage spending less. You subtract the lesses from the mores and you get a Net Score. The horizontal axis is pervasiveness in the data set. Sometimes they call it Market Share. It's not like IDC market share; it's just the activity in the data set as a percentage of the total. That red 40% line: anything over that is considered highly elevated. And for the past eight to 12 quarters, the big four have been AI and machine learning, containers, RPA and cloud. And cloud of course is very impressive because not only is it elevated on the vertical axis, it's very highly pervasive on the horizontal. So what I've done is highlighted in red the historical hardware sectors: servers, storage, networking and even PCs, which despite the work-from-home trend are depressed in relative terms, and of course data center colocation services. Okay, so you see obviously people don't have the spending momentum in hardware today that they used to; they've got other priorities, et cetera. But I want to start and go kind of around the horn with each of you. What is the number one trend that each of you sees in hardware and why does it matter? Bob O'Donnell, can you please start us off? Sure, Dave. So look, I mean hardware is incredibly important, and one comment first I'll make on that slide is let's not forget that even though it may not be growing, the amount of money spent on hardware continues to be very, very high. 
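As a quick sketch, the "subtract the lesses from the mores" calculation described above boils down to a few lines. This is a simplified illustration with a hypothetical `net_score` helper; ETR's actual methodology also distinguishes adoption, flat and replacement responses.

```python
def net_score(responses):
    """Net Score: percentage of customers spending more on a sector,
    minus the percentage spending less.

    `responses` is a list of strings like "more", "less" or "flat",
    a simplified stand-in for ETR's actual survey categories.
    """
    n = len(responses)
    pct_more = 100 * sum(r == "more" for r in responses) / n
    pct_less = 100 * sum(r == "less" for r in responses) / n
    return pct_more - pct_less

# 10 buyers: 5 spending more, 2 less, 3 flat -> Net Score of 30
print(net_score(["more"] * 5 + ["less"] * 2 + ["flat"] * 3))  # 30.0
```

Anything above the 40 mark on this measure is what the chart treats as highly elevated spending momentum.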
It's just a little bit more stable and not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You referred to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like obviously GPUs, DPUs. We've got VPUs for computer vision processing. We've got AI-dedicated accelerators. We've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures, and that's been happening for a while, but now we're seeing them more widely deployed, and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than we've traditionally had. The other thing is, excuse me, the power requirements based on where geographically that compute happens are also evolving. This whole notion of the edge, which I'm sure we'll get into in a little bit more detail later, is driven by the fact that where the compute actually sits, closer in theory to the edge, and where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices. So all of those things are being impacted by this growing diversity in chip architectures, and that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. Excellent, great points. Thank you, Bob. Zeus, up next please. 
Yeah, and I think the other thing to remember when you look at this chart is that through the pandemic and the work-from-home period a lot of companies did put their office modernization projects on hold, and you heard that echoed, you know, from really all the network manufacturers anyway: companies had projects underway to upgrade networks and they put them on hold. Now that people are starting to come back to the office they're looking at that again, so we might see some change there. But Bob's right, the size of those markets is quite a bit different. I think the other big trend here is that the hardware companies, at least in the areas where I look, in networking, are understanding now that it's a combination of hardware and software and silicon working together that creates that optimum type of performance and experience, right? So some things are best done in silicon, like data forwarding and things like that. Historically, when you look at the way network devices were built, you did everything in hardware: you configured in hardware, you did all the data forwarding and you did all the management. That's been decoupled now, so more and more of the control element has been placed in software. A lot of the high-performance things, encryption and, as I mentioned, data forwarding, packet analysis, stuff like that, are still done in hardware, but not everything is, and so it's a combination of the two. I think for the people that work with the equipment as well there's been more of a shift to understanding how to work with software, and this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more about being a software power user: can you pull things out of software, can you work through API calls and things like that. But I think the big trend here, David, is that it's a combination of hardware and software working together that really makes a difference, and how much you invest in hardware versus software kind of 
depends on the performance requirements you have, and I'll talk about that later, but that's really the big shift that's happened here: the vendors have figured out how to optimize performance by leveraging the best of all of those. Excellent, you guys both brought up some really good themes that we can tap into. Dave Nicholson, please. Yeah, so just kind of picking up where Bob started off, not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved, from a hardware perspective, from kind of a server or system design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, we're not in so much a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components, and it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we're moving out of this CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. Yeah, great, and thank you, David. And Keith Townsend, I've been really interested in your perspectives on this. I mean for years you worked in a data center surrounded by hardware; now that we have the software-defined data center, please chime in here. 
Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software: infrastructure as code is a theme. What does that code look like? We're still trying to figure that out, but it's about servicing up these capabilities that the previous analysts have brought up. How do I ensure that I can get the level of services needed for the applications that I need, whether they are legacy traditional data center workloads, AI/ML workloads or workloads at the edge? How do I codify that and consume it as a service? And hardware vendors are figuring this out: HPE with the big push into GreenLake as a service, and Dell now with APEX, taking these bare-bones components we need, moving forward with DDR5, CXL, et cetera, and servicing that as code or as services. This is a very tough problem as we transition from consuming hardware-based configuration to this infrastructure-as-code paradigm shift. Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier. Okay, last but not least, Marc Staimer, please. Thanks, Dave. My peers raised really good points and I agree with most of them, but I'm going to disagree with the title of this session, which is does hardware matter. It absolutely matters. You can't run software on air. You can't run it in an ephemeral cloud, although there's the theoretical cloud, but that's a different issue. The cloud has kind of changed everything from a market perspective. In the 40-plus years I've been in this business I've seen this perception that hardware has to go down in price every year, and part of that was driven by Moore's law, and we're coming to, let's say, a lag or an end, depending on who you talk to, of Moore's law. So we're not doubling our transistors every 18 to 24 months in a chip, and as a result of that there's been a higher emphasis on software. From a market perception there's no penalty: they don't put the same pressure on software from the market to reduce 
the cost every year that they do in hardware, which is kind of bass-ackwards when you think about it. Hardware costs are fixed; software costs tend to be very low. It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software, from an OPEX versus CAPEX perspective. So yes, hardware matters, and we'll talk about that more at length. You know, I want to follow up on that, and I wonder if you guys have a thought on this, maybe Bob O'Donnell; you and I have talked about this a little bit. Marc, you just pointed out that Moore's law is sort of waning. Pat Gelsinger said recently at Intel's investor meeting that he promised Moore's law is alive and well, and the point I made in one Breaking Analysis was, okay, great, Pat's doubling transistors every 18 to 24 months, let's say that Intel can do that even though we know it's waning somewhat. Look at the M1 Ultra from Apple: in about 15 months they increased transistor density on their package by six X. So to your earlier point, Bob, we have these sort of alternative processors that are really changing things, and to Dave Nicholson's point there's a whole lot of supporting components as well. Do you have a comment on that, Bob? 
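Dave's comparison can be made concrete with some back-of-the-envelope arithmetic, using only the figures cited above (2x every 18 to 24 months for Moore's law, roughly 6x in roughly 15 months for the M1 to M1 Ultra claim). Treat this as illustrative, not a formal benchmark.

```python
# Implied annual growth rate in transistor count under each claim.

# Moore's law as cited: 2x every 18-24 months; take the slower 24-month case.
moore_annual = 2 ** (12 / 24)   # ~1.41x per year

# Apple M1 -> M1 Ultra as cited: ~6x package transistor density in ~15 months.
apple_annual = 6 ** (12 / 15)   # ~4.19x per year

print(f"Moore's law pace:   {moore_annual:.2f}x per year")
print(f"M1 -> M1 Ultra pace: {apple_annual:.2f}x per year")
```

Even granting Intel the faster 18-month cadence (2 ** (12 / 18), about 1.59x per year), the cited Apple trajectory is several times steeper, which is the point being made about alternative processor designs.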
Yeah, I mean it's a great point, Dave, and one thing to bear in mind as well: not only are we seeing a diversity of these different chip architectures and different types of components, as a number of us have raised, but the other big point, and I think it was Keith that mentioned it, is CXL, and interconnect on the chip itself is dramatically changing. A lot of the more interesting advances that are going to continue to drive Moore's law forward, in terms of the way we think about performance if perhaps not the number of transistors per se, are the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together, eventually in sort of a Lego-block style. Not only is that going to give interesting performance possibilities because of the faster interconnects, so you can have shared memory between things, which for big workloads like AI with huge data sets can make a huge difference compared to talking to memory over a network connection, for example, but you're also going to see more diversity in the types of solutions that can be built, so we're going to see even more choices in hardware from a silicon perspective, because you'll be able to piece together different elements. And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed, because of Moore's law, with the size of each individual transistor, and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true. But we've already hit the point where things like RF for 5G and Wi-Fi and other wireless technologies, and a whole bunch of other things, actually don't get any better with a smaller transistor size; they actually get worse. So the beauty of these chiplet architectures is you can actually combine different chip manufacturing sizes 
you hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application, yet together they can give you the best of all worlds. We're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, it gets back to my comment about different types of devices located in geographically different places: at the edge, in the data center, in a private cloud versus a public cloud. All of those things are going to be impacted, and there will be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. Yeah, David Nicholson's got a graphic on that that we're going to show later. Before we do that, I want to introduce some data, and I actually want to ask Keith to comment on this before we go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware, and you can see the red is where they had significant issues. It's most pronounced in laptops and networking hardware on the far right-hand side, but in virtually all categories, firewalls, peripherals, servers, storage, you're having either moderately difficult procurement issues, that's the sort of pinkish, or significant challenges. So Keith, what are you seeing with your customers in the hardware supply chains and bottlenecks? We're seeing it with automobiles and appliances, but the semiconductor challenges go beyond IT. What's been the impact on the buyer community and society, and do you have any sense as to when it will subside? 
You know, I was just asked this question yesterday and I'm feeling the pain. You know, people question why, but as kind of a side project within The CTO Advisor we built a hybrid infrastructure, a traditional IT data center that we're walking through with the traditional customer and modernizing. So it was, you know, kind of a snapshot in time, 2016, 2017: 10-gigabit Arista switches, some older Dell R730xd servers, you know, speeds and feeds. And we said we would modernize that with the latest Intel stack and connect it to the public cloud, and then the pandemic hit, and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10-gig networking to the 25-gig networking path that customers are going on, but the 10-gig network switches that I bought used are now double the price, because you can't get legacy 10-gig network switches; all of the manufacturers are focusing capacity on the more profitable 25 gig. Even the 25-gig switches, and we're focused on networking right now, are hard to procure; we're talking about nine to 12 months or more of lead time. So we're seeing customers adjust by adopting cloud, but if you remember, early on in the pandemic Microsoft Azure kind of gated customers that didn't have a capacity agreement, so customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor, to be able to control or provision your IT services the way we do with VMware vSphere or some other virtualization technology, where it doesn't matter who can give me the hardware, they can just get me the hardware, because it's critically impacting projects and timelines. So that's a great setup, Zeus, for you. Keith mentioned earlier the software-defined data center. With software-defined networking and cloud, do you see a day where networking hardware is commoditized and it's all about the software, or are we there already? 
No, we're not there already, and I don't see that really happening anytime in the near future. I do think it's changed, though, and just to be clear, I mean, when you look at that data, it's saying customers have had problems procuring the equipment, right? And there's not a network vendor out there, I've talked to Norman Rice at Extreme, I've talked to the folks at Cisco and Arista, and they all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore. I do think, though, when it comes to networking, the network has certainly changed some, because there's a lot more control, as I mentioned before, that you can do in software. And I think customers need to start thinking about the types of hardware they buy and where they're going to use it and what its purpose is, because I've talked to customers that have tried to run software on commodity hardware where the performance requirements are very high, and it bogged down, right? It just doesn't have the horsepower to run it. And even when you do that, you have to start thinking of the components you use, the NICs you buy. I've talked to customers that have simply gone through the process of replacing a NIC card in a commodity box and had some performance problems, and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware; I think that works in some cases. If performance, though, is more important, that's when you need that kind of turnkey hardware system, and I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups today about how they think about it when they come to market, they're delivering things more on appliances, because that's what customers want. And so there's this pendulum of agility and performance, and if performance absolutely matters, that's when you do need to buy these kind of turnkey 
pre-built hardware systems; if agility matters more, that's when you can go more to software. But the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? You know, maybe, but I'll long be retired by that point, so I don't care. Well, you bring up a good point, Zeus, and I remember the early days of cloud, the narrative was, oh, the cloud vendors, they, you know, they don't use EMC storage, they just run on commodity storage. And then, lo and behold, you know, they trotted out James Hamilton to talk about all the custom hardware that they were building, and you saw the same from Google and Microsoft. We've been calling for commodity hardware forever, right, I mean all the way back to the turn of the century, and it's never really happened, because as long as you can drive innovation into it, you know, customers will always lean toward the innovation cycles, because they get more features faster, and so the vendors have done a good job of keeping that cycle up. It's been a long time, of course. Yeah, and that's why you see companies like Pure Storage, a storage company, with 69% gross margins. All right, I want to jump ahead; we're going to bring up slide four. I want to go back to something that Bob O'Donnell was talking about, this sort of supporting cast, you know, the diversity of silicon. We've marched to the cadence of Moore's law for decades. You know, we ask is Moore's law dead, we say it's moderating. Dave Nicholson, you want to talk about those supporting components? You shared with us a slide showing what you call a shift from a processor-centric world to a connectivity-centric world. What do you mean by that? Let's bring up slide four and you can talk to that. Yeah, yeah, so first I want to echo the sentiment that, you know, to the question of does hardware matter, the answer is of course it matters. Maybe the real question should be 
should you care about it? And the answer to that is, it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together, you just care that the service is delivered. But as you back away from that and you get closer and closer to the source, someone needs to care about the hardware, and it should matter. Why? Because essentially what hardware is doing is consuming electricity and dollars, and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much you can deliver, but it also ends up being a qualitative change, as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it. So this chart actually comes out of some performance tests that were done; it happens to be Dell servers with Broadcom components, and the point here was to peel off the top of the server and look at what's in that server, starting with the PCI interconnect: PCIe Gen3, Gen4, moving forward. What are the effects, from an interconnect perspective, on application performance, translating into new orders per minute processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way, and Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs just running the performance tests, without any actual database environments working. So right now we're at this sort of imbalance point, where you have to make sure you design things properly to get the most bang per kilowatt-hour of power, per dollar of input. So the key thing here, what this is highlighting, as just a very specific example: you take a card that's designed as 
a Gen3 PCIe device and you plug it into a Gen4 slot, now the card is the bottleneck. You plug a Gen4 card into a Gen4 slot, now the Gen4 slot is the bottleneck. So we're constantly chasing these bottlenecks, and someone has to be focused on that from an architectural perspective; it's critically important. So there's no question that it matters, but of course various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. So okay, I want to get to the so-what: what does this all mean to customers? What I'm hearing from you is that balancing a system is becoming more complicated, and I've kind of been waiting for this day for a long time, because as we all know the bottleneck was always the spinning disk, the last mechanical device. So people who wrote software knew that when they were doing a write, the disk had to go and do stuff, and so they were doing other things in the software in the meantime. And now, with all these new interconnects and flash, you can do things like atomic writes, and so that opens up new software possibilities. Combine that with alternative processors: what's the so-what on this to the customer, and the application impact? Can anybody address that? 
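David's Gen3-card-in-a-Gen4-slot point can be sketched numerically. The per-lane figures below are the commonly quoted approximate usable rates for PCIe 3.0 and 4.0 after encoding overhead (roughly 1 GB/s and 2 GB/s per lane); the key idea is simply that a link negotiates down to the slower of the two sides.

```python
# Approximate usable PCIe bandwidth per lane, in GB/s (after encoding overhead).
PCIE_PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def link_bandwidth(card_gen, slot_gen, lanes=16):
    """A PCIe link runs at the slower side: min(card generation, slot generation)."""
    negotiated_gen = min(card_gen, slot_gen)
    return PCIE_PER_LANE_GBPS[negotiated_gen] * lanes

# Gen3 card in a Gen4 slot: the card is the bottleneck (~15.8 GB/s on x16).
print(round(link_bandwidth(card_gen=3, slot_gen=4), 1))
# Gen4 card in a Gen4 slot: now the slot/link itself is the ceiling (~31.5 GB/s).
print(round(link_bandwidth(card_gen=4, slot_gen=4), 1))
```

Upgrading either side alone changes nothing, which is exactly the "constantly chasing these bottlenecks" dynamic described above.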
Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said and David said. I'm a bit of a contrarian on some of this. For example, on the chip side, as the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect in the chip, because the wires get smaller. People don't realize, in 2004 the latency on those wires in the chip was 80 picoseconds; today it's 1,300 picoseconds. That's on the chip. This is why they're not getting faster. So we may be getting a little bit of slowing down in Moore's law, but even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip. It goes within the system, composable architectures; it goes to the point Keith made, that ultimately you need a hybrid. Because what I'm seeing when I'm talking to customers is that the biggest issue they have is moving data. Whether it be in a chip, in the system, in a data center or between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let's say, your transactional database to your machine learning, that's the bottleneck, it's moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time trying to move the data and more time taking the compute, the software running on hardware, closer to the data. So go ahead. So is this what you mean? Nicholson was talking about a shift from a processor-centric world to a connectivity-centric world. You're talking about moving the bits across all the different components; you're saying the processor, or the memory I guess, is essentially becoming the bottleneck. Well, that's one of them, and there's a lot of different bottlenecks, but it's the data movement itself. The thinking is moving away from 
why do we need to move the data? Can we move the compute, the processing, closer to the data? Because if we keep them separate, and this has been a trend now where people are moving the processing away, it's like the edge. I think it was Zeus or David, you were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet, or is it a sensor? If it's a sensor, how do you do AI at the edge when you don't have enough power, you don't have enough compute? People are inventing chips to do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing, because the lag in latency is always limited by the speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, all the improvement we're seeing in the PCIe bus, from three to four to five, to CXL, higher bandwidth on the network, that's all great, but none of that deals with speed-of-light latency, and that's an issue. Go ahead. Marc, no, no, I just want to, because what you're referring to can be looked at at a macro level, which I think is what you're describing, but you can also look at it at a more micro level, from a systems design perspective. I'm going to be the resident knuckle-dragging hardware guy on the panel today, but it's exactly right: moving compute closer to data includes concepts like peripheral cards that have built-in intelligence, right? Again, in some of this testing that I'm referring to, we saw dramatic improvements when you took the horsepower for things like I/O away from the CPU; now you have essentially offload engines in the form of storage controllers, RAID controllers, and of course Ethernet NICs, smart NICs. And so you can have these sort of offload engines, and we've gone through these waves over time. People think, do we really need a RAID controller with NVMe flash storage devices? Does that make sense? 
It turns out it does. Why? Because you're actually, at a micro level, doing exactly what you're referring to: you're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to, but it is important. Again, going back to this idea of system design optimization: always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt-hour of power and every dollar. Yeah, well this whole drive for performance has created some really interesting architectural designs, right? Like, you know, one of the drivers of the DPU, right, it brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. If you look at the way NVIDIA goes to market, their DRIVE kit is a pre-built piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and Arista to build that AI-ready infrastructure, and I remember when I talked to Charlie Giancarlo, the CEO of Pure, about when the three companies rolled that out. He said, look, if you're going to do AI, you need good storage, you need fast storage, a fast processor and a fast network, and for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well, so the three companies partnered to create a fully integrated turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways, the hardware was leading the software innovation. The variety of different architectures we have today around hardware has really exploded, and I think part of that is what, you know, Bob brought up at the beginning about the different chip designs. Yeah, Bob talked about that 
earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud, and it looks, from my standpoint anyway, like the future is going to be a lot of AI inferencing at the edge, and that's a radically different architecture, Bob, isn't it? It is, it's a completely different architecture, and just to follow up on a couple of points, excellent conversation, guys. Dave talked about system architecture, and really that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components, the new interconnect methods. There's this new thing called UCIe (Universal Chiplet Interconnect Express), I forget even exactly what it stands for, but it's a mechanism for doing chiplet architectures. But then again, you have to take it up to the system level, because it's all fine and good if you have this SoC that's tuned and optimized, but it has to talk to the rest of the system, and that's where you see other issues, and you've seen things like CXL and other interconnect standards. You know, nobody likes to talk about interconnect because it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important, exactly to the other points that were being raised, like Marc raised, for example, about getting that compute closer to where the data is. And that's where, again, a diversity of chip architectures helps. And exactly to your last comment there, Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing in semiconductor design: maybe it's an FPGA, maybe it's a dedicated AI chip, another kind of chip architecture that's being created to do that inferencing on the edge. Because again, the cost and the challenges of moving lots of data, whether it be from, say, a smartphone to a cloud-based application, or whether it be from a private network to a cloud, or any other kinds of permutations we can think of, really matter. And the 
other thing is we're tackling bigger problems. So architecturally, not just within a system, but when we think about DPUs and the sort of east-west data center traffic conversation that we hear NVIDIA and others talk about, it's about combining multiple sets of these systems to function together more efficiently, again with even bigger sets of data. So it really is about tackling where the processing is needed, having the interconnect and the ability to get the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential than I would argue it is today. And so I think not only does hardware matter, it's going to matter even more in the future than it does now. Great discussion, guys. I want to bring Keith back into the conversation here. Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So maybe clearly hardware matters, but with software-defined everything, do people with hardware expertise matter outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset in VMware, so it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software-defined, hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT?
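[Editor's note] The "software-defined everything" shift Dave is asking about can be made concrete with a minimal sketch. The `StorageArray`-style API below is entirely hypothetical, not any vendor's real interface: the point is only that the practitioner declares intent (size, performance tier, protection policy) and the platform's automation makes the placement decisions an admin used to make by hand.

```python
# A minimal sketch of software-defined storage provisioning, assuming a
# hypothetical declarative API. Pool names and policy strings are invented
# for illustration.
from dataclasses import dataclass

@dataclass
class VolumeRequest:
    name: str
    size_gb: int
    performance_tier: str   # e.g. "gold" = low-latency flash
    protection: str         # e.g. "snap-hourly"

def provision(request: VolumeRequest) -> dict:
    """Translate declarative intent into the placement decisions a human
    used to make by hand (pool selection, snapshot schedule, etc.)."""
    # The platform, not the admin, picks the pool and applies policy.
    pool = "flash-pool-1" if request.performance_tier == "gold" else "hybrid-pool-1"
    return {
        "volume": request.name,
        "pool": pool,
        "size_gb": request.size_gb,
        "snapshots": "hourly" in request.protection,
    }

req = VolumeRequest("analytics-scratch", 500, "gold", "snap-hourly")
print(provision(req))
```

The design point is that the human-facing surface shrinks to the request object; everything below it is automatable, which is exactly the skills shift the panel discusses next.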
So I love the question, and I'll take a different view of it. If you're a data analyst and your primary value add is that you do ETL transformation: I talked to a CDO, a chief data officer, from a midsize bank a little bit ago. He said 80% of his data scientists' time is spent on ETL, super not value add. He wants his data scientists to do data science work. So chances are, if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. As infrastructure pros, we want to give infrastructure pros the opportunity to shine, and I think the software-defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HPE, Lenovo, take your pick, or Pure Storage and NetApp, that are doing the automation and the ML needed so that these practitioners don't spend 80% of their time doing LUN provisioning, and can focus on their true expertise, which is ensuring that data is stored, data is retrievable, data is protected, et cetera. I think the shift is to focus on that part of the job, ensuring the data no matter where the data's at, because my data is spread across the enterprise, hybrid, different types. You know, today you talk about the super cloud a lot. If my data is in the super cloud, protecting that data and securing that data becomes much more complicated than when I was just procuring or provisioning LUNs. So when you ask where the shift should be, focus on the real value, which is making sure that customers can access data, can recover data, can get data at the performance levels that they need, within the price point that they need, to get those data sets where they need them. And we talked a lot about where they need it. One last point about this interconnect thing. I have this vision, and I think we all do, of composable infrastructure. That's the idea that scale-out does not solve every problem. The cloud can give me infinite scale-out, but sometimes I just need a single OS
with 64 terabytes of RAM and 204 GPUs, or GPU instances. That single OS does not exist today, and the opportunity is to create composable infrastructure so that we can solve a lot of these problems that simply don't scale out. You know, wow, so many interesting points there. I had just interviewed Zhamak Dehghani, the founder of data mesh, last week, and she made a really interesting point. She said, think about it: we have separate stacks. We have an application stack, and we have a data pipeline stack. And the transaction systems, the transaction database, we extract data from that, to your point we ETL it in, it takes forever, and then we have this separate data stack. If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is, have to come together. And when you think about super cloud, bringing compute to data, that was what Hadoop was supposed to be; it ended up all sort of going into a central location. It's almost a rhetorical question, but it seems that that necessitates new thinking around hardware architectures, that kind of everything's-the-edge model. And the other point, to your point, Keith, it's really hard to secure that. So when you think about offloads, right, you've heard the stats, NVIDIA talks about it, Broadcom talks about it, that 25 to 30% of CPU cycles are wasted on doing things like storage offloads or networking or security. It seems like, and maybe Zeus you have a comment on this, it seems like new architectures need to come together to support all of that stuff that Keith and I just spewed. And by the way, I do want to ask Keith about the point I made at the beginning, too, that engineers do need to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year, where when they surveyed their engineer base, only about a third of them had ever made an API call, which kind of shows this big
skill set change that has to come. But on the point of architectures, I think the big change here is the edge, because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud; we'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what it creates is the rise of distributed computing, where we'll have an application that actually accesses different resources in different edge locations. And I think, Mark, you were talking about this: the edge could be an IoT device, it could be your campus edge, it could be the cellular edge, it could be your car, right? And so we need to start thinking about how our applications interact with all those different parts of that edge ecosystem to create a single experience. A lot of consumer apps largely work that way. Think of an app like Uber, right? It pulls in information from all kinds of different edge locations and edge services, and it creates a pretty cool experience. We're just starting to get to that point in the business world now. There's a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy, what data resides where, where I do my processing, where I do my AI, and things like that. And it actually makes the world more complicated in some ways; we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially, in order to ensure performance and security. Right, Mark, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. We've watched Oracle's move from buying Sun and then basically using that in a highly differentiated approach, engineered systems. What's your take on all that? I know
you also have some thoughts on the shift from CAPEX to OPEX, so chime in on that. Sure. When you look at it, there are advantages to having one vendor who has the software and hardware, who can synergistically make them work together in ways you can't on a commodity basis, where you own the software and somebody else has the hardware. An example would be Oracle, as you talked about, with their Exadata platform. They literally are leveraging microcode in the Intel chips, and now in AMD chips, all the way down to Optane. They make, basically, AMD database servers work with Optane persistent memory, PMEM, in their storage systems. Not NVMe SSDs, PMEM, I'm talking about the cards themselves. There are advantages you can take advantage of if you own the stack, as you were pointing out earlier, Dave, both the software and the hardware. Okay, that's great, but on the other side of that, it tends to give you better performance but tends to cost a little more. On the commodity side it costs less, but you get less performance. As Zeus said earlier, it depends where you're running your application, how much performance you need, and what kind of performance you need. One of the things about moving to the edge, and I'll get to the OPEX-CAPEX in a second, one of the issues about moving to the edge is what kind of processing you need. If you're running in a CCTV camera on top of a traffic light, how much power do you have, how much cooling do you have to run this? And more importantly, do you have to take the data you're getting and move it somewhere else to get processed, with the information sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU, without any additional memory. So there is innovation going on to deal with this question of data movement. There are companies out there like Tachyum that are combining GPUs, CPUs and TPUs in a single chip. Think of it as a super composable architecture. Yeah, they're looking at
being able to do more in less. On the OPEX and CAPEX, it should be... Okay, hold that thought on the OPEX-CAPEX, because we're running out of time, and maybe you can wrap on that. I just wanted to pick up on something you said about integrated hardware and software. I mean, other than the fact that Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin-in with VMware, basically becoming the Oracle of hardware. Now, I know it would have been a nightmare for the ecosystem, and culturally they probably would have had a VMware brain drain, but does anybody have any thoughts on that as a sort of thought exercise? I was always a fan of that on paper. Yeah, I gotta eat a little crow. I did not like the Dell-VMware acquisition. For the industry in general, I think it hurt the industry. HPE and Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. I gotta be honest, they absolutely loved the integration. The VxRail and VxRack solutions exploded; Nutanix became kind of an afterthought when it came to competing. So that spin-in, when we talk about the ability to innovate and the ability to create solutions that you simply can't create because you don't have the full stack, Dell was well positioned to do that with a potential spin-in of VMware. Yeah, we're gonna... go ahead, please. Yeah, in fact, I think you're right, Keith. It was terrible for the industry, great for Dell. And I remember talking to Chad Sakac when he was running VCE, which became VxRack and VxRail. Their ability to stay in lockstep with what VMware was doing, given that the number one workload running on hyper-converged was always VMware, gave them a huge competitive advantage, and Dell came out of nowhere in the hyper-converged market and just started taking share because of that relationship. So, I guess from a Dell perspective, I thought it
gave them a pretty big advantage that they didn't really exploit across their other properties, right, in networking and servers and things like that, which they could have, given the dominance that VMware had. From an industry perspective, though, I do think it's better to have them decoupled. I agree. I think they could have dominated in the super cloud, and maybe they would have become the next Oracle, where everybody hates them but they kick ass. But we've got to wrap up here, so what I'm going to ask you, going in reverse order this time: big takeaways from this conversation today, which, guys, by the way, I can't thank you enough for, phenomenal insights. Big takeaways, any final thoughts, any research that you're working on that you want to highlight, what you're looking for in the future. Try to keep it brief. We'll go in reverse order. Maybe, Mark, you could start us off, please. Sure. On the research front, I'm working on the total cost of ownership of an integrated database, analytics and machine learning system versus separate services. The other aspect I wanted to chat about real quickly is OPEX versus CAPEX. The cloud changed the market perception of hardware in the sense that you can use hardware, or buy hardware, like you do software: as you use it, paying for what you use, in arrears. The good thing about that is you're only paying for what you use, period, not paying for what you don't use, in compute time and everything else. The bad side is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. From a budgeting perspective it's very hard to set up your budget year to year, and it's causing a lot of nightmares. So it's just something to be aware of. From a CAPEX perspective, you have no more CAPEX if you're using that kind of base system, but you lose a certain amount of control as well. So ultimately those are some of the issues, but my biggest takeaway from this is the biggest
issue right now for everybody I talk to, in some shape or form, comes down to data movement, whether it be the ETL that you talked about, Keith, or other aspects: moving it between hybrid locations, moving it within a system, moving it within a chip. All those are key issues. Great, thank you. Okay, CTO Advisor, give us your final thoughts. All right, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of an all-primary data center to a hybrid. I have this hard-earned philosophy that enterprise IT is additive: when we add a service, we rarely subtract a service, so the landscape and surface area of what we support has to grow. So our research focuses on taking that walk. We're taking a monolithic application, decomposing it into containers, putting it in a public cloud, connecting it back to the private data center, and telling that story and walking that walk with our customers. This has been a super enlightening panel. Thank you. Real different world coming. David Nicholson, please. You know, it really harkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where the lion's share of spend will still be in coming years, which is on-prem, and then of course, obviously, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids got pushed into the future. When's the next Intel release coming? Who knows. We think in 2023. There have been a lot of people standing by, from a practitioner standpoint, asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or to go from a last generation to a current generation, when we know the next generation
is coming, let's say, 12 months from now? And so I've been very, very focused on looking at how these connectivity components, like RAID controllers and NICs, and I know it's not as sexy as talking about cloud, but just how these components completely change the game, and actually can justify movement from, say, a 14th generation architecture to a 15th generation architecture today, even though gen 16 is coming. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally, because like I said, I'm in there under the hood, and it's not as sexy, but yeah, that's what I'm focused on, Dave. Well, to paraphrase, maybe a derivative paraphrase of Larry Ellison's rant on what is cloud: it's operating systems and databases, et cetera. RAID controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is because you have such a wide observation space, and Zeus Kerravala, you of all people have your fingers in a lot of pies, so give us your final thoughts. Yeah, I'm not a propeller head like my counterparts here, so I look at the world a little differently, and a lot of the research I'm doing now is on the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and the experiences they deliver to their customers are really differentiating how they go to market. And so they're looking at these different ways of feeding data and analytics and things like that into different places, and I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute, in more places, all the way down to little micro edges, in retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn't have a lot of choice in things. We had a server that was either racked or standing upright, and there
wasn't a whole lot of difference in choice. But today we can deploy these really high performance compute systems on little blades inside servers, or inside autonomous vehicles and things, and I think from here, the choice of what we have in the way of hardware, and how the software works together with it, is really going to change the world and the way we do things. We're already seeing it, like I said, in the consumer world, right? There's so many things you can do, from a smart home perspective, natural language processing, stuff like that, and it's starting to hit businesses now. So just wait and watch the next five years. Yeah, totally. The computing power at the edge is just going to be mind-blowing. It's unbelievable what you can do at the edge. Yeah, hey, Zeus, I just want to say that we know you're not a propeller head, and I for one would like to thank you for having your master's thesis hanging on the wall behind you, because we know that you studied basket weaving. I was actually a physics and math major. Good man, another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts, please. Sure, and just to clarify, by the way, I was a great books major, and this was actually my final paper, so I was into philosophy and literature and all that kind of stuff, but I still somehow got into tech. Look, it's been a great conversation, and I want to pick up a little bit on a comment Zeus made, which is this combination of the hardware and the software coming together, and the manner in which that needs to happen I think is critically important. And the other thing is, because of the diversity of the chip architectures and all those different pieces and elements, it's going to be about how software tools evolve to adapt to that new world. So I look at things like what Intel is trying to do with oneAPI, what NVIDIA has done with
CUDA, and what other platform companies are trying to create: tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures, all these new interconnects, all these new system architectures. Figuring out ways to make that all happen, I think, is going to be critically important. And then finally, I'll mention the research I'm currently working on is on private 5G, how companies are thinking about deploying private 5G, and the potential edge applications for it. So I'm doing a survey of several hundred US companies as we speak, and we're looking forward to getting that done in the next couple of weeks. Yeah, I look forward to that. Guys, again, thank you so much, outstanding conversation. Is anybody going to be at Dell Tech World in a couple of weeks? Bob's going to be there, Dave Nicholson... well, drinks on me. And guys, I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCUBE Insights powered by ETR. Remember, we publish each week on siliconangle.com and wikibon.com, and all these episodes are available as podcasts. DM me or any of these guys, I'm @DaveVellante, or you can email me at david.vellante@siliconangle.com. Check out ETR.ai for all the data. This is Dave Vellante. We'll see you next time.
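[Editor's note] Two of the panel's quantitative points are easy to make concrete. First, the edge-inferencing argument (Bob and Mark's point that moving raw data costs more than moving results), and second, Mark's OPEX budgeting point that pay-per-use bills swing month to month while amortized CAPEX is flat. All figures below are illustrative assumptions, not numbers from the panel.

```python
# Sketch 1: ship raw camera frames to the cloud vs. run inference at the
# edge and ship only compact results. Frame and result sizes are assumed.
def daily_bytes(events_per_day: int, bytes_per_event: int) -> int:
    return events_per_day * bytes_per_event

frames = 24 * 60 * 60                                  # one frame/second for a day
cloud = daily_bytes(frames, bytes_per_event=200_000)   # ~200 KB per raw frame
edge = daily_bytes(frames, bytes_per_event=100)        # small result payload
print(f"cloud upload: {cloud / 1e9:.1f} GB/day, edge upload: {edge / 1e6:.1f} MB/day")
print(f"data-movement reduction: {cloud // edge}x")    # 2000x with these assumptions

# Sketch 2: pay-per-use OPEX tracks consumption, so the bill swings;
# purchased hardware amortizes to a flat monthly cost. Rates are assumed.
monthly_compute_hours = [700, 1100, 650, 1400, 900, 1250]
rate_per_hour = 0.50
opex_bills = [h * rate_per_hour for h in monthly_compute_hours]
capex_bill = 30_000 / 36                               # purchase amortized over 36 months
print("OPEX bills:", opex_bills)                       # varies every month
print("OPEX spread:", max(opex_bills) - min(opex_bills))
print("CAPEX bill:", round(capex_bill, 2), "every month")
```

The spread in the OPEX bills is the budgeting nightmare Mark describes: the elastic model only charges for what you use, but a finance team planning year to year has no fixed number to plan around.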