For decades, the technology industry marched to the cadence of Moore's Law. It was a familiar pattern. System OEMs would design in the next generation of Intel microprocessors every couple of years or so, maybe bump up the memory ranges periodically, and the supporting hardware would go along for the ride, upgrading its performance and bandwidth. System designers might beef up the cache, maybe throw some more spinning disk spindles at the equation to create a balanced environment. The pattern was predictable and consistent, and reasonably straightforward compared to today's challenges. This has all changed. The confluence of cloud, distributed global networks, the diversity of applications, AI, machine learning, and the massive growth of data outside of the data center requires new architectures to keep up. As we've reported, the traditional Moore's Law curve is flattening, and along with that, we've seen new packages with alternative processors like GPUs, NPUs, accelerators and the like, and the rising importance of supporting hardware to offload tasks like storage and security. And it's created a massive challenge to connect all these components together, the storage, the memories, and all of the enabling hardware, and do so securely, at very low latency, at scale, and of course, cost effectively. This is the topic of today's segment: the shift from a world that is CPU-centric to one where the connectivity of the various hardware components is where much of the innovation is occurring. And to talk about that, there is no company that knows more about this topic than Broadcom. With us today is Jas Tremblay, general manager of the Data Center Solutions Group at Broadcom. Jas, welcome to theCUBE.

Hey Dave, thanks for having me, really appreciate it.

Yeah, you bet. Now Broadcom is a company that a lot of people might not know about.
I mean, but the vast majority of the internet traffic flows through Broadcom products, like pretty much all of it. It's a company with trailing 12-month revenues of nearly $29 billion and a $240 billion market cap. Jas, what else should people know about Broadcom?

Well Dave, 99% of internet traffic goes through Broadcom silicon or devices. And I think what people are often not aware of is how broad it is. It starts with the devices, phones and tablets, that use our Wi-Fi technology or RF filters. And then those connect to access points, either at home, at work, or public access points, using our Wi-Fi technology. And if you're working from home, you're using a residential or broadband gateway, and that uses Broadcom technology also. From there, you go to access networks, core networks, and eventually you'll work your way into the data center, all connected by Broadcom. So really, we're at the heart of enabling this connectivity ecosystem. And at the core of it, we're a technology company. We invest about $5 billion a year in R&D. And as you were saying, last year we achieved $27.5 billion of revenue. And our mission is really to connect the ecosystem to enable what you said, this transformation to a data-centric world.

So talk about your scope of responsibility. What's your role, generally and specifically with storage?

So I've been with the company for 16 years and I head up the Data Center Solutions Group, which includes three product franchises: PCIe fabric, storage connectivity, and Broadcom Ethernet NICs. So my charter, my team's charter, is really server connectivity inside the data center.

And what specifically is Broadcom doing in storage, Jas?

So it's been quite a journey. Over the past eight years, we've made a series of acquisitions and built up a pretty impressive storage portfolio. It first started with LSI. And that's where I came from, and the team here came from LSI, which had two product franchises around storage.
The first one was server connectivity: HBAs, RAID, expanders for SSDs and HDDs. The second product group was actually chips that go inside the hard drives, so SoCs and preamps. So that was an acquisition that we made, and actually that's how I came into the Broadcom group, through LSI. The next acquisition we made was PLX, the industry leader in PCIe fabrics. They'd been doing PCIe switches for about 15 years. We acquired the company and really saw an acceleration in the requirements for NVMe attach and AI/ML fabrics, very specialized low-latency fabrics. After that, we acquired a large system and software company, Brocade. And Dave, if you recall Brocade, they're the market leader in Fibre Channel switching. This is where, if you're a financial or government institution, you want to build a mission-critical, ultra-secure, really best-in-class storage network. Following the Brocade acquisition, we acquired Emulex, which is now the number one provider of Fibre Channel adapters inside servers. And the last piece of this puzzle was actually Broadcom itself, where Avago acquired Broadcom and took on the Broadcom name. There we acquired Ethernet switching capabilities and Ethernet adapters that go into storage servers or external storage systems. So with all this, it's been quite the journey to build up this portfolio. We're number one in each of these storage product categories, and we now have four divisions that are focused on storage connectivity.

You know, that's quite remarkable when you think about it. I mean, I know all these companies that you were talking about, and they were very quality companies, but they were kind of bespoke. And the fact that you had the vision to connect the dots and now take responsibility for that integration, we're going to talk about what that means in terms of competitive advantage.
But I wonder if we could zoom out, and maybe you could talk about the key storage challenges and elaborate a little bit on why connectivity is now so important. What are the trends that are driving that shift we talked about earlier, from a CPU-centric world to one that's connectivity-centric?

I think at Broadcom, we recognize the importance of storage and storage connectivity. And if you look at data centers, whether it be private, public cloud, or hybrid data centers, they're getting inundated with data. If you look at the digital universe, it's growing at about a 23% CAGR. So over the course of four to five years, you're doubling the amount of new information. And that poses really two key challenges for the infrastructure. The first one is you have to take all this data and, for a good chunk of it, you have to store it, be able to access it, and protect it. The second challenge is you actually have to go and analyze and process this data. And doing this at scale, that's the key challenge. What we're seeing is these data centers getting a tsunami of data, and historically they've been CPU-centric architectures. And what that means is the CPU is at the heart of the data center, and a lot of the workloads are processed by software running on the CPU. We believe that we're currently transforming the architecture from CPU-centric to connectivity-centric. And what we mean by connectivity-centric is you architect your data center thinking about the connectivity first. And the goal of the connectivity is to use all the components inside the data center, the memory, the spinning media, the flash storage, the networking, the specialized accelerators, the FPGAs, all these elements, and use them for what they're best at to process all this data. And the goal, Dave, is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers.
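As a quick sanity check on that growth claim, compound annual growth compounds as (1 + r)^n, so the doubling time follows directly. A small illustrative sketch; only the 23% figure comes from the conversation, the script itself is an assumption of how to read it:

```python
import math

cagr = 0.23  # ~23% annual growth of the digital universe, per the conversation

# Growth factor after n years of compounding: (1 + cagr) ** n
for years in (3, 4, 5):
    print(f"after {years} years: {(1 + cagr) ** years:.2f}x")

# Exact doubling time in years: ln(2) / ln(1 + cagr)
print(f"doubling time: {math.log(2) / math.log(1 + cagr):.1f} years")
```

At 23% a year the data roughly doubles in a bit over three years, and more than doubles over the four-to-five-year window mentioned above.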
So it's really a shift from CPU-centric to bringing in more specialized components and architecting the connectivity inside the data center to help. We think that's a really important part.

Okay, so you have this need for connectivity at scale, you mentioned, and you're dealing with massive, massive amounts of data. I mean, we're going to look back at the last decade and say, you've seen nothing compared to when we get to 2030. But at the same time, you have to control costs. So what are the technical challenges to achieving that vision?

So it's really challenging. It's not that complex to build a faster, bigger solution if you have no cost or power budget. The key challenge that our team is facing, working with customers, is, first, I'd say, architectural challenges. We would all like to have one fabric that can connect all the devices and bring us all the characteristics that we need. But the reality is we can't do that. So you need distinct fabrics inside the data center, and you need them to work together. You'll need an Ethernet backbone. In some cases, you'll need a Fibre Channel network. In some cases, you'll need a SAS fabric for thousands or hundreds of thousands of HDDs. You will need PCIe fabrics for AI/ML servers. And one of the key architectural challenges is which fabric do you use when, and how do you develop these fabrics to meet their purpose-built needs? That's one thing. The second architectural challenge, Dave, what I challenge my team with is, for example, how do I double bandwidth while reducing net power? Double bandwidth, reducing net power. How do I take a storage controller and increase the IOPS by 10X while allocating only 50% more power budget? That equation requires tremendous innovation, and that's really what we focus on. And power is becoming more and more important in that equation.

So you've got decisions from an architecture perspective as to which fabric to use.
You've got this architectural challenge around, we need to innovate and do things smarter, better, to drive down power while delivering more performance. Then if you take those things together, the problem statement becomes more complex. So you've had these silicon devices with complex firmware on them that need to interoperate with multiple devices. They're getting more and more complex. So there are execution challenges. And what we need to do, and what we're investing to do, is shift-left quality, so that these complex devices come to market on time and with high quality. And one of the key things, Dave, that we've invested in is emulation of the environment before you tape out your silicon. So effectively taking the application software, running it on an emulation environment, making sure that works, running your tests before you tape out, and that ensures quality silicon. So it's challenging, but the team loves challenges, and that's kind of what we're facing. On one hand, architectural challenges. On the other hand, a new level of execution challenges.

So you're compressing the time to final tape-out versus maybe traditional techniques. And then you mentioned architectural. Am I right, Jas, that from an architectural standpoint, because latency is so important, you're essentially trying to minimize the amount of data that you have to move around, and actually bringing compute to the data? Is that the right way to think about it?

I think there are multiple parts of the problem. One of them is you need to do more data transactions. For example, data protection with RAID algorithms, we need to do millions of transactions per second. And the only way to achieve this with minimal power impact is to hardware-accelerate it. That's one piece of investment. The other investment is, you're absolutely right, Dave, it's shuffling the data around the data center.
So in the data center, in some cases, you need to have multiple pieces of the puzzle, multiple ingredients, processing the same data at the same time. And you need advanced methodologies to share the data and avoid moving it all over the data center. So that's another big piece of investment that we're focused on.

Okay, yeah, so let's stay on that, because I see this as disruptive. You were talking about spending $5 billion a year in R&D. Talk a little bit more about the disruptive technologies, or the supporting technologies, that you're introducing specifically to support this vision.

So let's break it down into a couple of big industry problems that our team is focused on. The first one, I'll take an enterprise workload, a database. If you want the fastest-running database, you want to utilize local storage and NVMe-based drives, and you need to protect that data. And RAID is the mechanism of choice to protect your data in local environments. And there what we need to do is really just do the transactions a lot faster. Historically, the storage has been a bit of a bottleneck in these types of applications. For example, with our newest generation product, we're doubling the bandwidth, increasing IOPS by 4X, but more importantly, we're accelerating RAID rebuilds by 50X. And that's important, Dave. If you are using a database, in some cases you limit the size of that database based on how fast you can do those rebuilds. So this 50X acceleration in rebuilds is something we're getting a lot of good feedback on from customers. The last metric we're really focused on is write latency. So how fast can the CPU send the write to the storage connectivity subsystem and commit it to drives? And we're improving that by 60X generation over generation. So we're talking fully loaded latency of 10 microseconds. So from an enterprise workload perspective, it's about data protection, much, much faster, using NVMe drives. That's one big problem.
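The RAID protection and rebuilds described above rest on a simple idea: parity. Here is a minimal sketch of RAID-5-style XOR parity; this is not Broadcom's implementation (which runs in dedicated hardware), and every name in it is invented for the example:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One stripe: three data blocks plus one parity block stored on a fourth drive.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# If any one drive fails, XOR-ing the surviving blocks with the parity
# reconstructs the lost block -- this is what a RAID rebuild does, stripe
# by stripe, across the whole drive.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

In a real controller this XOR runs in hardware across millions of stripes, which is why rebuild acceleration matters so much as drive capacities and counts grow.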
The other one is, if you look, Dave, at YouTube, Facebook, TikTok, the amount of user-generated content, specifically video content, that they're producing on an hour-by-hour basis is mind-boggling. And the hyperscale customers are really counting on us to help them scale the connectivity of hundreds of thousands of hard drives to store and access all that data in a very reliable way. So there we're leading the industry in the transition to 24G SAS and multi-actuator drives. The third big problem is around AI/ML servers. These are some of the highest-performance servers, and they basically need super-low-latency connectivity between GPUs, networking, NVMe drives, CPUs, and to orchestrate that all together. And the fabric of choice for that is the PCIe fabric. So here we're talking about 115-nanosecond latency in a PCIe fabric, fully non-blocking, very reliable. And here we're helping the industry transition from PCIe Gen 4 to PCIe Gen 5. And the last piece is, okay, I've got an AI/ML server, I have a storage system with hard drives or a storage server in the enterprise space. All these devices and systems need to be connected to the Ethernet backbone. And my team is heavily investing in Ethernet NICs, transitioning to 100 gig, 200 gig, 400 gig, and putting in capabilities optimized for storage workloads. So those are kind of the four big things that we're focused on at the industry level from a connectivity perspective, Dave.

Yeah, and that makes a lot of sense and really resonates, particularly as we have that shift from CPU-centric to connectivity-centric. And the other thing you said, I mean, you're talking about 50X RAID rebuild times. You know, one of the things you learn in storage is to ask the question, what happens when something goes wrong? Because it's all about recovery. You can't lose data. And the other thing you mentioned is write latency, which has always been the problem.
Okay, reads, I can read out of a cache, but ultimately you've got to get it to where it's persisted. So some real technical challenges there that you guys are dealing with.

Absolutely, Dave. Yeah, and these are the types of problems that get the engineers excited. You give them really tough technical problems to go solve.

I wonder if we could take a couple of examples, or an example, of scaling with a large customer. For instance, obviously hyperscalers, or take a company like Dell. I mean, they're a big company, big customer. Take us through that.

So we use the word scale a lot at Broadcom. We work with some of the industry leaders in data centers and OEMs, and scale means different things to them. For example, if I'm working with a hyperscaler that is getting inundated with data and they need half a million storage controllers to store all that data, well, their scale problem is, can you deliver? And Dave, you know how much of a hot topic that is these days. So they need a partner that can scale from a delivery perspective. But if I take a company like, for example, Dell, that's very focused on storage. From storage servers to their acquisition of EMC, they have a very broad portfolio of data center storage offerings. And scale to them, from a connected-by-Broadcom perspective, means that you need to have the investment scale to meet their end-to-end requirements, all the way from a low-end storage connectivity solution for booting a server, all the way up to a very high-end all-flash array or high-density HDD system. So they want a partner that can invest, and has the scale to invest, to meet their end-to-end requirements. The second thing is, there are different products that are unique and have different requirements, and you need to adapt your collaboration model. For example, for some products within the Dell portfolio, they might say, I just want a storage adapter, plug it in, the operating system will automatically recognize it. I need this turnkey.
I want to do minimal investment. This is not an area of high differentiation for me. At the other end of the spectrum, they may have applications where they want deep integration with their management and our silicon tools, so that they can deliver the highest quality, highest performance to their customers. So they need a partner that can scale from an R&D investment perspective, from a silicon, software, and hardware perspective. But they also need a company that can scale from a support and business-model perspective and give them the flexibility that their end customers need. So Dell is a great company to work with. We have a long-lasting relationship with them. And the relationship is very deep in some areas, for example, server storage. And it's also quite broad. They are adopters of the vast majority of our storage connectivity products.

Well, I want to talk about the uniqueness of Broadcom. And again, I'm in awe of the fact that somebody had the vision, you guys, your team, and obviously your CEO, who's one of the visionaries in the industry, had the sense to look out and say, okay, we can put these pieces together. So I would imagine a company like Dell is able to consolidate their supplier base and push you for integration and innovation. How unique is the Broadcom model? What's compelling to your customers about that model?

So I think what's unique from a storage perspective is the breadth of the portfolio and also the scale at which we can invest. If you look at some of the things we talked about from a scale perspective, how data centers throughout the world are getting inundated with data, Dave, they need help. And we need to equip them with cutting-edge technology to increase performance, drive down power, improve reliability. So they need partners that, in each of the product categories that you partner with them on, can invest with scale. So that's, I think, one of the first things.
The second thing is, if you look at this connectivity-centric data center, you need multiple types of fabric. And whether it be cloud customers or large OEMs, they are organizing themselves to be able to look at things holistically. They're no longer product companies; they're really data center architecture companies. And so it's good for them to have a partner that can look across product groups, across divisions, and say, okay, this is the innovation we need to bring to market, these are the problems we need to go solve. And they really appreciate that. And I think the last thing is a flexible business model. For example, within my division, we offer different business models, different engagement and collaboration models with our technology. But there's another division that, if you want to innovate at the silicon level and build custom silicon, like many of the hyperscalers or other companies are doing, is focused on just that. So I feel like Broadcom is unique from a storage perspective: its ability to innovate, the breadth of its portfolio, and the flexibility in the collaboration model to help our customers solve their customers' problems.

So you're saying you can deal with merchant products, open products, or you can do high customization. Where does software differentiation fit into this model?

So it's actually one of the most important elements. I think a lot of our customers take it for granted that we'll take care of the silicon, we'll anticipate the requirements and deliver the performance that they need. But the software, firmware, drivers, utilities, that is where a lot of the differentiation lies. In some cases we'll offer an SDK model where customers can build their entire applications on top of that. In some cases, they want a complete turnkey solution, where you take the technology, integrate it into a server, and the operating system recognizes it and you have out-of-box drivers from Broadcom.
So we need to offer them that flexibility, because their needs are quite broad there.

So last question: what does the future of the business look like to Jas Tremblay? Give us your point of view on that.

Well, it's fun. I've got to tell you, Dave, we're having a great time. I've got a great team. They are the world's experts on storage connectivity, and working with them is a pleasure. And we've got a great set of customers that are giving us cool problems to go solve, and we're excited about it. So I think, with the acceleration of all this digital transformation that we're seeing, we're excited, we're having fun. And I think there are a lot of problems to be solved, and we also have a responsibility. The ecosystem and the industry are counting on our team to deliver the innovation from a storage connectivity perspective. And I'll tell you, Dave, we're having fun. It's great, but we take that responsibility pretty seriously.

Jas, great stuff. I really appreciate you laying all that out. Very important role you guys are playing. You have a really unique perspective. Thank you.

Thank you, Dave.

And thank you for watching. This is Dave Vellante for theCUBE and we'll see you next time.