Everybody, we're back with theCUBE at GTC 2024 here in San Jose, the pop-up CUBE. We're running and gunning. Really excited to have Jon Lin here. He's the EVP and general manager of the Data Center Services group at Equinix. Great to see you, thanks so much for coming on. Make sure you subscribe.

Thanks for having me. I love this guerrilla interview style.

We love it too. It's like an unconference, you know, the unCUBE. So you saw the keynote yesterday, which was amazing. We were just in a two-hour meeting with Jensen, and he was just going on and on. He said he was so nervous up there, which was amazing to me; you can't see that. He said, "I felt like I was doing crunches for two hours." We were like, wow, dude, you're amazing up there. No rehearsal; he just goes up there and wings it for two hours. Unreal. What a talent. He wasn't reading from a teleprompter, just going for it. What did you think of the keynote? What were the key takeaways for Equinix?

Yeah, just incredible. One, obviously, the announcement about Blackwell and what that means for the hardware landscape for AI overall. We're really excited about the work we've already done with them to understand how to support that, the density they're driving, and the unique characteristics from a liquid cooling perspective. We've been working with them for months to make sure we understand it and are ready to support it for our customers. But more importantly, you look at the landscape for AI at large, and the economic benefit and impact it can have for the world. The use cases being described aren't trivial; this isn't making funny cat photos. This is drug discovery, automating factories, creating more productive crop yields. It's really, really powerful stuff.

So what exactly are you doing with NVIDIA?
You mentioned liquid cooling. I presume you're thinking about packaging and standards; these things run hot, so you've got to figure it out. But give us some details on what the partnership looks like.

Absolutely. We've been working with NVIDIA for a number of years, dating back to when they started with AI Launchpad as a service, introducing AI to new users. That was actually powered at Equinix, so we gave them the infrastructure to support it, knowing that down the road this workload and this use case were going to be incredibly important for the enterprise. We think of ourselves as the world's digital infrastructure company, with an obligation to bring the best technology in the world to all of our customers, whether that's service providers, the clouds, or the enterprises using them. So it was a great opportunity to bring NVIDIA into the fold years ago. Over the course of the last year, we've been working feverishly to support their DGX deployments, really private DGX deployments for the enterprise. What NVIDIA had been hearing, and what we've been hearing, was that enterprises are obviously enamored with AI, but they're worried about cross-training on their data and potential leakage into other models. So the ability to control and own that infrastructure is incredibly powerful, and they get the ownership economics for those workloads.

So explain to our audience how that's different from the cloud model: multi-tenant, I go in, I rent infrastructure, it simplifies my life. You've made a great business co-locating proximate to the clouds to reduce latency, but you've got a different model. Explain that.

Yeah, it's really the best of both worlds for the enterprise. Obviously, everyone is a cloud user.
I think the ability to have some of those workloads on a private basis, though, for their own compute, or for their most critical applications where they want control, to have that domain and ownership of the infrastructure themselves, and then to connect that into the public cloud, is incredibly important. What we've built for the enterprise is really the nexus of all the most critical data flows they have. Whether they're using multi-cloud, their supplier network, or APIs with other SaaS providers, all of that comes into their infrastructure housed with us at Equinix. So that's the most logical place to put these AI workloads: you want to put them directly next to the data. That ability to co-locate it all in the same facility is incredibly powerful.

So people, Jensen in particular, talk about the AI factory. When you look at the Equinix infrastructure and the data centers, do you start to envision AI factories? Is that a little hyperbole? How do you think about that?

Look, I think the future is heading this way. We've been working very closely with NVIDIA to understand what that requirement will look like and what the latency characteristics will be for inference purposes. We're in 71 metros in 30 countries around the world, so a customer can put that deployment as close to the end user as they need to, with 90-plus percent of the world's population within 10 milliseconds of our facilities. That's really, really powerful. The future is evolving rapidly, though. Just from the keynote yesterday, the announcement about Blackwell and what those requirements will look like will change the design and implementation of data centers for the future.
So your customers are using AI-powered servers and other infrastructure as well, storage, networking, et cetera, from the likes of Dell, Supermicro, and others. They're taking servers with GPUs and putting them in your data centers.

That's right.

My question is, how should we think about today's infrastructure? They're doing training, they're doing inference. Jon, is that the same infrastructure, or does it require different infrastructure? Does today's training become tomorrow's inference on a depreciated asset? How should we think about the lifecycle as we grow from where we are today to a thousand-x performance in 10 years?

Candidly, I think it's early days. Each enterprise's use case for inference, or generation, as Jensen likes to put it, is going to depend on exactly what they're doing with that data and where it needs to live, close to the end user and close to the applications they're delivering. So we're going to see that evolve. I do think, though, that training is cost: you're devoting energy, time, power, and money to that infrastructure. The value is created when you flip the switch and turn it to inference, as you said. Then you'll retrain periodically, but that's not as big a load, and that's what we're seeing as an emerging pattern.

Well, let's talk about energy and sustainability. Of course, the media is going to go right after that issue, and it's an important issue. It certainly started with crypto, and now AI is very energy-consumptive. How are you dealing with that problem? You mentioned liquid cooling. Jensen implied today, or at least I inferred, that the work you can do per watt, as long as you can keep the factory loaded, is going to be more efficient.
But if you don't, then obviously you're wasting a lot of power. I know it's an important issue, but where does it stand in the hierarchy of importance, and how are you dealing with it?

For Equinix, sustainability is part of the core ethos we operate with. We were the first data center provider to commit to 100% renewable power; we're actually over 95% coverage right now globally, and this has been a focus area for us for close to 15 years. And it's across the entire supply chain: where you're sourcing your power and how you're making sure it's renewable, but also data center design around efficiency, to reduce power utilization, and efficiency around cooling. Liquid cooling is a much more efficient way to remove heat from the system. And it isn't a matter of wasting or splurging water; in our case it's actually a closed-loop system, so we're not wasting water, which again is one of the areas we think about at a sustainability level. Ultimately, as this technology is developed and deployed more broadly, we want to help our customers get there, and being able to provide them attestable reporting on their own green power utilization is incredibly valuable for their sustainability commitments as well.

So you'll reuse that hot liquid and cool it down?

Yeah, we run it through our central chiller plant, cool that liquid down, and it really is a closed-loop system.

Are there standards emerging around liquid cooling, or is it still the Wild West?
I would say it's slightly more mature than the Wild West, but it's still very much an environment that's moving from fully bespoke everywhere, with relatively small amounts of production, to factory-scale standardization. We've spent the last year productizing: understanding the engineering for our own sites and how we're going to do this, but also the interface with our customers. At the end of last year, we announced that a hundred data centers from our existing fleet can support liquid cooling, including direct-to-chip liquid cooling, because we knew this was coming.

I've been around a long time, so I remember liquid cooling from the mainframe days, and it's really interesting, when you walk around shows like HPE Discover or Dell Tech World, to see the number of liquid cooling companies now on the show floor. Obviously it's going to be a hot space for a while. I want to ask you about something else. Last year, theCUBE Research developed the power law of gen AI. We used the typical power law, which is a hard right angle: a few brands really own the day, and then you have this long tail. We said we see something similar forming with LLMs, but it's different in that the dimensions are maybe size of model and domain specificity. The other difference is the torso, which we see getting pulled up by open source. When you see things like Meta with Llama 2 and Llama 3, other open source, and third parties like Anthropic and Cohere, it's an interesting dynamic. These small language models that are very domain-specific will likely be on-prem, will drive different industries, and will let customers leverage their own proprietary data sets, whether they're using RAG, et cetera. What are you seeing there? Is that starting to take shape based on the patterns you're seeing?
It is, and it's a testament to the 30,000-plus people at this show. There's so much interest and energy around this. We're moving away from the experimentation phase; these are real-life deployments now, all over the place, across many different industry verticals. And you said it well: that specificity of use case is going to be incredibly important. Not every enterprise is going to develop a massive large language model on their own. They're going to purchase models for their use, they're going to retrain on their own data, and there will also be small language models and very vertical-specific use cases that they'll drive, whether that's video processing or pharmaceutical drug discovery. All of that is going to be incredibly powerful.

Well, let's talk about the macro, to the extent that you can share. Equinix is an incredibly successful company, probably what, $10 billion in revenue with an $80 billion market cap, something like that, which is really unheard of for an infrastructure company, to have that kind of revenue multiple. But the macro is still tough. Our data with our partners at ETR suggests that about 45% of the customers we talk to say they're funding AI from other budgets. So it's not just incremental spend. IT spending, yeah, maybe it's growing 3% to 4% this year, so it's not all systems go; AI is clearly stealing from other areas. As well, about two-thirds of customers tell us they expect an ROI within 12 months, so very, very aggressive. You said earlier we're moving beyond the experimentation phase. Do you feel like 2024 will be the year of AI ROI?

I think we're starting to see that, for sure. Again, it's a world where the value generation that the AI use cases are creating is so massive.
And I don't think about any of this as a zero-sum game in terms of the total amount of spend, because when the enterprise actually sees that value creation being realized, they can fund it even further. That creates more envelope for them as they drive higher revenue, better productivity, better yield, all of that. We're seeing that start to emerge already. So in '24 and beyond, and certainly '25 is going to be another big year for this entire sector, I think we'll see continued investment. We're in the early innings of this supercycle.

I heard Bill Gurley the other day say, you know, everybody wants to compare it to the dot-com bust. And he said the big difference here, and I think he's right, is that the funding levels are so much higher. The other thing is that dot-com was funded by a bunch of companies that went out of business, like Enron and Global Crossing. Today it's being funded by the CapEx of the hyperscalers.

That's right.

And their balance sheets are enormous. So it's a different dynamic, but I hope you're right. I hope that starts showing up in the productivity numbers. There are 30,000 crazy people here, Jon. Hey, listen, I really appreciate you spending some time with theCUBE. It was great to have you.

Thanks.

All right, keep it right there. For more action from GTC 2024 in San Jose, you're watching theCUBE.