Aloha and welcome to JSA TV, live from the floor of PTC24 in Hawaii. I'm Jean-Marc Liebman, and joining me today are key stakeholders in the industry to discuss the impact of AI on networking infrastructure. Joining me here in Honolulu is Jeff Barber, Vice President of Global Data Centers for Bloom Energy. In the middle, we've got Mark Cooper, Vice President of Strategy for AtlasEdge Data Centers. And to my left, we've got Bill Kleyman, board member and investor at Neuro. Guys, welcome to Hawaii: a yearly fixture, best conference in the world. Thank you so much for having us here. You were telling us how in Chicago it was like negative 25 with the wind chill, so great to be here. I mean, when we left, it was quite cold in Europe as well. Yeah, it was quite cold in Europe. It was snowing. It's a challenge getting here, but it's always worth it. Jeff, where do you come from? I'm a California boy, so this is totally normal for me. Yeah. I might have been over-served last night, so my answers might be a little slow. But yeah, this is definitely the best conference in the vertical, for sure. You can see the tan. You live in California. It's not like that. It's not that bad. Look, I think the topic that we have to discuss this year, the topic that's on everyone's lips, is AI. It's booming, it's a new chapter for this infrastructure that we're going to get into. We already got into it, but it's going to really kick in this year and next year. So my first question would be: what are the challenges with all this, and how do you see the industry shifting into the AI age? And as you start answering, just give an elevator pitch for your business so people know what you do. Sure. So who wants to go first? Do we want to go this way or that way? All right, Bill. It's a pleasure. Hey, everybody, LinkedIn people, thanks for watching us. There's a comment section right below this, so be sure to engage in the conversation. Great way to start this conversation, Joao.
I think part of the biggest challenges, well, let's talk about the opportunity and what's happening. I guess before I do the elevator pitch: hi, I'm Bill with Neuro. We are an artificial intelligence and infrastructure organization, one of the few, and actually the only one, that does AI and MLOps as a platform, as a multi-tenant architecture. So we'll stop there; you can go on the website or Google it or whatever. But we as humanity are experiencing a fundamental shift in how we interact with data, and I'll keep this brief. Everybody listening: if you've gone to your favorite search engine, whether it's Ask Jeeves or AltaVista or maybe even Google sometimes, hopefully some people are laughing about that, we've been conditioned for 25 years to ask a question and get a pretty blue link. Only in the last couple of months, if you go on Google or Bing and ask a question, the first response you get is a generative AI response. So we as humanity are experiencing a massive, fundamental shift simply in how we interact with data. And because of this, we're seeing a big shift in how we support all these new data points and the infrastructure behind them. One of the biggest challenges, at least from what I'm seeing, is our preparedness, our density levels. How ready are we for this revolution? In doing research, most of our industry is capped at density levels of eight to 10 to 15 kW per rack. And now all of a sudden you're being asked to do 60, 70, 80 kW per rack to support this kind of infrastructure. And it's going to get a lot more intense. So a big part of the challenge is how do we create an architecture, whether it's hybrid, net new, or building onto what exists, that can support greater levels of density, obviously more cooling, and way more power consumption. These conversations aren't a fad that's going to pass us by. This human shift in data is here right now.
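To put that density jump in rough numbers (illustrative figures only, not from the panel): the same 100-rack hall sized at 10 kW a rack needs roughly seven times the power and cooling once it is planned at 70 kW a rack.

```python
# Back-of-envelope sizing for a hypothetical 100-rack hall.
# All figures here are illustrative assumptions, not vendor data.

def hall_power_mw(racks: int, kw_per_rack: float, pue: float = 1.4) -> float:
    """Total facility draw in MW, including cooling/overhead via an assumed PUE."""
    it_load_kw = racks * kw_per_rack
    return it_load_kw * pue / 1000.0

legacy = hall_power_mw(100, 10)   # traditional enterprise density
ai = hall_power_mw(100, 70)       # AI training density

print(f"legacy hall: {legacy:.1f} MW")  # 1.4 MW
print(f"AI hall:     {ai:.1f} MW")      # 9.8 MW
print(f"multiplier:  {ai / legacy:.1f}x")
```

The point of the sketch is that density multiplies straight through to the utility feed: the IT load scales linearly, and the cooling overhead scales with it.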
And the biggest challenges that we have are both an opportunity and a challenge at the infrastructure level. And it's going to put demand on steroids completely. It's going to be 10 times more to begin with by the end of this decade, and then we're going to see what it's going to take. Absolutely. Mark. Yeah, I think it's kind of taken us by surprise. I mean, AI has been around for a long, long time. Wait, Mark, Mark, what's AtlasEdge? I was getting there. What do you do? Thank you. You're really cool and we like you, but what is it you do? We're a pan-European data center operator with a focus on the tier two markets, so not so much on the FLAP-D markets; it's more where we see the demand coming, and it's coming very, very quickly. And AI has been a factor in that. But when we look at it, sitting here last year, we may have been talking about the metaverse. No one was talking about AI. And in the last 12 months, AI has just come from nowhere. So the technology has been around for a very long time. I think the challenge we see as operators is: yes, we can handle rack densities, yes, there are solutions for doing higher densities, but it's where do you place the bets? Where do you think the demand is going to come from? And how do you prepare for that demand, prepare for the densities, and meet the requirements that come in from the platforms as well? Because I think the challenge for the platforms is that they're probably 12, 24 months behind on their rollout in terms of infrastructure because of the buying cycles. So everyone's like, AI is great, but we're in catch-up mode. So, Jeff, just a quick follow-up question: are we still seeing supply chain issues when it comes to building facilities?
No, I think the challenge is finding power, and power at the prices the customers want to pay as well. All right. Speaking of power: power guy. Power guy. Tell us about Bloom Energy. I will, and I'll make some comments on this; it's a very appropriate topic. Once you mentioned Ask Jeeves, I'll direct everyone to my MySpace account and I'll give you my AOL email address right after this is over. Yes, absolutely. What's your AIM handle? That's terrible. We're old, aren't we? We're old. I don't know what you're talking about. What is Bloom? Bloom, actually, we announced the vertical that I manage last year at PTC, so we specialize in data centers. But Bloom as a company has been around over a decade, publicly traded since 2018, with gigawatts of fuel cells out there, which is what we are. We are on-site power generation, with or without a utility: solid oxide fuel cells, which means essentially no moving parts in the machine. This is an electrochemical process. You give me molecules, I give you electrons, right? So what those electrons look like, we don't care: DC, AC, stepped up, it doesn't matter. So very, very flexible. This is a very appropriate topic. And you're right, I'm a former developer. I'm a recovering IT nerd, even a mainframe operator, right? And I'm looking at this, and the rack densities are unbelievable. I had facilities with 15 kW racks, and I thought no one would ever touch this; most of the facility is running at 2 to 4 kW if it's a network, right? So you're absolutely right. Moore's Law applies to GPUs as well, so the next generation from NVIDIA, for instance, should be over 100 kW a rack. So how do you power that? Well, from our perspective, it's obviously a great opportunity. People are having, I don't know if you've heard, but there are problems with utilities providing power within the next decade. We're taking dirty sources offline, and transmission is overtaxed and unreliable now.
So yeah, we are very, very busy with the densification of existing facilities, right? Now, do I think that's where it will stay? No, I think we'll see a dispersal of ML/AI workloads towards what we call the edge. But yeah, we can supplement a facility. You can consolidate racks, you can densify the racks, assuming you can cool it, assuming you can distribute the power, right? That's something we're not talking about enough. But yeah, this is a very, very busy time for us. I think it's going to be doing more with less. Sorry, you were going to say? Mark said something really profound. He's like, I remember being here, and I remember having these conversations, and AI, while a sort of ethereal topic, this wasn't happening, these massive panels, right? And if you remember, ChatGPT really took the world by storm around February, March, and then South Park did an episode on it. It's called Deep Learning: both scary and really good. And at the end of the show, it said written by, I think it was, Trey Parker and ChatGPT. Watch it. It's both scary and very, very good. And just imagine, and this is why this conversation is so important, how quickly all of this has changed. Last year we were here talking infrastructure, telecom, laying down fiber, and now we're like, oh my god, how do we support all these new demands? Yeah, I think, I'm sorry, just a final comment: ML/AI is a household name now, my mother knows what it is, right? But high performance compute has been around for a very long time, right? Much of the high performance compute world, high performance file systems, global file systems, was in the enterprise; I mean, you would have Intel with massive data centers in the enterprise. And there's been a propagation now to wholesale colo, retail colo, services, time slicing, if you will, of HPC. So the hardware has been out there for a long time, but it's just growing like crazy.
I think the point you guys made about the speed of change is quite extraordinary, because last year we weren't really talking about AI at this conference. We were talking about the supply chain issues, how to fix those, how to answer the demand there. And then suddenly you get to the spring and everyone starts taking a step back on their projects to redesign everything for the AI age. Meta, Google, all those guys went back to the drawing board to build for this new world. I guess we were talking about power, and power is probably the most important part of all this, because that's how we're going to get everything working. There's a lot of conversation, there was actually a conversation this morning on LinkedIn in Europe, over how much power data centers consume worldwide. There's the standard figure of 3%; some people argue it's closer to 1%, because the 3% includes smart homes and all that sort of stuff. Anyway, if we weren't doing as much with renewable energy, data centers would probably consume about 20 to 25%. So the industry is doing all right in terms of adopting that. But as we're going to grow 10 times more over the next few years, possibly 20 to 50 times more in the next few decades, how can we ensure that this transition really becomes sustainable and that we adopt the right energy sources from the beginning? And then I'll add another layer to the question, which is the community layer. How do you then transfer that message to the local communities? Because we've seen a lot of backlash on that front. And I think that's becoming one of the biggest challenges for data centers: beyond power and bureaucracy in some markets, it is local communities' acceptance of data centers. And we've seen it a lot, especially in Europe. Look at Ireland. Yeah, Ireland, Dublin, Amsterdam; you've seen a little bit of it outside of London as well, and you're seeing it in Paris.
People are becoming a little bit against these facilities, but because they don't know what they are. So how can we ensure, as we build 10 times more, that we educate the local communities? That this is actually green, it brings economic prosperity, it gives you a job, essentially, and it keeps your connectivity to your family alive. I could start, if you don't mind going this way. Yeah, we'll go this way. Yeah, that's very appropriate for Bloom. And my answer is: we can't ensure it's green right now, not with that kind of increase in demand. What you can do, however, is take logical, pragmatic steps. So Bloom, for instance, reduces carbon by 25 to 30% over the utility. Is it 100%? Give me green hydrogen and I'll just give you water vapor, but that's a pipe dream right now: 10, 15, 20 years, maybe. But Bloom is a hydrogen fuel cell today. What you can do is take pragmatic steps. We're non-combustion, no particulates, but there is CO2; it's a byproduct of the methane process we use with natural gas. So I don't think that we can do a one-for-one reduction, hit the ESG goals, and increase capacity by 10x. And I think we need to stop telling ourselves that's possible; it's not. It's still a data center. What data centers do is sell power, right? That's essentially what they do. Even the leases are written per kilowatt; they're not per square foot, right? So that need for N+1, N+2, that availability, is going to be there. Intermittent sources can't do it. Wind power, I mean, back to Ireland: they had a month of no wind, and it was a real issue. We've seen that in the UK when the UK had the energy crisis; it was a shortfall of wind. That's right. We went to 7% of production from 20-something percent, because there was no wind. The Amazon website can't go down because the wind stopped blowing.
So, you know, intermittent sources, maybe in combination with batteries, with fuel cells, and other technologies, can give a reliable, you know, four nines of availability. And at the end of the day, the tenants still very much require that level of availability, and the lease drives what the developer does. So there needs to be a conversation with the tenants: okay, this is the impact of what you're requiring of me. And I think the industry as a whole needs a lot more marketing, right? I look at, like, Host in Ireland, back to Ireland, and some of their activities. You know, Gary Connolly, hi, Gary, visit my MySpace page and send me an AOL email. They're very big on community outreach, not at all related to technology, you know, articulating how many jobs flow through a data center. It's all about marketing and branding right now, because data centers have a bad reputation, and yet we can't do anything in our lives without them at the end of the day. Yeah, and it's being present in the community. Again, in Gary's case, for example, with the bees and all that, he does bring local people in to help with that. Absolutely. Planting trees and all that. Yeah, I put up several little bee blocks that Gary built with his daughter around Dublin last time I was there. Mark, energy. Yeah, I think what's happened in Europe, especially with the AI demand, is that it's gone north. It's gone to the Nordics. Somewhere like Norway has been very well positioned because of all the hydroelectric power. And you go back five years, people were looking at Norway going, well, it's a bit too far north, the connectivity doesn't really work. But it's got to a point now where there's capacity, there's power, and the connectivity is good enough. But the question for us now is: where's next? We think it's going to go all the way south. It's going to go to Southern Europe.
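The "intermittent sources in combination with batteries and fuel cells" point can be made concrete: if the layers fail independently, their unavailabilities multiply, which is how modest individual availabilities can stack up to four nines. A rough sketch with illustrative figures, assuming independent failures:

```python
# How layered, independent power sources can reach "four nines":
# total unavailability is the product of each layer's unavailability.
# Assumes independent failures; the availability figures are illustrative.

def combined_availability(layer_availabilities: list[float]) -> float:
    """Availability of a system that stays up as long as any one layer is up."""
    downtime = 1.0
    for a in layer_availabilities:
        downtime *= (1.0 - a)  # all layers must fail at the same time
    return 1.0 - downtime

# e.g. grid at 99.9%, on-site fuel cells at 99%, batteries at 95%
a = combined_availability([0.999, 0.99, 0.95])
print(f"{a:.7f}")  # 0.9999995, comfortably past four nines
```

The independence assumption is the weak point in practice (a regional event can take out several layers at once), which is why real designs diversify the failure modes, not just the sources.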
We have a lot of solar available; in some countries they'll pay you to take solar during the day. So I think you'll see the demand: it's gone north, it's going to come south. But I think it's going to skip the FLAP markets, because you just don't have the power availability there, and you don't have the power pricing either. I would tag on to that very quickly: remember that many, not all, but many machine learning and AI workloads are not latency sensitive, or at least nowhere near as latency sensitive as a relational database or an application that can crash. So good-enough connectivity is truly good enough, probably more than they need. So you're going to see, in my opinion, a shift in geographies. You're going to see a shift in facility types. Maybe we go more containerized, more modular; who knows? Cooling is going to be a whole different issue. It's interesting you say that, because in the few interviews I've done over the last two days pre-PTC, the conversation no longer revolves around the big markets, not in Europe, not in the US. It is about places that never used to come up in the conversation. That's right. It's Azerbaijan, Armenia, Pakistan, that side of the world. It's Chile, it's Argentina. It's the countries that we don't really talk about that much; they're coming up a lot more in the conversations. Here in the United States as well. So for everybody doing some research: Neuro, NEU.RO, we're an MLOps layer. We go to secondary and tertiary markets, where, let's say, for example, AtlasEdge has 12 racks of GPUs available to them. We put Neuro right on top, and within 24 to 48 hours, he's a direct competitor to Amazon with a full white-label multi-tenancy solution. Exactly, but we deploy into secondary and tertiary markets where there's available capacity and good-enough connectivity.
But let me answer your question on energy first. So Peter Gross, if you're listening to this, I'm going to give you a shout-out. He's one of my mentors. Hi, Peter. Yeah, there we go, there we go. Everybody knows this guy. He knows Bloom. He does. "The data center industry loves innovation as long as it's 10 years old." And while that's an amusing statement, it's a bit of a challenge for all of us, right? There's an article that I wrote for Data Center Knowledge right now, picking up on what we're talking about, where we are seeing a diversification of support for intermittent technologies like wind or solar. Nuclear technology is potentially going to have a bigger impact, like small modular reactors. In Virginia, they're going to be building, I think, between four and six of these SMRs to support between 10 and 30 new data center build-outs in Virginia, and also create a backup with hydrogen power that's going to support the Virginia grid. So we're seeing this evolution, this new kind of conversation happening in our space. But it's still a bit of a challenge in terms of how we approach that conversation. And it is a new approach to how we're building out infrastructure, how we're putting supporting environments in there. But you both mentioned some really great points. We've been building out these really powerful, massive AI solutions in secondary markets. What we're trying to do is find good, green, renewable solutions to put them in and let them farm: let them sit there for weeks and months on end consuming those GPU services on green systems, with better utilization. So Neuro was founded on three core tenets: AI ethics, AI transparency, and AI sustainability. We're the only platform that has what's known as a green counter.
So a user at AtlasEdge deploying this stuff can see in their solution: if you run your model at this time, or extend it out by a week, you're going to reduce your CO2 footprint by this much. It's one of the first times that we're giving users of AI, large language models, GPT-like services a true window into what their workloads are actually doing from a resource perspective. That is a part of it as well. Final point: the PR challenge. Holy cow, we could probably spend another 30 minutes on this. For the first time in my life, I went to a Virginia conference and I saw protesters, like real-life, angry, you know, holding signs. Oh, absolutely. I mean, the gas supply in Ireland is restricted for data centers. And it's become more of an issue because, I mean, when you organize events, you do have to take into consideration potential protests with Greenpeace; things can boil over. It's becoming a bit more complicated in that sense. We've gone mainstream. I'm lucky enough that I wasn't adopted into this industry; I'm not a former musician or whatever. I got a degree in network engineering. Glutton for punishment here. You ready for this? When I started working, we were calling them network closets, before we were calling them data centers, back like 19 years ago. Now they're officially data centers, but even now there still isn't a deep understanding of what this industry is doing. And that PR, that communication challenge, something I'm deeply involved in, is difficult. It's difficult for people to understand that we're not these nondescript buildings in the middle of nowhere, that we are a part of the community. That we're not, you know, people who impact a community's watershed, that we're not noise polluters either. There's a lot of really great data out there; it just needs to be disseminated and put into the market. And we're doing that again.
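A "green counter" of the kind Bill describes can be sketched as carbon-aware scheduling: given an hourly carbon-intensity forecast, pick the start window that minimizes a job's footprint. This is a minimal illustration with made-up forecast numbers and a hypothetical function, not Neuro's actual implementation.

```python
# Sketch of a carbon-aware scheduler: choose the start hour that minimizes
# total CO2 for a fixed-length GPU job. Forecast values are invented.

def best_start(forecast_g_per_kwh: list[float], job_hours: int, job_kw: float):
    """Return (start_hour, kg_co2) for the lowest-emission contiguous window."""
    best = None
    for start in range(len(forecast_g_per_kwh) - job_hours + 1):
        window = forecast_g_per_kwh[start:start + job_hours]
        # gCO2/kWh * kW * h gives grams; divide by 1000 for kilograms
        kg = sum(window) * job_kw / 1000.0
        if best is None or kg < best[1]:
            best = (start, kg)
    return best

# Hypothetical 24h grid forecast: dirty evening peak, cleaner midday solar
forecast = [300, 280, 250, 220, 200, 190, 180, 170,
            150, 120, 100,  90,  95, 110, 140, 180,
            260, 340, 380, 360, 330, 320, 310, 300]

start, kg = best_start(forecast, job_hours=4, job_kw=70.0)
print(start)  # 10 -> running at 10:00 instead of the evening peak
```

Shifting the same four-hour, 70 kW job from the 17:00 peak window to the 10:00 window roughly halves its footprint in this toy forecast, which is exactly the kind of delta a green counter would surface to the user.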
Unfortunately, we were built that way by design. Nobody tell them what those buildings are, nobody tell them what's stored inside of them. No signage. No signage, no windows, nothing. It is a warehouse for all intents and purposes. And now we're like, well, now we don't have any young people in here; this is a bit of a challenge. So how do we fix that? It's a balance, and we're catching up to it. It's kind of an existential challenge. I think we're still very much a "by the industry, for the industry" world. We don't share outside of the data center world. And I remember the interviews back in 2015, 2016: you would literally ask CEOs, the big data center ones, would you like to bring the community into it? Does the person outside, the person walking on the street, need to know what the data center is? Do they need to know what you do? It's like, no, they don't need to know. As long as their phone works, they don't need to know what makes it work. That was back then; I think now we're reaping the consequences of that, and we'll see if that lasts. I mean, we need some PR, but there are very real concerns, especially back then when you had multiple fiber networks terminating in the same building. Someone drives a truck into that and you can bring down a huge portion of the population. So we do need better PR. And there's an interesting final point on the difference in workloads. These high performance workloads don't necessarily need to be on 24/7; to your point, like charging your Tesla, charge it during the off-peak hours. I can load a library, I can chew on it, but there's no Amazon or Walmart transaction at the other end of that, so it can go down. It can be similar to Bitcoin. That's a really good point. Tier zero, right? It's a new sort of term out there: tier zero architecture, where you're no longer hardware dependent, you're workload dependent.
So in that use case, you're training your data on Neuro, for example, and the site goes down; you're going to lose maybe 10, 15, 20 minutes, whatever it takes for you to spin up that workload somewhere else. And that's a shift in mentality, right? Your SLAs have to be a little bit different, right? Yeah, why do you need five nines when your workload is resilient in itself and not dependent on the hardware? Great point. The SLA is typically a function of how much revenue would be lost if I lose this transaction. If I'm trying to commit a write for a multi-billion-dollar stock transaction and I lose it, that's a real problem. If I have to go back 15 minutes, it's not a big deal; I can just recalculate and redo that workload. So it is going to change the SLAs dramatically. I hadn't heard of it before, so that was interesting. Sorry. No, it's funny, you said Bitcoin. We haven't touched on that so far, because I think a lot of these new AI companies are actually repurposing what they were doing with Bitcoin. It's the same thing for, oh my gosh. You said Bitcoin, I said metaverse. Like you said, one of our partners, I am HBC, shout-out to you guys, former Ethereum miners with a whole bunch of A100s, and they're like, Bill, this business is really not stable and it's really hard to predict where this is going to be; what else can we do with these? And A100s are really great, high-density GPU clusters. I'm like, you want to put some really cool LLMs on there and start making really good, committed money and revenue? They're like, this is a great idea. And so we deployed with them and we gave them their own portal, their own logo. It's their own platform, but now, with committed contracts, it's a lot more stable, rather than having to worry about whether this new Bitcoin company is going to run out of money. The miners were pioneering, right? They defined tier zero. They went to areas where connectivity wasn't there. They could run off of a T1.
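The "lose 10, 15, 20 minutes and respin elsewhere" model above is easy to quantify: with periodic checkpoints, the worst case lost to a site failure is one checkpoint interval plus the respin time. A minimal sketch with hypothetical numbers:

```python
# Sketch: the tier-zero trade-off for a checkpointed training job.
# All figures (interval, respin time, GPU count) are hypothetical.

def worst_case_loss_minutes(checkpoint_interval_min: float,
                            respin_min: float) -> float:
    """Max work lost if the site dies just before the next checkpoint."""
    return checkpoint_interval_min + respin_min

def gpu_hours_at_risk(loss_min: float, gpus: int) -> float:
    """Compute that would need to be redone after a failure."""
    return loss_min / 60.0 * gpus

loss = worst_case_loss_minutes(checkpoint_interval_min=10, respin_min=5)
print(loss)                          # 15.0 minutes, not a breached SLA
print(gpu_hours_at_risk(loss, 512))  # 128.0 GPU-hours to recompute
```

The SLA conversation then becomes a cost comparison: the price of recomputing those GPU-hours versus the price of the redundant power and cooling needed to avoid the outage in the first place.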
If anyone remembers that term, you're really old. But they didn't require being up all the time. It was: where can I get two, three cent power? I don't need a bunch of backup gensets. I'm going to shut it down if things start getting hairy, and they would. I think you've already touched a lot on the design front, but I'll ask this question anyway, because the design is going to change a lot. We've already said Google went back, Meta went back, and now they're all going to build. It's going to be big. It's going to be huge. But what does the AI data center of the future look like? What are we still missing? What needs to be developed? It can be from an energy perspective, it can be from a rack density perspective. We're talking about 70, 80 kilowatts per rack; a lot of people are talking about 100, 150. I've heard conversations of 400, 500 kilowatts. And I mean, I'm not a technical person, so let's keep it high level on that front. But I know 500 is a lot. I mean, anything above 40, 50 at the moment is still a lot. So how much do we need to change design? How will the design change? And what does the data center of the 2030s look like? How far are we from it? It's a blink of an eye, right? I'll start here very briefly. It's less than 30 quarters; it's about 25 quarters left. Stop making it so scary. Yeah. I'm just bringing some sense of reality; this is fast. A blink of an eye in our industry, right? It's five PTCs. Yeah. Well, it may be even worse now, see? At least we have 35 things to cross off. I'm not a math guy. We've taken a few data centers now on this journey, converting them from 10, 15 kW a rack infrastructure to something a little bit more intensive. And the way we've done that is we want data centers to continue to be amazing at what they do: selling capacity, space, power. We found that they don't have to rip and replace everything. A hybrid model has been the best, where you have a little bit of improved containment.
Airflow can only take you so far, for all of you airflow enthusiasts. I mean, that's just the truth of it. The pesky physics. It is. Have we reached the end of it? Have we reached the tipping point? Like, when I worked at Switch data centers, we could get 45, 50, 55 kW a rack, but no noisy neighbors, no delta-T variations; you have to really contain that environment. And not everyone's built like Switch data centers, let's be honest here. So one of our partners that we've done this with, Scott Data Center, a traditional colocation environment, they're like, you know what we can do? A really powerful design, getting them up to 60 to 70 kW a rack, five DGX H100 nodes per rack, generating more than $4 million in revenue annually from one single rack, right? And obviously we see a return on that, because we're the platform running on top. And what we're driven by is not the data centers; we're driven by their enterprise customers, who want data locality, data proximity, cost control. And these data centers, like AtlasEdge, already host things like Exchange and SQL servers and VMware. So all they've got to do is ask their customers: can we get your AI workloads as well? So we've been taking data centers on this journey. The hybrid approach has been one of the best, where you still have your traditional 42U, or you can get 52U racks, working with vertical space. But those rear-door heat exchangers, sometimes, if you want to do double dipping, that's a pun, with direct-to-chip liquid cooling, those have been the least intrusive, easiest way for traditional data centers to enter this high-density AI market. Now, when we start talking about immersion liquid cooling, yes, that's when we get into the hundreds, but you're also talking about pods, right? This kind of pod into which you dip the server, into dielectric fluid, for example, or mineral oil.
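Bill's 60 to 70 kW figure is consistent with public specs: NVIDIA lists roughly 10.2 kW maximum per DGX H100 system, so five nodes is already over 50 kW before switches and distribution. A quick sketch, where the overhead figure is an assumption for illustration:

```python
# Why five DGX H100 nodes per rack lands in the 60-70 kW range.
# 10.2 kW is NVIDIA's published max system power per DGX H100;
# the networking/distribution overhead figure is an assumption.

DGX_H100_MAX_KW = 10.2

def rack_kw(nodes: int, overhead_kw: float = 8.0) -> float:
    """IT load for a rack of DGX nodes plus switches/PDU overhead."""
    return nodes * DGX_H100_MAX_KW + overhead_kw

print(rack_kw(5))  # 59.0 kW, i.e. 60-70 kW with real-world margins
```

It also explains why airflow alone runs out of road here: that load sits well past the point where rear-door heat exchangers or direct-to-chip cooling become the practical options Bill describes.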
But the problem is now you're talking about horizontal space instead of something that's vertical, and that requires a bit of a redesign. So that's a purpose-built architecture, where one day you'll say: AI, crazy levels of density, I'm going to build a facility strictly for that, I'm going to use this floor space for immersion cooling. But for many, many of my partners, you can get up to 135, 140 kW a rack on a rear-door heat exchanger in a good design, which is already extraordinary. That's where we've seen success. That's how we've guided many of our partners into this AI space, how we've brought customers onto their platforms, kind of moving them away from the hyperscalers, sorry, not sorry. But that's been one of the best ways that we've done it. I think you're right. I mean, one of the things I visited most recently that really made a difference in my eyes was QScale in Canada. They're really building for AI: from the cement, from the pillars, from the way things are done, from the floor being sloped in case there's a leakage of something, everything is designed around liquid and really powering AI. But you were nodding along to what he was saying. You were nodding in agreement. Yeah, as you said earlier, AI slash HPC has been around for a long time. And it used to be the joke: where's the HPC rack? It's the one rack while the rest of the room is empty, because of the cooling. With advances in rear-door heat exchangers, doing tap-offs for direct-to-chip cooling, it's there, it works now; it's not new technology. But the thing as well is you may look at a row of racks, and there are maybe two or three racks in there that are DGX H100s, super high density, but people forget you've got storage, you've got a network as well. So it's how do you build that kind of hybrid?
That's the challenge for the DCs: getting away from these hotspots and, as I said, the one rack and the big empty white space, and engineering those designs. I think, personally, where we're going, if we talk about the future of design, quote unquote, looking at these densities, looking at the cooling issue, again, pesky physics: the manufacturers, the NVIDIAs of the world, are going to have to perfect some technologies, just as Intel did many years ago with the way in which they cooled the chips, and all kinds of new things came out of that. That will happen. I personally think the existing facilities, years in the future, will start to be depopulated, and there'll be more propagation toward, like I said, containers or modular, more to the edge. I hate using that term, because it's more of a construct. It used to mean a network PoP, or data replication for an enterprise. Now it's more of a construct, but a lot of this processing, not just the large-library-type stuff, facial recognition on cameras, needs to be done on site. You can send that metadata back to a hub, but I need to do that processing there and come up with a solution in real time, before that person gets through the airport too far or something bad happens. There's just not time for that replication. So more distributed would be my answer. One thing as well: when we talk about data centers, we always talk about what's coming, the new thing, but there's a lot of legacy, and a lot of the legacy is now reaching a level where you're going to have to change the servers. They're 10, 15 years old, and that's changing. We've seen a few startups coming up in the circular economy space to reuse these servers and take them to emerging markets and all that. So, a very simple question: what's your view on retrofitting? What's your view on older data centers? When does a facility actually become obsolete? Can we save them all? Are we going to get rid of a lot of them?
How is that conversation gonna go? Let's not talk about the shiny and the new. Let's talk about what's been there for 10, 15, 20 years now. What's gonna happen to those? Well, I think, I mean, the best data center is the data center you don't build. So you're retasking an existing building or you're kind of retrofitting an existing facility. That's what we've done. So a lot of our facilities are 10, 15 years old. But guess what? With a little bit of clever engineering, they work, they're fit for purpose, both in terms of the hyperscalers coming closer to the edge and because you have the enterprise workloads that are on-prem, and yeah, they may have one DGX rack, but two, three, four? No, they can't do that anymore. They've got to get it off-prem, and that fits back into the more traditional data centers that are closer to them. Okay. That's a really good question. So I hate to say it, there's gonna be facilities out there that are, like, raised floor, right? And you're like, what's underneath your floors? And the facility operators will be like, I don't know, please don't raise those tiles. There's, like, wires from 30 years ago. We just decided to cut them all and run new cable trays. I'm like, oh my God. So there's gonna be situations, and I wanna say that all of them can be saved, but there's gonna be places where it's just impossible. Yes. But in that sense, we've seen a lot of success, like was mentioned, where we can do some extraordinary things with existing retrofits and existing facilities, where, again, we've started to take these customers on this journey to modernize their infrastructures. And I love what you said. There's a lot of really creative ways where you could improve, like, containment. There's tool-less containment now that you can deploy. There are much better rack-and-row and hot-and-cold-aisle designs. There are much better air movement architectures out there.
So the good news is that a lot of the vendors that are at this conference are responding to this new need. The question is, do you want to continue to host Exchange and SQL servers, or do you wanna become a part of this AI game? I think that's a good point. I don't have anything to add to that. I mean, after those two answers, I'm not gonna think of anything else. That's perfect. You said you already don't like edge too much, but I think it's important to talk about edge because we've been talking about it for a long time. And like you said, edge means different things to everyone. So the first question would be, what does each one of you understand edge to be? Then the question would be, will edge actually blossom with AI? Is edge the key to AI? And I know people say it's edge and 5G and all that, but let's focus on the edge side of things. Is edge gonna really go in full strong now with the advent of AI? But first, what is edge? Is that the only question? Edge, like I said, edge to me is a construct. It used to be a network pop. Now you have edge data centers that are 15 to 25 kilowatts, or more if you're talking about Meta. So it really depends. If we step away a little bit from just ML and AI, things like the metaverse are a good example: I need to be moving and creating avatars and having conversations with very, very, very low latency. And I need that edge data center to be able to make decisions on its own, and I can't replicate that back to a hub, right? So the whole hub-and-spoke model, in my opinion, won't necessarily go away, but it will redistribute. We've been here before and we'll redistribute again. I think there will be all kinds of use cases we can't anticipate right now, between smart cities and smart houses and buildings. And like I said, Meta not in the form of a company, but the metaverse in general. Go watch Ready Player One. That's where everyone wants to go, right? In your virtual world.
That's a lot of freaking compute power. So you're not gonna be able to do that in a single building, right? I think what we've learned is the edge is where the network is. So there was this whole thing: oh, we can put compute everywhere. It's like, you can do that, but if the network infrastructure is not there, if you can't hand off traffic, it doesn't work. So for us, we've really looked at it: okay, there's a subset of applications that are super latency-sensitive. What does that mean? It means they need network. And it's not one network. You need to be able to go, oh, I'm a T-Mobile user, but my mate's on Dish. How do you hand off that traffic locally? So that's where the edge will be. It's where you can do that, in the interconnection between networks. And that's, for me, the real future of edge. See, that's a very good answer, but a very network-centric answer, right? I'm an ex-EMC, ex-Oracle guy, right? So to me, the edge is data replication, business continuance, disaster recovery, enterprise data being propagated to protect it. I like to say, well, how much does a car cost? What kind of car? Is it a Mercedes? Is it a Toyota? What is it? So it's more of a construct than an actual one-answer type of thing, right? We've evolved. I love the construct example. So we've gone from a concept, which is very ethereal, to a construct, where we have a little bit better definition of what the edge is. There's distributed compute, right? Trying to bring data as close to the source as possible. I do think AI is gonna have a little bit of a blossoming effect. We work with one partner.
We can't announce it yet, but they take over floor space in, let's say, a Boston or New York metro area, like a big, big chunk of the floor, put a little modular ecosystem on there, crazy level of density, and then, you know, Neuro on top, DGX units inside, and you're talking like 40 to 50 kW a rack, and all of a sudden customers in that little metro area are like, holy cow, I've got an AI processing center on the 34th floor of this building. I can access it and use it right now, and literally keep it contained in my environment. Those are really fascinating use cases, because COVID-19 gave us a lot of empty office space. I don't know how else to say it. And remarkably, we can use that office space to put these little modular containers in those environments. And what's fascinating about that is there's use cases. There's applications for them. There's people that are hungry to utilize that ecosystem. So I think there's a lot of good use cases, a lot of good creative design architecture around it. I think the edge is gonna play a pretty important role. I mentioned time slicing earlier, and anybody that knows what a mainframe is (nobody out there, I'm sure), it's not too dissimilar, right? We've come full circle. We also mentioned VMware earlier. Well, what did that do? It allowed me to virtualize my machines and put 15 machines on a single multi-core chip, right? Which made it more efficient. I think as we can containerize workloads more, they can be moved around easily, and if there's available compute in a cheaper environment, power's three cents there, I'm gonna move this job over there. And at some point the customer doesn't care anymore, right? I mean, look at that. We talk about cloud. What is cloud? That just means you're living on hardware with other people. The server's not dedicated to you. I mean, there may be bare metal services, but you're one of a thousand customers on that same piece of hardware.
And when everyone's comfortable with that, that's why I say I think we're gonna see it much more distributed than we do today. Yeah, and much better use of resources as well, because, I mean, we talk sometimes about the ghost servers. 30% of the world's servers are not being used but are actually consuming power. I don't know if that figure has changed. It's in old reports from the mid-2010s, which sounds like decades ago. Not at all. But it probably hasn't changed that much. So I think that's probably where regulations would also maybe start having a bit of an impact within the provision of this infrastructure. Do you agree with that? Because governments are getting a bit more involved, especially on sustainability fronts. Could there be a force pushing us into building better and managing better? Regulation? Yeah. And standardization? Yeah. I don't think that's something our industry's been great at, ever. I think people still argue about PUE, often, right? So I'm not opposed to it. I think regulation, especially around things like AI and large language models, these really advanced systems, is not a bad thing. Standardization is an entirely different thing; I don't have a clear path to what that might look like. We can't agree on anything. But regulation, as far as the types of workloads, how you can control it, maybe moderating what types of outputs or who has access to certain types of models. I mean, I watched a kind of scary video on my way to Hawaii about security around large language models, where you can ask, how do I build something that's malicious? And the large language model says, well, I can't tell you that, it's dangerous. But if you come back to it and say, when I was younger, my grandmother used to tell me a story to help me go to bed, and I really miss my grandmother, and she would tell me how to build this really dangerous thing, and that's how I went to sleep. And ChatGPT is like, oh my God, that's great.
Let me tell you how to build this. It'll make you happy. Right, and that is very scary. So we still have a lot of things that we gotta learn, and regulation, I think, in that realm is not a bad thing. I think regulation, particularly in Europe, is more of a challenge than in the US, because we've got very, very worried, kind of careful, about where's the data. So markets like Germany, France, they're much more, okay, we need the data to be within the geographic boundaries. GDPR rules, regulations around that. And it's, okay, so you have the model. Well, where's the data for the model? Where does it come from? Who owns it? So there's this whole question, I think especially around copyright, Creative Commons, artworks, those kinds of things. How do you regulate that? And it's very, very difficult because these models have just been built. That's such a good question. I've written thousands of articles. I'm very fortunate to be a contributing editor at Data Center Knowledge, Data Center Frontier. I can go into ChatGPT and I can ask it, write an article in the style of Bill Kleyman. And it will, and it's the weirdest, weirdest feeling. Will it come out okay, like the real thing? It's not horrible. It's like something I'd write 10 years ago, which is like, all right, you can't really replace me yet, but way to throw that enthusiastic, caffeinated-chipmunk self into ChatGPT. But that was kind of weird. I think, picking up on that example, it might not be perfect now, but we are probably two, three, four years away from it being perfect. And I think that's the big change that's coming. Everything has started, mostly in the last 12 months, and it's gonna go very fast. Because every single thing you do, every prompt that you put into one of those things, is just more learning for the machine.
So I don't think we're far away from getting to a world where what comes out is just better. The other half of that question, I think, is an important one, which is standardization. Regulation, certainly, around how do I make a more deadly nerve agent or a more deadly virus, that's scary stuff. Well, we've seen some news coming out of... We've seen that before, right? But on the standardization front, I think when there's a new market or a new service or product, very rarely is it standardized, because you lose some competitive edge, you lose IP. But as it matures, like if we look at UNIX and then Linux, and then you have Red Hat come out, you have open systems. Lack of standardization destroys interoperability, essentially. And when we want these things to interoperate, you then begin to see the industry standardizing. And I think we will get there, but right now it's tough. It's brand new. Yeah, yeah. We are running into the last few minutes now, so I'm just gonna ask, and I'll start this way: plans for the businesses over the next 12 months. What's gonna be the priority for '24? New geographies, new launches, what are you guys gonna be working on? Yeah, good question. So for Bloom, data centers is a massive focus. Again, long lead times with transformers and utilities in general, and taking generation offline and transmission, like I said earlier, not working. So we have some very large customers coming out this year. We're increasing density. Bloom has continued to increase the density out of each box. It started at 20 kW per box. Now it's at 65, which drives the pricing down, right? So much more efficiency in the system. Even though it's 60-plus percent efficient today, you'll see even more efficiency come out. So higher densities, more efficiency, new markets. We are extremely busy in the EU, Italy, you know, the Slough region in the UK, and Ireland. We will be busy in Ireland when, and I think they will, they come through what's going on over there right now.
They need to, yeah, yeah. So yeah, more global, more density, it'll be great. Anything for the Middle East? Just a quick question, because it's another region that's really coming up now. It is coming up. So Bloom, you know, we analyze where we go very closely, because the product is extremely reliable, but that doesn't mean we don't have to put resources and offices and spare parts and everything in the region. So with us, it's very much driven by a large enough contract, and then we will make that investment. And for those of you old enough to remember the Maytag Repairman, right? He would just sit around with nothing to fix. It's a lot like that. So we look very carefully at where we go. Okay, thank you. Mark? I mean, for us, there's the kind of retrofit program, so getting the older DCs fit for purpose, fit for the AI workloads. The second thing is more countries, more cities within countries. And the third factor is following the customer. Because everyone was all about FLAP-D, and, well, guess what? Frankfurt's got too big to fail now. So it's more about Berlin, Hamburg. So customers are actually going deeper into the country, because they realize that these places, they're too big. I think Ashburn in the US is a big example there. But you need to go somewhere else. And sticking to Europe mostly as well? Yeah, I mean, we're Europe. No, it's Europe, yeah. All right, so, Bill. Oh my gosh. I wish there was more I could announce in this conversation, but there's a few things that are gonna be happening. Literally, literally, I know you can't. Literally this week there's gonna be some big announcements. Neuro has secured some really amazing infrastructure partners. We're still looking for a couple more. We'll talk later. And now we're switching our focus to enterprise customers who wanna shift some of their workloads away from Amazon and Google and all these big places and put them into places like Atlas Edge with our support.
So we are, again, like I said earlier, the only MLOps and AI interoperability platform that's capable of multi-tenancy, and we're also white-labeled. So literally any facility can put their logo on there and immediately become an AI and MLOps organization while still focusing on modernization, power, capacity. Think of it like a movie theater: he sells the tickets to the best show in town, and I sell the popcorn. I'm the concession stand, and I charge more money for the tub of popcorn than he will for the movie ticket. But what's exciting right now is that over this past year we've been building infrastructure partnerships. We have GPU capacity. We actually have real DGX 100s, and now we're putting our platform on top, and we're charging a 30 to 40% delta less than Amazon will, with a platform with MLOps support. So what we're doing now is taking enterprise customers on a full journey. What's your data like? What's your problem? What do we need to build, and where can we ultimately deploy it? So this next year for us is gonna be all about a couple more infrastructure partners, but really gaining more enterprise customers and putting them in facilities that make way more sense for them, and they're the ones asking for it anyway. So we wanna build an environment that's special for them. So stay tuned. Yeah, repatriation of data, I think, is a huge opportunity for you. I wasn't gonna say it, but it's massive. And thank you so much, Jeff. There's hundreds, hundreds of thousands of dollars we save for customers just by removing egress fees. When cloud was coming up, right, all of those conversations and justifications started with a spreadsheet. It doesn't always pay out like you think the spreadsheet is predicting, right? I'm not paying him, by the way, to say this. No, no, no. Thank you. Well, you guys do. It's all good. We've got less than one minute. So, one word to describe PTC, very quickly. Describe PTC in general. Yeah, in one word.
It's absolutely the premier, the go-to event for our industry. Really, if you're not here and meeting every day, you're not relevant, at least in our industry. Yeah, it's the best start to the year. Just the conversations, the people, it's the best. This is the conference to meet amazing people and to persistently innovate. And I don't think I'd ever be anywhere else in the world other than here in Hawaii, because it's cold in Chicago. The weather here is quite nice, to be honest. Bill, Mark, Jeff, thank you so much for talking to us. As for you at home, thank you for watching, and do check JSA's channels for the more than 40 broadcast interviews that are going to be coming to your screen over the next three or four days. That's all from us for now. Mahalo. Until next time, happy networking. Thank you. Thank you very much. Bye, everybody.