Good morning nerds, and welcome back to the Mile High City. We're here at Supercomputing 2023. My name's Savannah Peterson, joined by my co-host, John Furrier. John, day two, lots of action, lots of people milling around. How are you feeling? I mean, I'm feeling great, and the conversation's about hardware, speeds and feeds, which we love, because the silicon and the cloud are bringing things together. More servers, more cores, more networking. I mean, it's the perfect storm of innovation here. It's going to be awesome. And data is driving AI, and it's all kind of colliding together. It's great. It is great, and I'm particularly excited for our next conversation about sustainable business in HPC and AI. Hot topic here at the show, pretty much covering all the bases. Please welcome Serban and Guy. Thank you both so much for being here. Are you having a good show? It sounds like you're both busy. Oh, for sure, plenty of meetings, a lot of customer discussions, a lot of partners, and obviously we are looking to develop further what the business is for HPC and AI. Which is the conversation top of mind for everyone here. I suspect most of our audience is familiar with Dell. However, Serban, why don't you give us a little bit of background on atNorth? So atNorth is a pan-Nordic service provider. We're building data centers for HPC and AI only, from the ground up. We build HPC as a service and AI as a service. So we are really partners in crime in developing this business and supporting this business, but in a very sustainable way. Coming from the Nordics, I'm not surprised to hear you say that. You have multiple data centers across the Nordics. Guy, correct? You're just about to open your ninth, in Copenhagen. Is that correct? That's right. That is exciting. Congratulations on that. Thank you very much. Tell me a little bit more about the partnership. You mentioned that you've been working together, Serban, for five- Almost five years. 
Yeah, yeah, yeah. So you're not hype curve chasers. No, no. You've been planning for this moment. We planned this type of thinking for quite a long time. The way it started was, we were looking for a partner able to provide a sustainable and clearly understandable energy cost, and able to provide a solution in a region that is stable as well, not only from an economic perspective, but also from a political perspective. In this world today, I think that was a very important way to look at it. And when we started the business, we were looking at how we can merge a Dell culture, which is around technology and people and processes and partnerships, together with what the customers are looking for. And atNorth was exactly in the middle of that turning point. Guy, talk about the formula for success, because it's very difficult to legitimately check the sustainability box and still have the technology innovation and the workload scalability and density that's required in high performance data centers. What's the secret sauce? What's the formula? How do you guys pull it off? So first of all, as we focus on HPC and AI, we're focusing on the workloads our customers are running. Because that's why they do this. And then we really see what we need to make this better, to make this more performant, to make this more agile. And we looked at what is needed underneath, what is the best configuration of the hardware, what is the best network, the best storage, but we also looked at scale, because we look at customers at scale. Yesterday we were in the presentation where Boeing presented how they do development of planes now and for the future. So this is really at scale, no doubt. We have done this together for clients that we can mention, that we can really refer to. There's a bank, BNP Paribas. We looked at what they are running. They do all their risk calculations, which are really heavy. 
They moved them away from France, from the UK, from Central Europe, and we built that together for them in several data centers in the Nordics, in Iceland, in Sweden, and so on. We built that for them in a redundant way and they are now enjoying it. And this is now their sustainability story. They show on their website, they show in their annual report, how they achieve sustainability. And, guys, it's super important to mention here that it was not one-sided work. It was all the parties: the customer, who was looking to organize this; us as the vendor; and he as the partner, assembling all these discussions and building a solution. And by the way, one of the sites in Copenhagen is using the hot water to produce heat for the people in the city. So it's a kind of circular renewal. You get the renewability. Exactly. What's the design factor? Because I can only imagine the design has to be innovative. Yes. And you've got to use the Dell gear. You've got to get the stuff. You've got to get the hardware. What's the design mission? What's the North Star on the design philosophy? Yeah, happy to really go into that. Not too long, but to the point. As we said, we cover the whole stack from the application down. We look at what the application needs, and what the data center needs are, the network needs, the compute needs, the storage needs, and so on. And what we did is because our clients want sustainability, now much more than before. A couple of years ago, sustainability was on their list, but not at the top, not necessarily even in the top three. It was mentioned, but now it is absolutely in the top three. So what we did, we did the design. So we see, of course, the computers, the data centers. So the data centers only use the most efficient form of energy, and we have an abundance of renewable energy in the Nordics. So that ticks the box. Then the data center design is the most efficient. 
We build it for HPC and AI in the most efficient way; that ticks the box. Then, of course, we look at how the whole clusters come together, and we now move more and more into direct liquid cooling. So we actually directly capture the heat, give that to the heat exchangers, and then we sell that off to the heat networks, and we are heating 10,000 houses and apartments with that heat. So that's really the circular economy. So that's why these clients like that so much. They come with their critical-mass HPC and AI workloads. They love to come with it. And we love to do more of that. And actually I can say it's not only in Europe. We have quite some US clients who bring that to us as well. So they have found that. I've got to just emphasize a really fascinating point. I think sometimes when we think about sustainability, we think about net zero, or we think about carbon emissions, or we think about literal energy. But it's wild to think that the machines powering our AI future, and perhaps the AI that could control the thermostat in my house, could also, through liquid cooling, be responsible for heating that system as a result. Exactly. And think about when you're looking at the next generation of technology, right? The CPUs, the GPUs of tomorrow's world, they are going to consume more and more energy. Exponentially more. Exponentially. We're talking orders of magnitude here. That's the point. And we had a meeting with a customer earlier today, and the discussion was not if they are going to liquid cool, it's when they are going to liquid cool. Right. And that is a discussion you need to start planning now, in order to understand where you're going to be a year and a half, two years from now, because that will come massively into this industry. And we need to make the systems manageable in a solution like that. And if you are not driving in a sustainable way, then the overall system isn't... We're going to hit a dead end. 
Exactly, it's a dead end. There's no way to get around that. And I think that's really important. I love that we're representing a lot of geography here. Guy, you're from Belgium. Serban, we've got you from Romania, which I think is awesome. And I actually want to talk about that, because one of our goals for the show this week is to separate some of the myth from reality when it comes to AI and HPC, very relevant to our discussion here. But I'm very curious, because there's renewable energy in the Nordics, and you've obviously got a very excited community in Romania. Will AI and HPC computing not only democratize AI, or the access to this, but also lift up other global geographies and bring them into this high-tech fold? Do you think we're going to see that? Of course, and if you are looking in this direction, you can compare it, like we talked about just prior to the show, to the first automobile, right? When it appeared, there were no safety belts. There was no differentiation in fuel. There were no signs on the street. Everybody was driving on the right or the left side. Well, let's not go there, right? That's a debate over beers we can have later. But it's at the beginning. The landscape of providers in the artificial intelligence space is daunting. I mean, enormous, right? Which is exceptional, because this means a lot of innovation will come. And when you look at what we are driving, it's exactly in this direction. We are trying to democratize the infrastructure and the solutions we are bringing together. Then we are going to advance, putting out more new solutions, and then we are going to innovate. Yesterday we had the HPC community event, which was outstanding. Outstanding. The level of conversation we had: we had atNorth, we had Boeing, we had the University of Cambridge. We had TACC as well there. 
So it was. And the differences between what people are implementing and what they are talking about all go in the same direction. The future is bright. We were actually commenting on the Dell HPC community event you guys had yesterday again. It wasn't just Dell, though. It was everybody in the ecosystem. And I think Boeing had a great example showing the visibility into quantum. We saw NVIDIA and AMD, and it's a whole ecosystem. My favorite line, the one that I think encapsulates it, was from the guy from TACC; he said AI vindicates the HPC way. And I think that's the conversation that has dominated theCUBE and the airwaves here so far. And I'll ask you guys the same question. What is the impact of AI on HPC and the needs of the evolving data center? Because with cloud operations, you're now seeing not just repatriation, but net new architectural shifts, new build-outs, more GPUs, more needs for alternatives to GPUs, more interconnects. AI hardware is dominating, the hyperscalers are involved. This is a whole new ball game. What do you guys see as the AI impact on HPC? The number of use cases is just enormous. And we are still discovering more and more. Every day, every meeting, I hear about new use cases. And yeah, be it in finance, be it in automotive, be it in airplanes, be it in business, whatever, they are all finding new use cases. And we find them within the ecosystem for improving the solutions. Where it's actually happening, for example, is optimizing the data center, because the GPUs are using quite some power. They're generating quite some heat. And if you don't do this right, the whole system can melt down. So AI is used, and computational fluid dynamics is used, to optimize the data centers, to optimize the whole technology within the solution itself; but outside, it's just enormous. And we see this now in HPC. 
We are also working with developers, with companies developing HPC software and AI software, and the two are merging. They're using AI within HPC to improve things where calculations normally take way too long if you need to calculate over and over again. I can use another example. I'm meeting the guys later today; it's a weather forecasting company here in the Colorado area. And they're really applying more and more AI to predict the weather and to impact what can be done with climate. That democratization piece, you were just referring to that. That is a huge point. Yeah, so again, I don't think those worlds are mutually exclusive. They are starting to combine. Which worlds? HPC and AI, they're going to combine. And you'll see systems right now running an HPC workload, tomorrow being redesigned, reconfigured to run an AI workload, an inferencing or a training of a lot of things. And then you're moving again to something else. And the challenge I think we are looking at right now is finding the people who are able to drive this. In the past it was easier; you'd get several students and it was cheap. Now you're fighting for a lot of resources, and you need to find those resources, to attract them into the ecosystem and to develop them. First of all, I want to just say you guys are both excited. I can tell by that question. AI is hot. Why is AI important? Everyone's excited, the enthusiasm is high. What's going to change in the ecosystem? Because you're teasing it out. Use cases are expanding. New developer talent is coming in. Entrepreneurial energy. You start to see this tier two, the CoreWeaves of the world, building on top of bare metal. What's happening? What's the real driver? And I'm getting back to the use cases. There are obviously vertical use cases, but there are a lot of horizontal use cases. Every company on this earth will have a sales, a marketing, a finance, an HR department. 
What if your HR department were able to tell you the attrition risk of a specific individual? Or if your finance department could tell you the most likely risk assessment for a loan? Or you are doing route optimization, or you are making a better offer, a better sales proposition: instead of doing one per day, you can do 10 per day. And I think this will improve the capacity of companies to be more agile, to be more innovative, and to be able to respond to their customers much faster. And be more innovative as well. So their innovation really is also exponential, in a way. What is possible? So they discover the combinations of what was possible, and what they really look at now are the new use cases. It's so exciting to see these ever-new ideas popping up. I mean, it is thrilling. And one of the things that I always remind myself, both in the AI space and in the quantum space, is that we're going to be answering questions we don't even know how to ask yet. Which is, yeah. Exactly, but think about this one. One year ago, there were very few people speaking about generative AI. Yeah, oh yeah, no. A year ago at Christmas, I was together with my family and my friends. Everybody below 25 years old knew what was in the market. Everybody above that age was like, what? What are you talking about? Market education is wild. And now it's changed. And everything changed in what, one year? Yeah. Was it developed in one year? No, it started in the '40s. That's a different story. I know, I know. But it's amazing how fast the adoption is, how fast the... And you've got to say Dell is on the right wave. I talked to multiple execs over at your company. It's clear there's an opportunity for Dell to ride this next wave like you did all the other generations: PC, web, mobile, and now AI. You brought up a good point, and data drives this. So now, okay, so you have everyone understanding AI. 
The role of data is going to be more and more important. So there's storage, how packets move from one point to the other, and then, you know, where it goes, how you compute on it. This is classic data center nuts and bolts. Storage, servers, and networking. We're back, we're back. We never left. We're back, right? And probably here is where what Guy is mentioning comes in, the data locality. Because you can't have the compute somewhere and the data somewhere else. You need to get them together, because otherwise the latency will become too large and you will not be able to benefit from the innovation. And this is why we proposed a dedicated line of servers for these machines, you know? So, you know, if you're looking at the XE9680, which is the eight-way GPU machine, this is a beast. But this is a beast, and that is what you need to run in order to be able to drive it. You need a beast; you've got the AI. I mean, beast mode. Yeah, that definitely is beast mode. Speaking of beast mode, you just had a very interesting acquisition. Can you tell me about the Gompute acquisition? Yeah. So I started off by saying we look at the workloads and then we see what's coming up. And we have of course been building the technology from the ground up, from the data centers. And we're happy to provide the data centers to large companies who know what they're doing, and they bring it in themselves. Or we provide HPC and AI as a service, always for these HPC and AI workloads. Now, with Gompute we acquired a company that actually starts from the workloads. It starts from the engineering workloads, from the simulation workloads, from the AI workloads. And the applications are actually pre-installed; we pre-install them. So the users, the scientists, or the engineers, or the simulation engineers, they just say, okay, what do I want to run? They put in their data, they hit run, and it's working. 
It's not like people need to think, oh, how does this need to be configured right, and lose a lot of time before they can actually run it. Because that's why they do it: they need to run the simulations, they need to run the AI inference or machine learning. And it's then also about getting the data to the GPUs fast enough to keep them busy, not having them wait for the data. So that is what the Gompute acquisition brings: everything from the applications downwards. We were already very strong from the energy, the data centers, the networks, the compute, and the storage upwards. So now the two come together, which helps with time to market and the productivity of the users. Yeah, having computer-aided engineering pre-installed, it's really that zero-to-60 moment that you get to faster, and then you can start asking what's going to happen when you're on the road trip and on the journey. Absolutely. And this is why we position, for instance, the Dell Validated Designs, which give you the blueprint, if you like, of how to select the infrastructure dedicated to your workload, whether it's inferencing, training, or pre-training a model, just to have the understanding of how to do this. It's not just putting things together; it's providing you an infrastructure that works. It's tested, it's validated, and we know it's working. I love that. So, Serban, you brought up a really great point. Last question to both of you, as you're noodling. You mentioned a year ago we were not talking about generative AI. When we have you back on theCUBE next year at Supercomputing, what are we going to be talking about that is maybe just a murmur in the hallway track right now? Wow. I sense that in the next year a lot of enterprises will move from a pilot into full development of their AI infrastructure. And then, you know, I don't have the, I have theCUBE, but I don't have the crystal ball. 
I sense as well that we are going to see a lot of developments in the infrastructure. We'll see new processors, new GPUs, new connectivity, and we have a lot of examples of what can be done differently from the traditional way of doing IT. Love it, great answer. Look forward to playing back that footage next year and seeing how it tees up. Guy, what's your prediction? So, I agree, and I would like to add: today a lot of machine learning is still happening. A lot of models are being created, and yes, we see the tests with GPT and the large language models, but I think the inference, the real usage of the models, that's going to boom. And, as you say, the enterprises are all looking into it. Some are more advanced than others, but they will get there. They see the point, and their boards, their executive teams, they get it, and they cannot wait to see the results in their organizations. So I think that's going to make a big difference. Absolutely. The uptake of this, the real usage of this in the whole economy, in the whole ecosystem. Absolutely. Absolutely, I agree. Lots more case studies, lots more reasons to celebrate. Just one more reason to come back. Yes, we would love to have you back. Thank you both for an absolutely fantastic and informative dialogue about sustainable business in HPC and AI. John, thank you for your candor and your fabulous questions. Thank you. And thank you to our fabulous supercomputing community out there, tuning in live to our coverage here from Denver, Colorado. My name's Savannah Peterson. You're watching theCUBE, the leading source for emerging tech news.