Welcome back everyone. This is theCUBE, live here in Denver for Supercomputing 23. We've got an amazing panel discussion here, with experts in AI, storage, and data here at Supercomputing. HPC meets AI: that's the panel. We've got John here from Sycomp, Leon from IBM High Performance Computing, and Hugo with Google. Gentlemen, thank you for coming on theCUBE. Appreciate it. Thank you for having us. Thanks, John. So the conversation here is that HPC plus AI is changing the game, and with the foundational model layer and the user interface being AI, changing the applications, all the hardware configurations are under transformation. It's still storage, networking, and compute, but it's at different scales. This is the state-of-the-art show, so it's really kind of a collision of innovation. Correct. That's the topic. So first question: how do you guys see this evolving? Do you see it from the bottom up, semis and hardware, or from the cloud down? Is everyone meeting in the middle? What does the market landscape look like? John, we'll start with you. You know, I see it as really from the bottom up, right? I mean, the cloud infrastructure, both Leon and Hugo with IBM Cloud and Google Cloud, they've built that. They've built the networks, they've built the infrastructure, but we still have the problem of applications and data, right? How do we get the applications and the data merged in the cloud environment, in a hybrid environment, and give that HPC burst-type environment? HPC's been driving a ton of innovation: speed, scale, precision. Cloud's been doing that too, at scale. And now you have applications and ecosystems emerging. The workflows are turning into AI clusters. How are you guys seeing that evolve? Because the cloud is going to be a big part of that. Now you've got distributed computing, hybrid, cloud operations. You've got Kubernetes clusters out there. You've got AI clusters standing up. So you have a whole other ball game going on, but the game is still the same. 
How do you guys see it at IBM? So I think the first point is nobody's doing less HPC. There are a few key areas where IBM really specializes. Financial services is one; we sort of own that marketplace from a scheduler perspective. EDA, semiconductor design. And you look at the pressures of those businesses: we're seeing five X by 2030 for financial services, and we're seeing every generational shift in CPUs driving three, four, five times more compute capability. So as I said, nobody's doing less HPC. Everybody's doing more. And then you sprinkle AI on top of that. So we are very, very busy, and we've got a lot to offer our client base. Hugo, that's a great point. I was talking to some entrepreneurs, and they're saying the same thing. It's not a zero-sum game; it's all incremental new growth. Yeah, I mean, it's a super exciting time right now, because if you look at it from a workload perspective, you've got the traditional HPC workloads that are just using heavy iron: lots of compute, lots of storage, lots of networking. But with the emergence of AI, you've got AI-accelerated HPC. You've got AI being embedded within HPC applications and use cases. So, you know, I think like you said, no one's saying, I want to do less HPC. And I think AI just helps us accelerate time to insight and new development. So talk about the partnership between you guys, John, and how you're evolving solutions for customers. What do the workloads look like? What are some of the complexities? What are the opportunities? What are the challenges? The challenge is data, right? The challenge is, where does the data sit? Where's the source of truth? How do we get it to the application regardless of where it is? And that's where we've worked with Google and IBM Cloud to bring them a solution that allows that data highway, if you will, between anywhere and any place. So you have data access to feed the AI models. 
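The "data highway" pattern the panel keeps returning to, with an object store as the durable source of truth that hydrates an ephemeral cloud parallel file system for the life of a job, can be sketched in a few lines. This is a minimal illustration of the lifecycle only; every function, path, and bucket name here is hypothetical, not Sycomp's, Google's, or IBM's actual API:

```python
# Sketch of the cloud-burst lifecycle discussed in the panel:
# hydrate scratch from the object store, run, write results back, tear down.
# All names below (provision_scratch_fs, hydrate, ...) are hypothetical.

def provision_scratch_fs(capacity_tb: int) -> dict:
    """Stand up an ephemeral high-performance parallel file system."""
    return {"mount": "/mnt/scratch", "capacity_tb": capacity_tb, "up": True}

def hydrate(fs: dict, object_paths: list) -> list:
    """Pull only the inputs the job needs from the object store
    (the durable source of truth) into the fast scratch tier."""
    return [fs["mount"] + "/" + p.split("/")[-1] for p in object_paths]

def run_job(inputs: list) -> str:
    """Placeholder for the actual HPC/AI workload."""
    return "results.bin"

def dehydrate_and_teardown(fs: dict, result: str) -> str:
    """Copy results back to the object store, then spin the file
    system down so it stops incurring cost."""
    fs["up"] = False
    return "gs://results-bucket/" + result

fs = provision_scratch_fs(capacity_tb=100)
local_inputs = hydrate(
    fs, ["gs://data-bucket/model.ckpt", "gs://data-bucket/train.parquet"]
)
result_uri = dehydrate_and_teardown(fs, run_job(local_inputs))
print(result_uri)
```

The point of the shape, as the guests describe it, is that the expensive low-latency tier exists only while the job runs, while the object store remains the single source of truth across on-prem and cloud.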
What are some of the use cases that you guys see working with John? Well, I think the biggest issue most organizations have is that they're at full capacity in their on-prem environment today. You know, that next core could cost them $40 million if they have to invest in a new data center. So the cloud is a perfect opportunity. Hybrid cloud is the answer to that capacity problem. Now, the problem when you shift to the cloud is getting the data there, and data hydration: the right data at the right time. And that is really where we work with Sycomp to solve that problem for our clients and make sure they have a seamless workload, a seamless experience, and a seamless ability to operate at maximum performance. You know, Leon pretty much hit it. There are two big use cases. The first one is somebody bursting a job into the cloud, and you need that seamless easy button that just pulls your data out and hydrates the cloud file system. And that's what Sycomp's doing; that's the product, the platform, that they built for us. The second use case is where you have native jobs running. Of course, our native file system is a block store, and you want that easy, seamless flow: just pull the data over, spin up your instances, run your job, and when you're done with it, return the results and spin down the parallel file system. If you guys had to point out an architectural shift that's going on for enterprise architects and data architects... I mean, we've talked a lot about data management being upside down in this new paradigm. Data cleanliness, data hygiene is going to be more and more important, from storage access to data, because AI is going to have demands for the data and all the models. So do you go large, and then is inference going to be the killer app? Everyone's kind of pointing to inference as the killer app, right? And I'm not disagreeing with that. 
Training, okay, cool, train the hell out of the data, but if you're going to have inference, you've got to have access to the data. What does that do to the architecture in terms of storage, networking, and compute, and what's around it? What does the system look like? How do customers think about this? Because if they've got cloud, they already know what scale looks like, and if they've got data, they want to leverage it. What's the customer view on architecture around these solutions? I would say it is a hybrid model, right? It's getting data from on-prem into the cloud, whether it's directly into a high-performance storage system or into cloud object storage where you can hydrate back and forth from it, because that way it keeps things more palatable for the end users, right? And one source of truth across all of it, a global namespace. That's what I see as the big challenge, and how we solve that challenge for a lot of the clients. What are some of the things your customers are saying? What's the problem space they're solving? Is it a pain point? Is it an opportunity, or both, when you go in and talk to your customers? Oh, it's definitely both. It's a pain point for sure, because they're running out of data center space, right? They're having to find more compute, and, like you said, how do we get the data there? Because in the end, data is where everything starts, right? Data's fed into the models, data's in the inferencing that comes out of it; all that stuff feeds in. What do you guys see as constraints relative to the technology? Is it culture, people? What are some of the blockers, if you can call them that? Because we see demand and enthusiasm, confidence in production happening. You start to see people experimenting, iterating, certainly with AI: how did I get that result? How do I repeat what I just did? How do I change my observability equation? All these new things pop up. So how do you see that evolving? 
Because this is going to bring new opportunities. Are there things that are blocking the progress, like a supply constraint? I mean, everyone wants their GPUs. What's the constraint right now that people are working on in HPC? I think, you know, we have a very broad business. We have over 25 million software licenses for our LSF product. So there are 25 million cores out there on-prem using our software to control their grids and their HPC. And it's across many verticals, multiple industries. And it's fascinating to see how those industries are adopting at a different rate and pace. The financial services business went straight in. They wanted to get their platforms into the cloud. They want to be able to deliver results really quickly. So they were absolutely trailblazers. And we're seeing other industries now catching up, like life sciences, semiconductor, EDA. And one of the biggest concerns they all have is around security. Can you secure my data? Can you move it effectively to the cloud and back from the cloud? And am I overpaying to put my data in the cloud? So it's all around efficiency. And the three areas that I try to focus on are making sure that our solution is the most performant, the most secure, and also the most cost-effective. If you can hit those three challenges, I think you can really deliver on your clients' outcomes. Hugo, what's your take on this? Because those are great benefits. It's going to unlock more access. The big theme here is that access to high-performance computing is a benefit. What has to happen to make that more accessible? You know, everything Leon said makes sense. I'm just going to take a slightly different angle on it. Challenges, constraints, hurdles: that's a Monday morning in HPC. We've been tearing those down for decades now. 
So, you know, some of the big ones that we deal with on a regular basis: bandwidth, data movement, power, latencies, infrastructure. The demand for HPC, the demand for AI, is just straining every aspect of the ecosystem: build more data centers, build more power plants, make it green. So these are challenges that as an industry and an ecosystem we're going to have to tackle together and solve to really bring the power and benefits of HPC and AI to all. Hugo, great point. I want to ask you guys a question. This comes up a lot. I like to ask this because it reveals the mindset, because cloud sees this problem now at such a big scale. If you could optimize for one or two areas, more compute and GPUs or better networking, what would you optimize for if you had to pick right now? I think with the current... It's a trick question. We'll see how it goes. I think with the current trend, it's got to be AI and GPUs, right? It just has to be. And to a certain extent, that's what we've done in our business. So it's a pretty easy answer. Okay. You know, it doesn't matter which one you solve today; the other one's going to be the problem tomorrow, so you're constantly chasing that bubble between storage, networking, and performance. Monday morning we do compute, Tuesday's networking, Wednesday it's figuring out which one to optimize next. Yeah, it's just a vicious circle. Good answer, that's good. What's your take on this with your company as you look at the future here? A lot of things are evolving fast. The pace of play in HPC and AI and cloud is at an all-time high. It's not a slow boil right now on the action. What's your take on this for your company? What's your strategy? What's your vision? No, it's definitely not a slow boil, right? 
Everybody's running hard and fast, but if we do things right at Sycomp, it's picking the right solutions in the right areas and pointing in the right direction. AI has been around; it's not like it's new, but it's now picking up pace, right? It's been around for a while. IBM did it a long time ago with Watson, right? A long time ago. But now, because of generative AI, we've got it in the public's hands, in consumers' hands, and you see the drive. So for us, it's making sure that we are in lockstep with our partners and that we deliver the solutions that are needed. And what are some of the success outcomes that you're delivering? Could you share a few examples? Sure. We're delivering for hedge fund clients. You know, they've got the markets they constantly have to keep up with, and they wanted, I'll give you an example, 300 gigs a second of throughput. Well, we delivered that. But then you get the other end of the spectrum, where it's, I only need 25 gig, or I need 10 gig. So our solution can span that gap, right? And we take the complexity out of it, because any time you're talking about a high-performance parallel file system, it can be complex. And we've removed all that complexity for them. Performance is a great thing. You see it in high-performance computing; when will it be a steady state for just the enterprise? Because that's what people are driving towards. So cloud growth, big time. You're seeing it with on-prem, and the edge is emerging. You're going to have foundation models everywhere, including the edge. I mean, this is distributed computing. Absolutely. Data's got to move, things have got to be stored, and inference has to be run on large sets of data with precision and personalization in real time. This is a huge HPC problem that's going mainstream with AI. What's your forecast for when this progress bar tips mainstream? Are we already there? 
I mean, there isn't an industry that develops a product, whether it's an IP product or a physical product, a trading decision, you name it. If you're producing something, you're using HPC. Some folks call it technical computing. Some people call it grid. At the end of the day, you're using computers as a tool to develop your product and bring it to market faster. So this isn't new. It's been going on for decades. We're just doing a better and better job. And I don't see a tipping point where it goes away. It's prime time right now, though. Yeah, absolutely. And in the spotlight. You're not going to disagree. One of the challenges that I see with clients day in, day out is that they're playing Tetris with their grids, right? They've got 20 different product managers that need access to that grid to deliver on a product, and all of those products have killer deadlines. So they're constantly playing Tetris. And at the same time, they're being asked to do more. We talked a little bit about financial services; we mentioned EDA, life sciences, et cetera. All of these grids are doubling, tripling, quadrupling by 2030. And now, on top of it all, you've sprinkled on AI as mainstream. My heart goes out to those grid managers, right? And I lie awake at night, sleepless, thinking about how I can help them, how we can help them at IBM, how we can work with our ecosystem, with Google and Sycomp, and how we can deliver them the performance and the ability to play better Tetris. Because at the end of the day, that's what they've got to do. They have to deliver for their line of business and help grow revenue across their businesses. I mean, total performance is going to be huge. When you talk about AI not getting smaller, it's not a zero-sum game; it's just going to grow, right? We've seen this. Data's growing, but the budgets aren't growing with it. So you've got to do more on the platform engineering side. 
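Leon's "playing Tetris with the grid" is, at its core, a capacity-packing problem: many teams' jobs, each with a core count and a deadline, competing for a fixed pool. A toy greedy scheduler makes the tension concrete. This is purely illustrative; production schedulers such as IBM's LSF handle far more (preemption, fairshare, topology, and so on):

```python
# Toy illustration of the grid "Tetris" problem: greedily pack jobs
# (cores needed, deadline) onto a fixed-size grid, earliest deadline first.
# Purely illustrative; real schedulers like LSF are far more sophisticated.

def pack_jobs(jobs, total_cores):
    """jobs: list of (name, cores, deadline). Returns (scheduled, deferred)."""
    scheduled, deferred = [], []
    free = total_cores
    for name, cores, deadline in sorted(jobs, key=lambda j: j[2]):
        if cores <= free:
            scheduled.append(name)   # fits in the current window
            free -= cores
        else:
            deferred.append(name)    # must wait for the next window
    return scheduled, deferred

jobs = [
    ("eda_regression", 512, 1),
    ("risk_overnight", 768, 2),
    ("ai_training", 1024, 3),  # AI "sprinkled on top" of the same grid
]
scheduled, deferred = pack_jobs(jobs, total_cores=1536)
print(scheduled, deferred)
```

With 1,536 cores, the two traditional HPC jobs fit and the new AI training job is deferred, which is exactly the squeeze the panel describes: the grids must grow, or someone's deadline slips.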
How is data governance changing? Because if this goes forward on the same trajectory, with the AI wave behind it, it's going to flip data governance upside down. If you take it to the edge, how do you govern at scale when you have AI writing code, building pipelines, managing all the data plumbing, matching to the hardware chipsets? I mean, this is a hardware renaissance. All this new stuff is happening. What is your reaction to that? Well, that's the capability of a storage system, and the capability of working with partners and understanding how to move that data and collect the data and make sure that data governance is happening end to end. And you're right, it is going to flip on its head. There's going to be a lot of change in data governance, and we're already seeing it, right? We're already seeing differences in how data needs to be handled, how it needs to be mined, who can have it and who can't. And you're going to have that whole paradigm shift. And with data being further out on the edge, it's even more of a challenge. You know, it's funny. We've been doing theCUBE for 13 years, and we've always loved storage. And storage is supposedly going to die every year. Some people call it snoring because it's boring. We love it, but storage now, in this new equation, is everywhere. Storage is not a product; it's a feature of the infrastructure. It's actually compute, to your point. It's accelerators, it's chips. So storage is out there, and data's going to be in it. Data's still going to move. This is kind of the whole AI conundrum. So in the battle for supremacy in AI, what does the tech stack look like? If the experience is going to be consumerized, like ChatGPT, or CodeWhisperer and the code generators, with Copilots coming around the corner, what does that stack look like? 
Because I can see it just being like a self-assembling infrastructure, where you have that layer of models underneath. What's your... I think this is where you have different tiers of storage. You have long-term persistent storage; tape drives are still going to be around for a while, even though people talk about them going away. You've got hard disks, you've got SSDs. You've got hot tiers, cold tiers. For us, when our customers and clients are trying to manage their budgets, especially when you're running HPC applications, you really need that high-performance, low-latency parallel file system, but you only need it for the time that you're running your job. So you pull your data; in this case, the architecture and the platform just pull the data straight out of the object store. Seamless, easy, simple to use. You spin up your VMs, you run them, and as soon as you're done, you just return the results and spin down the file system. It saves costs. So you're a happy customer. You're a happy customer. Well, certainly we're very happy working with Sycomp, but ultimately it's about making our customers happy. Again, think about the alternatives without that. What are the hard problems that you're solving? Time, energy, what are the hard problems that you guys are solving with this? First, I mean, GPFS has been around for 25 years. It's one of the most popular parallel file systems out there for HPC, but it wasn't designed for cloud. So what we did here in this partnership was work with someone who's got the deep expertise, knowledge, history, and experience to build that easy button. Our customers don't want to manage the data movements themselves; they just want the service to work. So for us, that was one of the big reasons for working with Sycomp and developing this platform. 
The other reason is just worldwide 24/7 global support, and having a partner that can make sure they're there when our customers and partners have an issue they can help with. And Leon, you're seeing some benefits too, right? Yeah, absolutely. I mean, GPFS, Storage Scale, it's an IBM product. And what we find with Sycomp is that it's full of IBMers. They've got a heritage with the tool, right? So they understand the tool. They know the calls; they know all the internals involved. And to Hugo's point, it wasn't built for cloud. And it's got, I think we say, like a million knobs and buttons that you can twiddle and twist, right? And Sycomp's got that deep engineering experience, and they also have the agility to constantly optimize and deliver that to the client, because that's what the client needs. It's not a one-and-done; it's a setup, and then you continually optimize as their workloads evolve and their business evolves. So that's where I see the real value of our partnership in working with Sycomp. Leon, Hugo, great to have you guys on. John, I'll give you the final word. Put a plug in for the company. What's new? What are you guys working on? What are you excited about? We're working on more functionality in the product and more simplification, right? Because we have to get our customers and our clients moving, and we have to move them fast, but it's got to be clean and it's got to be perfect. So it's simplification, it's driving usage of the cloud where it makes sense, and it's being able to drive that hybrid piece, because we honestly believe it's a hybrid world, and to be able to seamlessly give that picture of data no matter where it is. That's our focus going into next year: driving the innovation. 
And because of our relationship with the cloud vendors and where we're going, and our tight relationship with IBM's software development team, we can drive that innovation. A lot of value. AI is just going to validate HPC. It's going to make this mainstream. Guys, thanks for coming on. I appreciate the commentary. John, thank you so much. Appreciate it. Thank you, John. We're here with theCUBE, live in Denver at Supercomputing 23, day three of four days of coverage. I'm John Furrier, your host. We'll be back after this short break to wrap up day three.