Hey guys and gals, welcome back to theCUBE, the leader in live tech coverage, here in the Mile High City, Denver, Colorado, covering Supercomputing 23. Lisa Martin here with John Furrier. John, you guys started last night with Savannah. We've had a great day so far today, but this next segment has got me on the edge of my seat. Tease it for us. Well, we've got the supercomputing show, SC23. We're talking about supercomputing here, and the highlight is that products are changing radically overnight. AI plus HPC is creating a power dynamic that's going to spark massive innovation, one that doesn't foreclose the future and also builds on what the supercomputing community's done. So this should be a great segment with some great stories. Yes, and you hit the word overnight. Armando Acosta is back, one of our alumni, the director of HPC solutions at Dell Technologies. Alan Chalker is here as well, the director of strategic programs for the Ohio Supercomputer Center. Guys, welcome. Thank you. Thank you for having us. Alan, I've got to go right to you. The Ohio Supercomputer Center, this is an '80s baby, founded in '87. Talk about its mission, its vision as it relates to data. Sure, sure. So first off, the origin story is really interesting. If you go back to the mid '80s, the National Science Foundation created some national supercomputer centers. There was a group of faculty members in Ohio who said, look, we could be a national supercomputer center, but the NSF disagreed. But what did they do? They went back to the general assembly and the governor and said, the federal government won't fund this. Will you fund it? And they said yes. So we're a little bit unique in that we're not affiliated with one particular university. We are a state agency, a state entity, and we are there to provide benefits to the entire state, not just academia, but also commercial industry, and also to raise awareness of Ohio as being a great place to work. So you can't pick a team. No.
You've got to be kidding me. You've got to be kidding me. I do like the Buckeyes. I've got the Bearcats and everything. Through your lens, what are some of the benefits that you're delivering across the landscape you were talking about? Alan, dig into that. It all comes down to aggregation. We, as a larger state entity, are able to buy in large aggregate, which happens to be from Dell. We have four clusters right now from Dell, almost 60,000 cores, that can then be used by anybody in the state, be it academics, be it private industry, be it whatever. And we have $25 million in Dell hardware on our floor right now that they can leverage. A given university, for the most part, can't do that. Is that right? Yeah. It's huge. Yeah, and that's the beautiful thing about it. I love what Alan's trying to do: he's trying to enable more HPC users, right? At our HPC community event yesterday, we talked about all these different use cases, but if we enable more users, we enable more use cases. We're solving harder problems, and not only that, it raises all boats. And so that's what I love about what they're doing in Ohio. Armando, I want to talk about that and tie it back to the Ohio Supercomputer Center, because the event yesterday was the Dell HPC community event, but it wasn't just a Dell event; there were a lot of other vendors there. It was a community event, an ecosystem event, basically. What does this tell us? Because to me, when I was in there, I was expecting to see something different. What I saw was an industry in lockstep on the future. They clearly have an AI focus, but it's a celebration. This is not talking about, well, AI is this, that and the other thing. It's go time on the product side. I mean, we kind of talked about this before the cameras started, but when you look at it, some people think it's either HPC or it's AI. And really, what we're saying is it's both.
And what you see from our customer stories, whether it's TACC or the Ohio Supercomputer Center, is that you want to do both, because it enables new types of research, right? So prime example: look at weather modeling. In the past, you would run a simulation and that model would tell you, okay, here, this is where we think the hurricane's going to hit, right, for one example. But can you imagine now, you take that result out of that simulation and you plug it into a neural network, you run training, and that insight combined with that model now gives you two different perspectives, so you can actually get a better insight than from each individual one. So that's the beauty of AI. Alan, what's your story? You've got some good stories right now. What's state of the art? What's going on in your world? Give us some, come on, show us some tech love. Sure. Come on, we're hungry. We know you've got more, Alan. Yeah, you're holding back. So, building upon what Armando said, right now as we speak, we have in Ohio art and design undergrad students at Ohio University in Athens, Ohio, connecting on their iPads to Ohio Supercomputer Center Dell resources using something we call Open OnDemand. We're going to talk about that here in a second. They log in, they launch a Stable Diffusion app, they type in a prompt, and behind the scenes that fires off onto our clusters and gets a three-generation-old NVIDIA V100 GPU in Kubernetes. There's a string of words there, I know, I'm getting technical. No, we love it. We just did KubeCon, we love it. Yeah, we're nerding out. But the students are time-slicing that GPU, and a few seconds later, they get back a generative AI image. They're art students. They can't spell HPC, can they? But they have no idea what all is involved behind the scenes. And they don't need to. Exactly like he was saying, they don't need to.
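Armando's weather-modeling example, combining a physics simulation's output with a learned model, can be sketched roughly in plain Python. This is a minimal illustration under stated assumptions, not anything from Dell or OSC: a toy "simulation" extrapolates a storm track, and the simplest possible "training" learns a constant correction from past simulation errors, so both perspectives combine into a better prediction.

```python
# Toy illustration of combining simulation output with a learned correction.
# All numbers and function names here are hypothetical.

def simulate_track(pos, velocity, steps):
    """Toy 'physics' model: extrapolate a storm position linearly."""
    return [(pos[0] + velocity[0] * t, pos[1] + velocity[1] * t)
            for t in range(1, steps + 1)]

def fit_bias(sim_preds, observed):
    """Learn a constant bias (the simplest possible 'training')
    from past simulation errors."""
    n = len(sim_preds)
    dx = sum(o[0] - s[0] for s, o in zip(sim_preds, observed)) / n
    dy = sum(o[1] - s[1] for s, o in zip(sim_preds, observed)) / n
    return dx, dy

def corrected_track(pos, velocity, steps, bias):
    """Combine both perspectives: simulation plus learned correction."""
    return [(x + bias[0], y + bias[1])
            for x, y in simulate_track(pos, velocity, steps)]

# Past storm: the toy simulation consistently undershot by (0.5, -0.2).
past_sim = simulate_track((0, 0), (1, 2), 5)
past_obs = [(x + 0.5, y - 0.2) for x, y in past_sim]
bias = fit_bias(past_sim, past_obs)

# New forecast now carries the learned correction.
new_forecast = corrected_track((10, 10), (1, 2), 3, bias)
print(new_forecast[0])
```

A real workflow would replace the linear extrapolation with a numerical weather model and the bias fit with neural network training, but the shape of the hybrid is the same.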
All they need to know is that they can go and connect to some of the most powerful computing resources in the world using the latest and greatest technology. Two years ago, if I had said that sentence I just said, it wouldn't have made sense, right? But now we all know exactly what it means. This is a huge enable button. No, but I was saying, when you look at OnDemand, that's what I love about what you're doing, right? They're abstracting away the hard stuff from HPC that scares a lot of people. And so if you make it easier for the users to interact with that HPC cluster, guess what, you get more users. And once again, like I said, it raises all boats. So you abstract that, you make it easy. That's exactly what you're doing. That's what I love. I want to let Lisa jump in there in a second, but I want to just follow up on what you said. The magic is putting that under the covers. And I think you mentioned Kubernetes clusters. That's key orchestration. It's becoming a lingua franca, like what Linux is, from a cloud native perspective. Pulling all this together is invisible to the user. That's going to create a new class of user. Expectations are different. Applications will probably look different. You're already seeing that now. This changes the game. This is actually the purpose of HPC: to provide this kind of horsepower. Absolutely, absolutely. The number of science domains that we have using our systems here in Ohio: I already mentioned art and design students, and there are horticulture and crop science students, anthropology students, political science students, fields that you never would have thought would make use of this amazing technology. We're surrounded by it here. We eat, breathe and think inside baseball, day in and day out. But they don't, and they don't need to. That's not what's important for them.
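The "time-slicing that GPU" detail in Alan's story means many users share one GPU in short turns rather than each getting a dedicated card. A purely illustrative round-robin sketch in plain Python (this is not how NVIDIA's Kubernetes device plugin is implemented; it just shows the scheduling idea):

```python
from collections import deque

def time_slice(jobs, slice_units=2):
    """Round-robin a single shared resource (think: one GPU) across jobs.
    Each job is (name, units_of_work); each turn runs slice_units of work.
    Returns the order in which jobs finish."""
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= slice_units          # this job holds the GPU for one slice
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back of the line
        else:
            finished.append(name)
    return finished

# Three hypothetical students share one GPU; shorter prompts finish first.
print(time_slice([("ada", 6), ("grace", 2), ("alan", 4)]))
# → ['grace', 'alan', 'ada']
```

The point for the art students is the same as in the transcript: short interactive requests interleave on shared hardware, so nobody waits for anyone else's long job to finish.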
That's a great point: they don't need to know what's under the covers to be able to create what they need, just like that, in almost real time. Open OnDemand, from what I understand, is about 10 years old. Give us a little bit of the history and how it's developed, because as you said, Alan, even two years ago what you described wouldn't have made sense. In terms of what John talked about, things rapidly changing overnight, you're living that. Yep, absolutely, yeah. So this is actually the 10th anniversary of us introducing Open OnDemand to the world. Basically, we were in the right spot at the right time. If you go back to the late aughts, we, like many of our peer organizations, were starting to get requests from our clients. And this is where, if you think about it, the iPhone came out in the mid aughts and everybody got used to online banking and eBay and everything like that. With all of those enterprise applications that consumers were using every day, they were like, wait a minute, I don't want to see the green text scrolling across the screen, like in the hacker movies. That's not for me, right? Right, you know, I don't write code. We were in the right place at the right time in that we started to develop web interfaces to our systems and introduced them to the community, and the community was like, oh my gosh, that's great. Can we have a copy of this interface? And we were like, oh, wait a minute, this was just a thing that we played with in-house. We went to the National Science Foundation, and we're now on the fourth of a series of multi-million dollar awards that NSF has funded to take that, deploy it, and make it available open source to the community. So I can announce right now that as of today, we have nearly 700 research computing sites in 62 countries all over the world that are using Open OnDemand as their primary interface, and yes, you can do it on your cell phone.
Now in Ohio, there's no law against drinking and computing, though we don't necessarily condone it, but I've seen people in bars using Open OnDemand on their phones. I'm dead serious, I've seen pictures of grad students sitting in a pub, logging in to see how their jobs are going. What's going on with Open OnDemand? Adoption has been amazing. How did you facilitate such wide adoption, and what took it so far? Again, I think that we were in the right place at the right time. Lightning struck twice for us. I already mentioned we were in the right place right when iPhones were taking off and everything. The other thing, a little bit of a silver lining, was the pandemic that we just got through. What happened during the pandemic was that many universities had to go to a remote learning model, and as a result, the students were not able to access the on-campus computer labs. What were they able to do? Well, there were so many sites out there that had Open OnDemand, which provides remote desktop capabilities and remote software access, that I get stories left and right from different academics saying that if it weren't for Open OnDemand, we might not have been able to continue to teach throughout the pandemic. Because you were there, and it was the right spot, right time. One of the things that came up in our earlier segment, with AI coming, is that there's going to be some low-hanging fruit. You get some benefits from existing stuff, but it's the new things you don't see yet that are going to be compelling. As you get this on-demand, cloud-like experience with generative AI and compute, GPUs and CPUs, DPUs, QPUs (we're going to have our own processor), the ability to do new things, to test and be creative, means the barriers to entry for experiments are going to be very low. Okay, so if you believe that to be true, then the next question is, what do you guys see right now as these new enablement use cases?
What are some of the things coming out that could give us dots to connect to what we might see coming out of these big, large-scale HPC environments? Because obviously, if you have massive amounts of compute that's got generative capability and some intelligence and reasoning... inference. Training's great, but inference is the holy grail. Right, that's where you want to get to, right? But you've got to do training first. That's what a lot of people don't understand. You've got to do training first. Inference is the new web app; at KubeCon, that was a big phrase we were kicking around. But this points to what's next. We don't yet know, but what do you guys see as signs? So, do you want to go first? No, go ahead. I mean, when you look at generative AI, some of the use cases that we're looking at internally are how we might improve processes, right? Or, hey, maybe we put some inputs in there and we can modify some type of code. What are ways that we can use generative AI to automate manual tasks so that we can validate, test, and build our products faster? So those are just some of the examples of what we're looking at from a GenAI perspective. What about you, Alan? I'll give you a very precise example. Just yesterday, one of the colleagues that we collaborate with at Idaho National Lab came up to me. Idaho National Lab is one of the Department of Energy's nuclear labs; they are the experts when it comes to the U.S.'s nuclear energy. They've got Open OnDemand, they make it available, and they said, hey Alan, by the way, we just deployed a ChatGPT-like (it's not ChatGPT) local help app in Open OnDemand, so that when clients have questions, instead of calling our help desk first, they can just type into a conversational AI. And oh, by the way, we have this linked into our existing documentation, our existing training materials.
And we can just add and dump more stuff in there, and it'll steer them in the right direction first, before they ever call. So where is it going? It's allowing us to reduce that burden, reduce that friction, so when people have questions, they don't necessarily need to call up the experts right away. And the role of government is going to be important here too. Not from a regulation standpoint; I'm anti-regulation, just for the record. Guardrails, okay, cool. Virtue signaling with guardrails, okay, whatever. But take this successful model you guys are running: we have national parks in this country. Why can't we have national compute farms? Is there going to be a future where you need all this compute and citizens like me could just get compute, or is that going to be a private service? Is there a movement? Because NSF, you're talking about NSF funding this stuff. That's how ARPA started 50 years ago. The internet, the paper, was the first thing that went out. We're celebrating 50 years of the internet this year. Well, I don't know if the government's going to give us one big generative AI that we can all put things into. Yeah, a national cloud, I might not see it there, but I do believe, if you look at generative AI, what you see taking off is AI as a service, right? And so when you look at AI as a service, what you see is, hey, maybe you don't have the time, maybe you don't have the experts, maybe you don't have the data scientists, but hey, we will go build that infrastructure for you. We will give you the tools, and essentially all you have to bring to the table is your code, right? And so I think those types of models will get interesting, but in the long run, I think you're going to want to keep control of your data. You're not going to want to let some of your data out. You're not going to want to let that into the public.
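The documentation-linked help app Alan describes above is essentially retrieval-augmented generation: look up the relevant doc first, then answer grounded in it. A toy sketch of just the retrieval step, with made-up docs and naive word-overlap scoring (a real deployment would use embeddings and an LLM):

```python
# Toy sketch of the retrieval step behind a docs-linked help assistant.
# The docs, titles, and quota figure below are all hypothetical.

DOCS = {
    "submitting jobs": "Use the job composer to build and submit batch jobs.",
    "resetting passwords": "Visit the identity portal to reset your password.",
    "storage quotas": "Home directories have a 500 GB quota by default.",
}

def retrieve(question):
    """Return the doc snippet sharing the most words with the question;
    the assistant would then answer grounded in that snippet."""
    q_words = set(question.lower().split())
    def score(item):
        title, body = item
        return len(q_words & set((title + " " + body).lower().split()))
    title, body = max(DOCS.items(), key=score)
    return body

print(retrieve("how do I submit a batch job"))
# → Use the job composer to build and submit batch jobs.
```

The "add and dump more stuff in there" part of the story maps to simply growing `DOCS`: new training material becomes retrievable without retraining any model.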
So I think you'll have enclaves, but to me, I think you're just... We have national parks for people to use, but this brings up the question of democratization, right? What we're seeing is no barriers to entry, the freedom to do something creative, to move the needle for advancement. I want to propose a different way for you to look at what you just said. What the future holds is, it doesn't matter whether it's a national cloud or a local resource or a commercial cloud or your cell phone; all that matters is that the tool is there and you can click on it. This is one of the things I've talked to various people at NSF about: they fund ACCESS, they fund the Open Science Grid, they fund local resources. What if there were just a common interface? Open OnDemand, of course, is what I'd want it to be, but what if there were a common interface? Then it doesn't matter where you're jumping across to or what's actually happening behind the scenes; let the magic happen. We've got plenty of smart people who can figure out how to route things and make it so the end client is just able to see it on their phone, on their iPad, on their Tesla, in the metaverse, in VR, whatever. I mean, I think the smartest thing they did is that Alan understood how his users wanted to consume the technology, right? And hey, if it's an app and I just want to tap it and use it, well, guess what? We can make that easy and do it for HPC as well. When things are boring, that's a very good sign. When things are boring, they're being used. Kubernetes is getting boring, as they say. My final question is, if this continues... the ecosystem, this community in supercomputing, has been around since 1988. A lot's changing fast here. You've still got a lot of academics, a lot of algorithms, long-view conversations around architecture, chip design, all that good stuff that's been going on here for generations, for decades.
But now you have a very fast pace of play of commercial applications coming in. What do you guys see happening to the ecosystem? What changes, what stays the same? Because partnerships are going to happen. If things go away and they're accessed and consumed, people will be playing together with their data. What changes in the ecosystem, what stays? What do you think? Sure, okay. So, you know, I think at least two of us have been around long enough. Let me give you just one example of what I've seen and then where we're going. I'm sure many people remember the ASCI Red program. Not that long ago, the federal government spent hundreds of millions of dollars to get the first teraflop computer out there, okay? Sitting on the floor at our data center, we have a single node of a Dell system: $60,000 a node, 55 teraflops. That's about $1,000 a teraflop, and clients are doing amazing things on that. That's two decades. We've gone from hundreds of millions of dollars to $1,000 per teraflop, right? So where is it going to be? It's going to continue to go. But what's happening now is it's the data. There's just so much data. And I'm sure you've heard this from other people in terms of the ingest from all the remote instruments and the edge computing and all of that stuff. Making sure that data all gets ingested appropriately and processed securely, that we have confidence in where it came from, that there's no disinformation injected into it, things like that. A single version of the truth, yeah. From our perspective, we want to enable what Alan's doing. But the other big thing that we're going to stick to is standards. We're going to push for standards on every new technology that comes out, whether it's CXL, whether it's accelerators. We want standards across the board, right?
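Alan's per-teraflop arithmetic checks out as a back-of-the-envelope; a quick sanity check in Python using only the figures quoted above:

```python
# Back-of-the-envelope using the figures quoted in the conversation.
node_cost_usd = 60_000   # quoted cost of one Dell node
node_tflops = 55         # quoted throughput of that node

cost_per_tflop = node_cost_usd / node_tflops
print(round(cost_per_tflop))  # → 1091, i.e. roughly $1,000 per teraflop

# The first teraflop-class system cost hundreds of millions of dollars
# for about 1 TF, so the drop is roughly five orders of magnitude.
```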
But the other big thing that we want to be able to do is have these standard building blocks so that customers don't have to waste their time trying to figure all this out on their own. That's the beauty of why we do validated designs. This is why we work with Alan at OSC: because we want to learn from them. And not only that, we can't build everything. Partnerships are still going to remain. You're still going to have hardware, but what I believe is that the time now is software. Open OnDemand, he saw it. When you look at the software ecosystem, I see a tighter integration of hardware and software, and with that tighter integration, I think you're going to get better performance. You know all about the different chip makers around every different AI use case, right? And I believe if you can enable all those different AI use cases with the right set of tools, then hey, let the user go do something with it, and you'll be amazed at the results. Well, you guys are doing a lot of great work. I want to give you props on that. We appreciate you guys being in the industry. One thing you said yesterday at the community event, Lisa, was really interesting, because we always ask the question, what's the impact of AI on the workflows? AI is iterative. You were pointing this out in your keynote as well as in other presentations. And there's a new era of, what are you iterating? Were you writing it down? It's like making sauce: what did I put in there? So you've got to iterate to get the model. You guys now have programs for customers to come in, stand up HPC, iterate, lock in, know what to measure, how to make it repeatable. This is going to be the new challenge. Not the memory of the machine, but the memory of what you did for the AI. This is model management. Yes, and that's the biggest thing you have to understand.
So once you build a model in your environment, you don't just set it loose and say, hey, I'm never going to touch it again. That's not how it works, right? So you talked about data management. Say you build a model and you're trying to do inferencing, trying to predict a credit score so you can loan somebody money, right? Well, guess what? Parameters change, different variables change. And so you have to go retrain that model constantly to make sure that your accuracy stays up to date. But not only that, you want to have what we call reproducibility. So, hey, who touched the model? When did they touch the model? What data did they add to the model? Did they add new layers? Did they add new weights to the model? So you have all these things where you think not only about the data governance, but now also the model governance. And not to put this in a bad way, but hey, if you make the wrong decision, you might get sued. Guess what? You have to show how you got to that decision. So reproducibility is key. That's a great point. Last question, in 60 seconds, Alan: the future of Open OnDemand, what does it look like to you? So the future of Open OnDemand is a community. Right now it's been all OSC for 10 years. Ten years from now, I want it to be like Jupyter, something like that, maybe Red Hat even, with a community building it, using it, doing amazing things, and not limited by just what my colleagues and I can envision. Well, it sounds like you're well on your way there. Guys, thank you so much for joining us on the program, talking about what you're doing together and the massive adoption. We're definitely going to be keeping our eyes on this space. Thank you. All right, thank you. Thank you for having us. Our pleasure. For John Furrier, I'm Lisa Martin. You're watching theCUBE live from SC23.
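The model-governance bookkeeping Armando lists (who touched the model, when, what data, what changes) amounts to keeping an audit trail alongside the model itself. A minimal sketch in plain Python, with hypothetical field names and entries, not any particular MLOps product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRegistry:
    """Toy model-governance log: every change to a model is recorded
    so a decision can later be traced back (reproducibility)."""
    history: list = field(default_factory=list)

    def record(self, who, action, details):
        self.history.append({
            "who": who,
            "when": datetime.now(timezone.utc).isoformat(),
            "action": action,    # e.g. "retrain", "add-layer"
            "details": details,  # e.g. dataset version, weights hash
        })

    def audit(self):
        """Answer 'how did we get to this decision?' in order."""
        return [(e["who"], e["action"], e["details"]) for e in self.history]

# Hypothetical lifecycle of a credit-scoring model.
registry = ModelRegistry()
registry.record("armando", "retrain", "credit-data v2, parameters updated")
registry.record("alan", "add-layer", "new hidden layer, weights rehashed")
print(registry.audit())
```

The "you might get sued" scenario is exactly why the log is append-only and timestamped: the `audit()` output is the reconstruction of how the model reached a given decision.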
We're going to be back with our next guest after a short break, so we'll see you then.