Good afternoon hardware friends, and welcome back to the beautiful Mile High City. We're here at Supercomputing 2023. My name is Savannah Peterson, joined as usual by my co-host, John Furrier. John, I want to talk about what it was like when you were a student, because we're merging students and enterprise here. Do you think we'd be talking about this kind of stuff? Well, as a student, I was a high-performing, high-bandwidth student. I always wanted to have high-performance research. I'd say we're heading to the end. No, I was kind of, I'll almost say, I didn't really stand out in college. I find that hard to believe. But on that note, please welcome Dr. Lisa Perez from Texas A&M, and we've also got David Schmidt from Dell PowerEdge with us. Thank you both for being here. I can imagine it's an exciting week for you. I went by the booth and saw a bunch of different projects you have for a variety of different ages that you're both collaborating on. I think the easiest place to start is with the ACES project, since it's a bit of a halo for all of that. Dr. Perez, can you tell the audience a little bit about that? So the ACES project is something that we envision to support researchers across all of the different disciplines, but with a focus on AI/ML. Since it's, you know, very much emerging, we need a lot of resources for it, and the researchers have a wide variety of needs, especially in their hardware, for the different types of AI/ML and the ever-changing needs of AI/ML. So we partnered with Dell, and Dell provided us with support and information about different resources, different other industry partners that we could connect with, to bring together this beautiful ACES system. So ACES is a system that has hardware composability. Okay, so it's composability at the hardware level. 
So it's very different from virtualization. With virtualization composability, you're looking at taking a resource, cutting it into smaller pieces, and providing the smaller pieces. With hardware composability, you're taking disaggregated resources and bringing them together, composing them onto the compute node to make a larger resource to support those AI/ML and HPC workflows. So that is the ACES system in a nutshell. That was perfect. Great for our index; our new AI is going to suck all that metadata in when people ask what it is. I've got to ask you, what does that enable? Because this is the big conversation here, as Savannah mentioned: disaggregation, silicon diversity, HPC plus AI, which has got performance and intelligence and inference and training. What's going to come out of that? Because I'm imagining this is going to get more flexibility on how to deploy use cases, maybe, and/or performance. So you're going to get performance and agility. What are some of the benefits? Can you share some anecdotes or stories around what comes out of that? So you can take a common AI/ML workflow, take PyTorch, for example, for whatever your research needs are. When you're programming and working with AI/ML models, and you have your GPUs, or whatever accelerator, directly on the compute node, that programming model is reasonably simple. If you run out of resources and you have to move off node and to another node, then it gets far more complicated. So we have the ability, with that hardware composability, to take those compute resources and bring them to the node. We have the Dell R760s, we have a variety of GPUs, we have some of the NVIDIA GPUs, Intel GPUs, and composing those to the compute nodes is what allows for a simplified software model. And that's what enables the researchers. They can grab whatever accelerator they need that's available in the pool, compose it to the node, and that enables them for their research. 
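To make that idea concrete, here is a toy sketch in Python (hypothetical class and method names for illustration, not the actual ACES fabric software) of what composing accelerators from a shared, disaggregated pool onto a compute node might look like, as opposed to carving up a fixed, node-local resource:

```python
from dataclasses import dataclass, field

@dataclass
class Accelerator:
    name: str          # e.g. "NVIDIA H100" or "Intel Max 1100"
    in_use: bool = False

@dataclass
class ComputeNode:
    hostname: str
    attached: list = field(default_factory=list)  # composed accelerators

class DisaggregatedPool:
    """Toy model of a composable-hardware fabric: accelerators live in a
    shared pool, get attached (composed) onto a node on demand so they
    look node-local to the software, then get released back."""

    def __init__(self, accelerators):
        self.accelerators = accelerators

    def compose(self, node, kind, count=1):
        # pick free accelerators whose name matches the requested kind
        picked = [a for a in self.accelerators
                  if kind in a.name and not a.in_use][:count]
        if len(picked) < count:
            raise RuntimeError(f"not enough free {kind} accelerators")
        for a in picked:
            a.in_use = True
            node.attached.append(a)
        return picked

    def release(self, node):
        # return everything composed onto the node back to the pool
        for a in node.attached:
            a.in_use = False
        node.attached.clear()

pool = DisaggregatedPool([Accelerator("NVIDIA H100"),
                          Accelerator("NVIDIA H100"),
                          Accelerator("Intel Max 1100")])
node = ComputeNode("r760-01")
pool.compose(node, "NVIDIA", count=2)        # both GPUs now look node-local
print([a.name for a in node.attached])       # → ['NVIDIA H100', 'NVIDIA H100']
pool.release(node)                           # back to the shared pool
```

The point of the sketch is the workflow shape: the researcher's code only ever sees accelerators on its own node, while the fabric decides which physical devices from the pool those are.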
It's easier for them too; they get the software advantage, and you guys provide the bare metal and the composability. That's right, and it really changes the conversation, talking to Dr. Perez and her team, from what research can I get done on my infrastructure to what research do I need to get done, and then go find the resources. So you're no longer bound by any one instance of a system, any one particular deployment. I think Dr. Perez was telling me that the deployments stretch back over a couple of different generations, and you're able to utilize the latest and just add it on to the platform. And so we very much focus on these types of use cases and understanding what it means for the future, because we know that we're not going to stop here. We're not going to stop with the hardware and the different types of accelerators and CPUs; our roadmap's going to keep growing. And so we're really focused on how we enable projects like this in the future. We love to hear it, and you touched on something that I think is really relevant right now as we're talking about quantum, our whole future in general. We're thinking about not just the questions we've asked before, or being, to your point, bound by our infrastructure's capabilities, but saying not just what research can I do, but what solutions could we invent? What could happen now? I think it's actually going to revolutionize our thinking, and it's so exciting that you're working at the intersection on this together. Dr. Perez, I have to say I went by the A&M booth, which I encourage y'all to visit if you're here, or check it out on video, or look on their website, I should say rather. There are so many different interesting projects that folks are doing within your community, everything from recruiting more K-12 STEM teachers to predicting sonic booms to sequencing our genome. What are you doing, perhaps collectively, to recruit these types of projects? 
How are you fostering them up through the ecosystem so people can realize this? So I would have to say the number one reason why we've been able to provide that support for all of those different disciplines is the way that we present the hardware. We build out additions to the Open OnDemand portal, which makes it very easy for the users to come in and utilize this very complicated hardware. We provide a large and diverse set of training and short courses, and we provide them to the national community. And our goal in those is not to just lecture. We immediately come in and say, here's an introduction to what you can do, and now let's do it. And so we get them on the system and doing practical application on the system. And we never say, okay, well, you have to be in this category or that category. It's let's get everybody across the board, down to the high school level, down to the junior high level. The teachers are so important, because we need to get them early and bring them up in the ecosystem. But that portal that makes it easy to get access is critical. To make that portal work, it's got to be easy. Yes. You've got to hide all the complexity. Yes. Okay, under the hood. What's the secret to that? Is it just design? How do other people do this? Because this is the topic here. We're hearing this as a big part of it in every single supercomputer area. So I would have to say, from what I've seen, our group, the High Performance Research Computing group at Texas A&M, we have a very good group of staff members, research scientists, and our executive director, Honggao Liu. It's the design, but more importantly, understanding the researcher. So our staff really looks at the holistic view and understands the needs of the researcher, who doesn't necessarily understand the underlying HPC. And right there, you just hit on what will, and people love to use terms like this, I hate being one of those people, but I'm going to do it right now. 
That will be the pivot point for the paradigm shift. It's when exactly what you're talking about is kind of this Intel Inside, but the laptop just works. When the hardware and the software are working for the researcher to say, I want to go cure cancer, I want to go solve this problem, it's just going to work, like me going to Google.com. I mean, that's an amazingly liberating thing when you think about it. The term gets thrown around a lot, democratizing. I don't know that it's democracy, there's definitely still a bit of the haves and have-nots, but that accessibility and that user experience, which we don't talk about enough, there's a lot of educational barrier between first-time folks and those going out there. Dr. Perez, I want to stay with you just for a second, because I know that you're a teacher yourself, that you also do a lot of recruitment, and that you work with a variety of different ages. What would you tell, or what are you telling rather, young people, or anyone, I would say folks who might be more green in this space. I hate age discrimination, so I'm not even going down that path. How would you welcome newbies to our little lovely nerdy world right now? What's been your tactic? And David, I'm curious if you have an answer on this too, but I want to hear Dr. Perez's answer first. Yeah, exactly. So my approach normally is, I just introduce them to the resources, and I don't really focus on, this is HPC and this is that. It's like, this is what you can do with it. And I give the introduction that way, and I feel that that tends to bring them into the fold easier, because then they can start thinking about, oh, what can I do? What can I do? Instead of worrying about, oh, this is hard, this is hard, right? So, yeah, that's my general approach. And the scary words can scare them off in themselves. Yes, yeah. I have to, go ahead. Well, I was going to say, I think, you know, maybe pushing Dr. 
Perez a little bit more here as well: coming up early, you know, over the weekend, when we all got here, well, you guys may have gotten here a little bit earlier, but coming up here on a Friday to spend time with the undergraduates, making sure that that level of, I know, yeah, I mean. That's the best; I'm super partial to the students. I love adults, you're all lovely, but they're the best. And I think those are the things, and then building the right technical environment so that they can learn, and learn easily, and not have it be so scary. Yes, what you said earlier about how it should all just work, but we all know it's easier said than done, and it takes partnerships, it takes an ecosystem, and it takes working on those things and pushing through the barriers to get those things done. And that's what I love the most about this particular project with ACES and everything else that the team has done at Texas A&M: they've made it happen, and now they get to do really fun things, like spend a weekend in Denver with undergraduates. And I think the democratization, or whatever word you want to use, opens up more collaboration. Okay, if they have easy access to the resources, and again, we are big believers in and proponents of more collaboration, more openness creates more value, and diversity of value as well. So how do you guys see this composability at the hardware level changing the research landscape? Because when you have people coming in, people, process, technology, that transformation equation, okay, more people are coming in. The processes are changing, because AI workflows are a little bit different. The needs of the researchers are changing; maybe they're coming in at all levels of the spectrum. What's the impact of the composability of the hardware from a research landscape perspective? 
You know, I think being able to incorporate the new things that are coming along, like you said, but you also said a keyword with infrastructure, and to me, that means standards. And if you think about the new things that are coming, you think about what's out there. It's a super important conversation. We're going to see some really interesting footprints of all sizes, and we're talking about power, cooling, the overall laws of physics, where our next generation of infrastructure is going to be pretty interesting to see, because we're going to want to enable very large-scale training instances. We're going to want to enable inferencing at different parts of our different environments, different use cases. And so how we make sure that that's all standards-driven, and that we're not asking researchers like Dr. Perez to cobble together a bunch of things on their own, that's super important to us right now. And we think that kind of accelerates, sorry for the pun, but we think it kind of accelerates the ability to build these solutions in the future when there is that kind of next generation, or next two generations, of clusters or projects that we have to build. Maybe the word cobble together is going to be no longer in the vocabulary if this continues to have composable infrastructure, right? No one wants to cobble anything together. I think of heavy lifting, mundane tasks, provisioning. Well, and the tools are going to be moving around a lot. You know, I mean, we even look at ChatGPT and all these little startups that come around it and the things that are happening right now. If there's not a universal standard, like we've seen with dongles, just as an example, between micro USB, USB-C, my gosh, we're all sick of it. If there's not a standard, it actually impedes not only the hardware, but the software development that everybody gets to collaborate on. David, I want to go to you with my next question. SC22 was a big year for Dell. It was. 
Some absolutely massive beast announcements, which were very exciting. That's right. We've had a lot of y'all on the show this time around. We love to hear it. I'm curious, what are we going to be talking about next year? That you're allowed to tell us, I understand. I know, right? If I told you, what would you do? We would promote it and be charming and help recruit more researchers and more developers and more engineers for your team. We are really looking forward. If you look at what we've been talking about as far as AI solutions, generative AI solutions, helping enterprises, just to branch out a little bit for a moment, helping enterprises deliver large language models on-prem, we are super focused on those things. So we are interested in making it easier, but when I think about the infrastructure, the stuff that you really geek out on at Supercomputing, you're going to see some really interesting things from Dell and our partners. You're going to see some interesting things in terms of how we bring different solutions to market, different capabilities across the whole gamut, or whole spectrum, I guess, of AI. And I would probably leave it there, because I can't give away all the good copy a year in advance, but I'm super excited about it. It's exciting. We were having a conversation just yesterday with the team about all the great stuff we're going to be talking about, because it really is going to be about the next generation of AI and computing in general. Well, my only last question, Dr. Perez, for you guys, in the last minute we have left: put a plug in for your groups. What are you working on? What accomplishments are you proud of? The things you would like people to know, put a quick commercial in for your group. I would have to say that it's our outreach and the way that we provide the hardware. 
So our interfaces, where we make it easy for the user to come in and set up a job and submit it with very little HPC experience; we want them to focus on their research. So that is our main goal: we want to provide these wonderful resources for the entire HPC spectrum, with the new and novel resources that are coming up in the AI/ML field. And we want to let the researcher use it without having to think about it. David, a plug for PowerEdge, powering all this: what's on the roadmap, what's coming around the corner, why does this hardware matter? You talked about us having a really awesome set of purpose-built AI products last year, and we are still enjoying that conversation with our customers and enabling our customers. I mean, as well you should be, in fairness. Yeah. It is a lot of fun right now, and I think the possibilities are just huge. And when you think about the next generation of infrastructure, and you think about our ability to drive standards into the market so that this is easier to adopt, and build the right platforms for the right use cases, it's going to be super exciting. We will continue to do what we do with PowerEdge, which is make sure we've got the right testing and validation and capabilities and design points in the hardware, so that when it's time to build the next ACES, we're there, and we're ready to do it with the right partners and the right capabilities. So. Get that hardware in there so they can compose down to the hardware level. We really look forward to hearing about the continued projects that come out of your collaboration and all that lies ahead. David Schmidt with Dell PowerEdge, thank you so much for being here. Dr. Lisa Perez, Texas A&M, thank you for what you do inspiring the existing and next generation of folks who are going to shape our future. John, thanks for always being entertaining and fun to share the day with. I'll show you my college transcript if you're interested. Never mind. 
I don't know what we need to say. If we're going to say anything, I'm going to go ahead and say, welcome to the Washington Huskies, proud UW alum, number five in the nation right now in football. You are watching Supercomputing 2023 here live from Denver, Colorado. My name's Savannah Peterson. Thank you for tuning into theCUBE, the leading source for technology coverage.