Welcome to Dell Technologies World. It's the premier technology event of the year. Join John Furrier, Dave Vellante and Lisa Martin as they talk to the trailblazers and trendsetters of future technology. Dell Technologies World 2023 and theCUBE, the leader in live and emerging high tech coverage. Hey everyone, welcome to theCUBE's live coverage of Dell Technologies World 2023. Lisa Martin here with Dave Vellante. This is our CUBE After Dark coverage Monday night, day one of the event. We're going to be here all day tomorrow and all day Wednesday, but you know that because you've been following along. Dave, we've had great news today, lots of conversations. We have a packed schedule, and there was loads of talk this morning about what you predicted with your crystal ball: generative AI. Well, you know, I don't know if you know this, AI was invented last fall. So that was the hottest topic going. It is the hottest topic going. We're going to be talking about that next. One of our alumni rejoins us. Andrea Booker is here, Director of the Accelerator Portfolio at Dell Technologies. Great to have you back, Andrea. Thank you, thank you. Steve Graham joins us as well, the CEO and Founder of Scalers AI. Guys, we're excited to talk to you. Michael Dell talked about this today, and Chuck did as well. A lot about generative AI, the opportunities, the responsibilities. But, Steve, talk to us about Scalers AI. What is it that you do? What was the catalyst to start the business? Yeah, absolutely. You know, we founded Scalers AI because our focus is really about fast-tracking industry transformation, and we found no better technology than AI to do just that. The modern techniques around AI that have emerged recently, deep learning, transformer models, large language models, have brought us to this incredible platform to really change the way we live and work with AI. And so, you know, we got so excited that we decided to found a company to do all this amazing stuff, to make our lives better and transform industries. 
And you're both a customer and a partner of Dell. I saw a blog on LinkedIn; you had early access to the Dell PowerEdge XE9680 system, and you did some cool generative AI use cases. Talk to us about what you did with that Dell technology. You know what's amazing is the timing couldn't be better for the XE9680 platform. Dell really took a leadership position with the latest H100 GPUs, and as you know, innovation today is GPU constrained. So getting the best, latest GPUs is a game changer in how you can fast-track your industry transformation or produce the best LLMs in the market. So we're just thrilled that we got early access to the XE9680. Not only is it packed with eight H100 GPUs that allow us to build these great large language models, it's got Broadcom-based Ethernet for fast networking capability, and also a couple of fourth-gen Xeon processors as well. So this is really one of the leading-edge systems you can have right now. If they have any left for sale, I'd be shocked, because again, innovation is GPU constrained. The world is moving to generative AI. And this is the difference: when we got the opportunity to access it, we took a large language model, you know, a ChatGPT-style model, and we ran it, and it's 2x faster on the XE9680 with these H100 GPUs. And that's the difference between you being first to market versus your competitor. And we're seeing a ton of companies integrate generative AI into their core platforms and technologies. Bloomberg came out with BloombergGPT, core to their product skill set. So whatever industry you're in, you're thinking about how you can adopt this technology as soon as possible, and that's the fastest platform we've seen in the market. So thanks, Andrea, for putting that out there. Sure. They're like gold right now. 
So it really was a big thing. What is that? So talk about that. What's the foundation of the infrastructure that's running generative AI? Because I think it's a harbinger for the new data infrastructure. Take us inside. How's it different? It's very vast, right? Generative AI is exploding into so many different workload applications. One is LLMs; ChatGPT, I think, kind of set it off, but there are so many other use cases and models. I mean, it's kind of cool to see the demo in which you can be like, hey, I want to see a dog in a bathing suit in Iceland at the moment, right? But there are also fantastic other ways we're able to utilize this technology and these workloads for operational efficiency and other kinds of outcomes. So, specifically, the XE9680, that is our flagship. You've heard Michael mention it today; you're going to hear it more tomorrow. We are very, very excited, because it's our first foray as Dell with this brand new powerhouse server, and it is the flagship of all our AI servers. But the way we did it was really mindful. We strategically designed this in partnership with NVIDIA. And we actually were very proud to first announce it at SC22, which was a huge event for us, and then we were the very first to launch it in March of this year. And we have just seen an explosion of customer demand for this product. But the way we really looked at making it was with customers in mind, for flexibility across different workloads and applications. So we actually offer both the A100 and the H100 within this product, in our latest 16G generation, with the latest CPUs, memory, and all the other bells and whistles that come along with it. And we did that so customers had choice, from a price perspective as well as from a thermal perspective, because these are beasts. They're breaking the paradigm on both avenues. 
So obviously very compute-centric. Very. Right? How storage-centric, how network-centric? The pieces have to fit together, I presume, but what's the balance that is going to drive this new wave? Yeah, I mean, network is extremely important. You want to run these large GPU clusters, so that's incredibly important as well. And one of the really unique things about large language models is we train them on nearly the entirety of the footprint of human data on the web. And text is not a lot of data, so it's not a tremendous amount of storage. But as Andrea mentioned, we play with large language models for text data, and we also play with Stable Diffusion, which is the other type of generative AI, where we're just prompting an image, like you alluded to, a cowboy on the moon with a red horse, and it's amazing we can take that text and generate the image. That one was a little more fun for us. So when we got early access to the XE9680, we had to make some more art too. And when you start getting to those multimodal cases, like generating video content, then you start getting more storage constrained as well. And there's no reason we can't do motion pictures with generative AI as well; that's really at the onset of the technology. So text today; Stable Diffusion present, where really high fidelity, compelling artwork can be generated; and in the future, you can think about motion pictures being built with AI. And what's really amazing about this technology is it really unlocks a tier of innovation. Really creative people who were never natural at drawing can now put their ideas from pen to paper with this technology. And democratizing this technology for the world is just an incredible opportunity to drive innovation. So it's going to be a blast to see how this technology is used. Absolutely. 
Speaking of that democratization of AI, Andrea, talk a little bit about, Michael talked about this this morning during the keynote, and Chuck did as well, the responsibility factor, the ethics factor when it comes to AI. I would love to be able to easily Google a dog in a bathing suit in Iceland, I think is what you described, a great image that I would probably put on my Instagram. I saw a lot of demos before I came; Steve's having a good time with this. But talk about, from Dell's perspective, from the hardware perspective, the ethics, the responsibility that Dell is helping to build into the technology to help put some guardrails on this democratization. Yeah, I think when it comes to AI and this explosion, there's a sense of responsibility that all of us share. First and foremost, one of the big things with this technology is making sure that our customers' proprietary information is secure, right? We really focused on having a solution that enables them to do that. We have our secret sauce, full iDRAC systems management, that sets customers up to have that kind of functionality and that kind of security. But in terms of the applications and how it can be used, there are so many ways the technology can be used for good and evil. Right, I mean, call a spade a spade. However, the power of this technology is what it's been able to do for healthcare, for finance, for retail. I mean, if you think about the healthcare applications, and how you can get to quicker diagnosis for a plethora of different illnesses, from cancer detection to so many other use cases out there, to have that faster insight into what is actually ailing you, I think that's peace of mind. Oh, huge. So we have to keep in mind all the good that can come out of it. That's a great point, yes. And also, everyone has to have their own social controls and a focus on what's right as well. 
So Steve, what about some of those interesting use cases Andrea was just referencing? Healthcare seems like a place where machines can certainly assist, and maybe even make better diagnoses than doctors in some cases. What's the killer use case for retail, for public safety, for manufacturing? What are you seeing? I mean, it's early days, but you said everybody's going crazy for this. How are they thinking about using it? Yeah, I think the preeminent interest is really in the large language models. And when you look at that, the core killer use case everybody's looking at right now is unlocking their proprietary data. This is very important for AI safety as well, because some of the most trusted companies in the world have proprietary data that they need to keep proprietary. And running that on premise, on an XE9680-type system, building your own custom model with your own proprietary insights and trade information, is incredibly important. How that manifests itself is: how do we empower our employees with all of the proprietary data our company has? So instead of boring trainings and onboardings with copious PowerPoint presentations, and bugging your peers all the time, you can ask those questions to a chatbot that's been trained on your proprietary information. And what about your sales advisors and your customers? Why not enable them and unlock all of your company's insights so they can get seamless access to it? But it's always better right now with humans in the loop; the net result is always better with a human in the loop. And this is one of the core use cases of AI that we all got excited around: computer vision, deep learning based. Historically, when you talk about medical, it's radiology. 
You know, the number of medical images is exploding. The number of radiologists? Staying the same. Well, they can't all just work harder, right? So why don't we use AI to detect, say, a bone fracture, and then they can go double-check it? That's where the human in the loop really helps so much. And so, you know, I'm not sold on the generative AI capabilities being better than these experts in their trades. Whether it passes the bar better than my lawyer or not, if my lawyer can work three times faster and that drives affordability, I still want their trusted insights as well. So, a ton of amazing use cases that are human in the loop; that's really important. And I think, ultimately, our economy's built on population growth and productivity, and this is a hack for productivity. A massive hack for people and productivity. So I think there's a huge opportunity in human-in-the-loop applications as well. So it's interesting, Andrea, you were talking about SC22. I've been interviewing a bunch of folks that are attending ISC 23 in Hamburg, going on right now, or actually this morning. And a number of them said, you know, we're going to run this on-prem for a lot of reasons. One is, we want to show off our supercomputer to our donors, you know, universities and things like that. The other is, when you get over, I don't know what the magic number is, 50,000 cores, I'm just kind of making it up, it's more cost effective. The other is IP leakage. So, are we seeing a pendulum swing back, do you think? I have seen the pendulum start to shift back, right? With anything that's new, it over-indexes one way and then it kind of level-sets. But I think that's what's so unique about Dell and the model that we have. 
For customers who really want to be in the cloud, we have APEX offerings where we're able to enable an ecosystem for that. We also have the on-prem options. And, you know, the other emergence, especially with these acceleration systems, is that we break the paradigm thermally for a lot of data centers, and especially as liquid cooling becomes more prevalent, we're also able to bridge customers on that paradigm with our MDCs, modular data centers. So we're able to meet customers where they're at on all those paradigms. But I would say I'm slowly starting to see that shift back to on-prem, especially as we have these capabilities, and especially for products where latency is super important. Steve, so the speed with which ChatGPT can give you a response to your prompt is just blowing people away. And in speaking to some of the technical people behind it, my takeaway, and there's a lot to it, is that one of the breakthroughs was the ability to predict the next word. Yeah, yeah. Okay, and that seems to have been a major breakthrough, and then to the extent that you can predict the next word across the internet, now you get to, okay, and I'm simplifying, obviously. For some of these other use cases, Michael Dell was talking this morning about how people are going to want very specific foundation models for their business. Yeah. What's the secret sauce there? What's the sort of technical breakthrough that you're seeing, if you picture it? Yeah, well, the amazing thing in use cases like that, and that's where everybody wants to go, is fine-tuning the AI models for their custom use case. And there's a ton of open source tools and platforms to do that. So we use techniques like embeddings, and there are some great open source projects around embeddings. And then we use vector databases that use that nearest neighbor capability, like what you're saying about predicting the next word. 
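Dave's "predict the next word" intuition can be sketched with a toy bigram model. This is purely illustrative, not how GPT-class models work (those are transformers trained over tokens at internet scale), and the tiny corpus below is made up:

```python
# Toy illustration of "predict the next word": count which word follows
# which in a small corpus, then predict the most frequent successor.
# Real LLMs learn this objective with transformers over tokens; this
# only shows the next-word-prediction idea itself.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    successors = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        successors[prev][nxt] += 1
    return successors

def predict_next(successors, word):
    counts = successors.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the model predicts the next word and the next word follows")
print(predict_next(model, "the"))  # "next" (follows "the" twice, "model" once)
```

The same counting idea, scaled to billions of parameters and contexts far longer than one word, is what makes the chat experience feel fluent.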
And we can build on the foundations of these existing large language models, and then fine-tune them, so you can get to market faster for your custom use case as well. These are great general purpose tools, but again, as Michael alluded to, they don't really transform industries until they solve that industry-specific problem in high fidelity as well. Where's the bottleneck in all this? Is it compute? Are we going to see new types of databases that lay data down on a flash drive differently? Is it algorithms? It's probably not the algorithms, but where's the bottleneck? Or is there no bottleneck, because the hose is so big? I think right now it's getting enough GPUs to satisfy the demand. It's supply. Honestly, it's supply. Go ahead. No, I mean, we're in a GPU-constrained world, and absolutely that's a huge part of it. And then, this technology has really hit mainstream only within the last six months, and there aren't as many people yet in the world who know how to fine-tune a model to high fidelity in that industry-specific domain as well. So that's another piece of the equation that's extremely important. And I think we're always limited by humanity's ambition and imagination as well. We need leadership at these large enterprises to adopt the latest technology, so their workers can have better experiences, so their customers can have better experiences, so their product can be transformed for the better, and if they don't do it, their competitor sure will. So I think leadership as well needs to get behind these generative AI use cases and make it an important priority for their company. 
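The embeddings-plus-nearest-neighbor retrieval Steve describes can be sketched in a few lines. This is a minimal illustration: the hashing "embedding" and the document strings below are invented stand-ins, and a production system would use a trained embedding model and a real vector database rather than a brute-force scan:

```python
# Sketch of retrieval over proprietary documents: embed each document
# as a vector, then answer a query with a nearest-neighbor lookup by
# cosine similarity. The toy embedding buckets words by character sum;
# real systems use learned embeddings and a vector database index.
import numpy as np

DIM = 16

def embed(text):
    # Toy bag-of-words embedding, L2-normalized so cosine similarity
    # reduces to a plain dot product.
    vec = np.zeros(DIM)
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def nearest(query, docs):
    # Brute-force nearest neighbor; vector databases index this step.
    scores = [float(embed(query) @ embed(d)) for d in docs]
    return docs[int(np.argmax(scores))]

docs = [
    "onboarding checklist for new engineers",
    "quarterly sales playbook for advisors",
    "gpu cluster maintenance runbook",
]
print(nearest("gpu cluster runbook", docs))  # "gpu cluster maintenance runbook"
```

In the chatbot use case from earlier, the retrieved document would then be fed to the language model as context, which is how proprietary data stays on-prem while still powering answers.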
And more awareness, to your point, Andrea, in terms of all the tech for good and all the opportunities that are there. We tend to hear so much in the general media about the negative potential things that scare the general public, but there was so much optimism this morning that Michael Dell shared, and you guys have shared it as well. I think there needs to be more awareness of that. Steve, my last question for you: one of the things Michael Dell said this morning was, for companies that haven't started playing around with AI yet, you are behind. How can Scalers AI help them? I see you're an accelerator. How can you help a company that is dipping their pinky in the water to really accelerate understanding, adoption, et cetera, of AI? Yeah, I mean, we can help them right from the get-go. We've got all the industry knowledge and the expertise. All they've got to do is have some great infrastructure and collect their data in the right way, and we can make sure it's secure and safe and get them rolling. So really, when you have the expertise like we have and the tooling from companies like Dell with the XE9680 platform, it's all there. In terms of time to transformation, we're talking about results in months. So pick up the phone, call, have intent as a leadership team, get your top priorities, and make it clear where you want to go. Dip your toe in, because if you haven't, you're already behind. Guys, thank you so much for joining Dave and me on theCUBE tonight, day one of our coverage of Dell Technologies World. We appreciate learning about Scalers AI, what you're doing from a technology perspective with purpose-built servers for generative AI, and all of the cool things that can happen. We appreciate you guys, thank you so much. Thank you. Thank you. For our guests and for Dave Vellante, I'm Lisa Martin, and you're watching theCUBE After Dark, as we're calling it tonight. 
We're coming to you live from Dell Technologies World 2023. Stick around, our next guest joins Dave and me in just a minute.