Hello everyone, welcome to theCUBE's coverage of High Performance 2023, covering everything HPC, machine learning, AI, high-performance analytics, and quantum computing, part of ISC 2023 CUBE coverage. Today's topic is cloud for HPC. We've got two great guests, Sherry Simekwadel of Dell and Andrew Lee of Rescale, a hot Silicon Valley startup. Sherry and Andrew, thanks for joining the program.

Thank you for having us.

You know, I love HPC, and I love how the cloud has brought it to the forefront in all applications. High performance is table stakes now, certainly with cloud scale, but now you've got machine learning and a lot of other things going on: more emphasis on silicon and chips, all this action coming in, and cloud driving it. I want to get your take on it. We'll start with you, Sherry. At cloud scale, HPC, what's it all about? What are the key issues right now that are giving this market a boost?

One of the most important issues in HPC, especially since the pandemic, is complexity. The value the cloud offers here is flexibility. Customers trying to stand up a new data center have issues with real estate, whether it's cooling or power, and with simply buying the infrastructure. We're seeing the cloud provide much more value here: you can start very small and then scale as your business scales.

I've got some questions for you, so I'm going to come back to you. Andrew, Rescale is the company, a Silicon Valley startup, growing, partnering with Dell here. Take a minute to explain what Rescale does, and then we'll get into some of the questions.

Yeah, so at Rescale, we specialize in HPC built for the cloud. With the advent of machine learning and artificial intelligence, especially with ChatGPT,
more and more users are doing machine learning, training neural nets, which draws on a lot of the frameworks that high-performance computing enables. Rescale makes HPC in the cloud very simple for scientific users and engineers: they can get onto cloud resources and be virtually unblocked, using a multi-cloud infrastructure.

You know, the interesting thing is, I want to get your thoughts as we talk about the cloud, with the hyperscalers for instance. What's going on with the hyperscalers? How does your cloud-for-HPC offering differ from the competition?

Yeah, so we do have partnerships with all the hyperscalers as well, but what's unique is that our offering is multi-cloud, built on a very solid infrastructure, and that allows scientific and engineering users to use the most optimal infrastructure of their choice. You never want to be locked in to any particular vendor, because your scientific and engineering workloads can be very specialized. A lot of times, especially with the explosion of architectures (GPUs, CPUs, FPGAs, all different types of vendors and processors), that makes flexibility and choice that much more important.

Yeah, it's supercomputing. You've got super cloud, which we call multi-cloud and hybrid cloud, but it's not really hyperscale. And then super apps, AI-infused, with AI-enabled applications driving a lot of that. So a nice super layer, kind of a stack, is emerging. But I have to ask you, because you're seeing all the pressure points: these apps that are taking advantage of AI are putting more pressure on the hardware, right? You've got GPUs, as you mentioned, and other silicon. What do you guys see as the barriers at this point to harnessing an HPC cloud, for researchers and then the enterprise?

So the HPC stack is very interesting. You have tons of different software ecosystems. For example, on our platform today we have over a thousand ISVs.
You have open source like OpenFOAM, GROMACS, and PyTorch. You also have commercial workloads like Ansys and Siemens, and then custom codes for various use cases. Combine that with different types of schedulers, compilers, and libraries, and then on top of that different architectures from AMD, NVIDIA, Arm, and all the different architectures they provide. Across all of those, we estimate there could be 50,000-plus combinations you can use to optimize your stack. So what Rescale does, in partnership with Dell, is make it all very simple by having all of that preconfigured and pre-tuned. On top of that, we have an AI-powered recommendation engine, so we can recommend the optimal stack for each user.

Sherry, weighing in here on the Dell side, what are you enabling? What's the key to the partnership here?

So Dell is an infrastructure company. We sell HPC servers for people that have data centers. But the most important thing here is how to make it seamless for our customers to shift to the cloud if they want to take advantage of its flexibility. They already have some on-prem systems, and in partnership with Rescale we can make the move so easy that they don't even see it. It's a very seamless path to the cloud, and that's what our customers are coming to us and asking for.

Talk about the portfolio. What are you adding to make the portfolio even better? What's the impact?

I think the impact on the portfolio is very significant. We have talked about the infrastructure side of it.
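As an editorial aside, the stack recommendation Andrew describes, picking the best of tens of thousands of software-and-architecture combinations from benchmark data, can be sketched roughly as follows. This is a hypothetical illustration, not Rescale's actual engine; the software names come from the conversation, but the benchmark numbers and scoring are made-up assumptions.

```python
# Hypothetical sketch: ranking preconfigured HPC stacks by benchmark data.
# The scores below are illustrative relative performance-per-dollar figures,
# not real measurements.

benchmarks = {
    ("OpenFOAM", "AMD"): 1.2,
    ("OpenFOAM", "NVIDIA"): 0.9,
    ("GROMACS", "NVIDIA"): 1.8,
    ("GROMACS", "Arm"): 1.1,
    ("PyTorch", "NVIDIA"): 2.0,
    ("PyTorch", "AMD"): 1.4,
}

def recommend_stack(software: str) -> tuple[str, float]:
    """Return the architecture with the best measured perf-per-dollar."""
    candidates = {arch: score for (sw, arch), score in benchmarks.items()
                  if sw == software}
    if not candidates:
        raise ValueError(f"no benchmark data for {software}")
    best = max(candidates, key=candidates.get)
    return best, candidates[best]

arch, score = recommend_stack("GROMACS")  # → ("NVIDIA", 1.8)
```

A real engine would score far more dimensions (schedulers, compilers, instance types, pricing), but the core idea of selecting from pre-measured combinations is the same.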
I think our server and storage portfolio can address all the needs of the customer, but the impact of this software is mostly the OPEX model for customers that want to take advantage of it. We make sure that everything underneath is tuned and optimized for HPC, while giving you that flexibility: if you don't want to invest up front, if you don't want to buy infrastructure and put it on your premises, you can take advantage of this.

So metering has been important in the cloud, which brings up the issue of tools. What kind of tools does HPC on-demand offer to help customers keep track of metering, expenses, and usage? Cost is huge; if it gets out of control, it's more than leaving the lights on. As I say, it's a lot of money, so watch what you're doing here. The cost could be pretty crazy.

So we have a lot of different policy and administration settings that users can deploy. For example, we have different types of workspaces, and within those workspaces we can add different users. Each of those users can be allocated a specific budget. These can be hard limits, for example $1,000 and then that's it, or they can be a little more open-ended. So depending on your project, your group, as well as the user, you can allocate that. And this is done globally: you can have different environments where you invite users from, say, the US and Canada, APJ, or EMEA, and they can all draw down from the same budget pool, or not. All of that is under the control of the administrators that we enable.

Maybe it makes sense to explain: what does HPC on-demand mean?

HPC on-demand, for us, basically means scientific and engineering users being able to access virtually unlimited compute. What we've seen is that R&D investments are really important.
For every dollar of R&D spend, an enterprise can reap over $40 or $50 of revenue from that investment. So being able to unblock that for scientific and engineering users, for research and development purposes, certainly accelerates things. That's how we get the future of flying cars and automated driving vehicles, right? There's a lot of research out there that's very important, and this will certainly unblock all of that.

I think that's the segue to the hyperscalers. I've heard people say it's difficult, complex, takes a long time to get going, and can be expensive. This is where HPC on-demand differs. How do you compare when you look at the offerings on the hyperscalers versus your unlimited-compute HPC on-demand? How does your performance compare?

Yeah, it's a good point. I think the value proposition we offer compared to the hyperscalers is avoiding vendor lock-in, which is very important for customers. You may want to take advantage of some instances on one CSP one day, and another day the pricing may be better on an instance somewhere else and you want to use that. Replicating all of these processes and workloads from one hyperscaler to another is going to be very challenging for our customers. But with the control plane we have for HPC on-demand, all of that is masked from the customer; you're just interacting with one single unified control plane. That's value one. The other value is that we try to make it easy for our customers. HPC is complex. If you go to the hyperscalers, you basically need to tune and optimize different instances, configure the network, the firewall, all of that. When we compare that with what we can provide with HPC on-demand, we've brought roughly 25 different steps down to fewer than seven or eight.
So it's very easy for our customers to just bring their input, click on whatever the software is, make sure the instances are optimized, and then run their simulation. That's the second point.

Yeah, and you've got the flexibility there. So you think you're competitive with the established cloud service providers?

Yeah, absolutely. On top of being multi-cloud, so that we can optimize the infrastructure, we can also deploy hybrid environments. That gives you the best of both worlds: what on-prem costs can offer, plus cloud-bursting use cases. Essentially, by being able to mix and match between on-prem and various multi-cloud infrastructure, we can be extremely competitive in both performance and cost optimization, depending on what your use cases are.

I think the on-prem cloud is a great position. That's where the demand is, pun intended; HPC on-demand seems to fit that well. The question that comes up, and I'd love to get both of your thoughts on this, is making it easier, more seamless, and even more intelligent, because it has to deliver the AI, deliver the goods, and then connect in a cloud manner. How do you make that easier and seamless?

We start with the end users in mind. As I mentioned, the scientific and engineering user base has different types of codes and software, open source and commercial. We have over a thousand software partners on our platform and over 5,000 versions. A lot of times it's very difficult for these enterprises to manage all their versions, all their software, and their entire stack, and optimize it for the appropriate infrastructure. We do that right out of the gate, with a very simple user interface where you just select your software and where you want to deploy, and essentially just submit and run, right?
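Editorially, the cloud-bursting placement described above, filling fixed on-prem capacity first and sending overflow jobs to the cheapest cloud, can be sketched in a few lines. The capacities and per-core-hour prices here are invented for illustration; the provider names are placeholders, not real CSPs.

```python
# Hypothetical sketch of hybrid cloud bursting: schedule on-prem until the
# cluster is full, then burst remaining jobs to the cheapest cloud option.

ONPREM_CORES = 128                                  # fixed on-prem capacity
cloud_prices = {"csp_a": 0.048, "csp_b": 0.041}     # $/core-hour, illustrative

def place_jobs(jobs: list[int], onprem_used: int = 0) -> list[tuple[int, str]]:
    """Assign each job (given as a core count) to 'onprem' or the cheapest cloud."""
    placements = []
    cheapest = min(cloud_prices, key=cloud_prices.get)
    for cores in jobs:
        if onprem_used + cores <= ONPREM_CORES:
            placements.append((cores, "onprem"))
            onprem_used += cores
        else:
            placements.append((cores, cheapest))
    return placements
```

With a 128-core cluster, two 64-core jobs would land on-prem and a third 32-core job would burst to the cheaper cloud. A production scheduler would also weigh data locality, queue depth, and per-software licensing, which this sketch ignores.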
And all that information, in terms of performance intelligence, will be captured to help recommend better ways to run.

Cloud and HPC go well together. Cloud operations now span on-prem, and you brought up that the edge is coming around too. Andrew, what are you seeing on the AI piece? Because now the data becomes super valuable. You've got the compute, the HPC on-demand, the data, the cloud, on-prem, all the other clouds, and the intelligent edge around the corner. What do you see happening with the data, and where does HPC go when you start bringing all this new AI to the table?

Yeah, so what we're seeing in the field is that customers want us to complete the digital thread. What we mean by that is all the data acquisition, the modeling and simulation data (being able to model and simulate your products), all the way to manufacturing data, and then edge data, so you have a sort of closed-loop ecosystem where you can use the data from the field to bring it back and improve your products. We're definitely working towards that. I don't want to say we have all the answers, to be honest. No one does yet.

Yeah, nobody does yet. But that's what we're driving towards in this partnership with Dell as well.

Yeah, I think a lot of people are really seeing value in the data. We'll see how it all pans out as architectures come in, and it's an architectural play. Sherry, talk about Dell. You've got a partner here. For the folks watching, what should they know about Dell's role in HPC in the cloud? What's the big story?

Sure. As you know, Dell is a very collaborative company. If we don't have an offer internally, we try to partner with other people. In this case, we partnered with Rescale to provide this collaborative solution for our customers. We have that customer-first mindset.

That's important.
And what we see here for HPC on-demand is the value that Rescale is providing for our customers: all those software requirements that customers are asking for. We want the administration, we want to make sure the budget is in good control, we want to see all these different software packages installed. That's what we think this partnership brings to the table, and we're very excited about it.

That brings up a really good example. We've been saying on theCUBE for many years that hardware is really software too; it's a software business running on hardware. And now hardware is back, because people see the processors, they see the GPUs, they see the value, but it's a software game. This has always been the play. How many times have you seen "hey, we're a software company"? It's hardware and software working together; that's the magic of this market right now, and I think this is a great example. Just your thoughts when people say, is it just hardware? Talk about the software involvement. How much software is involved?

Absolutely, you're right on point. We provide the hardware and the software. Behind the scenes in this collaborative partnership with Rescale, we have done more than 50,000 simulation hours just to make sure that all the software going onto the system is completely optimized and tuned for that specific hardware. A lot of engineering work has been done on our side as well as Rescale's side to ensure we have the best software for the best hardware, and this circle goes on and on.

This has been a great conversation; we have to end it there. Andrew, I'll give you the last word for Rescale. What do you see taking advantage of this innovation? What's next for Rescale? What's on your mind right now?

For us, we're seeing an explosion in AI and ML research computing.
So we really want to ride that wave, to be a piece of this action as well as help enable a better future for everything, including all the research being done in life sciences, manufacturing, and technology. It's just an exciting place to be, and we want to do it with Dell, obviously.

Yeah, great relationship. Congratulations to both of you. Thanks for coming on theCUBE's coverage of High Performance 2023, part of ISC 2023. Thanks for watching. I'm your host, John Furrier.

Thank you for having me. Thank you.

Thank you, Sherry. Thank you, Andrew.