Welcome back, everyone, to theCUBE's continuing coverage of SC23. We're talking about all things supercomputing. I'm your host, Rebecca Knight. For this segment, we've got two great guests: John Zawistowski, a global account executive at Sycomp and a CUBE alum. Welcome back to theCUBE, John, or Jay-Z, I should say. Thank you. And Scott Badden, senior solution architect at Sycomp. Thanks so much for coming on theCUBE, Scott. Thank you, Rebecca.

So Jay-Z, I want to start with you. Tell our viewers a little more about Sycomp. I know you're a platinum IBM business partner and you've been around for roughly 30 years. Give our viewers a little more background on your company.

Yeah, that is true. We are a platinum business partner and have been in business for 30 years. We're a global organization, we're actually in 46 countries, and we ship hardware, software, and services to about 150 countries globally. We service a lot of the well-known Silicon Valley companies. Our movement to cloud started about three years ago, when I saw the HPC world moving in that direction.

OK, OK. Now, Scott, I want to come to you. One of the major pain points for companies is that they have these disparate silos of data. Can you describe this problem and explain how you're solving it for clients and partners?

It's been a problem for a long, long time. You have different applications and different shares in different places. People put data somewhere and it's hard to move around and hard to manage. So we try to do two things: make it easy to deploy in the cloud, and tie directly into existing storage, so you're not moving data around and not having to do any of that work manually. We tie it together by moving the data where you need it, when you need it, automatically. OK.
And really, as storage platforms become data platforms, there's a real growing need for high-performance storage solutions that can efficiently handle the demands of modern workloads and the many and various data-related tasks organizations face over the course of a workday. Jay-Z, what are you hearing from your customers about the pain points Scott was just describing?

Well, it's about getting the data in the right place at the right time for the right job. That's really what this is about. Like Scott said, you don't want to move all the data all the time into different places. You want it where you need it, where the jobs need it, where these analytical and AI jobs need it, up front. And that's the beauty of what we've developed: we took the complexity out of it and delivered a managed service, so customers can concentrate on what they really need to do for the business, and not on where their data is and how to get it where it needs to be.

Yeah, that really does seem to be the key: the company can focus on the job while you remove the complexity. Scott, talk about some of the key features and benefits of Sycomp Storage fueled by IBM Storage.

The biggest feature is that it's pre-configured, pre-tuned, and easy to deploy. The number one challenge in HPC storage is getting it up and running, getting it configured, and making sure it's optimized. We've done all that for you. So in minutes, you can spin up a petabyte of storage, run it for hours, days, or years, tear it down, and spin it up again. That's the main value add here: being able to easily manage this. And as Jay-Z mentioned, we're a managed service, so it's not that we just spin it up and let it go. We manage upgrades and maintenance and support for this as well.
Jay-Z, talk about the availability and the user accessibility. Is this available on-prem, cloud, hybrid? How are you helping customers accelerate their adoption of this?

Well, that is the beauty of what we've developed. It can run on-prem, it can run in the cloud, it can run in a hybrid environment. So we can have data in all locations, have one source of truth, and deliver that data to the organization. We can install this on-prem, and I can tell you that several years ago, when IBM was doing this on-prem, it would take a good five to seven days to spin up an environment. So the fact that we can spin up a petabyte worth of capacity in minutes versus days is a huge accomplishment. And on the hybrid piece, there's a function in this software that allows us to not just use our own capacity and our own software, but to actually attach to other, disparate storage vendors and pull that data in where we need it to be. And we can connect to any kind of cloud object storage, pull the data out of cloud object storage, and put it into a high-performance environment. So it's very, very flexible.

And are you hearing from customers that they require this flexibility? What was the driving force behind making the technology work in that way?

Yeah, a lot of companies have already put data in these cloud object storage environments, in these different cloud hyperscalers. And now they're realizing that they still need to get to that data and do some analytics on it, and they'd really like to do it at speed. So this was a perfect match: we didn't put the data there via our mechanism up front, which we can do, it's capable of doing that, but somebody else put the object there, and we're able to pull it back in. And that's a huge, huge win, I think, for everybody.
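The pattern Jay-Z describes, staging objects from cloud object storage into a fast file-system tier on first access, can be sketched roughly as follows. This is a simplified, hypothetical illustration, not Sycomp's actual mechanism: the `fetch_object` helper stands in for a real object-store client (such as boto3 or google-cloud-storage), and the "bucket" here is just a local directory.

```python
import os
import shutil
import tempfile

# Hypothetical stand-in for a real object-store client (e.g. boto3,
# google-cloud-storage). Here the "bucket" is just a local directory.
def fetch_object(bucket_dir: str, key: str) -> bytes:
    with open(os.path.join(bucket_dir, key), "rb") as f:
        return f.read()

def stage_to_scratch(bucket_dir: str, key: str, scratch_dir: str) -> str:
    """Copy an object into the high-performance scratch tier on first
    access; subsequent accesses hit the already-staged copy."""
    local_path = os.path.join(scratch_dir, key)
    if not os.path.exists(local_path):          # only pull once
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        with open(local_path, "wb") as f:
            f.write(fetch_object(bucket_dir, key))
    return local_path

# Demo: fake a bucket with one object, then stage it twice.
bucket = tempfile.mkdtemp()
scratch = tempfile.mkdtemp()
os.makedirs(os.path.join(bucket, "data"))
with open(os.path.join(bucket, "data/sample.bin"), "wb") as f:
    f.write(b"training data")

p1 = stage_to_scratch(bucket, "data/sample.bin", scratch)
p2 = stage_to_scratch(bucket, "data/sample.bin", scratch)
print(p1 == p2)                                 # same cached path both times
staged = open(p1, "rb").read()
shutil.rmtree(bucket)
shutil.rmtree(scratch)
```

The key idea is that the compute job only ever sees the fast local path; whether the bytes were already resident or had to be pulled from the object store is hidden behind the staging call.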
Scott, one of the things we always like to talk about on theCUBE is customer stories, where we can really bring these products, tools, and services to life. Do you have a customer story to share about how this has worked in action and how it is transforming a business or workload?

Yeah, so we had a customer who was migrating into the cloud. They had been running a very large HPC instance on-premises and didn't know how to move it to the cloud. So we helped them deploy using Sycomp Storage. We helped them deploy in the cloud and migrate their real applications, not only the storage, but their applications as well. And now they're integrating with Azure Blob. So we were able to get them from on-premises to the same, actually better in some cases, performance in the cloud, and then move them to the next step of integrating with the new cloud features, things like Azure Blob.

Okay, what other kinds of reactions are you hearing from clients and customers? Jay-Z, maybe you take this one. Oh, okay. That's okay, Scott, you can go ahead. That's all right.

Well, the reaction we've gotten is very positive. The ability to spin this up, on Google Cloud, we have a very easy way to integrate, so our deployment tools are easy to plug into customers' own automation, and we can just become part of their workflow. That's one of the things with Sycomp Storage as well: we know we don't live in a bubble, outside of everything else. We have to integrate and work with all these other applications. And our customers have found that very easy to do because of the way we automated the solution.

Right, right. That is the thing: if you're helping make people's jobs easier, more seamless, and more efficient, you're helping them become more effective at work, too.
So Jay-Z, why don't you tell us a little about how clients and partners can get started working with Sycomp Storage?

Well, that's really the beauty of it: we've put this in the marketplace, in the actual Azure and GCP marketplaces. So you can go there and click on it, and if you're familiar with how to use Sycomp Storage, you can answer the questions, fill out the forms, and deploy your own cluster. Or there's a contact form that ends up in our email system, and we'll contact you and go through what you actually need to do, and what the application and the workload are. So like Scott said, we've got a lot of pre-configured, canned, if you will, instances, so we understand: if you need this much capacity, your configuration will look like this; if you need this much performance, this is the configuration you want to go with. If you want integration with Azure Blob, we've got all of the scripts and everything detailed, so we know what questions to ask you, how to help you fill it out, and how to actually get going.

It sounds like it's a combination of out-of-the-can but also customized solutions.

Yeah, there's a lot of customization, because not every workload is the same. So there are things that need to be tweaked and tuned, but that's actually the beauty of it being a managed service and working with us. You don't just deploy it on your own and say, oh my gosh, I'm not really sure what's going on here. Sycomp is there step by step with you, and even once it's deployed, we're still involved and engaged. We want to make sure that the experience is excellent and that you don't really have to think about your storage environment.

Right, right. Scott, tell me a little bit about what the roadmap looks like from here on out. As you said, these are in the marketplace. Where do you go next? We have a long way to go and a lot of places to go, right?
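The sizing step Jay-Z describes, "if you need this much capacity, your configuration will look like this", could be sketched as a simple lookup, much as a marketplace form might do behind the scenes. This is an illustrative, hypothetical mapping: the tier names, node counts, and thresholds below are invented, not Sycomp's actual catalog.

```python
# Hypothetical sizing helper: map rough capacity/throughput needs to a
# canned cluster configuration. Tier names, node counts, and thresholds
# are invented for illustration only.
CANNED_CONFIGS = [
    # (max capacity TiB, max throughput GB/s, config name, storage nodes)
    (100,  5,  "small",  3),
    (500,  20, "medium", 6),
    (2000, 80, "large",  12),
]

def pick_config(capacity_tib: float, throughput_gbps: float) -> dict:
    """Return the smallest canned configuration satisfying both needs."""
    for max_cap, max_tp, name, nodes in CANNED_CONFIGS:
        if capacity_tib <= max_cap and throughput_gbps <= max_tp:
            return {"config": name, "storage_nodes": nodes}
    raise ValueError("needs exceed canned tiers; requires custom sizing")

print(pick_config(80, 4))     # {'config': 'small', 'storage_nodes': 3}
print(pick_config(300, 40))   # throughput pushes it past 'medium' to 'large'
```

Note that either dimension alone can bump a workload into a larger tier, which matches the interview's point that capacity and performance are asked about as separate questions.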
So we've got the cluster deployment, we've got the performance buckets and attributes, like Jay-Z was talking about: hey, I need this much performance, this much capacity. Where we're moving next is to make it simpler, more automatic, and smarter, and also to make it simpler to integrate with other applications: third-party products like Slurm and LSF and custom schedulers, as well as different types of application workflows and pipelines. There's a lot we can do there to improve the product, and we're constantly working on that.

Okay, okay. So I can't let you go without talking about AI, because it really dominates the conversations we're having, certainly on theCUBE, but also in corporate boardrooms across the globe. From your perspective, what are you hearing from customers about their AI strategies related to data storage, and from your vantage point, are they asking the right questions? What are you hoping to hear more of from customers?

So AI is a key focus these days. It's another workload, something that fits in. From our standpoint, we can support AI workloads, and we're actually looking at using AI in our own product to make it smarter and faster. And so we hear a lot about AI, we hear a lot about training and how you support these models, and that's one of the driving workloads for this application: you need the data, and a lot of the time you're pulling it out of object storage or out of other public libraries and need a place to work on it. So it fits well into the AI model, and it's becoming more and more popular.

Jay-Z? Yeah, that's the buzz, right? It's generative AI, it's LLMs, you know, and as companies actually figure out what that means to them, they're engaging with us to help them figure it out, because AI is only as good as the data that's put in.
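As a rough illustration of the scheduler integration Scott mentions, a Slurm job might simply point its working set at the shared high-performance mount, so the scheduler needs no special awareness of the storage layer at all. The sketch below generates such a batch script; the mount point `/sycomp/scratch`, the partition name, and the node count are hypothetical placeholders, not anything from the actual product.

```python
# Hypothetical sketch: generate a Slurm batch script whose job reads its
# dataset from a shared high-performance mount. The mount point
# "/sycomp/scratch" and partition "hpc" are invented for illustration.
def make_sbatch(job_name: str, dataset: str, command: str,
                mount: str = "/sycomp/scratch") -> str:
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        "#SBATCH --partition=hpc",
        "#SBATCH --nodes=4",
        # the job sees its staged data through the shared mount
        f"export DATA_DIR={mount}/{dataset}",
        f"{command} $DATA_DIR",
    ])

script = make_sbatch("train-model", "datasets/imagenet", "python train.py")
print(script)
```

Because every compute node mounts the same namespace, the same generated script works whether the data was born on the file system or staged in from object storage beforehand.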
So it's making sure that you have the right data, that you're selecting the right data and able to grab the right data from the right locations, feed it into these models, actually run inference, and get the right answer out. It's a journey, right? AI is definitely a journey for every corporation out there, and as it grows and as people understand it more and understand how to use it more, you see it in the healthcare industry especially. It's really helping them find answers faster, and that's really what our storage helps that analytic model do: actually get to that answer in a quicker fashion.

Right, and you mentioned the healthcare industry as a really exciting use case for AI, but we're also seeing a lot of troubling and frankly worrying scenarios. What keeps you up at night in terms of the emergence of generative AI and other AI technologies?

Well, you know... Do you want to take that one? Yeah. In the industry, it's really interesting, because you're going to have bad actors no matter what you do. We have to be good stewards of how we use the AI environment and how we move forward. So, yeah, unfortunately, there are bad actors and there always will be. If you look at just the cyber issues we have, all of the ransomware issues that happen, they now happen more rapidly because attackers are using AI and have gotten more sophisticated as things go on. So it's a question of how we curtail that, bring it back in, and make sure that what we're doing doesn't feed that side of it, but helps instead.

Excellent, excellent. And Scott, I'll give you the last word here. We're now at the end of 2023, with frankly only a couple more weeks left in the year. What are you going to be focusing on in 2024 and beyond? You talked a little bit about the roadmap earlier, but what is the main focus for Sycomp this year?
For Sycomp Storage, I think in 2024, like I mentioned before, it's going to be tightly integrating with third-party tools and making the whole experience seamless. I think we've got the storage part of the experience down, and it's working really well. Our next step is to take it to the next level and allow it to integrate with third-party applications and other ecosystems more seamlessly. I think that's our big focus for 2024.

Sounds like a good one. Excellent. Well, thank you both so much, Scott and Jay-Z, for joining us on theCUBE. Thank you so much. Thank you, Rebecca. And I hope you will stay tuned for more of our ongoing coverage of SC23. I'm Rebecca Knight. Stay tuned for more of theCUBE, your leader in enterprise technology coverage.