From around the globe, it's theCUBE, with digital coverage of Actifio Data Driven 2020, brought to you by Actifio.

Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of Actifio Data Driven 2020. We wish everybody could join us in Boston, but instead we're doing it online this year, of course, and we're really excited. We're going to be digging into the value of data, and how DataOps and the data scientists who leverage data fit in. Joining me on the program is Scott Buckles, the North America Business Unit Executive for Database, Data Science, and DataOps with IBM. Scott, welcome to theCUBE.

Thanks, Stu. Thanks for having me, great to see you.

Let's start with the Actifio-IBM partnership. Anyone that knows Actifio knows that the IBM partnership is really the oldest one that they've had. There's hardware, there's software, there are joint solutions that go together. So tell us about the partnership here in 2020.

Sure. It's been a fabulous partnership in the DataOps world, where we are looking to help all of our customers gain efficiency and effectiveness in their data pipelines and get value out of their data. Actifio really complements a lot of the solutions that we have very well. The folks there, everybody from the top all the way through the engineering team, are a great team to work with. We're very, very fortunate to have them.

Are there any specific examples, or anonymized examples, that you can share about joint wins?

Yeah, I'm going to stay safe and go with the anonymized side. We've had a lot of great wins, several significantly large wins, where we've had clients that have been struggling with their different data pipelines. And when I say data pipeline, I mean everything from understanding their data to developing models and doing the testing on that, and we can get into this in a minute.
Those folks have really needed a solution, and Actifio has stepped in and provided it at several of the largest banks in the world, including one from a very recent merger down in the Southeast. We were able to bring in the Actifio solution and address the customer's needs around how they were testing, and how they were trying to move through that testing cycle. It was a very iterative, very sequential process, and they just weren't doing it fast enough. Actifio stepped in and helped us deliver that in a much more effective and much more efficient way, especially when you get into a bank, or two banks rather, that are merging and have a lot of work to converge systems with one another and converge data. Not an easy task. That was one of the best wins we've had in recent months, and again, going back to the partnership, it was an awesome opportunity to work with them.

As I teed up at the beginning of the conversation, you've got data science and DataOps. Help us understand how this isn't just a storage solution when you're talking about VDP. How does DevOps fit into this? Talk a little bit about some of the constituents inside your customers that are engaging with the solution.

Yeah, so we call it DataOps, and DataOps is in part a methodology, one that tries to bring over the best of how we've transformed application development with DevOps and Agile. Going back 20 years, everything was a waterfall approach. Everything was very slow, and you had to wait a long time to figure out whether the application you had developed was a success or a failure, and whether it was the right application at all. With the advent of DevOps and continuous delivery, and of things like Agile development methodologies, DataOps is converging all of that and applying it to our data pipelines.
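To make that contrast concrete, the iterative loop DataOps borrows from Agile, prepare the data, fit a model, evaluate, and go around again, can be sketched in a few lines of Python. This is purely a toy illustration, not IBM or Actifio code: the "model" is just a mean predictor, and every name here is hypothetical.

```python
# Toy sketch of the iterative data-pipeline loop described above:
# raw data -> business-ready data -> fit -> evaluate -> repeat.

def prepare(raw_rows):
    """The 'raw to business-ready' step: drop unusable records."""
    return [r for r in raw_rows if r is not None and r >= 0]

def fit(train):
    """Hypothetical toy model: predict the mean of the training data."""
    return sum(train) / len(train)

def evaluate(model, test):
    """Mean absolute error of the toy model on held-out data."""
    return sum(abs(model - x) for x in test) / len(test)

raw = [3, None, 5, -1, 4, 6, 2]
data = prepare(raw)            # -> [3, 5, 4, 6, 2]
split = len(data) // 2
model = fit(data[:split])      # train on the first half
error = evaluate(model, data[split:])  # test on the rest
print(round(error, 2))         # 1.33
```

In a waterfall-style process, each of those steps would be a separate, slow handoff; the DataOps idea is to make the whole loop fast and repeatable.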
And so when we look at the opportunity ahead of us, with the world exploding with data, and we see it all the time, it's not just structured data anymore, it's unstructured data too. It's: how do we take advantage of all the data we have so that we can make an impact on our business? But oftentimes we're seeing that it's still a very slow process. A data scientist or a business analyst is struggling to get the data into the right form so that they can create a model, and then they're having to go through a long process of figuring out whether the model they've created in Python or R is an effective one. DataOps is all about driving more speed and more efficiency into that process, and doing it in a much more effective manner. We've had a lot of good success with it.

So it's part methodology, which is really cool, applying that to certain use cases within the data science world. And then it's also partly about how we build our solutions within IBM, so that we align with that methodology and take advantage of it, with AI and machine learning capabilities built in to deliver the speed our customers require. Because data science is great, AI is great, but you still have to have good data underneath, and you have to do it at speed.

Well, Scott, it's definitely a theme I heard loud and clear at IBM Think this year; we did a lot of interviews with theCUBE there. It was about helping with the tools, helping with the processes, and, as you said, helping customers move fast. A big piece of IBM's strategy there are the Cloud Paks. My understanding is you've got an update with regards to VDP and the Cloud Paks, so tell us, what does the new release do?

Yeah, in our 3.0 release that's coming up, you'll be able to launch VDP directly from Cloud Pak, so that you can take advantage of the Actifio capabilities, which we call Virtual Data Pipeline, straight from within Cloud Pak.
So it's a native integration, and it's the first of many things to come in how we tie those two capabilities, those two solutions, more closely together. We're excited about it, and we're looking forward to getting it into our customers' hands.

All right, and that's Cloud Pak for Data, if I have that correct, right?

That's Cloud Pak for Data, yes, absolutely. I should have been more clear.

No, no, it's all right. We've been watching the different solutions that have been building out with the Cloud Paks, and of course data, as we said, is so important. Can you bring us inside a little bit, if you could: the customers, the use cases, the problems that you're helping your customers solve with the solution?

Sure, there are three primary use cases. The first is about accelerating the development process: how do you take data from its raw form, which may or may not be usable (in a lot of cases it's not), and get it to a business-ready state, so that your data scientists, your business, your data models can take advantage of it? It's about speed.

The second is about reducing storage costs. As data has grown exponentially, so have storage costs. We've been in the test data management world for a number of years now, and our ability to help customers reduce that storage footprint, which is also tied to the acceleration piece, is a big part of it.

And the third is about mitigating risk. With the number of data security challenges we've seen, customers are continuously looking for ways to reduce their exposure to somebody accessing production data and manipulating it, especially sensitive data. By virtualizing that data, we almost fully mitigate the risk of somebody, either unintentionally or intentionally, altering that data and exposing a client.
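The storage-reduction point can be made concrete with some back-of-the-envelope arithmetic: full physical clones for test/dev each duplicate the whole database, while virtualized copies share one golden copy plus small per-copy change blocks. The figures below are hypothetical, chosen only to show the shape of the saving, not drawn from any Actifio or IBM sizing guide.

```python
# Hypothetical sizing: 8 test/dev copies of a 10 TB production database,
# where each virtual copy changes about 5% of the data.
source_tb = 10          # size of the production database (TB)
copies = 8              # test/dev environments needing a copy
delta_ratio = 0.05      # fraction of data each virtual copy changes

physical = source_tb * copies                           # full clones: 80 TB
virtual = source_tb + copies * source_tb * delta_ratio  # shared copy + deltas: 14 TB
savings = 1 - virtual / physical

print(f"{physical} TB vs {virtual} TB ({savings:.0%} less)")
```

The exact ratio depends entirely on how much each environment actually changes, but the pattern, one shared copy plus small deltas instead of N full clones, is what drives both the cost and the acceleration benefits described above.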
Yeah, Scott, I know IBM is speaking at the Data Driven event; I read through some of the pieces they're talking about, and it looks like really what you've described: accelerating customer outcomes, helping them be more productive. If you could, what are some of the key measurements, the KPIs, that your customers use when they successfully deploy the solution?

So when it comes to speed, we're looking at how much we're reducing the time of the project, right? Are we able to have a material impact on the amount of time it takes clients to get through a testing cycle? Are we taking them from months to days? Are we taking them from weeks to hours? Having that type of material impact.

The other piece, on storage costs, is certainly looking at future growth. You're not necessarily going to reduce storage costs outright, but are you reducing the growth, the speed at which your storage costs are growing?

And the third piece is really looking at how we're minimizing vulnerabilities. When you go through an audit, internally or externally, around your data, it's understanding the number of exposures and having a material impact there, so that those vulnerabilities are reduced.

You know, Scott, last question I have for you. You talk about making data scientists more efficient and the like. What are you seeing organizationally? Are teams coming together? Are they planning together? Who has the enablement to leverage some of these more modern technologies out there?

Well, that's a great question, and it varies. I think the organizations that we see having the most impact are the ones that are most open to bringing their data science as close to the business as possible.
They're the ones that take their data organization, whether that's the CDO organization or wherever it may sit, even if you don't have a CDO, along with whoever owns the data scientists, and fold them into the business so that they're an integral part of it rather than a standalone organization. I think the ones that weave them into the fabric of the business are the ones that get the most benefit, and that we've seen have the most success thus far.

Well, Scott, absolutely. We know how important data is, and getting full value out of those data scientists is a critical initiative for customers. Thanks so much for joining us; great to get the updates.

Oh, thank you for having me. Greatly appreciate it.

Stay tuned for more coverage from Actifio Data Driven 2020. I'm Stu Miniman, and thank you for watching theCUBE.