Hello, and welcome to IBM's Beyond Firewalls: Resilience Strategies for All, focused on the importance of cyber and data resilience. I'm Rob Strechay, managing analyst with theCUBE Research. Today I'm joined by Ram Parasuraman, executive director with IBM. Welcome, Ram.

Good to be here.

I'm glad to have you here because this is near and dear to my heart, it being cyber resilience month here in October. I think there's a lot people can learn from what IBM has been doing; it has a rich history, and you and I both have a lot of experience in this industry as well. So let's jump into some of the practical aspects of what's going on in cyber and data resilience. Why don't we start with what's critical in the role you're playing at IBM and what you're up to?

Yeah, for sure. I'm responsible for end-to-end data resiliency. As you know, one of the primary targets for the bad guys, the bad actors, is the data. So what does it take to deliver end-to-end data resiliency? We've heard this quite a bit, but what does that entail, and how can we bring it to market? That's my primary responsibility: product management for data resiliency.

Yeah, it's such a huge amount of effort, and the attack surface is just huge from a data perspective, especially with ransomware and the other cyber threats coming to fruition at this point in time. There are so many different vectors that bad actors are taking, and at the same time, some of the most successful are leveraging social engineering and other techniques outside of IT, and then they're able to get in. So, because you're out there talking to customers quite often, what are some of the key issues you're seeing in the cyber resilience landscape?

If you look at it, ransomware is the type of attack that grabs the headlines. It's just so pervasive, right?
So what I like to tell our customers is that ransomware is like a bachelor's degree. It's a must-have; all attackers are well versed in what ransomware is. The reason it grabs the headlines is that it's so popular and voluminous. But attackers are not sitting still with ransomware; they're moving beyond it. Today, exfiltration, stealing data and leaking it on the dark web, is more commonplace than it was before COVID began in 2019. Those are the master's degrees, the PhDs, if you will. So the attackers are not just getting more sophisticated in the kinds of attacks they cause; they're also doing it faster and causing more damage.

I think that's key: the speed at which these attacks take place has just increased tenfold. And we all love AI; if you don't say the word AI in something, you get a bad mark on the interwebs. So how is it that we can keep up with that speed? Because the emails that used to start ransomware attacks now look, and are formatted, really nicely, straight out of LLMs and the like. How can organizations keep up with the speed of these attacks?

Yeah, this reminds me of a scene from Top Gun: Maverick. The bad guys have the same manuals we do, so you've got to throw away the conventional wisdom, if you will. It's interesting you mention AI. Just as we have access to AI to help with detection, the bad guys have access to the same AI to help cause attacks faster. So it's about how you use and harness the tools. There's research from IBM stating that attacks that used to take attackers 60 days to execute now take less than four days. So you figure the math, right?
Whereas on the response side, it still takes us roughly two to three weeks to mount even an initial response to these attacks. So we are not moving as fast as the bad guys are. That says something about the impedance mismatch between how fast those guys are causing attacks and how pervasive those attacks are to the enterprise.

Yeah, it's just crazy, because when you look back at how fast they're moving and how they're using all the same tools, I think ultimately AI will give the defenders a better chance. From our perspective, the organizations we talk to see it as a way to level the playing field a little and help upskill their defenders, especially at small and medium-sized businesses, where they may not have as many people, or where one person is wearing multiple hats. What are you seeing, and how are you trying to help organizations come up to speed or gain that speed?

Yeah, when you talk about defenders, the mental picture is that they're all united, standing in formation and sharing intelligence. But that's not what you see in practice in enterprises. What's hurting the most are silos: between teams, between products, and between intelligence. There is not that sharing. Data teams, AI teams, storage teams, and security teams are all disparate teams within the organization, with no clear collaboration between them in the face of an attack. What happens, and why it takes us so much longer to respond to attacks than it takes the attackers to cause them, is that we are not able to share information and collaborate so that we get to a recovery plan soon enough. So one of the key things we need to do as an industry is to break down these silos.
Because the longer these silos exist, the longer it's going to take us to respond to those attacks.

Absolutely. And what I look at is that it has to be more than, hey, I call, or I Slack, or I Teams, or whatever method of chat you're using; it has to be integrated into the products they're using. You're on the product management side. What are some of the things you're bringing to the table to help organizations get a handle on that and come up to speed?

Yeah, it really boils down to three things, Rob. The first is detection. The earlier you detect, the better. But it's not just about saying, I detected something; it needs to be high fidelity, meaning you can say, highly reliably, that this is an attack. Because there is already an alert deluge: security teams deal with trillions of alerts a day and don't know what's credible and what's not. So you want the alerts to be accurate and high fidelity, in addition to being early. We'll talk about that a little more.

The second, and this is very important, is safe recovery. The last thing you want is to recover some data and then be attacked again, or to lose a lot of data by being too cautious and inconvenience your customers. So safe recovery is extremely important, and we'll talk a little more about that too.

The final one is integration with the existing workflows between security and storage. It's not about one team detecting an attack and another team being called in to the rescue while those teams are not on the same page. It's about how we can make them more collaborative.
Not just during wartime, when an attack is underway, but also during peacetime, as a discipline, just to improve the rapport between these teams and get them working together.

Yeah, I think that's totally key; all three of those are key. And is this where Defender comes in, and how it helps organizations get a hold of that?

Yeah, IBM Storage Defender was founded pretty much to solve these problems. It's really about bringing together all of the technologies that exist today but don't work with each other, across your data estate. One of the primary boundary conditions we set is that you cannot exclude any part of the data. People have conventionally talked about this as, hey, I'm a backup company, I look at backup data, I ensure a backup is immutable, and I can bring a backup back when you need it. That's one half of the problem. But on the other side, there are the primary critical workloads stored on arrays and flash storage, which are handled by a different team with a different set of tools. When it comes to an attack, you just don't know where the recovery is going to occur from. Is it primary hardware snapshots? Do you need a backup? Or is it a combination of the two? This is where recovery tends to be different from what we're used to. We've been doing DR, disaster recovery, for ages, but recovering from a cyber threat is fundamentally different from recovering from a disaster. It needs precision. It needs piecing together the evidence and knowing what's safe to recover from. And it's bringing these things together to help customers recover from a threat.

Yeah, let's unpack that. What does good detection look like?
And how do you really accomplish good detection?

Yeah, let's start with detection. It's about leveraging the collective intelligence from all the sensors you have out there. Some of these sensors are hardware based. For example, we have FlashCore Modules that can detect, inline, corruption of data as it's written. Now, is that by itself going to be an accurate indicator of a cyber threat? No. So you take that as one sensor, much like one sensor in a home alarm system. Then you have detection based on file systems, where, from the reads, writes, and the I/O going on, you can credibly detect that something is off. And then of course your backups, where, based on metadata and other analysis, you can say with more credibility that something else could be going wrong. But think about this: each one by itself only tells you part of the story. The complete story, and the higher-fidelity detection, comes from inferring across them. This is where AI comes in, to our earlier point: we're now able to say, I'm seeing three sensors fire together over a certain timeline, and that credibly indicates risk to your data.

Yeah, and what's powerful is that IBM has the technology and has had it there. AI is not new; I know it's new in the buzz, but AI in modeling and machine learning has been around for a while. It's more about how you apply it, because to your point, it's understanding what the place is that I should go and recover from. So, we've got detection; I've figured out what's going on. Now I've got to figure out where to go back and actually recover from. How does that work?

Yeah, this is where companies spend so much of their time. Okay, I've got an attack, right?
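The multi-sensor inference described here can be sketched in a few lines. This is a hypothetical illustration, not IBM's actual detection logic; the sensor names, per-sensor confidences, time window, and thresholds are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor: str        # e.g. "hardware", "filesystem", "backup"
    timestamp: float   # seconds since some epoch
    confidence: float  # 0.0-1.0 confidence of this single sensor alone

def correlate(events, window_seconds=3600, min_sensors=3):
    """Raise a high-fidelity alert only when several *distinct* sensors
    report suspicious activity within the same time window."""
    events = sorted(events, key=lambda e: e.timestamp)
    for i, anchor in enumerate(events):
        in_window = [e for e in events[i:]
                     if e.timestamp - anchor.timestamp <= window_seconds]
        sensors = {e.sensor for e in in_window}
        if len(sensors) >= min_sensors:
            # Combine per-sensor confidences: probability that at least
            # one of the correlated signals is a true positive.
            miss_all = 1.0
            for e in in_window:
                miss_all *= (1.0 - e.confidence)
            return {"alert": True, "confidence": 1.0 - miss_all,
                    "sensors": sorted(sensors)}
    return {"alert": False}
```

Any one sensor here stays below an alerting threshold on its own; only the agreement of independent signals within the window produces the high-fidelity alert the conversation describes.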
The attack has already taken place or is in progress. These attackers, as we spoke about, are rapidly expanding their blast radius. And in that time, what's happening on the recovery side is that teams are going piece by piece, copy by copy, trying to figure out: is this copy safe or not? Okay, shall we go to the N minus one, the N minus two point in time?

Right, the known good.

The known good. There is no prefetched known-good copy, like the prefetchers in a cache, that you have as a baseline. Imagine if you had that and could say, okay, I'm aware the attack has taken place, and, by the way, I already have a baseline we can start testing and get started with the recovery. That's where we're trying to go: from a place where teams are stepping through copy by copy to a holistic place where you can not only orchestrate across all the copies you have, at different points in time and with different levels of granularity, but also say, as a baseline, I know this copy is safe, it has been tested safe, so I can take it to my clean room, start testing, and then start recovering.

Yeah, I think that's a very important piece in gaining confidence across those silos we were talking about, because when teams start sharing data and have the same methodologies to understand, okay, here's my known good and how I determined it, it also helps smaller companies that may not have as many people know where to start. It's a force multiplier for them as well.

Absolutely. And if you think about the earlier comments we made about DR versus cyber recovery, it's like a home being invaded: what's safe for one application may not be safe for another. So you need this notion of a safe copy per app, per workload.
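A minimal sketch of the prefetched known-good baseline idea: validate snapshots continuously in the background so a verified-clean restore point is always on hand, rather than stepping copy by copy after the fact. The snapshot catalog and the clean-room check here are hypothetical placeholders, not a Storage Defender API:

```python
def latest_known_good(snapshots, is_clean):
    """Walk snapshots newest-first and return the most recent copy that
    has already been verified clean: the baseline recovery starts from."""
    for snap in sorted(snapshots, key=lambda s: s["taken_at"], reverse=True):
        if is_clean(snap):
            return snap
    return None  # no safe copy found: escalate and widen the search window

# Illustrative catalog: 'clean' stands in for a prior clean-room verification.
snapshots = [
    {"id": "n",   "taken_at": 300, "clean": False},  # taken post-attack
    {"id": "n-1", "taken_at": 200, "clean": False},  # encryption in progress
    {"id": "n-2", "taken_at": 100, "clean": True},   # verified in clean room
]
baseline = latest_known_good(snapshots, lambda s: s["clean"])
```

The point of the design is that `is_clean` has already been answered during peacetime, so the newest-first walk returns immediately instead of kicking off forensic testing per copy under attack pressure.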
Because at the end of the day, it doesn't help just to say I have this copy that I know is safe and hand it over to my data team to go recover; if the application doesn't work, it's not going to help you at all. That notion, that what's safe for one app may not be safe for another, is the fundamental difference between a disaster and a cyber threat, because the extent of the impact can vastly vary depending on the type of attack.

Yeah. We had some of your colleagues on earlier, and by some of the research you've done, it was something like 300 days on average to know that somebody had been in there. To that point, it's one of those things you figure out when it's too late, but you want to stop that. You're never going to get day zero fixed, but you want to not have another day one and go through that. So how does what you're bringing to bear help break down those silos between SecOps and the storage and data teams, and how does data recovery play into breaking down those silos?

Yeah, that's a great question. Traditional security has been perimeter, endpoint, and network focused: all within the boundaries that enterprises have had for decades. But come the age of COVID and the digital transformation we've been talking about for a decade now, those traditional definitions disappeared. We no longer have a strict boundary or perimeter; it's really become more about the data. So what is it that the security teams don't have in their tools? You have XDR, extended detection and response. You've got CASBs, cloud access security brokers. You have a bunch of other posture management tools and whatnot. What they all lack is an awareness of the sensitivity of the data.
Security teams are grappling with all these events affecting the network or the endpoint. What storage can give them, and this is where the whole category of cyber storage comes in, is a prioritization framework based on the data, on the criticality of the data. When you add this layer to the XDR, these teams start trusting each other better, because instead of the trillions of alerts they have, security teams are getting the few alerts they can respond to because those impact the company the most. So, long story short, the key point is that we have to fit within existing workflows, because security teams have standardized theirs. They have playbooks they recover with using SOAR, security orchestration, automation, and response tools. You don't want to recreate those workflows and create something out-of-band. You want to fit within the existing workflows so that teams are not adopting something new; they're incorporating intelligence into something they already use.

Right, and it's bringing to bear all of the different things. You're not just looking at the endpoints; you're actually looking at what's critical in the data, which I think is the key, because not all data is created equal when something has to go back or be offline. Having a background in DR: at the financial institution I was at, we had over 500 different applications, and out of those, 80 were business critical, and out of those, there were eight that, if they were offline for any amount of time, meant we were either getting fined by the SEC or going to have very unhappy customers who couldn't get to their data. So I think that really is key: bringing that back together with SecOps and the data teams, understanding how they play together, with storage playing that critical role. Where do you see this evolving?
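The prioritization framework described here could look something like this sketch: enrich generic security alerts with the criticality of the data they touch, so the deluge collapses to the handful that matter. The criticality tiers, asset names, and alert shapes are invented for illustration:

```python
# Assumed data-classification map, per asset; in practice this would come
# from a storage-side sensitivity inventory, not a hardcoded dict.
DATA_CRITICALITY = {
    "payments-db": 10,      # regulated workload, fines if offline
    "customer-portal": 7,   # customer-facing
    "test-env": 1,          # disposable
}

def prioritize(alerts, top_n=2):
    """Rank alerts by the criticality of the data asset they affect and
    keep only the top few, dropping alerts on unknown or trivial assets."""
    scored = [(DATA_CRITICALITY.get(a["asset"], 0), a) for a in alerts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [a for score, a in scored[:top_n] if score > 0]

alerts = [
    {"id": 1, "asset": "test-env"},
    {"id": 2, "asset": "payments-db"},
    {"id": 3, "asset": "customer-portal"},
]
urgent = prioritize(alerts)  # payments-db first, then customer-portal
```

In the integration the conversation calls for, a list like `urgent` would be fed into the SOAR playbooks the security team already runs, rather than surfaced in a separate side-band tool.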
Where do you see your tooling evolving from here?

I think what you just mentioned brings us to the minimum viable company, which is basically asking: what are my critical workloads, and how recoverable are those workloads? Even giving companies that visibility matters. Today, nine out of ten companies really don't have an idea of how recoverable their most sensitive data is. There are two questions to pose there. One is, do you know where your sensitive data is? And the second is, do you know how recoverable it is? You mentioned the SEC, and that's important, because the SEC now has regulations requiring all companies to report cybersecurity incidents via a Form 8-K. That brings accountability as a boardroom discussion. It's no longer an option; it's a mandatory discussion in the boardroom, and it has to be documented.

So the future of Defender, if you ask me, or even the present, is the same values. First, how do we build even better detection with even higher fidelity? Second, how do you recover, and recover your minimum viable company, sooner? We want to cut that time short. It can't take three weeks for a casino to get recovered, or, even worse, for a hospital; it's very critical if your operations can't proceed for two days, or even two hours. So shortening the amount of time it takes to get the minimum viable company up is something we are going to be maniacally focused on. And what this takes is sharing intelligence across the different sources and across vendors. That's why we say data resiliency is a team sport, and why we will be working with companies and friends across the industry to collaborate with the intelligence we have. This is no different from 9/11 or other threats, right?
Threats that faced the nation, for example, where teams were required to share intelligence and come together so that you outsmart the bad guys. And deeper integration with SecOps workflows and existing tool chains, so that you're not reinventing those workflows.

Absolutely. Well, I want to thank you for coming on and really digging in, because this is such a critical piece of how organizations, big and small, are going to build this out. They need to have this, and to your point about the minimum viable company, to understand it. The automation, and making it simpler for companies to understand and take advantage of, is unbelievably critical to that. So where should they go to find out more about what's going on, to understand Defender and get more information?

Yeah, one of the easiest things is just to go to our website and look for IBM Storage Defender; I'm sure the link will be in the show notes. The other is to get started with a cyber resiliency assessment. This is free from IBM for any customer that's interested, where we analyze: what is your current posture like? What could you be doing? What are some simple steps you can take? Because building resiliency is like building muscle; it's strength training. You don't start overnight and go lift 100 kilos; that doesn't happen in one night. It's a journey, and you begin it by knowing where you stand, and then you take the steps that progressively improve your resiliency. And then, of course, if you're interested, dive into Storage Defender and you'll see what the demo can do for you; I'm sure there will be something compelling there. We're at the beginning of the journey, and there's much more to go. The bad actors are not sitting still; they're moving faster, so we have got to move even faster.

Yeah, I agree. And we'll put the links to that in the resources tab down below as well.
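The two minimum-viable-company questions raised earlier, do you know where your critical workloads are and do you know how recoverable they are, can be sketched as a simple per-workload check. The workload names, RTOs, and tested recovery times below are hypothetical numbers for illustration only:

```python
# Illustrative inventory: each workload carries its recovery-time objective
# (RTO) and the recovery time actually achieved in the last peacetime test.
WORKLOADS = [
    {"name": "payments", "critical": True,  "rto_hours": 2,  "tested_recovery_hours": 1.5},
    {"name": "trading",  "critical": True,  "rto_hours": 4,  "tested_recovery_hours": 6.0},
    {"name": "wiki",     "critical": False, "rto_hours": 72, "tested_recovery_hours": 10.0},
]

def mvc_gaps(workloads):
    """Return the critical workloads whose last *tested* recovery time
    misses their RTO: the gaps to close before an attack forces the issue."""
    return [w["name"] for w in workloads
            if w["critical"] and w["tested_recovery_hours"] > w["rto_hours"]]

gaps = mvc_gaps(WORKLOADS)  # trading misses its 4-hour objective
```

The discipline the conversation advocates is running this check during peacetime, so the boardroom answer to "how recoverable are we?" is a measured number rather than a guess.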
Well, I want to thank you again.

Thank you, Rob. This has been fun.

Yeah, thank you. And remember, you can stay up to date with all things cyber and data resilience by visiting siliconangle.com. Thank you for watching this episode of IBM's Beyond Firewalls: Resilience Strategies for All on theCUBE, the leader in high-tech enterprise analysis and coverage. Thank you, and stay tuned.