We're back live at our studios in Palo Alto. TheCUBE, Dave Vellante with Rob Strechay at the IBM Storage Summit. Ian Shave is up next, and we're going to dig into designing data resilience at the core of a system. Ian, thanks for coming on theCUBE.

Hey, thank you for having me.

You're very welcome. All right, what does it mean to have data resilience at the core? You guys talk about that a lot. Tell us.

Yeah, so it's ensuring we've got resilience built in at all levels. There are some obvious things to think about, like data encryption, but I really want to break it down into two pieces: securing the system and then securing the data. We've got to make sure we're not giving bad actors easy access to the system to start playing around with it. Clearly we want to avoid that. We want to make sure the right personnel are involved, so it's things like multi-factor authentication and two-person integrity, so we can be sure that those who do have access are the right people, they know what they're doing, and they've only got access to the tasks they need. I think those are the fundamentals. Then, beyond making sure the system is secure, how do we make sure the data is secure? For example, you're going to have copies that people can't get rid of, which gives you a recovery capability.

Yeah, we had just been talking to some of your ecosystem partners, and what was really interesting was the discussion about how you bring together the storage virtualization software with the FlashCore Modules. How does that play together to give your customers an advantage here?

So when it comes to data resilience, fundamentally what we've been building is ensuring that we secure the data.
We want to make sure that, as I said, you're going to have copies of data you can come back from. But the new thing we've added is that we can discover when threats are actually getting in. And I think this is the great combination of both the software of the array and the capabilities we've built into our FlashCore Modules. We've got so much intelligence in there that we can alert the customer potentially within minutes of a threat getting into the storage array, and clearly that gives them a huge advantage. When threats come in, one of the biggest issues is the customer not knowing: the ransom demand, or everything stopping, tends to be the alert mechanism. We want to give them much better visibility. The moment data is being encrypted or corrupted, or anybody's playing with it, we want to let them know really, really quickly. So that's where we start bringing it all together: we're protecting the data, but also giving them the traffic lights, if you like, so they know when a threat is coming in and can take action on it. And if you take that integration further, we can then integrate that into things like cybersecurity tools, so the whole alerting and automation piece becomes really, really simple.

Talk about intelligence in the system. Presumably there's not a person inside the system; it's machine intelligence. Is that AI? What is that intelligence?

If there's a person in there, I haven't met them yet. And boy, must they be small and on a very carefully controlled diet, I think.
But yes, there certainly is an awful lot of AI in this, both in the system itself and in what we're doing with the metadata, which we send to machine learning in the cloud. So we can leverage AI, leverage watsonx, to be sure we understand the patterns of these threats and can let customers know really quickly, as well as flag other issues that may arise within the system.

Obviously you want this to be as close to real time as possible. So how do you deal with the latency problem? If you're doing the machine learning in the cloud, how do you compress the time to action?

The great thing is, because we've got these FlashCore Modules, it's computational storage. We've got great processing power literally down at the flash level, and those modules are just collecting metadata. It was a paramount criterion that when we do these things, it doesn't impact performance at all. That's why we send the metadata we collect up into the cloud: we do the processing in the cloud, not on the production system, so we don't introduce latency to the production systems.

And this capability exists because you have your own chips, right? It's the proprietary nature of the chips you've built into the flash modules.

Absolutely, and that's what allows us to do it so incredibly quickly, because we've got the intelligence and the processing capacity within those drives. We can do it in one of our arrays with industry-standard drives, without the FlashCore Modules, but detection isn't going to be quite as fast, because again, the key remit is: don't slow the system down. That's paramount. I haven't met a customer yet who has said, "Yes, please slow my production systems down."
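The design Ian describes — cheap statistics gathered on the drive, heavy ML analysis pushed off the I/O path — can be illustrated with a minimal sketch. This is a hypothetical illustration, not IBM's actual implementation; the function names, queue size, and batch size are all invented for the example:

```python
# Hypothetical sketch: keep the I/O hot path free of heavy work by recording
# only cheap per-write statistics, then shipping them to a cloud analytics
# service from a background thread, off the production data path.
import queue
import threading

stats_queue: "queue.Queue[dict]" = queue.Queue(maxsize=10_000)

def record_write(block_id: int, compress_ratio: float) -> None:
    """Called on the write path: O(1) and never blocks.
    If the queue is full, drop the sample rather than add latency."""
    try:
        stats_queue.put_nowait({"block": block_id, "ratio": compress_ratio})
    except queue.Full:
        pass  # losing a telemetry sample is cheaper than slowing a write

def send_to_cloud(batch: list) -> None:
    """Stub for the off-box call (e.g. an HTTPS POST to an ML endpoint)."""
    pass

def uploader(batch_size: int = 100) -> None:
    """Background consumer: batches samples and ships them off-box,
    so analysis latency never appears on the production I/O path."""
    batch = []
    while True:
        batch.append(stats_queue.get())
        if len(batch) >= batch_size:
            send_to_cloud(batch)
            batch = []

# Usage: threading.Thread(target=uploader, daemon=True).start()
```

The key design choice mirrors the interview: the write path only enqueues and never waits, so the analytics pipeline can be arbitrarily slow without adding production latency.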
So we are avoiding that at all costs.

Does having your own IP inside the system give you a cost advantage that potentially offsets some of the expense of creating it? How do you balance — I guess specifically my question is — the increased cost, and maybe there isn't an increased cost, but there's certainly increased value there, with the need to be price competitive?

Yeah, the great thing is that the spare compute capacity within the drives is what has allowed us to do this kind of data checking without any impact; we already had some spare processing headroom in there, which was great. From an economics perspective, of course, we're leveraging QLC-based flash within these drives as well, but we're doing it in such an intelligent way that we're not suffering the wear issues or the performance issues, while it allows us incredible density and keeps the cost down. And I think that's a key driver: the computational piece clearly helps with data resiliency, but it's also helping with clients' sustainability initiatives. We're keeping power consumption down and density really high, so they don't need reams and reams of rack space to hold all the storage they needed in the past.

Yeah, and I think you just hit on a very important topic that we haven't really touched on, which is sustainability. Being over in the UK and Europe, you must see this from a lot of the different customers you have over there, who are looking to you to help them get a handle on the sustainability aspect. Is this technology a key way you're helping them with that?

Definitely, and you're absolutely right. Over here in Europe, sustainability is a big thing. Bluntly, there are some areas around Europe where energy supply is a very real concern. Are they going to get enough?
Never mind the cost. I can tell you from my own personal consumption at home, my costs have gone up probably three-x over the last two to three years. So there's a very real cost issue, and a very real concern about supply: can I actually put any more in this data center? Am I going to get enough energy? So sustainability has become a really hot topic. And it varies by customer. I remember one customer I met with — when we mentioned sustainability, they said, "I'm really interested in how you can reduce floor space and power. I'm not that worried about your recycled cardboard." So it does mean different things to different people.

You mentioned earlier that you had this additional headroom, so that you could inject all this intelligence into the system without taking a performance hit. Thinking about the future — data-driven apps, the explosion of data; we've been talking about the explosion of data since the beginning of computer time, and now the curve is bending — my question regarding the roadmap is: how much headroom do you have to continue to accommodate the needs of your customers?

Well, of course, as we go through each generation, if we need more headroom, we'll factor it in. But I think you're right that the curve has changed. In the past there was always going to be twice as much processing power for about the same money. We all know that curve is long gone, unfortunately for us and unfortunately for our customers. But drives certainly are getting more and more efficient, and leveraging things like AI is allowing us to do an awful lot of smarter things.
I mentioned the discovery capability — going back to data resilience for a second — we're already potentially detecting threats within minutes of them getting into the storage, and we're pushing and driving to get that detection down to a second. So we're definitely going to continue to drive that: leverage more processing capability and more kinds of metadata measurements, so that we become not only more accurate but also cover a greater scope of the types of threats a customer might be subjected to.

This might be out of your swim lane, but you were earlier referring to, I presume, Moore's law and how that's tailing off — the doubling of transistor density every two years, essentially doubling performance. Having said that, there's this new dynamic going on where you have these alternative processors — the GPU, the CPU, the NPU, different accelerators — and when you combine those together, you're actually blowing away the historical Moore's law curve. How do you think about that? Presumably that's a huge advantage for customers?

Yeah, and I'll probably let my colleague Andy Walls, who I think you're talking to later, get into that in a little more detail, because I'm sure he'll be able to take you to the nth degree on it. But clearly the team is always looking at the best technologies for us to leverage. It's the balance of how we increase performance without exponentially increasing cost. That's always the tricky part, because, as I said, at one point, when you got double for the same money, that was relatively easy. We're trying to strike a difficult but very careful balance right now.
I'm not finding a lot of customers saying, "I would really love to spend twice as much as I did before." In fact, they're looking for us to help reduce costs, which is why we keep trying to increase the density of the drives: to bring the cost down that way. And then we look at the best technologies to leverage to deliver the performance they need, whilst also helping them get the protection they're going to need, because to me resilience is really the hottest topic. Maybe around Europe sustainability can be a bit hotter, but I don't think anyone can avoid the issue: it's inevitable that they're going to get attacked at some point, and how are they going to cope with that? Particularly when, in Europe, there are a lot of new regulations coming into effect, which is clearly directing them slightly differently as well. And we've heard of some that are potentially going through Parliament in Canada, so these regulations are spreading around the globe.

Fair enough on your points there. Your real job is to help customers get as much value as they can and apply technology to create business advantage. So what are they telling you in terms of real-world experiences and the value they're receiving? Can you help us understand that — quantify it, or at least frame it?

Yeah, sure. A lot of it is that clients are always looking at how they can run things more efficiently, and efficiency can mean different things to different people. But for a lot of them it means, to put it bluntly: can you make this run faster, because that makes my applications run better?
It means I can make decisions faster; it might mean I can complete my batch runs, which means I can get my trucks out in the morning — all those good things. So those things definitely come into it, but as I alluded to, the thing I'm being asked probably 70 to 80 percent of the time right now is: how can you help me with this ever-increasing threat? Actually, I'm going to correct myself on that a little bit. Once we talk to customers about it, they definitely want to know more. We're seeing a lot of customers out there who still don't truly understand the exposure they potentially have. But once we explain the value of what we can bring to help them cope with and recover from cyberattacks, that suddenly becomes one of the top topics for them. Other than that, it's: how do you help me reduce costs, both operational and capital? How do you help me improve the performance of my environments?

Yeah, and I think you touched on it just a minute ago, and it ties in with cyber resilience: regulations and privacy. What are you seeing? Being in the European theater, they're definitely further ahead than the US is from a regulation perspective, with GDPR and things of that nature, even though here we have CCPA in California, which is similar but not pervasive. What are you seeing when it comes to the tie-ins between privacy and cyber resilience with the customers you're talking with every day?

So I think the key driver with a lot of these European regulations is what they talk about as operational resiliency. There's a regulation called DORA for financial organizations, another one called NIS2 for other industries, and even in the UK there's the Financial Conduct Authority, for example.
And I know they've already been imposing fines on some organizations that have not complied with the regulations. For any organization even outside that area, look at what happened with GDPR: it started in Europe, but organizations soon realized that if they wanted to trade with the European Union, they had to comply as well. So we're expecting a similar thing with these new cyber resilience guidelines. Fundamentally, it's all about ensuring that they can still trade. That's really what's behind it all, because you can imagine the damage to an economy if banks can't transact with each other, or if manufacturers suddenly can't manufacture; the impact on an economy could be huge. And that's the key driver we're hearing about in other areas around the globe. There's very much a realization that this is something everyone's got to think about in a lot of detail: how do you ensure that you don't have critical industries or critical companies suddenly non-operational for days? That's obviously not going to help any economy.

Well, Ian, I love this discussion about real time, or near real time, and I love that we can bring you from England into our live studio in real time. Really appreciate your time today.

Thank you. My pleasure.

And Ian mentioned Andy Walls. Last week I had the opportunity to sit down with Andy from our Boston studio, and we got into it. The word of the month is entropy: this randomness of data. With all this AI creating all this new data, entropy is winning. How do we handle that? How do we combat it? How do we detect data corruption in real time? Keep it right there. This is the IBM Storage Summit; you're watching theCUBE.
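To make the "entropy" idea concrete: one common way to spot data being encrypted by ransomware is that ciphertext looks statistically random, so its Shannon entropy approaches the 8-bits-per-byte ceiling, while typical business data scores much lower. This is a generic illustration of that principle, not the detection logic IBM actually runs in its FlashCore Modules:

```python
# Generic sketch: Shannon entropy as a cheap signal for encrypted/random
# data. Real detectors combine many signals; this shows just one.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: ~8.0 for encrypted or random
    data, noticeably lower for typical text or structured files."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# English text sits well below the ceiling; random bytes approach 8.0.
text_score = shannon_entropy(b"hello hello hello hello")   # low
random_score = shannon_entropy(os.urandom(65536))          # near 8.0
```

A detector would track the entropy of incoming writes per volume and alert when it jumps from a historical baseline toward 8.0 — the kind of "traffic light" signal discussed in the interview.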