From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Welcome to theCUBE and this special IBM Brocade panel. I'm Lisa Martin, and I have a great opportunity here to sit down for the next 20 minutes with three gentlemen. Please welcome Brian Sherman, a distinguished engineer from IBM. Brian, great to have you joining us.

Thanks for having me.

And Matt Key is here, Flash Systems SME from IBM. Matt, happy Friday.

Happy Friday, Lisa, thanks for having us.

Oh, our pleasure. And AJ Casamento, solutioneer from Brocade, is here. AJ, welcome.

Thanks for having me along.

And AJ, we're gonna stick with you. IBM and Brocade have had a very long, you said about 22-year, strategic partnership. There's some new news in terms of the evolution of that. Talk to us about what's going on with Brocade and IBM, and what is new in the storage industry.

Yeah, so the newest thing for us at the moment is that we just, in mid-October, launched our Gen 7 platform. So think about the stresses that are going on in IT environments. This is our attempt to keep pace with the performance levels that the IBM teams are now putting into their storage environments, the all-flash data centers and the new technologies around non-volatile memory express. So that's really what's driving this, along with the desire to say, you know what, people aren't allowed to be in the data center. And if they can't be in the data center, then the fabrics actually have to be able to figure out what's going on and basically provide a lot of the automation pieces. So it's something we're referring to as the autonomous SAN.

And we're going to dig into NVMe over fabrics in a second, but I do want to continue with you. In terms of industries, financial services, healthcare, airlines, who are the biggest users? The biggest need?

Yeah, pretty much across the board. If you look at the Global 2000 as an example, something on the order of about 96, 97% of the Global 2000 make use of Fibre Channel environments in portions of their world. It generally tends to be a lot of the high-end financial firms, a lot of the pharmaceutical guys, the automotive, the telcos. Pretty much if the data matters, if it's something that's critical, whether we talk about payment card information or healthcare environments, data that absolutely has to be retained, has to get there, has to perform, then it's this combination that we're bringing together today around the new storage elements and the functionalities they have there, and then our ability in the fabric. So, the concept of a 64-gig environment to help us basically not be the bottleneck for the application demands. Because one thing I can promise you after 40 years in this industry is the software guys always figure out how to consume all the performance that the hardware guys put on the shelf, right? Every single time.

Wow, there's a gauntlet thrown down there. Matt, let's go to you. I want to get IBM's perspective on this. Again, as we said, a 22-year strategic partnership. As we look at things like not being able to get into the data center during these unprecedented times, and also the need to remove some of those bottlenecks, how does IBM view this?

Yeah, well, it's certainly a case of raising the bar, right?
So we have to, as a vendor, continue to evolve in terms of performance, in terms of capacity, cost density, escalating simplicity, because it's not just a case of not being able to touch the arrays; there are also fewer people to touch the arrays, right? It's a case where our operational density continues to have to evolve. We have to be able to raise the bar on the network, still be able to saturate those line rates, and provide essentially a cost efficiency that gets us to a utilization that raises the bar on our per-admin ratio, from not just talking about 200, 300 terabytes per admin, but going beyond petabyte scale per admin. And we can't do that unless people have access to the data, right? And we have to provide the resiliency. We have to provide the simplicity of presentation and automation from our side. And then this collaboration that we do with our network brethren at Brocade here keeps us out of the discussion when it comes to networks and throughput bottlenecks. So we truly appreciate this Gen 7 launch that they're doing. We're happy to come in and build that pipe on the flash side for them.

Excellent. And Brian, as a distinguished engineer, I'd love to get your perspectives on the evolution of the technology over this 22-year partnership.

Okay, thanks Lisa. And it certainly has been a long-standing, great relationship, a great partnership, all the way from jointly inventing things, to developing, testing, and deploying different technologies through the course of time. And it's been one of those where we are today, like AJ talked about, able to sustain what the applications require in this always-on type of environment. And, as Matt said, bringing together the density and operational simplicity to make that happen, because we have to make it easier from the storage side for operations to be able to manage this volume of data that we have coming at us. And our due diligence is to be able to serve the data up as fast as we can and as resiliently as we can.

And sticking with you, Brian, that simplicity is key, because as we know, as we get more and more advances in technology, the IT environment's only becoming more complex. So truly enabling organizations in any industry to simplify is absolute table stakes.

Yeah, it definitely is. And that's core to what we're focused on: how do we make the storage environment simple? It's been one of those things through the years. Historically, we, and the industry as a whole, have had entry-level products, mid-range products, and high-end products. And earlier this year we said, enough of that. It's one product portfolio. So it's the same software stack; it's just, okay, small, medium, and large in terms of the appliances that get delivered. And again, building on what Matt said from a density perspective, we can have a petabyte of uncompressed, non-data-reduced storage in a 2U enclosure. So from an overall administration perspective, it becomes, again, one software stack, one automation stack, one way to do point-in-time copies, replication. So we're focusing on how to make that as simple for operations as we possibly can.

I think we'd all take a little bit of that right now. Matt, let's go to you, and then AJ. Let's talk a little bit more and dig into the IBM storage arrays. I mean, we're talking about advances in flash. We're talking about NVMe as a forcing function for applications to change and evolve with the storage.
Matt, give us your thoughts on that.

We saw a monumental leap in where we take some of the simplicity pieces, from how we deliver our arrays, but also the technology within the arrays. About nine months ago here, in February, we launched into the latest generation of NAND technology. And with that, following the story of simplicity, one of the pieces whose value prop we've been happily, essentially, negating is storage-level tiering: being able to say, hey, we still support the idea of going down to nearline SAS and enterprise disk and different flavors of solid state, whether it's tier one, short usage, to tier zero, high performance, high usage, all the way up to storage-class memory. While we support those technologies and the automated tiering, the elegance of what we've done with the latest-generation technology that we launched nine months ago has been to essentially homogenize the environments, to be able to deliver that petabyte-per-rack-unit ratio that Brian was mentioning, to be able to deliver an all-tier-zero solution that doesn't have to go through the woes of software-managed data reduction or any kind of software-managed tiering just to be always fast, always essentially available, with a 100% data availability guarantee that we offer through a technology called HyperSwap. But it's really kind of highlighting what we've taken from that simplicity story by going that extra mile and meeting the market in technology refresh. I mean, if you say the words IBM over the Thanksgiving table, you're kind of thinking Big Blue, big mainframe, old iron stuff. But I'm very happy to see, over in distributed systems, that we are in fact leading this pack by multiple months: not just the fact that, hey, we can announce sooner, but actually bringing on-prem the actual solution itself nine, ten months prior to anybody else. And that gets us into new density flavors, gets us into new efficiency offerings, right? Not just to talk about, hey, I can do this petabyte scale in a couple of rack units, but, with the likes of Brocade, hey, that actually equates to a terabyte per second in the floor tile. What's that do for your analytics story? And the fact that we're now leveraging NVMe to undercut the value prop of spinning disk in your HPC analytics environments by 5x, that's huge, right? So now let's take nearline SAS off the table for anything that's actually holding data of such analytical value to us. So on the simplicity elements, what we're doing now is being able to make our own flash, which we've been deriving from the Texas Memory Systems acquisition eight years ago, and then integrating that into the essentially industry-proven software solutions that we do with Spectrum Virtualize. That appliance form factor has been absolutely monumental for us in the distributed systems.

And thanks for giving us a topic to discuss at our socially distant Thanksgiving table. We'll talk about IBM. I know now I have great fodder for conversation. AJ, over to you. A lot of advances here, also in such dynamic times. I want to get your perspective, Brocade's perspective, on how you're taking advantage of these latest technologies with IBM. And also, from a customer's perspective, are they feeling, and really being able to embrace and utilize, that simplicity Matt talked about?

Right. Yeah, so there's a couple of things that fall into that, to be honest.
One of which is that, similar to what you heard Brian describe across the IBM portfolio for storage, in our SAN infrastructure it's a single operating system up and down the line. So from the most entry-level platform we have to the largest platform we have, it's a single software up and down. It's a single management environment up and down. And it's also intended to be extremely reliable and extremely performant, because here's part of the challenge. Matt's talking about, you know, multiple petabytes in a 2U rack height, right? But the conversation you want to flip on its head there a little bit is: okay, exactly how many virtual machines and how many applications are you going to be driving out of that? Because it's going to be thousands, like between 6,000 and 10,000 potentially, right? So imagine, then, if you have some sort of little hiccup in the connectivity to the data store for 6,000 to 10,000 applications. That's not the kind of thing that people get forgiving about, right? When we're all home like this, and when you're healthcare, when you're finance, when you're entertainment, when everything is coming to you across the network and remotely in this fashion, and it's all application driven, the one thing that you want to make sure of is that the network doesn't hiccup, because humans have a lot of really good characteristics; patience would not be one of those. And so you want to make sure that everything is in fact in play and running. And that's one of the things that we work very hard with our friends at IBM to make sure of: that the kinds of analytics Matt was just describing are things that you can readily get done. "Speed is the new currency of business" is a phrase, a quote, you hear from Marc Benioff at Salesforce, right? And he's right. If you can get intelligence out of the data you've been collecting, that's really cool. But one of the other sort of flip sides on people not being able to be in the data center, and, to Matt's point, not as many people around either, is: how are humans fast enough? Honestly, when you look at the performance of the platforms these folks are putting up, how is human response time going to be good enough? And we all sort of have this mindset of a network operations center, where you've got a couple dozen people in a half-lit room staring at massive screens on the wall, waiting for something to pop, right? Well, okay, if the first time a red light pops the human begins the investigation, at what point is that going to be good enough? And so our argument for the autonomy piece of what we're doing in the fabrics is: you can't wait on the humans. You need to augment them. Yes, I get that people still want to be in charge, and that's good. Humans are still smarter than the silicon. We're not as repeatable, but we're still so far smarter about it. And so we need to be able to do that measurement. We need to be able to figure out what normal looks like. We need to be able to highlight to the storage platform and to the application admins when things go sideways, because the demand from the applications isn't going to slow down. The demands from your environment, whether you want to think about taking the next steps with not just your home entertainment systems, but learning, augmented reality, right? Virtual reality environments for kids, right?
How do you make them feel like they're part and parcel of the classroom? For as long as we have to continue living in a modified world, and perhaps past it, right? If you can take a grade school from your local area and give them a virtual walkthrough of the Louvre, where everybody's got a perfect view and it all looks incredibly real to them, those are cool things, right? Those are cool applications. If you can figure out a new vaccine faster, not a bad thing, right? If we can model better, not a bad thing. So we need to enable those things. We need to not be the bottleneck. Which is, you get Matt and Brian over an adult beverage at some point and ask them about the cycle time for the silicon they're playing with. We've never had Moore's Law applied to external storage before. Never in the history of external storage has that been true, until now. And so, their cycle times... Matt, Brian?

Yeah, you struck a nerve there, AJ, because it's pretty simple for us to follow the linear increase in capacity and computational horsepower, right? We just ride the x86 bandwagon, ride the silicon bandwagon. But what we have to do in order to maintain that simplicity story is follow the more important piece, the resiliency factor, right? Because as we increase the capacity, as we increase essentially the amount of data each admin is responsible for, we have to literally logarithmically increase the resiliency of these boxes. Because we're talking about petabyte-scale systems hosting really 10,000 virtual machines in a 2U form factor. I need to be able to accommodate that, to make sure things don't blip. I need resilient networks, right? I need to have redundancy and access. I need to have protection schemes at every single layer of the stack. And so we're quite happy to be able to provide that, right? As we leapfrog the industry and get into literally situations that are three times the competitive density of what you see out there in other distributed systems that are still bound by the commercial offerings, then, hey, we also have to own that risk from the vendor side. We have to make these things essentially RAID 6 protection scheme equivalent from a drive standpoint, with active-active controllers everywhere, to be able to supply the performance and consistency of that service even through the bad-path situations.

And to that point, one of the things that you talk about that's interesting to me, that I'd kind of like you to highlight, is your recovery times, right? Because bad things will happen, right? And you guys do something very, very different about that. That's critical to a lot of my customers, because they know that Murphy will show up one day. Because it happens, right? So then what?

Well, speaking of that "then what," Brian, I want to go over to you. You mentioned, Matt mentioned, resiliency. And if we think of the situation that we're in in 2020, many companies are used to DR and BC plans for natural disasters, pandemics. So as we look at the shift, and then the volume of ransomware that's going up, one ransomware attack every 11 seconds this year right now: Brian, what's the change that businesses need to make from cybersecurity to cyber resiliency?

Yeah, it's a good point. And I try to hammer that home with our clients: you're used to having your business continuity, disaster recovery.
This whole cyber resiliency thing is a completely separate practice that we have to set up and think about, going through the same thought process that you did for your DR. What are you going to do? What are you going to protect? How are you going to test it? How are you going to detect whether or not you've got ransomware? So I spend a lot of time with our clients on that theme: you have to think about and build your cyber resiliency plan, because it's going to happen. It's not like a DR plan, where it's a pure insurance policy; when, like you said, every 11 seconds there's an event that takes place, it's going to be a when, not an if. And so we have to work with our customers to put in place a plan for cyber resiliency, and then we spend a lot of discussion on, okay, what does that mean? For my critical applications, from a restore-time, a backup-immutability perspective, what do we need for those types of services? In terms of quick restore, which are my tier zero applications that I need to get back as fast as possible? Which other ones can I stick out on tape or virtual tape and do things like that? So again, there's a wide range of technology that we have available in the portfolio for helping our clients with cyber resiliency. And we try to distinguish cyber resiliency versus cybersecurity. So how do we help to keep everybody out, from a cybersecurity view? And then what can we do on the cyber resiliency side, from a storage perspective, to help them? Once it gets to us, that's a bad thing. So how can we help our folks recover?

Well, and the point that you're making, Brian, is that now it's not a matter of "could this happen to us?" It's going to; how much can we tolerate? But ultimately we have to be able to recover; if we can't restore that data, then, you know... And one of those things, when you talk about ransomware and things, we go to people as the weakest link in security. AJ talked about that; there's the people, and yeah, there's probably quite a bit of lack of patience going on right now. But AJ, I want to go back over to you and kind of look, from a data center perspective, at these storage solutions being able to utilize things that help the people: AI and machine learning. You talked about AR, VR. Talk to me a little bit more about that as you see these trends, these new simplified solutions, moving forward, say over the next 12 months or so.

Yeah, so a couple of things around that. One of which is this iteration of technology that the storage platforms, the silicon, they're making use of. Matt, I think you told me 14 months is roughly the silicon cycle that you guys are seeing, right? So performance levels are going to continue to go up. The speeds are going to continue to go up. The scale is going to continue to shift. And one of the things that that does for a lot of the application owners is it lets them think broader, it lets them think bigger. And I wish I could tell you that I knew what the next big application was going to be, but then we'd be having a conversation about which island in the Pacific I was going to be retiring to, right? But they're going to come, and they're going to consume this performance, because if you look at the applications that you're dealing with in your everyday life, they continue to get broader. The scope of them continues to scale out, right? There's things that we do.
I saw, I think it was an MIT development recently, where they were originally doing it for Alzheimer's and dementia, but they're talking about being able to use the microphones in your smartphone to listen to the way you cough and use that as a predictor for people who have COVID but are not symptomatic yet. So asymptomatic COVID people, right? So when we start talking about where this kind of technology can go and where it can lead us, there's sort of this unending possibility for it. But what that depends on, in part, is that the infrastructure has to be extremely sound, right? The foundation has to be there. We have to have the resilience, the reliability. And one of the points that Brian was just making is extremely key. We talk about disaster tolerance, right? And business continuance. Well, business continuance is: how do you recover? Cyber resilience is the same conversation, right? So you have the protection side of it. Here are my defenses. Now, what happens when they actually get in, right? And let's be honest, humans are frequently that weak link, for a variety of behaviors that humans have. And so when that happens, where's the software in the storage that tells you, hey, wait, there's an odd traffic behavior here, where data's being copied at rates and to locations that are not normal, right? And so that's part of what we're doing on our side of the automation: how do you know what normal looks like? And once you know what normal looks like, you can figure out where the outliers are. And that's one of the things that people use a lot for trying to determine whether or not ransomware is going on: hey, this is a traffic pattern that's new. This is a traffic pattern that's different. And are they doing this because they're copying the data set from here to here and encrypting it as they go? Because that's one of the challenges you've got to watch for. So I think you're going to see a lot of advancement in the application space, and not just the MIT stuff, which is great (or I may have misspoken, maybe it was Johns Hopkins, and I apologize to the Johns Hopkins folks if it was), but that kind of scenario, right? There's no knowing what they can make use of here in terms of the data sets, because we're gathering so much data. The internet of things is an overused phrase, but the sheer volume of data being generated outside of the data center is manipulated, analyzed, and stored internally, right? Because you've got to have it someplace secure. And that's one of the things that we look at from our side: we've got to be as close to unbreakable as we can be, and then, when things do break, able to figure out exactly what happened as rapidly as possible, and then the recovery cycle as well.

Excellent. And Matt, I want to finish with you. We just have a few seconds left, but AJ was talking about this massive evolution in applications, for example, and we talk about simplicity, and we talk about resiliency and being able to recover when something happens. How do these new technologies that we've been unpacking today help the admin folks deal with all of the dynamics that are happening today?

Yeah, so I think the biggest drop-the-mic thing we can say right now is that we're delivering 100% tier zero NVMe, without data reduction value props on top of it, at a cost that undercuts off-prem S3 storage.
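[Editor's note: The "learn what normal looks like, then find the outliers" approach AJ describes above can be illustrated with a minimal, hypothetical sketch: a rolling per-port traffic baseline with a simple deviation threshold. This is not Brocade's actual implementation; the class name, window size, sample cadence, and threshold are all assumptions made only for illustration.]

from collections import deque
from statistics import mean, stdev

class TrafficBaseline:
    """Hypothetical sketch: a rolling throughput baseline for one SAN port.

    Not Brocade's implementation, just the generic "learn normal, flag
    outliers" pattern described above. Window and threshold are
    illustrative assumptions.
    """

    def __init__(self, window: int = 288, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # e.g. 24h of 5-minute samples
        self.threshold = threshold           # z-score considered "not normal"

    def observe(self, mb_per_sec: float) -> bool:
        """Record a sample; return True if it deviates from the baseline."""
        abnormal = False
        if len(self.samples) >= 30:          # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(mb_per_sec - mu) / sigma > self.threshold:
                abnormal = True              # a traffic pattern that's "new"
        self.samples.append(mb_per_sec)
        return abnormal

# Usage: steady load, then a burst like data suddenly being copied elsewhere.
port = TrafficBaseline()
for reading in [118.0, 119.0, 120.0, 121.0, 122.0] * 8 + [950.0]:
    if port.observe(reading):
        print(f"ALERT: {reading} MB/s deviates from the learned baseline")

[In a real fabric this would key off switch telemetry per port or per flow; the point is simply that once a baseline exists, deviation checks can run at machine speed instead of waiting on an operator in a NOC to notice a red light.]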
So if you look at what you can do with an off-prem solution for air-gapped cyber resiliency, you can put your data somewhere else, and it's going to take however long to transfer the data back on-prem until we get back to your recovery point. But when you look at the economics we're delivering right now in distributed systems, hey, your DR side, your copies of data, do not have to wait for that off-prem bandwidth to restore. You can actually, literally, restore in place, and you couple that with all of the technology on the software side that integrates with it. I get incremental point-in-time recoveries, whether it's on the primary side or the DR side, wherever. But the fact that we get to approach this thing from a cost value means, by all means, I can naturally absorb a lot of cyber resiliency value in that too. And because it's all getting the same orchestrated capabilities, regardless of big, small, medium, all that stuff uses the same skill sets. And so I don't really need to learn new platforms or new solutions to provide cyber resiliency. It's just part of my day-to-day activity, because fundamentally all of us have to wear that cyber resiliency hat. But our job as a vendor is to make that simple, make it cost-elegant, and be able to provide essentially homogeneous solutions overall. So, hey, as your business grows, your risk gets averted, and your recovery needs also get addressed, essentially, by your incumbent solutions and architectures. So it's pretty cool stuff, what we're doing right now.

It is pretty cool. And I'd say a lot of folks would say that's the Nirvana, but I think the message that the three of you have given in the last 20 minutes or so is that with IBM and Brocade together, this is a reality. You guys are a cornucopia of knowledge. Brian, Matt, AJ, thank you so much for joining me on this panel. I really enjoyed our conversation.

Thank you again, Lisa.

Thanks, Lisa.

My pleasure. For my guests, I'm Lisa Martin. You've been watching this IBM Brocade panel on theCUBE.