Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Palo Alto, California, at one of the Chertoff Group's events, called Security in the Boardroom. They hold these events all over the country, and the idea is to elevate the security conversation beyond the edge and beyond CISOs to the boardroom, which is really where the conversation needs to happen. And our next guest I'm really excited to have: Chad Sweet, co-founder and CEO of the Chertoff Group. Welcome, Chad. Great to be here. And with him, Reggie Brothers, a principal at the Chertoff Group who spent a lot of time in Washington; you can check his LinkedIn and find his whole history, I won't go through it here. But first off, welcome, gentlemen. Thank you.

So before we jump in, a little bit about these events. What are they about? Why should people come?

Well, basically they're a forum in which we bring together both practitioners and consumers of security, often around a pragmatic issue that industry or government is facing. This one, as you just said, is about the priority of cybersecurity in the boardroom, which is obviously what we're reading about every day in the papers with the Petya and NotPetya and WannaCry attacks. These are, I think, teachable moments that are affecting the whole nation. So this is a great opportunity for folks to come together in an intimate forum, and we welcome everybody who wants to come; check out our website at ChertoffGroup.com.

Okay, great. The other theme we keep hearing over and over is AI. We hear about AI and machine learning all over the place; over in Mountain View, self-driving cars are driving all over the place, and Google tells me, like, you're at home now, and I'm like, oh, that's great. But there are much bigger fish to fry with AI at a much higher level, and Reggie just came off a panel talking about some of those higher-level, I don't know if "issues" is the right word, maybe "challenges" is the right word, around AI for security. I wonder if you can share some of those insights.

I think issues, challenges, those are the right words. Challenges, that's right, the better word. Because particularly when we're talking about security applications, whether corporate or government, it comes down to trust. How do you trust that this machine has made the right kind of decision? How do you make it traceable? One of the challenges with current AI technology is that it's mostly based on machine learning, and machine learning tends to be a black box: you know what goes in and you see what comes out, but that doesn't necessarily mean you understand what's going on inside the box. So if you're in a situation where you really need to be able to trust the decision this machine is making, how do you trust it? What's the traceability?

On the panel we started discussing that. Why is it so important to have this level of trust? You brought up autonomous vehicles. Of course you want to be able to trust your vehicle to make the right decision if it has to make one at an intersection. How you come to trust that machine becomes a really big issue. It's something the machine learning community, as we learned on the panel, is really starting to grapple with and face. So I think there's good news.
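To make that black-box point concrete, here's a minimal sketch in Python of the difference between an opaque verdict and a traceable one. Everything in it is hypothetical: the event fields, weights, thresholds, and rules are invented for illustration, not drawn from any real system. The point is just that the traceable version hands back a chain of reasons a human could audit.

```python
def black_box_verdict(event: dict) -> bool:
    """Stand-in for a trained ML model: you see what goes in and what
    comes out, but not why. Weights and threshold are opaque to the user."""
    score = 0.7 * event["failed_logins"] / 10 + 0.3 * event["bytes_out"] / 1e9
    return score > 0.5


def traceable_verdict(event: dict) -> tuple[bool, list[str]]:
    """Hybrid, rule-based stand-in: every contribution to the decision is
    recorded, so the decision path can be reviewed after the fact."""
    trace = []
    if event["failed_logins"] > 5:
        trace.append(f"failed_logins={event['failed_logins']} exceeds 5")
    if event["bytes_out"] > 500_000_000:
        trace.append(f"bytes_out={event['bytes_out']} exceeds 500 MB")
    return len(trace) > 0, trace


event = {"failed_logins": 8, "bytes_out": 700_000_000}
print(black_box_verdict(event))   # True, but with no explanation
flagged, reasons = traceable_verdict(event)
print(flagged, reasons)           # True, plus the auditable reasons
```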
I think it's a question we have to make sure we actually ask when we're adopting these kinds of machine learning and AI solutions.

This trust issue is really interesting, because there are so many layers to it, right? We all get on airplanes and fly across the country all the time, and those planes are being flown by machines for the most part. At the same time, if you start to unpack some of these crazy algorithms, even if you could open up the black box, unless you're a data scientist with a PhD in statistical analysis, could you really understand it anyway? So how do you balance it? When we're talking about the boardroom, what's the level of discovery, the level of knowledge, that's appropriate without being a full-fledged data scientist, one of the people actually writing those algorithms?

I think that's a challenge, right? Because when you look at the ways people are addressing this trust challenge, they're highly technical. People are building hybrid systems where you can do some type of traceability, but that's highly technical for the boardroom. What's important, and we talked about this on my panel and on the prior panel on cybersecurity governance, is being able to speak in a language that everyone on the board is going to understand. You can't just speak in computer science jargon. You have to be able to speak to the person who's actually making the decision, which means you have to really understand the problem. In my experience, the people who can speak in plain language understand the problem best. So these aren't problems that can't be explained; they just tend not to be explained, because they sit in a very technical domain.

Reggie's being very humble. He's got a PhD from MIT and worked at the Defense Advanced Research Projects Agency. He can open up the box.

I'm a simple guy from Beaumont, Texas, so I can kind of dumb it down for the average person. On the trust issue over time, whether we use the analogy of a car, the boardroom, or a war scenario, it's the result that builds comfort. The first time I let go of the wheel of a Tesla and let it drive itself was a scary experience. But when you actually see the result and get to experience the actual performance of the vehicle, that's when the trust can begin.

In a similar vein, we're seeing automation start to take hold in the military context. The big issue will be that moment of ultimate trust, i.e., do you allow a weapon to have lethal decision-making authority? We just talked about that on the panel; that ultimate trust is not something the military is prepared to extend today. There are only a couple of places, like the DMZ on the Korean peninsula, where, because the response time after detecting an attack is so short, a few systems exist for which lethal authority is at least being considered; those are the rare exceptions. Elon Musk has talked about the threat of AI and how, if we don't put some norms around it, that trust could never develop, because there wouldn't be these checks and balances.
So in the boardroom, in that last scenario, boards are going to be facing these cyber attacks, and the more they experience, once an attack happens, the AI providing some immediate response and mitigation, and hopefully even prevention, that's where the trust will begin.

The interesting thing, though, is that the sophistication of the attacks is going up dramatically. Why do we have machine learning and AI? Because it's fast, right? It can react to a ton of data and move at speeds that we as people can't, like your self-driving car. And now we're seeing an increase in state-sponsored threats. It's not just the kid in the basement hacking away to show his friends; they're after much more significant information, and they're going after much more significant systems. So it almost begs the question, like your Korean DMZ example: when the time windows are shorter, the assets are more valuable, and the sophistication of the attacking party goes up, can people manage it? I would assume the human role keeps moving further and further up the stack while automation takes an increasing piece of it.

So let's pull on that, right? If you talk to the Air Force, and the Air Force does a lot of work on autonomy, DOD in general does, they have a chart showing that over time the resources dedicated to the machine increase and the resources dedicated to the human decrease, down to a certain level. And that level is really governed by policy and compliance issues. So there's some level above which, because of policy and compliance, a human will always be in the loop. You don't let the machine run totally open loop, but the point is it has to run at machine speed. Go back to your example of high-speed cyber attacks: you need some type of defensive mechanism that can react at machine speed, which means humans are out of that part of the loop, but you still have to have the corporate board member, as Chad said, trust that machine to operate at machine speed without a human in that part of the loop.
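Here's a minimal sketch of that policy point, again with invented names and thresholds: the machine blocks low-consequence events at machine speed, while anything above a policy-set consequence threshold is contained and escalated so a human stays in the loop.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    source_ip: str
    consequence: float  # 0.0 (nuisance) .. 1.0 (mission-critical asset)


# Set by policy and compliance, not learned by the model.
HUMAN_REVIEW_THRESHOLD = 0.8

human_queue: list[Alert] = []


def respond(alert: Alert) -> str:
    if alert.consequence < HUMAN_REVIEW_THRESHOLD:
        # Machine-speed path: block automatically, log for later audit.
        return f"auto-blocked {alert.source_ip}"
    # High-consequence path: contain provisionally, escalate to a human.
    human_queue.append(alert)
    return f"quarantined {alert.source_ip}; escalated for human decision"


print(respond(Alert("203.0.113.7", consequence=0.3)))
print(respond(Alert("198.51.100.2", consequence=0.9)))
print(len(human_queue))  # 1 -- the human stays in the loop
```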
And on that human oversight, one of the things discussed on the panel is that AI can, interestingly, also be used to train humans and upgrade their skills. Right now in the Department of Defense they do these exercises on cyber ranges, and there's about a four-month waiting period just to get on a range; that's how congested they are. And even once you're on, there's a limited pool of human instructors who can simulate the adversary and oversee the exercise. So using AI to create a simulated adversary, in a gamified environment, is increasingly going to be necessary to keep everyone's skills sharp, in real time, 24/7, against active threats that morph over time. That's where we have to get our game up to. So watch for companies like Circadence, which are doing this right now with the Air Force, Army, and DISA. We'll also see this applied, as Reggie said, in the corporate sphere, where a lot of folks will tell you they're facing an asymmetric threat: they have a lot of tools, but they don't necessarily have the confidence that when the balloon goes up, when the attack is happening, their team is ready. Being able to use AI to simulate these attacks against their own teams means they can show their boards that their people are at a given level of testedness and readiness.

Yeah, it's funny that it's talking to me in the background as you're talking about the cyber piece. But there's another twist on that, right? Machines don't get tired. They don't have a bad day. They didn't have a fight with the kids in the morning. So you've got that human frailty, which machines don't have; that's not part of the algorithm, generally. But it's interesting to me that, as with most things of any importance, it's not really a technical decision. The technical piece is actually pretty easy. The hard part is the moral considerations, the legal considerations, the governance considerations. Those are what ultimately drive the decision to go or not go.

Absolutely. I think one of the challenges we face is what the level of interaction between the machine and the human should be, and how that evolves over time. People talk about the centaur model, after the mythical half-horse, half-human: the same kind of pairing of machine and human, with a seamless type of interaction. What does that really mean? Who does what? Machines have beaten our human chess masters and beaten our Go masters, but what seems to work best is some level of teaming between the human and the machine. What does that mean? I think the challenge going forward is understanding where that frontier is, where the human and machine have to have this really seamless interaction. How do we train for that? How do we build for that?

So give me your last thoughts before I let you go; the clock is running and they want you back. As you look down the road, just a couple of years, I would never say more than a couple of years, Moore's law is not slowing down, whatever people argue. Chips are getting faster, networks are getting faster, data systems are getting faster, computers are getting faster. We're all carrying around mobile phones, throwing off tons of digital exhaust, as are our systems. What do you tell people? How do boards react in this rapidly evolving, exponential-curve environment we're living in? How do they not just freeze?

To use a financial analogy, almost every board knows the basic foundational formula of accounting: assets equal liabilities plus equity. No business today is immune from the digital economy; every business is being disrupted by it, and every business is underpinned by the trust of the digital economy. So every board going forward has to become literate in cybersecurity, and artificial intelligence will be part of that board conversation. They'll need to learn the fundamental formula of risk: risk equals threat times vulnerability times consequence. In the months ahead, part of what the Chertoff Group will be doing is helping to educate those boards and facilitate these important strategic discussions.
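As a toy worked example of that risk formula, with scenario scores invented purely for illustration:

```python
def risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Risk = threat x vulnerability x consequence."""
    return threat * vulnerability * consequence


# Two hypothetical scenarios, each factor scored 0.0 to 1.0:
commodity_malware = risk(threat=0.9, vulnerability=0.2, consequence=0.1)  # 0.018
state_sponsored = risk(threat=0.6, vulnerability=0.5, consequence=0.9)    # 0.27

# The rarer but well-resourced attack dominates because the consequence is
# high: the kind of comparison a board can reason about without reading code.
print(commodity_malware, state_sponsored)
```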
All right, we'll leave it there. Chad Sweet, Reggie Brothers, thanks for stopping by. Thank you. Thank you. All right, I'm Jeff Frick. You're watching theCUBE. We're at the Chertoff event, Security in the Boardroom. Think about it. We'll catch you next time.