All right, here we go. The Daily Tech News Show is brought to you by people like me, not outside organizations. To learn more, go to dailytechnewshow.com slash support. This is The Daily Tech News Show. I'm Tom Merritt, and welcome to a special edition. We'll be talking with Steve Grobman, Senior Vice President and Chief Technology Officer for McAfee. Steve, thanks for joining us today.

Hey, you bet, Tom. Great to be here.

Now, one of the things we wanted to talk to you about, of course, is cybersecurity, because that's what McAfee does. Where do AI and machine learning currently intersect with security?

So AI and machine learning have become one of the most powerful tools the industry has found for cybersecurity. McAfee, along with many other players in the industry, is using them across the board for classification, identifying what's a threat and what's not, identifying different types of behavior, and, moving forward, even for things like attack reconstruction: better understanding what's happening within your environment to help set up remediation.

Now, you had a blog post recently about the concept of human-machine teaming, which one of our supporters said sounded like the show where they had the robot and the cop partnered up, but I think it's a little less dramatic than that. Can you define what that means and how, if at all, it differs from AI and machine learning?

Sure. What it really says is that in order to get the maximum value out of AI and machine learning, they have to be coupled with human intellect. So human-machine teaming takes humans and has them work iteratively with the technology, and this provides a better outcome than either the humans working alone or just delegating everything to the technology.
And in a field like cybersecurity, where there's a human adversary on the other end of the wire, it's critical that we don't assume the next attack will look just like the previous one. That's where human intuition, coupled with the scale and sophistication of some of the newer AI and machine learning capabilities, gives us a much better outcome.

Yeah, I'm intrigued by that, because I think a lot of people jump to the conclusion that, oh, well, machine learning, that means the algorithm can predict everything and know everything. But I like what you're saying: anybody who's used Siri knows that these things aren't as good as us at thinking yet. Can you give me an example of where a human could provide something that AI can't quite provide yet?

Sure, a few quick examples. Humans are much better at understanding the consequences of an action, at understanding the situational elements of both an attack and the steps to remediate it. If you're going to take a machine offline, and it's the CEO's machine, that's going to have a different level of impact than if it's a line worker's machine on a manufacturing floor. Humans can also understand that the adversary will look for ways to make an attack work differently from what the algorithms were trained on. We have to understand that machine learning in cybersecurity is different from machine learning in other fields. Take weather forecasting, for example: as we get much better at forecasting hurricanes, it's not as if the laws of physics decide to change and make water evaporate differently.

Despite what movies teach us, the hurricane does not follow the hero of the story, yeah.

Exactly. But that's exactly what bad actors and attackers do. They look for ways to confuse the models.
They look for ways to evade the models, what we call evasion tactics and countermeasures, and they even come up with techniques to make it much more difficult for a defender to defend an environment.

Yeah, I like that we accidentally hit on the fact that bad actors in movies and bad actors in cybersecurity kind of act the same way. So I think a lot of people would say, okay, but you could teach a machine to know that the CEO's computer is important. You could teach a machine to anticipate what bad actors generally do. What can human-machine teaming do that AI and machine learning can't do on their own?

So one of the things a human can do is understand something they've never seen before. Where machine learning is very effective is at classifying or identifying something that looks similar to the data it was trained on. But a human can understand when a bad actor is taking a new approach and start to reconstruct what's actually happening. Humans can also start to detect when certain evasion tactics are being used. For example, one of the evasion tactics we're looking at is something called raising the noise floor. It essentially means that an attacker will create a large enough set of false positives to force the defender to recalibrate their model. Take an example: if I wanted to break into your house, and you have a motion sensor above your garage door that sets off an alarm, and every day for a month I drive by at 11 p.m. on my bicycle, trip the motion sensor, and set off your alarm, what are you going to do? You're going to either turn it down, ignore it, or disable it. And that's exactly what bad actors can do with some of these technologies: they can create benign samples that look malicious and essentially force a false positive. And given the cost of dealing with those false positives, it forces the defenders to recalibrate their model.
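The "raising the noise floor" tactic can be sketched in a few lines of Python. This is a toy simulation for illustration only, not anything from McAfee: a detector alerts on events whose anomaly score crosses a threshold, the attacker floods it with benign events crafted to score just above that threshold, and the defender's natural response, raising the threshold to cut false positives, ends up missing a real attack.

```python
import random

random.seed(7)

def alerts(events, threshold):
    """Return the events whose anomaly score crosses the threshold."""
    return [e for e in events if e["score"] >= threshold]

# Normal traffic scores low; one real attack scores high.
normal = [{"score": random.uniform(0.0, 0.4), "malicious": False} for _ in range(100)]
attack = [{"score": 0.75, "malicious": True}]

# Attacker-injected "bicycle rides": benign events crafted to score just
# above the defender's current threshold of 0.5.
noise = [{"score": random.uniform(0.55, 0.65), "malicious": False} for _ in range(200)]

# Before the noise campaign, threshold 0.5 catches the attack cleanly.
before = alerts(normal + attack, threshold=0.5)
print(len(before), any(e["malicious"] for e in before))   # 1 True

# During the campaign the alert queue explodes with false positives...
flooded = alerts(normal + noise + attack, threshold=0.5)
print(len(flooded))                                       # 201

# ...so the defender "recalibrates" by raising the threshold to 0.8,
# and the real attack at 0.75 now slips underneath it.
after = alerts(normal + noise + attack, threshold=0.8)
print(any(e["malicious"] for e in after))                 # False: attack missed
```

The scores and thresholds are invented, but the shape of the failure is the point: the attacker never touched the model, only the economics of responding to its alerts.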
So what we need to do as humans is be able to understand when one of these attack scenarios is underway, understand the different approaches a bad actor may be taking to create countermeasures or evasion tactics, and then adjust our defense posture accordingly.

Tell me if I've got this right, because it sounds like one of the implications is that a human can see a single event and say, that reminds me of this other single event, and I'm going to start looking for patterns that fit. I experienced this once before, I read about it once before. Whereas the way machine learning works, it needs repeated exposure to things to build up the pattern.

It not only requires repeated exposure, it's also very poor at dealing with what we call out-of-sample data. If there's a new attack that doesn't look like any attack it has ever seen before, it's very difficult for it to understand that this really is an attack scenario. Humans, on the other hand, are very good at being intuitive: if they see a new scenario that is causing damage in a certain way, or that differs from behavior they would consider normal, they're quite good at understanding the situational nature of it. But also recognize that humans are very bad at dealing with the massive quantities of data that exist in a modern organization. So being able to use the power of our latest technologies to collect, analyze, and process that data and provide intermediate results to the humans, that's a very big part of the strategy moving forward.

Yeah, it sounds like letting the human officers go on a hunch when it's productive, but having the AI correct that when it's like, no, we can pretty much say from the data that your hunch is not going to pan out.

Correct, and it even extends to understanding all sorts of one-off scenarios that are very obviously not an issue of concern but for which it's difficult to program every possible example.
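The out-of-sample problem can be illustrated with a toy distance check (my own sketch, not a McAfee technique, and the features are made up): a nearest-neighbor model can at least notice when an input is far from everything it was trained on, which is exactly the case to escalate to a human analyst rather than guess.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triage(sample, training_set, cutoff=2.0):
    """Classify via nearest neighbor, but escalate out-of-sample inputs.

    If the sample is far from everything the model was trained on, its
    label is meaningless, so hand it to a human instead of guessing.
    """
    nearest = min(training_set, key=lambda t: euclidean(sample, t["features"]))
    if euclidean(sample, nearest["features"]) > cutoff:
        return "escalate-to-human"
    return nearest["label"]

# Hypothetical 2-D behavior features, e.g. (files touched / 100, outbound MB / 100).
training = [
    {"features": (0.1, 0.2), "label": "benign"},
    {"features": (0.3, 0.1), "label": "benign"},
    {"features": (5.0, 4.0), "label": "ransomware"},
]

print(triage((0.2, 0.2), training))   # benign: close to known-good behavior
print(triage((4.8, 4.2), training))   # ransomware: matches a known attack
print(triage((50.0, 0.0), training))  # escalate-to-human: unlike anything trained on
```

Real detectors are far more sophisticated, but the design choice generalizes: a model that can say "I don't know" is what makes the human half of the team usable.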
So for example, if you have traffic coming from Russia or some other country that may be of concern, that's difficult to assess. Well, what if the employee is actually on vacation there? Then all of a sudden it's obviously not likely to be an issue of concern. There are so many different types of examples where human intuition can very easily spot either something to be concerned about or something not to be concerned about, and combining that with what we get out of these machine learning models is key. And we can look to other fields of science as examples as well. One of the things we found with weather forecasting, and Nate Silver calls this out in his book The Signal and the Noise, is that as well as the computer models do, the forecasts are always better when human intellect is layered on top, able to understand some of the nuance that can't be picked up by the model. If there's a structure or a certain geographic feature, that's something a human can very easily work into the forecast, where it's difficult for a generic model to pick up every bit of nuance that exists within an environment.

When you were talking earlier about the noise floor, it reminded me of a study that Nicholas Christakis and Hirokazu Shirado at Yale University did. They found, and I won't go too far into the example, that when they had AIs that people didn't know were AIs introduce just a little bit of error into what the group was trying to do in the experiment, it improved the human performance. Too much error would mess everything up, but with just enough, the humans were motivated to be a little more creative and figure out a response and a solution based on that error.

No, that's exactly right. And I think one of the things that reinforces is making sure that humans don't start coming to the conclusion that the machine is always going to be correct.
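The vacation example is essentially a human-supplied context rule layered on top of a model score. A minimal sketch, where the model, the countries, the travel calendar, and the thresholds are all invented for illustration:

```python
def model_score(event):
    # Stand-in for a trained model: geography alone makes this look risky.
    return 0.9 if event["src_country"] in {"RU", "KP"} else 0.1

# Human-contributed context the model was never trained on.
TRAVEL_CALENDAR = {"alice": "RU"}   # alice filed a travel notice for Russia

def assess(event):
    score = model_score(event)
    # Analyst rule: a login from a country the employee is known to be
    # visiting is expected, so suppress the geography-driven alarm.
    if TRAVEL_CALENDAR.get(event["user"]) == event["src_country"]:
        score = min(score, 0.1)
    return "alert" if score >= 0.5 else "ok"

print(assess({"user": "alice", "src_country": "RU"}))  # ok: she's on vacation there
print(assess({"user": "bob",   "src_country": "RU"}))  # alert: unexpected geography
```

The point is the layering, not the rule itself: the model supplies scale, the human supplies the one-off situational knowledge that would be impractical to train in.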
So machine models are very good at providing probabilistic output as to what a situation may be. The fact that there's an 80% probability that something may be malicious means it likely is, but we can never forget that 20% of the time it's not going to be malicious. And really recognizing that it's as important to correctly respond to cases where the machine gets things wrong as it is to allow the machine to help guide us to better decisions.

So how long, and I know you don't really know the answer, but if you had to guess, how long before the machine learning catches up and gets 99% good at this sort of thing?

I think part of the challenge in cybersecurity is that, given there is this adversary, it's going to be a much slower ramp to get to those very high percentages. What we see in our industry is that as soon as a new technology has enough deployment to defend a reasonably large percentage of environments, it creates incentives for attackers to really focus on countermeasures and evasion tactics. Whether it's what we call machine learning poisoning, essentially figuring out what data they can feed into a model while it's in the training stage in order to make the model less effective, or better evasion tactics to work around the model, those are key things that, in my view, will never let the models reach the point where they fully replace the human. You'll really need that human intellect, which on its own is constantly evolving and able to comprehend all sorts of new situations.

Yeah, the whole is greater than the sum of its parts is kind of the way to sum that up, I suppose.

Exactly.

Now that brings up the idea, as all security researchers know, that as soon as one side uses a tool, the other side does as well. What happens when the adversaries start using machine learning?
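The 80%-probability point maps naturally onto a triage policy: auto-act only at the extremes and queue the uncertain middle band for a human, so the cases where the model is wrong get a second look. A sketch under assumed band boundaries (the 0.95/0.05 cutoffs are illustrative, not anyone's product defaults):

```python
def route(prob_malicious, auto_block=0.95, auto_allow=0.05):
    """Route a model's probability estimate to an action.

    The key idea is that a 0.80 score is treated as "probably bad,
    but check", not as a verdict: the model guides the decision,
    and a human makes the risky calls in the middle band.
    """
    if prob_malicious >= auto_block:
        return "block"
    if prob_malicious <= auto_allow:
        return "allow"
    return "human-review"

for p in (0.99, 0.80, 0.02):
    print(p, route(p))
# 0.99 block, 0.80 human-review, 0.02 allow
```

Tightening or widening the middle band is exactly the calibration knob the noise-floor attack targets, which is why the decision can't be left to the model alone.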
It's a great point. If you think about every technology that's ever been invented, both attackers and defenders look at how they can use it to achieve their objectives more effectively and more efficiently, and we're seeing that with attackers using machine learning and artificial intelligence. For example, look at what machine learning is very good at. One of those things is classification problems, meaning it's able to sort a problem set into different classes. From an attacker's perspective, that means being able to find which victims will be easy to breach, which victims have a higher probability of holding data of value, or which will pay a ransom demand in a ransomware attack. These algorithms allow an attacker to take their investment in cybercrime and optimize its return. The other thing attackers are able to do is use artificial intelligence to supplement things that have traditionally taken human skill. A good example is spear phishing campaigns, which require tailoring the content of a message that's going to socially engineer a victim into clicking through and falling for the attack. An attacker used to have to choose between a generic message that would be mass-targeted at a large group, with a low return on victim conversion, or a spear phishing attack that took a lot of effort to tailor each message in order to get a higher return. What artificial intelligence does now is give the attacker the best of both worlds, because they can run a volume attack but tailor each message to be much more specific to the victim in order to get that higher return. So we need to recognize that although these technologies are doing amazing things for defenders, they're also providing a lot of value for the attackers, which makes defense even more difficult.
I wonder, and this just comes to me off the top of my head: when you're on the attacker side, you care more about volume in a lot of cases, especially with things like ransomware, where you may want to just blast it out and see what comes back in the net. Does that give defense an advantage, because it's using human-machine teaming to look for specifics, whereas the adversaries may not be as motivated to fine-tune their approach that way?

I think it's important to think about attackers as maximizing to a specific objective. In the case of ransomware, it may be about maximizing revenue, which may mean volume, but it also might be that they've found that investing in a smaller number of attacks tailored for very specific environments yields a higher return. For example, as the consumer ransomware market has started to dry up, we've seen ransomware attacks start to migrate to soft-target larger organizations: hospitals, consumer groups, or police stations, things of that nature that don't typically have the resources to invest in the kind of strong cyber defense you would see at financial institutions. The attackers are executing on a smaller number of targets, and what we project is that they'll start walking their way up to harder and harder targets, holding other groups hostage, whether it's manufacturing facilities or even potentially governments.

Well, Steve, thank you so much for taking the time to talk with us about this today. It's a fascinating topic and an interesting perspective on how AI can be implemented differently depending on the industry you're talking about.

Well, thank you so much. It's been great to chat with you.

If people want to find out more about what you're doing or about this topic, where should they go?
So we have a number of blog posts on our mcafee.com website, as well as a few research papers that we've commissioned, and they can find information there.

We'll have a link to one of those blog posts in our show notes at dailytechnewshow.com, and of course you can go to mcafee.com and look for the blogs there as well. Thanks to everybody who supports this show. We just ask that you give a little value back for the value you get at patreon.com slash DTNS. Our email address is feedback at dailytechnewshow.com. We're live Monday through Friday, 4:30 p.m. Eastern, 20:30 UTC, usually at alphageekradio.com and diamondclub.tv. We're at facebook.com slash dailytechnewshow, and our website, as you probably guessed, is dailytechnewshow.com. Thanks for listening, everybody. Talk to you later. This show is part of the Frog Pants Network. Get more at frogpants.com. The club hopes you have enjoyed this program.