Hello, and welcome to this episode of The Security Angle. I'm Shelly Kramer, Managing Director and Principal Analyst here at theCUBE Research. And in this episode of our show, we are going to talk about a CISO's take on the rise of AI-enhanced phishing and vishing. So to set the stage here as we dive into this topic, according to Deloitte, 91% of all cyberattacks begin with a phishing email and 32% of successful breaches use phishing techniques. This is not really news to anybody that's paying attention. I mean, phishing is the number one threat vector that a lot of cybercriminals rely on. So we've talked about this a ton before. I always feel a little bad when I say this, but it really is true: you know, we're the weakest link in the security chain. And so as we all rush to embrace AI in various ways throughout our organizations, threat actors are also wasting no time in leveraging AI-powered tech to help them, you know, supercharge their efforts at wreaking havoc, getting access to data and infiltrating networks. So we know, you know, threat actors are developing more intelligent ways to craft phishing attacks. And we're seeing them use things like automation and machine learning technology. This allows them to send large volumes of customized phishing attacks, called spear-phishing attacks, to target enterprises. And this increases the likelihood of infection. They also use three primary methods of phishing attacks: link-based attacks, malicious attachments and natural language threats. Today, I am thrilled to be joined by my colleague, fellow analyst and member of the CUBE Collective community, Joe Peterson. And our special guest today is Bill Harmer, operating partner and CISO at Craft Ventures. Welcome. Thanks for having me. So excited to start my week off with the two of you. So before we dive in more fully, Bill, will you share a little bit of your backstory and kind of walk us through your career journey?
All right, let's see if I can tell you something that surprises you. So I have been in the IT industry for 30 years. I've been in security for 25. I started my security career in the adult content industry. So in building some of the largest porn sites in the world back in the late 90s, we were being attacked regularly. You know, when in 1997 you can make $20 million a year in a pure internet business, you've got people coming after you. We were running dual DS3s at the time. And I love the way this comes out: I was running 90 megabits a second to the internet in '97. Everybody talks about a gig off their phone today. But I was the single largest consumer of internet bandwidth from Bell Canada, even beyond their entire home internet systems. So yeah, that's where I learned the craft. Back then, there was very little. There were no tools. We had to build our own firewalls, our own defense systems. We had to learn how this worked. And the only way to learn it was to actually do it. So we learned how to crack into stuff, how to break through defenses, and then start building off of that. I turned that into a career in startups. I went to Docspace, the single largest, I believe it's still to this date, an $800 million acquisition by an American company of a Canadian startup. I went through the banking industry. So I did some time at Manulife Financial. And I very specifically say I did some time at Manulife Financial because that's what it felt like at times. A little different ends of the spectrum, the porn industry and then financial services. Yeah, very different. Exactly. Yeah. And then I went back into startups. So I was the head of security and the Global Privacy Officer for SuccessFactors, pre-IPO, went through that public offering into the acquisition by SAP. They acquired us for $3.4, $3.6 billion. Went from there to Zscaler, pre-IPO. I was there for five years. Spent a couple years at SecureAuth.
Most recently, I joined Craft Ventures in '22, two years ago actually last month. Excellent. One of the things that, so thanks for delivering. I didn't know in advance what your answer was going to be. Thanks for bringing the porn. And somebody listening to this will be, what? But what so many people don't realize is that the porn industry has long been on the cutting edge of advancements in technology. And so it doesn't surprise me at all that that's where you honed your chops, and it makes perfect sense. It was the only thing that was making money back when that internet infrastructure was being built out. So it literally paved the way. I love to joke about it. For years I couldn't talk about it. It was just taboo. But nowadays it's one of those, whatever, been there, done that, got the t-shirt. But we were doing things like live stream broadcasting. What we're doing today, this recording, we were doing live streaming back in 1998 because we had the money to buy the licenses, which were really, they were like $15,000 a license. But we would turn that around and try something new. I think there was a company in our building called 2x4, and they do these annual general meetings. So they put on the stage, the lights, the cameras, and they do all the orchestration. I was sitting talking to the guy one day, I said, so what are you doing? He told me and I said, well, what about streaming that to the internet? And he was like, you can do that? I said, well, I can. I've got, you know, two $10,000 cameras up in the office. And I've got a couple of streaming licenses. We got some ISDN drops, we'll hook into the hotel. Went to downtown Toronto, set it up and broadcast one of the first annual general meetings to the internet in, I think that was like '98. So it gave us those abilities and, you know, on the cutting edge, you could really try and push the boundaries on a lot of stuff. Yeah, absolutely. Well, that's fascinating.
So let's dive into our conversation now a little bit and talk about generative AI. So, do you see phishing attacks getting a big boost from gen AI? Well, if you go back to some of the original phishing attacks, you used to see those ones that had misspelled or poorly worded text. And for a long time, I honestly thought that it was just because it was bad Google translations, but it turns out they were actually being crafted that way specifically. They were looking for somebody who was not aware enough to pick up on the grammar or the spelling mistakes, and who was therefore a more likely target. And just because you said earlier that you don't like saying that humans are the weakest link, I don't like saying it either. In fact, what I prefer to say is that they're the easiest target. Because when you think about cybersecurity, even for professionals trained in cybersecurity, it's difficult to stay current. And you know, if you were to put me into a CFO role, I'd be fired in two weeks. Like, that's just not what I do. And yet what we expect is we expect everybody in our organizations to be as good as us at cybersecurity. But on the AI side, the ability to now ingest enough data around tone and cadence and words that are used from, say, a CEO, or understanding what people respond to by pulling their Twitter feeds or their LinkedIn or whatever, you can craft these things really, really well. And you can craft very specifically to the target, but you can do it at scale and at volume, right? So before, if you were going after a CEO, you had one person sitting there working on it, manually creating it, working it. Now you can almost say, I want to target all CEOs across the oil and gas industry, and start building those phishes and start firing them out. And gen AI helps you do it in the blink of an eye. Absolutely. Absolutely. So I know that we've talked about this before, Joe. You and I talk about this all the time.
But Bill, will you explain to us, to our audience, a little bit about, you know, so we talked about phishing, but there's also vishing and smishing attacks. They're also on the rise. Will you just tell us a little bit about those two different kinds of attack vectors? Sure. Absolutely. So, you know, phishing is the usual, send the emails. Smishing is doing it across SMS, right? So the text-based attacks that you get, where you get the hello, hi, or more likely now what you're going to start seeing is something from your boss saying, I need you to do this, I'm not at my computer, right? They create a sense of urgency. They create a condition where you can't reply to them through the normal channels, and they try to get you to break protocol. And vishing is voice-based. So voice generation has been going on for quite a few years. One of our portfolio companies, Resemble AI, they build artificial voices for the movies, video games, stuff like that. So you can record voices, and you'll get these calls now where you'll be saying hello. And the key with this, I find, is to stick with one word. Don't start using different words: hello, hi, are you there, right? You're giving them different tones, different intonations, and they're recording those things. And if they can get enough of those from you, they can then go create your voice. And what'll happen is they'll leave a voicemail. And I think you've probably seen some of the ones that are sort of rampant on the internet right now, you know, hi, it's your daughter, or it's your mother, I need money, right? I've been in an accident. My phone's dead, I'm borrowing the truck driver's. And that's where they're going with this. And it's just going to become faster, cheaper. And those are the two pieces that are concerning. Once it becomes really cheap and affordable, then the attacks start to really rise. That's kind of crazy. You know, we talked a little bit about the ways that phishing is going to change.
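The urgency and break-protocol cues Bill describes ("I need you to do this, I'm not at my computer") are concrete enough to screen for heuristically. Here is a minimal sketch in Python; the cue list and the scoring function are illustrative assumptions for this transcript, not any real product's detection logic:

```python
import re

# Hypothetical urgency cues often seen in smishing lures -- an
# illustrative list, not an exhaustive or production-grade one.
URGENCY_CUES = [
    r"\burgent(ly)?\b",
    r"\bright away\b",
    r"\bnot at my (computer|desk)\b",
    r"\bgift cards?\b",
    r"\bwire (the )?(money|funds)\b",
    r"\bneed (a|this) (huge )?favor\b",
]

def smishing_score(text: str) -> int:
    """Count urgency cues present in a message; higher means more suspicious."""
    t = text.lower()
    return sum(1 for pat in URGENCY_CUES if re.search(pat, t))

msg = ("Hi, this is your boss. I'm not at my computer and need a huge favor, "
       "please grab some gift cards right away.")
print(smishing_score(msg))
```

A real filter would combine this kind of signal with sender reputation and header analysis rather than keyword counts alone.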
How have you found that AI has changed phishing attacks? Well, you can now do it live. So instead of just creating a generated message that fires out and either is left as a voicemail or just plays through, there's actually, I believe there's a security company, I think it's called Juniper, that will call you with this British-sounding voice, and they'll say, can I have 30 seconds of your time? And they got me. Honestly, I'll admit they got me, because it sounded good. I was in my car. I had about three minutes to go in the drive. I said, yeah, I've got 30 seconds. And, you know, they'd ask a question and I'd give an answer. And then I started to notice this strange, very specific two-to-three-second pause at the end of every sentence before they came back on. And I was like, that was AI. That was definitely AI. And so I got the call again: hi, you know, this is so-and-so from Juniper, can I take 30 seconds of your time? I said, you're an AI. Phone went dead. Like, literally. So they're working on this and using it, and I'm willing to bet they're using it to absolutely make it better and better. But what you can now do is you can actually do deepfake voice live, right? So I could fire up an interface that my voice would go through, and while you see my mouth talking, you're going to hear, I don't know, my boss's voice, President Obama's voice, you know, any of these voices can come through. If you can just get enough samples to build it, you can then create it. So that way, when people say, well, what about this? You can answer, and it sounds like a real person. I recognize the voice, and it makes it more real to them. Yeah, this technology is getting so scary. And for video, video phishing, the video is coming as well. Yeah, it is all coming super quick.
Well, and I think what I think about when it comes to things like this, you know, people like the three of us, we are public personalities, you know. I mean, in some ways we're kind of making it easy for people, because we've got a ton of videos out there on the internet. So people have access to our voices and our intonation and all of that sort of thing. And so I think it is interesting. And it certainly gives me pause. Well, you know, go back to two-factor, right? Think of the concept of two-factor and go back to the old-school shared secret. Family members, especially kids. I learned this when my child was young: we had a shared secret. So if somebody was going to come pick them up at school, right, and it wasn't mom or dad, and it wasn't scheduled, we hadn't told them beforehand, if, you know, I got stuck at work, even if it was granny and grandpa, they had to have the word, right? They had to have the shared secret. And we're going to see that more. I think you should see that more. It's a very easy defense for it. So if you get that mother calling, I've been in an accident. Sure, what's the secret word, right? And they're either going to hang up or say, I don't remember. And if they don't remember, call back out of band, right? Find a way to get a hold of them through your usual channels. That's clever. That is clever. But you know, as I think about this, Bill, is it fair to assume that given the sophistication of attacks enabled by AI, the only viable approach is to rely on more machines to sort of identify signals that are indicative of anomalous behavior, like machine against machine, algorithms to watch for algorithms? Are we seeing that happen? Absolutely. Absolutely. So that company I mentioned, Resemble AI, that's actually what they're doing. A couple of years ago, they started thinking about, you know, if we can build these things, can we detect these things as well?
So they actually have an algorithm that can detect, I think with something like 90 to 95% certainty in under 300 milliseconds, a real or a deepfake voice. And you can use those in places like call centers, where you can patch it in (you know, "these calls are recorded for training purposes") and feed the call into it to see if you're being fraudulently scammed on that side. So, you know, no, that is not Bill Harmer. No, he did not authorize the bank transfer, et cetera. And then you can use it after the fact, in the investigation, the forensics, to find out what happened; you'd be able to do it there. It's difficult when you get into things like cell phones, because Apple and Android are very protective about getting in between the user and the phone call. So actually doing it live, for the moment, I think, is not going to be there. But we'll see the advent of more small models, where you're less carrying around a phone and more carrying around a small AI that handles tasks. Because if you think about the whole concept of your phone and all the apps on it, right, your phone is there to make calls; the apps are there to read email, do banking, find directions, blah, blah, blah. They're all task-based. When we get to the point where you're running a small model on your phone, you don't need the apps, you just simply need your model to take your instruction. You know, it's time, go pay my credit card. It knows how I pay my credit card, knows how much I pay, typically whether I pay it in full or whether I pay the minimum, blah, blah, blah. And it will go through those types of tasks for you. And at that point, I believe it will be able to also listen in on the phone calls and be able to take action. So when you look at a Zoom, or even a thing like we're doing right now, we could have an outside recorder that's doing note-taking, those types of pieces.
And I think you'll start to see that come through on phone calls, where it would then fire up, and maybe even, you know, maybe the telco doing it, AT&T or Verizon, and then you've got them listening in on your calls. So, you know, where does protection become Big Brother, right? Right. Right. That's, yeah, that's a great question. So let's talk a little bit about phishing emails. They tend to use specific linguistic patterns, you know, things like a sense of urgency, use of a bulk greeting, links to malicious websites, and attachments. I'll be one hundred percent transparent here and tell you that I trust nothing. I don't trust anything from anybody when I get a link. I mean, I am looking so carefully at whether or not I should click on it. And I know, I mean, part of it is when you're immersed in the space, of course, you bring that knowledge and expertise to the table, right? But anyway, in addition to linguistic analysis, can you share an example of how an AI system can use NLP to extract and analyze entities that are included in our email content? Sure. You know, you're going to see patterns that are not typical of humans, which is really interesting, at least for now. You're going to see patterns that are more grammatically correct, and most humans aren't grammatically correct. Like, we tend to have slang involved in the way we do it. And we're starting to see that in typing, right? Like, you can actually type, I think, "wanna," or you can type, you know, "gotcha," and it treats it as a real word nowadays. But I think what you're going to see is you're going to see that in combination with things like the headers that are inside it as well, right? That's the thing about email: we could eliminate, I think, a fair amount of this to a certain degree by implementing DMARC across the board, right?
If everybody implemented it, we would start to have some ability to manage around the fake email accounts, or the overlays that sit in front of them to look legitimate, because we see the convenient display name, not the full email address. And if you've got an AI that's able to read through these things, find the ones that don't match, and start to pop them up. And Google and, you know, Microsoft both do a fairly good job of this, where it throws up the warning and says headers do not match, or email does not match, or this looks internal but it's come from external. Those things take that load off your plate so that you don't have to try and figure it out. But yeah, you've got to start looking at the email as a whole. And that's why, truthfully, I think they'll get out of the email game and start getting more into the SMS and into the voicemail, because it gets you out of the corporate governance. SMS is not very well governed even today. Yeah. And then it's easier to click the link because, you know, the SMS comes in and you can't right-click it and look at it the way you can with email, right? So sometimes, you know, even I'm doing the select-all, copy, paste it into a notepad and try to see what's beneath it. Yeah. You know, I have been deluged with smishing attacks in the last couple of months, and it is so interesting. And it's from people, all kinds of people throughout my organization, and it's the same thing, you know, this is John, and, you know, like, need a huge favor, and the language isn't quite right. So I know, and I mean, I know what these attacks look like anyway. And I've gotten to where, and it's like, there's no reason for me to respond, but every now and then I'll just, you know, because it makes me feel good.
I'll respond like, let me guess, you can't get away and you need me to run out and get some Apple gift cards. I'm on it. But you know what you've done? You've given them a live end to that number. That number is now tagged, put into another category where they can resell it. So, yeah. Try not to do that. Yeah. Never respond. Never, never, even as much as you'd like to. And you see it with spam phone calls, right? A spam phone call comes in, and they're pretty good about, you know, warning you it's spam. Occasionally you're not paying attention, and as soon as you answer it and get no answer, you get three more calls after that, because what you've done is you've confirmed a validation of the number. It's live. There's a person on the other end, right? And they can resell those to somebody else on the dark web somewhere. Yeah. All right. So learn from me, people. Right. What is it? Swipe, delete, and report junk. Learn from my mistakes. So talk with us a little bit, if you will, Bill, about AI-powered risk-based authentication and why it matters. Well, so I'm a firm believer that there's nothing in this world that AI will not make better or worse depending on how we touch it, but it will touch everything. And authentication today is problematic at best. I don't know if you guys were aware, a couple of weeks ago it was revealed that the single largest database of compromised credentials was found in the wild: 26 billion records. Yeah, it literally appears to be 10 years' worth of compromised credentials brought into a single database, cross-referenced and indexed beautifully. And of course, you know, next thing you know, I'm seeing people with Twitter accounts getting popped, because yes, they have a complex password; yes, it was very long; no, they didn't put two-factor authentication on, and they've had years to brute-force this thing.
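One practical defense against exactly this kind of credential dump is checking passwords against breach corpora without ever revealing them. The Have I Been Pwned "Pwned Passwords" range API does this with k-anonymity: only the first five hex characters of the password's SHA-1 hash ever leave your machine, and the matching against returned suffixes happens locally. A minimal offline sketch of the hash-splitting step (the function name is ours, and no network call is made here):

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple:
    """Split the SHA-1 of a password into the 5-character prefix that would
    be sent to a range API and the suffix that is matched locally, so the
    remote service never sees the full hash, let alone the password."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password123")
print(prefix, suffix)
```

The caller would then fetch the list of suffixes published for that prefix and look for its own suffix in the response, entirely client-side.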
So the concept of using AI around authentication, it allows us to move faster and to always authenticate. That's going to be the future, right? We've been talking about multi-factor authentication. And for anybody listening, I strongly recommend you turn on any form of multi-factor. SMS is probably the lowest form; if that's all you have, do so. But otherwise, you know, YubiKeys, Google Authenticator, any of those. But until we get to a point where systems don't require passwords, we have to do this. But AI can come in, as we bring it into authentication, and start to authenticate at all times. Like, for instance, during this conversation, you know, perhaps I've got email running in the background and it's going, well, he's not paying attention to email. Why is he authenticating? Maybe I'd better check to see if it really is him. And we've done this in light forms with improbable speed, which would be, we call it the Superman effect. I log in from my hotel here in Toronto, and then five minutes later, I log in from Kazakhstan. Improbable speed. Superman can't go that fast, right? So we know that, and that's a little bit of machine learning that says, oh, okay, that's not probable. We're going to block the second one, or maybe we're going to block both; we don't know which one's right. But we're going to start seeing that from an authentication perspective, because right now we fire and forget on authentication. We authenticate somebody, we let them in, and then we hope that the permissions control what they do. And what we need to be doing is we need to be watching what they do and then adjusting their authentication as those things happen. So if I suddenly clicked on an HR link, even if it's by accident, oh, wait a minute, he's done something that's anomalous. Let's re-authenticate.
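The improbable-speed check Bill describes (Toronto, then Kazakhstan five minutes later) reduces to a great-circle distance divided by elapsed time. Here is a minimal sketch; the 900 km/h airliner-speed threshold and the coordinates are illustrative assumptions, not any product's actual policy:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def improbable_speed(lat1, lon1, t1, lat2, lon2, t2, max_kmh=900.0):
    """Flag a pair of logins whose implied travel speed beats a jetliner.
    t1/t2 are Unix timestamps; max_kmh ~900 is a rough airliner cruise speed."""
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return True  # simultaneous logins from two places: always suspicious
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Toronto (43.65, -79.38), then Almaty, Kazakhstan (43.24, 76.95) five minutes later
print(improbable_speed(43.65, -79.38, 0, 43.24, 76.95, 300))
```

A real system would feed this signal, alongside device and behavioral anomalies, into the kind of continuous re-authentication decision Bill goes on to describe.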
Maybe we're going to bump up the authentication: we go to two-factor, we go to a third party, we escalate to a supervisor and confirm, right? And that's the kind of thing that we'll get to. Well, we need to get there quickly. Yes. Absolutely. So, Bill, you know, you and I chatted a couple of times, and during one of our conversations, you shared this really interesting concept of a digital identity that acts as a single truth. And we might get there. So can you take a minute and share your thinking about this? Yeah. So there is a convergence happening, right? I've been on the internet for 25-plus years. I have fingerprints and footprints all over the internet from me as a person. And we're getting, you know, the AVP, the Apple Vision Pro, came out and everybody's wowing about the augmented reality. The Meta Quests have been out, and we are moving towards that immersive digital identity. And that's where I believe this goes. We have historical things: in the US, Social Security numbers; in Canada, social insurance numbers; all fully compromised, every one of them. And it's a single factor, right? It's Bill, he's this nine-digit number, right? That's all it is. We need to get to a place where we have a digital identity that is tied inextricably to the human. And I want to be very clear in this, because it borders on the world of anonymity. I'm not saying we do away with anonymous access to different things. Freedom fighters, whistleblowers, journalists, people in duress, they need access to services through anonymous methods. But what I'm saying is there should always also be a way to absolutely say, when I want to, yeah, this is me, you're seeing my video, you've got my email, and here's my digital identity to validate it, right? And I think we will get there. There's talk of it in, excuse me, both the US government and the Canadian government that I know of. Yeah.
But I think it's just, it's going to come. It has to show up somehow, sooner or later. And I'm hoping sooner rather than later. Yeah, absolutely. No, no argument from me on that front. So, I know that you see a lot and hear a lot in your conversations with other CISOs and their teams. What do you see as the biggest challenges that CISOs and their teams are kind of struggling to get their arms around these days? Interestingly enough, I just did a panel this morning on one of the topics. And right now it's the legal impact of being a CISO. There is a lot of question in the industry of, should I do this? Do I belong here? We've had a couple of cases. Joe Sullivan's case has been, I don't know if he's appealing, I'm assuming he will be, but he was, you know, convicted of misprision of a felony and obstruction. Tim Brown at SolarWinds has been named by the SEC, and he wasn't even CISO at the time; he is now. And it's tough. I mean, I'm partially glad this is happening. I don't wish this on anybody, but it is happening, and it is legitimizing our profession a little bit more. We're showing up at the table now because we have skin in the game. Yeah, that's good. And I think a lot of CISOs are now trying to decide, well, I don't report to the CEO, I report to a CIO or farther down the chain. Why should I carry a chief title, with the responsibility and the liability that come with it, if I don't have the authority to manage it? Right. And I think that's really what I'm seeing a lot of conversations around. Everything from renegotiating employment contracts to include limitations on liability, D&O insurance if you're an officer, parachutes, right? You know, at what point do you walk away, and how do you walk away? Yeah. Oh, well, and I think about, Bill, I think about the fact that I feel bad for some of the CISOs that I know, because their hands are flat-out tied.
They'll go to the CIO and say, look, dude, we're at risk. I can think of a big retailer, a CISO that I know, and he went to the CIO and said, we're at risk. And the CIO said, yeah, we don't have the budget for it. Yeah. Well, yeah. I think that's literally the crux of the SolarWinds case against Tim Brown. He was saying, we don't have the resources to do what we say we do. And they went ahead and did things and didn't fund it, and then put out statements to the SEC and put statements on their website about the efficacy of their security program. And, you know, what does that do? Like, I think he was head of security or he was a director or something at the time. Like, does he go to the press? Maybe? You know, and one of the recommendations I made this morning was that I think every CISO should have a year's salary banked, so that way you can walk away when that situation happens. Because every CISO will sooner or later find themselves in that place: I don't agree with it, and I'm being told to shut up and move forward with it. And at what point do you just go, no, we're at a crossroads, I'm not crossing that line? Because, you know, there may be a legal gray area, but do you want to be in a legal gray area when the SEC comes back, or the FBI comes back, or any of those? Well, and I think that it also, you know, puts a huge onus on people in those roles to come from a place of paranoia, you know, and to document everything, and you can't destroy it. You know what I'm saying? I mean, because the potential for liability is great. And, you know, I mean, I think that when you say something like have a year's salary banked, I would guess that you say that realizing that for most of us, that's not something that's all that easy to do, right? So it's easy for us to say, but it's not that easy to do.
And so, you know, therein lies the rub. You know, one of the things that Joe and I have talked about in prior episodes was another challenge: you mentioned, now I feel like I've got a seat at the table. Well, that's great. But part of what we're seeing happen, or the reality of this landscape, is that, you know, few boards are comprised of people who have any kind of security experience, or who are former CISOs or anything else. So they're in a position where they're making decisions about this company and about keeping it safe without really having any, you know... It's kind of like, I think of it a little bit sometimes when you look at the congressional hearings, when they're talking about, you know, TikTok or social media or whatever, and some of these men and women have no idea, you know, what's really happening in the technology sector. So it really is a very interesting challenge. And quite honestly, I mean, how CISOs sleep at night, I am not sure. It's a really hard job. It really is. And I feel for my colleagues, because the more this goes on, the more I'm seeing it almost divide, in that some are just like, I'm done, I'm getting out. I don't report up. I'm going to ditch the chief title. I'm just going to be head of security and report to the CIO, and then the CIO can be on the hook again. That is one, honestly, that's a very viable option. I think the others are simply going, no, I'm fighting to change the reporting structure and get myself in there. I'm going to take the responsibility, and I'm going to demand the authority that goes with it. The two need to coexist in every situation. So it's, yeah, it's an interesting one. It's ever evolving.
And I think, when you look at what the SEC did, they left off the one requirement that everybody thought was coming with the new regulations for disclosure four days after a material breach: that there would be certified security personnel on the board, independent, just like Sarbanes-Oxley, when they had to have independent financial expertise on the board, that they would put a security person on. And they left that one off. And I don't know why they did it. Like, I can make guesses at why they did it. Personally, it seems like they're setting it up for somebody to fail horribly. Like, that's the only thing; otherwise it makes no sense. There aren't a lot of CISOs out there. Like, there aren't enough to fill all the roles, I think, and maybe they were worried that would be a problem. But they did put the control in place that said, you have to disclose a material security breach within four days of determining that it's material. So they gave them wiggle room in that. A company can spend six months deciding if it's material, you watch, it will. Oh, I need to see what the next revenue is to see. But again, it's coming down to a board that has zero cybersecurity expertise on it making that decision. Yeah, it's scary. And they wonder why, what is it, you'll keep me honest, two years is the average tenure of a CISO? Yeah. Well, that is not surprising in any way. So as we wrap the show, I'm going to ask you, Bill, to give us one piece of advice that you will leave our audience with as they work to get their arms around AI security and the challenges that it presents. What's your best advice? Oh, so the best advice I can give somebody on getting their arms wrapped around this is don't sweat the small stuff. Everything's going to break in life. Everything's going to go sideways at some point. But you've got to find your true north, where you're headed. And the way to do that is to immerse yourself in the business side.
If you understand your business, you're able to speak to the executive team at a business level. And that means the word, you know, CVE is not coming out of your mouth. Vulnerability is not coming out of your mouth. Patching is not. That stuff's all gone. If you're able to talk to them about your business and stay on your true north with that business talk, you will gain a lot of respect and a lot of understanding that will help you make headway into it. But you can't sweat the small stuff. Lots of things are going to fall by the wayside. They're going to fail. You've got to keep your true north moving, because otherwise you will be, you know, like a hamster on cocaine, you'll just be all over the place trying to do all sorts of stuff that really has no meaning in the end. Because we're all in business to make, well, I shouldn't say to make money, but most businesses are in business to make money. Others have some altruistic goal, right, but there's always one goal for the business. And if you know what that goal is and how they achieve it, you can help them achieve it by reducing and mitigating their risk to that business. And that's what our job is. That's what our job is. Well, perfect advice and a great place to wrap our show. Bill Harmer, thank you so much for joining us today. And Joe, thank you for hanging out again. It's always a pleasure to start my week with you and with our brilliant guests. So Bill, we will definitely be revisiting this conversation, because you shared some terrific information, and we really appreciate it.