Absolutely. I think if I were to net it out, Jeff, what I'm sensing is there's a whole movement to shift security left, which is this idea of IT stepping up as the first line of defense: reduce cyber exposure, take care of patching and multi-factor authentication, reduce the attack surface, intrinsic security, right? So, you know, DevOps, DevSecOps, take care of it right up front before the apps even get built, right? Then there's another movement to shift things right, which is to take care of the new aspects of the attack surface. What hackers always take advantage of are the areas where they sense we're unprepared. For a long time they've seen us being unprepared in reducing the attack surface, and then they go after the new aspects of it. And what are those? IT, IoT, OT, data as an attack surface, and the edge as an attack surface, right? So these are areas where there's a lot of activity and a lot of innovation. You know, we're on the floor here; if you walk the corners, shifting left and shifting right, as in all the new aspects of the attack surface, I'm seeing a lot of conversations and a lot of innovation in that area.

Well, I think it also boils down to real-world examples. We have to really understand the demographics we're working with. Today, really for the first time in history, we have four generations working side by side in the workforce. So we have to understand that people learn differently; training should be adjusted to the type of people we're teaching. But phishing doesn't just boil down to clicking on links. Phishing also boils down to tricking somebody, gaining someone's trust, and it can come in many different forms. For example, think of social media. How do people connect? We're connecting across social media on many different platforms. I'll give a very easy example: LinkedIn. LinkedIn is the business platform. We're all connected on LinkedIn.
Why are we connected on LinkedIn? Because that's a social platform people feel safe on, because we're able to connect to each other in a business forum. However, think of the person who's getting their first job with an organization. Maybe they're a project manager, and they're working for Bank A. Excited to be working for Bank A. Hey, I'm going to list all the projects I'm working on. So here's now my resume on LinkedIn: I'm working on Projects A, B, C, D, and this is the manager I report to. Perfect, there's some information sitting there on LinkedIn. Now, what else I'll tell you is that you might have somebody who's looking to get into that bank. What will they do? Let's look for the lowest-hanging fruit. Ooh, this new project manager. Oh, I see they're working on these projects and reporting to someone. Well, I'm not a project manager; I'm a senior project manager from a competing bank. I'm going to befriend them and tell them I'm really excited about the work they're doing. So they're social engineering their way into their friendship, into their good graces, into their trust. Once somebody becomes a trusted source, people share information freely. So people are putting too much information out there on social media, trusting too easily, opening the door for more than a phishing attack, and things are rapidly getting out of control.

Right. So my co-founder and I both came from the world of being practitioners, and we saw how limited the space was in actually changing human behavior. I was given some animated PowerPoints that said, use this to keep the Russians out of your network, which is practically a joke when your job is on the line. So I took a huge step back and said, there are other fields that have figured this out, behavioral science being one of them. They use positive reinforcement and gamification. Marketing and advertising have figured out how to engage the human element; just look around the RSA floor.
And there are so many learnings about how we make decisions as human beings that can be applied to changing people's behaviors in security. And so that's what we did.

New adventure. So this is my first early-stage company. We're still seeking Series A; we're a young company, but our mantra is: we are the data value company. They've had this very robust analytics engine that goes into the heart of data and can track it and map it and make it beautiful. And along came Scott McNealy, who actually sits on our board. Oh, does he? And they said, we need someone, it's all happening. So they asked Scott McNealy, who is the craziest person in privacy and data that you know? And he said, oh my God, get the identity woman. So they got the identity woman, and that's what I do now. I've taken this analytics value engine and I'm pointing it at the board. As Grace Hopper said, and as I've always repeated, data value and data risk have to be on the corporate balance sheet. So what we're building is a data balance sheet for everyone to use to actually value data.

For me, it starts with technology. Look, we've only got so many security practitioners in the company, and in your email example, we've got to defend every user from those kinds of problems. So how do I find technology solutions that take that load off the security practitioners so they can focus on the niche cases, the really, really well-crafted emails, and take that load off the user? Because users just are not going to be able to handle that; it's not fair to ask them. And like you said, it was just poorly timed. So how do we make sure we're taking on that technology load, identifying the threats in advance, and protecting users? I think one of the biggest things that Chris and I talk a lot about is how our solutions make it easier for people to secure themselves instead of just providing only a technology advantage.
So the virtual analyst is able to sit on premises, so it's localized learning: collect threats, understand the nature of those threats, look at the needles within the needles, if you will, make sense of that, and then automatically generate reports based off of that, right? So it's really an assist tool that a network admin or a security analyst can pick up and virtually save hours and hours of resource time.

So we have what we call a threat research group within the company, and their job is to take all the data from the sensors we have. I mean, we look at about 25 petabytes of data every day; all our solutions are cloud solutions as well as on-prem. So we get the benefit of basically seeing all the data that's hitting our customers every day. We block about one million attacks every minute, like every minute. One million attacks every minute, right? We protect over three million databases, and we've mitigated some of the largest DDoS attacks that have ever been reported. So we have a lot of data we're seeing, and the interesting thing is that you're right: we're using that threat research data to see what's happening and how the threat landscape is changing, which guides us on how we need to augment and add to our products to prevent it. But interestingly, we're also consuming AI and machine learning in our products, because we're able to use those solutions to do a lot of attack analytics and a lot of predictive research for our customers that can guide them about where things are happening. Because what's happening is that before, a lot of the attacks were just fast and furious; now we're seeing a shift toward slow and continuous, if that makes sense. So we're seeing all these patterns and threats coming in.
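The shift from "fast and furious" to slow, continuous attacks described above is the kind of pattern a rate limiter alone misses. As a rough illustration (not any vendor's actual detection logic; the thresholds and event format are assumptions), a detector can track per-source activity over a long window and flag sources that stay under burst limits but remain steadily active:

```python
from collections import defaultdict, deque

# Illustrative thresholds -- these are assumptions for the sketch.
WINDOW_SECONDS = 3600   # look back one hour per source
BURST_LIMIT = 60        # events/minute a classic rate limiter would catch
STEADY_MINUTES = 45     # active in this many distinct minutes -> "low and slow"

class LowAndSlowDetector:
    """Flags both bursty sources and quiet-but-persistent ones."""

    def __init__(self):
        self.events = defaultdict(deque)  # source -> timestamps (seconds)

    def record(self, source, ts):
        q = self.events[source]
        q.append(ts)
        # Drop events older than the sliding window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()

    def is_suspicious(self, source):
        q = self.events[source]
        if not q:
            return False
        # Fast-and-furious: a burst in the most recent minute.
        last_minute = sum(1 for t in q if q[-1] - t <= 60)
        if last_minute > BURST_LIMIT:
            return True
        # Low-and-slow: few events at a time, but active in most minutes.
        active_minutes = len({int(t // 60) for t in q})
        return active_minutes >= STEADY_MINUTES
```

A source sending one request per minute for fifty minutes trips the `STEADY_MINUTES` check even though it never exceeds the burst limit, which is exactly the pattern a per-minute rate limiter lets through.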
So we're fighting against those technologies, like AI, but we're also using those technologies to help us decide where we need to keep adding capabilities to stop it. The whole bad bot thing wasn't a problem a number of years ago. So it's ever-changing in our world, which frankly makes it an interesting place to be, because who wants to be in a static, boring place?

Well, I mean, whether you're a good packet or a bad packet, you have to traverse the network to be interesting. We've all put our phones in airplane mode at Black Hat or events like that because we don't want to get owned. Devices are really boring when they're offline, but they're also really boring to attackers when they're offline. As soon as you turn them on, you have a problem, or could have a problem. But as things traverse the network, what better place to see who and what's on your network than on the gear? And at the end of the day, we're able to provide that visibility and that enforcement. So as you mentioned, 2020 is the year of awareness for us with the threat-aware network. We're able to do things like look at encrypted traffic and do heuristics and analysis to figure out, should that even be on my network? Because when you bring traffic into a network and have to decrypt it, there are privacy concerns in these times, but it's also computationally expensive. So it becomes a challenge from both a financial perspective and a compliance perspective. We're helping solve that so you can offload that traffic and ensure your network is secure.

So when we started developing our cyber recovery solution about five years ago, we used the NIST Cybersecurity Framework, a very well-known standard that defines five pillars of how organizations can think about building a cyber resilience strategy.
A cyber resilience strategy really encompasses everything from perimeter threat detection and response all the way through incident response after an attack, and everything that happens in between: protecting the data and recovering the data, right? And critical systems. So I think of cyber resilience as that holistic strategy of protecting an organization and its data from a cyber attack.

Yeah, I think the human element is the hardest part, mindful of this conference and its theme, the human element. The hardest part about this job is that it's not just mechanical issues and routing issues and networking issues; it's about dealing with all types of humans. Innocent humans who do strange and bad things unknowingly, and malicious people who do very bad things by design. And the research suggests that no matter what we do in security awareness training, some 4% of our employee base will continually fail security awareness tests when we phish them actively. So one of the things we need to do is use automation and intelligence to comb through all of that data and make a better-informed decision about which risks you're going to mitigate, right? And for that 4% who are habitually abusing the system and can't be retrained? Well, you can isolate them, right? And make sure they're separated and not able to do things that may harm the organization.
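The "isolate the habitual 4%" idea above amounts to a simple risk policy: track simulated-phishing failures per user and step up the response for repeat offenders. A minimal sketch, assuming an illustrative threshold and action names that are not any specific vendor's policy engine:

```python
from dataclasses import dataclass

# Illustrative assumption: three failed phishing simulations marks a
# user as a habitual clicker who should be isolated rather than retrained.
FAILURE_THRESHOLD = 3

@dataclass
class UserRisk:
    user: str
    failures: int = 0

    def record_phish_result(self, clicked: bool):
        """Record the outcome of one simulated phishing test."""
        if clicked:
            self.failures += 1

    @property
    def action(self) -> str:
        # Habitual clickers get isolated: e.g. sandboxed attachments,
        # stripped external links, tighter monitoring.
        if self.failures >= FAILURE_THRESHOLD:
            return "isolate"
        if self.failures > 0:
            return "retrain"
        return "none"
```

A user with no failures gets no action, a first-time clicker goes to retraining, and only the persistent failures cross into isolation, keeping the heavier controls focused on the small high-risk group.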