Aaron is the CTO and co-founder of Verica, a chaos engineering startup, and is a frequent author, consultant, and speaker in the space. So we are glad that Aaron has joined us today. Without further delay, over to you, Aaron. Well, thank you very much for the kind introduction. It's a pleasure to speak at Agile India. The program for Chaos Day is quite amazing, so I'm honored to speak here. Let me go ahead and share my slides. Today we're going to talk about security in chaos engineering. At the bottom of every slide you should see my Twitter handle, @aaronrinehart, and behind me you should see my email, which will also be at the end. All right. Some of the things we're going to cover today are how we combat complexity in software. We'll go over chaos engineering, walk toward security chaos engineering by building a foundation on chaos engineering, talk a bit about the role of security and resilience, and then we'll talk about security chaos engineering itself. A little bit about me. I'm the CTO and co-founder of Verica. My co-founder is Casey Rosenthal, one of the creators of chaos engineering; he ran the chaos engineering teams at Netflix. Before I co-founded Verica with Casey, I was the chief security architect at UnitedHealth Group, the largest private healthcare company in the world. I spent most of my career as a software engineer before I got into security, and parts of it at NASA in safety and reliability engineering, at the Department of Defense, and at a few other private-sector businesses. I'm a frequent speaker and author on chaos engineering and security. I just finished the O'Reilly book on security chaos engineering; it comes out soon, and I'll give everyone a link toward the end of the presentation to make sure you can get a copy.
I also wrote chapter 20, the security chaos engineering chapter, in the most recent chaos engineering book. And I'm the leader behind ChaoSlingr, the open source tool that applied chaos engineering to security. So just to set the ground on where we're heading: from the problem space, incidents, outages, and breaches are costly. That's not news to anyone, but they seem to be happening more and more often, and that's what we're going to talk about today. It's an obvious problem. You can go to any news outlet of your choice, and services and technology seem to be encountering outages and incidents on a regular basis. So why do they seem to be happening more often? Well, it's important to understand complexity in software and where it comes from. Our systems have essentially evolved beyond our human ability to mentally model their behavior. If you've never seen these diagrams, what you see in front of you are Amazon.com and Netflix; every dot is actually a microservice. All those dots are the microservices that facilitate Amazon.com and Netflix. It used to be just Facebook, Apple, Netflix, Google, the FANG companies, that were building these large microservice architectures. The problem with the advent of cloud, continuous delivery, and microservice architectures is that our systems are now so large, and change so rapidly, that it's very difficult for a human to mentally model and understand their behavior. Now it's no longer just Amazon and Netflix; everyone else is picking this up and starting to encounter the negative effects of complexity at scale. Sidney Dekker wrote a book called Drift into Failure, along with a couple of other great books in the space. He's one of the world's authorities on safety engineering and complex systems, and he writes a lot about airline accident investigations.
So never read any of his books on a plane flight. But he likes to say that the growth of complexity in society has gotten ahead of our understanding of how complex systems work and fail. I'm going to dig more into this term, complex systems, and how it applies to software, but a lot of Sidney Dekker's work directly applies to what we do. So what do I mean by complex systems? What you see in front of you is actually a snapshot from Netflix's Vizceral, an observability tool; Casey Rosenthal actually wrote the first version of it. The reason I have this up on the screen is that when I say complexity, I mean it's very difficult for any human to ascertain what's going on at any given point in this microservice architecture. Each larger circle is a microservice, and all the dots are requests. It's very difficult to understand all the intertwined relationships and how services affect each other, especially in terms of cascading failure. So where does this complexity come from? Well, a lot of the more modern techniques we use, such as immutable infrastructure, infrastructure as code, continuous delivery, CI/CD, automated canaries, and feature flags, are all helping us deliver value to customers faster, and that's what we need to be doing. But they're also increasing the complexity of the process by which we build software. Well, what about security? The state of security is still mostly monolithic. We think about security design and architecture in a monolithic sense. It's getting better: Sounil Yu, formerly of Bank of America in the United States, wrote a model called Distributed, Immutable, Ephemeral, and that kind of software-engineering-based approach to security is starting to pick up speed, but it's not quite there yet. It's not widely adopted.
Likewise, DevSecOps is beginning to pick up steam, but it's not yet widely adopted. Another issue with how we approach security is that our solutions are still mostly expert systems. What I mean by that is they require some kind of domain knowledge. You can't just go pick up a Palo Alto firewall and immediately start using it; it requires training around Panorama and how Palo Alto firewalls operate in order for it to function. Whereas with a lot of software engineering tools, you can just go to GitHub and start working with a new piece of code. And we're still predominantly designing and implementing security in a stateful way in a mostly stateless world. So we have this gap between how we design and build security and how we're building and delivering software. So what is the answer to all this complexity? Well, the logical answer would be to simplify, right? Newsflash: software has officially taken over. What you see on the right-hand side here is the new OSI model: software, software, software, software, software. Software has officially taken over just about the entire stack. Another important thing to remember about software is that it only ever increases in complexity. There's an old saying in software that there's no problem another layer of abstraction can't solve. Well, at some point the abstractions start to break at speed and scale. The nature of software complexity was well written about in the 1980s, in a paper called "No Silver Bullet" that articulated the two different domains of complexity in software: one is essential complexity, the other is accidental. Essential complexity is based on concepts such as Conway's law.
What I mean by that is that organizations are destined to design computer systems that reflect the way they communicate. The way the business you work in operates mirrors itself in the systems being built. You can't really change the complexity inherited from the business unless you change the business. Accidental complexity, on the other hand, comes from the process of how we as humans build things and put things together. There are two schools of thought here: some folks believe you can actually reduce the complexity, but really most people think you're just moving it around. The answer is not to reduce it or to move it around; the answer is to learn to navigate it, to understand the system and the difference between how you think the system works and how it works in reality. Dr. David Woods is one of the world's foremost authorities on resilience engineering, and it's remarkable to read anything from him because it makes precise sense in software. He says that as the complexity of a system increases, the accuracy of any single agent's own model of the system decreases. So what does all this have to do with my systems? The question I ask you is: how well do you really understand your systems? I mean, really? Because in reality, you have to remember, systems engineering is a very, very messy exercise. I have theories on why we don't remember this, but it's a messy exercise. In the beginning, before I co-founded Verica, I was an architect; that was my function. And we love to work in diagrams and representations of the system. But in reality, the system never, ever looks like that. In the beginning we have these nice diagrams. We've got the code, we've got the image, we've got the staging and production environments. The plan is clear from day zero.
Well, the system almost never reflects what we think it is. After a few months, what ends up happening is that we start to learn how the system actually functions and operates through a series of unforeseen events. I'll tell a story here in a second. So a few months after going live, there's an outage on the payments API due to a hard-coded token. Of course, you go back and fix that. The day after that, the web application firewall has an outage and starts dropping traffic. You start to learn what you really didn't know about your system through these unforeseen events, and it just continues to go on and on, further and further. Our system has slowly drifted into a state of unknown. That's really what we're trying to combat with chaos engineering: to start to navigate this complexity and course-correct our understanding of how things are really working. One of the stories I was going to tell is that early on in our startup journey, we had a conversation with one of the world's largest payments providers. They were telling us about a legacy system they had. It was the core application, the flagship application for the company; it made all the money. But they needed more flexibility and wanted the benefits of Kubernetes. So they were worried about moving off that legacy system onto something they weren't sure about, that they didn't really trust. Because the engineers understand the legacy system. It rarely has an outage. They're competent in the technology. They're comfortable. They trust that system. But I started asking myself: was that legacy system always that way? Or was it through a series of unforeseen events that they learned how it actually operated and got more comfortable and confident in the way it functioned?
And partly that's where we're coming from with chaos engineering: it's a proactive way to discover those same sorts of unforeseen events without encountering customer pain, or pain as an engineer. Our systems, in the end, have become more complex and messy than we remember them. So what does all this have to do with security? I'm getting around to it. It's important to remember that the normal condition of a system is to fail; it's humans that keep it from failing. But a core part of how we humans operate is that we need failure to learn and grow. It's essential. And it's no different for the things we've built. Scott Sagan, and I stole this from John Allspaw, likes to say that things that have never happened before seem to be happening all the time. That's sort of the ethos of what's happening. So how do we typically discover when our security measures fail? We typically discover it through security incidents of some kind. Security incidents are not an effective means of detection, because by then it's often already too late. We have to be more proactive than that. This next point is actually taken from Sidney Dekker: no system is inherently secure by default. It takes humans to make it that way. In the security world, people often point to human error and human factors as root causes, and that's a fallacy, because it's humans who actually create security and create safety. It's also important to recognize, in terms of chaos engineering and incidents and changing your approach, that people operate differently from a cognitive perspective when they expect things to fail. When there's a security incident, people tend to freak out. They're worried: is this the big security breach? Was it that code I pushed last night?
I knew there was an issue with that module, or with that image I got from a third-party repo. People are freaking out and worried about their jobs. They think they might have caused something. This is not a good learning environment. Often the focus is not on "let's figure out what happened"; it's "get that thing back up and running, we're losing money as a business." So we do not do chaos engineering during an incident. We do it when there is no active incident, no active outage. We do it to proactively understand: does the system work the way we think it does? In this type of environment, it's much easier for people to have an open mind and learn things about the system. That's the point of it. So, chaos engineering. Several great speakers have talked about chaos engineering today, so I'm going to try to tell you a few things you might not know. I'll start with the definition. Chaos engineering is the discipline of experimentation on distributed systems in order to build confidence in the system's ability to withstand turbulent conditions. Chaos engineering is about establishing order out of the chaos, not creating chaos. And no chaos engineering talk would be complete without explaining Chaos Monkey. A lot of people tell me, "Aaron, we can't do chaos engineering. We're barely doing DevOps or CI. Chaos engineering is an advanced thing." Well, really, it's not. Netflix, in 2008 and 2009 when they created Chaos Monkey, were just transitioning from shipping DVDs in the mail to streaming their movies in the cloud on AWS. At the time they were building that architecture, instances were just disappearing; that was practically a feature of AWS at that point in time.
And Netflix had just made this huge bet, and they needed to ensure that when that kind of behavior occurred, their services were resilient to that kind of failure. So really what Chaos Monkey did was put a well-defined problem in front of engineers: during business hours, my service can be exposed to this type of problem. Okay, fine. I will design it to anticipate that failure mode and handle it gracefully. It's about putting well-defined problems, in context, in front of engineers so they can solve them. Because it turns out that the better you define a problem, the more likely engineers are to solve it. So chaos engineering at Netflix was born out of cloud transformation. Who's doing chaos engineering? I track a little over 1,300 to 1,400 companies doing it now, everything from healthcare to banks to media to streaming to retail, and it's a worldwide thing. Principlesofchaos.org is important to recognize because it's the rule set; it's very important to read and understand it before you get started in chaos engineering. There are currently three books. Casey wrote the first one while he was at Netflix. We just finished the official O'Reilly book on chaos engineering, the official animal book on it; if anybody wants a copy, there's a link to a free download at the end of the presentation. The security chaos engineering O'Reilly book will also be out at the beginning of next month, and I'd be more than happy to hook anybody up with that if you're interested. So, instrumenting chaos. We like to loosely break instrumentation down into two buckets. There's testing, which is verification or validation of something you already know to be true or false. It's a binary thing: we know what we're looking for before we go looking for it.
Whereas with experimentation, we're trying to derive new information that we previously did not know, the unknown-unknowns kind of scenario. We form a hypothesis using the scientific method: we're saying that if the system were to encounter problem X, then we believe Y, which we designed, will be the result. We believe that's how it operates; that's how it should operate. With chaos engineering, we actually exercise that experiment, and the point is to derive new information about the system. Some pitfalls of chaos engineering you should be aware of: it's not about breaking things on purpose. It never has been. If anything, we're trying to fix things on purpose. Casey loves to say he's pretty sure he wouldn't have a job very long if he went around breaking things all day. So it's not about breaking things on purpose, nor do you have to do it in production. So what is security chaos engineering? I'll break it down quickly; it's actually not a whole lot different. The use cases and perspective are just slightly different. Hope is not a strategy; hope is not an effective strategy. This is a constant theme in chaos engineering talks. The reason I started applying Netflix's chaos engineering to cybersecurity was this concern: we build all these security controls into our applications and into our infrastructure, but we rarely exercise them in the events and conditions in which they must operate, meaning an incident. We're kind of hoping they work when they need to work. Engineers don't believe in two things: we don't believe in luck, and we don't believe in hope. It's about understanding where the gaps are before an adversary does.
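The "if the system encounters problem X, we believe Y will result" loop described above can be sketched as a minimal harness. This is purely illustrative; none of these names come from any real tool, and the "flaky call" is a toy stand-in for a real injected condition:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Experiment:
    """A chaos experiment: a falsifiable hypothesis plus an injection."""
    hypothesis: str                     # "if X occurs, we believe Y results"
    inject: Callable[[], None]          # introduce condition X
    verify: Callable[[], bool]          # did Y actually happen?
    rollback: Callable[[], None]        # restore the system afterwards

def run(exp: Experiment) -> bool:
    """Exercise the experiment; either outcome is useful information."""
    try:
        exp.inject()
        held = exp.verify()
    finally:
        exp.rollback()
    print(f"{exp.hypothesis!r}: {'held' if held else 'REFUTED -- new information'}")
    return held

# Toy condition: we believe a retry wrapper survives one transient timeout.
state = {"failures": 0}

def flaky_call() -> str:
    if state["failures"] > 0:
        state["failures"] -= 1
        raise TimeoutError("transient")
    return "ok"

def call_with_retry() -> bool:
    for _ in range(2):                  # original attempt plus one retry
        try:
            return flaky_call() == "ok"
        except TimeoutError:
            continue
    return False

exp = Experiment(
    hypothesis="if one transient timeout occurs, the retry wrapper still succeeds",
    inject=lambda: state.update(failures=1),
    verify=call_with_retry,
    rollback=lambda: state.update(failures=0),
)
run(exp)
```

Either result is a win: if the hypothesis holds, you gain confidence; if it's refuted, you've derived new information about the system before a customer did.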
What we're trying to do with security chaos engineering, essentially, is this: the majority of data breaches, even if you look at malicious code, I'd say 90 to 95 percent of it, require some kind of low-hanging fruit to exist, meaning a misconfigured port, misconfigured permissions on a user, or an open network path; things that just shouldn't exist and that we think we can plan and solve for. The idea is that we proactively inject those conditions into the system to ensure that the controls we put in place to catch things like a misconfigured port actually detect it, block it, and notify us. That's really the ethos behind it. We often misremember what our systems fully are, and as a result the opportunity for mistakes increases. As I described a little earlier, it's the speed and scale at which we're building software in today's world. It's difficult enough to continually build and change software as a software engineer, but it's even more difficult to align security with that changing ecosystem. So it's not just that we misremember it; the speed and scale are causing further issues. It's really about continuously verifying that our security works the way we think it does. And like I said a little bit ago, it's about trying to ensure we can catch those low-hanging-fruit conditions that make malicious code successful before an adversary can take advantage of them. The goal we're going after is to reduce the uncertainty in our security posture by building confidence in how the system actually functions. So, some use cases for chaos engineering for security.
When I started exploring this space at UnitedHealth Group as the chief security architect, I was always concerned about my guidance. An application or product team would come to me with diagrams and information about the system, and I would try my best to give them the best possible security design from the information they gave me. But I was never sure whether what I recommended ever actually got implemented, or whether it was effective. I needed a way to skip past all the people, the process, and the words on paper, and ask the computer a question: hey, when I inject this condition into the system, are you effective? Can you catch it? Are you designed appropriately? And it can't just be a point in time. It has to be continuous, similar to a regression test: as you scale, security controls are often implemented the same way in multiple environments, there's a lot of drift between environments, and so you have to continuously validate that a control works throughout all its different ecosystems. So control validation is a big area. Incident response is another area where I got a lot of value from applying security chaos engineering. Also, one of the biggest gaps in application security, and software security in general, is the ability to observe security events in software. One of the things chaos engineering experiments can do for security is highlight areas where we didn't have log events, where the log events didn't make sense, or where we weren't able to correlate and alert because we didn't have the right data or the tool wasn't providing enough information to trigger a correlation. And lastly, every chaos experiment, whether availability- or security-based, has compliance value. So make sure you keep a high-integrity copy of each experiment run; you can use it for compliance audits.
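The regression-style control validation just described, including keeping a high-integrity record of each run for audits, can be sketched like this. The environments, the control behavior, and the record format are all illustrative assumptions, not any real tool's output; in practice the detected/blocked result would come from actually injecting the condition and watching the control respond:

```python
import datetime
import hashlib
import json

def validate_control(env: str, control_responds: bool) -> dict:
    """Record one validation run as a timestamped, digest-protected record."""
    record = {
        "environment": env,
        "condition": "misconfigured-port",
        "detected_and_blocked": control_responds,
        "ran_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Integrity digest so the result can be retained as compliance evidence.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

# Simulated drift: the same control deployed in three environments,
# but it behaves differently in one of them.
behavior = {"dev": True, "staging": True, "prod": False}
results = [validate_control(env, ok) for env, ok in behavior.items()]
drifted = [r["environment"] for r in results if not r["detected_and_blocked"]]
print("control drift found in:", drifted)
```

Run continuously, the same check turns a one-time architecture review into an ongoing answer to "does this control still work everywhere it's deployed?"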
So, incident response. The issue with incident response is that no matter how much money you spend on security, no matter how many humans you have, you still don't know a lot. Security incidents are very subjective. You don't know where one is going to happen, why it's happening, who's trying to get in, how they're going to get in, and what they're going to do once they're in. You just don't know those things, no matter how much money and effort you put behind it, because you're sitting there waiting for an event to occur to ascertain whether or not your response was effective. And a lot of times we don't account for cascading failures in terms of security incidents and breaches, and we're not looking for them. One of the things you can use security chaos engineering for is to proactively inject the signal into the process, so you have a known point where the event started. Now you can measure: did you have enough people on call? Did they have the right skills? How long did it take them to resolve the incident? Did the tools and technologies provide the right information and context to the team? Because you're in control of the signal, you now have the ability to manage and understand what is normally a subjective process where you're waiting for things to happen. It's really about flipping the model: instead of the post-mortem coming after the fact, the post-mortem becomes the preparation exercise. We can learn how brittle our systems are proactively, and fix them before they manifest into painful outages and incidents. This brings me to ChaoSlingr. About four years ago, I was part of a team that wrote ChaoSlingr, the first ever application of Netflix-style chaos engineering to cybersecurity. It works mostly on AWS, and it's open source. There's example code, and I'm going to go through a quick example here in a second, but it's predominantly serverless.
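Because you injected the signal yourself, its start time is known exactly, and the normally subjective incident-response questions become concrete measurements. A small sketch with purely illustrative timestamps and an assumed 15-minute detection target:

```python
from datetime import datetime, timedelta

# The injection time is known because we generated the event ourselves.
injected_at = datetime(2020, 11, 20, 10, 0, 0)   # condition injected
alert_fired = datetime(2020, 11, 20, 10, 7, 30)  # SOC alert correlated
resolved_at = datetime(2020, 11, 20, 10, 42, 0)  # responders closed it out

time_to_detect = alert_fired - injected_at
time_to_resolve = resolved_at - injected_at

print(f"time to detect:  {time_to_detect}")   # 0:07:30
print(f"time to resolve: {time_to_resolve}")  # 0:42:00

# A target the team can hold itself to across repeated exercises.
assert time_to_detect <= timedelta(minutes=15), "detection target missed"
```

Repeating the exercise gives you a trend line for detection and resolution time, which is exactly the data a real incident never gives you cleanly.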
It's a Lambda-based application. It has an opt-in/opt-out model; most chaos engineering tools do. The reason for that is that you want to be able to control the blast radius. For example, you may not want to inject a misconfigured port on your edge. But you can check it out. I think UnitedHealth Group no longer supports the project; they have their own version of it that they utilize. But everything you need to write your own chaos experiments for security is there in the framework, in the three functions of the example. So, an example. We call this example PortSlingr. When we open-sourced ChaoSlingr, we needed an example that everyone could understand, whether you're a software engineer, a network engineer, a technology executive, or an SRE; everybody kind of understands what a firewall does. And for some odd reason, misconfigured and unauthorized port changes happen all the time, still, in 2020. So our hypothesis was: if a misconfigured or unauthorized port change were to occur, as a security team, this is something we plan for. This is a very easy thing we expect to solve for; we've been doing firewalls for 20 years. We expect the misconfiguration to be detected and immediately blocked. So if someone accidentally or maliciously introduced a misconfigured port, then we would immediately detect, block, and alert on the event. So we wrote ChaoSlingr. It's three functions: Slingr, Trackr, and Generatr. Generatr does target acquisition, Slingr actually makes the change, and Trackr tracks the change and reports it to Slack. So what we actually did was start introducing misconfigured-port events in our AWS environments. As a company, we were very new to AWS, and so we started experimenting and injecting these conditions into our environments.
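The three-function flow just described can be sketched in miniature. This is a simulated, in-memory version, not ChaoSlingr's actual code: the real tool runs as Lambda functions against AWS security groups, and the tag name, data shapes, and function bodies here are illustrative. The key idea it shows is the opt-in tag controlling the blast radius:

```python
import random

# Simulated inventory of security groups; an opt-in tag bounds the blast radius.
security_groups = [
    {"id": "sg-aaa", "tags": {"ChaosOptIn": "true"}, "rules": []},
    {"id": "sg-bbb", "tags": {}, "rules": []},                  # not opted in
    {"id": "sg-ccc", "tags": {"ChaosOptIn": "true"}, "rules": []},
]

def generatr(groups: list) -> dict:
    """Target acquisition: pick a random group that has opted in."""
    candidates = [g for g in groups if g["tags"].get("ChaosOptIn") == "true"]
    return random.choice(candidates)

def slingr(group: dict, port: int = 23) -> dict:
    """Inject the condition: add an unauthorized open port to the group."""
    rule = {"port": port, "cidr": "0.0.0.0/0", "injected": True}
    group["rules"].append(rule)
    return rule

def trackr(group: dict, rule: dict) -> str:
    """Report the change so responders can correlate the experiment."""
    return (f"[slack] injected port {rule['port']} open to "
            f"{rule['cidr']} on {group['id']}")

target = generatr(security_groups)
rule = slingr(target)
print(trackr(target, rule))
```

Because Generatr only ever selects tagged groups, an un-tagged edge security group can never be touched, which is exactly the opt-in blast-radius control described above.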
Like I said before, we expected our firewalls to immediately detect and block it. Well, that only worked about 60 percent of the time. That was the first thing we learned; the firewall issue was actually a drift issue between our non-commercial and commercial environments. But that kind of stuff happens. The second thing we learned was that our cloud-native, commodity config management tool caught and blocked it every time. So the thing we were barely paying for was more effective than the things we were paying for. Three, we expected both the config tool and the firewall to throw good log data to a security log aggregator or SIEM; we didn't have a SIEM at the time. And that actually happened; it was successful, and a correlated alert fired. The alert went to our security operations center, but the analysts there didn't know which AWS environment it came from. They got the alert but didn't know what to do with it, because, like I said, we were new to AWS at the time. As an engineer, you might think, well, you can just map the IP address back and figure out what environment it came from. But that could take 5, 10, 15, 30 minutes. Not to mention, if SNAT is in place, it might take you an hour, an hour and a half, two hours. When you're a Fortune 500 or Fortune 10 company, millions of dollars are being lost per minute. That could be $30 million; that could be $60 million. Well, in this case there was no loss of revenue. There was no outage. We proactively discovered that these things weren't working the way we expected, and we were able to fix them and learn from it, without it being a blame, name, and shame type of experience. At every stage in the process, we ended up writing additional experiments. But like I said, we're trying to understand whether or not the security apparatus, the technologies and the humans, operates the way we expect.
I'll leave you with one last thing to ponder. John Allspaw, one of the fathers of DevOps as well as of resilience engineering for software, likes to make people pause and think: instead of looking for better answers, start asking better questions. That's really what we're doing with chaos engineering; we're asking the system better questions. And there's the book information. If anyone's interested in getting a copy of the book, go to verica.io and you'll get a free copy. Oh, okay. Hi, Eric. Do I just go into the Q&A there? Are you going to read the questions? Yeah, you can also go there yourself, but I can call them out right now. So there is one question: how is security chaos testing different from penetration testing? Sure, that's a question I get a lot. I've actually written a lot on the topic. In terms of offensive testing, there is penetration testing, there is red teaming, there is purple teaming, and there are breach-and-attack simulation tools. All of these are really great ways to find objective information about the system. But how is it different from penetration testing? Partly it's the goals of what we're trying to achieve. In penetration testing, the goal is really to get in, to expose a flaw within the system. And there's a difference between the idea of what we think pen testing is and how people are actually doing it in reality. Pen testing, especially in terms of software, is typically not tuned well for distributed systems. And this is part of the issue: there's a lot of overlap between purple teaming, red teaming, and penetration testing.
Pen testing is a technique within red teaming. Red teaming is something people still do, but many have transitioned to what they call purple teaming. A purple team exercise is really this: instead of the red team just banging away, beating up my environment and telling me all the things that are wrong that I have to fix, the idea is that blue and red, defense and offense, merge with clear information and build a better understanding. But purple teaming really hasn't evolved into software. It's really a data center, network, and Active Directory kind of corporate-infrastructure test. What people mostly do is drop malware on a desktop once a quarter. The world I would like to see is one where chaos engineering, breach-and-attack simulation, these newer techniques, provide an opportunity for pen testing to drive more value. With chaos engineering for security in particular, we're not trying to attack and get into the system. We're trying to inject the conditions under which we expect our defenses to operate, if that makes sense. I know that my firewall should be able to detect a misconfigured port, so I inject one. Or I know that this Kubernetes pod shouldn't be able to communicate externally, or shouldn't be able to pull from an unknown repository. So we write the test, inject it, and observe whether or not the system functioned and operated as it was supposed to. It's failure injection, I guess, versus attacking. And when you're attacking a system, especially a distributed system, you make a lot of noise, and it's very difficult to ascertain what worked and what didn't. I'm more interested in the context of what worked and what didn't than in the fact that you were able to get in, because the context is more constructive for me. Okay.
So this one's from Naresh: there are several AI tools in the space of predictive anomaly detection. What is your take on them? I'm an engineer by trade, so I'm very opinionated on this; we're talking machine learning and deep learning here. Part of the issue is that there are a lot of solutions out there that say: give me all your security log data and information, and I will predict where the next breach is going to happen. Well, that has one major flaw in its assumption: you're assuming you have the right data to begin with. One of the biggest issues in software security is actually the logs and events. To write security events for software, you have to write three layers of logs: the web tier, the data layer, and the business logic layer. Between Go and Rust and Python and Java, there are different frameworks for how you do that. They don't map well to each other; it's very custom in nature. Even Java 6 and Java 7 both have frameworks, but they differ. So we lack a lot of observable events from a software security perspective; it's one of the biggest gaps, from my perspective. So where was I going with that? Oh, AI tools, yeah. A lot of these tools out there will say, we can ingest all your data and then predict the future. Well, the problem is we don't have the data for software. You might be able to do some of that for infrastructure, stuff that doesn't change 10 or 100 times a day. In software, we're changing things sometimes 10, 100, 1,000 times a day, and from my perspective I find that very hard to predict. I'm not saying there's no value. I'm just saying that selling it as a silver bullet, saying you can predict the future, assumes you understand where you're at, and we don't. Cool, I think that's about it then. So thanks, Aaron, for sharing your experience with us.
This is Naresh; I wanted to quickly jump in and thank Aaron. We were really, really sad, Aaron, that we lost you when we had to go ahead with the conference during the pandemic. We decided we'd still go ahead, and we were pretty sad that we lost you because you couldn't travel. And I was so glad when you reached out again on your own, without us following up, when we decided to go virtual. So I greatly appreciate that. It's a great event; you guys put on a great show. Look at the talent you've got on the schedule. I myself will be watching a few other talks today. Awesome. I just wanted to personally thank you, and you can see the likes coming in over there; people really appreciated your talk. So thank you so much, Aaron. Thank you for having me.