Depending on who you ask and when you ask them, I'm either the head of security at Tendermint or the director of security at Tendermint. I refer to myself as the security hat: if there's a problem, you know where to email. Today I'm going to be talking about a couple of the things that we've been doing on our open-source platform. There are really interesting problems that come out of running a decentralized network, where you need to get a bunch of strangers on the internet to coordinate really quickly to fix a security issue. We've had a couple of exercises in this now, and what I really want to do is share some of the things I have been thinking about, some of the places I have looked for opportunities to make these problems less difficult, and ways that we can use blockchains to set ourselves up to be more successful on the security front.

Here's a little bit of what we'll cover today. Fair warning: this is not going to be a super technical talk. I couldn't wait to get out here and talk about what we've been working on, but it's not really exciting or groundbreaking code. There's really a process that goes along with the technical implementation and the engineering work. The first thing I always have to talk about is security generally: what is it, how does it work, why is it there, how long has it been around? We'll talk a little bit about our first mainnet security incident. I will refer to this as story time, and members of my team who are here are probably tired of hearing the stories, but it was my moment, it was go time, so I will use it as the excuse for how we started thinking about these things. I'll talk a little bit about some of the things that come up when you're trying to manage decentralized yet coordinated incident response, especially when you have lots and lots of stakeholders, and I don't just mean people who, in our case, hold ATOMs, or people who might be holding Ether. And then I'll also talk a little bit about how governance is very uniquely positioned to help us be more proactive and prepared when responding to security incidents.

To talk a little bit about what security is, I will skip over about 10,000 years of examples of human beings being horrible at managing keys. There are some great stories from ancient Rome about how, if you had a key, you would have it embedded in a piece of jewelry. We'll skip that today. But security at its very base is made of seven principles that are extremely important to layer and implement correctly in the right places to make sure that you are protecting and defending what's important to you, your community, and the code that you run. For one, there's the question of comprehensivity. There's opportunity, rigor, and minimization. Compartmentation is extremely important: making sure that you're not putting all of your most valuable things in one place but distributing them, which is a big deal for us in the blockchain space. With Cosmos, we think a lot about fault tolerance, making sure that our network can withstand a certain amount of malicious input. And there's proportionality. I won't give you a full lecture on this, but these principles, especially if you keep coming back to them, can really help inform your thinking and practice in the security space. There are some other aspects of security that are incredibly difficult.
There's the technical piece, which lots and lots of us get very excited about, but there's also a human bit, because humans are very, very difficult to corral, organize, and manage when we have very strong and intense emotions, like fear, which is kind of a big deal when you're in the middle of a security emergency and you think the thing you're dealing with might be a zero day. There's also the question of being proactive about security, meaning that you try to stop bad things from happening, you identify risk early, and you step in and take the time to mitigate problems before they really become emergencies. And then there's reactive security. Incident response is an area where you can be proactive, building runbooks and planning for emergencies that will naturally come up in the course of running a blockchain, but it's reactive because you basically have to detect that something bad has happened, or that an incident is occurring, and then spring into action. There's also a very, very deep line between offensive security and defensive security. In the security space, I am hardcore blue team, meaning I want to build defenses, I want to protect people, I want to protect things, I want to care for and maintain things. But offensive security can be very important to you as an organization: if you're in a space where you're the one attacking things, going after them and deeply understanding how they break, that can inform how you run your defense.

Some of the ways that you see security happen in organizations tend to be through security programs. Depending on what your organization does, you can have product security, application security, or infrastructure security. They can all be in one realm or they can be separate programs; it really depends on how critical each of those things is to the operation of your business and the operation of your network. But then there are also things that we don't really cover as much that are just as important as the technical ones. Operational security is one of the areas in the blockchain space that is very significantly behind where it needs to be for the risks that people are working with. Assessments are difficult. I think I've run about a dozen security audits in the last two years. While it's really important to make sure that you have someone independently evaluating your code, all of the business negotiation that goes into scoping out an audit and defining the testing you need to do is hard. These things are immensely difficult if you've never had to do them before, but you have to think about them in a comprehensive and programmatic way to make sure that the audit you're releasing to the public actually does attest to the things that you're claiming it does, and that you've chosen the correct remediation steps to close the bugs or issues that were surfaced in the audit. And then there's education, which is also extremely difficult if you're at a smaller company and can't put a dedicated human on security education programs and planning all of the time. And incident response, which is the one that for me has been the most interesting in the blockchain space, because I get tools that other people in conventional technology don't get, but I also get challenges that they maybe wouldn't know how to solve even if we told them, which is kind of exciting. And I personally am very competitive about security. I want us as a space to win.
I want to be able to say, look at what we did while everyone was making stupid blockchain jokes. So this incident response area is a place where I think we have a huge opportunity to shine. And then there are just a couple of other things about security that I have really learned and taken to heart as a practitioner. Security is very much a shared responsibility. There's only so much I can do as a core maintainer or a core developer to make my community secure. There's a lot of stuff that I need them to do, and to be set up to do, so that we're able to quickly resolve issues and build immunity. And security is definitely much, much more than finding all of the bugs in code. The next time somebody tells me it's time to find all of the bugs, I'm going to get a level of sassy that is relatively unmatched for me and give them a lecture about stopping the bugs from getting into the code in the first place. More than anything else, though, security is really about enabling people to act quickly and reduce harm. One thing that's really important: we all know security is never going to be a 100% guarantee, but it's more successful when we make it more expensive for an attacker to launch a successful attack. All we have to do, all day long, is wear attackers down: make them jump through hoops, spend lots of money, and do things that just aren't manageable or scalable, to exhaust them, and maybe, if not make them go away, at least make them leave us alone for a while. There are things we can do to plan and prepare for incident management that other organizations and other technologies can't, but more than anything else, it's really important to recognize that while decentralization might make parts of security more complex, it doesn't require us to reinvent the wheel. If we're busier reinventing the wheel than anything else, we're going to miss a lot of opportunities to get very small, very nuanced, very important things right.

So, this is story time. Once upon a time, I made the fatal error of accepting a keynote in Australia, and so I was something like 17 or 18 time zones away from my core development team. Now, I made it a point to make sure I still showed up for my meetings and did all the stuff I'm supposed to do, but in the middle of this week that I'm in Australia, getting chewed up by the ocean and spit right back out onto the beach, a high-severity security report lands in my security inbox. I had been planning for this for about a year and a half at this point, so I'm like, all right, let's go, it's time. All the stuff that I built here, it works. Somebody gave me a bug. Oh my God, I'm so excited. A few of my colleagues were like, hmm, this is not exciting, what is wrong with you? But we had an issue come in, and I looked at it and realized, uh-oh, this is a problem. Basically, the issue broke our security model in that it would allow someone to unbond their stake without having to wait three weeks, which is a delay built into our crypto-economic layer, and it undermined practically anything that you would want to do on the network at that layer. So what we realized was that to fix this, we were going to have to do an emergency vulnerability coordination exercise, meaning we were going to have to patch the code, and we were going to have to develop some tooling to investigate exploitation, because if exploitation happened, I'd better be ready to show people where it happened, why it happened, and how it happened.
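To give a sense of what that kind of tooling looks like, here's a minimal, hypothetical sketch in Go. This is not the code we actually ran; the event shape, field names, and data source are assumptions made up for this example. The idea is simply to scan unbonding records pulled from an archive node and flag any that completed faster than the protocol's unbonding period.

```go
// Hypothetical exploitation-investigation sketch: flag unbonds that finished
// faster than the unbonding period. Names and shapes are illustrative only.
package main

import (
	"fmt"
	"time"
)

// UnbondEvent is a simplified, assumed shape for an unbonding record pulled
// from an archive node or an indexed event log.
type UnbondEvent struct {
	Delegator  string
	StartTime  time.Time // when the unbond was initiated
	Completion time.Time // when the funds actually became liquid
}

// The roughly three-week wait the bug allowed people to skip.
const unbondingPeriod = 21 * 24 * time.Hour

// flagSuspicious returns every event whose completion time is earlier than
// start + unbonding period, i.e. a candidate for exploitation of the bug.
func flagSuspicious(events []UnbondEvent) []UnbondEvent {
	var suspicious []UnbondEvent
	for _, e := range events {
		if e.Completion.Before(e.StartTime.Add(unbondingPeriod)) {
			suspicious = append(suspicious, e)
		}
	}
	return suspicious
}

func main() {
	now := time.Now()
	events := []UnbondEvent{
		{Delegator: "cosmos1good", StartTime: now.Add(-22 * 24 * time.Hour), Completion: now},
		{Delegator: "cosmos1fast", StartTime: now.Add(-2 * time.Hour), Completion: now},
	}
	for _, e := range flagSuspicious(events) {
		fmt.Printf("suspicious unbond by %s: completed after %s\n",
			e.Delegator, e.Completion.Sub(e.StartTime))
	}
}
```

The real investigation would obviously pull from chain data rather than a hard-coded slice, but the check itself is this simple: you're looking for anything that beat the clock.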
And additionally, we had to think through notification. My office wall at home has all of these sticky notes on it; it kind of looks like a conspiracy theory map, but what my office wall does for me is this: it's got the process mapped out and documented in three or four different places. And because I had already planned on having to do this, I knew what was next. I had my moment, but what we knew we wanted, in the case of any security emergency, was to be the best dependency that we could be to the people who rely on our code. We wanted to make sure that we reduced harm. We wanted to make sure that we provided visibility and transparency, and the way we specifically do that is with these very comprehensive incident report posts that I put up within a week or so of something bad happening that requires an emergency exercise.

A couple of things we knew we were going to come up against. We knew that time was really, really important here. We were monitoring for exploitation; we were able to see where the person who reported the vulnerability had been exploiting the bug just to check and see if it worked. They did this in a way that didn't cause harm, but we also knew that if one person knew about this bug, it was likely other people could have discovered it and just not said anything. Because time was of the essence, it was especially important for us to make sure that our communication and coordination was fast, good, and clear. When we were asking or recommending that our validators do specific things, we're not telling them what to do, but we are specifically making sure that everything we say is actionable and clear. One thing that we committed to very early on was pre-notifying people who rely on our code: instead of just dropping a security patch on a Tuesday, on Monday I send out a message, through our forum, through a mailing list, or through any other channel we need to use, to say, hey, heads up, you're probably going to need to patch. We've got a bug here, it's either high or critical severity, and the patch will land around this time. Part of the reason we do that is that we have an extremely global community, and if we schedule everything around one specific time zone, somebody is always waking up at two in the morning and somebody is always having to jump away from the dinner table to go deal with a security patch that's incredibly important for them to apply.

In addition to this, we knew we were going to need a little bit of discretion. We were going to have to handle this privately; we can't release an exploit to the entire world and say, look what we found, because badness happens when you do that. So we realized we had to quickly stand up a safe space where we could talk to our network operators. In this specific case, we already knew the topology of our network pretty well, so we created a Telegram room, which was the moment where maybe we wanted to hide under a table. We have a few opportunities to do things much better than getting a hundred people in a Telegram room and talking about exploitation scenarios and security bugs. One thing that was particularly rough in this specific spot, though, when we were trying to pre-notify people, was that we had a critical emergency in the hopper.
We were trying to reach out to all of these different projects and all of these different companies that rely on our code, and more than anything else, the most frustrating thing was that there were practically no operational security@ email inboxes with a human behind them. There was no way for companies to get a direct security message from us. In a lot of cases, when we were trying to proactively reach out to exchanges, they told us to file customer support tickets, which is not scalable, and it put me in a position where I had to hope that, with an emergency hard fork happening in 24 hours, some exchange would pay enough attention and have a process for escalating an advisory from support all the way to engineering or security in enough time for them to act quickly and not be harmed. Because realistically, the moment a patch lands in your code base and your GitHub repo, it's not very hard, if you're good at reverse engineering, to figure out what changed, find an exploit, and then turn around and automate attacks against anyone who hasn't patched yet. It's basically relatively simple reconnaissance and exploitation at that point. So if you happen to know anyone in a position at an exchange to set up a security@ address, if you're a company that relies on blockchain code, or if you're a developer who works with a bunch of different projects, please set up a security@ inbox with some form of obnoxious alerting, so that instead of it just being a reactive place, we can at least quickly get to you and tell you what's up.

So we had this giant retrospective, and basically what came out of it was this: I had to figure out a bunch of ways to make the process that we had gone through this first time much, much, much better. I will make sure that I post these slides online with links to all of the things I've put on screen here. But basically what we realized was this: we built a public blockchain; it's probably time to figure out who and where our firefighters and our emergency services are. The thing that was interesting about being in Australia was that I happened to be speaking at the national conference for their computer emergency response team. I wanted to go there because I wanted to learn about this public, private, and academic partnership of organizations who, in an emergency, can pop up and immediately help organizations of any size and any kind solve the problems that come up when big security incidents happen. This particular CERT had some fantastic stories about how they helped their members, and how they were able to help civic governments and universities patch and quickly identify what to do in cases of really, really high risk. And I thought, hey, maybe we should do that. We run public infrastructure, it's important, we should do it. So what we realized is we should probably stand this up. We can make it somewhat decentralized. We probably need some folks from our foundation, we probably need core developers, and we definitely have to give our network operators a seat at the table. But there's a lot of opportunity here: we don't just have to have everyone around for an emergency, we can also use this group as a way to make everyone smarter about security.
Instead of being in a position where we're the only ones doling out advisories and maturity models and all kinds of tips for how to better run your AWS or GCP setup, this group could also put out proactive educational information to help mature the entire network. So here's what we came up with as potential core requirements. It's probably best, when you are responding to an incident, not to have to deal with 110 people at once; it gets busy. It's usually better to keep your N under 10. I was thinking about fun outfits to plan for when we have to respond to emergencies; space suits make it really hard to type, so I killed that idea. But that being said, we basically decided we are going to put this group together. We need to make sure that we've got people from different parts of the world, not just North America, and we probably need to make sure that it's distributed across multiple kinds of stakeholders. We have validators with tons of stake, and we have small validators who maybe aren't as mature on the security front and who need a little bit more support from us to be doing the best that they can. We also realized we should probably outline expectations for service. Initially, we were thinking maybe a one-year run for people who are part of our CERT, which we are working on building now. But we wanted to make sure that we're keeping them accountable, especially in the areas of confidentiality and response. If you're part of the CERT team and you have access to vulnerability information, I shouldn't have to worry that you'll realize you can use a vulnerability you have access to against a validator in a known environment, or against a competitor you can't stand because they shitpost too much in Telegram channels.

So there are a couple of things we still have to do to make this successful. We have to encourage everybody to set up a security@ email address so we can send out our signal flare when bad things happen. We are working on adding a security parameter to the Cosmos SDK: basically, a place to put your security@ email address so that I can query the network daily from our archive node and have a security response list, instead of going through 110 websites at any given time and hoping I find a way to reach a human. We also realized we should probably go ahead and start working on a module that allows us to elect individuals to be represented in this group. Just picking a bunch of people I know is probably not the best way to make this particular group work, especially because of the need for trust. What's really interesting about this is that internally we have been playing with something we call liquid democracy. We've been thinking about how blockchain governance can be a tool to help us accomplish things in a wider context, and I'm pretty proud that I got to work with a couple of our core developers on this, because it's really the first application of that liquid democracy principle we have been talking about over the past year. One of the things we're doing with the module is writing the code that will govern the ability to turn off transfers if there's an emergency. If we have 10 people, maybe we need an N of nine to agree to turn it off. And there were initially some concerns along the lines of: oh my God, what if they all turn malicious and shut off the network, that would be terrible.
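To make that a little more concrete, here's a minimal sketch in Go of the two ideas: a security contact you can query out of validator metadata, and an N-of-M switch for halting transfers. To be clear, this is not the actual Cosmos SDK parameter or the module our core developers are writing; every name here (Validator, SecurityContact, circuitBreaker, and so on) is a hypothetical stand-in just to illustrate the shape of the thing.

```go
// Illustrative sketch only: a per-validator security contact rolled up into a
// response list, plus an N-of-M approval check before transfers are halted.
// None of these names are real Cosmos SDK APIs.
package main

import (
	"fmt"
	"sort"
)

// Validator metadata with a hypothetical security contact parameter,
// the kind of field an archive node could be queried for daily.
type Validator struct {
	Moniker         string
	SecurityContact string // e.g. "security@example-validator.com"
}

// responseList collects every non-empty security contact so a coordinator can
// notify the whole network from one query instead of hunting through websites.
func responseList(vals []Validator) []string {
	contacts := make([]string, 0, len(vals))
	for _, v := range vals {
		if v.SecurityContact != "" {
			contacts = append(contacts, v.SecurityContact)
		}
	}
	sort.Strings(contacts)
	return contacts
}

// circuitBreaker models the elected incident-response group: a set of members,
// of whom at least threshold must approve before transfers are halted.
type circuitBreaker struct {
	approvals map[string]bool // elected member -> has approved the halt
	threshold int             // e.g. 9 out of 10 in the talk's example
}

// approve records one elected member's consent to halt transfers;
// approvals from non-members are ignored.
func (cb *circuitBreaker) approve(member string) {
	if _, ok := cb.approvals[member]; ok {
		cb.approvals[member] = true
	}
}

// transfersHalted reports whether enough members have approved the halt.
func (cb *circuitBreaker) transfersHalted() bool {
	count := 0
	for _, approved := range cb.approvals {
		if approved {
			count++
		}
	}
	return count >= cb.threshold
}

func main() {
	vals := []Validator{
		{Moniker: "big-validator", SecurityContact: "security@big-validator.example"},
		{Moniker: "small-validator", SecurityContact: ""}, // no contact published yet
	}
	fmt.Println("security response list:", responseList(vals))

	// A 2-of-3 demo quorum; the talk's example is 9 of 10.
	cb := &circuitBreaker{
		approvals: map[string]bool{"alice": false, "bob": false, "carol": false},
		threshold: 2,
	}
	cb.approve("alice")
	cb.approve("bob")
	fmt.Println("transfers halted:", cb.transfersHalted())
}
```

The point of the sketch isn't the code itself; it's that both the contact list and the halt decision live on-chain, where everyone can see who can be reached and who agreed to pull the brake.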
But really, what is key here is remembering that we are trying to be inclusive, to give people a seat at the table and give people agency in our network, so they can represent their point of view and their concerns when an emergency happens. I don't want to guess what validators are thinking; I want them to tell me what they're thinking. And if we need to go have at it for 20 minutes until we come to consensus on something, that's fine. I'd rather have them at the table than off to the side. Another thing that I have been looking at here: if you're familiar with how the DNSSEC root key for the internet is managed, it's really interesting. They have a complex ceremony, it's completely auditable, and they have split the key material among seven people. That's the example that I'm looking at here. So finally, I will wrap up really quickly. Blockchains are coordination tools more than anything else, and a huge part of security is making sure that you're using the right tool to get the job done. I happen to think that on the incident response front, we can build the security incident response that we want to see in the world. That being said, I want to make sure I give a shout-out to my core developers who have been helping make this idea come to life, especially Rigel, who was one of the first people to get excited about liquid democracy and whose enthusiasm is just wild. So thank you very much. If you have any questions, I ran over a little bit, but I'm happy to take them in the hallway or in the room.