We love you. All right, guys. Hey, everyone. So we're here to talk about what Web3 projects need to know about Web2 cybersecurity. I think this is going to be a great follow-on to the bridge security framework talk that Patrick just gave. This is the complement, right? He covered the smart contract and protocol layer aspects of bridge security. Unfortunately, a lot of bridges have been hacked not through the hardcore cryptography, but through the basics, through cybersecurity basics. Okay. The cybersecurity basics matter. They matter because they have geopolitical significance. So before we get started on the panel, take a look at the screen. There was a great article by CNET a couple of days ago about how the Lazarus Group, which is affiliated with North Korea, is seeking to infiltrate this ecosystem by setting up front companies, having job applicants apply to those front companies, and then phishing the engineers. That's actually how the Ronin bridge attack happened. Okay. So the fundamentals matter, and it doesn't just impact the people in this room, the people at this conference. It impacts geopolitics, because it funds things like North Korea's nuclear program, or transnational criminal organizations and things of that nature. Okay. So what do we need to take away from this panel? A lot of fundamentals from the last two to three decades of Web2 cybersecurity. There are cloud security risks that we have to understand and mitigate. There are front-end security risks, operational security risks, and corporate security risks. And then obviously the big topic with some of these bridge hacks has been key management, endpoint detection and response, and management of individual nodes and validators. That doesn't mean we should discount smart contract security risks.
What it means, though, is that we need to shift the security conversation in this space from siloed, individual audits to a holistic, end-to-end view of a project's entire architecture. And not just the technical architecture, but the people architecture. We have some newer entrepreneurs in the space, and that's something that, as we collect battle scars as an ecosystem, I think we're going to learn time and time again. Okay. So who are the panelists? We'll start left to right. We've got Naseem from a16z Crypto, Taylor from MetaMask, Corey from OpenSea, special guest Hudson from Crypto Twitter, and then another special guest, Mudit, also from Crypto Twitter — oops, no, from Polygon. Polygon, let's say that. Cool. All right, guys, we'll start with a basic question, and I'll ask Naseem first, and then the rest of the panel. Naseem, for a normal, representative Web3 project in the ecosystem, what are the Web2 parts of the tech stack, what are the Web3 parts, and what are the security-relevant parts of each? I actually feel like that requires a fairly long answer, but we do spend quite a bit of time on this, from the early days of a company up through the ongoing updates to decentralized protocols.
I would say it starts with the people. It's essentially a set of people who want to design the components, build them, deploy them, make them available, and maintain them over time. So you start introducing devices to build the components on, and you start involving the entire secure software development lifecycle, which is relevant to every single person building technology in general — I'm trying to cover what's common to Web2 and Web3 here. Then these people need to build an organization, right, and with that comes corporate security, as you mentioned earlier: endpoint security, device security, and so on. The appsec side is also very similar: you analyze your code as you develop it, and obviously you want to iterate fast, so you build your CI, you integrate, you run security testing and vulnerability scanning and all the stuff we know and love. It doesn't really matter whether you're building for Web2 or Web3 at this point.
In my mind, it spans network security, corporate security, appsec, security engineering, and key management. And by the way, key management is a part that is highly overlooked. A lot of people think, okay, I can deal with this later — I'll just have a single key on a Ledger or some fairly non-robust key storage — and push it out to later. But key management is a fairly complex thing that we have to deal with in Web3. I'd say these are the majority of the things we see. And obviously there's also physical security, which ties into key management and the security of the people as well, because it's not just about the devices. On that note, we've heard stories of people walking around certain places with crypto swag on and their phones out, so that ties into this exactly as well, right? Yeah. We actually had quite a few conversations with various teams in our portfolio that were coming here. Because decentralization is a spectrum — you start centralized, with just a few people, and decentralize over time — there's a point in time, and we're probably at that point right now, where any one person getting hacked or physically attacked can result in assets getting in trouble, or in reputational damage, or potentially in physical access to the company or the protocol. And I think the reputational damage angle is highly overlooked as well. It's not just about your protocol.
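To make the key-management point concrete: even a toy secret-splitting scheme shows why "a single key on a Ledger" is the weakest possible setup. Below is a minimal sketch in Python (hypothetical helper names; a trivial XOR-based 2-of-2 split for illustration — real systems use Shamir's Secret Sharing or MPC with proper threshold recovery) showing how a signing key can be held so that no single device or person can use it alone:

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Toy 2-of-2 split: XOR with a random pad. Neither share alone
    reveals anything about the secret; both are needed to recover it."""
    share_a = secrets.token_bytes(len(secret))
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares to recover the original secret."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

key = secrets.token_bytes(32)   # stand-in for a signing key
a, b = split_secret(key)
assert combine(a, b) == key     # both shares together recover the key
assert a != key and b != key    # either share alone is useless
```

The point is not this exact construction, but the property it illustrates: losing or leaking one share should not lose the funds.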
It's everything that comes around it. Do you guys have your stuff squared away or not, effectively? Yeah, pretty much. Anyone else on the panel? Go ahead. So when I think about Web2 versus Web3 security, especially right now, given how early we are with Web3 and Ethereum and blockchains, there is going to be a need for Web2 stuff in your stack — not even consumer-facing, necessarily. Who here actually doesn't run their project on GitHub, and uses something like Mattermost instead of Discord or Slack or other hosted services? You're awesome, whoever that one guy in the back is. Good for you. But unless you're doing that, you really need to be very, very careful about where your data is going, and keep up with that. Two other things I think are overlooked a lot, especially with Web3 and how open it is to contributions. First, watch your GitHub to make sure the PRs being merged are actually from team members you trust and have vetted. SushiSwap had a bad incident where someone joined their team and then put something into their web page that, I think, stole keys. Insider threat. Yeah, insider threat, basically. We'll talk about that. And secondly — I worked at the Ethereum Foundation as the org security lead for three years, and something I learned, especially as I was leaving and reflected back on it: there were just a handful, maybe two people, in the organization who had master access to just about everything. And that's incredibly bad. It was even worse earlier. So just bear with me. Yeah, like, what happens if those people get run over by a bus, right? Not even that. They could literally take down the Ethereum website, post on the official Ethereum Twitter, post on the YouTube, pretend Vitalik died, crash the market. There are a lot of things.
And there's a monetary incentive there. And there's a monetary incentive. Everyone has a price, right? Yeah. So the thing is, it wasn't that we just really liked and trusted the people doing it — we got lucky. There should be different ways, and people have theorized for years about how to do this best: separate the people who have access to the most critical parts of your organization, and have an audit trail — a tamper-proof audit trail they cannot wipe. That's what I think of, because Web2 security has seemingly mastered this at some level, but Web3 kind of forgot about it. So on the Sushi incident — first of all, glad we're not talking about the horse incident. Off limits. Yeah, off limits. That happened a while back. What actually happened was that one of the contractors had push access to the codebase. There was no PR review or approval process. They got angry about something, pushed malicious code, nobody noticed, and it got shipped. The key takeaway is that you should always manage access. People should only have access to the things they really need in a production environment. No single person should have push access; it should always be approval-based, with at least one or two required reviews. Expanding on that: minimize access to only the things people really need. Access should not be treated as a matter of pride. Co-founders or founders of a company, for example, shouldn't have access to critical things like your email or servers unless they're actually doing something with them. Just because someone has seniority doesn't mean they should have access. The thing is, people are people. No matter what, people will get compromised one way or another.
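The "audit trail they cannot wipe" idea can be sketched as a hash chain, where each log entry commits to the previous one, so silently editing or deleting history breaks verification. A minimal illustration in Python (hypothetical class and field names; a real deployment would also ship each digest to external, append-only storage so the log holder can't rewrite the whole chain):

```python
import hashlib
import json

class AuditLog:
    """Append-only log: each entry embeds the hash of the previous one,
    making after-the-fact tampering detectable on verification."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value before any entries exist

    def append(self, actor: str, action: str) -> None:
        record = {"actor": actor, "action": action, "prev": self._prev}
        payload = json.dumps(record, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        self.entries.append((record, digest))
        self._prev = digest

    def verify(self) -> bool:
        prev = "0" * 64
        for record, digest in self.entries:
            payload = json.dumps(record, sort_keys=True).encode()
            if record["prev"] != prev or hashlib.sha256(payload).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append("alice", "rotated deploy key")
log.append("bob", "approved production release")
assert log.verify()
log.entries[0][0]["actor"] = "mallory"  # tamper with history
assert not log.verify()
```

This is the property the panel is after: not that incidents can't happen, but that nobody with access can quietly erase the evidence.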
So you have to have that second layer of security, where even if someone gets compromised, the impact is minimal. A practical example: in the Ronin case, the bridge hack, an employee got compromised. That I can understand — it was a very well-orchestrated phishing attack, so I'll give it to him: okay, sure, he got phished. But the main issue was that that one single employee had access to everything. That one employee's access could drain 600 million dollars, which should never have been possible in production. That was the key problem in that hack — not that the phishing happened. Phishing is going to happen; you just have to contain the impact. Go ahead. I'll just add that if you want to see what's missing in Web3 versus Web2, just look at a Web2 security org chart. You have your incident response team, your corpsec group, your production security group, your appsec group, your security reviewers, your vendor security audits. All those roles are there because that's how you achieve security. Just hire samczsun. Well, not right now. But Sam doesn't scale, right? Yeah, yeah. So if you want to know what you're missing, look at the delta between your org chart and any well-established Web2 company's. So that's an interesting question, then. In the various lifecycle phases of a crypto project, what needs to be internal versus external? Obviously everyone goes out and gets an audit — it's commonplace. And I still love audits; I'm the co-founder of an auditing marketplace, so we love audits. But it's not the panacea. So, Corey, to you first and then the rest of the panel: when do you need to hire an internal CISO?
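The Ronin lesson — no single compromised account should be able to move funds — is the same property that threshold and multisig schemes enforce on-chain. A minimal sketch of the M-of-N idea (illustrative names only, not any real bridge's validator code; Ronin's actual scheme was a validator multisig whose threshold of keys an attacker managed to reach):

```python
class ThresholdAction:
    """A privileged action that executes only with M of N distinct
    approvals, so one phished account cannot act alone."""

    def __init__(self, signers: set[str], threshold: int):
        self.signers = signers
        self.threshold = threshold
        self.approvals: set[str] = set()

    def approve(self, signer: str) -> None:
        if signer not in self.signers:
            raise PermissionError(f"{signer} is not an authorized signer")
        self.approvals.add(signer)  # set: duplicate approvals don't count

    def execute(self) -> bool:
        return len(self.approvals) >= self.threshold

withdraw = ThresholdAction({"alice", "bob", "carol", "dan", "erin"}, threshold=3)
withdraw.approve("alice")      # one compromised key...
assert not withdraw.execute()  # ...cannot move funds by itself
withdraw.approve("bob")
withdraw.approve("carol")
assert withdraw.execute()      # quorum reached
```

The caveat, which Ronin also demonstrates, is that the threshold only helps if the keys are actually held by independent parties on independent infrastructure.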
We're starting to see later-stage projects hire a CISO, but is this something where you raise venture money and then, on your use of funds, employee three is an internal CISO? Well, I have a bias that says yes — maybe employee two, actually. But I think an underappreciated property of security is that it is not something that gets added on later. It is an emergent property of your design. If you don't start thinking about security from the beginning, you're not going to have security at the end. So yes, I would argue you should have them early. Sorry — yes, I would a hundred percent agree with that, because security is really about the whole system, and it's a social system, a people system. It's how you interact with your community, how you hire, who you hire, how you vet them. It literally touches everything you do, and if any one piece is insecure, that's an entry point. And the thing about these crazy hacking groups coming out of North Korea and other places is: let's say they just get access to your Slack, your internal Slack — you've got, like, 20 employees, right? They will use just that access, and they will sit there for literally months, read everything you do, and understand how you operate. Not creepy at all. By the way, it's super crazy. And they will use the information they glean there to escalate: to phish you, spear-phish you via email, or via Telegram, or LinkedIn DMs, or whatever you use. They're going to get you, and over the course of months and even years they'll just keep going and going and going.
So it's these little things we don't take seriously, because you don't have anyone on your team early who's thinking about security. That is the problem. And the solution is not to get a freaking security audit — I'm sorry. Love security audits, but no. It's a secondary check. Audits are just a secondary check. Yeah. Audits are not going to solve your internal security problems for you; projects have to solve that themselves. Yeah. And especially if you hire a good security person early on, they will instill that in your team as it grows. They will help design your team, your org chart, your organization, your products, how you work — from the get-go. Therefore, every layer of your stack, as it gets bigger, as responsibility gets spread across more people, as you scale up, as the financial incentives get greater, you're just going to be in a way better position. And I really cannot emphasize enough what Corey said: you cannot add security on afterward. It's not a thing. You can't say, oh, we'll just be lazy now and fix it later. That's not how it works. Security is a core part of your entire organizational design, your product design, and your system design. Yeah, let me make this a little spicy. Let's go. I think these guys are just securing their own jobs right now — I mean, to be fair, we all are — but just being practical: if you're just starting out, I don't think your second employee needs to be the security guy. What needs to happen is that your first developer needs to have a security mindset. Your first IT hire, which will probably happen when you're 10 to 25 people, needs to have some security background. How do you quantify that security mindset when you're hiring? That's a great question. It's not something I can define with a list of qualities.
But for me, if I look at someone's code — the way they write it, the way they document it — I can tell, it's just intuitive, that this person is thinking about security. If you just talk to someone, ask them their priorities, see how well tested their code is; even the code style will tell you how serious they are about security. So yeah: have a first developer with a security mindset, and let them train the others. Then your IT person will probably be your next hire — get someone with prior security experience, but it doesn't need to be a dedicated security person. And then, when you scale beyond maybe 20 people, that's when you should start looking into a dedicated security person of your own. Before that, just have internal people with a security mindset, plus contract external service companies to provide security help. External companies don't only do audits; they'll help you with internal security as well. Because the thing is, if your org is below 20 people, a security person won't really be fully utilized — unless you're someone like FTX, with 10x engineers pushing the whole codebase out; then it's a different story. But for normal orgs, yeah, security people can come in a bit later. And is the 20-person metric for projects at the application layer? Yes — obviously L1s and L2s are different. I just want to chime in on the point you made about the security mindset: what is the security mindset? I think one reason Web3 struggles with the security mindset a lot of the time is that we are so generally optimistic and visionary and forward-thinking. We're thinking about the potential; we think nothing is impossible. So much of our time is spent in this idealistic future world we're trying to build.
And that's necessary — we have to be that way in order to build these crazy things. However, I would say that I have a security mindset, and I would also say that I don't spend most of my time thinking about the potential of the future. I spend the majority of my time thinking about how we're going to fuck it up. Yeah, that's the security mindset. On that note, how do you ensure compliance? Something we talked about before is getting people to actually do the fundamentals — even simple things like having MFA enabled, right? Or, Corey, we talked about endpoint detection and response actually being used, and when people aren't using it, you kick them off the network. How do you get people to actually do the basics? So, when we were starting up — and we grew from me and my co-founder to 20, and now we've got over a hundred on the core team, and there's a thousand at ConsenSys right now — when we had two people, three people, and started bringing people on, the thing we did from the get-go was have good documentation about what was required and expected of our employees. It covered what they needed to do, and that they needed to do it across all of their accounts — there's no personal/professional split in crypto. So your personal Twitter needs to be secure if you're going to be part of our company. The other thing is that a lot of these services, like GitHub, let you enforce it: if you want to be a contributor to my repo, you have to have MFA on. We bought everyone all the little hardware doodads, right? Everyone has a Ledger. Everyone has a Trezor. Everyone has all of them — the YubiKeys and the Google one, the Titan. That one was pretty slick. I still go back to my YubiKey all the time, though.
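For a sense of what those MFA codes actually are under the hood: a TOTP authenticator app just computes HMAC over a time-step counter, per RFC 6238. A compact sketch, checked against the RFC's published test vector (note that hardware keys like YubiKeys use FIDO/U2F challenge-response instead, which is phishing-resistant in a way TOTP codes are not):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 8) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    then dynamic truncation to a short decimal code."""
    counter = struct.pack(">Q", unix_time // step)   # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8-digit codes)
assert totp(b"12345678901234567890", 59) == "94287082"
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen code expires quickly; the weakness is that a live phishing page can still relay it, which is why the panel keeps recommending hardware keys on top.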
And these things, right from the get-go, with the first employees, no matter what — then as you grow, your procedures are already in place. Your early employees also establish this floor of what's expected, right? And that's the culture; that's how you establish the culture. Nobody is going to step out of line when it's already there. And you call each other out when you see someone being a little bit lazy, when you see someone doing something a little bit off — even the founders, right? I mean, it starts with the founders, honestly. I feel like the first security people that need to be hired are literally the founders, because they need to have this adversarial mindset from the get-go, as we were saying. I feel like Web2 versus Web3 is almost the wrong comparison, because we're really just in the business of security-critical software. We had this conversation today with Corey: everyone here is writing code where on-chain code is essentially like ring zero in a kernel. There is no safety net besides whatever you put in place; no one is going to do the security for you. So coming in with this adversarial mindset as a founder, thinking through everything that is going to go wrong, is, as I was saying, the most important thing. It probably starts from the top: the culture and the DNA of the company need to be built around that. There's no way for a company to be successful without it, because the founders need to be behind every single one of these decisions, where obviously security always comes at a cost to other things, right? It doesn't live off on its own, right?
Like in a corner — it's actually always a balance between convenience, product features, and user experience on one side, and the safety of those actions on the other. So I think having the founders come in with this mindset right away is the most important thing. Is this something that VCs should include in their due diligence process — asking founders about their views on security? I think we don't want to be too prescriptive about the mindset people have before an investment. It's more that as soon as we meet with them, we do the rundown of everything and provide insight into all the threats they're exposed to, and, if they don't have this mindset already, make sure that after that first conversation they're already thinking about everything that can go wrong — because it's only a matter of time, right? As you were saying, it's a matter of price and time, essentially. That's how we approach it. Interesting. Switching from the project perspective to the user perspective: Mudit, you brought this up before — what do projects need to do to educate end users on their security hygiene? Exactly. That's one thing this ecosystem needs to understand: at the end of the day, security is really about users. Even if your protocol is secure, if nobody knows how to use it properly and everyone keeps getting compromised, then you have failed. You built on public-private key architecture, but you never told anyone how to secure the private key; everyone just copy-pastes it into Google Docs; everyone gets compromised. You have failed your mission. So user education is super important. MetaMask here is doing a great job at it, but we need a lot more of it.
And to get there, we really need to understand one thing in this space: it's not always about making your mindset very crypto-focused. You can be a crypto-anarchist, but you have to be practical. Stop telling 60-year-old grandmas to manage their own keys. They are much better off using a custodial service. "Not your keys, not your coins" — everyone's favorite slogan — but if you don't know how to manage your private keys, it doesn't matter whether the key is yours or not; it will just get compromised. So unless you know what you're doing, it's often better to go with a custodial product rather than trying to learn what you don't know. This is why even in traditional finance, as you saw, bonds and everything used to be paper-based, but eventually everything got digitized and everyone just uses a custodian. Nobody really buys shares in anything directly anymore; wherever you go, you're buying them through a custodial platform, because tradfi has understood that if you want to scale, you can't ask everyone to manage their own keys and secure everything themselves. You have to make it much easier. But what we in Web3 need to do is give users the choice. If someone is tech-savvy enough — if they want to run their own node, if they want to manage their own keys — they should be able to do that. But we shouldn't be shaming others for using Infura or other custodial products. We just have to change that mindset and educate people on both ends. So, one thing I would add on the UX — and a hundred percent, UX is such a big part of security and of helping people do the right thing — is that an underappreciated part of UX is that it's actually a team game.
It's every one of us in this room, every product, every project that works on Web3. The UX is not just how good your own UX is; it's how consistent your UX is with everyone else's. The way users learn what is safe and what is unsafe is consistency. Otherwise they have no prayer of going onto a website and knowing whether it's going to scam them, because they get a whole new UX every single time. Maybe this website just does things a little differently, and they have no way of telling — using their lizard brain, right there on that website — what they should or shouldn't be doing. My favorite example of this is logging in. On a dapp, you connect your wallet, and some websites immediately start showing you data associated with your wallet; some don't yet, or they immediately challenge you; some let you click around and then challenge you later. You don't have this concept of "am I authenticated or am I not?" Where is that? What is expected of me? And all of a sudden you click something and a popup asks you to sign something — and who knows what that something is. Hopefully it's Sign-In with Ethereum, but, you know. Compare that to passwords: users have three decades of experience with passwords and logging into websites. It's consistent. Every website is exactly the same. They know exactly what the experience is; they know what an authenticated state is versus an unauthenticated state. We're at ground zero with user education, and it's not just how good your product is. It's how good we can be as an ecosystem at being consistent and showing users the same experience over and over again, at the right moments.
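The consistency argument is exactly what EIP-4361 ("Sign-In with Ethereum") standardizes: one fixed, human-readable login message layout, so users can learn to recognize a legitimate sign-in prompt. A rough sketch of assembling that message (the field values below are made up for illustration; see the EIP for the full grammar, including optional fields like Expiration Time and Resources):

```python
def siwe_message(domain: str, address: str, statement: str, uri: str,
                 chain_id: int, nonce: str, issued_at: str) -> str:
    """Assemble a plain-text EIP-4361 sign-in message. The wallet shows
    this text to the user and signs it; the server verifies the signature
    and checks the nonce/domain to prevent replay and phishing reuse."""
    return (
        f"{domain} wants you to sign in with your Ethereum account:\n"
        f"{address}\n"
        f"\n"
        f"{statement}\n"
        f"\n"
        f"URI: {uri}\n"
        f"Version: 1\n"
        f"Chain ID: {chain_id}\n"
        f"Nonce: {nonce}\n"
        f"Issued At: {issued_at}"
    )

msg = siwe_message(
    domain="app.example.org",            # hypothetical dapp
    address="0x" + "ab" * 20,            # placeholder account address
    statement="Sign in to Example",
    uri="https://app.example.org/login",
    chain_id=1,
    nonce="32891756",
    issued_at="2021-09-30T16:25:24Z",
)
assert msg.startswith("app.example.org wants you to sign in")
```

Because every conforming site produces the same layout, the "lizard brain" check the panel describes becomes possible: a login prompt that doesn't look like this deserves suspicion.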
And I think that's something we really need to internalize and start pushing as standards. Yeah, that's exactly what I was going to say, but I was also going to shill Taylor's project, because they might not. If you want to see good user documentation and education around every little thing, go look at MyCrypto's FAQs and documentation. It's so good, because it will walk you through the mempool, it will walk you through what a private key is. All these other websites built on Ethereum assume prior DeFi knowledge, assume prior key-management knowledge. If you're really building for people outside the ecosystem — including people from other crypto ecosystems that have dissimilar key structures and so on — you should really spend a little bit of time and invest in a tech writer, or even just point to other people's pages that provide that education for users. And when that "sign this message" box pops up — there are so many people who don't realize that doesn't cost money. That is not a transaction; that is signing a message with your keys. So we're just so far behind on that, because the protocol layer can't be the one to teach it; users aren't running CLI geth and learning it from scratch anymore. That's where I learned it, but that was way long ago, and it's very different now. So yeah, find really good documentation and try to emulate it in your project. Yeah, I want to emphasize the amount of docs that I've written — but it's worth it, because you'll be surprised at how much people want to learn. So there's this track where we can make the UX better and we can give users more choices that better serve them. We should definitely do all of those things.
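On "signing a message doesn't cost money": with personal_sign, the wallet just signs some length-prefixed bytes locally, per EIP-191 — nothing is broadcast to the network and no gas is spent. A minimal sketch of the envelope that actually gets signed (the prefix exists so a signed message can never be replayed as a valid transaction):

```python
def personal_sign_payload(message: bytes) -> bytes:
    """EIP-191 'personal_sign' envelope: a fixed prefix, the message's
    byte length in decimal ASCII, then the message. The wallet hashes
    and signs these bytes locally - no transaction, no gas."""
    return b"\x19Ethereum Signed Message:\n" + str(len(message)).encode() + message

payload = personal_sign_payload(b"hello")
assert payload == b"\x19Ethereum Signed Message:\n5hello"
```

The leading 0x19 byte makes the payload an invalid RLP transaction by construction, which is precisely why message signing is safe to do for free: the signature cannot be repurposed to move funds on its own (though approvals granted via signed messages are a separate story).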
But if something is the current way of doing it and it's not clear and not obvious, take that opportunity to tell the user. Literally tell them. Don't treat them like they're idiots; treat them like they're smart, capable adults and tell them what's happening, because they will carry that knowledge and then actually pass it along to other people. And it's quite remarkable. You would think nobody reads docs, but it's weird when my words show up in other people's tweets, in other people's documents — people reference them — and I'm like, wow, people are really reading this. It's amazing. I also want to come back to what Corey was talking about: we do need to work together better. I'm told Nadav is in the audience — love you, Nadav. Nadav called us out recently. Now we're getting spicy, finally. OpenSea called MetaMask out, because there is so much user loss happening right now, and a lot of that loss is literally happening to users of OpenSea who are using MetaMask. We should be working together better to make sure that every single thing we're doing and showing users is way clearer. And we should be in lockstep, so that when OpenSea releases a new signing mechanism or a new way of handling something, MetaMask can actually, you know, not show gibberish. Literally not show gibberish. Please. Please. Anything but gibberish. And so we have some exciting things in the works that will hopefully get into production soon, and that will ultimately serve not just MetaMask users, not just OpenSea users — it'll serve everyone in the ecosystem. But specifically, right now, it will prevent the loss that we're seeing, and we are seeing an immense amount of loss. What's the way to solve it?
Is it just point-to-point interactions between, like, MetaMask and OpenSea, or do we need industry working groups? I know that's a can of worms; a lot of people don't like that. We have industry working groups, and they fall off time and time again because the new shiny thing comes out. Make them a DAO, tokenize them. Man, that's just so meta — economically tokenize the incentives to maintain these standards. But yeah, Taylor might be able to answer this better, but of all the groups I've seen, some of them have worked really great, and some of them floundered because the teams, especially the ones just starting out, whether they have a big impact or small, don't have the time to commit to going to a bi-weekly Zoom meeting. And then there's a lot of competing — what am I trying to say? For a while there were even competing ERC standards, from a few years ago up until now, for even the basics of how we're doing messaging and signing of transactions across Ethereum. That's a very interesting history to look back on. But to fully answer it, I guess: pretty much direct communication with the teams. If something's happening, just Telegram them, and then don't leave them hanging for a few days; literally get back to them soon, especially if it's money loss or security related. That's where you need a dedicated person doing at least security infra, or a CISO, or whoever, who can say, oh, let me grab my lead dev, we'll fix this in ten minutes. It all comes back to the point that someone on the team, hopefully from the founder side — to Naseem's point earlier — has to own security, right? Yeah. One thing I'd like to add on this conversation: I think we should learn from a lot of the great, UX-efficient security that has been put out there.
I think Apple is probably the top company in terms of UX security — not just in our space, in the tech space in general. I used to work there, on the security team, and saw a bit of the mindset internally. Take something as simple as logging in: unlocking your phone used to be PIN based, and we all know people just hate to remember PINs. So what did people do when they were allowed to bypass it? They bypassed it, right? I think only around 14% of people used to have the PIN on, and everyone else just had it fully off. Or even for passwords, right? Exactly — people would just use one-two-three, ABC, or whatever. And then the moment they switched to Face ID, that stat rose to over 90%. So if you think about mechanisms that make security easy and flow with the UX, people will just come along with you in the process. And, again, we were discussing this with Corey yesterday: I think the user interface can do a lot more for us as of today when we're thinking about making educated decisions. MetaMask and other wallets are starting to put in great features to show you what you're essentially doing by submitting this transaction to the network — what is going to happen, ahead of time. But something that is super interesting is that there is more to be done. You have approvals for your NFTs. These are permissions, right?
Similarly for your fungible token allowances — those could be managed directly within the wallet. The wallet should be able to do that, just like your phone manages your permissions, giving access to the camera, Bluetooth, location services, and so on. All of these are things we can learn from great user experiences on mobile devices, and I'm really hopeful we're going to see more of them as part of wallets. And similarly for phishing: we see a lot of people get scammed, sign the wrong setApprovalForAll, and get their whole wallet drained. Even address management should be possible within the wallet itself. Think about your wallet generating a new address on the fly, letting you send just the minimal amount of funds to pay the gas fees for the operation, without access to any other asset held by your main address. All of these things don't need to expose any complexity to the user; they just help them along in the journey. There's a startup doing that, right? Do y'all know about this? There's a thing where, when you launch a MetaMask transaction, there's a second window that takes the bytecode and puts it into human English. There's Hexagate working on that. There's a second one, Redefine. Yeah, and maybe Blowfish too — Blowfish just raised. And others. So, something I would add: 100% agree, I think all that stuff's amazing. And going back to where this whole thing started — Web2, right?
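The allowance-management idea above — the wallet tracking token approvals the way a phone OS tracks app permissions, with one screen to review and revoke — could be sketched roughly like this. Everything here is hypothetical and illustrative; real wallets would read approvals from chain state and revoke via `approve(spender, 0)` or `setApprovalForAll(operator, false)`.

```python
# Hypothetical sketch: a wallet-side registry of token approvals, modeled
# after a phone's app-permission manager. Names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Approval:
    token: str      # token contract (or collection) identifier
    spender: str    # contract granted the allowance
    amount: float   # float("inf") models an "unlimited" approval

@dataclass
class PermissionManager:
    approvals: list[Approval] = field(default_factory=list)

    def record(self, token: str, spender: str, amount: float) -> None:
        self.approvals.append(Approval(token, spender, amount))

    def risky(self) -> list[Approval]:
        # Surface unlimited approvals for review, like a phone's
        # "apps with access to your location" screen.
        return [a for a in self.approvals if a.amount == float("inf")]

    def revoke(self, token: str, spender: str) -> None:
        # On-chain this would be approve(spender, 0); here we just drop it.
        self.approvals = [a for a in self.approvals
                          if not (a.token == token and a.spender == spender)]

pm = PermissionManager()
pm.record("USDC", "0xSomeDex", 500.0)
pm.record("CoolNFT", "0xSomeMarketplace", float("inf"))  # setApprovalForAll-style
print([a.token for a in pm.risky()])  # the unlimited approval surfaces for review
```

The design choice mirrors the panel's point: the dangerous state (an unlimited approval) is made visible and revocable in one place, instead of being invisible chain state the user forgot about.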
I think one of the things we can also take away from the Web2 world is the depth of security controls that have been implemented. Right now we're kind of running in ring zero: when you sign with your wallet, it's the equivalent of root in an operating system. Adding those kinds of controls — sandboxing, capability-based access controls, access-reduction mechanisms — would be awesome. But there are more layers we can add, right? One of the biggest gaps in my mind right now — and I've learned this from talking to victims of phishing attacks — is that they often don't even know where they signed the malicious transaction. They're at a loss. And there's no introspectability. We're missing logs, and I think we're also missing the key linkage between on-chain and off-chain transactions. So not only do we need monitoring and logging on the project side, we need logging on the user side as well. Yeah — if we're going to trust users with their keys, their authority, we also owe them every tool they need to be able to exercise that correctly. And logging and monitoring is a big part of that. Yes. Specifically, one thing we've experienced at MetaMask, that OpenSea has called us out on and continues to call us out on: for the longest time, there were transactions. And transactions have financial value, and it's pretty easy to teach users that this is a thing, and that there might be some bad outcomes depending on whether you click this button. But now a lot more stuff is just off-chain, right? Or it's not on-chain until after a certain period of time, if the order is matched or whatever. And because of this super early assumption that everything is: there's a key.
And then you have transactions, and the transactions go on a chain. Because of that super early assumption we made, we did not see this coming: now just a signature — an off-chain signature — can actually lose you all of your NFTs or all of your tokens, or grant permission somewhere else, right? And so now we're playing catch-up while OpenSea is sprinting ahead, making things usable, making things scalable. We're playing catch-up trying to map it back. In reality, we should never, ever have treated an on-chain transaction signature differently than a message signature. Because of that very early design choice at the core architecture level, we are now remapping everything, and it's just a stupid amount of work, to be honest. And it takes way longer than it should. It's especially painful when users are losing money because of this every day — when we hear about celebrities losing their NFTs because of this thing that we know should be better, that's horrible. I also want to mention that in my talk tomorrow I'm going to be diving into some of the early, early choices of wallets, but also of the protocol itself, and into a lot of the stuff we're talking about here, because I think the biggest sin of it all is that it all traces back to this private key. This private key gives full authority over everything, right? That's the world we started with, and fundamentally it is so hopeless, so broken. Why is this single thing that you can't change, can't do anything with, the thing that grants so much permission? We're now seeing that at the token level we're trying to implement permissions and restrictions, but ideally that would actually live at the very core layer, right?
Where the thing that holds your stuff by default does nothing, and the first thing you have to do is grant a permission, and then you can take the action. So yeah, come to my talk tomorrow; we'll dive into this. But this is why I really want to emphasize: have conversations with the people around you, learn from your users, listen to them. The most valuable thing I've ever done is have literal one-on-one conversations with people who've lost all their money — just ask questions and let them talk, because you will learn so much about where your product is failing, and you will be able to serve them so much better if you carry those stories with you. It's not super fun, but it is the most valuable thing I think I've ever done to this day. Yeah, I agree with that. I'm on the boat that private keys suck — completely agree. And it refers back to the point I was making: theoretical security is one thing. People have built products that are secure in the best case, but if they are not practically secure — if nobody knows how to use your product, if nobody knows how to manage their keys, and so on — you're just failing at your goal. And then the other thing, on monitoring and alerting: that's another super important bit, not just for users but for your product as well. If your bridge loses 600 million, you shouldn't have to wait six days before you realize it. And that discovery wasn't because of monitoring — someone literally messaged them on Discord: hey, something's not right. So you should have proper alerting and monitoring, but also don't go overboard. Don't start logging private keys of users. You have to find the right balance there.
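The bridge example above boils down to an invariant check: tokens minted on the destination chain should never exceed collateral locked on the source chain, and a monitor should page someone the moment that breaks. A minimal sketch, with all values and the tolerance parameter purely illustrative:

```python
# Sketch of invariant monitoring for a bridge, under the assumed invariant
# minted <= locked (+ tolerance for in-flight transfers). Illustrative only;
# a real monitor would poll both chains' state on an interval.

def check_bridge(locked: float, minted: float, tolerance: float = 0.0) -> list[str]:
    """Return alert strings when the collateralization invariant is broken."""
    alerts = []
    if minted > locked + tolerance:
        alerts.append(f"INVARIANT BROKEN: minted {minted} > locked {locked}")
    return alerts

# Healthy state: nothing fires.
assert check_bridge(locked=1_000_000, minted=1_000_000) == []

# Post-exploit state: collateral drained, wrapped supply unchanged.
# This should page on-call within minutes, not surface via a Discord DM six days later.
alerts = check_bridge(locked=400_000, minted=1_000_000)
print(alerts[0])
```

The point is that the check is cheap; the hard part is deciding the invariants up front and wiring the alert to something that actually wakes a human.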
All of that aside, I want to take a step back and say we never really talked about the title, which is "Web2 versus Web3: same or different?" — because I think we all just assume they are the same. Web3 just sprinkles a few things on top of Web2 security; in case it wasn't clear, at least in my opinion, they are basically the same thing. Spencer here has been trying to guide us towards, I think, twenty questions or something he prepared, and we've managed to get through maybe three — we just went on a random spree. But yes, great job. Yeah, speaking of which, a curveball question. We were sharing war stories outside — there's a reason we're not going with the prepped questions. So, Taylor, you were sharing one about incident response, because obviously after you do the monitoring and the logging and — oh crap — something happened, what do you do next, right? What are the lessons learned from that? So, this space is decentralized, and a lot of the teams are therefore fully remote. We use Telegram, we use Slack, we use Discord, we use Signal — there are like 8,000 of them. I don't know about you, but my notifications are basically off, because otherwise I'd just be constantly inundated with information and wouldn't be able to think. However, if something happens, how do you let that person know? For example, this one time Hudson was giving a talk and he got SIM swapped while on stage, and the SIM swapper actually talked to his wife and, I think, told her that he was kidnapped — because they're assholes, because they're SIM swappers. And because Hudson and I have a relationship that goes back, and because Twitter immediately alerted that Hudson was SIM swapped and his account was DMing people asking for money, I had the actual in-real-life network and the actual phone numbers in place to be able to make contact, right?
The same thing has happened with my own team. Having each of those phone numbers, and making old-school phone calls that break through do-not-disturb mode, is critical. Almost every single incident response includes tracking down the person who has the phone number of the person who has the critical access to stop the thing. Yeah, two things. Number one, both of you are on a list, so if I call you, I go through do-not-disturb mode, right? Exactly. We have that set up. Because another time, someone posted something on my Twitter because I had an API key out there. Anyway, the second thing: when I got SIM swapped, as soon as word got out — by the time I got back home, everything was fixed. There were ten very critical group chats that just said: Hudson's SIM swapped, don't listen to anything he says. And my startup at the time, Oaken Innovations, immediately shut off all my access, cleared it completely. The Ethereum Foundation, in less than ten minutes after the SIM swap happened, had a round-table discussion — a live war-room Zoom call — and got all my access taken away in minutes. Did you guys have a plan for that, or was it just kind of spontaneous? We did have one. I talked earlier about how not-great the EF's stuff used to be; by that time, we had improved so much. We had a whole incident response plan, ready for this for the most part. And we had a dedicated CISO-type role that we have kept over the years and years, and we have a great person in it right now, who replaced me when I left the EF to do other things. But either way, yeah, that was one of the scariest things, because the cops were called, my spouse was crying.
One more thing — not related to this exactly, but something I'm really proud of my spouse, Lilat, for. When I finally got a hold of her — the first thing after resetting my passwords and going to the phone store — I called and said, hey, this is Hudson. And she was crying, and she was like, I'm here with a police officer. I know this sounds like Hudson, but I'm going to ask you two questions that only you would know. And I was like, thank you — I've taught you OpSec. Oh my God. So she knew it was me, and everything worked out from there. Yeah, I want to really highlight one point you had in there, which is having the plan. But the key is practicing the plan. Rehearsing. Doing tabletop simulations, documenting where the failures were and why it was hard, and smoothing it over before you actually need to do it in real life. Actually, one of my favorite stories of this: at a previous company we had a simulation where we had to disconnect our corp network from our prod network, and we had one room we could work from — but that room kept getting smaller and smaller, because we never needed it. And then in our simulation, when we did need it, we realized there was only one ethernet jack, and we had no way of getting all of us into that room to connect to prod and keep everything up. It's always the thing you don't think about that fails you in an incident, and you want to learn that before it counts. Sweat in training so you don't bleed in war. Another thing I think is important to take into account is that a lot of incident response — incident management — is about time, right? Time works against you. Someone, some entity, some group has already found the issue and tried to exploit it.
So your role is to expand as much as possible the time window in which time can work for you, right? We always discuss these things for governance protocols in particular: time locks, freezing funds for a specific amount of time, so that you actually give yourself and your teams some breathing room to go over the logs, go over everything that happened, and understand — is this legit or not? If it's not legit, and you have other mechanisms in place, great; otherwise, get ready to freeze assets across chains and so on. Making time work for you is one of the most critical things you can do. And a lot of people have been theorizing about this: how much time the bad guys need to spend in your trenches in order to get the assets and then get away with them. So that's something that is very, very important, and we'll never emphasize enough the need to bake time in whenever you have a new governance proposal — and, whenever specific thresholds or limits are hit, to make sure those operations take time and that you have freezing periods where you can actually cancel them. And I just want to hit on this point again: all this stuff layers together, right? You have your systems, you patch issues as you find them, you audit them, and then, when things do happen, you time-box how long the attacker has in your systems so you can quickly respond. All of this layers, and that's how you get great security.
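The timelock pattern described above — privileged operations queue up, execute only after a delay, and can be canceled inside the window — can be sketched as follows. This is a simplified model, not any particular protocol's contract; all names are illustrative.

```python
# Sketch of a governance timelock: proposals wait out a delay before they
# can execute, giving responders a cancellation window. Illustrative only.
import time

class Timelock:
    def __init__(self, delay_seconds: float):
        self.delay = delay_seconds
        self.queue = {}  # op id -> (earliest execution time, action)

    def propose(self, op_id: str, action) -> None:
        # Queue the operation; it cannot run before now + delay.
        self.queue[op_id] = (time.time() + self.delay, action)

    def cancel(self, op_id: str) -> None:
        # Incident responders use the delay window to kill a malicious proposal.
        self.queue.pop(op_id, None)

    def execute(self, op_id: str):
        eta, action = self.queue[op_id]
        if time.time() < eta:
            raise RuntimeError("timelock: delay not elapsed")
        del self.queue[op_id]
        return action()

tl = Timelock(delay_seconds=3600)
tl.propose("upgrade-bridge", lambda: "upgraded")
try:
    tl.execute("upgrade-bridge")   # too early: raises
except RuntimeError as e:
    print(e)
tl.cancel("upgrade-bridge")        # the window let us stop it in time
```

The delay is the "make time work for you" mechanism: the attacker's proposal is visible and inert for the whole window, so monitoring plus a cancel path turns detection time into prevented loss.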
That's defense in depth. One little bit on that. With incident response, I agree time is very critical. So what everyone needs to do is have an IR plan, have disaster recovery plans — literally have playbooks for every scenario you can think of, for anything that can go wrong — so that even if you're out sick someday, someone else in the org who doesn't know as much about the product can just open your playbook, follow the exact steps, and be all good. And you need to practice these playbooks as well. It's not just about having the content out there; you need to practice all of them, and everyone should know what needs to happen, when it needs to happen, and how to follow the playbook. Only then will you be able to respond properly in a timely fashion. Another bit I would add, coming back to the SIM swap thing. With SIM swaps — in fact, not just SIM swaps, but any product where the company's support department can do whatever they want with your access, where you don't have control over the product, which is most Web2 products — either get something highly secure or don't depend on it, and definitely don't depend on the cheapest one. For SIMs, with almost all of the popular providers, any of their support engineers can SIM swap you. So never use mobile numbers for 2FA or recovery or anything permissioned. What do you think of Efani? That's the one supposedly non-SIM-swappable cell phone provider. I haven't looked into them too much, but there are regulatory — I mean, legal — requirements for them to allow specific operations, like porting your line to something else. So as long as that operation is required by law from these providers, there is an entry point, right? It's just a matter of how much social engineering you can do.
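The "never use mobile numbers for 2FA" advice points at authenticator-app codes instead, which implement TOTP (RFC 6238): the code is derived from a shared secret and the current time, so there is no SIM or phone number for a support engineer to port. A minimal sketch of the algorithm:

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over the time-step counter,
# then dynamic truncation down to a short numeric code. No SIM involved;
# only the device holding the shared secret can produce the code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's SHA-1 test secret is the ASCII string "12345678901234567890",
# shown here base32-encoded as an authenticator app would store it.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, at=59, digits=8))  # RFC 6238 test vector at T=59 -> 94287082
```

Recovery codes or a hardware key layer on top of this the same way: each factor removes a support-desk entry point rather than adding one.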
People were saying that Google Fi was actually not SIM swappable because there's no one to talk to in customer service. But you know, it could still happen — no one is really safe, even with the more secure cell phone providers. Yeah, exactly. You obviously want to aim for non-SMS 2FA, but if SMS is mandatory, go for a Google Voice number, for example — there's no SIM to swap. Go for services that do not have a SIM associated with them. Yeah, and it comes back to the whole stack, right? Layering this on top: get the most secure SIM, and then have nothing on that SIM — or only the two services that actually have to be on it — and have that be a number that isn't really your primary number. All of these things layer. And then you're not signing up for marketing databases with it or anything like that. Do not, yeah. Yeah, or this conference. And this is why we talk about the culture, right? We talk about the organization, we talk about the people, and how important that is from the ground up. Every little decision you make has to be the most secure one, so that you have all the different layers on top of each other, so that it's less likely that really bad things happen — and even if they do happen, you can mitigate the loss and respond quickly. We're getting ushered off the stage. Thank you for coming. Love you all. Thanks everyone.