My name's Danny. Thanks for having me. I work at the EF, on the research team, in name. I do a lot of things that aren't just that, but, you know, here we go. So, resilience. I want to talk about what we might call the metagame of Ethereum L1. We're trying to ship a lot of stuff, and there's a lot of things underneath the hood that maybe aren't quite clear, things that are being worked on and thought about at all times.

I did ask the internet what I should talk about at DevCon, and got some answers. L2, very important. There's going to be a lot of stuff to talk about this week. Again, Hudson, if you want to talk this evening, we can exchange some fun stories. Kevin, I'm glad it worked; the pseudonym has fooled everyone. And Johnny Ray: I won't do a handstand up here, but I love handstands, and if you all want to do handstands out there, we can join in. But that's not what I'll talk about today.

So, again, I'm beating this thing to death, but the merge happened. Finally. It did indeed take a village. A lot of those people are up here, a lot of those people are out in the audience, and they're also distributed all across the world. Thanks, guys.

A quick bit of perspective on what happened. It's been a bit of a non-event, which is good. That's how it's designed. You can see the two little spikes at the end: it looks like participation fell off a cliff right around the merge, picked up, went back down, and has been leveling off since. I think it's worth zooming out, because this chart starts at 94%, not at zero. This is what participation actually looks like. There's a gap because our infra wasn't collecting at that point, but it's pretty damn good. When the system was designed years ago, we didn't know what participation would look like. We thought, okay, normally it's going to be above two-thirds.
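That two-thirds number isn't arbitrary: Casper FFG only finalizes a checkpoint when more than two-thirds of the active stake attests to it. As a minimal sketch (the threshold comes from the protocol; the sample rates are the design-time guesses and the roughly 94% mainnet figure mentioned above):

```python
# Casper FFG finalizes a checkpoint only when more than two-thirds of the
# active stake attests to it, so participation rate is the key health metric.
FINALITY_THRESHOLD = 2 / 3

def can_finalize(participation_rate: float) -> bool:
    """True if the observed participation rate clears the 2/3 finality bound."""
    return participation_rate > FINALITY_THRESHOLD

# Early design guesses (70-85%) versus roughly what mainnet shows (94%+):
for rate in (0.70, 0.80, 0.85, 0.94):
    print(f"{rate:.0%} participation, finalizing: {can_finalize(rate)}")
```

Even the pessimistic 70% guess would still have cleared the bound; the point is how much headroom 94% participation gives.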
Normally the incentives are such that people are going to turn their machines on to make sure that we're finalizing. But we thought 70, 80, 85 percent would be kind of normal. Really, it turns out, people are extremely obsessive about every last attestation. I've got some laughs up here, because if you're on a client Discord, somebody misses one attestation in a month and they're like, what's wrong with my machine? Anyway, shit's pretty good. There's a couple of hiccups here and there with a couple of clients, and people are resolving them and moving forward. We have a nice, stable graph.

Blocks: turns out those are good too. They're coming out at a healthy clip. There's actually three colors on this graph. Blue is successful blocks each day, and almost all of them are, but we do miss a few slots from people being offline. I was actually talking to Terence, and it turns out that we miss fewer slots now than before the merge. So even though there's more complexity in the system, Terence's hypothesis is that there's a bit more money on the line: you really want those blocks now, with those fees. So the number of blocks is even better than it was before.

And then what about reorgs? There's another color on this graph, presumably. It's orange. There weren't many before the merge, and there's not many after. In fact, Michael Sproul dumped data from his node: he's seen 26 reorgs, although some of those might be double-counted, and 85 orphaned blocks. All of these generally show very late properties. If something gets reorged, generally we're seeing it land in the last two-thirds of a slot, or even right on the slot boundary, so the network gets a little bit confused and then resolves it. And I should say, all of these reorgs are of depth one. Also from Terence (check out his talk in two days): 50% of the orphans, the blocks that tried to make it on-chain and didn't, came from MEV-Boost relays.
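The timing pattern behind those late blocks can be sketched. Mainnet slots are 12 seconds, and attesters vote on the head of the chain about a third of the way into the slot, so a block arriving in the last two-thirds of its slot risks being orphaned or reorged. The classifier below is an illustrative sketch, not any client's actual fork-choice logic:

```python
SECONDS_PER_SLOT = 12
# Attesters vote roughly one-third of the way into the slot (~4s on mainnet).
ATTESTATION_DEADLINE = SECONDS_PER_SLOT / 3

def classify_arrival(offset_s: float) -> str:
    """Classify a block by how many seconds into its slot it arrived."""
    if offset_s <= ATTESTATION_DEADLINE:
        return "timely"      # attesters saw it before voting on the head
    elif offset_s < SECONDS_PER_SLOT:
        return "late"        # last two-thirds of the slot: reorg-prone
    else:
        return "next-slot"   # landed on or after the slot boundary

for offset in (2.0, 9.5, 12.0):
    print(f"arrived {offset:>4}s into slot: {classify_arrival(offset)}")
```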
There's additional latency and complexity in that relay path. People are working through it, but it's an interesting component here. Also, it depends on your perspective: some nodes might see a little reorg while another node always saw the correct head. And again, these numbers work out to about one a day, so things are pretty smooth. For comparison, the uncle rate in proof of work, because of all the redundant work the competition in proof of work creates, was something like 4% to 8%, and we're near zero now. So things are generally working.

This is what it looks like. It's plain and simply healthy. Again, finality: stepwise, very nice. Almost the entirety of the beacon chain since genesis has looked exactly like that. Except for this one time. I was out to dinner on a Friday, and it's not a text you want to get on a Friday night: "There's something up with mainnet," from Paul. The network stayed live. We lost some block proposals, and we didn't finalize perfectly in two epochs; it looks like a few took three epochs to finalize, and it was resolved very cleanly. I know, victory lap on the merge, but also victory lap on the beacon chain. It's been an incredible two years.

And ultrasound Ansgar: ultrasound money, ultrasound Justin Drake. The Ansgar part is a nod to Ansgar convincing us to merge five or seven days earlier to save on issuance. Justin's giving a great talk tomorrow; check it out. And before I move on, there is a merge data challenge going on. All of those were very simple metrics, and there might be some more interesting stuff going on underneath the hood. Submissions are due by the end of the month, so you've still got plenty of time.

Certainly from the inside we feel the pain, but I know the pain is felt from the outside as well. In fact, one of the suggestions for my talk was how we, quote, ship faster.
I don't think I'm going to talk about how we ship faster, but I want to talk about why it's slow. And it feels like this, both from the inside and the outside. Hopefully this is not a very busy road. So much of the complexity of shipping here is fundamental. It is fundamentally new research, fundamentally new mechanisms, fundamentally new networking and new cryptography. Ethereum has the "luxury" of tons of backwards-compatibility considerations at every moment of every day. And quite frankly, distributed systems are complicated. It's much easier to get a computer to do something than to get a bunch of computers that you don't control to do something together and agree, always.

And quite frankly, much has improved. A lot of the conversation in the past couple of days, with a lot of the L1 devs and a lot of people involved in shipping and improving Ethereum, has been: wow, this thing's working well. This is a moderately well-oiled machine. We're moving faster, we're getting more done than we're used to. Some of that's because more of the core research questions have been answered, so what remains is a lot of engineering. A lot of it is alignment and ethos. A lot of it is the sophistication and specialization of client devs. We have a lot more people on testing. We have DevOps wizards driving our testnets and helping us at every step of the way. Dedicated security analysts, all sorts of academic collaborations, really fun development retreats. And the process is refined. It's really moving well. Things are slow, and things aren't going to be fast, but they're moving pretty well right now.

But there are a number of considerations that are at odds with speed. Things that we're thinking about and optimizing at all times that, if you looked at them from a naive perspective, you'd say: you're shooting yourself in the foot, you're making this hard. But there's good reason that we make things hard.
And that's the metagame of Ethereum resilience. I might touch on some of this again later, but we're optimizing for an infinite game. We're not optimizing for a pump and dump. We're not optimizing for an acquisition. We're not optimizing for people being rich tomorrow. We're optimizing for Ethereum existing and running for 50-plus, 100-plus years, and being a foundational protocol of the internet and for humanity. We're trying to continue to play the game. We're trying to have redundancy built into the game. We're trying to be able to recover in the event that the game fails. When you're thinking about 50-plus years, shit's going to happen, and we need to be able to pick up the pieces and keep moving. In fact, we need to be able to harden after that adversity: not only can you pick the pieces back up, but can you come back stronger? And we're tuning for ossification and avoidance of capture. The more valuable this thing is, the more valuable it is for people to get their hands in, get their interests in, and this thing needs to be robust against anything that might happen in the next 50 or 100 years.

When we're thinking about Ethereum resilience, there's a number of ways we might think about it. The most obvious is probably the first: the protocol. We're trying to make sure this is a resilient protocol, that this thing works. Next is the instantiation of the protocol: the network. Does this abstract protocol not only work fundamentally, but does it work live? Is it resilient as an instantiation of tons of computers distributed across the world doing their thing? And then there's the social layer on top. Is the social layer resilient? Can people come and go? Can we avoid capture? Can we deal with issues in that layer? I'll get into each one of those.

Oh, and shortcuts are obviously available. We could have a single client. We could have a single dictator.
We could have truncated R&D and just shipped it. We could have easy centralizing solutions. We could have a willingness for downtime. And we could have processes that are ripe for capture. That's not what we're doing. That's why it's taking a long time, and that's how it is.

So, protocol resilience. There's a lot that goes into this, and it's probably one of the more obvious pieces. Tons of research and design. Tons of security and testing. Nothing goes out lightly. The amount of time and effort put into simplicity, and into ensuring that this system stays extensible over time, is profound. There's a lot of hard jobs in Ethereum L1, but if you've worked on research for any amount of time in this space, you've thrown out things that you've worked on for easily a year. Things you've spent countless hours on, and you've just said: you know what, it's not right. It's not simple enough. It's not right for Ethereum. And that is a regular, regular component of this process.

Then there's being able to operate under adverse conditions, and being able to recover from failure modes. One guiding light here, and I highly recommend Vitalik's piece on functional escape velocity: you could throw everything out of protocol, or you could try to build everything into the protocol, saying okay, this is good, let's throw it in; this is good, let's throw it in. Instead, we're looking for that minimum functional escape velocity, and that's an art. You don't get that easily. You get it through years and years of thought and iteration.

Many protocols only try to avoid failure, whereas Ethereum is borderline obsessive about the fact that something can fail. In a crypto-economic protocol, you essentially get X properties unless some particular thing happens.
Usually that thing is an attacker of some size, someone willing to throw a bunch of money at the problem or burn a bunch of money. Protocols work unless you hit those thresholds. But on a 50-plus-year time horizon, I can't predict the future, but shit happens. So we're borderline obsessive with not only trying to avoid these failures, but being able to recover from them. Which takes time.

The next is network resilience. We have a really heterogeneous network, and that provides resilience in so many ways. We're multi-client and multi-layer. Justin actually pointed out there's another layer: we think about the consensus layer and the execution layer, but there's also a cryptographic layer. We rely on a multitude of black-box crypto tools, which allows for expertise and a separation of layers. We have hobbyist stakers. We have home nodes and regional diversity. Every single one of these things makes shipping harder (arguably multi-layer doesn't). The fact that we have, like, nine clients working on this thing does not make shipping easier, but it makes the network more resilient under failure modes. It makes users more resilient, in that they can pick different things in the case that something fails. Hobbyist stakers allow for a diversity of participation, and they act as backups in the event of failure modes; I'll talk about that. Home nodes and regional diversity: similar things.

When we think about this heterogeneous network and how it helps us, we really obsess over perfect distributions. We want, say, 20% on each client, so that if one goes down, we're 100% good. That's the ideal, and it optimizes for continuity of the network: the network can be continuously resilient if you get these perfect spreads.
So, for example, if we have five clients perfectly distributed across the network and one goes down, you might get a few fewer block proposals until it comes back up or people switch their nodes. But you get finality. You get a really nice quality of service. It's great, and there's tons of reasons, as a whole and as individuals, to optimize in that direction. But diversity also helps us in imperfect distributions, and these are more the tail-risk scenarios. If client diversity looks like that on the network and that large slice goes offline, it's chaos, right? We're not finalizing. We're getting maybe 50% of the blocks we're expecting, and shit's going down. But the network continues. Blocks are built. Transactions are processed. And users have options. In that event, over the course of a day, either that majority client gets fixed and the network stays online, or it keeps having issues and users switch to other clients, recover, and make sure the network continues. If there's only one client, you have zero ability to recover until they turn that thing back on.

This goes for some of the other things here too. People sometimes say: why are we optimizing for home nodes? Why are we optimizing for home stakers? If you look at the staking distribution, home stakers are a third or less; clearly it's not working. And it's true, that does not give us the kind of perfect distribution that helps with continuity in the event of intermittent failures. But it does help with recovery. It's critical for recovery, critical for these failure modes. In the event that some mega pool, or a mega cartel of pools, finalizes something invalid and tries to take the chain over, you have a resilient ability to continue.
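The continuity difference between an even client split and a supermajority client can be made concrete. Finality needs more than two-thirds of the stake attesting, so with five clients at 20% each, losing any one still leaves 80% online; with a 70% client, losing it stalls finality outright. A minimal sketch (the client names and shares here are hypothetical):

```python
FINALITY_THRESHOLD = 2 / 3  # finality needs >2/3 of stake attesting

def survives_outage(client_shares: dict[str, float]) -> dict[str, bool]:
    """For each client, check whether finality survives that client going offline."""
    return {
        client: (1.0 - share) > FINALITY_THRESHOLD
        for client, share in client_shares.items()
    }

# Hypothetical even split across five clients: any single outage is survivable.
even = {f"client_{i}": 0.20 for i in range(1, 6)}
print(survives_outage(even))

# Hypothetical supermajority client: its outage alone stalls finality.
skewed = {"big_client": 0.70, "client_b": 0.15, "client_c": 0.15}
print(survives_outage(skewed))
```

Note that in the skewed case the chain can keep producing blocks and eventually recover once users migrate off the failed client, which is the dynamic described above.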
That big pool finalizes something crazy, and that small chunk of hobbyist validators becomes the backbone of Ethereum and continues forward. And they don't care. Ethereum doesn't care. If that had been just three large pools and their cartel, it would be so much more difficult to recover. So yes, we do make things hard for ourselves, for continuity and for recovery.

There's also these softer things that we're optimizing all the time in the social layer. Multi-client is very important for network resilience, but it's also very important for a diversity of perspective. It's very important for ensuring that no stone is left unturned. It's important for the security of this thing, and for how everything communicates and comes together. Similarly, a diverse staking set brings more people to the table, and a more diverse perspective on the needs of stakers and the desires of the network. Optimizing for the global: very similar. We have an almost comically open research process (just ask the academics), which brings in a multitude of perspectives and helps harden the social layer and ensure that it stays diverse. Similarly, open processes and an open door all around.

Albert Ni once said Ethereum is an intellectual gravity well. He said this five years ago, and it felt kind of true then, but it's increasingly true. Every time I talk to new academics or new people joining, the obsession and the fervor to be involved in this intellectual pursuit of Ethereum, the ultimate nerd snipe, just increases more and more. Similarly, Ben calls Ethereum a bazaar, in the sense that it's not top-down and controlled. It's not organized from the top down. It's not a well-structured thing, and there's chaos.
But time and time again, because of this open nature, this open market of ideas and intellect and engagement and software development, people just show up. They join the party at the right time. They help move things forward and get things done. This open, wild structure is not how you optimize things and move as quickly as possible, but it is a way to be resilient.

Ethereum's L1 structures tend toward a multitude, and thus, I would say, Ethereum's L1 structures tend toward ossification. The more people at the table, the harder the conversation becomes when changing things. I won't get into it too deep, but I personally believe Ethereum ossifying in the not-so-distant future is incredibly important and incredibly valuable. Like I said earlier, the ability to modify, change, or manipulate this machine is very valuable to us, because, quite frankly, Ethereum's not done. Ethereum needs to reach that functional escape velocity. But it will be increasingly valuable to others to attempt to manipulate it for their own ends. So a lot of what's involved in bringing more people to the table is making sure we have a more resilient protocol while we still can change it, while ultimately tying our own feet down so that, in the end, it can't be moved.

Where does the EF fit in? I'll probably talk about this a bit, and I think Josh is going to have some interesting perspectives on it as well. But in general: we're here to help. We're here to coordinate. We connect the dots. Some very interesting and valuable work does come out of the Ethereum Foundation, but tons and tons more comes from elsewhere. The EF is increasingly just a small piece of the puzzle.

And this is the last suggestion: reporting back from the trenches, the art of project-managing the merge.
One, Evan, I'm not a project manager. But I think the biggest thing, if someone were to attempt such a process in the future, is: know enough to connect the dots, contribute enough to gain the respect of the group, and ultimately get out of the way. There are incredible experts at every layer of the stack, and creating the room for people to do the hard work is one of the most important things here.

I do want to flip this. L1 is thinking about resilience all the time. Every piece of the puzzle is about resilience, about Ethereum lasting and being a utility for the future. But let me turn this onto the application layer. I think it's definitely time to think long-term. I see a lot of unnecessary debt taken on in the application layer, whether that be complexity, governance where things maybe shouldn't be governed, upgradability, or bad token distributions. These things are liabilities, even if they can help you. Small bits of them can be very important to applications, but if we take a naive look at applications, they take on quite a bit of this debt for a very unclear return.

I would suggest, for the application layer: more Unix philosophy. Do small things. Build little widgets on chain rather than behemoth contracts that try to do it all. When in doubt, minimize or eliminate governance. And when you do need governance, make it small: it governs this one component that interacts with all of these ungovernable components. Governance, the ability to manipulate and change things, just like with L1, is going to bite you in the ass if it's not done with incredible care. I recommend tending toward ossification. Similarly, if you need upgradability today, have a path by which you won't need it in the future. And I think there must be more interesting value-generation models than the ones currently being explored.
There are clearly some very interesting things being done with Ethereum, with tokens, governance, and so on. But I promise you, on this new landscape of coordinated games on top of Ethereum, there are going to be interesting and potentially less risky value-generation models. Check it out. Explore the non-financial, especially with L2s opening up more scale for Ethereum. There's much more room in this stack to do things beyond just DeFi, beyond just that speculative cool little picture. Especially in the identity space and the privacy space, there is a lot of incredible work to be done in the next few years.

Oh, and if you're building L2s: it's time for fraud proofs, and it's time for decentralized sequencers. I think that goes without saying, but if L2s are going to inherit not only the security but the legitimacy of Ethereum, we can't go without this stuff. The former is obvious; the latter, not so obvious. You can construct secure L2s without a decentralized sequencer, but when you start thinking about regulatory risk and other things at play here, these are a must.

So I suggest playing the metagame of resilience across the stack, not just at L1. Unify this whole thing. Not everything built on Ethereum is going to last for 50-plus years, but the applications that matter will.

And a quick happy birthday to someone in the audience, without whom I could not do any of this. Thank you.