Good morning everybody. My name is Justin, and along with Alvarez and our friend Kelvin from Optimism, we're going to go over new technologies for on-chain gaming. So Alvarez and I work on Lattice, along with many others. It's a project that came out of 0xPARC in order to push the envelope of on-chain gaming. We basically want to enable people to build crazy things on top of Ethereum — things like full-on virtual worlds: worlds whose rules run on the EVM and whose state is secured by Ethereum. People usually refer to those projects as on-chain games, but we prefer to call them autonomous worlds, because we think it goes well beyond gaming. It's about giving autonomy and sovereignty to complex systems. The problem is that building large on-chain projects is actually really hard, which is why today we're going to go over two new key pieces of technology built by Lattice and Optimism in order to enable autonomous worlds. And I'm going to pass it to Alvarez to go over the first one. Cool. So the first key piece of technology that we want to introduce today is MUD. MUD is an on-chain game engine — or, as we like to call it, an engine for autonomous worlds — that we develop here at Lattice. When we started working on on-chain games, there was no engine to build upon. So we had to run into all of these very common, very general, and also very non-trivial problems, problems shared by all the games we were building. So we set out to build this key missing piece of infrastructure, and we built MUD with the goal of solving all the hard problems of building on-chain games. So what are the hard problems of building on-chain games? Well, they fall into mainly three categories. The first one is how to make sure that your client and your contract state are always in sync. The next one is how to architect your game in a way that makes it easy to add more content later without having to refactor the entire code base.
And the third thing: if you care about impact beyond just an individual game, how do you make sure that your game is interoperable with all the other games out there? Before I go into how MUD solves all of these problems, let me quickly go over the common way these challenges were approached before. Usually, you have one struct per entity type in your contract. If you want to have a monster, then you have a monster struct, and in that struct you store all the data for that entity. Then, of course, on the client you have to replicate that interface in order to represent the state there. You have getter functions for each individual type of entity that you want to sync to your client, and you implement your logic based on these specific structs. And every time you modify one of these data structures, you have to emit a custom event that can then be caught by the client to update the local client state. This sounds pretty annoying already, but it gets even more annoying when you want to add content, because now you have to add a new struct, and you basically have to edit your entire network stack to make it compatible with this new struct — from the events, over the getter functions, to the event handling on the client, you have to modify everything. And then on the interoperability side, basically all you have are existing interfaces like ERC-20, ERC-721, et cetera, which were not built for on-chain games and are very limited in what they can express, right? So our goal with MUD is to solve all of these general problems so that you, as on-chain game developers, can just focus on making a fun game. All right, so that was the why. Now let's go into the how. How does MUD solve these problems? MUD is built around an architecture pattern called entity component system. This is a very popular pattern in the traditional gaming industry, for a reason.
But if you're not familiar with it, I'm gonna give you a brief crash course on ECS. In ECS, an entity is just a numeric ID — in our case, just a uint256. And then you have components that store the data for these entities. You can think of a component as a mapping from the entity ID to the component value — basically a fancy standard Ethereum mapping. And then you have the systems, which implement the logic. So components are only data, and systems implement the logic. Systems don't act on specific entity types, because there are no specific entity types in ECS; rather, an entity is just the collection of components that are attached to it. Your move system, as an example, doesn't care about whether it's moving a donkey or a dog. It only cares about whether the entity it is moving has a position component attached to it, and then it can modify that position component. And if you think about it, that's actually in a way how Ethereum already works today. You can think of addresses as entities, and then a token contract — like an ERC-721 token contract — can be thought of as a component and a system mixed together and attached to this entity, this address. If you wanted to model this in pure ECS, you would just have a balance component that stores the data and a transfer system that implements the logic of transferring stuff. All right, so with ECS in mind, how does MUD handle the state sync? In MUD there's one very central contract, the world contract. Every time you add a system, and every time you add a component — which is also a contract — it gets registered on the world contract. And then every time that component gets updated, an event is emitted automatically through the world contract.
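The ECS model described above can be sketched in a few lines of TypeScript. This is not MUD's actual API — the names and shapes here are purely illustrative — but it shows the core idea: entities are just IDs, components are entity-to-value mappings, and systems are functions that only care about which components an entity has.

```typescript
// Minimal ECS sketch (illustrative, not MUD's API).
type Entity = number; // on-chain this would be a uint256

// A component is nothing but a mapping from entity ID to a value.
class Component<T> {
  private data = new Map<Entity, T>();
  set(entity: Entity, value: T) { this.data.set(entity, value); }
  get(entity: Entity): T | undefined { return this.data.get(entity); }
  has(entity: Entity): boolean { return this.data.has(entity); }
}

interface Position { x: number; y: number; }

const position = new Component<Position>();
const movable = new Component<boolean>();

// A system implements logic; it doesn't care what "kind" of entity
// it moves, only that the entity has the required components.
function moveSystem(entity: Entity, dx: number, dy: number) {
  if (!movable.has(entity)) throw new Error("entity is not movable");
  const p = position.get(entity)!;
  position.set(entity, { x: p.x + dx, y: p.y + dy });
}

// It works the same whether this entity is a donkey or a dog.
const donkey: Entity = 1;
position.set(donkey, { x: 0, y: 0 });
movable.set(donkey, true);
moveSystem(donkey, 2, 3);
console.log(position.get(donkey)); // { x: 2, y: 3 }
```

In MUD itself, components and systems are contracts registered on the world contract, but the mental model is the same.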
That means the client only has to listen to this one central stream of events coming from the world contract and can then keep the local state in sync. And the great thing is, with MUD you don't have to worry about any of that, because MUD handles all of it for you. What you have to do as a game developer is just create a component contract, give it some ID, then create a component on the client, give it the same ID, and MUD handles the state sync automatically. The great thing with this general approach is also that we can provide generic indexers that help your client catch up faster and reduce its RPC load. And you don't have to write any custom subgraphs, because all the data is stored in a very generic way — that's why we can have these generic indexers. Cool, now to the fun part: adding content. As a quick reminder, in ECS entities are just a collection of components. So our fighter entity here is actually just a collection of a health component, an attack component and a movable component. Now, if we want to add stronger entities like a dragon, we can just modify those component values — give it more health, give it more attack — and suddenly we have a dragon. Another way to add more content is to recombine existing components: for example, remove the movable component and suddenly you have a defense tower. And the last way of adding content is adding new components. Every time you add a new component, the number of possible component combinations actually doubles. So just by recombining your components in a new way, you have double the number of entity types you can represent. As an example, you add a healing component, and now you can build a healing shrine, a healer and a healing potion without changing any of the existing logic. And then when it comes to interoperability: in theory, everything that you build on-chain is already interoperable with everything else.
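The "adding content is just recombining components" idea above can be made concrete with a small sketch. Again, these names are illustrative, not MUD's API: the point is that none of the entity types below require new logic, only new combinations of the same components.

```typescript
// Sketch: content = component combinations (illustrative, not MUD's API).
type Entity = number;

// Each component is a plain entity -> value mapping (or a set of entities).
const health = new Map<Entity, number>();
const attack = new Map<Entity, number>();
const movable = new Set<Entity>();
const healing = new Map<Entity, number>(); // a brand-new component

// A fighter is just health + attack + movable.
const fighter: Entity = 1;
health.set(fighter, 100);
attack.set(fighter, 10);
movable.add(fighter);

// A dragon is the exact same components with bigger values.
const dragon: Entity = 2;
health.set(dragon, 1000);
attack.set(dragon, 100);
movable.add(dragon);

// A defense tower: same as a fighter, minus the movable component.
const tower: Entity = 3;
health.set(tower, 500);
attack.set(tower, 50);

// Adding the healing component unlocks new entity types for free:
// a healing shrine is just an entity with only the healing component.
const shrine: Entity = 4;
healing.set(shrine, 25);

// Systems branch on which components an entity has, not on its "type".
const canMove = (e: Entity) => movable.has(e);
console.log(canMove(fighter), canMove(tower)); // true false
```

The move system from before works unchanged for the dragon, and refuses the tower, without either of them being a special case anywhere in the code.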
You just have to make a custom integration for everything, which obviously gets kind of annoying and is not scalable. In other words, interoperability needs interfaces to scale. And you can think of MUD as an interface for those on-chain worlds. You can read the state from your own world in the exact same way as you can read the state from other worlds, so all those worlds are interoperable with each other by default. You can think of existing interfaces like ERC-721 as an interface for ownership. MUD, on the other hand — since every component is a standardized way of storing data, and we have a standardized query system — is basically an interface for anything. You can just write a query for any world out there and represent whatever you want. For example, a very simple query gives you all the movable, attack-capable entities owned by a given address, but you can get arbitrarily complex with that. So this is how MUD solves all the problems of building on-chain games. But the great thing about MUD is that it's actually completely genre agnostic. We built two games in-house at Lattice over the last couple of weeks, in two completely different genres. The first one is called Sky Strife and it's an on-chain RTS. The goal is to build your army, defeat your opponents, steal the loot from the center of the island, and be the first to escape with it. It uses 39 components, 20 systems, and not a single line of networking code, because MUD handles all of that for you. We're actually gonna play it this afternoon at the Hacker Basement at 4 p.m. — so if you wanna check it out, come there. And the other game we've built is yet to be announced. It's an on-chain voxel game. It features an infinite, procedurally generated world; you can mine, you can build stuff, you can craft stuff. And it only has eight components and seven systems — and also here, zero networking code.
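The kind of query mentioned earlier — all movable, attack-capable entities owned by a given address — might look roughly like the sketch below. MUD's real query system runs against synced component data; this is just the underlying set-intersection idea with illustrative names.

```typescript
// Sketch of a component query (illustrative; MUD's real query API differs).
type Entity = number;

// Synced component data: which entities are movable, which can attack,
// and which address owns each entity.
const movable = new Set<Entity>([1, 2, 3]);
const attackers = new Set<Entity>([2, 3, 4]);
const ownedBy = new Map<Entity, string>([
  [1, "0xAAA"], [2, "0xAAA"], [3, "0xBBB"], [4, "0xAAA"],
]);

// "Give me all movable, attack-capable entities owned by this address."
function query(owner: string): Entity[] {
  return [...movable].filter(
    (e) => attackers.has(e) && ownedBy.get(e) === owner
  );
}

console.log(query("0xAAA")); // only entity 2 satisfies all three conditions
```

Because every world stores its components in the same standardized way, the same query runs against any world, which is what makes MUD an interface for anything.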
And those two games are in completely different genres, but they're built on the exact same infrastructure — they just use different components and combine them in a different way. Yeah, and this is MUD. As a summary: with MUD you don't have to write a single line of networking code, because MUD handles all of that for you. Adding content is completely trivial — adding a new component doubles the number of entity types you can represent. And any MUD world is by default interoperable with any other MUD world. That is how we solve all the problems. Cool, all right. Gonna drop some more crazy stuff. Full nodes are great. So after this short introduction to MUD, I wanna go over two kind of advanced features slash mental models for what MUD is. The interesting thing with full nodes is that they give you access to the entire chain directly from the node database. With a node you can do things like: what is the counterfactual of this? Can I execute a transaction but slightly change the storage of that contract? What would happen if I were to make a transaction on Uniswap and I had a billion DAI instead of like 10? The problem is that traditional dapp clients today are not full nodes. They rely on services like Infura or Alchemy to serve their data, and then they have to build this complex state machine of indexers and events in order to serve the data they need to their users. As an example, the Uniswap client has to connect to a full node, and it has to fetch the state it is interested in just in time, because otherwise it would take way too long. So as you move around on Uniswap and select different pools, you have to wait every single time to know what's going on. I don't know if you've ever read the Uniswap client code, but it's pretty insane what it takes just to keep such a simple state machine in sync — there needs to be a lot of custom code and indexers.
Additionally, the Uniswap client cannot actually simulate transactions, so it doesn't know what will happen if you execute a single swap. The way it actually works is you have the Uniswap client, and then remotely you have a full node and a bunch of indexers, and the Uniswap client has to make very slow network requests in order to populate the information that the user needs. So why don't we put the full node and the indexer inside the client? That would allow you to make instant queries. You would be able to index the chain however you want. You wouldn't have network delays anymore after you're synced, and you could simulate transactions. That would be amazing. And actually, for the Ethereum OGs here, that's how Mist worked — the Ethereum browser from like five years ago. But the thing is, full nodes are super expensive. They require a lot of bandwidth and storage, and they maintain very expensive cryptographic data structures that are needed to serve light clients. So it's not really practical today to put full nodes in clients. And because of that, most dapps today — if you use DeFi or whatever kind of stuff — usually have UX-hurting network calls: you have to wait a lot for every single thing. You have to wait for the tx to be mined in order to know what the side effects of what you've done actually are. There are services today, like Tenderly, that allow you to simulate transactions, but they add additional complexity to the code base. And more often than not, they need to use remote indexers, which adds yet another surface of things that can possibly go down, and more complexity. So can we do better? Now that we have slightly better infrastructure than when web3.js and ethers.js were invented, is there a way to essentially have our cake and eat it too?
One interesting fact about autonomous worlds, or on-chain games, is that they are more often than not standalone, unlike traditional dapps. What that means is that the state transition function of an autonomous world almost always depends only on its own state. As an example, when Dark Forest was running on xDai, Dark Forest could have run on a chain with nothing else on it but Dark Forest. Dark Forest didn't care about the other things on xDai, right? That's unlike most dapps today, which rely heavily on a plethora of different on-chain services that they connect to — things like oracles, things like ERC-20 contracts and so forth. And again, this is unlike, as an example, Uniswap, where if you wanted to simulate the transactions of a Uniswap trade, you would need to know about the other smart contracts — like the ERC-20s on each side of the pool — because they could be implemented slightly differently. So one mental model for what MUD is, is that it's a namespaced full node, which is why we think this goes well beyond gaming. It's a way to build actually better clients than what exists today. And the reason we can build a namespaced full node is because, as I said earlier, autonomous worlds are mostly standalone. So MUD syncs a world — the world contract that Alvarez talked about. It's a namespace for data and logic: data is components, and logic is systems. MUD can sync the data of every single component attached to every single entity, and it knows the EVM bytecode of every single system. The way it does it is: it does its initial sync via a generic MUD indexer or a full node, and it keeps its state up to date via a full node or a MUD stream service. MUD doesn't need the cryptographic data structures that are usually in a full node, because we don't want to serve light clients — we just trust a remote node. What we want to be able to do is build extremely snappy applications by having the entire state and logic client side.
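The sync flow described above — an initial snapshot from an indexer, then live updates from a single event stream — can be sketched as follows. The names and event shapes are hypothetical; the point is that the snapshot and the live stream go through the exact same update path, so the client ends up with the full namespaced state locally.

```typescript
// Sketch of the client-side sync flow (illustrative names, not MUD's API).
type Entity = string;
type ComponentId = string;

interface UpdateEvent {
  component: ComponentId;
  entity: Entity;
  value: unknown;
}

// Local client state: componentId -> (entity -> value).
const state = new Map<ComponentId, Map<Entity, unknown>>();

function applyUpdate(e: UpdateEvent) {
  if (!state.has(e.component)) state.set(e.component, new Map());
  state.get(e.component)!.set(e.entity, e.value);
}

// 1. Initial sync: a snapshot from a generic indexer (here, hardcoded).
const snapshot: UpdateEvent[] = [
  { component: "Position", entity: "0x01", value: { x: 12, y: 45 } },
  { component: "Health", entity: "0x01", value: 200 },
];
snapshot.forEach(applyUpdate);

// 2. Live sync: every later event from the world contract's single
//    event stream goes through the exact same code path.
applyUpdate({ component: "Health", entity: "0x01", value: 180 });

console.log(state.get("Health")!.get("0x01")); // 180
```

Once synced, every query runs against this local map — no round trip to a remote node.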
Another thing that is interesting with MUD is that components are self-descriptive. Components have an on-chain schema that explains how to interpret their actual raw bytes, and the MUD client can read that on-chain schema. Compare that to a full node: with a full node you know the actual storage slots of every single contract, but then you just have 256 bits and you don't know what they mean. This is why people write view functions in their contracts — to be able to load actually contextual data from those applications. With MUD, what you get is a key-value database of entities mapping to a bunch of components. And it doesn't matter who deployed those components: the client just knows what they are. It knows what their name is, it knows what their structure is. So as an example, when you sync a MUD full node — a MUD client — you would know that the first entity, 0x0, has position (12, 45) and health 200. It's way more contextual than the way full nodes are done today. That's because Ethereum is trying to be as general as possible, whereas here we impose constraints in order to get features. This allows a client to run complex queries on components without any network delay. You can run aggregated queries like: give me all the entities that have a position and a health with value 10. You can do crazy stuff like aggregates, but all of these are executed instantly, in like one millisecond, on the actual state. So if you were to rebuild Uniswap with MUD today — or at least the Uniswap client — you would be able to move around and do things without any network delay once it's synced. And downloading the initial snapshot usually takes less than a second. The other thing that is pretty interesting with MUD is that it ships with a local EVM.
So when you make a transaction in MUD — that is, when you call a system — what MUD does is run an EVM on that system, since it knows its bytecode, and inject the ECS state whenever that EVM bytecode needs to read state. It registers all the side effects that happen. As an example, if you were to call the move system on entity one and say "move to x, y minus three", MUD can run that in an EVM and know that the side effect is, say, that the position component of entity 0x01 is now (10, -3). It also sends the transaction on-chain, of course, and then, when the transaction has been mined, it compares the side effects that happened on-chain with the ones that were predicted. If they match, nothing happens, and so from the user's perspective it seems like the transaction was executed instantly, which gives way better UX. And when that's not the case — when the prediction fails, for example because someone else clashed with our state by having their transaction executed before ours, or because we're behind the tip of the chain — we just revert the side effects and apply the actual ones. But that actually happens quite rarely. This is how modern MMOs work: they do prediction and rollbacks all the time. So yeah, with MUD you can read and index components without network delay client side, and you can also simulate transactions without waiting. And we think that this increase in user experience is pretty insane. We've seen it for games, because that's what we're trying to do, but I can't wait for someone to build a DeFi protocol on MUD and make it the snappiest swapping protocol ever. Another thing I'd like to talk about is this concept of extending worlds with MUD. Today, with fully on-chain permissionless apps — things like Uniswap or Dark Forest — developers can extend the protocol via new contracts and custom clients, right?
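The predict/compare/rollback loop described above can be sketched like this. It is not MUD's internal implementation — just the general optimistic-execution pattern with illustrative names: apply predicted side effects immediately for snappy UX, then confirm them against the mined result or revert.

```typescript
// Sketch of optimistic execution with rollback (illustrative, not MUD's internals).
type SideEffect = { component: string; entity: string; value: number };

const state = new Map<string, number>(); // "component:entity" -> value
const key = (e: SideEffect) => `${e.component}:${e.entity}`;

// Pending predictions, with the previous values needed for rollback.
const pending: { effect: SideEffect; previous: number | undefined }[] = [];

function predict(effect: SideEffect) {
  pending.push({ effect, previous: state.get(key(effect)) });
  state.set(key(effect), effect.value); // apply instantly: snappy UX
}

function confirm(onChain: SideEffect[]) {
  const matches = pending.every((p, i) => {
    const o = onChain[i];
    return o !== undefined && key(p.effect) === key(o) && p.effect.value === o.value;
  });
  if (!matches) {
    // Prediction failed: revert our effects in reverse order,
    // then apply the real on-chain ones.
    for (const p of [...pending].reverse()) {
      if (p.previous === undefined) state.delete(key(p.effect));
      else state.set(key(p.effect), p.previous);
    }
    onChain.forEach((e) => state.set(key(e), e.value));
  }
  pending.length = 0;
}

state.set("Position:0x01", 10);
predict({ component: "Position", entity: "0x01", value: 7 });
// Someone else's transaction landed first; the chain says 5, not 7.
confirm([{ component: "Position", entity: "0x01", value: 5 }]);
console.log(state.get("Position:0x01")); // 5
```

In the common case the prediction matches and `confirm` is a no-op, so the user never notices the round trip to the chain.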
That's the power of those applications. As an example, on Uniswap you can build a liquidity mining program on top of Uniswap permissionlessly. You don't have to phone the Uniswap company in New York to ask if you can do that. Similarly, with Dark Forest you can create smart contract players or new features. However, the problem is that developers that do this need to ship new clients and indexers. As an example, if you wanna build a swap aggregator that finds the best prices, like 1inch, you need to build a new client. Also, users need to know where those new features are. They have no idea that someone out there built new contracts and new features for that specific dapp. And that actually creates a clear distinction between first-party code and third-party code, which in our opinion greatly hurts interoperability and creativity. As an example, in Dark Forest you can do something like: hey, I'm team XYZ, and I'm gonna deploy a thing that marks some planets as rewarding, and if players capture them they get some ETH. That's pretty cool — you can do that. But now, how do your users know that this exists? Where are the contracts? And even if you had that information in your client, how does your client know what to do with that data? Does it render a button? Does it trigger a shader? Similarly, you probably have to rebuild all your indexers from scratch. So you essentially get a fragmentation between first-party code and third-party code. So again: can we do better? In MUD, the central contract is the world contract. If you know the address of the world contract, you know everything: you know about all the components, all the systems, all the entities. And the interesting thing with the world contract is that it actually has no owner.
So it has no owner, it's permissionless, and that means there's no difference between first-party code and third-party code anymore — everybody's first party. The creators of the world contract, the deployers of that contract, actually have no special access to it. They're not privileged in any way. The world contract is non-upgradable. The rule is: anyone can create components and systems. So when Alvarez was talking earlier about the health component, the position component and the move system — anyone can register those components and those systems on the world itself. And when you create new components — that is, new data — or new systems — that is, new logic — they're accessible in the client, they're indexed, they can be found in the debugger, and they can be executed in the local EVM. There's actually no difference between what the core team deployed and what anyone else deployed. One rule is that all systems can read from any component, and those components can be deployed by different teams. As an example, you could write a system that runs a query on the world touching a component deployed by team XYZ and a component deployed by team TTT, and actually aggregate that data together. The only other rule is that components have to whitelist which systems can write to their state. And that's really important, because otherwise an attacker could deploy a system that just resets everybody's inventories. So what it looks like is this graph of components and systems that trust each other. But you also have this interesting social phenomenon where the users of those applications decide what is real. If you have two position components that don't match each other, the users have to decide: okay, do we believe in position one or in position two?
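The access rule just described — reads open to everyone, writes restricted to whitelisted systems — can be sketched like this. In MUD this lives in Solidity contracts; the TypeScript below is just an illustrative model with hypothetical names.

```typescript
// Sketch of the write-whitelist rule (illustrative; MUD implements this on-chain).
type Address = string;

class Component {
  private data = new Map<string, number>();
  private writers = new Set<Address>();

  constructor(private owner: Address) {}

  // Only the component's deployer manages the whitelist.
  authorize(caller: Address, system: Address) {
    if (caller !== this.owner) throw new Error("not component owner");
    this.writers.add(system);
  }

  // Reads are open to everyone — any system can read any component.
  get(entity: string): number | undefined { return this.data.get(entity); }

  // Writes are restricted, so a rogue system can't wipe inventories.
  set(caller: Address, entity: string, value: number) {
    if (!this.writers.has(caller)) throw new Error("system not whitelisted");
    this.data.set(entity, value);
  }
}

const inventory = new Component("0xTeamXYZ");
inventory.authorize("0xTeamXYZ", "0xPickUpSystem");

inventory.set("0xPickUpSystem", "0x01", 3); // ok: whitelisted system
let attackBlocked = false;
try {
  inventory.set("0xEvilSystem", "0x01", 0); // rejected
} catch {
  attackBlocked = true;
}
console.log(inventory.get("0x01"), attackBlocked); // 3 true
```

Note that nothing stops team TTT from deploying their own competing component; the whitelist only protects a component's own storage, and users decide which component they believe in.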
And that's really what happens today on Ethereum: if I deploy a new DAI contract, my DAI is worthless. So it's all based on what you think is real — but MUD is architected in such a way that you never have the problem where you need to, say, give upgrade keys to a DAO in order to let your players upgrade the world. This can emerge naturally from the web of trust of the players themselves. And this leads us to an interesting idea: augmented reality. This is not the augmented reality you know, with fancy glasses and hardware. It's about layering base rules with new interpretations. Beyond the core components and systems all players believe in — probably the ones that have been deployed by the core team, the low-level physics — it is possible for anyone to create augmented reality layers that only a subset of players engage in, and to do that permissionlessly. So let's illustrate this. Imagine a game, a very simple game. It was deployed by one core team. It has three components: position, movable and resource. It has three systems: you can move, you can pick up a resource, and you can drop it. And you can see there are three players — 0x01, 0x07, 0x05 — little characters riding horses, with a bunch of resources on the floor. What players do is ferry resources around. This game doesn't actually have any goal; it's just some rules of physics, essentially. As an example, I can call move on my player and move my horse from one place to another. Now team TTT, team tic-tac-toe, comes in, and they deploy two new components on the world — because they can, you know, they don't have to ask anyone. Those two components are the stake component and the board component. And they add three systems: challenge, accept challenge, and resolve. Now all clients are aware of those new components and systems.
They just don't know what to do with them yet. If you open your game at that time, you'll see a little install box. It's like: hey, there is some piece of JavaScript that is linked with those components and systems — do you want to use them? If you do, now you can play a radically different game while still being compatible with the low-level rules. So as an example, I can take my player, move to the gold at 0x03, pick it up, move again, come next to that player at 0x07, and call the challenge system on that player. That challenge system is new — it was deployed by team tic-tac-toe — and I attach one ETH to my transaction. The other player, if they're aware of that augmented reality layer, can accept the challenge and stake the same amount, and now this new system is going to create a new entity, 0x09 — because anyone can create entities — and attach the stake component and the board component to it. So now, the players that have the actual code that allows them to experience the tic-tac-toe augmented reality can see a board on the floor, and they can play tic-tac-toe. They can drop the resources in order to play tic-tac-toe, someone can win, they can call the resolve system, take the stake, and destroy that entity. And you can do that without clashing with the rules that all other players believe in. So tic-tac-toe is just like tennis. Tennis is an augmented reality layer on top of our physics. I can drive by in my car and look at tennis players, and we're not breaching each other's rules — we're all living according to the same rules, we're just experiencing the world in different ways. So it's an augmented reality. And from the perspective of other players, they're just like: what the hell is going on? Why are people dropping resources on the floor on a grid? But they can still coexist. So there are a lot of augmented realities out there. The main one is capitalism. It was made up.
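The tic-tac-toe layer walked through above — challenge, accept, resolve — can be sketched as follows. All of the names here are hypothetical; in the real world these would be new systems and components registered permissionlessly on the world contract.

```typescript
// Sketch of the tic-tac-toe augmented reality layer (illustrative names).
type Entity = string;

// Two new components deployed by team tic-tac-toe.
const stake = new Map<Entity, { players: Entity[]; amountEth: number }>();
const board = new Map<Entity, (Entity | null)[]>();

let nextEntity = 9;

// challenge + accept: once both players have staked, the system creates
// a brand-new entity (anyone can create entities) and attaches the
// stake and board components to it.
function acceptChallenge(a: Entity, b: Entity, amountEth: number): Entity {
  const game: Entity = `0x0${nextEntity++}`;
  stake.set(game, { players: [a, b], amountEth: amountEth * 2 });
  board.set(game, new Array<Entity | null>(9).fill(null));
  return game;
}

// resolve: the winner takes the stake and the game entity is destroyed.
function resolve(game: Entity, winner: Entity): number {
  const s = stake.get(game)!;
  if (!s.players.includes(winner)) throw new Error("not a player in this game");
  stake.delete(game);
  board.delete(game);
  return s.amountEth; // paid out to the winner
}

const game = acceptChallenge("0x05", "0x07", 1); // 1 ETH each side
board.get(game)![4] = "0x05"; // 0x05 plays the center square
const payout = resolve(game, "0x05");
console.log(game, payout); // 0x09 2
```

Players who never installed this layer see none of it as a game; the base position, movable and resource components are untouched.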
We all wear the glasses that allow us to see capitalism, but I can interact with an anarchist that doesn't believe in money. I can go and have dinner with them, and we won't actually experience the same augmented reality, but we share the same low-level rules. Competition is also an augmented reality. The thing that is interesting with this game is that it has no goal. Some people just made a set of interesting rules and resources, but now anyone can come in and essentially put a competitive layer on top of it. Many games are also augmented realities. For the people here that have played Minecraft: on Minecraft servers, a lot of people build minigames — you know, you have to break the block underneath people's feet so they fall into lava. Those minigames don't clash with the rules of Minecraft; they're just new systems that have been created on top. And you can do that permissionlessly. You don't need a governance process for the world, because the world is ownerless. Anyone can deploy new components and systems, and players believe in whatever components and systems they care about. And now I'm gonna pass it to our friend Kelvin, who is gonna drop some more cool stuff. So, you wanna build an autonomous world? Good luck, good luck. Have fun trying. Just kidding, of course. I'm gonna be talking to you today about the OP Stack. Technically, we were originally hoping that this talk would be after the other talk about the OP Stack. So just pretend that you're five hours in the future: you've watched the other talk and, you know, whatever, time doesn't matter anymore. So, introducing the OP Stack — except Carl has already done it five hours from now. Kinda. We still need to write all the docs, so if you wanna use this, you're gonna have to dig deep a little bit. You can talk to me afterwards about how you can actually achieve this.
But all right, introducing the OP Stack. I wanna talk to you a little bit about how you can build your own system today and get really good security guarantees at the same time. All right, so, boom. What is the OP Stack? Essentially, the OP Stack is rollups gone modular. Over the last year, we've been designing and building this thing called Bedrock, which is the next major upgrade to Optimism. And while we were doing that, I think we realized that the key to building a solid rollup client was to make it as modular as you can possibly make it. We'll talk about this a lot in the back-to-back talk that I have with Carl later today. But we realized at a certain point that if you really wanna maintain one of these things, you can't allow different parts of the system to bleed complexity into other parts of the system. The classic example we saw over the last two years was separating execution from proving. This was the big thing — this is basically what Optimism's EVM equivalence upgrade and Arbitrum's Nitro upgrade did; they all follow the same pattern, which is: let's build the client the way that we wanna build the client, and then let's make execution work on top. So the end result of all of this is that we've broken up our system into a highly modular system with lots of different pieces. We think there are three simple layers that kind of follow what you'd expect. Oh, there we go. Look, you've seen these things before: consensus, execution, settlement. If you've heard anything about modular blockchains, you kind of have an idea of what the OP Stack is all about — except the OP Stack is putting this modular theory into practice.
So the real difference here is it's not just charts on a blog post about: okay, if you plug in this data availability layer you get this behavior, and if you plug in this other data availability layer you get this other behavior. Instead, it's actually concrete components that you can implement and switch out to get the behavior that you want. So I'm gonna talk to you about the core concepts, the different components that you can switch out, and what you might do if you wanted to build a game that uses these different components to do something really interesting. There are obviously the three primary layers; inside of consensus, we think of two sub-layers: what we call data availability and derivation. I'll start with data availability. You've probably heard about this, and you probably have a basic idea of how it works. Data availability is where you publish your data, right? The idea is, well, people don't always wanna publish their data to Ethereum. All the rollups originally were built under the idea that, okay, we're always gonna publish our data to Ethereum, so let's just build our architecture under that assumption. And the OP Stack basically says: well, actually, as long as you have an array of blobs — that's what we call them; they can be blocks if you're Ethereum, they can be blocks if you're Celestia, they can be all sorts of different things — essentially what you want is an array of things that you can publish data to. And ideally, you have some properties on this data availability layer. Ideally, it's somewhat immutable, otherwise the whole thing is just gonna keep reorging itself, and that's gonna be really annoying. Ideally, the data is actually available, otherwise you can't do anything with it.
So there's some properties that you want about this, but the nice thing is the OP Stack basically says, well, whatever: you can define any data availability layer, and as long as it fits this basic idea that it's an array of byte strings, you can slot it in as your data availability layer. So concretely, what can you do with this? I think the easy example is, instead of putting all your data onto Ethereum, you can use something like a data availability committee instead, reduce your costs, make your system cheaper. I think this is a really good application for gaming, because you basically don't need those ridiculously high security guarantees that you get when you have a basic rollup that's just putting data on Ethereum. The ability to switch this out for a data availability committee, or a different data availability layer altogether, is really important to get the exact security properties that you want, depending on the amount of value that you actually need to secure. All right, derivation. Derivation is interesting. I think derivation is one of the coolest parts of the OP Stack design. And essentially, the idea is that derivation is how you pull inputs for your blockchain from the data availability layer. So derivation is basically a function that's aware of the structure of the data availability layer. Let's say we're putting data on Ethereum: it's aware of the block structure, it's aware of how data is put onto Ethereum, and it parses that data, pulls it out, and turns it into inputs to your layer two execution engine. Derivation is really important, and generically, you can understand why this is important. Let's say we're a rollup, you know, the same thing that we're doing for an unannounced game that will be announced soon.
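Going back to data availability for a second, the committee swap can be sketched like this. `DacMember`, the quorum rule, and everything else here are hypothetical simplifications, not how a production DAC actually works.

```typescript
// Hedged sketch: a data availability committee behind the same
// "array of byte strings" idea (all names hypothetical).

interface DacMember {
  // Returns true if the member attests that it has stored the blob.
  store(index: number, blob: Uint8Array): boolean;
}

class CommitteeDA {
  private blobs: Uint8Array[] = [];
  constructor(private members: DacMember[], private quorum: number) {}

  publish(blob: Uint8Array): number {
    const index = this.blobs.length;
    // Count attestations; only accept the blob once a quorum holds it.
    const acks = this.members.filter((m) => m.store(index, blob)).length;
    if (acks < this.quorum) throw new Error("no DA quorum");
    this.blobs.push(blob);
    return index;
  }

  read(index: number): Uint8Array | null {
    return this.blobs[index] ?? null;
  }
}
```

Same "array of blobs" shape as before, so the rest of the stack can't tell the difference; only the trust assumption (a committee quorum instead of Ethereum consensus) has changed.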
If you're one of these systems, what you do generically is you post data to an address on layer one, and maybe you have deposits and maybe you have some other sort of information, and you transform all of that. Maybe, you know, if it's data posted to layer one by the sequencer, you decompress all of that data, you transform it into inputs on layer two, and then you're gonna execute those inputs. Derivation is really interesting, and I think there's a lot of hidden things that you can do inside of here that maybe aren't always obvious. One of the things that I think people can achieve with this is to have in-game events, or events on your chain, happen when things happen on the layer one. So let's say you wanna have an in-game event happen every time there's a Uniswap swap event on layer one over a certain value. What you can literally do is take our derivation function, tweak it slightly, and say, okay, I'm also gonna look for Uniswap swap events, and then whenever there's a Uniswap swap event, I'm gonna generate a transaction on layer two that makes this thing happen. And the end result is that in my game, fireworks go off, whatever you want. You can basically modify this however you want to have the state of your layer one define what's going on on your layer two. It doesn't just have to be transactions. You can build amazing things with this and it can be very, very stateful. So you can do a lot of cool stuff here. Boom, execution. Okay, execution is probably what you think execution is. It's your execution engine on layer two. It's your state transition function. It's the thing that takes the inputs that were generated by the derivation layer, takes the current state, and transforms that state into a new state. And that new state gets combined with the next inputs, and it's gonna be, you get it, right?
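Back to derivation for a moment: the Uniswap-watching tweak described above could look roughly like this. The L1 types, the pool address, and the threshold are all made up for illustration.

```typescript
// Hedged sketch of a tweaked derivation function (hypothetical types).

interface L1Log { address: string; event: string; amount: number }
interface L1Block { batchData: Uint8Array[]; logs: L1Log[] }
interface L2Tx { kind: "user" | "fireworks"; payload?: Uint8Array }

const WATCHED_POOL = "0xPOOL";    // hypothetical Uniswap pool address
const SWAP_THRESHOLD = 1_000_000; // "over a certain value"

function derive(block: L1Block): L2Tx[] {
  // Normal path: the sequencer's posted batch data becomes user transactions.
  const txs: L2Tx[] = block.batchData.map((p) => ({ kind: "user", payload: p }));
  // Tweak: every big Swap event on L1 derives an extra L2 transaction
  // that makes the fireworks go off in-game.
  for (const log of block.logs) {
    if (log.address === WATCHED_POOL && log.event === "Swap" && log.amount >= SWAP_THRESHOLD) {
      txs.push({ kind: "fireworks" });
    }
  }
  return txs;
}
```

Because derivation is just a function over L1 data, any condition you can read from an L1 block (events, balances, deposits) can feed the L2 input stream.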
Every time I make a transaction and something happens in my voxel game and the world updates, that's what's happening under the hood in the execution engine. The execution engine in the OP Stack lives behind the Engine API, right? So here's the background: Ethereum, while it was going through the Merge, needed consensus clients and it needed execution clients, right? So you need a way to talk between the consensus client and the execution client, and that interface between the two clients is the Engine API. So what we did was basically take that same API: you stick the data availability and derivation stuff on one side, and you separate that from execution by that same exact API. So it looks exactly like Ethereum looks. The nice thing about this is that you can plug absolutely anything in here. It doesn't have to be the EVM. You can take the EVM and maybe make some easy tweaks. You can add a new precompile. You can tweak a few opcodes if you really want, and that'll just work really easily. In Optimism, in Bedrock, we add a new transaction type, this deposit transaction type. That's really easy. But it really can be whatever you want. It could be Bitcoin. It could be a Game Boy. As long as you have a state transition function and you wrap it inside of the Engine API, you can do whatever you want and it should just work. The whole thing should just work. So if you wanna build a game and you wanna use the EVM and you wanna add a new precompile, because you have some complicated game state function that's just too expensive to run inside Solidity, then just modify the EVM. If you wanna run a totally different execution client altogether, you can also do that, and it just sits behind the same API, and all the rest of the stack, all the rollup stuff, all the transaction stuff, just keeps working as if you didn't do anything at all. All right. And then finally, settlement. Settlement is this weird one. It's a little fake.
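Before digging into settlement, the "anything behind the Engine API" idea can be sketched like this. The method name is a simplified stand-in for the real Engine API methods (`engine_newPayload` and friends), and the engine is a toy.

```typescript
// Hedged sketch: any state transition function behind one engine-style
// interface (simplified stand-in for the Engine API).

interface Engine<State> {
  newPayload(inputs: Uint8Array[]): void; // feed inputs derived from L1
  state(): State;
}

// It doesn't have to be the EVM: a toy engine whose "state transition"
// just counts bytes, standing in for a Game Boy, Bitcoin, or a tweaked EVM.
class ByteCounterEngine implements Engine<number> {
  private total = 0;
  newPayload(inputs: Uint8Array[]): void {
    for (const input of inputs) this.total += input.length;
  }
  state(): number {
    return this.total;
  }
}

// The rest of the stack only sees the interface, so engines swap freely.
function drive<State>(engine: Engine<State>, batches: Uint8Array[][]): State {
  for (const batch of batches) engine.newPayload(batch);
  return engine.state();
}
```

Swapping `ByteCounterEngine` for any other `Engine` implementation leaves `drive`, and by analogy the rest of the rollup stack, untouched.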
It's, you know, what does this really mean? I like to say that settlement is about establishing a view of your system on some other system. And it's all about making claims, right? I'm making a claim about the state of my system on another system. And so settlement is really useful. Obviously, in a traditional rollup, settlement is really useful if you wanna do withdrawals. If you wanna be able to move funds out of your rollup onto, let's say, layer one, you need to be able to settle the state of layer two onto layer one, so that I can say, okay, that is the true state of layer two. Let me pull it out, let me operate on it, and maybe I'll give you a withdrawal as a result. But the thing is, you can do a lot of really interesting things with settlement. You don't just have to make claims about, let's say, the total state of the system. I think a really interesting way to think about settlement for gaming is that you can have a short-lived chain that plays some game. Let's say we all wanna play a chess tournament, right? We can play a short-lived game, and at the end of the game we see who wins the whole tournament, and we can make a claim and say, Kelvin won the whole tournament. And that's the claim that I'm settling. And then you can have this generic proof system that will just prove arbitrary claims about the state of your layer two. And so you can prove this idea that, okay, Kelvin won. And then we can resolve that winner back to layer one and we can pay that person out. And then we can throw the chain away because we don't need it anymore. So we got this short-lived verifiable system, right? This game where we know that the whole thing ran as it should have run. There's no weird state coming out of it. We prove that back to the base layer, and then we can throw the whole chain away and we don't have to worry about storing it. And so for short-lived, high-capacity games, you can really bump up the gas limit.
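The chess-tournament idea can be sketched like this. The claim format is hypothetical, and the boolean "proof" is a stand-in for a real fault or validity proof.

```typescript
// Hedged sketch: a short-lived game chain settles one claim to a base layer,
// after which the chain itself can be thrown away (all names hypothetical).

interface Claim { chainId: string; statement: string }

class BaseLayer {
  private settled: Claim[] = [];

  settle(claim: Claim, proofIsValid: boolean): boolean {
    // A real system would run a fault or validity proof here.
    if (!proofIsValid) return false;
    this.settled.push(claim);
    return true;
  }

  isSettled(statement: string): boolean {
    return this.settled.some((c) => c.statement === statement);
  }
}

// The tournament runs on the ephemeral chain; only the outcome is settled.
const base = new BaseLayer();
base.settle({ chainId: "chess-tournament-1", statement: "Kelvin won" }, true);
base.settle({ chainId: "chess-tournament-1", statement: "Bogus result" }, false);
```

Only the proven claim survives on the base layer; the full game history never needs to be stored there.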
You can do crazy stuff because you're not worried about state bloating to infinity. And then you can just settle any sort of information about the state of your layer two back to layer one. All right: no sequencer, no problem. Actually, currently a very big problem, but this is just an idea of where we imagine this system going in the future. Because there's a big issue, which is that every single time I talk about this idea, somebody comes up to me and asks, doesn't that mean you still have to run all the sequencing infrastructure? And the reality right now is yes, but we wanna get to a future where the answer is no, right? Because most people don't really wanna run this infrastructure. You wanna focus on building a game, especially if you just wanna build something small, run it for a couple of days, put it up there. The reality is, I'm sure that the average person does not wanna deal with the level of stuff that we have to deal with at Optimism to keep the sequencer running. So we have this concept that we're exploring called shared sequencing. Basically, shared sequencing takes all the headache of running your own sequencer and deletes all of that. And the basic idea is that all of these different autonomous worlds can share a single sequencer. And of course, when I talk about the single sequencer, you can see there are multiple machines there. I like to think about the sequencer as a single logical entity, but in the future what's gonna happen is you're gonna get decentralized sequencing: you're gonna have something that looks like a leader election, where at any given time slot there's a specific sequencer and that sequencer is sequencing your rollup, and then the next time slot comes and the next sequencer comes in. So the thing with this is you can do something really, really interesting.
If you have one sequencer sequencing all of these different chains at the same time, you avoid the problem of having multiple different chains that have to talk to each other. The problem today is, let's say I'm on Ethereum and I wanna interact with a Cosmos chain: they don't share a validator set, which means that my communication inherently has to be asynchronous. But if you have a single sequencer producing the blocks on many different autonomous chains, the autonomous worlds, at the same time, you get this amazing property of atomic composability between all these different chains. You basically have a single sequencer saying, okay, there's a transaction coming in on chain A and there's a transaction coming in on chain B, and I'm supposed to guarantee that they come in atomically, and I can do that because I sequence all the chains at the same time. So now you have different games, different worlds, different realities that can interact with each other. They each have their own state, right? The validation is separated. None of the validity of one of these chains depends on the validity of the other chain, but they can talk to each other as if they're on one unified chain. That's crazy, right? Now I can have an action in one game all of a sudden create some simultaneous action in another world at the same exact time, if those two worlds want to talk to each other. So this is part of something that Optimism is playing with that we're calling the Superchain. We think that this extends a little bit beyond just shared infrastructure. I think that the incentive here is not just to share infrastructure, it's to share code, it's to share a set of values, it's to basically collaborate on having all of these different games and all these different worlds.
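The atomic-inclusion guarantee can be sketched like this: one sequencer builds the next block for every chain, so a cross-chain bundle lands on both chains or on neither. All names here are hypothetical.

```typescript
// Hedged sketch: a shared sequencer including a transaction bundle
// across several chains atomically (hypothetical names).

type Tx = string;

class Chain {
  blocks: Tx[][] = [];
}

class SharedSequencer {
  constructor(private chains: Map<string, Chain>) {}

  // Because one sequencer builds every chain's next block, it can guarantee
  // that the whole bundle lands in the same slot, or not at all.
  sequenceAtomic(bundle: Array<[chainId: string, tx: Tx]>): boolean {
    if (!bundle.every(([id]) => this.chains.has(id))) return false; // abort whole bundle
    const perChain = new Map<string, Tx[]>();
    for (const [id, tx] of bundle) {
      if (!perChain.has(id)) perChain.set(id, []);
      perChain.get(id)!.push(tx);
    }
    for (const [id, txs] of perChain) this.chains.get(id)!.blocks.push(txs);
    return true;
  }
}
```

The chains stay separately validated; only block production is shared, which is exactly what makes the all-or-nothing inclusion possible.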
Maybe it's not games, maybe it's entirely different rollups, but basically, we think that there's a strong incentive for these systems to not just be able to transact atomically, but to really be able to collaborate with one another towards some coherent vision, because you're much better off joining in on the system than you are trying to build your own chain separately, with your own sequencer set, where you have to run all the infrastructure and you don't get to talk atomically with the rest of the system. So long live the Superchain. If you're interested in how this might work in more detail, I would highly recommend coming to the back-to-back talk with Carl and me later this afternoon; we'll go into much more detail about how all this works. All right, where are we? All right, so the other question I get is, why make it free and open source, right? Aren't you basically just letting people compete with Optimism and build their own rollup? And the answer is, kind of, yeah, right? This is the idea. And basically, the argument is that the reason we have to do this is because there's just no other way to do this. We think that there's gonna be this explosion of people who are interested in building layer twos, interested in building layer threes, right? We're already seeing all these big L1 systems come in and experiment on the execution layer, right? They're competing with Ethereum on the execution layer. They're saying, we're gonna build a parallelized VM and that's our advantage. And if you really want to survive, you're gonna need to be able to compete on this stuff, and we think you're gonna need to be able to compete on this stuff without having to have 25 engineers work on this problem for three years like we did. Instead, you're gonna want the ability to have three engineers figure out how to do this in three months and start a business out of it. So this is basically the argument.
If you wanna really make it possible for Ethereum to continue to compete, and for people to experiment on the execution layer, experiment on the derivation layer, experiment on all these different layers, it just has to be available, right? If there's not a permissive license on it, you can't use it. If it's not modular enough, if it's too hectic to try to hack in your modifications to the execution layer, you can't use it. So the goal is to just make it as available as possible, so that Ethereum can continue to compete on every single layer of the stack, but you can still stay within the Ethereum ecosystem. And you know, this is us today, this is where we are. We got Lattice, we got a couple other people, and this is the world that we're imagining, right? We're imagining that basically everybody is building on a system like this, and they're collaborating, and they're working on shared infrastructure. And by working on shared infrastructure, you share audits, right? You can share engineering time. It basically takes you 100 times less effort to build 100 companies than it does when every single company fragments and builds their own system. You basically have all the freedom in the world without actually worrying about the low-level technical details: how are you gonna publish transactions to layer one reliably? How are you gonna deal with reorgs, right? How are you gonna make the proofs work? Because all that stuff is extraordinarily complicated. There's no reason why every single person who wants to build one of these chains should have to basically build it from scratch. It doesn't make any sense. All right, so some closing remarks here. Basically, go nuts, build something crazy. You can do so much with this architecture, right? You can swap out the data availability layer, make your chain cheaper. You can swap out the execution layer, build an entirely on-chain game.
You can literally take an emulator of some system, put it into the execution layer, and even prove that the whole emulator functioned properly. You can do an enormous amount of stuff with this. And the nice thing is you don't have to worry about how you're gonna go build the sequencer, how you're gonna go build the proof, and all these different things. You basically get all that for free. So that's the idea. Where are we? Thanks for coming to my TED Talk. Where are we right now? So the code is all there. It's possible to hack on this stuff. The next goal is to take all these modules and make really, really clean documentation, so it's clear what you have to change, and where you have to change it, to make all this possible. If you go in today, well, Lattice has been extraordinarily brave to go in and basically hack this system together and really make it work for them. And we've seen other people do it as well. If you're interested in doing this, come talk to me later and I can give you pointers. But the goal is, right now it's still a little early. We're looking for people to come in, help us figure out where the APIs aren't clean enough, help us figure out where the documentation isn't clean enough. If you know what you're doing and you're really interested in hacking on something new, come find me and I will try to help you get started with this whole thing. And the goal is to make this accessible to the average person who just doesn't wanna deeply understand how this whole architecture works. So that's me, I'm gonna hand it off now. All right. All right, cool, three minutes left. Announcement time, announcement time. So just like the OP Stack is meant to essentially increase pluralism by making something free and open source and easy to use, we kind of did the same thing with MUD. MUD is MIT licensed and is soft launching today.
And to kind of pave the way forward, Lattice and Optimism have been working together over the last month and a half on an autonomous world that was built with MUD and runs on the OP Stack. So we're pretty excited to show OPCraft to everybody. It's a 3D voxel game, powered by MUD, running on a crazy degenerate OP chain. Yeah, I even have a video. I'll see if it plays, but Alvarez and I yesterday were trying to build a house. So yeah, that's running on the EVM, guys. It's a procedurally generated world. I can't wait to see what people will do with it. You can deploy marketplaces, oracles, extend it, build augmented reality, tennis, capitalism, whatever you want. It's gonna be open source soon. It's playable today. Let's actually see how we built the house. I remember it was quite hard, given we didn't really have consensus on how it should look. Yeah, okay, cool. We did it. OPCraft is gonna be soft launched and playable at the Autonomous World Arcade at 4 p.m. today in the hacker basement. So if you wanna play it, come there. Additionally, we're gonna have a tournament of Sky Strife, which is the RTS that was built with MUD. We have playtested it weekly for the last month. People really like it. There are StarCraft players that don't even know that it's running on-chain, they just like the game. So yeah, we're gonna try to hit 64 players. So come along as well: hacker basement, 4 p.m. So what about the other stuff? Well, MUD is actually already available. We just don't really talk about it because it's not very documented, like the OP Stack. It's on mud.dev. The code is on GitHub. We have a bunch of teams building stuff with it, but yeah, talk to me if you're interested. The OP Stack is gonna be announced soon by Carl at 1:30 p.m. on the main stage. And yeah, that's us. Thank you guys.