Hello everyone. Thank you for coming, or for staying, those of you who stayed past Patrick's talk. I understand, I don't have his celebrity status. But yeah, I will be talking about sequencers, layer two sequencers, and more generally the principles of ordering and execution. It will be sort of a survey of the design space. There we go. Cool. So, the stuff we'll be covering: I'll start with an introduction to the motivation and design rationale, what sequencers are and why we even have them on layer two to begin with. We'll talk about the current state of sequencers in the layer two world, basically that they're centralized, but there'll be a little more to say than just that. And then we'll focus on ways we can improve this state of affairs: trust-minimizing sequencers, decentralizing them, limiting their power in various other ways. And then, if there's time, we'll wax theoretical about how these concepts could apply to layer one, to what extent they do and to what extent they don't. But we'll see how things go. So, oh yeah, hi, that's me. My name is Daniel. I do engineering and tech research at Offchain Labs. We are the team behind Arbitrum, the layer two that hopefully you know. And yeah, I won't be focusing on Arbitrum, but there'll be a bit of an Arbitrum perspective, an Arbitrum bias. So, okay. To get into the subject of sequencers, I think the fastest way to understand why we have them is to think about why we even have layer twos to begin with, what we want from layer twos. So the baseline starting point for what a layer two is: we're trying to scale Ethereum while not introducing new trust assumptions, so inheriting security from Ethereum. That's our starting point. The scaling part is that we want to improve the status quo of what Ethereum is like to use in some other ways.
There's all sorts of ways it could potentially be improved at layer two, but notably we want it to be cheaper, because when layer one gets congested it gets expensive. And we would also like it to be faster, because when layer one gets congested, it becomes slow. So an ideal layer two would just do all of those things. What we've seen in the layer two space (I paced around, so I have to get used to not leaving the mic, forgive me) is that basically, when we're talking about layer two in the context of Ethereum, we're talking about rollups; that's the design that's dominated. And that's in large part, I think, because of the UX, the transition in UX: if you're used to using Ethereum, used to layer one, it's actually just very similar. You can use a lot of the same tools. It feels very similar. But the key trick that rollups use as layer twos is they require that you publish transaction data on layer one itself. And by doing so, in a roundabout way that I won't get into, that's how we're able to claim that these things are trustless, that they inherit L1 security. So via layer two magic, we can enforce the safety of the layer two chain after data is published. But essentially all we're doing is publishing data, as far as layer one is concerned. So all of the other work that goes into processing a transaction, validating it, updating state, happens in a separate environment. We're not using layer one resources, so we can make things cheaper. So long story short: by publishing data on layer one, we get trustlessness, we needed that; we get cheaper transactions, we needed that. The other thing we said we wanted, at the top of the list of things we wanted, was fast transactions, right? So do rollups give you fast transactions? That, it turns out, is kind of a loaded question.
So if you've used any rollups, you've probably noticed it's faster than using layer one, but it sort of depends what we mean by fast transactions. And even right here, with what's on this slide, we can see we have a bit of a contradiction, right? By fast, we mean faster than layer one. But what I said is that the requirement of rollups is that we post data on layer one. You can't go faster than layer one by posting data on layer one, because that's a circular problem. Or perhaps a triangular problem. I'll be honest, this slide probably isn't necessary, but I really wanted to include a trilemma somehow. So this is a new trilemma; I don't expect you to catch on. But this is another way of thinking about the situation we're in with rollups, right? We have this nice feature of open participation that's directly downstream from the fact that we publish data on layer one. So we can have trustlessness, but that means we can't quite have fast finality. We can only have two of these things. There are other L2 designs, called channels, which give us fast finality and trustlessness, but there the UX is very different; they don't have the flexibility of rollups, and so on. So we kind of have to decide where in the design space we're going to live. And in fact, by the way, those of you in the crowd who followed our first testnet release will remember it had no notion of fast transactions. It was entirely that diagonal line, the one that says rollups, right? And as we put that out, one of the questions we kept getting was: okay, it's a layer two, aren't we going to get fast transactions? And we kind of just said, no, that's not really possible. But there was clearly demand for it, right? So we reached this point, this kind of middle-ground settlement, where we said: okay, what if we can provide a fast path that's trusted, but it's optional?
And we were not the first layer two team to come up with this or do this, by any means. Pretty much all L2s do this. In fact, we actually came around to this idea later. And the idea here is, you can imagine, you have a situation where there's some party a user can trust. The trusted party says, I promise to include your transaction later, and then hopefully, later, it does. That's basically what it comes down to. The version of this that doesn't work, the naive solution, is you let a user trust whoever they want. So if a user just decides, I trust this random party, and that party promises to include their transaction, even if the party is trustworthy, this doesn't quite work. Because that party can't really predict the future. And even if it thinks it will include your transaction in a given order, someone else might get there first. So even to have a trusted solution like this, you need to enshrine a party within the protocol, give it this special permission, this privilege. And that's kind of what the sequencer is. So the sequencer, in other words, is literally this party that we give the ability to directly post transactions into L2. Everyone else has to wait. So the sequencer has this narrow, short-term view of what will happen on L2, and therefore it can give trusted soft confirmations on transactions, as we call them. So to recap (for those who just saw Patrick's talk, this is a bit redundant, so I'll go quickly): when we introduce the sequencer, the life cycle of a transaction looks something like this. In the normal state of affairs, the user gives their transaction off to the sequencer. The sequencer immediately gives this promise, totally trusted, that says: I will include your transaction later. At some point later, in the case of Arbitrum usually every few minutes, it'll post a batch of transactions on chain.
And at that point, the sequencer is out of the picture. So once it's on chain, we're in full rollup mode, full trustless mode; it's committed to this particular ordering. And as far as a user is concerned, your transaction is as finalized as a Layer 1 transaction, right? Because all of the data is available there. Anybody can execute it. Anybody can see what the final state will be. The actual explicit claim or commitment to what that final state actually is happens later, and the sequencer is not involved. That's that third step there. That's when we assert the state, but there's no rush to do that; that's really just so that we can communicate back to Layer 1 and process withdrawals. But if you're just interacting on Layer 2, once your data is on chain, you're done. So that gives us this nice, low-latency, fast transaction path, trusted if you want it, but it's optional. And that's very important. So just to drill that point home, because you should be suspicious when I start saying it's optional: what does it mean that using the sequencer is optional? Well, in the normal case, where the system is working well, if the sequencer gives you this fast promise, you can just ignore it and say: screw you, I don't care what you say, I'm going to wait for you to post on chain. So it takes another few minutes, and then you get the full trustless finality. In the unhappy case, where the sequencer just goes rogue and isn't even answering anything, there is still a way that you can do anything you want. On Arbitrum, there's this alternative path; long story short, it's a lot slower. I think Patrick talked about this well. I'm going to stop talking about him, but it's on my mind. But yeah, the point is the system can work in a slower and slightly more inconvenient way, entirely without the sequencer. So it's entirely optional.
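As a rough illustration, here is a toy Python sketch of that transaction life cycle: an instant, trusted soft confirmation, followed by a later batch post that locks in the ordering. Everything here (the `Sequencer` class, `SoftConfirmation`, the method names) is invented for illustration and is not Arbitrum's actual interface.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class SoftConfirmation:
    """The sequencer's off-chain promise: 'your tx will be included at this position.'"""
    tx_hash: str
    position: int

@dataclass
class Sequencer:
    """Toy sequencer: hands out instant soft confirmations, batches txs to L1 later."""
    pending: list = field(default_factory=list)
    posted_batches: list = field(default_factory=list)

    def submit(self, tx: bytes) -> SoftConfirmation:
        tx_hash = hashlib.sha256(tx).hexdigest()
        self.pending.append(tx_hash)
        # Instant, trusted promise -- no L1 interaction happens on this path.
        return SoftConfirmation(tx_hash, position=len(self.pending) - 1)

    def post_batch(self) -> list:
        """Every few minutes: commit the pending ordering to L1 (simulated here)."""
        batch, self.pending = self.pending, []
        self.posted_batches.append(batch)
        return batch

seq = Sequencer()
receipt = seq.submit(b"transfer 1 ETH")   # step 1: soft confirmation, immediate
batch = seq.post_batch()                  # step 2: batch posted on chain
assert receipt.tx_hash in batch           # once on chain, finality is L1-grade
```

The point the sketch makes is that the fast path and the trustless path are the same pipeline: ignoring the `SoftConfirmation` and waiting for `post_batch` costs you only latency, not safety.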
And that's why we can still call this thing a layer two and sleep at night. So I've just been talking about the sequencer like it's my friend or something. But what actually is it? Basically, we just define the sequencer as this entity that can give these fast promises, that has all the properties I just described. But you might ask: how does it decide which transactions to include, who controls it, in what order these transactions go, and so on? And basically, the design space for what the sequencer is is pretty open. It could be anything. It's going to be controlled by some smart contract, and we could swap in whatever mechanism you can think of, with the one caveat that, because the whole point is to give fast transactions, whatever mechanism we use can't involve interacting with layer one, because that just defeats the whole purpose. At least, that's the way we see it; other people would describe it differently. But yeah, it could be all sorts of things. So, the current state of affairs of what sequencers actually are in layer two, as far as I can tell, at least predominantly, on most of the layer twos, most of the major ones, is that they're centralized. Maybe somebody did something fancy on a layer two that I don't know about; you can yell at me afterwards, I apologize. But generally speaking, sequencers are centralized. That's the status quo. And, you know, we get asked about this a lot. It's probably the most frequent question that has to do with progressive decentralization and the decentralization roadmap: when are you decentralizing the sequencer? And it's not a bad question. It's a good question. These are good things to ask about. But I think often it comes from a bit of a misunderstanding. So there are kind of two things to say to this issue of centralized sequencers. And the first thing is, it might not be as bad as it initially appears, specifically centralized sequencers. I'm suddenly nervous someone's going to screenshot this slide.
If you're going to screenshot this slide, screenshot the next one too. That's all I ask, okay? Because the next one is very important. I'm not claiming centralized sequencers are fine. But the first step of the answer is that the power a sequencer has is very limited and circumscribed, right? It can't, for example, simply steal money from the system, and it can't lock up users' funds forever. So other parts of the system are more important. You could even argue that an L2 could just have a centralized sequencer for good, and that wouldn't be the end of the world. And the other thing that's worth saying here is that, again, just about every L2, certainly Arbitrum included, in the current state of affairs has other, more fundamentally centralized parts. And even if we said, hey, we decentralized the sequencer, if we didn't take care of those other things, it doesn't really give you anything. You can read more about that in our docs or on L2Beat. But, you know, we don't want to mislead by emphasizing one thing and not another. So those other things, contract upgradeability, the power validators have, those are the things that, if you want to spam our Discord about something, those are the more important things. Don't spam our Discord. You know what I mean. Okay, important follow-up: obviously, having a centralized sequencer is not ideal. Looking at the time, I'm going to try to speed up a little. So what can a centralized sequencer do that's bad? Well, even if it's honest, honest mistakes happen, right? Even an honest sequencer could have downtime because of infrastructure failures and server issues. And that's bad, because it slows the whole system down. It's very inconvenient. I'm going to skip over one thing in there. The juicier stuff is: what if the sequencer is actually malicious? It just turns evil. What can it do?
Well, it can equivocate. In other words, it can make one of these promises and then not make good on it later. It can make inconsistent promises to different users, right? These fast transactions are trusted; it can violate that trust. Technically, it can't censor you, but it can temporarily censor you, right? So if it stops processing your transaction, you'll be able to get it through eventually, and we like emphasizing that fact. But the also-important fact is that there is a short window of time where you'll just have to wait. And that could suck, right? Being unable to transact for some number of hours might be a real problem. The sequencer can do that to you. And then finally, the elephant in the room here is MEV. Ah, MEV. Okay. So all I want to say here: there are plenty of other talks about MEV, by people who have more, and more interesting, things to say about MEV than I do. I'm not an expert. I don't even know what the M stands for; it's unclear at this point. It was "miner", now it's "maximal". I learned, when I was procrastinating on this, that this is called an orphan initialism: when it stops standing for anything. But when we talk about MEV, we're talking about the power you get when you have the power of ordering transactions, particularly the power to order them in such a way that benefits you, in order to extract value from them. And what I would say is that this whole architecture of introducing a sequencer has the side effect that the sequencer has control over transaction ordering, which means, if it's economically interested, it might use this to extract value. Whether this is a bad thing we need to fix or an opportunity to take advantage of is sort of a philosophical debate. But we can say this is the case with sequencers, right? We have this power. We need to at least think about it. And yeah, the way we think about it is in terms of minimizing it.
So I'm going to run through some of the strategies that can be used to improve this bad situation of centralized sequencers. They'll all involve improving one of those four things I mentioned: the downtime, the equivocation, and so on. Okay, so the simplest thing we can add to sequencers to make them a little bit better is crypto-economic penalties. We can require sequencers to be staked, and we can say that if they equivocate in particular, well, when they give these off-chain promises, they'll be signed. So they give a signed promise, and then users can use that to prove that they equivocated, and if they equivocate, we can just slash the sequencer. So that's cool. This helps with the equivocation problem, not the other problems. And in fact, this could be applied even to centralized sequencers. Really, this could, and probably should, be applied to any sequencer mechanism. A thing to note here is that all we can really do in this case is punish the sequencer. We can't rectify the situation for users. And that's because, if the sequencer is equivocating, each of these claims is internally consistent; they're just inconsistent with each other. So it's not really clear which one is the more valid one to take. Either way, if we simply took one, we'd be screwing over the other user unfairly. All we really can prove is that the sequencer did something wrong. So, okay, we have this provable cost to equivocation for the sequencer. Something. In terms of the MEV problem in particular, shout-out to Shutter here, who's working on this strategy; I probably should have put their logo bigger or something. There's this idea of threshold encryption, and of using threshold encryption to minimize the MEV that a sequencer can extract. And the idea here is that instead of a user simply passing their transactions to the sequencer directly, there'll be this step where they pass them in encrypted form.
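Going back a step, the equivocation proof from the staking idea could be sketched like this. This is a toy: it uses an HMAC as a stand-in for a real digital signature (a real design would use public-key signatures, so anyone can verify without the key, plus an on-chain slashing contract), and names like `SignedPromise` are invented.

```python
import hashlib
import hmac
from dataclasses import dataclass

SEQUENCER_KEY = b"sequencer-secret"  # stand-in for a real signing key

@dataclass(frozen=True)
class SignedPromise:
    slot: int      # the position in the ordering the sequencer promised
    tx_hash: str
    sig: str

def sign_promise(slot: int, tx_hash: str) -> SignedPromise:
    msg = f"{slot}:{tx_hash}".encode()
    sig = hmac.new(SEQUENCER_KEY, msg, hashlib.sha256).hexdigest()
    return SignedPromise(slot, tx_hash, sig)

def valid(p: SignedPromise) -> bool:
    msg = f"{p.slot}:{p.tx_hash}".encode()
    expected = hmac.new(SEQUENCER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(p.sig, expected)

def proves_equivocation(a: SignedPromise, b: SignedPromise) -> bool:
    """Two validly signed promises for the same slot but different txs = slashable."""
    return valid(a) and valid(b) and a.slot == b.slot and a.tx_hash != b.tx_hash

p1 = sign_promise(7, "0xaaa")
p2 = sign_promise(7, "0xbbb")   # same slot promised to a different user
assert proves_equivocation(p1, p2)
assert not proves_equivocation(p1, p1)   # one promise alone proves nothing
```

Note what the sketch can and can't do: it proves misbehavior, so the stake can be slashed, but it gives no way to decide which of the two users' promises should be honored, which is exactly the limitation described above.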
So they encrypt it. There's this network, they call them keepers, which is kind of cute, and they do this distributed key generation. So you encrypt your transactions and give them to the sequencer. The sequencer commits to an ordering blindly; it doesn't know the contents of the transactions. It commits to it off-chain, right? And then only after it commits to them do we reveal the contents. So now the sequencer can't easily extract value, because it doesn't know what it's looking at. So this is cool. And there are some potential concerns, let's say. The parties that are in charge of this distributed key generation: the idea is you distribute it so that they can't easily collude. But if they do collude, they could, for example, reveal the contents of the transactions to the sequencer before you want them to. Also, if they don't reveal keys in time, things like that, there are certain ways they can delay things further. And remember, this was all about improving latency, so we do want to take latency concerns seriously. Even in the normal case, when you're doing the threshold encryption stuff, there are rounds of communication required, so it's inevitably going to add some latency. So that's one concern in our eyes. But yeah, cool stuff. Now, these two techniques so far, again, you could apply these to a centralized sequencer; I haven't really talked about decentralizing it, just limiting its power. Now let's get to some ideas for more properly decentralizing it. So one design space, let's say, is that of MEV auctions, as it's known, proposed by some of the folks at Optimism, along with a few others, some years ago. I'm going to describe it sort of in the abstract; I'm not claiming this is their plan or anything like that, just talking about the design space. So the idea here is, you can imagine, at any given time, we can point to who the sequencer is.
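The commit-then-reveal flow of the threshold-encryption idea might be sketched like this. To be clear, the XOR "cipher" below is a toy placeholder for real threshold encryption (where the key is split among the keepers via distributed key generation, and no single party can decrypt), and all names here are invented.

```python
import hashlib
from itertools import cycle

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR keystream: a stand-in for real threshold encryption, NOT secure.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(plaintext, cycle(stream)))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

# 1. Keepers jointly hold the epoch key; users encrypt their txs toward it.
epoch_key = b"keeper-dkg-epoch-42"
ciphertexts = [toy_encrypt(tx, epoch_key) for tx in [b"swap A->B", b"swap B->A"]]

# 2. The sequencer commits to an ordering *blind* -- it only sees ciphertexts,
#    so it can't pick an order that front-runs anyone.
committed_order = list(ciphertexts)  # e.g. arrival order
commitment = hashlib.sha256(b"".join(committed_order)).hexdigest()

# 3. Only after the commitment do the keepers release the key; now anyone can
#    decrypt, and the ordering is already locked in.
revealed = [toy_decrypt(ct, epoch_key) for ct in committed_order]
assert revealed == [b"swap A->B", b"swap B->A"]
```

The extra round trip in step 3 is where the added latency mentioned above comes from: the contents cannot execute until the keepers reveal.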
It's still one specific party, even a centralized party. But every so often, over time, we hold an auction, and you can buy the right to become the sequencer. And so in that sense, it becomes, in the big picture, permissionless. Now, why would you want to be a sequencer? Seems like a thankless job. The answer is: by being the sequencer, you have the power and the ability to profit by extracting MEV. So this design kind of leans into the MEV thing and says, yeah, let's take advantage of this and use MEV as a revenue source. Again, that's a philosophical distinction, or an ideological one even, in terms of how we want to handle it. An important part of the mental model for these is that the auctions kind of have to be infrequent. It's not as though these potential sequencers are sitting there, looking at transactions, seeing ways to extract value from particular transactions, and then bidding on the rights to order them. It's more like they're bidding on their future potential for ordering; they're taking a bet on their own MEV powers in the future. These auctions can't really be frequent, for a few reasons. You can't have frequent sequencer turnover, the main reason being that, at any given point, you have to know who the sequencer is, so that it can give these fast transactions, right? Otherwise, again, it sort of defeats the whole purpose. So imagine we hold these auctions on the order of maybe hours, maybe days, I don't know. Now I'll talk about why, and I'll just speak for myself here, but generally those of us at Offchain Labs are sort of resistant to this design space.
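In toy form, the periodic auction mechanism just described might look like this. The class and its fields are invented for illustration; a real version would live in a smart contract, with deposits, refunds for losing bidders, terms measured in hours or days, and so on.

```python
from dataclasses import dataclass, field

@dataclass
class SequencerAuction:
    """Toy periodic auction: the highest bidder buys sequencing rights for a term."""
    term_length: int                   # length of a sequencing term, e.g. in seconds
    current_sequencer: str = "genesis"
    bids: dict = field(default_factory=dict)

    def bid(self, who: str, amount: int) -> None:
        # Bidders are betting on their *future* MEV extraction over a whole term,
        # not bidding on any specific transactions they can already see.
        self.bids[who] = max(amount, self.bids.get(who, 0))

    def settle(self) -> str:
        """At the end of the auction, the highest bidder becomes the sequencer."""
        winner = max(self.bids, key=self.bids.get)
        self.current_sequencer = winner
        self.bids = {}
        return winner

auction = SequencerAuction(term_length=86_400)  # infrequent: e.g. one term per day
auction.bid("searcher_a", 100)
auction.bid("searcher_b", 250)
assert auction.settle() == "searcher_b"
```

Even this toy shows the dynamic worth worrying about: whoever values the ordering rights most, i.e. whoever extracts MEV best, wins every term, which is the practical-centralization concern raised below.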
So, because it depends on the ability to order transactions and extract MEV, this is inherently at odds with the other thing that we want sequencers to do, which is give low latency, simply because you need some time to figure out the optimal order of transactions to extract value from them. So there's some tension there. In practice, maybe that won't matter, maybe they only need a few extra seconds, but it's not nothing, right? And again, the way I see it, latency is the name of the game with sequencers. So that's one thing. Because these auctions have to be infrequent, like I said, you have this temporary centralization, and there are some concerns about some random party coming in and censoring transactions, griefing them, right? It's the double-edged sword of decentralization in this case: you open it up to any party, but that means any party can come in and not do what they're supposed to. And things like not including transactions are hard to prove and hard to punish, because of data availability problems and the like. The other practical thing, the thing we would probably expect, is that there'll be a single party that just keeps winning the auction again and again, simply because whoever best optimizes for this, there will be some party who does that, right? So it's open, but you might get a sort of practical centralization. But again, the bigger thing is this question of: are we really comfortable with leaning into the idea of extracting MEV, and introducing new parties that have this MEV power, versus minimizing it? So, I'll try to just get this in before I stop. I know we're almost done, but I want to talk a bit about where our heads are at, at Offchain Labs, in terms of decentralizing the sequencer.
The model here, this fair ordering model, is that we replace the single-party sequencer with a fixed federation or committee of sequencers. And in order to give a receipt, they have to come to consensus, sort of like a BFT-style consensus. But it has this special property, which is that the ordering of transactions is enforced within the consensus itself. Unlike a lot of BFT leader-selection algorithms, where a leader is chosen and that party has full control over what to propose, in this case that control is distributed, and we enforce fair ordering. What we mean by fair ordering is a little hard to formalize, but it's something like this: you have transactions A and B, and if a supermajority of sequencers witnessed transaction A coming before transaction B, then, as long as that supermajority is honest, that will also be the resulting order. And yeah, some nice research has been done, some nice progress has been made, on improving this style of algorithm. The initial ones practically required so much network-level communication that they weren't usable, but the latest one, called Themis, is a nice breakthrough which makes it viable. Now, this does introduce an honesty assumption: we have a fixed set of parties, where we're assuming an honest supermajority. Again, any sequencer solution is going to have some centralization; it can't truly be open. But that is a definite downside. The more interesting downside here, and this is some of the pushback that we got, has to do with: okay, we've taken power away from the sequencers, but now, if you're a power user who's trying to extract MEV, what sort of world does this create for you, and what are you capable of doing? Because, okay, we have fair ordering, but there's still this question of what exactly the ordering is that we're enforcing. And initially we were thinking FIFO, first in, first out, right?
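As an aside, the supermajority-witnessing property could be sketched like this. This is a naive pairwise version for intuition only: with mixed observations, the pairwise "forced" constraints can contain cycles (the Condorcet problem), which the real algorithms in this line of work, like Themis, have to resolve; none of these names come from the actual protocols.

```python
from itertools import combinations

def saw_before(local_order: list, a: str, b: str) -> bool:
    """Did this sequencer observe transaction a arrive before transaction b?"""
    return local_order.index(a) < local_order.index(b)

def fair_pairs(local_orders: list, threshold: float = 2 / 3) -> set:
    """Pairs (a, b) where a supermajority of sequencers witnessed a before b.
    These pairs constrain the final ordering the committee is allowed to output."""
    txs = local_orders[0]
    n = len(local_orders)
    forced = set()
    for a, b in combinations(txs, 2):
        votes = sum(saw_before(order, a, b) for order in local_orders)
        if votes / n >= threshold:
            forced.add((a, b))
        elif (n - votes) / n >= threshold:
            forced.add((b, a))
    return forced

# Three sequencers; all saw T1 before T2 and T1 before T3, so the protocol
# must order T1 first as long as a supermajority is honest.
orders = [["T1", "T2", "T3"], ["T1", "T2", "T3"], ["T1", "T3", "T2"]]
constraints = fair_pairs(orders)
assert ("T1", "T2") in constraints
assert ("T1", "T3") in constraints
```

The key design point visible even here: no single leader gets to choose the order; the order is a function of what the committee collectively witnessed.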
The order that the sequencers see transactions in is the order in which they give the receipts and publish the transactions, which is what the centralized sequencer does now, if you trust me. The issue here is that you can now imagine a power user who sees an arbitrage opportunity or something. If they want to get theirs in first, they're now incentivized to get direct network-level access to the sequencers: literally set up servers geographically close to the sequencers, and get fancy hardware, so that they can communicate more quickly. Which is all exogenous to the system; it's not really something we want to incentivize. It benefits users in this weird way, and it's probably a pull towards centralization, because someone will optimize for that best. So, we can do better. So, this idea (the top of the slide is cut off) is something we got very recently, a proposal from some of the folks at Flashbots, Xinyuan Sun, aka sxysun, I guess that's how you pronounce it. And the idea here is that we have a sort of hybrid ordering policy: we can keep our fair ordering algorithm, but we don't enforce the fairness at the transaction level; we enforce it over chunks of time. You can imagine discrete intervals of, like, half a second. Anyway, this is a really interesting topic. It's all very new and fresh, so if you're a researcher or an interested person, I recommend this as a good place to start in terms of fair sequencing. I have more to say about L1, but you can find me later. Thank you all for listening.