So in my talk today, I'm going to be talking for the first time about how we've designed the Polygon Miden rollup, and specifically how we use a hybrid UTXO and account-based model to achieve some interesting properties. Just to set the context first, the goal we have in mind is to build a scalable, decentralized rollup with a privacy-enabling architecture. What I mean by that is that our immediate goal is scaling, but we want to design the rollup in such a way that when we want to turn on privacy, it will not require a complete architectural overhaul; it should be easy to add. I'm sure a lot of you here are already familiar with this, but just to set the context of what a decentralized rollup is: we have users, we have rollup operators, and we have Ethereum L1. In this model, users send transactions to the operators. Operators aggregate those transactions into blocks, and then they submit the state delta, in the context of a ZK rollup together with a ZK proof, to Ethereum L1. What we get, and this is not specific to a decentralized rollup, this is true for any rollup, is that we inherit security from Ethereum. What is specific to a decentralized rollup is that it has its own L2 chain and its own consensus mechanism, because the operators need to agree on the state of the chain. And we want the set of operators to be permissionless, meaning anybody can join and leave the set as they please. Now, compared to a centralized rollup, where we have only one operator, a decentralized rollup has a number of challenges. The most important ones are: you need a separate consensus mechanism, you have an execution bloat problem, which I'll explain in a couple of seconds, and you have a state bloat problem. In this talk, I'm going to focus specifically on execution bloat and state bloat. So let's get into it. What is execution bloat?
Execution bloat basically means that the network needs to execute all the transactions. More specifically, a block producer needs to execute the transactions in a block, but everybody else in the network also needs to re-execute those transactions to make sure the block is valid. That leads to a lot of re-executing of the same code over and over again. What is state bloat? I'm sure a lot of you are familiar with it. It basically means that the state size grows with time: the more accounts there are, the more tokens accounts hold, and so on, the more the state size increases. The reason we can't do much about it is that nodes, or operators, need to hold the full state to be able to validate blocks, and nodes need the full state to be able to produce new blocks. Now, why are these things bad? I said there are challenges, so why exactly are they challenges? First, if you have state bloat and execution bloat, you need powerful machines. Say we have thousands of transactions per second: you need a powerful machine to process that. If you have a large, terabyte-sized state, you need a large machine to hold it in memory. That works against decentralization, and if you don't have good solutions to this problem, you might as well just build a centralized rollup. Second, because everyone sees everything, and everybody needs to re-execute transactions and hold the full state, there is inherently less privacy in this setup. And the last one is specific to state bloat: it is not sustainable. If the state keeps growing, you can scale the rollup only as fast as the hardware scales; you're limited by how much hardware you can fit in a single machine. So what do we want to achieve? What would an ideal solution look like? First, we want to minimize execution bloat, which means we want to execute each transaction only once.
We also want to make sure transactions don't all have to be executed by the same party. It shouldn't be the same block producer that needs to execute all transactions; we want distinct actors in the network that can execute transactions. We also want to minimize state bloat. That means we don't want to enforce the condition that you need to know the full state to validate blocks, and we also don't want to enforce the condition that you need to know the full state to produce new blocks. ZKPs can give us the first two properties: if you have ZKPs, you can produce a proof of execution, for example, and you don't need to re-execute the same transaction over and over again. But to achieve the other two properties, you need something else; ZKPs alone are not enough. You need what I call a concurrent state model. Before I get into the concurrent state model, let's talk about the popular state models right now. We usually have account-based state and UTXO-based state. If we look at the pros and cons of each: account-based state is great for expressive smart contracts. This is what we love about Ethereum; we can write very cool applications, we have a lot of freedom, and they all interact with each other very well. It's not great for concurrent execution. It is possible to achieve, but it is not very easy and has a lot of issues. It is also bad for anonymity, because if you have accounts and you know which account participates in which transaction, it is very difficult to hide the transaction graph, so to say. The UTXO-based model is the opposite of that. It's great for concurrent execution, because in a UTXO model transactions are logically separate from each other. It's also a very good tool for anonymity: if you want to achieve anonymity, you almost have to use a UTXO model.
It's not the only thing you have to use, but it is one of the basic building blocks. The UTXO model, however, is not great for expressive smart contracts. You can kind of get smart contracts in a UTXO model, but it's not easy, and the more expressive they are, the more it starts to look like an account-based model. So what we want to do is combine the nice properties of each of these into a single model: take the account-based model and the UTXO-based model, combine them with ZK proofs, and get something that I call the actor-based model with concurrent off-chain state. I'll get into what all of those terms mean over the course of this presentation. The first thing I want to explain is how transactions work in this model: what the actor model is, specifically, and how we think of transactions in it. To take a step back and explain what the actor model is: it's a concept from distributed systems where you have actors, which are essentially state machines with inboxes, and actors communicate by sending messages to each other. The important property is that messages are asynchronous, so one actor can produce a message and a different actor can consume that message at a later point in time. The way we apply this actor model to a blockchain is that in our context, in the context of Miden, actors are accounts. An account holds state and exposes an interface. An interface is just a collection of methods, and each of those methods is a Miden VM program. Miden VM is a fully Turing-complete ZK VM, so you can write very expressive functions for the account interface. Accounts communicate with each other by sending notes to each other. Notes can carry assets, and a note also has a spend script which needs to be executed to consume the note.
One important property of this model is that it actually takes two transactions to move assets from one account to another. In a traditional model, Ethereum for example, you usually have just one transaction that moves assets from one account to another. In this model you have to have two transactions, because the first transaction creates a note and the second transaction consumes that note. Now let's talk about transactions in a bit more detail. What is a transaction in the context of Miden? A transaction always involves exactly one account, never more than one, and in the course of the transaction the state of that account gets updated. A transaction can consume zero or more notes, and it can produce zero or more notes. So in the previous example, there was one transaction that produced one note and one transaction that consumed one note, and we can also have transactions that both produce and consume notes. Now, to explain how notes get consumed, let's say we have a transaction that wants to consume two notes in the context of one account. The way it works is that we have a prologue and an epilogue that do some bookkeeping to make sure that, for example, the sum of inputs equals the sum of outputs, and no new assets get created in the course of the transaction. In between, we go into the execution stage, where the first thing that happens is we execute the script of the first note in the transaction. That script can call any number of methods on the account interface. In this case, let's say there is a receive method that receives assets into the account; a note can pass assets to the account through this receive method. One important thing is that account methods are the only code that has access to account state. A note cannot modify the state of the account directly.
It needs to call a method on the account interface to modify the account. Account interface methods can also create other notes; that's how you create new notes in the course of a transaction. Then, if we have another note, we do the same thing: we sequentially execute the second note in the context of the same account, and that note can call the same or different methods on the account to produce different effects, and so forth. Now, in our context, because notes can only be executed against a single account, what we do is execute a transaction and immediately produce a proof for it. We use a STARK proving system; Miden VM is a STARK-based VM. So whenever a transaction is executed, we immediately produce a proof of execution. And because, as I mentioned, transactions are logically distinct, they each touch a single account, we can produce many transaction proofs in parallel. So we actually produce all the transaction proofs in parallel. Then, once we have a bunch of these transaction proofs, we recursively aggregate them into batches. These batches then get recursively aggregated into block proofs, and the block proofs get further aggregated into epoch proofs, and that's what gets submitted to Ethereum. It's important to note that all of this recursive aggregation can also be done in parallel: all transactions can be proven in parallel, and all batches can be proven in parallel. The only thing that doesn't get proven in parallel is the final tip, the block proof itself. There is another interesting property: we can prove transactions locally, and I'll get into what exactly that means in a second. The aggregation steps, though, need to be done by the network; for example, a block producer can do the aggregation, or delegate it to some other actors.
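To make the two-note transaction flow concrete, here is a rough Python sketch of the prologue/execution/epilogue stages described above. Everything here (the `Account` and `Note` classes, the `receive` method, `execute_transaction`) is an illustrative stand-in, not Miden's real API.

```python
# Hypothetical sketch of the transaction flow: prologue bookkeeping, note
# scripts calling account-interface methods, epilogue asset-conservation check.
class Account:
    def __init__(self, assets):
        self.assets = dict(assets)                 # asset_id -> amount
        self.methods = {"receive": self._receive}  # the account interface

    def _receive(self, asset_id, amount):
        # Account methods are the only code with access to account state.
        self.assets[asset_id] = self.assets.get(asset_id, 0) + amount

class Note:
    def __init__(self, assets, script):
        self.assets = dict(assets)   # assets the note carries
        self.script = script         # spend script, runs against the interface

def execute_transaction(account, notes):
    # Prologue: record total assets entering the transaction.
    before = sum(account.assets.values()) + sum(
        sum(n.assets.values()) for n in notes)
    for note in notes:               # notes are executed sequentially
        note.script(account, note)   # a script calls account methods
    # Epilogue: no assets may be created or destroyed.
    after = sum(account.assets.values())
    assert before == after, "epilogue check failed"

def deposit_script(account, note):
    # A simple spend script: pass all of the note's assets to the account.
    for asset_id, amount in note.assets.items():
        account.methods["receive"](asset_id, amount)

acct = Account({"TOKEN": 10})
execute_transaction(acct, [Note({"TOKEN": 5}, deposit_script),
                           Note({"TOKEN": 3}, deposit_script)])
print(acct.assets["TOKEN"])  # 18
```

This mirrors the constraint from the talk: the note scripts never touch `acct.assets` directly, only through the interface methods.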
Now, let's talk a little more about this concept of local versus network execution. In a traditional model, when we execute a transaction, there is a step that prepares the inputs for the transaction, signs it, and so forth. Then we execute it. Then, in the context of a ZK system, we generate a proof for the transaction. Finally, we get a transaction proof that, as on the previous slide, gets aggregated into batches and ends up in a block. In the network model, the user prepares the transaction and sends it to the network, and then the block producer executes the transaction, generates the proof, and aggregates it, as I described on the previous slide. In the local model, the user can actually do all of this: the user prepares the transaction, executes it, and generates the transaction proof, and what gets sent to the network is just the transaction proof itself. The block producer doesn't need to execute the transaction and doesn't need to generate a proof for it; it just needs to aggregate it with the other transactions for which proofs have already been generated. One important thing to address is: how do we handle shared state? What I described works very nicely when transactions don't touch multiple accounts, or when notes go to different accounts and so forth. But let's say we have something like a Uniswap situation, where several accounts want to send notes to exchange, say, some assets for other assets using a Uniswap account. The way we would do it is: first, each account generates its own transaction to create a note that targets the Uniswap account. These are two logically separate transactions. Then the block producer generates a third transaction that consumes the first two notes in a single transaction.
As a result of consuming those notes, it also creates two other notes that send the exchanged tokens back to the original accounts. Then we have two additional transactions that the users of accounts one and two execute to consume these notes back into their respective accounts. So in this model we still have the ability to interact with a contract, an account with shared state; it's just that the transaction that interacts with the shared-state account needs to be a network transaction. It's not a locally executed transaction; it must be executed by the network, or the block producer, because the block producer needs to sequence the notes according to whatever logic it wants and then execute all of the notes against the same account. Now, to summarize the pros and cons of local versus network execution: if we want a shared-state account, we cannot use local transactions; we have to use network transactions. If we use a local transaction, we can have privacy, because nobody on the network actually needs to execute the transaction. We cannot have privacy with network transactions, because obviously somebody needs to execute them. Generating proofs is a fairly computationally intensive process, so the client hardware requirements might be high for local transactions. But on the flip side, because you generated the proof locally, there is much less work the block producer needs to do: they don't need to generate the proof for the transaction, they don't need to execute the transaction, so the fees for local transactions would be lower than for the ones the network is asked to execute. The next thing I want to talk about is what kind of state model we need to support this type of transaction model, and this is where the UTXO and account-based models come together.
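The Uniswap-style flow just described can be sketched end to end. This is a hedged illustration: the `swap` function, the constant-product pricing, and all the names are assumptions for the example, not anything from Miden itself; what it shows is the sequencing of the three stages.

```python
# Sketch of the shared-state flow: two users create notes locally, the block
# producer consumes both against the shared pool account in one network
# transaction, and notes carrying the swapped tokens go back to the users.
def swap(pool, offered_id, offered_amt, wanted_id):
    # Illustrative constant-product pricing against the shared account state.
    out = pool[wanted_id] * offered_amt // (pool[offered_id] + offered_amt)
    pool[offered_id] += offered_amt
    pool[wanted_id] -= out
    return out

pool = {"A": 1_000, "B": 1_000}   # shared state of the pool account

# Stage 1: each user creates a note targeting the pool (two local txs).
inbound = [("user1", "A", 100, "B"), ("user2", "B", 50, "A")]

# Stage 2: the block producer sequences and consumes both notes in a single
# network transaction; the pool state is updated sequentially per note,
# and a return note is produced for each user.
outbound = []
for user, offered_id, amt, wanted_id in inbound:
    got = swap(pool, offered_id, amt, wanted_id)
    outbound.append((user, wanted_id, got))

# Stage 3: each user consumes their return note in another local tx.
for user, asset, amount in outbound:
    print(user, asset, amount)
```

The key point the sketch captures is that only stage 2 has to run on the network: the pool's state is touched by exactly one sequenced transaction, while stages 1 and 3 stay local to the users.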
So the Miden rollup state is actually described by three databases. Usually you have a single database: an account database, or in a UTXO context, a UTXO database. In our context it's three separate databases: there is an account database, there is a notes database, and there is a nullifier database, and I'll explain why all of them are needed. In our case, a block contains information that updates all three of these databases and takes the state of the network from state n to state n+1. First, the account database. The account database holds the current states of all accounts, and we use a sparse Merkle tree as the data structure that holds this information. The sparse Merkle tree maps account IDs to account hashes. But we have one twist: there are two different modes of storing accounts in this database. The first is on-chain state, which is basically the same as what you would get in Ethereum, where for each account hash the network nodes also store all the associated data for the account, such as storage, code, and so forth. But there is also an option for off-chain state, where the network nodes store just the hash of the account, and the user is responsible for storing the actual state of the account; nodes in the network do not store it. Next, the notes database. The notes database stores all notes that have ever been created, and for this we use a Merkle mountain range, which is an append-only accumulator. A leaf in this Merkle mountain range is basically the set of notes that were created in a specific block. There are a number of reasons why we chose the Merkle mountain range; it's very convenient for a number of purposes.
One of them is that you can append new notes to this accumulator without actually knowing most of the previous notes. You can discard a big part of the notes database and still be able to add new notes to it without a problem. The other property, which is very important in the ZK context because we need to prove that we're spending a note that was created at some point in the past, is that the inclusion witness does not become stale. If you have a Merkle path, it just needs to be extended from time to time, very infrequently, but it doesn't become stale. That means the ZK proof you generate does not become obsolete quickly, which matters a lot in the ZK context. And lastly, we have the nullifier database. The reason we need it is this: we have the account database, which stores the states of accounts, and we have the notes database, which stores all the notes that were ever created, but we do not remove notes from the notes database, because we want to keep the nice property of an append-only accumulator. Therefore, we need another data structure that tells us which notes have been consumed. The nullifier database is what keeps track of consumed notes. For this we also use a sparse Merkle tree, where we basically map a note hash to either 0 or 1: 0 indicates that the note hasn't been consumed, 1 indicates that it has. So whenever we generate a proof for a block, the proof must show that each consumed note existed in the notes database and did not yet exist in the nullifier database. We actually have a slightly more sophisticated structure where there are multiple epochs, which are time periods, and for each epoch there is a separate nullifier tree. Nodes are expected to keep the last two epochs but can discard the nullifiers for prior epochs. Now, we have these different databases.
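The spend check that a block proof enforces can be sketched in a few lines. A plain set and dict stand in for the Merkle mountain range and the sparse Merkle nullifier tree, and the names (`create_note`, `consume_note`) are illustrative; the logic shown is just the two conditions from the talk: the note exists in the append-only notes database, and its nullifier is not yet set.

```python
# Sketch of the double-spend check: notes are never removed from the notes
# database; consumption is recorded only by setting a nullifier.
import hashlib

notes_db = set()     # append-only: note hashes are never removed
nullifiers = {}      # note hash -> 0 (unspent) or 1 (spent)

def note_hash(note: bytes) -> bytes:
    return hashlib.sha256(note).digest()

def create_note(note: bytes):
    notes_db.add(note_hash(note))

def consume_note(note: bytes) -> bool:
    h = note_hash(note)
    if h not in notes_db:            # must prove inclusion in the notes DB
        return False
    if nullifiers.get(h, 0) == 1:    # nullifier already set: double spend
        return False
    nullifiers[h] = 1                # record consumption; notes DB untouched
    return True

create_note(b"pay 5 TOKEN to alice")
print(consume_note(b"pay 5 TOKEN to alice"))  # True
print(consume_note(b"pay 5 TOKEN to alice"))  # False: already nullified
```

This also shows why the nullifier database is the part that cannot be pruned freely: the second call is rejected only because the nullifier from the first call is still available.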
And there are very different growth drivers for each of these databases. The accounts database grows primarily with the number of public accounts, the accounts that have on-chain state. It does grow with the total number of accounts, but if you only have to store a single hash per account, that's almost negligible: you can store a billion accounts and it's going to be only about 64 gigabytes. We can also prune this dynamically: for accounts that haven't been used in a while, for example, we can remove all the data and store only the hash for that account. Network nodes can choose to do that if they wish. The notes database grows with the number of unconsumed notes. As soon as a note is consumed, it can be safely discarded; you don't need to store it anymore. So unconsumed notes are what drives the size of that database, and again you can prune it, removing some of the notes and keeping just the hashes. Finally, we have the nullifier database, and this one is different, because you can't easily prune nullifiers. To be able to create new blocks, you actually need to keep all the nullifiers. The size of the nullifier database depends on throughput: the more transactions per second you have, the more nullifiers you need to keep for a given epoch. We can make epochs smaller, but there are downsides to that. Overall, if we look at what the sizes of these databases could be, the nullifier database is by far the one that drives the size of the overall state; it's going to be larger than the notes and account databases combined. Now, I have a few slides to wrap up the talk and say: what did we achieve? First, we have this concept of different modes of execution, network execution and local execution, and we have the concept of on-chain data and off-chain data. The combinations of these give us different nice properties.
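A quick back-of-the-envelope calculation makes the sizing claims above concrete. The 64 bytes per entry, the 1,000 TPS figure, and the roughly six-month epoch retention are assumptions chosen for illustration (the talk only gives the billion-accounts figure and says nodes keep about two epochs).

```python
# Rough sizing of the hash-only accounts database vs. the nullifier database.
GB = 10**9
ENTRY = 64                              # assumed bytes per hash-sized entry

accounts = 10**9                        # a billion accounts, hashes only
accounts_db = accounts * ENTRY / GB
print(f"accounts db: {accounts_db:.0f} GB")

tps = 1_000                             # assumed throughput
epoch_seconds = 180 * 24 * 3600         # assumed ~6 months of retained epochs
nullifier_db = tps * epoch_seconds * ENTRY / GB
print(f"nullifier db: {nullifier_db:.0f} GB")
```

Under these assumptions the nullifier database comes out roughly an order of magnitude larger than the hash-only accounts database, which is the point of the comparison: nullifier storage tracks throughput, not account count.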
So for example, if we have on-chain data and network execution, that is a typical public transaction, the kind of thing that happens on Ethereum right now. We can also have stateless transactions, with off-chain data but network execution, where the network doesn't store the state of the accounts, but the user provides the state of the account with every transaction so that the network can execute it. Next, if the data is off-chain and execution is local, we can have private transactions, where the network is not aware of what code was executed, nor of the data in the account. We can also hide the transaction graph using UTXOs; I'm not going to get into that right now, it's slightly more complicated, but we can do that as well. And finally, for completeness, there is local execution with on-chain data. I personally don't know which use cases that would cover, but maybe people will come up with something. How did we address execution bloat with these models? First, we achieved no re-execution: all transactions are executed only once. Second, we have concurrent processing, where transactions can be processed in parallel on independent machines, and you can scale this almost horizontally by adding more machines to generate the proofs. And finally, we have local execution, where transactions can be executed by the users involved in those transactions. The nice property here is that the more locally proven transactions there are, the less computational load the network has to handle. Say 90% of transactions are proven locally: there is very little work the block producer needs to do; they don't need to execute them, they don't need to prove them, they just aggregate them into blocks.
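The parallel proving and recursive aggregation pipeline described earlier (transaction proofs, then batch proofs, then a single block proof) can be sketched as below. The "proofs" here are stand-in hashes, not real STARKs, and the batch size and function names are assumptions for the example.

```python
# Sketch of the aggregation pipeline: all transaction proofs in parallel,
# batch proofs in parallel, and only the final block proof is serial.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def prove(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()   # stand-in for a STARK proof

def aggregate(proofs: list) -> bytes:
    return prove(b"".join(proofs))         # stand-in for recursive aggregation

transactions = [f"tx-{i}".encode() for i in range(8)]

with ThreadPoolExecutor() as pool:
    tx_proofs = list(pool.map(prove, transactions))   # all txs in parallel

# Batches can also be proven in parallel; only the tip is sequential.
batches = [aggregate(tx_proofs[i:i + 4]) for i in range(0, 8, 4)]
block_proof = aggregate(batches)
print(len(block_proof))  # 32
```

In the local-execution case, the `prove` step for a transaction would run on the user's machine, and only the aggregation calls would be done by the block producer or a delegate.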
And then, regarding state bloat, we have dynamic pruning, where we can collapse accounts and notes into their hashes. We can have very light verifying nodes: if you only want to verify state transitions and don't want to create new blocks, you don't need to maintain the nullifier database at all, and as I mentioned, the nullifier database is the biggest part of the state, so you can actually discard the biggest part of the state. And we have this nice property that, because the nullifier database dominates the overall state size, the overall state size really depends on TPS: the higher the TPS, the bigger the state, but it doesn't vary as much with the number of accounts or the number of notes in the system. The last thing I want to leave you with is what we're trying to achieve: the more privacy there is in the network, the more scalable it is, and the more scalable it is, the more private it is. That is our goal with the Miden rollup. Thank you.

So how would the network resolve the case when two accounts try to spend the same UTXO, like in an attack or something like that?

If two accounts are trying to spend the same UTXO, that's a conflict; you can't spend the same UTXO twice. So it's not really a problem in that case. If you have a UTXO and I have a UTXO, and for whatever reason we both submit transactions that consume the same one, the block producer will need to decide which of those transactions goes through, because you can't execute both simultaneously: one of them will produce a nullifier, and the second transaction will fail because the nullifier for this UTXO has already been created. In the Uniswap example, you can send a note that says, I want to swap token A for token B at this price, and somebody else can do the same thing, and those are two different requests.
But then the block producer will aggregate those requests, sequence them in a single transaction, and execute them, and there will be no conflict, because the state of the Uniswap contract gets updated sequentially after each consumed note. So you're not consuming the same UTXO; you're applying different notes to the same account. But yes, that cannot be done locally; it needs to be done by a block producer.

It's an optimization. The idea is that if your note was created in a prior epoch, whose nullifier tree the nodes have since discarded, you will need to provide the path that proves it hasn't been consumed yourself. The network nodes are not responsible for that. Notes are meant to be short-lived objects; they are not meant to stay in the state for a long time. If for whatever reason you decided to keep a note there for quite long, it's your responsibility to provide this proof to the network. It's not the network's responsibility to keep it for more than, let's say, six months or so. Thank you.