Well, let's start. I'm Jordi Baylina, the technical lead of the iden3 project. iden3 is an identity project that tries to solve self-sovereign identity in a scalable and accessible manner, so that it's available to all users and everybody is able to be a certification authority by themselves. And we want to do that with privacy by default and by design. That's the main goal of iden3.

When we talk about the scalability of this, when we want this system to be scalable, we have a piece that we call the trusted relayer. Mainly, it's a piece that publishes a lot of claims. And we need this relayer to be trustless, so that you don't have to trust it for the system to work. To build this relayer we have to use a technology that is, essentially, the zk-rollup. So we decided in iden3 to implement the zk-rollup as well, because most of the pieces are exactly the same. So in this presentation I'm going to talk about the zk-rollup, how it works, the implementation that we are building right now, and explain some of the details.

So let's start. This is just a little bit of a spoiler: these are the results that are theoretically possible with this system. It's important to see that this is just on Ethereum 1.0. We are not talking about Ethereum 2.0 or anything else, just using Ethereum 1.0 as the lower layer. So let's see how we achieve these results.

What's the general idea of the rollup? We have a kind of sidechain, like Plasma, if some of you have been using it. In this sidechain, we have a database of balances. All these balances form a state. From this state we can build a Merkle tree, and we have a hash, a root, of the whole state. Then we have a set of transactions that we put in a batch, and with each batch we create a new state. So we process a set of transactions per batch. And the thing here is that on the blockchain, in general, we just put this state root (there is a small sketch of this idea right after this part).

So in Plasma, for example, the question is how we guarantee that this state is valid: how we guarantee that these transactions are processed in the right order, in the right way, that there are no double-spends, that everything fits exactly. In Plasma there is this challenge-game mechanism; in a zk-rollup, what we do instead is, besides the next state, we also put a proof that this state transition is valid. This is a zero-knowledge proof. So here is the key piece, the key technology that makes this possible: the zk-SNARKs. The important part here is the S, which stands for succinct. That means we can generate a proof that maybe 1,000, or even, theoretically, one million transactions are valid from one state to the next. So we have one 256-bit hash of a state, we go to the next state, another 256 bits, and we can add a proof that the transition is valid. This proof can take long to generate, but verifying it is constant time: in about 10 milliseconds, no matter how many transactions we are processing, we can verify that proof. And that's the key part of all the rollup stuff.
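To make the state-transition idea concrete, here is a minimal Python sketch. It is illustrative only: it stands in a plain hash of a serialized dictionary for the Merkle tree and balance database, which is not how the real system works. The point it shows is that balances live off chain, each batch of transfers moves the state from one root to the next, and the chain would only ever see the succession of roots (plus, in the real zk-rollup, a SNARK proof per batch).

```python
# Toy sketch of the rollup state-transition idea (illustrative only, not iden3's
# actual data structures): balances are kept off-chain, each batch of transfers
# produces a new state, and only the state roots would go on-chain.
import hashlib
import json

def state_root(balances: dict) -> str:
    # Stand-in for a Merkle root: hash of the serialized balance table.
    return hashlib.sha256(json.dumps(balances, sort_keys=True).encode()).hexdigest()

def apply_batch(balances: dict, batch: list) -> dict:
    new = dict(balances)
    for sender, receiver, amount in batch:
        assert new.get(sender, 0) >= amount, "no overdraft / double-spending"
        new[sender] -= amount
        new[receiver] = new.get(receiver, 0) + amount
    return new

balances = {"alice": 10, "bob": 5}
print("old root:", state_root(balances))
balances = apply_batch(balances, [("alice", "bob", 3), ("bob", "alice", 1)])
print("new root:", state_root(balances))   # only roots (32 bytes each) go on-chain
```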
This system is great, but there is an important problem that needs to be solved. There is an operator that computes these new states and proofs; this operator can be decentralized, and we will see later how that works. But what happens if this operator computes a new state that's actually a valid state, built from valid transactions, but those transactions are not available? Nobody knows what those transactions are. This is what's called the data availability problem, which I'm sure many of you have heard about. So we need to guarantee that this data is available. This data could be published in many places, but in this implementation what we do is put the minimal data of each transaction — the from, the to, and the amount, that's it — on the same chain. So in the data field of an Ethereum transaction we put that minimal record for all the transactions in the batch. That's a lot of data, but we will see the calculations later and we will see that it's not that much. Of course, we are trying to compress this data a lot in order to fit more transactions.

Okay, so let's go a little bit into the detail of how this circuit works — when I say circuit, I mean the program with which we generate this proof. A circuit is mainly a deterministic program: we put in an input, we do the computation, we have an output — in this case the new root — and we prove that the input actually matches the output. That is the zero-knowledge proof. Mainly the circuit has a set of transaction processors: we have the old root, we process a transaction, we have a new root, we process another transaction, we have a new root, and so on for maybe, I don't know, 2,000 transactions, and then we have the final new root.

Okay, so let's zoom in on one of those transaction processors. What do we have here? Here you can see the main pieces of this circuit. First we have a signature verification. This is very important because, by design of this circuit, we don't need to store the signature on chain. The signature is a big piece of data that we would otherwise need to put there, but verifying it can happen off chain: a transaction can only be included if a valid signature for it exists, otherwise by design the batch proof will not be valid. So we don't need to store the signature, and this is important. Then, of course, we need to modify the state of the sender and the state of the receiver, so we have the Merkle tree processors: we are just updating this Merkle tree with the new states, and we need two updates, one for the sender and one for the receiver. And of course we have all the logic of the transaction processing: here is where we verify that there is no double-spending, that there is enough money to send, and all that logic. So that's pretty much the picture of the circuit. It's important to see the inputs: there are public inputs like the old root, and part of each transaction is also public — the from, the to, and the amount, the minimal part that needs to be available for computing the new state — and then there are a lot of private signals, such as the current state, that only need to be verified against the root.
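As a rough illustration of what one transaction processor checks, here is a runnable Python sketch with stand-in primitives. The hash, the "Merkle root", and the signature scheme here are placeholders I made up for illustration; the real circuit is written in circom and uses EdDSA on Baby Jubjub over a Poseidon-based Merkle tree.

```python
# Minimal sketch of one "transaction processor" step (illustrative stand-ins).
import hashlib
from dataclasses import dataclass

def h(*parts) -> str:
    return hashlib.sha256("|".join(str(p) for p in parts).encode()).hexdigest()

@dataclass
class Leaf:
    balance: int
    nonce: int

# Symmetric stand-in for EdDSA: in the real circuit verification uses only the
# public key, but here a shared secret keeps the example self-contained.
def fake_sign(key: str, msg: str) -> str:
    return h("sig", key, msg)

def fake_verify(key: str, msg: str, sig: str) -> bool:
    return sig == h("sig", key, msg)

def merkle_root(state: dict) -> str:   # stand-in for the Merkle tree root
    return h(*(f"{k}:{v.balance}:{v.nonce}" for k, v in sorted(state.items())))

def process_tx(state, sender, receiver, amount, nonce, key, sig):
    msg = h(sender, receiver, amount, nonce)
    # 1. Signature check: the signature never needs to be stored on-chain,
    #    because the batch proof is only valid if a correct signature existed.
    assert fake_verify(key, msg, sig), "invalid signature"
    # 2. Transaction logic: enough balance (no overdraft), correct nonce.
    assert state[sender].balance >= amount and state[sender].nonce == nonce
    # 3. Two tree updates: sender leaf, then receiver leaf.
    state[sender].balance -= amount
    state[sender].nonce += 1
    state[receiver].balance += amount
    return merkle_root(state)          # fed into the next transaction processor

state = {1: Leaf(10, 0), 2: Leaf(5, 0)}
old_root = merkle_root(state)
sig = fake_sign("alice-key", h(1, 2, 3, 0))
new_root = process_tx(state, 1, 2, 3, 0, "alice-key", sig)
print(old_root, "->", new_root)
```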
Okay, cool. So let's see how this rollup would work, with which kinds of transactions. Imagine that we start a new rollup: we have an empty root. The first thing that we can do is deposits. How it works is, you take a token — the system works with any token, it's a multi-token system — so you send the tokens, ether or whatever, to a smart contract, and the rollup will create a special transaction, a deposit, that mainly creates a new leaf in the state with the initial amount that you are sending. So people just move ether or tokens into the sidechain. Then we have the normal off-chain transactions, which are mainly just transferring from one account to another account; of course it needs to be the same coin, and all the logic is in there. And then we have the exit. How does the exit work? The exit is mainly just sending the money to 0x0: you send the money to the zero address, and what this does is add a leaf to a separate tree that we create for each batch, a tree of exits. So you are sending your funds to that tree, and we are constructing this exit tree. Then, from the smart contract side, back on the main chain, we can withdraw: we prove that we have a leaf in this exit tree, we flag it as withdrawn, and we get the money back. So here we see the full picture: how we deposit, move funds inside, and then exit.

Of course, we can mix things: we can mix on-chain and off-chain transactions. For example, we can force a transfer from an on-chain transaction, which is very convenient if, for example, a smart contract wants to trigger a transfer inside the rollup. And the last one, probably the most difficult one, is that we can deposit on top of an account that's already inside and do a transfer in the same operation. As you can see, there are always two Merkle tree processors — in general, one for the sender and one for the receiver — but they are used differently depending on the type of transaction. So that explains how the exit mechanism works.

Okay, so let's talk about the operator. Who is forging these batches, who is creating these proofs? What we do here is the following. We have a lot of blocks; we define a slot as a number of blocks, let's say 20 blocks, and we define an era as a set of slots, let's say 10 slots. It works very much like proof of stake: you put up some stake, and that allows you to forge some of the batches. When you put up the stake, you register for era plus two, so two eras ahead. And for each era there is a raffle, and the raffle happens one era before. How is the raffle done? It's based on a random number, and the way of generating that random number is this: each operator commits to a chain of hashes — I compute the hash of the hash of the hash, and so on, and I commit to the last one — and then for every batch that is forged, the operator reveals the previous element of the chain, which must hash to the one already known, so it's a pre-committed random number. The raffle is made with the hash of all these committed random numbers from all the batches in the era, so it's quite random.
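Here is a small Python sketch of the commit-reveal randomness just described, under the assumption of a single hash chain per operator; the exact chain length, reveal schedule, and how contributions are combined across operators are simplifications on my part.

```python
# Commit-reveal raffle seed (illustrative): the operator commits to the tip of a
# hash chain, reveals one preimage per forged batch, and the era's raffle seed
# is a hash of all revealed values.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_hash_chain(secret: bytes, length: int):
    chain = [secret]
    for _ in range(length):
        chain.append(h(chain[-1]))
    return chain                      # the operator publishes chain[-1] as the commitment

chain = make_hash_chain(b"operator-secret", 10)
commitment = chain[-1]

# When forging batch i, the operator reveals chain[-2 - i]; anyone can check
# that hashing the revealed value yields the previously known element.
revealed = []
for i in range(3):
    value = chain[-2 - i]
    prev = commitment if i == 0 else revealed[-1]
    assert h(value) == prev, "reveal does not match commitment"
    revealed.append(value)

# Raffle seed for the era: hash of all revealed values from all forged batches.
raffle_seed = h(b"".join(revealed))
print("raffle seed:", raffle_seed.hex())
```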
The chances of being assigned to a specific slot, of being able to forge batches, depend on the quantity of stake that you put in. Actually, it's a little bit more complex: it's maybe like a square, or like x to the power 1.1 or 1.2, we'll see. There is a proportional part, but we want to reward people who concentrate stake. For example, it's not the same having 10 stakers with one ether each as having one staker with 10 ether: with a purely proportional rule the chances would be the same, so we compensate the accumulation — people who stake more get an extra bonus just for accumulating — and we do that by computing this effective stake.

One important thing: once you are assigned to a slot, in that slot the operator can forge as many batches as they want. That's interesting because, if they have enough computing power to generate more proofs, that's good for them — there is an incentive there. Another important characteristic is a kind of pipelining: batches need to be committed before a specific point in the slot. In the last part of the slot you cannot commit new batches; you can forge batches that were committed earlier, at the beginning of the slot, but you cannot commit new ones. This allows the next operator, the one coming in the next slot, to start computing the proof.

Important: the proving is very, very parallelizable, in two ways. One is that, imagine I have a processor computing a proof for one state; I can forge another batch and start computing its proof on another processor. That's one way of parallelizing. The other way is that the proof computation by itself is very parallelizable, so maybe it's more convenient to put all four processors to work on the same proof. That has the advantage that the finalization of the batch is faster, but of course it requires more investment.

Okay, slashing. When are you slashed? Mainly for two things. First, the operator needs to forge a batch in each slot; if they don't forge a batch, they are slashed — they just lose the stake. And second, if they commit to a batch and then do not forge it, they are also slashed. Those are the two things you are slashed for.

Okay, here you see the format of the data availability. This is the part of each transaction that we need in order to reconstruct the full state, and it needs to be available; we are putting it on chain. This is why it's so short: we have only three bytes for the from and three bytes for the to. When you do the deposit, you are assigned a number — a kind of registration — so you get a very, very short address inside the rollup. And the amount is two bytes, a floating-point-style number: in two bytes we can represent roughly three and a half significant decimal digits, with values from zero up to very big numbers. That's very important, because this compression is what allows us to fit so many transactions in this system.
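To illustrate how small the data-availability record can be, here is a Python sketch of an 8-byte encoding: a 3-byte sender index, a 3-byte receiver index, and a 2-byte floating-point-style amount. The exact bit layout used here (11 bits of mantissa, 5 bits of decimal exponent) is my assumption for illustration, not necessarily the layout of the actual implementation.

```python
# Hypothetical 8-byte data-availability record: 3-byte from index, 3-byte to index,
# 2-byte amount encoded as mantissa * 10^exponent (bit layout is an assumption).
def encode_amount(amount: int) -> int:
    exp = 0
    while amount >= (1 << 11) and exp < 31:   # shrink the mantissa into 11 bits
        amount //= 10
        exp += 1
    return (exp << 11) | amount               # fits in 2 bytes

def decode_amount(encoded: int) -> int:
    return (encoded & 0x7FF) * 10 ** (encoded >> 11)

def encode_record(from_idx: int, to_idx: int, amount: int) -> bytes:
    return (from_idx.to_bytes(3, "big")
            + to_idx.to_bytes(3, "big")
            + encode_amount(amount).to_bytes(2, "big"))

record = encode_record(42, 1337, 123_400_000_000_000_000)   # 0.1234 ether in wei
print(len(record), "bytes on-chain per transfer:", record.hex())
print("decoded amount:", decode_amount(int.from_bytes(record[6:], "big")))
```

With roughly three and a half significant decimal digits plus an exponent, most practical amounts can be represented after a small rounding.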
Well, here is how the hashes work. You see all the transactions that we need to hash. There is an on-chain hash and an off-chain hash, because when you are forcing a transaction — for example, imagine that you want to force an exit — you send that transaction on chain, and this forces the operator to include it. Transactions that are put on chain are forced: the operator has the obligation to process them. So the on-chain transactions go into an accumulative hash, and for the data availability part it's just a SHA-256 of all the data.

Okay, this next detail is important because we want to process a lot of transactions: how do we calculate the fees? In the on-chain part we cannot put a lot of logic, so we need a simpler mechanism. What happens is that the operator chooses a fee, and in that batch it will only be possible to include transactions where the user is willing to pay at least that fee. Then computing how much the operator earns in fees is just a multiplication (there is a small sketch of this at the end of this part). In this fee plan there is also a limit: there are only 15 slots, so you can select up to 15 different coins and the fee you want for each coin, and then only transactions of those kinds will be included.

Signature verification: I think there is a lot of research to do here. We are using EdDSA on Baby Jubjub, and we are using the Poseidon hash function. It's a very new function — it hasn't been around long, but it is believed to be safe — and right now I know that at least the Ethereum Foundation and other people are running competitions trying to break these new cryptographic functions. But they are very efficient and they work very well. Maybe it's possible to do batching inside the SNARK; normal batch verification of EdDSA would require working with modular math that is not exactly the native math of the SNARK, so it's not as easy as it looks, but there is some research there. Mainly what we're doing now is a normal EdDSA verification for each transaction, and this is right now one of the biggest consumers of constraints in the circuit.

Okay, we already talked about the on-chain transactions. Another important thing the system has is atomic swaps. Atomic swaps are off-chain transactions: without adding anything on chain, you can sign a transaction specifying that it will only be included if another given transaction is included, and the same in the other transaction. This allows atomic swaps, which is very convenient for exchanges, for example. It's just an extra field that is signed when you create the off-chain transactions, and it adds no extra cost in the on-chain part.

Well, here I want to talk a little bit about the improvements that we are doing right now. We have a full implementation of the proof generation in CUDA, on GPUs, for BN128. Here are more or less the numbers — these are very round numbers because we are still optimizing and still working — but on hardware of under roughly $10K we can compute a 2048-transaction proof in about 10 minutes. Those are the numbers we are working with right now, so they are starting to be quite viable, and there are still some optimizations we can do.
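Here is the small fee-accounting sketch referred to above, with hypothetical numbers; the point is only that inclusion is a simple per-token threshold and the operator's reward is a single multiplication per token.

```python
# Illustrative fee plan: the operator publishes one fee per token (up to 15 token
# slots); only transactions willing to pay at least that fee can be included, and
# the operator's reward per token is fee * number_of_included_transactions.
fee_plan = {"ETH": 0.0001, "DAI": 0.05}          # operator-chosen fee per token

pending = [
    {"token": "ETH", "max_fee": 0.0002},          # willing to pay enough -> included
    {"token": "ETH", "max_fee": 0.00005},         # too low -> left out
    {"token": "DAI", "max_fee": 0.05},
    {"token": "WBTC", "max_fee": 1.0},            # token not in the fee plan -> left out
]

included = [tx for tx in pending
            if tx["token"] in fee_plan and tx["max_fee"] >= fee_plan[tx["token"]]]

# Fee accounting stays trivial: one multiplication per token.
reward = {t: fee_plan[t] * sum(1 for tx in included if tx["token"] == t)
          for t in fee_plan}
print(reward)   # {'ETH': 0.0001, 'DAI': 0.05}
```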
Here we are analyzing whether working with FPGAs and other technologies can improve this. It's a matter of speed, cost, and complexity; this is very much the engineering part that we are working hard on.

Okay, here are the numbers. The first line is the baseline: the number of transactions that we have in Ethereum right now is about 32 per second, and here I'm talking just about normal Ethereum transfers. We have 10 million gas per block divided by 21K gas per transfer, which is about 476 transactions per block; divided by 15 seconds, that gives roughly 32 transactions per second — the normal Ethereum throughput, just as a reference. If we implement this system on the current chain as it is now, the cost of putting all this data availability on chain is quite high: 2048 transactions times eight bytes per transaction times 68 gas per byte, which is more than one million gas for 2,000 transactions. That means we would be able to put about five batches in a block, which gives roughly 682 transactions per second — not bad. But after Istanbul, where the price of these data bytes on chain goes down, plus the reduced cost of on-chain proof verification, doing the same numbers we would be able to put about 15 batches per block, and then the theoretical limit — we are always talking about the BN128 curve, without doing anything special — could reach these figures (the arithmetic is spelled out in the sketch at the very end). Of course, when you work with these numbers of transactions you start having other problems: you need to process 2,000 transactions per batch, the transactions start getting more complex, and there are other difficulties that need to be solved there. But those are the theoretical numbers of where we are. Some of you may think, okay, but you have to put in this expensive hardware — I'm running out of time, so let me move forward — but the cost of the hardware, if you do the division, comes out to less than 1.1 cents per transaction.

So here is a set of tools that we're working on at iden3: we have circom and circomlib, which a number of you already know, and here is the CUDA snarks library. A lot of work is going into this. Of course, we are still working on the identity protocol; this is just one specific part of it. I don't know if you have seen it here, but we're running a game just for verifying this kind of trust network — you see some papers around here; if you want to play the game, that's up to you, it's a nice, fun game to play. And we want to release the APIs for the identity and all that stuff as well. We are also working very hard on the rollup, and we hope that in the next few weeks we will be on the testnet. That's pretty much it.
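And here is the throughput arithmetic from the talk spelled out as a small Python calculation, using the round numbers quoted above: a 10M block gas limit, 15-second blocks, 8 bytes of data per rollup transaction, and 68 gas per calldata byte before Istanbul versus 16 after.

```python
# Throughput arithmetic with the round numbers quoted in the talk.
BLOCK_GAS = 10_000_000
BLOCK_TIME = 15                                     # seconds per block

# Baseline: plain Ethereum transfers.
print("plain transfers:", BLOCK_GAS // 21_000 / BLOCK_TIME, "tps")      # ~31.7

# zk-rollup, pre-Istanbul: 2048 txs per batch, 8 bytes each, 68 gas per calldata byte.
data_gas = 2048 * 8 * 68
print("data gas per batch:", data_gas)              # ~1.11M gas, proof verification extra
print("5 batches/block  ->", 5 * 2048 / BLOCK_TIME, "tps")               # ~683

# Post-Istanbul: calldata at 16 gas per byte plus cheaper proof verification,
# so roughly 15 batches fit per block.
print("data gas per batch (Istanbul):", 2048 * 8 * 16)                   # ~262K gas
print("15 batches/block ->", 15 * 2048 / BLOCK_TIME, "tps")              # ~2048
```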