So, hello everyone, my name is Ye, I'm a co-founder of Scroll. Today I'd like to introduce Scroll's architecture and our plan for the testnet upgrade. Before diving into more detail, for those who are not familiar with who we are: Scroll is a general-purpose scaling solution for Ethereum. In short, it makes Ethereum cheaper and faster, with higher throughput. More specifically, we are building an EVM-equivalent zk-rollup. Technically speaking, a zk-rollup is considered the most secure scaling solution, with the shortest finality, because it is based on math. And by EVM-equivalent, I mean our zk-rollup is bytecode-level equivalent, so developers can reuse everything they use on Ethereum layer one, including Hardhat and all the other development tooling. Because we achieve native bytecode-level compatibility, you can migrate code from layer one to layer two seamlessly.

The rest of the talk is divided into two parts. In the first half, I will talk about the architecture of Scroll and how your transaction is processed on Scroll. In the second half, I will talk about the important upgrade to our testnet and our roadmap for the future.

So now let's take a look at the architecture of Scroll. Before diving into more detail, to give you a better sense of how Scroll works, let's look at a traditional zk-rollup architecture. The idea of a zk-rollup is that instead of sending all your transactions to layer one, you send them to a layer two node. The layer two node then runs a zero-knowledge proof algorithm and generates a proof, and the proof is verified by a smart contract on layer one. Verifying the proof is mathematically equivalent to executing all the transactions. That's how you get scalability: for example, Ethereum only has around 10 TPS, but if each layer one transaction verifies a proof that is equivalent to executing 100 transactions, you can scale the network massively.

Intuitively, the architecture of Scroll looks like this. You need a sequencer, which sequences the transactions it receives and generates layer two blocks. You also need a relayer to relay messages between layer one and layer two. For example, there are deposits that come from layer one through the bridge contract, and the relayer needs to relay those messages from layer one to layer two; likewise, for messages going the other way, the sequencer needs to send them to the relayer. After sequencing the transactions into layer two blocks, the sequencer sends the blocks to the prover, the prover runs a zero-knowledge proof algorithm and generates the proof, and the relayer submits the proof and the necessary data to layer one.

A unique feature of Scroll is that we are not running this prover in a centralized way. Instead, we have a decentralized prover network for generating the proofs. In our architecture, we have a coordinator, which receives blocks from the sequencer, generates execution traces, and dispatches the execution traces for different blocks to different provers in our network. The provers, which we call rollers in our network to distinguish them from miners, run the zkEVM and generate the proofs, and then send the proofs back to the coordinator. The coordinator passes them to the relayer, and the relayer submits them to layer one.
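To make that dispatching idea more concrete, here is a minimal sketch in Python. It is not Scroll's actual code; all the names (ExecutionTrace, prove_block, dispatch, and so on) are illustrative assumptions. It only shows the shape of the idea: the coordinator hands each block's trace to a different roller, so the rollers work in parallel on different blocks.

```python
# Minimal illustrative sketch, not Scroll's actual code: a coordinator handing
# the execution trace of each block to a different roller so proofs are
# generated in parallel. All names here are assumptions for illustration.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class ExecutionTrace:
    block_number: int
    steps: list = field(default_factory=list)  # opcode-level trace from the sequencer node

@dataclass
class BlockProof:
    block_number: int
    proof: bytes

def prove_block(trace: ExecutionTrace) -> BlockProof:
    # Stand-in for a roller running the zkEVM prover over one block's trace.
    return BlockProof(trace.block_number, f"proof-{trace.block_number}".encode())

def dispatch(traces: list[ExecutionTrace], num_rollers: int) -> list[BlockProof]:
    # Each roller is given a different block, so no work is redundant.
    with ThreadPoolExecutor(max_workers=num_rollers) as pool:
        return list(pool.map(prove_block, traces))

if __name__ == "__main__":
    proofs = dispatch([ExecutionTrace(n) for n in range(1, 5)], num_rollers=4)
    print([p.block_number for p in proofs])
```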
The magic actually happens on the roller side, where you run the zkEVM and generate a proof for the validity of all the transactions inside a block. So now let's take a look at what happens inside the roller. After receiving the execution trace for a certain block from the coordinator, the roller runs the zkEVM. So what is the zkEVM? The zkEVM is composed of several circuits, where each circuit verifies the functionality of a certain part. For example, the EVM circuit verifies that your EVM state machine moves correctly, say from a push to a pop and on to the next opcode you are executing. The RAM circuit proves that the reads and writes of the virtual machine are consistent: for example, if you previously wrote to some location and then read from it, the RAM circuit proves those are consistent. There is also a storage circuit, which proves that when you update storage you are doing it correctly. And there are other circuits proving other EVM functionality, including an ECDSA circuit for signatures, a bytecode circuit, and a Keccak circuit. In between, you need a circuit input builder to translate the execution trace fetched directly from Geth into the circuit-specific witnesses.

Intuitively, the zkEVM would then have multiple proofs, right? It needs a proof for the EVM circuit, a proof for the RAM circuit, and so on. But all of those proofs would need to be verified on layer one efficiently. So what we do is build another aggregation circuit. The aggregation circuit proves that the other proofs are correct: it proves that the EVM proof is correct, the RAM proof is correct, and the other circuit proofs are correct as well. In the end, you have only one block proof for the whole block, proving that your execution trace is correct.

Moreover, it's worth noting that our coordinator dispatches blocks to different provers, so the rollers generate proofs in parallel for different blocks. They are not competing for the same block, which gives better utilization of the prover network in our system, because all the provers are doing something useful; they are not doing redundant work.
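To illustrate what the aggregation step buys you, here is a small conceptual sketch, not the real circuit logic: the names and the boolean "verification" are placeholders. The point is simply that one outer check stands in for checking every sub-circuit proof, so layer one only has to verify a single proof per block.

```python
# Conceptual sketch only, not the real circuits: one aggregated "block proof"
# attests that every sub-circuit proof (EVM, RAM, storage, ...) verifies.
from dataclasses import dataclass

@dataclass
class SubProof:
    circuit: str   # e.g. "evm", "ram", "storage", "ecdsa", "keccak"
    valid: bool    # placeholder for the result of verifying this sub-proof

def verify_sub_proof(p: SubProof) -> bool:
    # Stand-in for verifying one sub-circuit proof.
    return p.valid

def aggregate(sub_proofs: list[SubProof]) -> bool:
    # The aggregation circuit proves, inside a single proof, that all
    # sub-circuit proofs verify; layer one then checks only that one proof.
    return all(verify_sub_proof(p) for p in sub_proofs)

block_proof_ok = aggregate([
    SubProof("evm", True),
    SubProof("ram", True),
    SubProof("storage", True),
])
print(block_proof_ok)
```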
Now let's take a look at how your transaction is processed on Scroll, and at the workflow of Scroll from a timeline perspective. Let's start with the workflow of a zk-rollup. On Ethereum layer one, because you need consensus, blocks are generated very slowly. On layer two, you can generate blocks much faster and with higher throughput. So you generate multiple blocks, and then after a period of time you roll up your transaction data, generate a validity proof proving that all the transactions are correct, and send that to Ethereum layer one. It's worth noting that the block data doesn't really rely on the validity proof; it's used for data availability. So part of the Scroll design is that we separate the block data from the validity proof. You submit the block data on-chain first to get a committed version, so that users can, for example, see their transaction on-chain even without the proof, and then you wait for the proof generation to finally finalize your transaction. Accordingly, there are three different statuses for your layer two transaction.

The first is called precommitted, which means your transaction has been sent to the sequencer and the sequencer has already included it in a layer two block. The sequencer sends back a pre-confirmation, which takes maybe three seconds or so. The next status is called committed, which means we have already rolled up your data on-chain, which usually takes minutes. This is a much stronger confirmation, because users can see their data and can even replay it themselves. The final status is finalized, which indicates that the proof has been generated and verified on layer one. That's the final state, where you get the final confirmation on layer one because your proof is generated and verified.

Let's look at it from a timeline perspective. You send your transaction to the sequencer, and the sequencer includes your transaction in a block; the orange one means the block is pre-confirmed. Then the sequencer uploads your data to the rollup contract on layer one, and your block gets committed. The sequencer then dispatches the block to the coordinator, and the coordinator finds one prover inside our network to generate the proof for this block. Similarly, after committing the next block, the coordinator finds another roller in the system, and the same happens for block three and block four. After proof generation, the provers send the proofs back to the coordinator, and the coordinator receives multiple proofs. Then we do another dispatch, sending those proofs to another prover that performs aggregation to further reduce the verification cost, because you can aggregate multiple block proofs into one proof. After this proof aggregation, you finally get one proof which proves that P1, P2, and P3 are correct, meaning that the transactions inside block one, block two, and block three are valid. Then you submit this proof on-chain for verification, and the rollup contract uses the previously submitted data as public input together with this proof to verify that it's correct. Finally, your block gets finalized; that's the final state for your transaction.

We have built a special rollup explorer to show the block status. For example, a few seconds ago you have a precommitted block, which is the orange one, and a minute ago you have multiple committed blocks. There is a commit transaction hash where you can find which transaction committed your data, so you can find your data on-chain. There is also a finalize transaction hash, which shows which transaction contained the proof that got verified, and when it was verified. So this is a special explorer built by us to let users see what's happening inside.
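The three statuses the explorer shows can be summarized in a tiny sketch. The enum values mirror the states described above; the helper function and its arguments are made up for illustration and are not part of Scroll's API.

```python
# Minimal sketch of the three layer-two transaction states described above.
# The helper and its arguments are illustrative, not Scroll's actual API.
from enum import Enum

class TxStatus(Enum):
    PRECOMMITTED = "precommitted"  # included in an L2 block by the sequencer (~seconds)
    COMMITTED = "committed"        # block data posted to the L1 rollup contract (~minutes)
    FINALIZED = "finalized"        # aggregated validity proof verified on L1

def status(data_on_l1: bool, proof_verified_on_l1: bool) -> TxStatus:
    if proof_verified_on_l1:
        return TxStatus.FINALIZED
    if data_on_l1:
        return TxStatus.COMMITTED
    return TxStatus.PRECOMMITTED

print(status(data_on_l1=True, proof_verified_on_l1=False))  # TxStatus.COMMITTED
```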
Now, after this technical background, I will introduce Scroll's pre-alpha testnet and where we are. Three months ago we released our pre-alpha testnet. That version is mostly for community users, so we can get user feedback: they can play with our pre-deployed applications, for example a fork of Uniswap, through a familiar wallet like MetaMask. It's all for users, and users can also bridge their assets between layer one and layer two; for example, they can experience deposits and withdrawals, and they can see their transaction status through the rollup explorer. So that's where we are: it's all about collecting feedback from the community to improve our UI and UX and fix bugs ahead of time. We'd like to thank our community for their helpful feedback, which helped us fix a lot of bugs on the UI side and improved our front end a lot. We have onboarded over 10,000 users to test our bridge and dApps, and in the meantime we are still scaling up our proving infrastructure to support the 100,000 users on our waitlist. The reason is that we haven't opened enough provers for this testnet; once we open the decentralized proving network to everyone, we can scale out the number of users and the transaction throughput massively.

A few days ago we made a very big announcement, which is the upgraded version of our pre-alpha testnet. It's a very important milestone for us. The most important upgrade is that the testnet is no longer only for users and pre-deployed contracts, but is also a testnet for developers, where developers can deploy arbitrary smart contracts on us. That's very important, because it's not only interaction between users; you can actually deploy things on us, and you can experience a seamless migration without changing a single line of your code. You can directly copy-paste your code from layer one and deploy it on layer two. We also support all the tooling around it, because we are natively EVM-compatible, even EVM-equivalent at the bytecode level: we can support Remix, Hardhat, and even Foundry and all the tooling around them. Recently we had a hackathon at ETHGlobal inviting hackers to register for our testnet and deploy things on us. We have also done live demos at ETHGlobal, and yesterday at the ZK community session, where we let the community deploy smart contracts on us. And we have opened this registration to all developers. So if you want to become an early tester or contributor, sign up at scroll.io for early developer access, and you can experience how easy it is to deploy things on us.

Now, just a quick summary for users and developers. The developer experience will be exactly the same as on Ethereum layer one. As for concrete performance: layer two block generation takes less than three seconds, which means users get their pre-confirmation within three seconds. It can be improved even further as we move to multi-block aggregation, possibly down to one second, so the experience will be pretty good. Deposits take around two minutes, because you need to wait for six layer one blocks; that's not because of us, but because you need to wait for layer one block confirmations. Withdrawals take around six minutes or more, depending on how many provers there are in the network and what the throughput is, so usually anywhere from a couple of minutes to one hour. But the fastest proof generation for one block is already around six minutes, so it's very fast. So that's our pre-alpha testnet.
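As a quick sanity check on the deposit latency quoted above, waiting for six layer-one confirmations works out as follows; the 12-second block time is the usual post-merge slot time and is only an assumption here.

```python
# Back-of-the-envelope check of the deposit latency: six L1 confirmations
# at ~12 s per block (assumed slot time) before the deposit is relayed to L2.
L1_BLOCK_TIME_S = 12
CONFIRMATIONS = 6

wait_s = L1_BLOCK_TIME_S * CONFIRMATIONS
print(f"~{wait_s} s (~{wait_s / 60:.1f} min)")  # ~72 s, on the order of the quoted two minutes
```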
Now let's talk a little bit about our roadmap, where we are, and what we plan to do. From a high level, our roadmap looks like this. In phase one, we have the pre-alpha testnet for users and developers: users can interact, and developers can deploy arbitrary contracts as long as they are registered. In phase two, we will move to the alpha testnet, which we will reach very soon; it is a permissionless version, so anyone can use it directly without permission, and developers can deploy any contract without registering. In phase three, we open layer two proving to the prover community, which usually has a large overlap with the miner community. That means in phase three we will open proof generation so that anyone can be a prover: they can run their own prover machine and become one of our proving nodes, generating proofs for us. Then we will move to phase four, which is our mainnet. The gap before mainnet is, first, that the zkEVM has many lines of code, and as Vitalik has indicated, it won't be bug-free for quite a long time, so we need very rigorous security auditing of our zkEVM to be really confident that we can reach mainnet. We also need to wrap up the rest of the ZK circuits to make them more sound, and improve our performance massively through prover optimization and circuit optimization. In phase five, we will apply research results that we are working on in parallel with development: for example, a decentralized sequencer to make the sequencer more censorship-resistant, and we are also surveying zero-knowledge virtual machines to see if there are interesting ideas that could improve our zkEVM's efficiency. So that's our high-level roadmap.

One thing we hear a lot from the community is questions about our decentralized prover: what are the requirements for running such a prover node, and what is our plan for hardware? So I will say a little more about our plan for hardware acceleration. We have three stages. In stage one, we build a private zkEVM GPU cluster for running the prover. We have already built a very fast GPU solution for generating proofs for our zkEVM circuits, and the current performance is really good: for example, one million gas takes only about six minutes to prove. People usually think zkEVM proving has such overhead that it's not affordable, but it's actually very fast on our GPU prover. Besides the GPU solution, we have also built a private GPU cluster to provide stable computation power for our testnet at this stage, and it's already live. Meanwhile, we are collaborating with several large companies which are aiming to make provers faster. They are ZK hardware companies building more customized solutions, for example FPGA solutions, ASIC solutions, and GPU solutions. So that's stage one: we have started these collaborations and already built the cluster. Then in stage two, we will give our hardware partners access to run our prover, so they can test their provers and generate proofs for us. But at stage two, this is still for large partners who are committed to generating proofs for us.
We believe that using even more customized provers can shorten the finality time and massively improve the user experience, because you get cheaper proving with even faster finality. So that's stage two. In stage three, we finally move to the permissionless prover, which I call layer two proof outsourcing, where you let external parties run the prover. We will open-source our GPU prover under a permissive license for everyone to use. Even now, our CPU prover is totally open source, so you can already run the CPU prover if you want; the GPU prover is still being optimized for performance and will be open-sourced later. Anyone will be able to run our prover, prover access will be permissionless, and anyone can generate proofs for us at home. They can also buy customized hardware from those companies, or use proving as a service, because some companies provide proof generation as a service, so you can use their service to generate proofs for us. So that's basically our plan for hardware acceleration.

One last thing: we have a very solid and decentralized tech team, organized in four directions. One is the infrastructure team, which builds out our whole infrastructure, makes it more robust, and supports the permissionless testnet; it's mostly based in Asia and Europe. We have the ZK team, which builds the ZK circuits and other critical parts, for example optimizing prover performance. Those two are engineering teams spread across six or seven time zones, totally decentralized. Besides the engineering teams, we also have an in-house security team, which makes things really special, because we really care about user security: so many bridges and platforms have been hacked. The security team is composed of several experts in blockchain security, smart contract auditing, and cryptography. They are in charge of the security of the whole system, and they also collaborate with external hackers and auditors to make our system more secure. Finally, we have a research team exploring multiple research directions: for example, how to decentralize the sequencer, how to upgrade to the next generation proof system, and a lot of interesting research like that. Also, around Ethereum, we are actively contributing to a lot of EIPs. So that's the research team.

Our vision is that we want to onboard the next billion users to Ethereum, because we think making transactions really cheap and confirmations really fast will bring more users into the Ethereum ecosystem. Everything we build is totally open, and especially the zkEVM part is co-built with a large community, for example the Privacy and Scaling Explorations team from the Ethereum Foundation and several other community members. And we want to push for decentralization across different levels, starting from decentralization of the prover. So if you are vision-aligned and you really like what we are building, we are still hiring; check out our hiring page. And I think, yes, that's it. Thank you.

Yeah, hi. So obviously you have this kind of cool infrastructure with the prover and the sequencer. Could you talk about how gas fees work in Scroll? How do you price transactions?
Yes, so the gas fee is currently hard-coded to be exactly the same as on Ethereum layer one. It may be subject to change if it doesn't match the proving cost, but the changes will be minor, mostly targeting some very expensive precompiles that are not ZK-friendly; most opcodes will cost the same. And right now it's exactly the same.

Hi, can I ask about the data availability strategy for Scroll?

Yes, that's a good question. Currently we submit the raw transaction data directly on-chain for data availability, and we do believe that Danksharding and other cheaper data solutions on Ethereum are coming very soon. Also, by submitting the raw transaction data, users can replay the transactions once they are in the committed stage, so you don't even need to wait for the proof generation time to get a stronger confirmation ahead of time.

Okay, thanks for the talk. What's the impact of reorgs on Ethereum on components like the sequencer, coordinator, and prover?

So you're asking how we handle reorgs on layer one?

Reorgs, reorganizations of blocks.

You mean layer one blocks that are not yet confirmed?

Yeah, if blocks get reorganized.

Yeah, yeah. So basically, once your transaction is within layer two, it can be confirmed really fast, so reorgs only influence your deposits. For now we just wait for six blocks, and in the future this might have to change if we think it's not safe enough. But for now, we just wait for six blocks.

Thank you for the presentation. I have two questions. The first is about the hardware component: how do you make sure there's a decentralized network of provers if you're working with specific companies? How do you make sure the provers are a decentralized network rather than being centralized around one or two specific FPGA or GPU companies that become very large stakers? And my second question is about the process for decentralizing the prover: can you talk about some of the differences and challenges in decentralizing the sequencer? How do those two processes differ, and what are some of the different considerations for decentralizing a sequencer versus the prover? Thank you.

Yeah, that's a very good question. For the first one, as I mentioned, there will be two paths. First, while we collaborate with external companies, we will also open-source the GPU prover under a permissive license, so anyone can use the GPU prover directly if they don't want to use FPGAs or some other company's hardware. And we are not only incentivizing the fastest prover: even if someone has an ASIC prover or an FPGA prover, they can't necessarily beat you, because the strategy is that we will have a time window for submitting the proof. As long as you can submit the proof in time, you can be incentivized. So it's not the case that being able to generate a proof in one minute lets you always beat the other provers; it's more about prioritization and how you make use of the computation power of the whole network in parallel. So even with those hardware partner companies, you can still choose whether you want to run independently using the GPU prover or use their service. So that's for question one.
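To make the "submit within a time window" point concrete, here is an illustrative sketch. It is only an assumption about the shape of the mechanism, not Scroll's actual reward logic: every roller that returns a valid proof before the deadline qualifies, rather than only the single fastest one.

```python
# Illustrative sketch, not Scroll's actual reward logic: every roller that
# submits a valid proof before the deadline is eligible, not just the fastest.
def eligible_rollers(proving_times_s: dict[str, float], deadline_s: float) -> list[str]:
    """proving_times_s maps a roller id to how long its proof took, in seconds."""
    return [roller for roller, t in proving_times_s.items() if t <= deadline_s]

print(eligible_rollers(
    {"gpu-at-home": 420.0, "fpga-farm": 95.0, "asic-rig": 60.0},
    deadline_s=600.0,
))  # all three qualify, so faster hardware does not automatically crowd out GPU rollers
```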
For question two: the prover is easier to decentralize, because for now, at this stage, we still have a centralized coordinator, which can, for example, verify the proofs and handle things like that. When we think about decentralizing the prover and the sequencer, they are actually two different communities: the prover community requires specialized hardware, while the sequencer needs a different kind of decentralization. And when you make the sequencer decentralized, there are real problems: for example, if you want to support forced withdrawals and all the interaction around them, it's much harder than with a centralized sequencer. That's part of the problem; another part of the challenge is how you split and balance incentives between the sequencer and the prover, and how to keep the whole system efficient, because you still need some consensus among the sequencers. And yeah, that's it.